Visuals are the part of the sensory engine that controls the AI's capacity to see in the schema of digital code; that digital code can be converted to multimedia output. AI sees through various means, all of which are converted to visual sensory code. Whether you use a peripheral device such as a USB camera to feed visuals to your AI, or keep its visual ability contained to the eUniverse, AI ultimately perceives visuals the same way. This is not programmed artificially by eBrain, but is rather the naturally-selected way of expressing object-oriented sight.
You are free to create any physical objects you like; these can be seen and perceived by AI and will trigger the AI's own reactions. The best way to establish common ground with your AI is to define objects that exist in today's real world. A very efficient way of doing this is through the Augmented Reality function available in all builds of eBrain. AI hears through various means, all of which are converted to sound sensory code. Whether you use a peripheral device such as a USB microphone to feed sound to your AI, or keep its hearing ability contained to the eUniverse, AI ultimately perceives sound the same way. This is not programmed artificially by eBrain, but is rather the naturally-selected way of expressing object-oriented hearing.
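As a rough illustration of how such a real-world object might be defined for the AI, consider the Python sketch below. eBrain's object-definition interface is not documented here, so the WorldObject class, the register_object helper, and the add_object call are assumptions made purely for illustration.

```python
# Hedged sketch: these types and calls are illustrative assumptions, not a published eBrain API.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    """A real-world object defined so the AI and the user share common ground."""
    name: str
    shape: str                      # e.g. "cylinder", "cube"
    size_m: float                   # approximate real-world size in meters
    tags: list[str] = field(default_factory=list)

def register_object(universe, obj: WorldObject) -> None:
    """Make the object perceivable by the visual sensory engine.

    `universe` stands in for an eUniverse handle; `add_object` is an assumed method name.
    """
    universe.add_object(obj)

# A familiar object from today's real world, as the section recommends.
coffee_mug = WorldObject("coffee mug", shape="cylinder", size_m=0.10,
                         tags=["container", "household"])
```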
eHuman memories can be converted to video. For example, a dream can be exported to popular formats such as .wmv and .mov. The same also applies to the feed of the eUniverse. The physics engine in eBrain emulates real-world universal events which you may want to convert to video. This can include the crashing of a wave, an explosion, or the results of a behavioral experiment involving eHumans.
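As an informal sketch of the video-export idea, the snippet below encodes a sequence of frames (such as might be pulled from a dream memory) into a video file. It uses OpenCV rather than any eBrain-specific export call, and the step of extracting frames from a memory is assumed.

```python
# Hedged sketch: frame extraction from an eBrain memory is assumed;
# the encoding itself uses OpenCV, which is not part of eBrain.
import cv2
import numpy as np

def export_frames_to_video(frames: list[np.ndarray], out_path: str, fps: int = 30) -> None:
    """Encode a list of RGB frames (e.g. pulled from a dream memory) into a video file."""
    if not frames:
        raise ValueError("No frames to export")
    height, width = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
    try:
        for frame in frames:
            # OpenCV expects BGR channel ordering, so convert from RGB before writing.
            writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    finally:
        writer.release()
```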
There is also a real-time feature. This allows the user to select the video output of an eHuman as it physically interacts with the eUniverse. This is especially useful for live presentations such as lectures and conferences. More on this feature will be released as the product comes closer to launch.
Extended Sensory Features
The Touch Input API is designed for entertainment and is not necessary to the functionality of the software as it relates to core eBrain technology. As eBrain moves to broaden our multimedia options, we encourage the user to make use of the Touch Input API. This feature essentially involves simulating the sensation of touching a physical object (including an eHuman) via your touch-screen interface. You can, for example, push a virtual object off a table from your device. This feature bridges the virtual activity of the environment and the capabilities of your device.
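A minimal sketch of what such a touch interaction could look like follows. Since the Touch Input API itself is not documented here, the TouchEvent and VirtualObject types and the pressure-to-impulse scaling are illustrative assumptions only.

```python
# Hedged sketch: eBrain's actual Touch Input API is not published; these types
# and the scaling factor are assumptions used only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float          # screen coordinates, normalized to 0.0-1.0
    y: float
    pressure: float   # 0.0 (light tap) to 1.0 (hard press)

@dataclass
class VirtualObject:
    name: str
    vx: float = 0.0   # velocity in the eUniverse, meters/second
    vy: float = 0.0

def apply_touch(obj: VirtualObject, touch: TouchEvent, swipe_dx: float, swipe_dy: float) -> None:
    """Translate a swipe on the touch screen into an impulse on a virtual object,
    for example pushing a cup off a table."""
    impulse = 2.0 * touch.pressure            # assumed scaling from pressure to force
    obj.vx += impulse * swipe_dx
    obj.vy += impulse * swipe_dy

cup = VirtualObject("cup")
apply_touch(cup, TouchEvent(x=0.4, y=0.6, pressure=0.8), swipe_dx=1.0, swipe_dy=0.0)
print(cup)   # the cup now has horizontal velocity and would slide off the table
```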
The Virtual Sensory Engine, as the name suggests, simulates our five senses, prefixed with "e": eSee, eHear, eTouch, eTaste, and eSmell. eSeeing and eHearing are the primary AI senses, as they map to the many common hardware peripherals that can express visuals and sound. While the rapid growth of touch-screen technologies is opening doors for eBrain to incorporate the third sense, eTouch, our focus is on giving the user access to experiencing what AI sees and hears.
Rather than viewing eHuman communication as obscure code on a screen, you may wish to deploy eBrain's multimedia options to gain a more invigorating look at your individual AIs. Object events, AI communication, the AI's field of vision, and a third-person view of the events in the eUniverse are included. This allows the user to power AIs that operate with all five senses, which, at a minimum, can be viewed in the form of textual output with descriptions and statistics. What the AI may be tasting, touching, and smelling will not remain a mystery to the user. This type of full disclosure allows the user to better understand what he can visually see the AI doing, as well as what he may hear from the AI.
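To make the idea of textual sensory output concrete, here is a small hypothetical sketch of a five-sense snapshot rendered as a descriptive report. The field names mirror the eSense naming above, but the data structure itself is an assumption, not eBrain's actual report format.

```python
# Hedged sketch: the snapshot fields are assumptions for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class SensorySnapshot:
    """One moment of an eHuman's five-sense state, suitable for textual output."""
    eSee: str     # description of the visual field
    eHear: str
    eTouch: str
    eTaste: str
    eSmell: str

def describe(snapshot: SensorySnapshot) -> str:
    """Render the snapshot as a descriptive text report, one line per sense."""
    return "\n".join(f"{sense}: {value}" for sense, value in asdict(snapshot).items())

report = describe(SensorySnapshot(
    eSee="red cube on a table, 1.2 m away",
    eHear="low hum from the object to the left",
    eTouch="smooth, cool surface under the right hand",
    eTaste="nothing",
    eSmell="faint trace of coffee",
))
print(report)
```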
This is a key component of communication: without understanding the chosen dialect of eHumans, their communication patterns may not be readily obvious to the observer. Users wishing to communicate with eHumans directly must bridge the gap of communication. A key component of doing this is the same as in any real-world situation: either learn the language of the eHumans, or teach them your own language. Once the channels of communication have been opened, the range of objective-based solutions you can perform expands exponentially.
eBrain seeks to bring consumers toward a more multimedia-friendly direction through the use of Integrated Memory Reports. These allow a user to pull any selected memory from an AI's personal memory banks, view it, and even modify it. While the feature is present in all editions of eBrain, the release version allows these memories to be converted to popular multimedia formats such as .mpg, .avi, .wav, and .mp3.
Integrated Memory Reports are also useful for investigative purposes. This is especially the case when an AI's behavior is unknown to the user. You may allow the program to run while not actually being present to observe it. Events can occur which have a significant impact on the AI. As such, pulling up the Memory Database will allow the user to retrieve the possible source of a certain feeling or decision the AI has made.
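A hedged sketch of this investigative workflow follows. The MemoryRecord fields and the filtering helper are assumptions intended only to illustrate how high-impact memories preceding a decision might be retrieved from the Memory Database.

```python
# Hedged sketch: record fields and the query helper are assumptions,
# not a documented eBrain Memory Database API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MemoryRecord:
    timestamp: datetime
    description: str          # what the AI perceived or did
    emotional_impact: float   # assumed scale: -1.0 (negative) to 1.0 (positive)

def find_possible_sources(memories: list[MemoryRecord], before: datetime,
                          threshold: float = 0.5) -> list[MemoryRecord]:
    """Return high-impact memories recorded before a given decision,
    as candidate explanations for a feeling or choice the AI made."""
    return [m for m in memories
            if m.timestamp <= before and abs(m.emotional_impact) >= threshold]
```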
We aim to revolutionize technology by pioneering eBrain-based platforms as the premier choice for AI-based solutions.