Switches and keyboards are easy to use, and they are also easy to support in software. They work well for many applications, but they are not always the most efficient way of interacting with a computer. Unfortunately, more natural interaction requires more computational power and more sophisticated sensors. Luckily, both are now available to programmers, although creating the software to handle the new input streams from devices like Creative's Senz3D (Fig. 1) is not easy. The sensor uses time-of-flight technology from SoftKinetic (see Time-Of-Flight 3D Coming To A Device Near You), an approach also found in the Microsoft Kinect 2.
Related Articles
- Time-Of-Flight 3D Coming To A Device Near You
- Kinecting With The Point Cloud Library
- How Microsoft’s PrimeSense-based Kinect Really Works
Intel's Perceptual Computing SDK is a framework that gives applications higher-level input from users via a range of sensors, such as 3D imaging cameras. Barry Solomon, Senior Product Manager at Intel, provides some insight into the SDK and perceptual computing.
Wong: What is perceptual computing?
Solomon: Perceptual technology enables our devices to understand our intentions in a more natural way than more traditional forms of input, such as a keyboard and mouse. Intel’s intention is to enable computers to work around us, rather than us continuing to work around them. To achieve this, we need to give our devices more human-like senses, which we can liken to elements of the human body.
The analogy of the CPU being the brain of the computer is not new, and touchscreens and sensors, which are now common in Ultrabooks, can be likened to the nervous system of a device. Now with the introduction of two perceptual-enabled cameras with higher resolution, devices will have better recognition capabilities along with better perception of depth, similar to how our eyes work. The first new experience this enables is facial authentication — your face is your password. [For additional information on what this enables, see this page on Distance and Movement Technologies]
Then there are the microphones, which we can liken to human ears and which interpret us through the use of advanced voice-recognition technology. This is complemented by speakers that synthesize a voice for the device. Intel has worked with major industry players and partners to achieve new levels of microphone accuracy in Ultrabooks, notebooks, and all-in-one devices through the use of dual-array microphones.
Wong: What is the Intel Perceptual Computing SDK?
Solomon: The Intel Perceptual Computing SDK allows software developers to build applications that take advantage of human-like senses to create immersive, interactive, and intuitive user interfaces. The SDK gives developers code samples, user experience guidelines, documentation and forum support. [For additional information, see the Intel Perceptual Computing SDK page]
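To give a feel for the programming model, here is a minimal C++ sketch built on the UtilPipeline helper class that ships with the SDK's C++ samples. The class and method names (UtilPipeline, EnableImage, QueryImage, LoopFrames, OnNewFrame) follow those samples as recalled here and may vary between SDK releases; treat it as an outline rather than a definitive listing.

```cpp
// Minimal sketch of an SDK application built on the sample framework's
// UtilPipeline helper (names follow the SDK's published C++ samples and
// may differ between SDK releases).
#include "util_pipeline.h"

class MyPipeline : public UtilPipeline {
public:
    MyPipeline() {
        EnableImage(PXCImage::COLOR_FORMAT_RGB32);  // request color frames
        EnableImage(PXCImage::COLOR_FORMAT_DEPTH);  // request depth frames
    }
    // Called once per captured frame set.
    virtual bool OnNewFrame() {
        PXCImage *color = QueryImage(PXCImage::IMAGE_TYPE_COLOR);
        PXCImage *depth = QueryImage(PXCImage::IMAGE_TYPE_DEPTH);
        // ... hand the color and depth frames to the application here ...
        return true;  // returning false would stop the loop
    }
};

int main() {
    MyPipeline pipeline;
    pipeline.LoopFrames();  // initialize the camera, stream frames, fire OnNewFrame()
    return 0;
}
```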
Wong: What kinds of user interaction does the SDK support?
Solomon: The SDK currently supports five categories of capabilities.
Speech recognition can be used to give commands to the device or to take dictation. When a person speaks to their computing device, the speech-recognition algorithm in the SDK interprets the speech, recognizes that the user has issued a command pre-programmed into the application, and passes that command on to the application.
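As a rough sketch of that command flow, the fragment below enables the speech module and handles recognition callbacks. The callback and field names (OnRecognized, Recognition::label) are assumptions based on the SDK's voice sample and may differ between releases; registering the command vocabulary with the speech module is omitted here.

```cpp
// Sketch of command-style speech recognition on the sample framework's
// UtilPipeline helper. OnRecognized and Recognition::label are assumptions
// based on the SDK's voice sample; the command vocabulary is assumed to be
// registered with the speech module elsewhere.
#include <cstdio>
#include "util_pipeline.h"

class VoicePipeline : public UtilPipeline {
public:
    VoicePipeline() {
        EnableVoiceRecognition();  // turn on the speech module
    }
    // Fired whenever the speech module recognizes an utterance.
    virtual void PXCAPI OnRecognized(PXCVoiceRecognition::Recognition *data) {
        // 'label' indexes the command list the application registered;
        // the application maps it to one of its own actions.
        std::printf("recognized command #%d\n", (int)data->label);
    }
};

int main() {
    VoicePipeline pipeline;
    pipeline.LoopFrames();  // stream audio and deliver recognition callbacks
    return 0;
}
```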
Close-range tracking, sometimes known as hand and finger tracking, is the overall term for a sub-category of perceptual computing interactivity that includes recognition and tracking of hand poses, such as a thumbs-up, as well as other hand gestures and movements. This is made possible through the Creative Interactive Gesture Camera's 3D capability.
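A gesture-driven application follows the same callback pattern. In the sketch below, the pose and gesture labels (LABEL_POSE_THUMB_UP, LABEL_NAV_SWIPE_LEFT) are taken from the SDK's gesture sample as recalled here and may differ between releases.

```cpp
// Sketch of hand-pose and gesture events via the sample framework's
// UtilPipeline helper; names follow the SDK's gesture sample as recalled
// and may differ between SDK releases.
#include <cstdio>
#include "util_pipeline.h"

class GesturePipeline : public UtilPipeline {
public:
    GesturePipeline() {
        EnableGesture();  // turn on close-range hand/finger tracking
    }
    // Called when the gesture module detects a pose or gesture event.
    virtual void PXCAPI OnGesture(PXCGesture::Gesture *data) {
        switch (data->label) {
        case PXCGesture::Gesture::LABEL_POSE_THUMB_UP:
            std::printf("thumbs up\n");
            break;
        case PXCGesture::Gesture::LABEL_NAV_SWIPE_LEFT:
            std::printf("swipe left\n");
            break;
        default:
            break;  // other poses, swipes, and movements
        }
    }
};

int main() {
    GesturePipeline pipeline;
    pipeline.LoopFrames();  // stream frames and deliver gesture callbacks
    return 0;
}
```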
Facial analysis, or facial tracking, supports attribute detection, such as recognizing a smile, and can also be used for facial authentication, in which your face takes the place of a password.
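The face module is exposed through the same pipeline. The following sketch enables face detection and reads back a bounding rectangle each frame; the PXCFaceAnalysis names and signatures are assumptions based on the SDK's face-analysis sample and should be checked against the SDK headers for the release in use.

```cpp
// Sketch of per-frame face detection with the sample framework's
// UtilPipeline helper. PXCFaceAnalysis names/signatures are assumptions
// based on the SDK's face-analysis sample.
#include <cstdio>
#include "util_pipeline.h"

class FacePipeline : public UtilPipeline {
public:
    FacePipeline() {
        EnableFaceLocation();  // turn on face detection/tracking
    }
    virtual bool OnNewFrame() {
        PXCFaceAnalysis *face = QueryFace();
        PXCFaceAnalysis::Detection *det =
            face->DynamicCast<PXCFaceAnalysis::Detection>();
        pxcUID fid = 0;
        if (det && face->QueryFace(0, &fid) >= PXC_STATUS_NO_ERROR) {
            PXCFaceAnalysis::Detection::Data data;
            if (det->QueryData(fid, &data) >= PXC_STATUS_NO_ERROR) {
                // data.rectangle is the face bounding box in the color image
                std::printf("face at (%u, %u)\n",
                            (unsigned)data.rectangle.x,
                            (unsigned)data.rectangle.y);
            }
        }
        return true;  // keep streaming
    }
};

int main() {
    FacePipeline pipeline;
    pipeline.LoopFrames();
    return 0;
}
```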
An augmented reality experience is enabled through 2D and 3D object tracking, which combines real-time input from the Creative camera with other graphics or video sources.
Lastly, using information from the depth cloud, we've enabled background subtraction, a technology that allows developers to differentiate and separate objects or people in the foreground from the background. This technology works in real time, so people can eliminate the background from video to maximize their viewable desktop for multitasking or immerse themselves in video chat, online collaboration and more.
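The underlying idea is easy to illustrate outside the SDK: with a depth value available for each color pixel, anything farther than a cutoff (or with no valid depth reading) can be treated as background. The sketch below shows just that thresholding step on raw, pre-aligned buffers; the SDK's background-subtraction capability handles the details this sketch ignores, such as depth-to-color alignment and edge cleanup.

```cpp
// Generic sketch of depth-based background subtraction: pixels farther than
// a cutoff (or with no valid depth reading) are made transparent. Buffer
// layout and alignment are assumptions made for this illustration only.
#include <cstdint>
#include <vector>

// colorRGBA: width*height*4 bytes; depthMM: width*height depth samples in
// millimeters, assumed already aligned to the color image.
void subtractBackground(std::vector<uint8_t> &colorRGBA,
                        const std::vector<uint16_t> &depthMM,
                        int width, int height,
                        uint16_t cutoffMM = 800)  // keep anything closer than ~0.8 m
{
    for (int i = 0; i < width * height; ++i) {
        uint16_t d = depthMM[i];
        // A depth of 0 usually means "no valid reading"; treat it as background too.
        if (d == 0 || d > cutoffMM) {
            colorRGBA[4 * i + 3] = 0;  // zero the alpha channel: pixel is background
        }
    }
}
```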
Wong: What kind of hardware is required to use the SDK?
Solomon: Developers need the Perceptual Computing SDK, which can be downloaded for free, as well as a Creative Interactive Gesture Camera which can be purchased at intel.com/software/perceptual.
Additional minimum system requirements include a 2nd Generation Intel Core processor (IA-32 or Intel 64 architecture), 500 MB of free hard-disk space, a system natively running Microsoft Windows 7 SP1 or Windows 8, and Microsoft Visual Studio C++ 2008 with SP1 or newer.
Wong: What kind of hardware and software is required to deploy applications developed with the SDK?
Solomon: Applications designed to take advantage of Intel Perceptual Computing technology require users to own a Creative Senz3D camera, which can be purchased on Amazon or other online retailers.
Wong: Are there any kind of licensing requirements, costs or restrictions?
Solomon: The Intel Perceptual Computing SDK is free to download. All licensing requirements, costs, and restrictions can be found in the End User License Agreement.