Gesture-based controls are starting to change the way users interact with computer applications - as seen, for example, with the Wiimote, a remote control used with the Wii games console.
Wouldn't it be great if the system could detect what was happening by interpreting movement and gestures directly without the need for any input device?
In order to do this, technology is required that can build up a three-dimensional picture of the scene or environment where the user is placed. Movement and gestures within this 3D environment can then be used by the system to determine what the user is trying to do.
The ZCam™, produced by 3DV Systems, is a video camera that captures depth information (used to build the 3D model) alongside conventional video.
The technology is based on the Time of Flight principle. In this technique, 3D depth data is generated by sending pulses of infrared light into the scene and detecting the light reflected from the surfaces of objects in the scene. Using the time taken for a light pulse to travel to the target and back, the distance can be calculated and used to build up 3D depth information for all objects in the scene.
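The round-trip calculation above can be sketched in a few lines. This is a simplified model of the principle only (a single pulse travelling at the speed of light, with no account of the modulation or gating schemes real time-of-flight sensors use); the function name is illustrative, not part of any 3DV Systems API.

```python
# Speed of light in a vacuum, in metres per second.
SPEED_OF_LIGHT = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Estimate the distance to a surface from the round-trip time
    of a light pulse (out to the target and back).

    The pulse covers the distance twice, so we halve the total
    path length travelled in the measured time.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse returning after ~6.67 nanoseconds has hit a
# surface roughly one metre away.
print(distance_from_round_trip(6.67e-9))
```

Repeating this per pixel across the sensor yields a depth value for every point in the scene, which is how the depth map is assembled.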
The technology delivers high-quality depth imaging (depth resolution of millimetres) in real time (60 fps or more), with little or no CPU load.
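The millimetre resolution figure implies remarkably fine timing: inverting the round-trip relationship shows how small a timing difference the sensor must resolve per millimetre of depth. The sketch below is back-of-the-envelope arithmetic under the same simplified constant-speed model, not a description of the DeepC™ chipset's internals.

```python
# Speed of light in a vacuum, in metres per second.
SPEED_OF_LIGHT = 299_792_458.0

def timing_resolution_for_depth(depth_resolution_m: float) -> float:
    """Return the round-trip timing difference (in seconds) that
    corresponds to a given depth difference: the pulse travels the
    extra distance twice, so dt = 2 * dd / c.
    """
    return 2.0 * depth_resolution_m / SPEED_OF_LIGHT

# One millimetre of depth corresponds to a timing difference of
# only a few picoseconds - hence the need for dedicated sensing
# hardware rather than general-purpose timing on the CPU.
print(timing_resolution_for_depth(0.001))
```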
The latest ZCam™ is based on the DeepC™ technology, a chipset that incorporates the sensing technology.
There are a number of related publications which describe the technology in greater detail and can be found at
Creating an intuitive mechanism to replace the keyboard and/or mouse has been an aspiration for many people ever since Tom Cruise in 'Minority Report' looked so cool moving files around with his fingers.
Perhaps this technology can be used in helping to achieve those aims.