“Retro-Reflective Air-Gesture Interactive Display” — if you only hear or read the name, or even see a photo of the device, it is hard to understand what it is.
On the official Microsoft Research website, the following description is given:
Sometimes, it’s better to control with gestures than buttons. Using a retro-reflective screen and a camera close to the projector makes all objects cast a shadow, regardless of their color. This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.
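The quoted principle is simple enough to prototype. Below is a minimal, hypothetical sketch (not Microsoft's actual code, and the function name and threshold are my own): because the retro-reflective screen bounces the projector's light straight back at the co-located camera, anything held above the screen appears as a dark blob regardless of its color, so a plain brightness threshold plus a centroid yields a cursor position.

```python
import numpy as np

def detect_shadow_cursor(frame, threshold=60):
    """Locate the centroid of the shadow cast on a retro-reflective screen.

    Because the screen reflects the projector's light straight back to
    the camera, any object above it appears dark regardless of its
    color, so a simple brightness threshold isolates it.

    frame: 2-D uint8 grayscale image; returns (x, y) or None.
    """
    shadow = frame < threshold          # dark pixels = shadow
    if not shadow.any():
        return None
    ys, xs = np.nonzero(shadow)
    return float(xs.mean()), float(ys.mean())

# Toy frame: a bright screen with one dark "hand" blob.
frame = np.full((120, 160), 220, dtype=np.uint8)
frame[40:60, 70:90] = 10                # the shadow region
print(detect_shadow_cursor(frame))      # → (79.5, 49.5)
```

In a real system the centroid would be smoothed over frames and mapped to screen coordinates, but the core idea is exactly this threshold-and-locate step.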
Steven Bathiche, Director of Microsoft Applied Sciences, with his dreamy eyes and scholarly appearance, shows us five demos of Natural User Interfaces.
“Mostly the touchscreens today can only see the things that are touching directly on the screen,” says Steven. So what are Steven and his team going to show us?
“We wanna explore the area above the screen…” he continues. He demonstrates the retro-reflective screen: a projector and camera mounted overhead track his hand movements and translate them into signals the computer can understand, just like a mouse does.
He shows how a virtual mouse can be manipulated in 3D space without touching anything. The word “manipulation” needs a bit of explanation: an ordinary mouse can detect movement only along the X and Y axes, but this device can also detect how far Steven’s hand is from the screen.
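How might that third axis be sensed with a single overhead camera? One plausible heuristic (an assumption on my part, not the method shown in the demo) is that the hand's dark silhouette grows as it rises toward the camera, so the silhouette's area encodes height:

```python
import numpy as np

def estimate_height(frame, ref_area, threshold=60):
    """Rough hand-height estimate from silhouette size (a hypothetical
    heuristic, not Microsoft's actual method).

    Apparent linear size scales with 1/distance from the camera, so
    silhouette area scales with 1/distance**2.  ref_area is the area
    measured at a known reference distance of 1.0 (arbitrary units).
    Returns distance relative to that reference, or None if no hand.
    """
    area = int((frame < threshold).sum())
    if area == 0:
        return None
    return (ref_area / area) ** 0.5

# Toy frames: the same "hand" at two heights.
far = np.full((100, 100), 200, dtype=np.uint8)
far[0:20, 0:20] = 0                     # 400-pixel silhouette (reference)
near = np.full((100, 100), 200, dtype=np.uint8)
near[0:40, 0:40] = 0                    # 1600 pixels: twice as close

print(estimate_height(far, 400))        # → 1.0
print(estimate_height(near, 400))       # → 0.5
```

Real depth sensing would need calibration and is sensitive to hand pose, but this illustrates how a single camera can recover more than X and Y.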
The next device Steven shows us (actually two separate devices working on the same basic principle) is a dream. Steven and a co-worker sit in front of a special monitor. The two are positioned a few feet apart, so their viewing angles differ. From his position, Steven sees a teapot, which he can move just by moving his head, without touching anything; from his angle, Steven’s co-worker perceives it as a human skull. The 3D rendering is done in real time.
The next dream project Steven shows is a digital presentation of the real world as seen through a glass window. As Steven leans forward, backward, down, or up, the display changes just as it would in the real world.
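The window effect rests on simple motion parallax: as the viewer's head moves, a point rendered “behind” the screen shifts on the screen plane by an amount given by similar triangles, with nearer points shifting more than distant ones. A toy model of that geometry (names and units are illustrative, not from the demo):

```python
def parallax_shift(head_dx, screen_dist, point_depth):
    """On-screen shift of a virtual point behind a 'window' display
    when the viewer's head moves sideways by head_dx.

    screen_dist: distance from the head to the screen plane.
    point_depth: distance of the point behind the screen plane.
    By similar triangles, the line from the eye to the point crosses
    the screen at an offset of head_dx * depth / (dist + depth), so
    points on the screen plane (depth 0) do not move at all.
    """
    return head_dx * point_depth / (screen_dist + point_depth)

print(parallax_shift(10, 50, 50))   # → 5.0 (point as far behind as viewer is in front)
print(parallax_shift(10, 50, 0))    # → 0.0 (point on the screen plane is fixed)
```

Rendering the whole scene with this head-dependent (off-axis) projection is what makes the flat display read as a window.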
This last project really excites us: it shows the possibility of robotic surgery in the future. A surgeon would be able to tie a knot remotely inside a patient’s abdomen in real time, without wearing the special gloves currently used and obviously without physically touching the patient. The surgeon could simply hold a needle holder freely in hand and “sew in the air,” and a derivative of Steven’s system would throw the knot (via robotic arms) inside the patient very precisely.
You can watch the video here: