Most robots are controlled using a joystick or other traditional input device, but this new system allows operators to issue commands to robots using simple hand and body gestures.
Developed by engineer Tsuyoshi Horo at the University of Tokyo, the system uses a circular array of cameras to detect human movements in the room, then conveys them to a robot as directional commands.
The system produces a real-time, three-dimensional volumetric model of people or objects inside the circle of cameras, allowing for precise tracking of movements. Be sure to check out this cool video clip showing what the cameras “see” once run through Horo’s software:
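For the curious, building a volumetric model from a ring of cameras is classically done with “shape from silhouette” (visual hull) carving: each camera contributes a 2D silhouette, and a voxel survives only if it projects inside every one of them. The sketch below is an illustrative toy, not Horo’s actual pipeline; the camera count, grid resolution, and orthographic-projection model are all assumptions made for simplicity.

```python
import numpy as np

def visual_hull(silhouette_fn, n_cameras=8, grid=32, radius=1.0):
    """Carve a voxel grid using silhouettes from cameras on a circle.

    silhouette_fn(u, v) -> bool array: True where the image point (u, v)
    lies inside the object's silhouette. Orthographic cameras are an
    illustrative simplification of a real calibrated rig.
    """
    # Cubic voxel grid spanning [-radius, radius]^3.
    axis = np.linspace(-radius, radius, grid)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    occupied = np.ones(X.shape, dtype=bool)
    for k in range(n_cameras):
        theta = 2 * np.pi * k / n_cameras   # cameras evenly spaced on a circle
        # Orthographic projection onto camera k's image plane:
        u = X * np.cos(theta) + Y * np.sin(theta)   # horizontal image axis
        v = Z                                       # vertical image axis
        occupied &= silhouette_fn(u, v)             # carve voxels outside the silhouette
    return occupied

# Toy object: a ball of radius 0.5 at the origin. Its silhouette from any
# horizontal viewpoint is a disc of the same radius.
sphere_silhouette = lambda u, v: u**2 + v**2 <= 0.25

hull = visual_hull(sphere_silhouette)
print("voxels retained:", int(hull.sum()))
```

With more cameras the carved hull converges toward the object’s true visual hull, which is why a dense circular array like Horo’s can track motion so precisely.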
Controlling robots is just one possible application for the gesture recognition software. Horo has also implemented several computer user interface prototypes using the same basic system. You can find more information on Horo’s research website (translated from Japanese) and more videos over on his YouTube page.