Gesture tracking with the Smart Laser Scanner
The problem of tracking hands and fingers in natural scenes has received much attention, mostly using passive vision systems and computationally intensive image processing. We are currently studying a simple active tracking system using a laser diode (visible or invisible light), steering mirrors, and a single non-imaging photodetector. The system acquires three-dimensional coordinates in real time without any image processing at all. Essentially, it is a smart rangefinder scanner that, instead of continuously sweeping the full field of view, restricts its scanning area to a very narrow window precisely the size of the target.
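The local-scan principle above can be illustrated with a minimal simulation. This is a sketch only, not the actual system: the target is modeled as a disc, the photodetector is idealized as a binary hit/miss signal, and the update rule (nudge the estimate toward the centroid of the reflecting samples on a small dither circle) is an assumed stand-in for the real mirror-steering logic.

```python
import math

# Hypothetical target: a disc of radius TARGET_R centred at (tx, ty).
TARGET_R = 5.0

def photodetector(x, y, tx, ty):
    """Idealised non-imaging detector: 1 if the laser spot hits the target."""
    return math.hypot(x - tx, y - ty) <= TARGET_R

def track_step(cx, cy, tx, ty, dither_r=4.0, n_samples=32, gain=0.5):
    """One tracking iteration: scan a small circle (the 'narrow window')
    around the current estimate (cx, cy) and move the estimate toward
    the centroid of the samples that returned a reflection."""
    sx = sy = hits = 0.0
    for k in range(n_samples):
        a = 2 * math.pi * k / n_samples
        x = cx + dither_r * math.cos(a)
        y = cy + dither_r * math.sin(a)
        if photodetector(x, y, tx, ty):
            sx += x
            sy += y
            hits += 1
    if hits:  # pull the scan window toward the reflecting side
        cx += gain * (sx / hits - cx)
        cy += gain * (sy / hits - cy)
    return cx, cy

# Converge on a stationary target from a nearby initial guess.
cx, cy = 0.0, 0.0
for _ in range(50):
    cx, cy = track_step(cx, cy, tx=4.0, ty=-3.0)
```

Because only a handful of samples around the current estimate are needed per update, the loop runs at the mirror rate rather than at a camera frame rate, which is what makes the markerless real-time tracking possible.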
Tracking of multiple targets is also possible without duplicating any part of the system (targets are considered sequentially). Applications of a multiple-target tracking system are countless. Such a configuration allows, for instance, multiple users to interact in the same virtual space, or a single user to control several virtual tools at the same time, resizing windows and controlling information screens as imagined in Spielberg’s film “Minority Report” – but without the need to wear special gloves or markers. A very interesting characteristic of the proposed 3D laser-based locator is that it can also be used as an output device: indeed, the laser scanner can write information back to the user by projecting alphanumeric data onto any available surface, such as the palm of the hand. This has been demonstrated successfully without interrupting the tracking. Finally, the hardware is simple enough that, using state-of-the-art Micro-Opto-Electro-Mechanical-System (MOEMS) technology, it should be possible to integrate the whole system on a single chip, making a versatile human-machine input/output interface for use in mobile computing devices.
Markerless finger tracking:
Gestural interface (proof-of-principle) demos:
- New: check “Sticky Light” and “scoreLight” setups.
- Tracking one (bare) finger [WMV, 7.1MB]
- Far reach (using scaled-up “model” of fingertips) [WMV, 3.7MB] / [WMV, 1.5MB]
- A. Cassinelli, S. Perrin and M. Ishikawa, Smart Laser-Scanner for 3D Human-Machine Interface, ACM SIGCHI 2005 (CHI ’05) International Conference on Human Factors in Computing Systems, Portland, OR, USA, April 2–7, 2005, pp. 1138–1139 (2005). Abstract: [PDF-835KB]. Video Demo: Good Quality: [MPG-176MB], Compressed: [MPG-28MB]. Slide presentation (click on images to launch video demos) [PPT-10MB].
- A. Cassinelli, S. Perrin and M. Ishikawa, Markerless Laser-based Tracking for Real-Time 3D Gesture Acquisition, ACM SIGGRAPH 2004, Los Angeles. Abstract: [PDF-87KB]. Video Demo: Good Quality [AVI-24.7MB], Compressed [AVI-6.3MB]. Poster [JPG-835KB].
- S. Perrin, A. Cassinelli and M. Ishikawa, Gesture Recognition Using Laser-based Tracking System, 6th International Conference on Automatic Face and Gesture Recognition 2004 (FG 2004), Seoul, Korea, 17-19 May 2004 [PDF-402KB], Poster [PPT-457KB].
- S. Perrin, A. Cassinelli and M. Ishikawa, Laser-Based Finger Tracking System Suitable for MOEMS Integration, Image and Vision Computing, New Zealand (IVCNZ 2003), Massey Univ., 26-28 Nov. 2003, pp.131-136, (2003). [PDF-239KB]. Poster presentation [PPT-1432KB].
- A more detailed webpage is available here: [ http://www.k2.t.u-tokyo.ac.jp/fusion/LaserActiveTracking/index-e.html ]