and future plans

My previous research at the Ishikawa-Watanabe laboratory was on optical networks
and optical computing. My research on these topics can be found here. In 2004 I co-founded
and led the Meta-Perception group, a group specialized in developing new technologies and
algorithms in the field of human-computer interfaces to be deployed on wearables and in public
spaces (hence my work as a Media Artist). Most of the projects described there are directly or
indirectly the result of my 14 years of work in that laboratory. If I had to explain my research in
a generic way, I would say that I have been working to bridge the gap between multiple
disciplines, creating new avenues for research and opening eyes to new design dimensions
that would have been difficult to appreciate if I did not have a wide range of interests and
backgrounds (physics & mathematics, cognitive sciences, and art).

If I had to list a few research achievements in a more concrete way, I would say that I have
done some paradigm-shifting work in the following fields:

1. Human-computer interfaces using novel optical technologies, in particular smart sensors
using laser technology capable of tracking at 10 kHz or realizing simple-shape projection
mapping on highly deformable surfaces in real time, without any perceivable delay.

2. Mediated Self & Devices that Alter Perception. I co-created a specific workshop called
“DAP” in conjunction with ISMAR, and I have been working continuously since 2005 on
devices that help blind people navigate in complex environments (Haptic Radar, Virtual
Haptic Radar, haptic-car, etc.), in collaboration with cognitive scientists as well as with
companies concerned about the accessibility of their technology (for instance, NISSAN).

3. The concept of “Space as Media”, which I have studied both from the point of view of
Media Art and architecture (how to integrate technology into the living space, in collaboration
with the Okamura furniture company) and from the point of view of “spatialized” human-computer
interfaces. The Volume Slicing Display is a concrete example of a technology that could be
used in the near future to explore CT or MRI scans; the BrainCloud is another example, a
large-scale ongoing project whose goal is to create a spatialized database for neurosciences
and genetics (academic papers and the latest results appear in the form of tweets
anchored in a virtual brain volume).

4. Ubiquitous computing and augmented inter-personal communication. I coined the
term “Invoked Computing” and presented a prototype of this futuristic form of ambient
intelligence. I have also developed the concept of “minimal displays” (≠ ambient displays)
capable of augmenting inter-personal communication using Spatial Augmented Reality at
the scale of the individual or of public spaces (the biggest experiment took place over a

For more details, I refer the reader to the Meta-Perception group page, as well as to my
presentations on SlideShare and, of course, my personal page.