Kinected Conference

Lining Yao, Anthony DeVincenzi, Ramesh Raskar, Hiroshi Ishii / 2011

What can we do if the screen in a videoconference room turns into an interactive display? Using a Kinect camera and sound sensors, we explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels that carry information about their material properties and location. Four features are implemented: “Talking to Focus”, “Freezing Former Frames”, “Privacy Zone”, and “Spatial Augmented Reality”.
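
To make one of these features concrete, here is a minimal Python sketch (not the project’s actual code) of a depth-gated “Privacy Zone”: assuming a color frame and a pixel-aligned depth map, as a calibrated Kinect stream would provide, it hides every pixel that lies beyond a chosen depth threshold, so only the foreground participant remains visible. The function name, the 1.5 m threshold, and the synthetic test data are illustrative assumptions.

    import numpy as np

    def privacy_zone(frame: np.ndarray, depth: np.ndarray,
                     max_depth_mm: float = 1500.0) -> np.ndarray:
        """Suppress every pixel whose calibrated depth exceeds max_depth_mm.

        frame: H x W x 3 uint8 color image.
        depth: H x W depth map in millimeters, assumed pixel-aligned
               with the color image (as a calibrated Kinect provides).
        """
        visible = depth <= max_depth_mm   # True where the pixel is in the zone
        out = frame.copy()
        out[~visible] = 0                 # black out everything beyond the threshold
        return out

    if __name__ == "__main__":
        # Synthetic example: a random 480x640 frame and a left-to-right depth ramp.
        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
        depth = np.tile(np.linspace(500, 4000, 640, dtype=np.float32), (480, 1))
        masked = privacy_zone(frame, depth)
        hidden = int((masked == 0).all(axis=2).sum())
        print(masked.shape, hidden, "pixels hidden")

The same depth mask could instead feed a blur or inpainting pass rather than blacking pixels out; the threshold test is the essential depth-gating step.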

 

