posted Wednesday, November 27, 2013 at 1:21 PM EST
Instead, this new technology borrows a technique from the telecommunications industry, embedding timing information into the light it sends out. Here's how the press release describes it:
Instead, the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled, says Ramesh Raskar, an associate professor of media arts and sciences and leader of the Camera Culture group within the Media Lab, who developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.

Because of that, the new nanocamera can accurately account for transparent objects and still locate them in 3D space. And since the fix is essentially software-based, the camera could be built from existing hardware, notably LEDs, which can strobe fast enough for the new method to work.
“We use a new method that allows us to encode information in time,” Raskar says. “So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal.”
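The press release doesn't spell out the actual code or decoding math, but the telecom idea Raskar describes can be sketched with a standard technique: strobe a pseudorandom code, then cross-correlate the returning light with the emitted code, so each path length shows up as its own correlation peak. Everything below (the sample rate, the code length, the scene distances) is an assumption for illustration, not the team's actual method.

```python
import numpy as np

# Hypothetical sketch of coded time-of-flight: a pseudorandom strobe
# pattern plus cross-correlation resolves several distances from one
# mixed signal. All parameters here are illustrative assumptions.
rng = np.random.default_rng(0)
C = 3e8   # speed of light, m/s
FS = 1e9  # assumed 1 GHz sample rate -> 1 ns bins (~15 cm per bin)

# Emitted strobe pattern: a pseudorandom +/-1 code (an on/off LED
# sequence with its DC level removed), whose autocorrelation is
# sharply peaked.
code = 2.0 * rng.integers(0, 2, size=512) - 1.0

def echo(distance_m, amplitude, total_len):
    """Return the emitted code delayed by the round trip to distance_m."""
    d = int(round(2 * distance_m / C * FS))
    out = np.zeros(total_len)
    out[d:d + len(code)] = amplitude * code
    return out

# A pane of glass at 3 m partially reflects the light; the wall behind
# it at 7.5 m returns the rest. The sensor only ever sees their sum.
n = len(code) + 100
received = echo(3.0, 0.4, n) + echo(7.5, 1.0, n)

# Cross-correlating the mixed signal with the emitted code gives one
# peak per path length, so both surfaces are recovered from the
# single signal -- the "calculations common in telecom" Raskar means.
corr = np.correlate(received, code, mode="full")
lags = np.arange(len(corr)) - (len(code) - 1)
top_two = lags[np.argsort(corr)[-2:]]  # two strongest returns
distances = sorted(lag * C / FS / 2 for lag in top_two)
print(distances)  # ~ [3.0, 7.5] metres: both the glass and the wall
```

An ordinary time-of-flight camera would report a single muddled depth for that pixel; the code is what lets the two overlapping returns be told apart.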
The researchers behind the project say it could be used for medical imaging, or for better collision avoidance in cars, since it can handle obscuring conditions like snow and fog. It could also enable more accurate gesture and motion tracking, which, given the popularity of the Kinect, may be the more likely path.