Tired of using the keyboard and keypad of your devices for input? The small size of mobile devices also leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels), which diminishes their usability and functionality.
No problem: here comes SKINPUT, a brand-new technology that uses human skin as an input device. Developed by Chris Harrison, D. Tan, and D. Morris, it takes interaction a step beyond today's devices. Just touch your body and it acts as an input device. Isn't that wonderful?
A brief note about the team:
Chris Harrison is a fifth-year Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University, advised by Scott Hudson. He is also a Microsoft Research Ph.D. Fellow and editor-in-chief of XRDS, ACM's flagship magazine for students.
D. Morris is a researcher in the Computational User Experiences (CUE) group at Microsoft Research.
D. Tan is a Principal Researcher at Microsoft Research, where he manages the Computational User Experiences group in Redmond, Washington, as well as the Human-Computer Interaction group in Beijing, China. He also holds an affiliate faculty appointment in the Department of Computer Science and Engineering at the University of Washington.
They proposed this technology in 2010 in collaboration with Microsoft Research.
What is SKINPUT?
Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.
Operation:
Skinput has been publicly demonstrated as an armband, which sits on the biceps. This prototype contains ten small cantilevered piezo elements configured to be highly resonant, sensitive to frequencies between 25 and 78 Hz. This configuration acts like a mechanical Fast Fourier transform and provides extreme out-of-band noise suppression, allowing the system to function even while the user is in motion. From the upper arm, the sensors can localize finger taps on any part of the arm, all the way down to the fingertips, with accuracies in excess of 90% (as high as 96% for five input locations). Classification is driven by a support vector machine using a series of time-independent acoustic features that act like a fingerprint. Like speech recognition systems, the Skinput recognition engine must be trained on the "sound" of each input location before use. After training, locations can be bound to interactive functions, such as pause/play song, increase/decrease music volume, speed dial, and menu navigation.
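To make that pipeline concrete, here is a minimal sketch in Python of how such a recognizer could be wired up, using a scikit-learn SVM. The features, the placeholder sensor data, and the location-to-action table are all illustrative assumptions of mine; the actual prototype's feature set and tooling differ (see the paper linked below).

```python
# Illustrative sketch of a Skinput-style classification pipeline:
# train an SVM on acoustic features from each tap location, then map
# predicted locations to bound actions. Feature extraction and sensor
# I/O are simplified stand-ins, not the paper's actual implementation.
import numpy as np
from sklearn.svm import SVC

def extract_features(window):
    """Toy time-independent features from one 10-channel sensor window.

    Stand-ins for the paper's per-channel amplitude/spectral features:
    mean absolute amplitude and peak amplitude per channel.
    """
    window = np.asarray(window)              # shape: (channels, samples)
    return np.concatenate([np.abs(window).mean(axis=1),
                           np.abs(window).max(axis=1)])

# Training phase: the user taps each location several times while the
# system records labelled sensor windows (the "sound" of each location).
train_windows = [np.random.randn(10, 256) for _ in range(50)]  # placeholder data
train_labels = np.random.randint(0, 5, size=50)                # 5 tap locations

clf = SVC(kernel="linear")
clf.fit([extract_features(w) for w in train_windows], train_labels)

# Use phase: classify a new tap and dispatch the function bound to it.
ACTIONS = {0: "play/pause", 1: "volume up", 2: "volume down",
           3: "speed dial", 4: "menu"}
new_tap = np.random.randn(10, 256)                             # placeholder tap
location = int(clf.predict([extract_features(new_tap)])[0])
print("Tap at location", location, "->", ACTIONS[location])
```

The key idea the sketch preserves is the two-phase design: a short per-user training pass to learn each location's acoustic fingerprint, then a simple lookup from the classifier's output to whatever function that location was bound to.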
With the addition of a pico-projector to the armband, Skinput allows users to interact with a graphical user interface displayed directly on the skin. This enables several interaction modalities, including button-based hierarchical navigation, list-based sliding navigation (similar to an iPod), text/number entry (e.g., telephone number keypad), and gaming (e.g., Tetris, Frogger).
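As a toy illustration of the button-based hierarchical navigation, the sketch below walks a menu tree using classified tap locations. The menu layout and location indices are invented for the example, not taken from the paper.

```python
# Hypothetical menu tree for button-based hierarchical navigation:
# each classified tap location selects a labelled button, which either
# runs an action (leaf) or descends into a submenu.
MENU = {
    "root":  {0: ("Music", "music"), 1: ("Phone", "phone"), 2: ("Games", "games")},
    "music": {0: ("Play/Pause", None), 1: ("Next", None), 2: ("Back", "root")},
    "phone": {0: ("Speed dial", None), 2: ("Back", "root")},
    "games": {0: ("Tetris", None), 1: ("Frogger", None), 2: ("Back", "root")},
}

def handle_tap(state, location):
    """Advance the menu state for a tap at a classified skin location."""
    label, target = MENU[state].get(location, ("", None))
    print("Selected:", label or "(unmapped location)")
    return target or state   # descend into a submenu, or stay on a leaf action

state = "root"
for tap in [0, 2, 1]:        # e.g., Music -> Back -> Phone
    state = handle_tap(state, tap)
```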
But do you want to know exactly how this technology works?
Well, take a look at the file given below. Just download this short paper about Skinput and help yourself.
Download Paper
Also, to get more information, visit here.