While we already use gestures on our smartphones, this way of interacting with computer technology may soon have a far greater impact on our lives.
Gesture-based computing refers to using the human body to interact with digital technology without the usual input methods, such as a game controller, keyboard, mouse, or voice entry.
There are many different applications for this type of technology. For kids, it could enable a much more intuitive and active style of learning, one that feels more like playing a game. Not only would gesture-based computing allow educators to truly engage students both mentally and physically, but it would make using technology more intuitive for everyone.
If you've ever heard of the G-Speak project, you're probably aware of the downsides. While its motion-tracking gloves allow users to control a computer with gestures, the lab required at least a dozen cameras for the system to work. Obviously, such a setup is out of reach for the average user.
Now, however, gesture-based computing is getting smarter. In 2010, MIT researchers managed to make gesture-based computing cost-effective and accessible, with the help of some cheap, colorful gloves.
With just some clever software, a standard webcam, and a pair of Lycra gloves, the system could instantly translate hand movements into on-screen gestures or commands. Until this project, capturing that level of detail required the expensive and time-consuming motion-capture systems we often see used for Hollywood's special effects: markers placed around the body, or data gloves packed with sensors costing up to $20,000.
The MIT gloves, however, use 10 different colors across 20 patches, arranged for the best possible color separation. This means the computer system can use a webcam to track the colors, identifying the location of each finger and distinguishing the back of the hand from the front. The system's database holds 100,000 images of hands in different positions, and it looks up the closest match to what the camera sees. It can accurately replicate most of the American Sign Language alphabet, although the system still struggles with letters that require rapid motion or involve the thumb.
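The lookup step described above can be sketched as a nearest-neighbor search: given an image of the glove's color patches as seen by the webcam, find the closest stored image in a database of known hand poses. This is a minimal illustration with a tiny, made-up database and pose labels of my own invention; the real MIT system used 100,000 entries and far more sophisticated matching.

```python
import numpy as np

def nearest_pose(query, database, labels):
    """Return the pose label whose stored glove image is closest
    to the query image (Euclidean distance over flattened pixels)."""
    flat_db = database.reshape(len(database), -1).astype(float)
    flat_q = query.reshape(-1).astype(float)
    dists = np.linalg.norm(flat_db - flat_q, axis=1)
    return labels[int(np.argmin(dists))]

# Toy database: three 4x4 "glove images", each a constant color value.
database = np.stack([np.full((4, 4), i) for i in range(3)])
labels = ["open hand", "fist", "point"]

# A query identical to entry 1 matches "fist".
print(nearest_pose(database[1], database, labels))  # -> fist
```

Because the lookup is just a distance comparison, the quality of the result depends entirely on how well the glove's color pattern makes distinct hand poses look distinct to the camera.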
Since these gloves are so cheap to make, they could help bring gesture-based computing into the mainstream. Of course, for truly widespread appeal, the gloves will eventually have to go. The ultimate goal is markerless motion capture, which may not be as far away as we think.
Not only will gesture-based computing have numerous applications for games, but it will also enable 3D modeling in industries such as design and engineering.
Perhaps the most famous example of gesture-based computing in popular culture is Tom Cruise in Minority Report. Who could forget the image of him standing in front of a screen, manipulating images and ordering his computer to perform various tasks simply by waving, pinching, and making other gestures? Sure, we may not quite be there yet, but it's only a matter of time before needing to touch a screen or speak to your smartphone or computer is considered 'old-fashioned.'
Of course, there are numerous roadblocks to overcome before we get to that place. One of them, for example, is the fact that universal gestures aren't quite as universal as we might think. In North America, you may beckon someone by holding up a clenched fist and curling your index finger. In parts of Africa, the same gesture is made with the full hand, and it differs again across various European and Asian countries.
Researchers must therefore develop systems capable of learning these variations, potentially with the help of AI and machine learning. Luckily, there are some very smart people around the world working to make gesture-based computing possible for people of all ages.
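One way such a learning system could handle regional gesture variation is to classify new input by similarity to labeled training examples. The sketch below uses a toy k-nearest-neighbor classifier; the two-dimensional features and the gesture names are entirely hypothetical, standing in for whatever a real tracking system would measure.

```python
import numpy as np
from collections import Counter

def knn_classify(sample, train_x, train_y, k=3):
    """Label a gesture sample by majority vote among its k nearest
    training examples (Euclidean distance in feature space)."""
    dists = np.linalg.norm(train_x - sample, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training data: 2-D features (say, index-finger
# extension and palm openness) for two regional beckoning styles.
train_x = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],   # one-finger beckon
                    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])  # full-hand beckon
train_y = ["one-finger beckon"] * 3 + ["full-hand beckon"] * 3

print(knn_classify(np.array([0.88, 0.12]), train_x, train_y))
# -> one-finger beckon
```

The point of a learned approach is that both regional styles map to the same intent: once classified, either beckoning gesture can trigger the same command.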