Gesture recognition is a computing process that interprets human gestures using mathematical algorithms. It is not limited to hand gestures; it can be used to recognize everything from head nods to different walking gaits.
Gesture recognition is a growing field of computer science, with an international conference devoted to gesture and facial recognition. As the field continues to grow, so will the ways it can be used. Gesture recognition is designed to enhance human-computer interaction and can work through several input channels, such as touch screens, cameras, or peripheral devices.
Touch screen gesture recognition has become second nature to many people today. While some computers and operating systems allow for customized gestures, most people know that they can pinch-to-zoom on a touch screen when they want a closer look at something. This gesture spans nearly all user interfaces, from smartphones to personal computers, and touch screens allow for relatively easy interaction between humans and computers.
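Under the hood, pinch-to-zoom typically reduces to tracking the distance between two touch points across frames. Here is a minimal Python sketch of that core calculation; the function name and pixel coordinates are illustrative, not taken from any particular framework:

```python
import math

def pinch_scale(prev_touches, curr_touches):
    """Return the zoom factor implied by a two-finger pinch.

    Each argument is a pair of (x, y) touch points sampled on
    consecutive frames. A factor above 1 means the fingers moved
    apart (zoom in); below 1 means they pinched together (zoom out).
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    previous = spread(prev_touches)
    current = spread(curr_touches)
    return current / previous if previous else 1.0

# Example: fingers move from 100 px apart to 150 px apart -> 1.5x zoom.
print(pinch_scale([(10, 10), (110, 10)], [(0, 10), (150, 10)]))
```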
Vision-based gesture recognition technology uses a camera and motion sensor to track user movements and translate them into commands in real time. Newer cameras and programs can also capture depth data, which helps improve gesture tracking. Through real-time image processing, users can interact with a program immediately to achieve the desired results. For example, the Xbox Kinect relied on a camera to translate players' movements as part of different games. There have also been experiments that use a camera to track an individual's gait and then apply deep learning algorithms to assess their risk of falling and recommend ways to lower it.
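As a rough illustration of real-time, vision-based tracking, the sketch below reads webcam frames and reports the position of an index fingertip using the open-source OpenCV and MediaPipe libraries. The camera index and frame count are assumptions, and a real system would map these landmark coordinates to gestures rather than simply printing them:

```python
import cv2
import mediapipe as mp

# Track at most one hand in the video stream.
hands = mp.solutions.hands.Hands(max_num_hands=1)
capture = cv2.VideoCapture(0)  # assumption: default webcam at index 0

for _ in range(300):  # sample roughly ten seconds of video at 30 fps
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip, in normalized image coordinates.
        tip = results.multi_hand_landmarks[0].landmark[8]
        print(f"Index fingertip at ({tip.x:.2f}, {tip.y:.2f})")

capture.release()
hands.close()
```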
Devices such as those made by Leap Motion use specialized cameras and software built specifically for hand tracking in order to optimize motion-tracking results. By focusing only on hand gesture recognition, such programs achieve greater accuracy, allowing users to interact with their systems easily and completely hands-free. By integrating this technology into existing touch screen devices and kiosks, gesture-based systems can allow multiple users to interact with the same device without the fear of spreading germs.
A variety of peripheral devices enable gesture interfaces. For example, most virtual reality and augmented reality systems include some kind of glove or controller that detects hand gestures and translates the user's movements into the movements of a character in the game.
Gesture recognition can be used to improve a variety of fields, such as:
- Public health: By removing the need for touch screens on self-service kiosks, businesses and organizations could help reduce the number of germs being spread. This is especially helpful for mitigating the spread of infectious diseases such as COVID-19 or influenza.
- Health diagnostics: Through analysis of movements, doctors can help diagnose diseases or identify fall risks in order to improve patients' overall outcomes. Beyond analyzing gaits, gesture analysis and machine learning can be used to identify small movements such as tics and spasms to help point toward potential diagnoses.
- Security: Programs can be set up to recognize hand gestures and send alerts in response. For example, a home security system can be trained to recognize what hands look like when they are holding guns and then send an alert. Or an organization can train employees on a specific hand gesture so that when cameras pick up the gesture, the system silently alerts law enforcement, such as in the event of a robbery (a simplified matching sketch follows this list).
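As a simplified illustration of the security scenario above, the sketch below compares live hand landmarks against a stored "alert gesture" template. The template values, threshold, and send_silent_alert function are hypothetical, and a production system would use a trained classifier rather than raw template matching:

```python
import math

# Hypothetical template: normalized (x, y) landmark positions recorded
# when employees were trained on the designated alert gesture.
ALERT_TEMPLATE = [(0.50, 0.80), (0.45, 0.55), (0.55, 0.55), (0.50, 0.30)]
THRESHOLD = 0.05  # maximum mean distance that still counts as a match

def matches_alert_gesture(landmarks):
    """Compare live hand landmarks against the stored alert template.

    `landmarks` is a list of normalized (x, y) points in the same
    order as the template.
    """
    distances = [
        math.hypot(x - tx, y - ty)
        for (x, y), (tx, ty) in zip(landmarks, ALERT_TEMPLATE)
    ]
    return sum(distances) / len(distances) < THRESHOLD

def send_silent_alert():
    # Placeholder: a real system would notify law enforcement here.
    print("Silent alert sent.")

# Live landmarks close to the template trigger the alert.
live = [(0.51, 0.79), (0.44, 0.56), (0.56, 0.54), (0.49, 0.31)]
if matches_alert_gesture(live):
    send_silent_alert()
```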