Exploiting mouth gestures to improve interaction in virtual reality.

Researchers have developed a new technology that exploits mouth gestures to create interaction between users and virtual reality. The team at Binghamton University, State University of New York, has taken the experiential field a step forward, allowing deeper immersion in the virtual world.

In recent years there has been significant progress in this technology, with a growing number of head-mounted displays that let users dive into virtual reality scenarios through striking visual experiences. These devices have a notable limitation, however: they occlude the upper half of the user's face, preventing full-scale facial action recognition. To address this problem, a team of researchers at Binghamton University led by Professor of Computer Science Lijun Yin created a new technology capable of interpreting mouth gestures and exploiting them to improve real-time interaction within virtual reality.

Yin’s team tested the application on a group of graduate students. The students were asked to wear a head-mounted display and were then presented with a simple game whose objective was to guide the player’s avatar around a forest and eat as many cakes as possible. Participants selected their direction of travel using head rotation, moved using mouth gestures, and could only eat a cake by smiling. The new system recognised, described and classified each user’s mouth movements and reached high correct recognition rates. The system’s efficacy was also tested, demonstrated and validated through the development of this real-time virtual reality application, sketched below.
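The paper does not publish its code, but the interaction loop described above can be illustrated with a minimal sketch: a per-frame classifier labels lower-face actions from the headset-facing camera, and those labels are mapped to avatar controls (head rotation for direction, a mouth gesture for movement, a smile to eat). All names below (classify_mouth_action, Avatar, try_eat_cake, and the specific gesture-to-action mapping) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the gesture-to-control mapping described above.
# Assumes a lower-face action classifier already exists (placeholder here)
# and that the game exposes a simple avatar/world API. Names are hypothetical.
import math
from dataclasses import dataclass


@dataclass
class Avatar:
    x: float = 0.0
    y: float = 0.0
    heading_deg: float = 0.0  # set from head-mounted-display rotation

    def step_forward(self, distance: float = 0.1) -> None:
        # Move in the direction the head is currently facing.
        self.x += distance * math.cos(math.radians(self.heading_deg))
        self.y += distance * math.sin(math.radians(self.heading_deg))


def classify_mouth_action(frame) -> str:
    """Placeholder for the real partially-occluded facial action classifier.

    Would return a label such as 'neutral', 'open', or 'smile' for the
    current camera frame of the user's lower face."""
    raise NotImplementedError


def update(avatar: Avatar, head_yaw_deg: float, frame, world) -> None:
    # Head rotation selects the movement direction.
    avatar.heading_deg = head_yaw_deg

    # Mouth gestures drive movement and the 'eat' action.
    action = classify_mouth_action(frame)
    if action == "open":        # e.g. an open mouth advances the avatar
        avatar.step_forward()
    elif action == "smile":     # smiling is the only way to eat a cake
        world.try_eat_cake(avatar.x, avatar.y)
```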

“We hope to make this applicable to more than one person, maybe two. Think Skype interviews and communication,” explained Yin. “Imagine if it felt like you were in the same geometric space, face to face, and the computer program can efficiently depict your facial expressions and replicate them so it looks real.”

Even though the new technology is still in its test phase, Yin believes it is applicable to a variety of fields.

“The virtual world isn’t only for entertainment. For instance, health care uses VR to help disabled patients,” explained Yin. “Medical professionals or even military personnel can go through training exercises that may not be possible to experience in real life. This technology allows the experience to be more realistic.”

Students Umur Aybars Ciftci and Xing Zhang contributed to this research.

The paper illustrating the technology, “Partially occluded facial action recognition and interaction in virtual reality applications,” was presented at the 2017 IEEE International Conference on Multimedia and Expo.

Written by: Pietro Paolo Frigenti

Journal Reference: Ciftci, U., Zhang, X. and Yin, L., 2017, July. Partially occluded facial action recognition and interaction in virtual reality applications. In 2017 IEEE International Conference on Multimedia and Expo (ICME), pp. 715-720. IEEE.
