Kinect used in sign-language recognition and translation

In a collaboration between Microsoft Research Asia and the Institute of Computing Technology at the Chinese Academy of Sciences (CAS), researchers tested how the Kinect's body-tracking features can be used for sign-language recognition, according to the official Inside Microsoft Research website.

According to the blog, initial results show that the technology built from Kinect for Windows can help those who use sign language as their primary language to better interact with computers, similar to the way speech recognition software does for spoken language.

The system features a Translation Mode that translates sign language into text or speech. It currently supports American Sign Language, with the capacity to be extended to other sign languages.

The system's Communications Mode allows a deaf or hard-of-hearing person to communicate with a hearing person through an avatar. The avatar acts out text typed on a keyboard, and when the deaf or hard-of-hearing person responds in sign language, the response is converted into text.

"From our point of view," CAS Professor Xilin Chen told the blog, "the most significant contribution is that the project demonstrates the possibility of sign-language recognition with readily available, low-cost 3-D and 2-D sensors."

The research is featured in the paper "Sign Language Recognition and Translation with Kinect," co-authored by CAS researchers Xiujuan Chai, Guang Li, Yushun Lin, Zhihao Xu, Yili Tang, and Chen, along with Ming Zhou, principal researcher at Microsoft Research Asia. The project was supported by Microsoft Research Connections.

You can watch the technology in action in the video above.
