Microsoft Works to Help Deaf Gamers

A team of researchers at Microsoft is working with colleagues in China to tap the Xbox 360 Kinect motion sensor’s body-tracking capabilities and build sign language recognition for the deaf and hard of hearing. This could be a significant step forward, giving deaf gamers an easier way to communicate. The Microsoft Research Asia team has published a paper, Sign Language Recognition and Translation with Kinect, co-authored with researchers at the Institute of Computing Technology at the Chinese Academy of Sciences (CAS).

The paper explains that sign language recognition has been researched for many years, but it has been difficult to achieve accurate results outside controlled conditions, where background and lighting can vary considerably and stop the technology from working correctly. In the past, special “data gloves” and webcams were needed to make recognition work, but according to the paper these are now considered too expensive to be a viable option. However, as the report explains: “Fortunately, Kinect is able to provide depth and colour data simultaneously, based on which the hand and body action can be tracked more accurate and easier. Therefore, 3D motion trajectory of each sign language vocabulary is aligned and matched between probe and gallery to get the recognised result.”

This could be very good news for gamers who are deaf or hard of hearing. The software could make communication in games much easier and has broader real-world applications as well. Because little investment goes into software that makes gaming more accessible for people with hearing difficulties, it is hoped that this step forward could make a real difference and encourage further innovation in the field. Further research and studies will look at what more can be done to help deaf people live a ‘normal’ life.
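The quoted passage says each sign’s 3D motion trajectory is “aligned and matched between probe and gallery”. One common way to align two trajectories recorded at different speeds is dynamic time warping (DTW); the minimal sketch below is purely illustrative and does not reproduce the paper’s actual algorithm (the function name and the plain Euclidean step cost are assumptions).

```python
import numpy as np

def dtw_distance(probe: np.ndarray, gallery: np.ndarray) -> float:
    """Align two hand trajectories (shape: frames x 3) and return their DTW cost.

    Illustrative only: a lower cost means the probe sign is a closer match
    to the gallery (reference) sign.
    """
    n, m = len(probe), len(gallery)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(probe[i - 1] - gallery[j - 1])  # Euclidean step cost
            cost[i, j] = d + min(cost[i - 1, j],      # skip a probe frame
                                 cost[i, j - 1],      # skip a gallery frame
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]
```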

But let’s take a look at how it is going to work. First, the 3D trajectory of the signer’s hands is captured using the Kinect for Windows SDK. The trajectory is then resampled, taking account of differences in hand speed, so the software can interpret what the signer is attempting to say. The result is scored and matched against the closest entry in a gallery of reference trajectories to determine its meaning. The researchers then use this 3D trajectory matching algorithm to build a sign language recognition and translation system with two primary functions: it can translate sign language into text or speech, and it can translate text into sign language through a computer-generated avatar. This would allow deaf gamers to converse with others using sign language (a rough sketch of these steps appears below).

Language accessibility for deaf and hard-of-hearing gamers is rarely thought about, and technologies like this are seldom given the platform they deserve. However, work with the potential to transform people’s lives in a meaningful way, not just make their smartphone apps easier to find, could help redress the balance. This is a very good thing for the gaming community: it makes games and gaming more accessible and could improve the lives of people who would like to be involved.
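To make the recognition steps described above more concrete (resample the captured trajectory to normalise for hand speed, then score it against a gallery), here is a minimal Python sketch. It assumes the probe trajectory has already been captured as an array of 3D points (for example via the Kinect for Windows SDK) and that the gallery is a simple word-to-trajectory dictionary; the fixed-length resampling and plain Euclidean scoring are illustrative assumptions, not the team’s published method.

```python
import numpy as np

def resample(traj: np.ndarray, n_points: int = 64) -> np.ndarray:
    """Resample a (frames x 3) trajectory to n_points spaced evenly by arc
    length, so fast and slow signers produce comparable sequences."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)      # per-frame step lengths
    cum = np.concatenate(([0.0], np.cumsum(seg)))            # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n_points)
    out = np.empty((n_points, 3))
    for k in range(3):                                        # interpolate x, y, z separately
        out[:, k] = np.interp(targets, cum, traj[:, k])
    return out

def recognise(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> str:
    """Score the probe against every gallery trajectory and return the
    best-matching vocabulary entry."""
    probe_r = resample(probe)
    scores = {word: float(np.linalg.norm(probe_r - resample(ref)))
              for word, ref in gallery.items()}
    return min(scores, key=scores.get)
```

In a real system the Euclidean score could be replaced by an alignment-based measure such as the DTW cost sketched earlier, and the recognised word could then be fed to text-to-speech or to the avatar for the reverse, text-to-sign direction.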
