Google’s Looking Into Sign Language Detection for Video Calls

If there is one thing this global pandemic has revealed, it is that new and better modes of communication are essential so that, no matter what happens, you can stay connected to the people you love. One aspect of video calls that we tend to take for granted is that when you start speaking, you get highlighted in the call. This helps reduce the clamor often associated with group calls and gives people who are trying to say something the spotlight they need to be heard.

There is a major problem with this, though: not everyone communicates through speech. Some people who are deaf, hard of hearing, or have trouble speaking use sign language, and until now video call algorithms had no way to detect it. The good news is that Google has begun researching ways to detect sign language during video calls, so that people who sign can be highlighted in the same way when they begin signing.

Google has published a paper describing how the algorithm distinguishes sign language motion from ordinary movement without compromising video quality. The approach works by first simplifying the figure of the person on camera into a stick-figure-like form using a pose estimation model known as PoseNet. That simplified figure is then analyzed, and if movements typical of sign language are detected, the person signing gets highlighted. This would be a significant step forward for video calls in general.
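To make the idea concrete, here is a minimal, hypothetical sketch of the general technique the article describes: reduce each video frame to pose keypoints, then flag sustained movement as possible signing. The function names, threshold, and window size are illustrative assumptions, not Google's actual model or parameters.

```python
# Hypothetical sketch: flag "active signing" from per-frame pose keypoints.
# Each frame is a list of (x, y) keypoints, as a PoseNet-style model might
# produce. The threshold and window values are illustrative only.

def movement_energy(prev, curr):
    """Mean per-keypoint displacement between two consecutive frames."""
    return sum(
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        for (x0, y0), (x1, y1) in zip(prev, curr)
    ) / len(curr)

def detect_signing(frames, threshold=0.02, window=3):
    """Return True once movement stays above threshold for `window`
    consecutive frame pairs, suggesting sustained motion like signing."""
    streak = 0
    for prev, curr in zip(frames, frames[1:]):
        if movement_energy(prev, curr) > threshold:
            streak += 1
            if streak >= window:
                return True
        else:
            streak = 0
    return False
```

In a real system the movement signal would feed a trained classifier rather than a fixed threshold, and the positive result would be used to mark the participant as the active speaker so the call highlights them.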
