This Security Researcher Found Flaws in Google's reCAPTCHA Technology by Using Google's Own Tools

Software developer and security researcher Nikolai Tschacher recently demonstrated flaws in Google's reCAPTCHA technology by feeding its audio challenges into speech recognition software, effectively breaking the security measure.

CAPTCHA, an unwieldy acronym (Completely Automated Public Turing test to tell Computers and Humans Apart) that essentially boils down to an automated Turing test, has been a mainstay of the internet since the late 1990s. Almost every internet user has, at some point or another, been stopped and asked whether they're a robot, to the amusement of some and the confusion of others. A Turing test is a series of questions posed to a subject in order to determine whether it is a human or a machine posing as one, and the technology is more relevant now than ever due to the high prevalence of bots online. The main difference between a classic Turing test and a CAPTCHA is that the former is judged by a human, while the latter is administered and graded by a computer.

reCAPTCHA is Google's version of CAPTCHA, and is widely regarded as the superior variant. While it offers more or less the same security measures as the original, reCAPTCHA also uses the answers generated by actual humans to feed machine learning. So, every time someone comes across the wonky-looking letters and gets them right, the system absorbs that new information and puts it to work, for instance in transcribing old books and newspapers printed in wildly different fonts and making the digitized text publicly available.

Despite all this, reCAPTCHA isn't infallible, and researcher Nikolai Tschacher is here to shine a light on the cracks. In an almost hilarious case of Google's own features being used against it, he fed the test's audio version, intended for visually impaired users, into the company's own speech-to-text API. The API returned the correct answer an estimated 97% of the time. This workaround, demonstrated on video, also works against the latest version of reCAPTCHA, v3.
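To illustrate the idea, here is a minimal sketch of the attack's core step in Python. It assumes the challenge audio has already been downloaded and is submitted to Google's Cloud Speech-to-Text REST API (`speech:recognize`); the payload shape follows the public v1 API, while the encoding and sample-rate values are assumptions, since the article does not specify them:

```python
import base64

def build_recognize_request(audio_bytes: bytes, language: str = "en-US") -> dict:
    """Wrap raw challenge audio in the JSON body the recognize endpoint expects.

    The REST API accepts inline audio as base64-encoded text. The encoding
    and sample rate below are assumptions for illustration.
    """
    return {
        "config": {
            "encoding": "MP3",          # assumed: reCAPTCHA serves audio as MP3
            "sampleRateHertz": 16000,   # assumed sample rate
            "languageCode": language,
        },
        "audio": {
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    }

def extract_transcript(response: dict) -> str:
    """Pull the top transcription out of a recognize response, if any."""
    results = response.get("results", [])
    if not results:
        return ""
    return results[0]["alternatives"][0]["transcript"]
```

In practice an attacker would POST this payload (with an API key) to `https://speech.googleapis.com/v1/speech:recognize` and type the returned transcript into the challenge's answer field; per Tschacher's findings, that transcript is correct roughly 97% of the time.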

Tschacher has, however, stated that such exploits cannot be used at a much larger scale, because Google rate-limits audio challenges and analyzes browser behavior to track down bots. Even so, the finding shows how badly automated Turing tests need to be updated.

Holes in the system are constantly being prodded at, and at the rate technology is developing worldwide, AI is getting smarter by the minute. While this may not be the rise of the machines, bots can be anywhere from a nuisance to genuinely harmful, and security measures against them, along with their continued development, remain important.
