Google Claims Its Play Store Has Become More Efficient At Dealing With Data-Siphoning Apps Via Machine Learning and Better Policy Enforcement

Google has recently shed some light on how its Play Store weeds out the more nefarious third-party applications on the platform, giving particular credit to machine learning and its role in the process.

Claiming that cybersecurity, in some cases, takes precedence over physical security may have been controversial a decade ago, but such statements are now very much part of everyday conversation (if still debatable). The virtual world has some form of access to the most sensitive personal information we hold dear: home addresses, IDs, credit and debit card numbers, social security numbers, and the list keeps going. With such sensitive information floating around in our smartphones and mobile devices, one of the biggest threats comes in the form of shady applications disguised as fun games and the like that ultimately try to siphon your personal data.

Naturally, with such a high level of risk associated with downloading any application, platforms such as Google Play are actively trying their best to ensure that the general userbase isn't hurt. Apple's solution to this problem was rather cut and dried: with its App Tracking Transparency feature, developers had to be open about the data they would collect from users and ask for explicit user consent while doing so; otherwise, they'd be booted off the App Store. While Google hasn't taken any such strict action, it too has developed defense mechanisms, be it verifying certain top-notch apps to signify their safety or, as we'll discuss, relying on machine learning and AI to isolate unsafe ones.

The tech giant claims that a mixture of machine learning and a more rigorous review system has led to the elimination of an estimated 962,000 potentially harmful applications from the store. Along with this, over 119,000 developers were banned for infractions that unfairly exploited user data for personal gain. While Google did not delve into the particulars of how the machine learning process worked, it's reasonable to surmise that the system was trained on the rules that count as infractions, along with labeled examples in the form of applications. And with every new shady application stomped down on, the technology learned and simply got better.
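Google hasn't published details of its model, but the learning-from-labeled-examples idea described above can be illustrated with a toy Naive Bayes classifier over requested permissions. Everything here, from the permission names to the training history, is a made-up sketch, not Google's actual system:

```python
# Illustrative sketch only: a tiny Naive Bayes classifier that learns,
# from labeled examples of past apps, which requested permissions
# correlate with policy-violating behavior.
from collections import defaultdict

def train(examples):
    """examples: list of (permission_set, is_bad) pairs.
    Returns per-class permission counts and class totals."""
    counts = {True: defaultdict(int), False: defaultdict(int)}
    totals = {True: 0, False: 0}
    for perms, bad in examples:
        totals[bad] += 1
        for p in perms:
            counts[bad][p] += 1
    return counts, totals

def is_suspicious(perms, counts, totals):
    """Compare Laplace-smoothed likelihoods under 'bad' and 'good' classes."""
    score = {}
    for cls in (True, False):
        p = totals[cls] / (totals[True] + totals[False])  # class prior
        for perm in perms:
            p *= (counts[cls][perm] + 1) / (totals[cls] + 2)
        score[cls] = p
    return score[True] > score[False]

# Hypothetical labeled history: permission sets and whether the app was pulled.
history = [
    ({"CONTACTS", "SMS", "LOCATION"}, True),
    ({"SMS", "CALL_LOG"}, True),
    ({"CAMERA"}, False),
    ({"LOCATION"}, False),
]
counts, totals = train(history)
print(is_suspicious({"SMS", "CONTACTS"}, counts, totals))  # → True
```

Each newly caught app becomes another labeled example, which is how a system like this "simply gets better" over time.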

Google Play has also started enforcing rules that limit developer access to sensitive information. For example, applications that asked for access to a user's location were booted off the platform unless reasonable justification was provided for said access.
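A review rule of that kind is easy to picture as a simple policy check: flag any sensitive permission the app requests without a declared justification. The field names, permission list, and app records below are all hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical policy check: flag sensitive permissions requested
# without a declared justification. Names are illustrative only.
SENSITIVE = {"ACCESS_FINE_LOCATION", "READ_SMS", "READ_CALL_LOG"}

def review(app):
    """Return the list of sensitive permissions lacking a justification."""
    return [p for p in app["permissions"]
            if p in SENSITIVE and p not in app.get("justifications", {})]

# A navigation app justifies its location access; a game does not.
nav_app = {
    "permissions": ["ACCESS_FINE_LOCATION", "INTERNET"],
    "justifications": {"ACCESS_FINE_LOCATION": "turn-by-turn navigation"},
}
game_app = {"permissions": ["ACCESS_FINE_LOCATION", "INTERNET"]}

print(review(nav_app))   # → []
print(review(game_app))  # → ['ACCESS_FINE_LOCATION']
```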
