Google Reveals Gemini Chatbot Stores Conversations Separately, Raises Security Questions

If you’re not a fan of sharing your data online without consent, then this next piece of news is for you.

A shocking new support document from Google has raised eyebrows by revealing how its Gemini chatbot app stores user data by default. This applies across the web, on iOS, and on Android alike.

"Conversations that have been reviewed or annotated by human reviewers (and related data like your language, device type, location info, or feedback) are not deleted when you delete your Gemini Apps activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to three years," Google reveals in a support page.

The company noted that human reviewers can read, process, and label conversations taking place across Gemini's apps, even though those conversations are disconnected from users' Google Accounts. This, Google says, is done to improve the service.

Moreover, it's not yet clear whether these annotators work in-house at Google or for outsourced contractors. And in case you're wondering, that distinction can make a huge difference when it comes to data security.

Now, the search engine giant says it retains quite a bit of control over which Gemini-related data is stored and how it's stored, which has raised serious concern among users and critics ever since the news broke.

So what’s the solution? How can one prevent any of this from arising?

According to Google, the only real solution is to switch Gemini Apps Activity off from the My Activity dashboard, as the setting is enabled by default. Doing so stops future conversations from being saved.

On the other hand, if you're more concerned about individual chats or prompts, don't worry: those can be deleted from the Gemini Apps Activity screen.

Google also mentioned that conversations with Gemini can still be stored even when Gemini Apps Activity is switched off. In that case, they are retained for 72 hours, which the company says is to keep the service safe and secure and to improve the apps.

This is why the Android maker is now warning users not to enter any sensitive or private data into chats that they would never want exposed, since conversations may be used for product and service improvements, not to mention machine learning training.

To be fair, the company's GenAI data collection and retention policies don't differ much from those of its rivals. OpenAI, for instance, saves ChatGPT conversations for 30 days regardless of whether the user has turned off the history feature. There are certainly some exceptions to this rule, including when the user subscribes to enterprise-level plans that entail custom data retention rules.

But right now, the policy illustrates the broader challenge of balancing privacy with GenAI models that develop as we speak and feed on user data to improve themselves.

As it is, we've already seen in the recent past how liberal GenAI data retention policies have cost vendors dearly, landing them in the hot seat with regulators.

One such example dates back to last summer, when detailed requests for information were directed at the likes of OpenAI over how the firm uses user data for AI model training purposes. And above all, consumer data was at the top of the list.

Nor is data necessarily safe once it is in the hands of third parties. We've seen OpenAI slammed in nations like Italy, where regulators argued the tech giant lacked a proper legal basis for collecting and storing user data to train its generative AI models.

As a whole, these tools continue to proliferate as we speak, and plenty of companies keep growing massively while the privacy risks linked to them remain.

A survey from Cisco puts numbers to the concern: 63% of firms have created restrictions on which types of data can be entered into generative AI tools, while 27% have banned generative AI altogether.

A similar survey found that close to 45% of workers had entered problematic data into various AI tools, including information about fellow employees and non-public files pertaining to their firm.

Photo: DIW - AIGen

Read next: Pinterest’s Latest Earnings Report Displays 12% Rise In Revenue As Monthly Users Reach New High