Over the last several days, Meta has come under renewed scrutiny for how intimately it tracks user behavior—especially following the launch of its AI chatbot and eye-opening revelations from a former employee.
Sarah Wynn-Williams, a former Meta executive turned author, testified before the U.S. Senate, alleging that the company internally used emotional signals from users, especially teens, to sharpen ad targeting. She described how Meta could identify states like hopelessness or low self-esteem and give advertisers access to that data. For instance, if a teenage girl deletes a photo, possibly a sign of low confidence, the algorithm might push beauty products or slimming teas in that very moment.
This kind of emotional microtargeting — particularly toward adolescents — raises major ethical concerns. It highlights a disturbing trend where tech firms commodify mental states for profit.
Simultaneously, Meta’s new AI chatbot has reignited privacy debates. Designed for personalized chats, the bot pulls data not just from messages but also from broader user activity across Facebook and Instagram. Everything typed into it sharpens its learning model. Analysts at The Washington Post have noted that this data collection goes well beyond what ChatGPT or Gemini currently gathers.
Though Meta’s practices have raised red flags before, the outrage after Cambridge Analytica gradually faded. As the dust settled, Meta capitalized on the public’s tendency to trade privacy for convenience—letting the wheels keep spinning.
Yet the depth of data Meta has collected is staggering. A 2015 study from Stanford and Cambridge universities demonstrated that Facebook "likes" alone could predict users' personality traits more precisely than even close friends or spouses.
The predictive power lies not in single actions but in the mosaic of choices. Following meme pages or liking pop stars may seem mundane, yet, en masse, such signals tell stories. They can hint at smoking habits, biases, or impulsivity, even without any explicit declaration.
Some digital footprints are clear, others subtle—but Meta’s algorithms can connect the dots with uncanny precision. Even though fewer young users are sharing personal content on Facebook, the chatbot delivers a new pipeline of high-quality data.
Available across apps, the chatbot spans countless topics. It encourages users to speak openly—offering Meta a rich supply of preferences, moods, and motivations to fuel its vast advertising engine.
Meta claims it refrains from storing harmful or sensitive chatbot inputs and gives users deletion options. But these controls demand initiative, and most users don’t actively manage what’s logged.
Despite existing privacy toggles, studies show most people leave defaults untouched. That inertia benefits Meta. Its Advantage+ ad platform, run by machine learning, thrives on a deep reservoir of behavioral data.
As Meta's AI grows more advanced, so does its capacity to intuit thoughts, desires, and intentions. Whether users scroll, post, or chat, they continue to feed the system.
In exchange for quick answers and smart replies, people give up ever-deeper pieces of themselves. And considering Meta’s track record, it’s worth asking—how much more should we allow them to learn?
Image: DIW-Aigen
Read next:
• Study Reveals When U.S. Residents Are Most Likely to Detach from Their Phones
• Game-Changing Digital Technologies to Watch by 2030