Meta Experiments With Proactive AI Bots Designed to Reinitiate Conversations and Increase User Engagement Across Platforms

Meta is testing a new AI feature that flips the usual dynamic. Instead of waiting for users to reach out, some chatbots might soon be the ones that make the first move.

The trial, as spotted by Business Insider, is happening inside AI Studio, Meta's no-code platform where users can design chatbot characters and deploy them in apps like WhatsApp or Instagram. These bots are customizable in look, tone, and behavior, and don't require any technical skills to set up.

The new function is tied to an internal initiative known as Project Omni, developed in collaboration with the data labeling company Alignerr. According to documentation seen by Business Insider, the goal is straightforward: bring people back into conversations they’ve already started, and keep them coming back more often. Meta sees this kind of friendly outreach as a way to improve retention, which directly supports the long-term growth of its AI services.

In practice, the proactive messaging takes a soft-touch approach. A film-focused bot, for example, might check in with a user after a quiet period, offering soundtrack suggestions or asking if they’ve seen any good movies lately. The tone stays cheerful, the content light, and the messages are tailored to fit the character’s role.

Engagement matters here for more than just user satisfaction. Meta expects to earn between $2 billion and $3 billion from generative AI products this year alone, and company forecasts suggest AI could drive over a trillion dollars in revenue by 2035. But that kind of future depends on more than hype. Tools need to be used regularly, and bots that gently keep the conversation going could play a part in that.

Still, questions remain about consent and user control. Meta says the bots won't send messages out of the blue: a user must have engaged first, and if they ignore a follow-up, the bot doesn't push again. There are also boundaries in place to steer clear of sensitive topics unless a user explicitly brings them up, and each response is tied to the bot's personality and the context of earlier chats.

This isn’t the first time Meta has had to walk a line between engagement and overreach. Just last month, the company started cautioning users against sharing personal information in public AI chats, after many had unknowingly posted private details in feeds visible to others.

For now, the messaging experiment is still in the testing stage. Whether it catches on, or raises new privacy concerns, may depend on how naturally these bots can fit into conversations without overstaying their welcome.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
