Meta’s New AI-Powered Chatbot Accused Of Being Obsessed With Politics And Acting Like A Homophobe

Meta has a new AI-powered chatbot, and while its interface is genuinely interesting, the bot has been accused of some bizarre behavior.

When we say bizarre, we mean an apparent obsession with politics and strong opinions on controversial matters such as religion, elections, and homosexuality.

The company brushed off the allegations a while back, claiming users simply needed to clear their cookies and browsing history to avoid such mishaps, but that explanation hasn't helped much.

Last week, users based in the US were given access to a public demo of BlenderBot 3. Many were curious to see how the trial would go, as Meta was looking for feedback it could use to improve the product.

Quite a few users chose to share their experiences online. Some interactions were intriguing, others merely interesting, and then there were those that were just plain bizarre.

Many described their exchanges with the product as bewildering in ways that went well beyond their expectations.

Remember, when you steer a conversation toward controversial topics, the chatbot gives you a heads-up and issues trigger warnings beforehand.

It informs you that the territory you're about to enter isn't safe and suggests steering the chat in a different direction to avoid disappointment. That, at least, is what the company says it's programmed to do. Unfortunately, the experiences people actually had with it weren't quite like this.

One tech investor, Allie K. Miller, expressed deep disappointment with her experience. She was shocked at how the bot kept steering conversations back toward politics and spreading misinformation.

But that was just the beginning. A Wall Street Journal reporter couldn't believe the kind of experience he was having either.

Jeff Horwitz showed that Meta's AI-driven creation appeared to endorse a conspiracy theory it claimed to have discovered. Another exchange saw the bot characterize Jewish people as America's rich and famous.

Horwitz shared his exchanges with the world through his Twitter account, including a detailed account of the chatbot's views on Trump.

On some occasions the bot appeared to be a Trump admirer, while other users reported that it behaved in a distinctly anti-Trump manner. Put simply, it held conflicting views on the former US President that left many confused.

On a few occasions it repeated election-denial claims, insisting Trump was still the country's leader, while in other instances it expressed a strong dislike for the former commander in chief.

Other users saw the chatbot confess to being homophobic, and still others reported that it expressed strong opinions on both race and religion.

Meta says it had already warned users that the chatbot could be offensive in its language, and that this can happen despite its built-in safeguards.


Read next: Facebook Highlighted For Having The Most Difficult Terms Of Service That Only College Graduates Can Comprehend
