New Warning Issued By The FTC Speaks Of Zero Tolerance Against AI Scams By Tech Firms

The FTC is in no mood to joke around with tech giants whose AI-based shenanigans end up harming consumers.

The regulatory body recently issued a bold new warning emphasizing how hard it is working to stamp out the fraud, shady business practices, and deception that continue to rise as we speak.

The US agency was established in 1914, and since then it has worked to keep companies in check and ensure their actions cause minimal harm to consumers. Right now, its major concerns center on the advancing world of AI and how it negatively affects American citizens on a daily basis.

It feels like it’s time to find and punish brands that engage in practices designed to do nothing but harm, as well as those that hijack reviews on Amazon, steering people toward products that fall far short of what was promised. And now, with DALL-E 2 and ChatGPT on the rise, the agency worries these tools could give rise to the kind of unfairness it finds unacceptable.

Under the FTC Act, the agency says a practice is unfair if it causes substantial injury to consumers, if that injury is one consumers cannot reasonably avoid, and if it is not outweighed by the benefits it brings to consumers or to competition in the market.

The FTC knows how chatbots including ChatGPT, Bard, Bing, and others can play with people’s emotions and steer them into decisions that leave them worse off. Today, we’re seeing these tools deployed as supply-chain negotiators at the American retail giant Walmart and even as talk therapists, roles designed to influence the people they interact with.

And when that’s combined with automation bias, the tendency of users to more readily accept the output of a seemingly impartial AI system, some individuals might end up believing they’re chatting with someone who truly understands them.

But the problems linked to AI technology extend well beyond the scope of the FTC’s immediate review. Still, seeing the FTC jump right in is proof that it won’t let anyone fool consumers or manipulate them into decisions that harm them.

In recent times, a lot of cases have come into the spotlight involving finance-related offers, in-game purchases, and obstacles to canceling services.

The emergence of these guardrails comes as ads begin appearing inside generative AI apps, much like the ads Google places in its search results. Consumers need to know when an AI-generated response about a product or service is nudging their purchasing decision in a particular direction because of a commercial relationship.

Most importantly, every person has the right to know whether they are speaking to a real person or just a machine.

Lastly, the FTC warned the tech world about how it needs to use AI-based tools responsibly, and this is not the first time it has done so.

The agency feels this might not be the best time for organizations to fire the employees heading their ethics and responsibility departments, given how plentiful the risks of AI technology are. After all, who will keep these non-stop efforts in check?


Read next: New Data Leak At Samsung Forces Company To Ban All Employees From Using AI Tools