The Rise Of AI-Powered Chatbots Prompts Google’s Management To Issue A Code Red

AI-powered chatbots are taking the tech world by storm, and Google recently issued a code red in response to the release of ChatGPT.

The buzzy conversational AI chatbot from OpenAI has sparked serious concerns about the future of Google’s search engine, as reported by The New York Times.

Google’s CEO has taken part in several meetings about the company’s AI strategy to address the threat ChatGPT poses to the search engine’s business, according to an internal memo and an audio recording reviewed by the Times.

Teams within Google Research, the Trust and Safety division, and several other departments have been directed to switch gears and help develop and launch a range of AI prototypes and products. Some workers have been asked to build AI products that generate art and graphics, similar to OpenAI’s DALL-E, which is used by millions, the Times reported.

While Google has yet to comment directly, the company’s drive to build a robust AI product portfolio comes at a time when Google employees and industry experts are debating whether ChatGPT, made by OpenAI under its chief, an ex-Y Combinator president, could displace search engines and, in turn, undermine Google’s advertising-based business model.

Such chatbots pose a threat because they could keep users from clicking Google links that carry ads. That advertising generated $208 billion in revenue in 2021, roughly 80% of Alphabet’s total for the year.

ChatGPT amassed a little over one million users within about five days of its launch. It writes in a remarkably human-like conversational tone and draws on information from millions of web pages. Users have turned to the chatbot for help writing college-level essays, for coding advice, and even for a form of therapy.

But the bot is prone to serious errors. It cannot fact-check what it says and is unable to distinguish verified facts from misinformation, AI experts explain. It can also produce fabricated answers that researchers call hallucinations.
This bot can generate responses that people regard as offensive or even racist. That margin of error, along with its vulnerability to toxicity, is a key reason Google has been hesitant to bring its own chatbot, LaMDA, to market.

Google’s head of AI has said chatbots are not yet reliable enough to be used with confidence. For this reason, the company is focused on gradually improving its search engine over time rather than replacing it outright.
