Google In The Hot Seat After Viral Video Showcasing Gemini’s AI Capabilities Is Accused Of Misleading Viewers

Search engine giant Google was riding high after the launch of Gemini, its new large language model, just a few days ago.

The grand rollout generated plenty of buzz, and to give the public a better sense of what the model can do, the company released a marketing video showcasing Gemini’s AI capabilities.

But experts soon claimed the video was too good to be true, accusing the Android maker of misleading viewers to drum up hype for the launch. After all, a demo that promising is bound to draw attention.

The video drew an overwhelming response and quickly went viral, racking up nearly 1.6 million views on YouTube. Many were amazed by how Gemini appeared to generate responses in real time, giving the user exactly what they were looking for.

Simple spoken and video-based prompts were notable too, but many doubted the model was really capable of all that. The production was polished and attractive, with all the visual and sound effects needed to captivate an audience.

In the video’s description, Google acknowledged that it had made edits to help viewers understand what Gemini was capable of, including admitting that it sped up the responses for the sake of the demonstration.

But Google has also admitted that the demo did not respond to live voice and video at all. In a blog post published around the time of the demo, the firm shared more details about how the video was made, after being criticized for overpromising to the public.

In a recent interview with a media outlet, Google said the video was created by prompting the AI with still images taken from the footage, along with text prompts.

Describing it as a hands-on demonstration of an ideal use case, the company maintained that the prompts and outputs in the video were real, intended to inspire developers while showing others how versatile and resourceful Gemini could be.

For those yet to see it, the video above is worth a look. In it, a presenter runs through a long list of queries to the chatbot while showing objects on screen. At one point, the AI is unsure of what is on display, but once the object makes a sound, it accurately identifies it.

The issue is that what happened in reality differs from what the video shows about how the prompts were generated. In practice, the AI was shown still pictures of a duck and asked about its material, and it was also fed a text prompt noting that the duck made a squeaking noise; that hint is what led to the correct identification.

The crux of the matter, as one expert put it, is that if you intentionally help the AI model, that needs to be disclosed; otherwise, you lead people to expect more from Gemini than it can deliver. That criticism put Google in the hot seat and forced it to issue a clarification.

Many other examples in the video worked the same way, with Google nudging its AI toward the right response using clues the video never showed. Without those clues, the model appears far less capable, or at least that is how it looks.

Expectations remain high, and Google’s handling of the demo has left plenty of people unsure of what Gemini can and cannot do. Notably, the video arrived just after the AI world was thrown into chaos by OpenAI’s ouster of its own CEO.

Time will tell whether Google can deliver on its AI promises; until then, we’ll wait and watch the LLM in action.
