Meta Challenges Nvidia's AI Vision: LeCun Questions Text-Based AI, Touts Transformer Models for Versatility

Tech giant Meta recently held a media event in San Francisco to celebrate the tenth anniversary of its Fundamental AI Research (FAIR) team.

There, the firm's chief AI scientist and deep learning pioneer spoke about the state of AI development and shared his skepticism about quantum computing.

Yann LeCun did not shy away from putting his point of view out there, even where it differed sharply from the views of other leading figures in the tech world.

LeCun's assessment of where AI stands today, which hinges on whether machines can acquire common sense, contrasts deeply with the picture painted by Nvidia.

Jensen Huang, Nvidia's CEO, was recently quoted as saying that AI will become fairly competitive with humans within roughly five years, besting people at a wide range of mentally intensive tasks.

But LeCun countered during the event that the head of Nvidia has a lot to gain from the craze surrounding AI. He went so far as to call Huang out for fueling the AI war by providing the necessary supplies.

He elaborated on this point: as technologists work toward creating artificial general intelligence, or AGI, they need ever more computer chips, chips that Nvidia supplies.

LeCun further argued that society is more likely to get AI at the dog or cat level years before it reaches the human level. Moreover, the tech world's current focus on LLMs and text data is, in his opinion, not enough to produce the kind of advanced human-level systems that could threaten humans. It is a dream AI researchers have chased for decades, but this scientist is not certain it will arrive anytime soon.

Calling text a very poor source of information, he explained that it would take a human roughly 20,000 years of reading to get through the volume of text used to train a modern language model.

This is why Meta's AI executives, LeCun among them, are working hard to figure out how the transformer models used to build apps like ChatGPT could be adapted to handle a wider range of data, including pictures, videos, and audio. The more such AI systems discover the billions of hidden correlations across these data types, the better they could perform and the further their reasoning could go.

Much of Meta's research involves software designed to help people with tasks like playing tennis while wearing AR glasses from Project Aria, which blend digital graphics with the real world in a seamless manner. In a demo, executives showed how someone wearing the AR glasses could see visual cues on how best to play the game.

As for the biggest beneficiary of the generative AI boom, Nvidia's pricey graphics processors have become the standard tools for training LLMs. Today, Meta relies on nearly 16,000 A100 GPUs to train its Llama AI software.

Asked by media outlets whether the tech world will need more hardware providers as researchers continue building such intricately designed AI models, LeCun said it would be nice, but for now the GPU remains the gold standard for AI.
