The AI community continues to run on the assumption that bigger models are better by design, but according to Meta's top AI researcher, that isn't the truth.
Meta's AI chief is challenging the common assumption that larger-scale models, built on bigger datasets, more parameters, and more computing power, deliver better performance. That idea has held sway for a long time and has driven huge investment in model infrastructure.
Now cracks are showing in that theory. Yann LeCun didn't hold back in his criticism, challenging the concept while delivering a speech at the National University of Singapore. According to him, more data and more compute do not make AI systems smarter.
He further argued that success on smaller, simpler problems gives false hope that bigger models will be more intelligent, and that this takes researchers in the field down the wrong path.
Many AI breakthroughs are plateauing as the supply of high-quality training data runs short. Even the largest models have yet to approach anything near human intelligence. Critics say scaling gave rise to a simplistic approach: the assumption that the harder you push AI, the better it will become.
As for the next generation of AI, LeCun believes systems will need to do far more than predict text and ingest enormous amounts of data. Instead, they would learn new tasks faster and understand their physical surroundings, not just text, giving rise to common sense.
The world-model approach to designing AI can predict how the real world changes in response to different actions, a huge improvement over the software in use today, he continued. The researcher has previously argued that real innovation will come from machines that don't merely respond to data but also grasp cause and effect in dynamic settings.
His claims make a lot of sense, and they echo his long-held refusal to believe that AI can replace humans entirely.