New Chinese AI Outperforms Meta LLM With Half the Parameters

It seems like every major firm out there is trying to create its own large language model (LLM), ostensibly in an attempt to compete with ChatGPT after it took the artificial intelligence (AI) market by storm in late 2022. Meta’s Llama 2 is just one example of an LLM created by a tech giant, and its largest version uses around 70 billion parameters to get the job done. Another major player in this race is Falcon, created by the Technology Innovation Institute in Abu Dhabi, which utilizes a whopping 180 billion parameters.

However, it turns out that a Chinese startup called 01.AI has a new LLM that uses just 34 billion parameters and outperforms both of the aforementioned models by a large margin. The LLM, dubbed Yi-34B, is the brainchild of noted AI veteran Kai-Fu Lee, and it might represent the single biggest threat ChatGPT has faced so far.

When checked against various benchmarks, Yi-34B put in an impressive performance despite the far lower number of parameters it was working with. Its common-sense reasoning score was 80.1 compared to Llama 2’s 71.9, and it scored 76.4 for reading comprehension compared to Llama 2’s 69.4. The Massive Multitask Language Understanding (MMLU) benchmark was yet another area where Yi-34B surged past its rivals, with a score of 76.4. By comparison, Falcon reached a score of 70.4, whereas Llama 2 was only able to get 68.9.
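To put those reported figures side by side, here is a quick sketch that tabulates the scores quoted above and computes each winning margin (only the numbers given in this article are included; entries the article does not quote are simply omitted):

```python
# Benchmark scores as reported above; missing model/benchmark pairs
# were not quoted in the article and are left out.
scores = {
    "common-sense reasoning": {"Yi-34B": 80.1, "Llama 2": 71.9},
    "reading comprehension": {"Yi-34B": 76.4, "Llama 2": 69.4},
    "MMLU": {"Yi-34B": 76.4, "Falcon": 70.4, "Llama 2": 68.9},
}

for benchmark, results in scores.items():
    leader = max(results, key=results.get)
    runner_up = max(v for model, v in results.items() if model != leader)
    margin = results[leader] - runner_up
    print(f"{benchmark}: {leader} leads by {margin:.1f} points")
```

On these numbers, Yi-34B leads by 8.2 points on common-sense reasoning, 7.0 on reading comprehension, and 6.0 on MMLU.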

When Lee founded his company earlier this year, he pledged to create an LLM that would aid humans in a variety of tasks, boost productivity and spark a series of paradigm shifts in the global economy.

It appears that Yi-34B actually does have the potential to reach its creator’s lofty goals, and it is paired with a second LLM that uses just 6 billion parameters; the smaller size limits that model to an extent, but it still provides surprisingly robust performance. Funded by Sinovation Ventures and Alibaba, the former of which is chaired by Lee himself, this new AI company looks to be a game changer in an industry that is rapidly becoming saturated with new players.

Meanwhile, Meta will be taking a long hard look at itself, considering that a startup that likely had far less funding was able to surpass Llama 2 so resoundingly. Yi-34B offers support not just in English but in Chinese as well, which further drives home the model’s superiority over so many others that are vying for investor dollars and consumer hype.

Yi-34B is not the only East Asian contender. HyperCLOVA X from Naver has mined 6,500 times more data from Korea than ChatGPT, giving it an edge in terms of localized performance. The LLM is able to comprehend Korean customs and colloquialisms, proving that even if ChatGPT continues to hold onto the top spot with an iron fist, smaller competitors will always be able to find a way to survive. In the case of Yi-34B, the competitors may not even be that small.
