Meta Introduces LLaMA, A New AI Language Model Similar To Those Behind Today’s Chatbots

The race to build the industry’s best chatbot is heating up as Facebook’s parent company makes a surprise announcement.

The company has released its own AI language model, called LLaMA, which works in much the same way as the viral chatbots leading the field today. Moreover, the firm claims it can even outperform larger rivals such as OpenAI’s GPT-3, the model family behind ChatGPT.

Like a typical chatbot, the tool can generate human-like text in conversation. At the same time, Meta says it runs more efficiently than the leading models and demands far less hardware.

One of the tech giant’s researchers noted that LLaMA can be roughly ten times smaller than competing models yet still outperform them on many benchmarks. Despite these claims, however, the model is for now being released only to researchers.

Meta wants to gather feedback from those researchers, including constructive criticism, before considering a wider release that would put the technology in the hands of the general public.

Meta also hopes to reduce the biases and factual errors that are common in tools like ChatGPT. And because smaller versions of LLaMA can run on a single GPU, the model could broaden access to large language models.
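To give a sense of what single-GPU access means in practice, here is a minimal sketch (not Meta’s own code) of loading a small language model checkpoint in half precision on one GPU using the Hugging Face transformers library. The checkpoint path is a placeholder, since the official LLaMA weights are distributed to researchers on request.

```python
# Minimal sketch: run a small LLaMA-class checkpoint on a single GPU.
# "path/to/llama-7b" is a placeholder for locally available, converted weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-7b"  # placeholder, not an official identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision keeps a ~7B model within one GPU's memory
).to("cuda")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```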

So far, the effort looks promising. But it is worth remembering that Meta’s past track record with chatbots has not been great.

Its earlier efforts failed to generate the kind of excitement now surrounding ChatGPT and Microsoft’s new AI-powered Bing search. ChatGPT has reached nearly 100 million users, and its growth shows no sign of slowing.

That history may be one reason Meta is taking its time before a public launch, to make sure the model is as polished as possible.

Facebook’s parent firm also argues that access to large language models remains limited: they require substantial server infrastructure to run, and restricted access makes it harder for researchers to study how and why the models work. The result, the company says, is a roadblock to making them more robust and less prone to producing misinformation.

Meta detailed that its AI language model comes in four variants of different sizes, ranging from 7 billion to 65 billion parameters. That is much smaller than the model behind the currently popular ChatGPT.

Meta claims LLaMA can outperform much bigger language models because it is trained on more data, measured in chunks of text it calls tokens.

So far, the models have been trained on more than a trillion tokens of text drawn from the web.
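For readers unfamiliar with the term, here is a short, hedged sketch of what tokens are: a tokenizer splits raw text into subword pieces, and the model is trained to predict the next piece. The example below uses the Hugging Face transformers library rather than Meta’s release code, and the tokenizer path is a placeholder.

```python
# Minimal sketch: how text becomes "tokens" that a language model trains on.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b")  # placeholder path

text = "Meta's LLaMA was trained on text from the web."
pieces = tokenizer.tokenize(text)  # human-readable subword pieces
ids = tokenizer.encode(text)       # integer IDs the model actually consumes

print(len(pieces), pieces)
print(len(ids), ids)
```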

