Alibaba Launches New Series of State of the Art Open-Source AI LLMs Called Qwen3

It looks like Chinese e-commerce giant Alibaba is once again making waves in the AI world.

The company just launched a new series of state-of-the-art Large Language Models (LLMs) called Qwen3. The lineup appears to lead the pack among open models, delivering performance that holds up against archrivals Google and OpenAI.

The series comprises eight new models: mixture-of-experts (MoE) models alongside six dense variants. In an MoE design, multiple specialist sub-networks are integrated into a single model, and an internal router activates only the experts relevant to a given task, keeping the rest idle.
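The routing idea described above can be sketched in a few lines of Python. This toy gate (scalar experts and hand-picked gate scores, both hypothetical) is purely illustrative and is not Alibaba's implementation:

```python
import math

def moe_forward(x, gate_scores, experts, k=2):
    """Mix the outputs of only the top-k experts, weighted by softmaxed gate scores."""
    # indices of the k highest-scoring experts; the others never run
    top = sorted(range(len(experts)), key=gate_scores.__getitem__)[-k:]
    exps = [math.exp(gate_scores[i]) for i in top]
    total = sum(exps)
    return sum((w / total) * experts[i](x) for w, i in zip(exps, top))

# 8 toy "experts"; the gate decides which 2 actually fire for this input
experts = [lambda v, c=c: c * v for c in range(8)]
gate_scores = [0.1, 2.0, 0.3, 1.5, 0.0, 0.2, 0.4, 0.1]  # pretend router output
y = moe_forward(3.0, gate_scores, experts)
```

Here experts 1 and 3 carry the highest gate scores, so only those two contribute to the output; the other six cost nothing. Real MoE layers do this per token with learned routers, but the activation pattern is the same.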

As per the team, the flagship 235-billion-parameter model is evaluated on important third-party benchmarks such as ArenaHard and earns high marks in difficult domains such as software engineering and maths. It is even said to compete in the same league as Google's Gemini 2.5 Pro.

As a whole, the benchmark data positions Qwen3 among the most powerful openly available models, attaining clear superiority in some areas compared to what the rest of the industry offers. The models are also trained with hybrid reasoning capabilities, which let users toggle between quick, direct replies and slower, more deliberate responses for complex problems. A similar approach was previously pioneered by the research collective Nous Research.

Through Qwen3, users can engage a more intensive thinking mode via a toggle button in Qwen Chat, through special tags placed in the prompt itself, or through the API, depending on how complex the task is.
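Qwen's documentation describes soft switches, /think and /no_think, appended to a prompt to flip the mode per request. The tiny dispatcher below is a sketch of how a client might honor such tags; the function name and logic are illustrative assumptions, not Alibaba's code:

```python
def resolve_thinking(prompt, default=True):
    """Strip a trailing Qwen-style soft switch and return (clean_prompt, thinking_on)."""
    text = prompt.rstrip()
    if text.endswith("/no_think"):
        return text[: -len("/no_think")].rstrip(), False
    if text.endswith("/think"):
        return text[: -len("/think")].rstrip(), True
    return text, default  # no switch present: fall back to the session default

clean, thinking = resolve_thinking("Prove this identity step by step /think")
```

A client would then pass the cleaned prompt to the model with the chosen reasoning mode, paying the extra latency of thinking mode only when the task warrants it.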

Users can access and deploy the models across various platforms, and can interact with them directly through Qwen's web chat interface and mobile apps. Many were impressed on first use: given a decent prompt, the chat could produce images with great accuracy and speed, and its ability to seamlessly blend text with images was an eye-opener to some.

One disadvantage noticed so far involves restrictions linked to Chinese content, along with frequent prompts to log in, whether users like it or not. As far as model training is concerned, Qwen3 stands a notch above its predecessor, Qwen2.5. This likely owes to its pretraining dataset being nearly double the size, at roughly 36 trillion tokens.

Such models are proof that competition in the AI race continues to heat up, as providers strive to offer models that are both powerful and accessible. The AI landscape keeps evolving, and this new release from Alibaba marks another major milestone for open LLMs.
