OpenAI’s Latest GPT-4 Model Called Out For Degraded Performance And Weak Logic

The makers of ChatGPT are in the hot seat as industry experts are not happy with the latest GPT-4 offering.

OpenAI has drawn criticism across the board for the popular AI model being weak, lazy, and dumb, even as the company touts its faster speed. A major redesign may be the real cause of the degraded performance.

Many users are also complaining that the model simply isn't up to the mark, and that compared to its past performance, the results are embarrassing, to say the least.

Users expressed extreme frustration on Twitter, noting that the model was producing incorrect responses and appeared to lose track of the data being fed into it.

In the same way, it was called out for losing track of context mid-conversation, while outright failures to follow instructions raised more eyebrows. It even forgot to include brackets when generating basic software code. In many cases, all it seemed to be doing was parroting recently produced prompts.

Right now, GPT-4 is extremely disappointing, explained one developer, adding that he simply couldn't rely on it to add code functions to his pages. He compared the experience to a luxury car being turned into an old beater, and said he would certainly never pay for a service like this.

Another was quick to add that while outputs were being produced at a swift pace, they were useless because the quality was going down the drain. Even simple queries, such as asking for help with clear, straightforward writing or for brainstorming ideas, were proving to be a struggle. And if the quality isn't up to par, then what's the point?

A lot of people are noticing this and feel the model has become downright lazy, which is shocking considering the major success and popularity AI has attained in such a short period of time.

Other experts pointed out that the model was looping through outputs in a haphazard manner, with data running in circles. One even went as far as calling it braindead and useless, so if you plan to rely on it, be aware that it now comes across as dumber than before.

So it seems to have gone from slow and expensive to faster but highly inaccurate and unreliable. Yikes, this is not what OpenAI would ever wish to hear.

There was a lot riding on GPT-4, and plenty of people anticipated it. Clearly, this latest version has fallen short of what earlier releases offered.

It was back in March of this year that the larger GPT-4 rolled out, and developers, along with others in the tech sector, saw it as gold. Its multimodal functionality enables it to comprehend not only text but also images far better than before.

But for what it's worth, models like this won't keep developers' affection for long if the quality keeps slipping.

