ChatGPT Gives Better Results When Used Methodically, New Study Finds

The process by which ChatGPT sorts through data and provides answers is modelled on the human brain's own modes of thinking, so it stands to reason that some of the same rules apply. According to Nobel Prize-winning psychologist Daniel Kahneman, there are two basic ways in which the human brain can arrive at a solution to a problem, namely fast thinking and slow thinking.

Over the years, these two styles came to be known as System 1 and System 2. The first involves quickly arriving at a solution through intuition, whereas the second is slow and deliberate, taking more time but producing more accurate responses.

While ChatGPT and other Large Language Models like it are impressive for the speed with which they can provide answers, it turns out that slowing them down can make them even more intelligent. A study recently published in Nature Computational Science found that ChatGPT can surpass human performance on reasoning tasks when it slows down.

The study was conducted by Michal Kosinski, who teaches organizational behavior at Stanford's Graduate School of Business. As LLMs continue to evolve, they might be able to slow their thinking processes down, thereby reducing many of the inaccuracies that are currently holding them back. This capability sheds light on where these LLMs may go in the future, prioritizing accuracy over speedy responses.

In the study, Kosinski partnered with Thilo Hagendorff, a philosopher, and Sarah Fabi, a noted psychologist. They took ten generations of GPT and gave them tasks specifically designed to nudge them toward System 1 thinking. The purpose of the study was to determine whether quick thinking led to cognitive biases, similar to those seen when humans rush their way through a task.
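
To make the fast-versus-slow distinction concrete, here is a minimal sketch (not taken from the study itself) of how one might pose a classic cognitive-reflection question to a model under a "fast" framing and a "slow", step-by-step framing. It assumes the OpenAI Python client; the model name and prompts are illustrative placeholders.

# Minimal sketch: probing "fast" vs. "slow" answers with a classic
# cognitive-reflection question. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(system_prompt: str) -> str:
    """Send the question with a given system prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# "Fast" framing: demand an immediate answer, inviting the intuitive (wrong) $0.10.
fast = ask("Answer with the first number that comes to mind, nothing else.")

# "Slow" framing: ask the model to reason step by step before answering.
slow = ask("Work through the problem step by step, then state the final answer.")

print("Fast answer:", fast)
print("Slow answer:", slow)

The intuitive answer, $0.10, is wrong (the ball costs $0.05), which is exactly the kind of System 1 slip the study's tasks were designed to provoke.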

Earlier models such as GPT-1 and GPT-2 struggled to comprehend complex ideas and always opted for System 1 thinking as a result. More advanced versions of GPT, by contrast, were successfully able to break concepts down, and even appeared to consider responses carefully before providing them.

The tests were meant to gauge their reasoning skills rather than pure intuition, and as the models grow increasingly advanced, they may be able to use System 2 thinking to greater effect. It will be interesting to see what impact methodical thinking has on GPT in the future.

