New Technique Lets LLMs Disregard Irrelevant Information

If one thing is holding large language models back despite the massive advancements they have made over the past year, it is their limited reasoning skills. Problem solving is widely seen as an LLM's Achilles heel, a weakness that could prevent the technology from reaching its full potential. However, a new technique has emerged that could make their reasoning markedly more refined.

Meta has just announced System 2 Attention, or S2A for short. Drawing on insights from psychology research, it rewrites user prompts to make them more concise, stripping out irrelevant information and thereby improving the LLM's ability to reason and arrive at more precise answers.
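The idea can be sketched as a two-step pipeline: first ask a model to regenerate the prompt with irrelevant or opinionated content removed, then answer using only that regenerated prompt. The sketch below is a minimal illustration, not Meta's implementation; `call_llm` stands in for a real model call and is stubbed here with a trivial rule so the example runs on its own.

```python
# Minimal sketch of the two-step System 2 Attention (S2A) idea.
# Assumption: `call_llm` is a hypothetical stand-in for a real LLM API call.

S2A_REWRITE_PROMPT = (
    "Rewrite the following text, keeping only the parts relevant to the "
    "question being asked. Remove opinions and unrelated context:\n\n{query}"
)

def s2a_answer(query, call_llm):
    """Step 1: regenerate the prompt without irrelevant context.
    Step 2: answer using only the regenerated prompt."""
    cleaned = call_llm(S2A_REWRITE_PROMPT.format(query=query))
    return call_llm(f"Answer concisely:\n\n{cleaned}")

def toy_llm(prompt):
    """Toy stand-in for a model: drops opinionated sentences when asked to
    rewrite, and otherwise just echoes the question it was given."""
    body = prompt.split("\n\n", 1)[1]
    if prompt.startswith("Rewrite"):
        kept = [s for s in body.split(". ") if "I think" not in s]
        return ". ".join(kept)
    return body

query = "I think the answer is Paris. What is the capital of France?"
print(s2a_answer(query, toy_llm))  # the biased "I think" sentence is gone
```

With a real model in place of `toy_llm`, the second call would see only the distilled question, which is the point of the technique: the answering step never attends to the distracting context.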

The experiments conducted using S2A have yielded positive results. One major issue it addresses is that LLMs often simply echo the sentiments of the users themselves. Models need to provide accurate answers regardless of the opinions embedded in a prompt, and S2A appears to be an important step in the right direction in this regard.

LLMs have a natural ability to derive context from the queries they receive. That same language-reasoning capability can be repurposed to strip unnecessary information from a prompt, making the answers as specific as possible, something that could go a long way towards taking LLMs to the next stage of their evolution.

The name of this technique is a nod to the psychological framework that distinguishes System 1 and System 2 modes of thinking. System 1 covers kneejerk responses that are rapid and intuitive, whereas System 2 involves careful, deliberate consideration and a more analytical approach.

S2A can refine user queries and vastly improve efficiency in LLMs. This could be the very thing that allows Meta to surpass some of its rivals in the AI race that is taking the world by storm.

