For as long as people have been working with computers, there has been a constant search for better ways to communicate with them. In the earliest days, machines understood only strict, coded instructions; users had to speak the computer's language. Over time, the gap narrowed as programming languages became more user-friendly, graphical interfaces appeared, and search engines began responding to everyday phrases. Now, with large language models, we've reached another turning point. And as the technology has developed, so has the way we think about how to guide it.
When the idea of "prompt engineering" first took hold, it came from a simple observation: the way you ask a question, or frame a task, has a direct impact on how well an AI system can respond. At first, the focus was on crafting smart prompts, the kind that would help a language model stay on track, answer more precisely, or complete complex instructions. The term made sense at the time. People were essentially feeding the AI short snippets of text to steer it.
But as the field has grown, and as language models have expanded their capabilities, it has become clearer that this is not just about writing clever prompts. What actually happens inside these systems depends heavily on what information they are exposed to while working through a task. The challenge is no longer about simply phrasing a request in a certain way. It is now about shaping the entire set of information that the model sees at any given moment.
This is why many experts are now leaning towards a more fitting term: context engineering.
Rather than focusing on the short instruction that users might type into a chatbot, context engineering refers to the broader skill of managing the entire environment in which the model operates. It is about selecting, structuring, and balancing the right mix of examples, background details, and supporting information that surrounds the request. This might include not only the direct instructions, but also the task history, relevant documents, previous outputs, and carefully chosen reference materials. In more advanced systems, it can involve integrating outside tools, databases, and even visual content. Getting this mix right takes both technical skill and a sense of judgment.
As Andrej Karpathy (@karpathy) put it in a June 25, 2025 post: "+1 for 'context engineering' over 'prompt engineering'. People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window…" (https://t.co/Ne65F6vFcf)
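Karpathy's image of "filling the context window" is easy to picture in code. The sketch below is a minimal, hypothetical illustration, not any real framework's API, of how the pieces described above (instructions, examples, documents, and history) might be assembled into a single context for one step of a task:

```python
# Hypothetical sketch of context assembly. The function and section
# names are illustrative only, not part of any real framework.

def build_context(instructions, examples, documents, history, request):
    """Assemble everything the model will see for one step of a task."""
    sections = [("Instructions", instructions)]
    if examples:
        # Few-shot demonstrations of the desired behavior.
        sections.append(("Examples", "\n---\n".join(examples)))
    if documents:
        # Retrieved reference material relevant to the current request.
        sections.append(("Reference material", "\n---\n".join(documents)))
    if history:
        # Earlier turns and outputs, so the model keeps the thread.
        sections.append(("Conversation so far", "\n".join(history)))
    sections.append(("Current request", request))
    return "\n\n".join(f"{title}:\n{body}" for title, body in sections)

context = build_context(
    instructions="Answer using only the reference material below.",
    examples=["Q: Is the battery covered?\nA: Yes, for 12 months."],
    documents=["Excerpt from the warranty section of the product manual..."],
    history=["User: I just bought the device.", "Assistant: Congratulations!"],
    request="Summarize the warranty terms.",
)
```

Even in a toy like this, the judgment calls are visible: which sections to include, what order they appear in, and how much of each one the model actually needs.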
This idea has started to spread within the AI community because it better captures the real work involved in building useful, reliable AI-powered tools. The label “prompt engineering” now feels too narrow, and in some settings, even misleading. In everyday use, people often think of prompts as simple questions or short commands, like something you would type into a search box. That casual understanding has stuck, even though the actual process of guiding a language model, especially in professional and industrial applications, has grown far more complex.
When people work with large language models, the real challenge often comes from managing what sits inside the context window: the space where every piece of information the model will draw on is gathered before it produces its next response. Deciding what goes in takes careful thought. Some details help, some only take up space, and sometimes there's simply not enough room for everything. So the person guiding the model has to keep adjusting: picking the right examples, trimming the less important parts, and sometimes pulling in extra data on the spot to fill gaps. The process can feel like walking a tightrope, balancing structure and instinct, weighing what's essential against what can be left behind, all while staying within the limits of what the model can handle.
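In practice, that balancing act usually comes down to a token budget. The following sketch is a simplified, hypothetical illustration (a crude word count stands in for a real tokenizer, and the helper names are invented) of keeping the highest-priority pieces and dropping what no longer fits:

```python
# Hypothetical sketch of fitting prioritized context items into a fixed
# budget. A crude word count stands in for a real tokenizer here.

def estimate_tokens(text):
    # Approximation only; production code would call the model's tokenizer.
    return len(text.split())

def fit_to_window(items, budget):
    """Keep the highest-priority items that fit within the token budget.

    `items` is a list of (priority, text) pairs; lower numbers are kept first.
    """
    kept, used = [], 0
    for priority, text in sorted(items, key=lambda item: item[0]):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            kept.append(text)
            used += cost
        # Items that don't fit are simply dropped; a fancier version might
        # summarize them or swap in a shorter substitute.
    return "\n\n".join(kept)

window = fit_to_window(
    [
        (0, "System instructions for the task..."),
        (1, "The user's current request..."),
        (2, "The most relevant retrieved document..."),
        (3, "Older conversation history..."),
    ],
    budget=4096,
)
```

Real systems refine every step of this, ranking items by relevance to the current request, summarizing what gets dropped, and fetching replacements on the fly, but the core trade-off stays the same.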
This change in wording is more than a simple swap of terms. It shows that people are beginning to see these systems more clearly, understanding how they really work beneath the surface and what’s needed to guide them well. In the early days, many believed success came down to finding the perfect set of words to type. That view is fading now. The real effort has shifted to shaping the whole stream of information that surrounds the task, not just the wording of a single question.
In that sense, context engineering does a better job of describing what is really happening behind the scenes. It points to something much bigger than simply choosing the right words: it is about creating the kind of environment where the model has what it needs to work properly.
As the technology moves forward, and as people continue to find new ways to apply language models to business, science, and education, this idea of context engineering is likely to become a central part of the conversation. It is not just a different way of saying prompt engineering; it is a more accurate way of describing the careful, layered process of making sure these systems have the right information, in the right form, at the right time.