Anthropic's Prompt Engineering Tutorial (2024)
github.com | 344 points by cjbarber | a day ago
I find the word "engineering" used in this context extremely annoying. There is no "engineering" here. Engineering is about applying knowledge, laws of physics, and rules learned over many years to predictably design and build things. This is throwing stuff at the wall to see if it sticks.
Words often have multiple meanings. The “engineering” in “prompt engineering“ is like in “social engineering”. It’s a secondary, related but distinct meaning.
For example, Google defines the second meaning of "engineering" as:
2. the action of working _artfully_ to bring something about. "if not for his shrewd engineering, the election would have been lost"
(https://www.google.com/search?q=define%3AEngineering)
Merriam-Webster has:
3 : calculated manipulation or direction (as of behavior), giving the example of “social engineering”
(https://www.merriam-webster.com/dictionary/engineering)
Random House has:
3. skillful or artful contrivance; maneuvering
(https://www.collinsdictionary.com/dictionary/english/enginee...)
Webster's has:
The act of maneuvering or managing.
(https://www.yourdictionary.com/engineering)
Look up “engineering” in almost any dictionary, and it will list something along those lines as one of the meanings of the word. It is a well-established, nontechnical meaning of “engineering”.
While that may be true, I have a hard time believing that's relevant to the intent of people putting "engineer" into every job title out there.
Your posted definitions contradict your conclusion. I would argue there is nothing calculated (as the parent poster said, there is no calculation, just trying things and watching what works), artful, or skillful (because it's so random, what skill is there to develop?) about "prompt engineering".
If you are going to play that game, "engineering" used to mean that you worked with engines.
Words evolve over time because existing words get adapted in ways to help people understand new concepts.
And in fact, the first engines were developed without a robust understanding of the physics behind them. So the original sense of "engineering" is closer to the current practices surrounding AI than the modern reinterpretation the root comment demands.
I am a cereal eating engineer, while I review the cereal box specification.
I do that every morning, before applying my bus-taking engineering to my job.
Because I do prompt engineering for a living.
So many words lost their meaning today... I am glad I'm not the only one annoyed by this.
I still like the Canadian approach: to have a title with the word Engineer in it, you have to be licensed by the engineering regulator for the province you work in. The US way, where every software dev, mechanic, HVAC installer, or plumber is an engineer, is ridiculous.
Disagree. I think it's valid to describe your work as engineering if it is in fact engineering, regardless of credential. If the distinction is important, call it "<credential name> Engineer". But to simply seize the word and say you can't use it until you have this credential is authoritarian, unnecessary, rent seeking corruption.
Doctors and lawyers are like this. Maybe it should be more like CPA, where you can be an accountant, or a certified accountant when that's needed for something important.
CPA is a great example. I'm a half decent accountant, and it should be legal for me to claim that when applying for a position. But it would be fraud to claim I'm a CPA.
what you are describing exists and is called a Professional Engineer in the US
https://www.nspe.org/about/about-professional-engineering/wh...
> authoritarian, unnecessary, rent seeking corruption.
Or maybe it's a public service, which reduces instances of fraudulent behavior, and provides cleaner signal in the market of ideas.
Sorry, but in Canada using the word Engineer near your name also means you personally take legal responsibility for your professional acts. We are sworn in when we earn the title of Junior Engineer after 4 years of university. Then, after a few years in the workplace, you can have a sponsor Engineer vouch for you. You pass yet another exam, and only then do you become an Engineer.
This is not true for most so-called Engineers in the US. Anyone can declare themselves an engineer with no exam, no sponsor, no oath, and no real legal ties to their shoddy work.
> This is not true for most so-called Engineers in the US. Anyone can declare themselves an engineer with no exam, no sponsor, no oath, and no real legal ties to their shoddy work.
I don't think that's correct. While there are exemptions, each state requires anyone offering engineering services to the public to be licensed.
https://educatingengineers.com/blog/pe-license-requirements-...
Sure, the term "Professional Engineer" is protected, but not "Engineer" by itself.
> I still like the Canadian approach that to have a title with the word Engineer in it you have to be licensed by the engineering regulator for the province you work in.
That's just not true.
(Despite what Engineers Canada and related parasites tell you.)
Hey now, don't disparage plumbers; they are usually certified and licensed, unlike engineers :P
I've seen some good arguments recently that software engineering is weird in that computers ARE completely predictable - which isn't the case for other engineering fields, where there are far more unpredictable forces at play and the goal is to engineer in tolerances to account for that.
So maybe "prompt engineering" is closer to real engineering than "software engineering" is!
With distributed systems I'd say network unreliability introduces a good amount of unpredictability. Whether that's comparable to what traditional engineering disciplines see, I couldn't say. Some types of embedded programming, especially those deployed out in the field, might also need to account for non-favorable conditions. But the predictability argument is interesting nonetheless.
You could make this same argument about a lot of work that falls onto "engineering" teams.
There's an implicit assumption that anything an engineer does is engineering (and a deeper assumption that software as a whole is worthy of being called software engineering in the first place)
Perhaps. My point is that the word "engineering" describes a specific approach, based on rigor and repeatability.
If the results of your work depend on a random generator seed, it's not engineering. If you don't have established practices, it's not engineering (hence "software engineering" was always a dubious term).
Throwing new prompts at a machine with built-in randomness to see if one sticks is DEFINITELY not engineering.
I don't see where a random seed would have any bearing on "a specific approach, based on rigor and repeatability".
The approach uses random seeds, and the rigor makes it repeatable.
If I'm thinking about mechanical engineering, something like the strength of a particular beam or the cycle life of a bearing is a random number. An engineer's job includes making random things predictable, by applying design tools like safety factors and observability tools. That's why we prefer ductile materials over brittle ones: both have a random strength around the spec, but one visibly changes before it fails, where the other doesn't. We can design in inspection processes that account for the randomness.
All kinds of tuning operations also start with somewhat random numbers and bring them to a spot. For the very contemporary example: training an ML model. Start with random weights, and predictably change them until you get an effective function.
I don't think the randomness excludes "prompt engineering" from being engineering. Instead, it's the rigor of the process in turning the random inputs into predictable outputs.
It can perfectly well be engineering if you have the right validation process. If you can prove that, given the randomness, you get satisfactory results for the given problem in 99.995% of cases, then you have a product that solves that problem following a typical engineering approach.
> Throwing new prompts at a machine with built-in randomness to see if one sticks is DEFINITELY not engineering.
Where does all the knowledge, laws of physics, and rules learned over many years to predictably design and build things come from, if not by throwing things at the wall and looking at what sticks and what does not, and then building a model based on the differences between what stuck and what did not, and then deriving a theory of stickiness and building up a set of rules on how things work?
"Remember kids, the only difference between screwing around and science is writing it down." -Adam Savage
They come from science. Engineering applies laws, concepts and knowledge discovered through science. Engineering and science are not the same, they are different disciplines with different outcome expectations.
Your analogy would work if eg gravity randomly changed, or on occasion disappeared entirely until you pointed it out.
"Great point, you're absolutely correct - things should not be floating around like that." - ChatGPT (probably)
I call it "Vibe Prompting".
Even minor changes to models can render previous prompts useless or invalidate assumptions for new prompts.
Even minor changes to a chemical formulation can render previous process design useless or invalidate assumptions for a new formulation.
Changing the production or operating process in the face of changing inputs or desired outputs is the bread and butter of countless engineers.
I don't think that is a good argument. In the chemical engineering world, a provider who just randomly changed formulations would be called "unreliable, shoddy, crappy, giving us something we did not order".
Engineers work with non-deterministic systems all the time. Getting them to work predictably within a known tolerance window and/or with a quantified and acceptable failure rate is absolutely engineering.
How do you quantify or decide an acceptable failure rate for llm output?
Same way as any other production model in ML, or any field that requires quality control. Really, this is not fundamentally different in conceptual approach from implementing any other technology or area of knowledge, which is a near-verbatim definition of engineering.
Depends on the failure mode and application. But a first approximation is the same way you would for a human output. E.g. process engineering for a support chatbot has many of the same principles as process engineering for a human staffed call center.
The 'potatolicious rebuttal:
https://news.ycombinator.com/item?id=44978319
(They're not an LLM fan; also: I directionally agree about "prompt" engineering, but the argument proves too much if it disqualifies "context" engineering, which is absolutely a normal CS development problem).
Unless you are going by a legal definition, where there's a formal enumeration of the tasks it covers, "engineering" means building stuff. Mostly stuff that is not "art", but sometimes even that.
Building a prompt is "prompt engineering". You could also call it "prompt crafting", or "prompt casting", but any of those would do.
Engineering also has a strong connotation of messing with stuff you don't understand until it works reliably. Your idea of it is very new, and doesn't even apply to all areas that are officially named that way.
It’s not engineering if you throw anything together without much understanding of the why of things.
But if you understand the model architecture, training process, inference process, computational linguistics, applied linguistics in the areas of semantics, syntax, and more— and apply that knowledge to prompt creation… this application of knowledge from systemic fields of inquiry is the definition of engineering.
Black box spaghetti-hits-wall prompt creation? Sure, not so much.
Part of the problem is that the "physics" of prompting changes with the models. At the prompt level, is it even possible to engineer when the laws of the universe aren't stable?
Engineering of the model architecture, sure. You can mathematically model it.
Prompts? Perhaps never possible.
It changes with any different flavor of a technology in any field of engineering, at least at the level of abstraction that you’re choosing to engage with the problem. Otherwise, this is just machine learning. It yields to the same conceptual approaches in quality control that require fundamental understanding of the underlying fields of study as any area of implementing technology—pretty much the definition of engineering.
You can no more assume the same exact production flow will produce equivalently for a different LLM model than you could for control of a different molecular compound put into a product. If you choose only to consider it at the level of equipment assembly, then sure, the basic rules of how you assemble the materials (the "physics") don't hold. If instead such efforts are informed by knowledge of the relevant fields, such as materials science and of course chemistry, then you're doing chemical engineering. Maybe you don't want to call the construction workers engineers, though heck, in that field many are! But certainly folks like the ones creating the guide posted here are being informed by exactly that sort of knowledge in the relevant underlying fields.
Indeed. Engineering is the act of employing our best predictive theorems to manifest machines that work in reality. Here we see people doing the opposite, describing theorems (and perhaps superstitions) that are hoped to be predictive, on the basis of observing reality. However insofar as these theorems remain poor in their predictive power, their application can scarcely be called engineering.
Is this an AI generated post?
Yes, it was written by a SoTA AGI trained for more than 30 years.
I would like to add that predictable generation defeats the very purpose of generative AI, so prompt engineering in this context will never equate to what engineering means in general.
I agree with you about what's described here.
There is engineering when this is done seriously, though.
Build a test set and design metrics for it. Do rigorous measurement on any change of the system, including the model, inference parameters, context, prompt text, etc. Use real statistical tests and adjust for multiple comparisons as appropriate. Have monitoring that your assumptions during initial prompt design continue to be valid in the future, and alert on unexpected changes.
I'm surprised to see none of that advice in the article.
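For the curious, a minimal sketch of what that kind of harness could look like, assuming a hypothetical labeled test set, a substring grading rule, and two prompt variants (none of this is from the article; an exact McNemar/sign test stands in for "real statistical tests"):

    # Minimal prompt-eval sketch: score two prompt variants on a labeled test
    # set and run an exact McNemar (sign) test on the cases where they disagree.
    # Test set, grading rule, and model choice are placeholders.
    import anthropic
    from scipy.stats import binomtest

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    TEST_SET = [  # hypothetical (input, expected substring) pairs
        ("I love this product", "positive"),
        ("This is terrible", "negative"),
        # ... more labeled cases
    ]

    def run_prompt(template: str, text: str) -> str:
        resp = client.messages.create(
            model="claude-3-haiku-20240307",  # the model the tutorial uses
            max_tokens=20,
            temperature=0,                    # reduce run-to-run variance
            messages=[{"role": "user", "content": template.format(input=text)}],
        )
        return resp.content[0].text.strip().lower()

    def score(template: str) -> list[bool]:
        return [expected in run_prompt(template, text) for text, expected in TEST_SET]

    a = score("Reply with one word, positive or negative: {input}")
    b = score("Think briefly, then reply with one word, positive or negative: {input}")

    wins_a = sum(x and not y for x, y in zip(a, b))  # A right, B wrong
    wins_b = sum(y and not x for x, y in zip(a, b))  # B right, A wrong
    print(f"A: {sum(a)}/{len(TEST_SET)}  B: {sum(b)}/{len(TEST_SET)}")
    if wins_a + wins_b:
        print("exact McNemar p =", binomtest(wins_b, wins_a + wins_b, 0.5).pvalue)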
This article talks about prompt evals https://www.anthropic.com/engineering/writing-tools-for-agen.... There are plenty of approaches to provide some degree of rigor around the slot machine output.
1) Software engineers don't often have deep physical knowledge of computer systems, and their work is far more involved with philosophy and to a certain extent mathematics than it is with empirical science.
2) I can tell you're not current with advances in AI. To be brief, just like with computer science more broadly, we have developed an entire terminology, reference framework and documentation for working with prompts. This is an entire field that you cannot learn in any school, and increasingly they won't hire anyone without experience.
I saw a talk by somebody from a big national lab recently, and she was announced as the "facilities manager". I wondered for about 5 seconds why the janitor was giving a talk at a technical conference, but it turns out facility meant the equivalent of a whole lab/instrument. She was the top boss.
This tutorial itself is very old, by recent AI innovation timelines.
Now it's all about Context Engineering, which is very much engineering.
First they came for science: Physics, Chemistry, Biology -vs- social science, political science, nutrition science, education science, management science...
Now they come for engineering: software engineering, prompt engineering...
:P
Assume for the sake of argument, that this is literally sorcery -- ie communing with spirits through prayer.
_Even in that case_, if you can design prayers that get relatively predictable results from gods and incorporate that into automated systems, that is still engineering. Trying to tame chaotic and unpredictable systems is a big part of what engineering is. Even designing systems where _humans_ do all the work -- just as messy a task as dealing with LLMs, if not more -- is a kind of engineering.
> rules learned over many years
How do you think they learned those rules? People were doing engineering for centuries before science even existed as a discipline. They built steam engines first and _then_ discovered the laws of thermodynamics.
Same when it's applied to programming though. "Software engineer" has always been a bit silly.
100%. I'm old enough to remember when they were called "developers". Now someone who codes in HTML and CSS is a "front end engineer". It's silly.
Yesterday I was trying to make a small quantized model work, but it just refused to follow all my instructions. I tried to use all the tricks I could remember, but fixing instruction-following for one rule would always break another.
Then I had an idea: do I really want to be a "prompt engineer" and waste time on this, when the latest SOTA models probably already have knowledge of how to make good prompts in their training data?
Five minutes and a few back-and-forths with GPT-5 later, I had a working prompt that made the model follow all my instructions. I did it manually, but I'm sure you can automate this "prompt calibration" with two LLMs: a prompt rewriter and a judge in a loop.
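That loop could look roughly like this; the rules, model IDs, and stopping condition here are my assumptions, not what was actually run:

    # Sketch of the "prompt calibration" loop: a judge checks the small model's
    # output against the rules, and a rewriter revises the prompt on failure.
    import anthropic

    client = anthropic.Anthropic()

    def ask(model: str, prompt: str) -> str:
        resp = client.messages.create(model=model, max_tokens=1024,
                                      messages=[{"role": "user", "content": prompt}])
        return resp.content[0].text

    SMALL = "claude-3-haiku-20240307"    # stand-in for the small quantized model under test
    BIG = "claude-3-5-sonnet-20240620"   # stand-in for the SOTA rewriter/judge
    RULES = "1. Answer in JSON. 2. Use only the provided context. 3. Refuse off-topic questions."

    def calibrate(task_prompt: str, max_rounds: int = 5) -> str:
        for _ in range(max_rounds):
            output = ask(SMALL, task_prompt)
            verdict = ask(BIG, f"Rules:\n{RULES}\n\nOutput:\n{output}\n\n"
                               "Does the output follow every rule? Reply PASS or FAIL with a reason.")
            if verdict.strip().upper().startswith("PASS"):
                break
            task_prompt = ask(BIG, f"This prompt:\n\n{task_prompt}\n\nproduced output that failed "
                                   f"review: {verdict}\nRewrite the prompt so a small model follows "
                                   "all the rules. Return only the rewritten prompt.")
        return task_prompt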
In today’s episode of Alchemy for beginners!
Reminds me of a time when I found I could speed up an algorithm by 30% on a benchmark set if I seeded the random number generator with the number 7. Not 8. Not 6. 7.
It does make things non-deterministic and complicated. Like it or not, this IS the job now. If you don't do it, someone else is going to have to.
In my AI application I made deliberate decisions to divorce prompt engineering from the actual engineering, create all the tooling needed to do the prompt engineering as methodically as possible (componentize, version, eval) and handed it off to the subject matter experts. Clearly people who think this is the equivalent of choosing a seed shouldn't be writing prompts.
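Concretely, that separation can be as simple as treating each prompt as a named, versioned component with its own eval cases; this is a toy sketch with made-up names, not the parent's actual tooling:

    # Toy sketch: prompts as versioned components that SMEs edit and eval
    # independently of application code. Names and fields are hypothetical.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class PromptComponent:
        name: str
        version: str
        template: str                                    # owned by subject matter experts
        eval_cases: list = field(default_factory=list)   # (input kwargs, expected substring)

    REGISTRY = {
        ("summarize_ticket", "1.2.0"): PromptComponent(
            name="summarize_ticket",
            version="1.2.0",
            template="Summarize the support ticket below in two sentences.\n\n{ticket}",
            eval_cases=[({"ticket": "Printer is on fire"}, "fire")],
        ),
    }

    def render(name: str, version: str, **kwargs) -> str:
        return REGISTRY[(name, version)].template.format(**kwargs)

    print(render("summarize_ticket", "1.2.0", ticket="Customer cannot log in since Tuesday."))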
> Like it or not, this IS the job now.
Nope. The job is still to come up with working code on the end.
If LLMs make your life harder, and you just don't use them, then you'll just get the job done without them.
"Engineering" here seems rhetorically designed to convince people they're not just writing sentences. With respect "prompt writing" probably sounds bad to the same type of person who thinks there are "soft" skills.
This strikes me as a silly semantics argument.
One could similarly argue software engineering is also just writing sentences with funny characters sprinkled in. Personally, my most productive "software engineering" work is literally writing technical documents (full of sentences!) and talking to people. My mechanical engineering friends report similar as they become more senior.
I don't think so. It says the words were chosen to engineer people's emotions and make them feel the right way.
Tech people do not feel good about "writing prompt essays", so it is called engineering to buy their emotional acceptance.
Just like we call wrong output "hallucination" rather than "bullshit" or "lie" or "bug" or "wrong output". Hallucination is used to make us feel better and more accepting.
There absolutely are soft skills and it is clear that you do not have them.
Here's my best prompt engineering advice for hard problems: always funnel out and then funnel in. Let me explain.
State your concrete problem and context. Then we funnel out by asking the AI to do a thorough analysis and investigate all the possible options and approaches for solving the issue. Ask it to go search the web for all possible relevant information. And now we start funneling in again by asking it to list the pros and cons of each approach. Finally, we ask it to choose which one or two solutions are the most relevant to our problem at hand.
For easy problems you can just skip all of this and just ask directly because it'll know and it'll answer.
The issue with harder problems is that if you just ask it directly to come up with a solution then it'll just make something up and it will make up reasons for why it'll work. You need to ground it in reality first.
So you do: concrete context and problem, thorough analysis of options, list pros and cons, and pick a winner.
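As a sketch, the funnel can be driven as one running conversation; the stage wording and example problem are illustrative, and the "search the web" line is just prompt text, not an actual tool call:

    # Funnel out (enumerate approaches), then funnel in (pros/cons, pick a winner),
    # keeping the whole exchange in one conversation so later stages see earlier ones.
    import anthropic

    client = anthropic.Anthropic()
    history = []

    def turn(text: str) -> str:
        history.append({"role": "user", "content": text})
        resp = client.messages.create(model="claude-3-5-sonnet-20240620",
                                      max_tokens=2048, messages=history)
        reply = resp.content[0].text
        history.append({"role": "assistant", "content": reply})
        return reply

    problem = "Our nightly batch job intermittently times out writing to Postgres under load."

    turn(f"Concrete problem and context:\n{problem}\n\n"
         "Do a thorough analysis and investigate all plausible approaches for solving this. "
         "Search the web for relevant information if you can.")            # funnel out
    turn("List the pros and cons of each approach you identified.")        # start funneling in
    print(turn("Choose the one or two approaches most relevant to this problem and justify the choice."))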
Doesn't this also apply for non-AI problem solving as well?
“Honey, which restaurant should we eat at tonight? First, create a list of restaurants and highlight the pros and cons of each. Conduct a web search. Narrow this down to 2 restaurants and wait for a response.”
The big unlock for me reading this is to think about the order of the output. As in, ask it to produce evidence and indicators before answering a question. Obviously I knew LLMs are a probabilistic autocomplete; for some reason, I didn't think to use this for priming.
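One way to apply that priming is a template along these lines; the tag names and wording are mine, loosely following the XML-tag style the tutorial teaches:

    # Illustrative "evidence before answer" template: the model emits supporting
    # indicators first, so the final answer is conditioned on them.
    EVIDENCE_FIRST = """\
    Question: {question}

    First, inside <evidence> tags, list the specific facts and indicators from the
    provided material that bear on this question. Only then, inside <answer> tags,
    give your final answer, consistent with the evidence you listed.
    """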
Note that this is not relevant for reasoning models, since they will think about the problem in whatever order they want before outputting the answer. Since they can "refer" back to their thinking when outputting the final answer, the output order is less relevant to the correctness. That relative robustness is likely why OpenAI is trying to force reasoning onto everyone.
This is misleading if not wrong. A thinking model doesn’t fundamentally work any different from a non-thinking model. It is still next token prediction, with the same position independence, and still suffers from the same context poisoning issues. It’s just that the “thinking” step injects this instruction to take a moment and consider the situation before acting, as a core system behavior.
But specialized instructions to weigh alternatives still work better, as the model ends up thinking about thinking, then thinking, then making a choice.
I think you are misleading as well. Thinking models do recursively generate the final “best” prompt to get the most accurate output. Unless you are genuinely giving new useful information in the prompt, it is kind of useless to structure the prompt in one way or another because reasoning models can generate intermediate steps that give best output. The evidence on this is clear - benchmarks reveal that thinking models are way more performant.
You're both kind of right. The order is less important for reasoning models, but if you carefully read thinking traces you'll find that the final answer is sometimes not the same as the last intermediary result. On slightly more challenging problems LLMs flip flop quite a bit and ordering the output cleverly can uplift the result. That might stop being true for newer or future models but I iterated quite a bit in this for sonnet 4.
Furthermore, the opposite behavior is very, very bad: ask it to give you an answer and justify it, and it will output a randomish reply and then enter bullshit mode rationalizing it.
Ask it to objectively list pros and cons from a neutral/unbiased perspective and then proclaim an answer, and you’ll get something that is actually thought through.
I typically ask it to start with some short, verbatim quotes of sources it found online (if relevant), as this grounds the context into “real” information, rather than hallucinations. It works fairly well in situations where this is relevant (I recently went through a whole session of setting up Cloudflare Zero Trust for our org, this was very much necessary).
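A sketch of that quote-grounding instruction as a reusable template (the wording is illustrative, not from any official guide):

    # Illustrative "ground with verbatim quotes" template: real source excerpts
    # go into the context before the answer, which discourages invented citations.
    QUOTE_GROUNDED = """\
    Task: {task}

    Before answering, output 2-4 short verbatim quotes from the documentation or
    search results you are relying on, inside <quotes> tags, each with its source
    URL. If you cannot find a real source, say so instead of inventing one.
    Then give your answer inside <answer> tags, citing only the quoted sources.
    """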
I try so hard to get ChatGPT to link and quote real documentation. It makes up links and fake quotes; it even gaslights me when I clarify the information isn't real.
This is written for the Claude 3 models (Haiku, Sonnet, Opus). While some lessons will be relevant today, others will not be useful or necessary on smarter, RL'd models like Sonnet 4.5.
> Note: This tutorial uses our smallest, fastest, and cheapest model, Claude 3 Haiku. Anthropic has two other models, Claude 3 Sonnet and Claude 3 Opus, which are more intelligent than Haiku, with Opus being the most intelligent.
Yes, Chapters 3 and 6 are likely less relevant now. Any others? Specifically assuming the audience is someone writing a prompt that’ll be re-used repeatedly or needs to be optimized for accuracy.
Agree with the other commenters here that this doesn't feel like engineering.
However, Anthropic has done some cool work on model interpretability [0]. If that tool was exposed through the public API, then we could at least start to get a feedback loop going where we could compare the internal states of the model with different prompts, and try and tune them systematically.
[0] https://www.anthropic.com/research/tracing-thoughts-language...
It's one year old. Curious how much of it is irrelevant already. Would be nice to see it updated.
My workflow has gotten pretty lax around prompts since the models have gotten better. Especially with Claude 4.5 (and 4 before it) once they have a bit of context loaded about the task at hand.
I keep it short and conversational, but I do supervise it. If it goes off the rails just smash esc and give it a course correction.
And then if you're coming from no context: I throw a bit more detail in at the start, and usually end the initial prompt with a question asking it if it can see what I'm talking about in the code; or if it's going to be big, I use planning mode.
Suggest adding 2024 to the title
So we've taught this thing how to do what we did, and now we need to be taught how to get it to do the things we taught it to do. If this didn't have the entire US economy behind it, it would catch fire like a hot air balloon.
Is there an up to date version of this that was written against their latest models?
Don't write prompts yourself, use DSPy. That's real prompt "engineering"
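For anyone who hasn't tried it, a minimal DSPy-style sketch; the API shown reflects recent DSPy releases and may have drifted, and the signature, model string, and optimizer mention are illustrative:

    # Declare what you want (a signature); DSPy builds and can optimize the prompt.
    import dspy

    dspy.configure(lm=dspy.LM("anthropic/claude-3-haiku-20240307"))  # LiteLLM-style model id

    class TicketTriage(dspy.Signature):
        """Classify a support ticket's urgency."""
        ticket = dspy.InputField()
        urgency = dspy.OutputField(desc="one of: low, medium, high")

    triage = dspy.ChainOfThought(TicketTriage)
    print(triage(ticket="Production database is down for all customers").urgency)

    # From here, an optimizer (e.g. dspy.MIPROv2) can tune the instructions and
    # few-shot examples against a labeled metric instead of hand-editing prose.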
I really struggle to feel the AGI when I read such things. I understand this is all of a year old, and that we have superhuman results in mathematics, basic science, game playing, and other well-defined fields. But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
> But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
It's right there in the name. Large language models model language and predict tokens. They are not trained to deeply comprehend, as we don't really know how to do that.
Have you ever tried to get an average human to do that? It's a mixed bag. Computers until now were highly repeatable relative to humans, once programmed, but hopeless at "fuzzy" or associative tasks. Now they have a new trick that lets them grapple with ambiguity, but the cost is losing that repeatability. The best, most reliable humans were not born that way; it took years or decades of education, and even then it can take a lot of talking to transfer your idea into their brain.
> superhuman results in mathematics
LLMs mostly spew nonsense if you ask them basic questions on research or even master's degree-level mathematics. I've only ever seen non-mathematicians suggest otherwise, and even the biggest mathematician advocate for AI, Terry Tao, seems to recognise this too.
Ask yourself "what is intelligence?". Can intelligence at the level of human experience exist without that which we all also (allegedly) have... "consciousness". What is the source of "consciousness"? Can consciousness be computed?
Without answers to these questions, I don't think we are ever achieving AGI. At the end of the day, frontier models are just arithmetic, conditionals, and loops.
Nothing about telling it to fuck off, of course, to "engineer" its user sentiment analysis?
This AI madness is getting more stupid every day…