Prompt
The instruction you give an LLM
The input text you give an LLM — its quality largely determines the quality of the answer.
A prompt is the single input you give the model: question, instruction, context, examples — everything fits into the prompt. Prompt engineering is the craft of writing this text carefully so the same model produces much better output.
Anatomy of a good prompt: role ("you are an expert editor"), task ("summarize this text"), context (relevant background), examples (if any), format ("JSON or markdown"), constraints ("max 200 words, in Turkish").
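Those six parts can be assembled mechanically. A minimal sketch (the `build_prompt` helper and its labels are illustrative, not any library's API):

```python
def build_prompt(role, task, context="", examples="", fmt="", constraints=""):
    """Assemble a prompt from the six parts; empty parts are skipped."""
    parts = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}" if context else "",
        f"Examples:\n{examples}" if examples else "",
        f"Format: {fmt}" if fmt else "",
        f"Constraints: {constraints}" if constraints else "",
    ]
    return "\n".join(p for p in parts if p)

prompt = build_prompt(
    role="You are an expert editor.",
    task="Summarize this text.",
    fmt="Markdown bullet list",
    constraints="Max 200 words, in Turkish",
)
```

The point isn't the helper itself but the discipline: every prompt states its role, task, format, and constraints explicitly instead of leaving the model to guess.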
Modern prompting techniques: zero-shot (instruction only), few-shot (with examples), chain-of-thought (think step by step), ReAct (think-act loops), tree-of-thoughts (explore alternatives).
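The first three techniques differ only in how the prompt string is built. A sketch on a made-up sentiment task (the review texts and labels are invented for illustration):

```python
# Zero-shot: instruction only.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died in a day."
)

# Few-shot: the same instruction plus worked examples the model can imitate.
examples = [
    ("Loved every minute of it.", "positive"),
    ("Total waste of money.", "negative"),
]
few_shot = "Classify the sentiment of this review as positive or negative.\n"
few_shot += "".join(f"Review: {text}\nSentiment: {label}\n" for text, label in examples)
few_shot += "Review: The battery died in a day.\nSentiment:"

# Chain-of-thought: a cue asking the model to reason before answering.
cot = zero_shot + "\nLet's think step by step."
```

ReAct and tree-of-thoughts go beyond single strings: they loop, feeding the model's reasoning or tool outputs back into the next prompt.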
Think of onboarding a new assistant. Say "do something" and they freeze. Say "read this report, summarize it in 3 bullets, one sentence each, audience is leadership" and you get exactly what you wanted. LLMs work the same way: the more precise the instruction, the more accurate the output.
Bad prompt: "Write a job ad."
Good prompt: "You're an experienced HR specialist. Write a job ad for a Senior Backend Developer. Company: 50-person fintech startup in Istanbul. Stack: Go, PostgreSQL, AWS, Kubernetes. Include: role summary, 5 responsibilities, 5 requirements, 3 'plus' skills. Format: markdown with headers. Tone: professional but warm. Max 350 words."
First prompt: generic, cliché, unusable. Second prompt: tailored, formatted, ready to ship.
- Every LLM interaction — prompting isn't optional, it's the interface
- Changing behavior — always try prompting before fine-tuning
- Fast iteration: a prompt change takes seconds; fine-tuning takes days
- Doing the same task thousands of times where token cost balloons — consider fine-tuning
- Trying to 'add' a capability the model lacks (e.g. teach a new language) — prompting alone won't do it
Negative instructions are weak
Say 'do Y' instead of 'don't do X'. Models respond better to positive examples. Replace 'don't write anything but JSON' with 'output only valid JSON'.
Instructions in the middle
In long contexts, instructions placed at the start or end get followed more reliably. Instructions buried in the middle tend to be ignored: the 'lost in the middle' problem.
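A simple mitigation: put the instruction before the document and repeat it after, so it never sits only in the middle. A sketch (the helper name and separator are arbitrary):

```python
def place_instruction(instruction, document):
    """Sandwich a long document between the instruction and a
    short reminder of it, avoiding the lost-in-the-middle position."""
    return f"{instruction}\n\n---\n{document}\n---\n\nReminder: {instruction}"

prompt = place_instruction(
    "Summarize the report in 3 bullets for leadership.",
    "(long report text here)",
)
```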
No prompt versioning
In production, prompts are code. Version them, A/B test, monitor. I've seen accuracy drop 20% because a developer 'tweaked a few words.'
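Treating prompts as code can start as simply as a pinned registry. A minimal sketch (the registry dict, naming scheme, and prompt texts are all hypothetical):

```python
# Hypothetical registry: prompts stored like code, keyed by name and version.
PROMPTS = {
    "summarize@1.0.0": "Summarize the text in 3 bullets.",
    "summarize@1.1.0": "Summarize the text in 3 bullets, one sentence each, for leadership.",
}

def get_prompt(name: str, version: str) -> str:
    """Fetch a pinned prompt version; an unknown version fails loudly
    instead of silently drifting to 'a few tweaked words'."""
    key = f"{name}@{version}"
    if key not in PROMPTS:
        raise KeyError(f"unknown prompt version: {key}")
    return PROMPTS[key]
```

Because callers pin an exact version, any change ships as a new version that can be A/B tested and rolled back, instead of mutating the prompt everyone is already using.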