How I Talk to AI to Actually Get What I Want
There’s something nobody tells you when you start using ChatGPT, Claude or Gemini: the result you get depends almost entirely on how you ask. Not on the model. Not on the subscription you bought. On how you write.
It’s called prompt engineering. Complicated name for a simple concept: giving AI the right instructions to get what you need. After two years of daily experimentation, I’ve figured out that only a handful of things actually make a difference.
The Basic Concept: AI Doesn’t Read Your Mind
When you write a vague prompt, AI does what it can. It has no context, doesn’t know what you really need, doesn’t know your goal. It fills the gaps with the most likely responses — which are usually the most generic ones.
The turning point for me came when I stopped asking questions and started giving instructions.
The practical difference looks like this:
“Explain machine learning to me” → textbook answer.
“Explain machine learning as if you had to describe it to my cousin who knows nothing about technology but understands everything about football” → something much more useful.
Same topic. Completely different prompt. Results that aren't even in the same league.
The Things That Changed How I Use AI
The first was assigning a role. It sounds like a small thing, but it works. If you write “you are an experienced email marketing copywriter” before your request, the model calibrates to that register. Not because it “believes” it — but because context drives text generation. I use this every day for writing formal communications at work — I change the role and the whole tone of the response shifts.
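In practice this is just string templating. Here's a minimal sketch of how a role prefix can be applied programmatically; the helper name `with_role` is my own, not any library's API:

```python
def with_role(role: str, request: str) -> str:
    """Prefix a request with a role line so the model calibrates its register."""
    return f"You are {role}.\n\n{request}"

prompt = with_role(
    "an experienced email marketing copywriter",
    "Write a subject line for our spring sale announcement.",
)
```

Swapping the first argument is all it takes to shift the tone of everything that follows.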
The second was giving concrete examples. Instead of describing what I want, I show what I want. If I need a tagline, I write one similar to what I have in mind and ask the AI to work in that direction. AI is excellent at replicating patterns — give it one.
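A rough sketch of the same idea as code, assuming a hypothetical `few_shot` helper (the format is illustrative, not any standard):

```python
def few_shot(instruction: str, examples: list[str]) -> str:
    """Show the model concrete examples of the target style before asking."""
    shown = "\n".join(f"- {e}" for e in examples)
    return (
        f"{instruction}\n\n"
        f"Examples of the style I'm after:\n{shown}\n\n"
        "Write one more in the same direction."
    )

prompt = few_shot(
    "I need a tagline for a budgeting app.",
    ["Your money, finally making sense."],
)
```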
The third, the one that has saved me the most time, was narrowing the scope. Length, tone, what to exclude. “Max 150 words, no jargon, don’t mention competitors”. The more constraints you give, the less you have to rewrite afterwards.
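Constraints also template cleanly. This is one way to append them as an explicit list; the `constrain` helper is my own naming:

```python
def constrain(request: str, rules: list[str]) -> str:
    """Append explicit constraints so less has to be rewritten afterwards."""
    bullet_rules = "\n".join(f"- {r}" for r in rules)
    return f"{request}\n\nConstraints:\n{bullet_rules}"

prompt = constrain(
    "Write a product description for our new app.",
    ["Max 150 words", "No jargon", "Don't mention competitors"],
)
```

A flat bulleted list like this tends to be easier for the model to follow than constraints buried mid-sentence.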
The fourth, and probably the smartest, is meta-prompting. Instead of racking your brain trying to build the perfect prompt, just ask the AI to build it for you. You explain roughly what you want to achieve, and it gives you back a structured prompt. Then you use that to get the real answer.
Here’s how it works in practice. I need to write a difficult email to a client. Instead of overthinking it, I write to Claude: “I need to write an email to a client who complained about a late delivery. I want to be professional but not cold, acknowledge the problem without over-apologizing and propose a concrete solution. Write me the best prompt to get this email.” Claude gives me a detailed prompt. I use it. The email comes out right almost every time.
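The client-email example above can be templated the same way. This is a sketch under my own conventions (the `meta_prompt` helper and its output format are assumptions, not a standard):

```python
def meta_prompt(goal: str, requirements: list[str]) -> str:
    """Ask the model to write the prompt for you instead of crafting it by hand."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    return (
        f"I need to {goal}.\n\n"
        f"Requirements:\n{reqs}\n\n"
        "Write me the best prompt to get this result. "
        "Return only the prompt, ready to use."
    )

request = meta_prompt(
    "write an email to a client who complained about a late delivery",
    [
        "Professional but not cold",
        "Acknowledge the problem without over-apologizing",
        "Propose a concrete solution",
    ],
)
```

You send `request` to the model, take the prompt it returns, and use that for the actual email.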
Yes, you’re using AI to talk better to AI. There’s something slightly absurd about it — I know. But it works, and it has saved me a lot of time in those moments when I knew what I wanted but had no idea how to ask for it.
One last thing, something I discovered more recently that seems obvious until you actually try it: avoid aggressive language in prompts. Phrases like “YOU MUST absolutely” or “NEVER EVER” written in caps don’t intimidate AI — they make it worse. From what I understand, these models were trained on billions of human texts — and in human texts, calm and structured language produces quality communication. Agitated language doesn’t. AI learned the same thing. Calm, direct instructions work better. Full stop.
When Things Go Wrong (And They Do)
Prompt engineering is not a magic formula. There are days when AI produces exactly what you asked for and the result is still useless. And there are days when you get the prompt wrong and end up with something unexpectedly good.
Debugging is part of the game. If the response doesn’t work, don’t throw everything out: fix the prompt surgically. What was missing? Context? An example? A constraint?
Something I use a lot is the funnel prompt: I start broad and narrow down progressively. First I ask for an overview, then I focus on one aspect, then I go deeper. It's slower, but the final result is almost always better than what I get from a single long prompt.
Another useful technique, especially for complex tasks, is breaking the request into sub-questions. Instead of asking everything at once, I ask one question at a time and use each answer as the basis for the next. AI has limited working memory — if you give it too much to handle at once, something gets lost along the way.
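The chaining loop is simple to sketch. Here `ask` is a placeholder for whatever client function actually calls your model (in the test below it's a fake); the structure, not the API, is the point:

```python
def chain(ask, questions):
    """Ask one question at a time, feeding each answer into the next question."""
    context = ""
    answers = []
    for question in questions:
        # Prepend the previous answer so each step builds on the last.
        prompt = f"{context}\n\n{question}" if context else question
        answer = ask(prompt)
        answers.append(answer)
        context = f"Previous answer:\n{answer}"
    return answers
```

Each step keeps the prompt small, so nothing has to compete for the model's attention at once.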
Where This Actually Gets Used
A question people ask me a lot: when do you actually need this in practice? The honest answer is everywhere. I use these approaches for writing formal communications at work, handling difficult client interactions, drafting briefings. On the blog I use them to structure articles, do research, compare tools. When I deal with personal finance, I use them to get complex concepts explained or to analyze data.
The point isn’t the industry. It’s that once you understand how the mechanism works, you apply it everywhere you use AI. It becomes a reflex.
Some People Are Already Calling It Something Else
Andrej Karpathy — one of the most respected names in AI — has proposed renaming all of this “context engineering”. His reasoning is that the model works like a CPU, its context window is RAM, and your job is to act as the operating system: loading the right information at the right moment.
It’s not just a name change. It shifts how you think about the problem. It’s no longer about finding the magic phrase — it’s about building the best possible context for what you need.
And anyway, whatever you call it: the perfect prompt doesn’t exist. There’s only the best prompt for that moment, with that model, for that specific purpose.
I have prompts that worked great with GPT-3.5 and produce different results on Claude. Prompts that are perfect for blog posts and useless for emails. Claude, ChatGPT and Gemini behave very differently from each other — a prompt optimized for one doesn’t necessarily work on another. Every tool has its quirks, every context needs calibration.
The only way to get better is to experiment — and keep the prompts that work. I have a document where I save the ones I reuse. One of the most useful things I’ve ever done.
If you’re just starting out, my advice is simple: stop asking AI questions. Start giving it instructions. Then, gradually, start giving it more precise ones.
The rest takes care of itself.
