Six elements that turn a vague question into a precise, usable response
The single fastest improvement most people can make to the quality of their AI outputs is to think more carefully about what they are actually asking. A vague prompt produces a vague response. A well-structured prompt gives the AI enough to work with and enough constraint to stay on target.
The framework below provides six elements to consider before you type. You do not need all six every time. A quick factual question needs none of them. A complex task that requires a genuinely useful, contextually grounded response benefits from most of them.
The Framework
Each element plays a distinct role. The table below shows what each one does and why it matters.
Role and Context do the most work. Setting a clear role and grounding the request in real context consistently produce more useful responses than any other single change.
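As a rough sketch of how the elements combine, assuming the six are Role, Context, Task, Constraints, Format, and Confirmation (names used elsewhere in this guide; "Task" is our label for the core request, and all example values below are purely illustrative), a prompt can be assembled like this:

```python
def build_prompt(role=None, context=None, task=None,
                 constraints=None, format_spec=None, confirmation=False):
    """Combine whichever framework elements are supplied into one prompt.

    Only the elements you pass in appear in the output, mirroring the
    guide's point that not every prompt needs all six.
    """
    parts = []
    if role:
        parts.append(f"Role: act as {role}.")
    if context:
        parts.append(f"Context: {context}")
    if task:
        parts.append(f"Task: {task}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    if format_spec:
        parts.append(f"Format: {format_spec}")
    if confirmation:
        parts.append("Before answering, confirm you have what you need "
                     "and ask about any gaps.")
    return "\n".join(parts)


# Illustrative values only; adapt them to your own situation.
prompt = build_prompt(
    role="an experienced HR adviser",
    context="a 40-person charity updating its hybrid-working policy",
    task="draft three policy options with the trade-offs of each",
    constraints="under 400 words, plain English, neutral tone",
    format_spec="a short introduction followed by a bulleted list per option",
    confirmation=True,
)
print(prompt)
```

The helper is deliberately order-preserving: leading with Role and Context puts the two highest-leverage elements first, and the Confirmation line sits last so it reads as a closing instruction.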
A good first response is a starting point, not a finished output. Ask the AI to simplify, expand, or reframe. Treat the exchange as a conversation, not a one-shot transaction.
Asking the AI to respond as a sceptic, a regulator, or a customer who disagrees can surface considerations that a single framing misses entirely.
Adding "let me know how confident you are in this response" or "flag anything you are uncertain about" surfaces the limits of what the AI actually knows, rather than leaving them hidden behind fluent prose.
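That instruction can be bolted onto any prompt. The sketch below shows one way to do it; `with_uncertainty_flag` is a hypothetical helper name, and the wording simply mirrors the phrasing above:

```python
def with_uncertainty_flag(prompt):
    """Append an uncertainty-surfacing instruction to an existing prompt.

    Hypothetical helper; the appended wording echoes this guide's
    suggested phrasing and can be adjusted to taste.
    """
    return (prompt
            + "\n\nFlag anything you are uncertain about, and let me know "
              "how confident you are in this response.")


print(with_uncertainty_flag("Summarise the key risks in this contract."))
```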
Not every prompt needs all six elements: a quick factual query needs none, while a complex analytical task needs most. The skill is knowing which elements carry the most weight for what you are trying to do.
The AI has no access to your organisation, your context, or your professional judgement unless you provide it. The more specific your input, the more specific and useful the output.
Why this works: The AI has a clear role, understands the actual situation, knows exactly what is needed, and has guardrails on length and tone. The confirmation element catches any gaps before the response is generated.
Why this works: Even without all six elements, the prompt is specific enough to guide the AI to a usable, well-calibrated output. The audience note in Constraints does significant work here.
Why this fails: No role, no context, no constraints, no format. The AI has no way to calibrate the response to your situation, your level of knowledge, or your purpose. What comes back will be broad, generic, and unlikely to be directly useful.
Why this fails: Contradictory constraints ("short but detailed", "fun but professional") give the AI irreconcilable instructions. The format request is equally conflicted. The result will be a muddle that satisfies none of the criteria properly.
Skipping Role and Context is the most common mistake. Without them, the AI defaults to a generic frame that may be entirely wrong for your situation. The output will look plausible but will not engage your actual context, your audience, or your real purpose.
A first response that looks good enough often becomes the final output, not because it genuinely is good enough, but because time is short. The framework helps you get a stronger first response. Iteration is what turns a good response into an excellent one.
The prompt framework is the starting point. How you engage throughout a conversation, and how you evaluate what comes back, are where the real development sits. Download the full guide for the complete picture.