Most of us who use AI tools regularly will say we find them useful. Ask what we use them for, and the answers cluster in a predictable place: drafting emails, summarising documents, generating ideas, getting quick answers. Useful, certainly. But if that is where the engagement stops, a significant amount of what these tools are capable of goes untouched.

This is not a criticism. AI tools have developed faster than we have been able to think about how to use them well, and we are continually playing catch-up. In effect, we learned to type queries before we learned to have conversations. And because the outputs were better than nothing, the habit stuck.

The gap between using AI and using it well is not about which tool we choose. It is about how we engage with it.

It starts with the prompt

The single fastest improvement most people can make is to think more carefully about what they are actually asking. A vague prompt produces a vague response. An AI tool is not a search engine and does not reward brevity in the same way. The more clearly we frame what we need, the more precisely we get it back.

A simple framework helps here. Before we type, consider six things: the role we want the AI to take (expert, coach, analyst), the context it needs to understand our situation, the specific request we are making, any constraints on the output (length, tone, format), how we want the response structured, and an invitation for it to flag anything it needs before it starts. We do not need all six every time. A quick clarification question needs none of them. A complex analytical task needs most of them.
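To make this concrete, here is one hypothetical version of the difference (the scenario is illustrative, not prescriptive). Instead of "summarise this report", we might write: "You are an experienced analyst (role). I am preparing a briefing for a non-specialist board (context). Summarise the attached report (request) in under 300 words of plain English (constraints), as three headed sections covering findings, risks, and recommendations (structure). Before you start, flag anything that is unclear or missing (invitation)." The first prompt gets a summary. The second gets our summary.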

This shifts the AI from producing a generic response to producing one calibrated to our situation. The difference in output quality is often significant, and it costs nothing except a few minutes of deliberate thinking before we start.

But the prompt is only the beginning

Here is where it gets more interesting. How we engage during a conversation matters as much as how we start it. Most AI interactions end too early. A first response that looks good enough becomes a final response, not because it genuinely is good enough, but because time is short and the threshold has been cleared.

Productive AI engagement tends to follow three stages. Understanding what each one is for, and what it costs us to skip it, significantly changes the quality of what we get out.

Stage 1
Collaborative

We are using the AI as a thinking partner: exploring ideas, building a position, testing reasoning. At this stage, the AI should be pushing our thinking forward, not just confirming what we already believe. This means prompting it to offer alternatives, to identify what we have not considered, to play a different role in the conversation than the one it defaults to. Most people reach this stage. Fewer use it fully. The tendency is to accept the first version of something that looks reasonable, rather than using the collaborative exchange to develop something genuinely strong.

Stage 2
Evaluative

This stage requires a deliberate gear change. Once we have a developed position, a draft document, or a proposed approach, we bring something more demanding to it. Not a request to tidy it up or improve the flow, but a genuine challenge to the substance. Ask the AI to identify the assumptions we have not examined. Ask it where the reasoning breaks down. Ask it what a well-informed sceptic would say. Ask it to be honest rather than agreeable, because agreeable is what it defaults to unless we explicitly instruct otherwise.

This stage is uncomfortable by design. It surfaces things we would rather not hear about work we have already invested time in. But that discomfort is the point. A document that has been through a genuine evaluative challenge is a different object to one that has not. The gap between them is often larger than people expect, and the gap between where most AI interactions stop and where this stage begins is where most of the unrealised value sits.

Stage 3
Reflective

Having been through the first two, we step back from the content and look at the conversation itself. Where were the prompts strong? Where did the engagement drift because the brief was not clear enough? What did we accept without sufficient challenge? What would we do differently? This is not an abstract exercise. It is how AI use becomes a learning discipline rather than a series of one-off outputs. Outputs improve through iteration. Practice improves through reflection. The third stage is what makes the difference between someone who is getting better at using AI and someone who is just getting more accustomed to it.

The cognitive dimension

Three established frameworks from education and psychology help explain why the quality of AI engagement varies so much between individuals, even when they are using the same tools.

Bloom's Taxonomy
What cognitive work are we actually doing?

Most AI use sits at the lower levels: retrieving information, getting explanations, producing content on request. Higher-order use involves analysis, evaluation, and original synthesis. These are not out of reach, but they require deliberate intent. The tool will not push us toward them on its own.

Perry's Scheme
How critically do we evaluate what the AI tells us?

A significant number of users accept AI responses as authoritative without seeking corroboration or applying their own judgement. This is particularly worth examining in relation to confident-sounding responses on complex or contested topics, where fluency and accuracy are not the same thing.

SOLO Taxonomy
Do our AI conversations build coherent understanding?

Transactional use produces isolated outputs. Architectural use builds something cumulative, where each conversation connects to and extends what came before. The distinction reveals whether we are accumulating information or developing understanding.

Together, these three dimensions reveal a useful diagnostic picture. Strong prompting combined with uncritical acceptance of responses is a genuinely risky combination. Well-structured engagement that never moves beyond basic cognitive tasks is inefficient. The goal is to develop across all three simultaneously, and to know which of the three is currently the limiting factor in our own practice.

What this means in practice

None of this requires a completely different approach to the tools we are already using. It requires a more deliberate one. Think before we prompt. Push further than the first response. Ask harder questions of what comes back. Use the second stage even when it is uncomfortable. And periodically step back and ask whether the quality of our AI engagement is genuinely developing, or whether we are doing the same thing slightly faster than we were six months ago.

The distinction matters because the tools will keep improving regardless. The question is whether the human side of the partnership is keeping pace.

About the author

John Dynes EdD

John is Head of Insights for a training organisation operating in the Defence sector, with a focus on educational insight, pedagogical development, and the application of Generative AI. His work sits at the intersection of how people learn, how organisations develop capability, and how AI, used well, can extend the reach and quality of both.