For me, 2022 was an exciting year. Having emerged from the pandemic, life was settling into its new normal, at least until November, when OpenAI released ChatGPT onto an unsuspecting general public. Within a couple of days of hearing the announcement, I had my free account and was exploring.
I should say at the outset that I can be a bit of a technophobe. I am a baby boomer and AI felt like entering some sort of dark science world. No background in digital content, no coding, no website builds. I had no framework for it. I just started pressing buttons and asking questions.
My approach in those early months was probably pretty typical. I experimented with writing letters, policies, articles. I even had a go at poetry — that was great — and tried image generation but wasn't that impressed (this is certainly an area where AI has really improved). All told, it genuinely was fun and it taught me the rudiments of a sensible prompt. What I didn't understand at the time was what generative AI was actually doing, or what hallucination even meant. I probably believed pretty much everything the tool produced was grounded in some kind of digital reality. When I started exploring areas where the ChatGPT model of the time genuinely struggled — maths and time logic being the main ones — it became clear that AI couldn't do everything it claimed, and it made those claims with complete confidence.
That made me wonder whether I was the problem. I found my way to YouTube, where an entire cottage industry of explainer videos had appeared almost overnight, and I started to understand a little more. My output improved, but crucially the improvement came from how I prompted, checked and validated, rather than from the tool getting better. That basic discipline has never left me. Though I'll say this: the tools have got considerably better as well.
Nobody was telling me whether I was doing this right. No manager had mapped out how AI should fit into my professional practice, and no guidance was offered on what good looked like. It was just my personal interest and motivation. I was reading, watching, experimenting and making my own judgements. This was not the most comfortable position for someone who started from a place of genuine technological uncertainty, and there were moments where I really wondered whether I was pushing into territory I had no business being in.
What made it harder was the noise. LinkedIn, the platform I use most professionally, was producing a completely contradictory picture. On one side, a steady stream of voices treating AI as an existential threat to genuine human work, a shortcut that undermines thinking, a tool for people who have run out of ideas. On the other, an equally insistent set of voices encouraging the full automation of professional life, every friction point smoothed out, every process optimised, efficiency as the only measure of value. Neither position felt honest to what I was actually experiencing. Unfortunately, that situation hasn't changed — the friction still exists and I, like many others, am finding my own path through this mire. There is still room for those of us who are simply curious, moving carefully, and trying to work out what this could genuinely offer rather than what a headline said it should. And to be honest, I'm still a bit wary of handing over too much autonomous power to my AI.
I started looking at my AI as a collaborator rather than simply a tool. Approached that way, the experience changed.
I found that I could do more than I had previously thought possible. The quality of thinking these exchanges drew out of me, the outputs I was producing, the connections I was making between ideas: all of it started improving. I didn't feel like I was compromising my standards; everything still felt genuinely my own work, but better and more fully thought through than I could have managed alone. That is its own kind of answer.
Fast forward to mid-2025. I was still very much a ChatGPT user, but I was beginning to experiment more widely.
On a whim, I set ChatGPT and Claude the same task from opposing positions. ChatGPT had to argue the case against pineapple on pizza. Claude had to argue for it. What struck me wasn't the conclusion either reached. It was the approach each took to get there. ChatGPT framed its argument around scientific evidence and nutritional benefit. Claude argued from the perspective of human choice, taste culture, and culinary tradition. Two different models, the same rhetorical task, and two completely different philosophical starting points.
I pushed further. I copied each response into the other platform and asked both to review the opposing argument. Both acknowledged the strengths of the other side and both maintained their original position. Then I brought Gemini in as a third voice to assess which argument was stronger. It sided with ChatGPT on the basis of the scientific framing. My own preference was Claude. Not because the argument was more rigorous, but because it felt more human. That small experiment, absurd as the topic was, became a genuine turning point. I switched from being primarily a ChatGPT user to buying into the Claude approach. I am still there. Claude is my primary tool, with Perplexity as my research layer when I need it.
Switching to Claude turned out to be the catalyst for thinking about AI differently. Instead of using it primarily to draft content, I started using it as a collaborative partner, a safe place to test thoughts and ideas. I wanted AI to challenge my assumptions and push my thinking. I have to say it was pretty good at it, but these models are trained to be helpful, and that tendency to please left me never quite sure whether I was making genuine inroads. I wanted honest, developmental feedback and, through trial and error, developed an approach that seemed to work.
Honest feedback is like having a critical friend who is confident enough to call you out when you need it, but who also offers helpful suggestions alongside the criticism. It does take a bit of getting used to, and if I'm honest, it can be annoying when you think your output is pretty damned good and AI comes up with reasons why it's not quite there yet. Taking a step back and sitting with the feedback is really powerful. What matters is that I'm in charge of what I choose to accept and act upon, and what I am going to ignore.
By the start of 2026, I had moved on from just prompting AI to challenge me. Amongst other things, I had built some tools I could consistently call up when I wanted Claude to evaluate a piece of work and give me honest feedback. And then came the next eureka moment. I attended a short course led by Darren Coxon on AI-assisted development. It was like the proverbial light being switched on. Could I use AI to build my own website? The simple answer was yes, and it was really quite easy once I got going. I now have a fully working website that acts as a central location to house and access everything I build, develop and write. And the best part is, the more I do, the more the ideas flow.
Along the way I have discovered that different platforms are not as interchangeable as I had thought. What Claude handles naturally within a project can take pages of detailed instruction to reproduce elsewhere. The tools may look similar from the outside, but the working relationship is not.
Different things are now important. Token counts, API interactions, cost management — these are now part of my working vocabulary in a way that would have been completely foreign to me even a year ago. I have moved from being a user of AI tools to someone who is beginning to understand, at least in part, how they work and what that means for how I build with them.
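Cost management, for instance, mostly comes down to simple token arithmetic. The sketch below shows the kind of back-of-envelope calculation I mean; the function name and the per-million-token rates are my own illustrative placeholders, not any provider's actual pricing.

```python
# A back-of-envelope estimate of what one API call costs,
# given its input and output token counts.
# NOTE: the default rates below are made-up placeholders for
# illustration only; check your provider's pricing page.

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_m: float = 3.00,
                  output_rate_per_m: float = 15.00) -> float:
    """Return the estimated cost in dollars for one API call.

    Rates are expressed per million tokens, the convention most
    providers use on their pricing pages. Output tokens are
    typically billed at a higher rate than input tokens.
    """
    cost = (input_tokens / 1_000_000) * input_rate_per_m
    cost += (output_tokens / 1_000_000) * output_rate_per_m
    return round(cost, 6)

# A typical exchange: a 2,000-token prompt and an 800-token reply.
print(estimate_cost(2_000, 800))  # → 0.018
```

Trivial arithmetic, but doing it habitually is what turns "token counts" from jargon into a working budget.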
Is my site professionally perfect? Probably not. I have almost certainly fallen foul of conventions I am not aware of. But dynes-insights.com is entirely mine. The thinking behind it, the frameworks it contains, the tools it hosts. All of it came from a process that started with a free ChatGPT account in November 2022 and a lot of trial and error.
That is what three years of AI practice actually looks like. Not a transformation, exactly. More like a gradual shift in what I believed was possible, and in my own willingness to try.
They say you can't teach an old dog new tricks and that may be true, but if that old dog wants to learn, there is no stopping them.