What I Learned Teaching Engineers to Use AI
The assumption problem
Engineers are smart. That’s kind of the point. They think in systems, spot connections, draw conclusions quickly, and make good assumptions. These are all excellent qualities when you’re building software.
They work against you when you’re talking to an AI.
Here’s what I noticed when I started working alongside other engineers using AI tools. They’d type a prompt as if the model already understood what they understood. As if it had drawn the same conclusions, noticed the same edge cases, knew the same context. They’d write something like “fix the auth flow” and expect the AI to know which auth flow, what’s wrong with it, and what the constraints are.
The AI doesn’t know any of that. It can’t read your mind, no matter how good the model is.
The curse of expertise
The better the engineer, the worse this problem gets. A senior developer has years of context packed into every decision they make. They’ve internalised patterns to the point where they don’t even think about them consciously. When they say “fix the auth flow,” their brain fills in dozens of implicit requirements automatically.
The AI gets none of that. It gets five words.
This is what surprised me most. I expected junior developers to struggle with AI. They did, but for predictable reasons. The experienced engineers were the ones who really needed the mindset shift. They needed to learn that talking to an AI isn’t like talking to a colleague who shares your context. It’s more like onboarding a new team member who’s incredibly fast but knows nothing about your project.
What actually works
The fix is simpler than you’d think, but it requires discipline.
You need to be specific. Not vaguely specific, properly specific. Tell the AI what you’re working on, what the constraints are, what it should and shouldn’t do. Give it your knowledge. Don’t assume it has it.
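To make that concrete, here’s an illustrative contrast using the earlier “fix the auth flow” example. The specific version is invented for illustration; the project details in it are hypothetical:

```
Vague prompt:
  "Fix the auth flow."

Specific prompt:
  "We have a login flow where users are sometimes logged out after a
  token refresh. The refresh endpoint returns a new access token but
  the client keeps using the old one. Don't change the session storage
  approach. Suggest a fix to the client-side token handling only."
```

The second version hands the AI the context a colleague would already have: what the system is, what’s broken, and what’s off-limits.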
And you need to do this repeatedly. Not just once at the start of a conversation. Every time the context shifts, every time you’re working on something new, you lead with clarity. Engineers who got this right saw their AI output quality jump almost immediately. The ones who kept typing terse, assumption-heavy prompts kept getting mediocre results and blaming the tool.
Why this matters beyond engineering
Here’s the part that really got me thinking. If skilled software developers, people who literally build technology for a living, struggle to use AI effectively, what does that say about everyone else?
It says the skill gap isn’t about intelligence or technical ability. It’s about a specific way of communicating that nobody does naturally.
The good news is that it’s a skill, and like any skill, it can be learned. Once you understand that AI needs to be led, that it needs your context and your specificity, things start working. And this applies just as much to a business owner automating their invoicing as it does to an engineer building a production system.
The real lesson
The thing I took away from teaching engineers wasn’t that AI is hard to use. It’s that everyone, regardless of their background, needs the same foundational skill: the ability to communicate clearly with a machine that has no assumptions, no context, and no ability to read between the lines.
Engineers need it. Non-technical people need it. And once you have it, every AI tool you touch works better.
That’s why I start every engagement here: with the foundation.