The era of knowing where to put that curly brace is over. So is the time when interviewers played trivia games with people’s livelihoods. The gatekeepers have lost the gate. AI did not politely disrupt software development: it detonated it from orbit. You can now ship code without knowing how to spell the language you are using. You can ask for an API endpoint and get one in ten seconds. You can describe a feature in English and watch a machine generate files like a vending machine spitting out candy bars.
Are we standing at a skill cliff where we aren’t producing engineers any faster, only operators who know how to push buttons on a vending machine?
The Skill Formation paper on arXiv (2601.20245) put numbers to what many of us were already thinking: AI-dependent users performed meaningfully worse at debugging, conceptual reasoning, and mental simulation of systems. They showed up to the fight with nothing but prompts and an empty toolbox.
"When learners rely on generative tools to complete unfamiliar tasks without deeply engaging with the underlying concepts, the very capacities required to supervise and correct automated output become compromised. The study found that AI use did not create clear productivity gains on average, and yet it substantially eroded the ability to reason about and fix the behavior of the code produced."
The Vending Machine Model
I use prompt engineering every single day as part of how I build, test, and think through systems. It’s become a normal part of my workflow. But what I keep running into, over and over again, is the same boundary: AI can accelerate how you express an idea, but it cannot supply the idea for you.
There’s a temptation to believe that if you just word the prompt a little better, add a bit more context, or give it “one more try,” the model will eventually bridge the gap in your own understanding. It won’t. LLMs are probabilistic, non-deterministic systems. They are incredibly good at predicting plausible next steps based on patterns, but they have no grounding in why something works, how systems actually behave under specific workloads, or what tradeoffs matter in the real world. At least not yet.
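To make the “plausible next steps” point concrete, here is a toy next-token sampler (my own illustrative sketch, nothing like a real model’s implementation): it picks each next word purely from observed frequencies, so it can produce several fluent-looking continuations while having no notion of whether any of them is true.

```python
import random

# Toy "language model": observed frequencies of which word follows which.
# There is no representation of truth here, only of what tends to co-occur.
NEXT_WORD = {
    "the": {"server": 0.6, "cache": 0.4},
    "server": {"crashed": 0.5, "restarted": 0.5},
    "cache": {"expired": 0.7, "grew": 0.3},
}

def sample_next(word, rng):
    """Draw one next word in proportion to its observed frequency."""
    choices, weights = zip(*NEXT_WORD[word].items())
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(start, steps, seed):
    """Sample a short continuation; a given seed is reproducible,
    but different seeds can wander down different paths."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        if words[-1] not in NEXT_WORD:
            break
        words.append(sample_next(words[-1], rng))
    return " ".join(words)

# Each draw is "plausible" by frequency -- none is checked for correctness.
print(generate("the", 2, seed=1))
print(generate("the", 2, seed=2))
```

Every output the sampler produces is statistically reasonable, and that is the whole problem: fluency and correctness are generated by entirely different processes, and this machinery only has the first one.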
This is where Operators and Engineers split. Engineers use the vending machine to move faster, but they can still simulate the failure in their head when the machine stops. Operators just wait for a new prompt to save them. The vending machine generation is here, and business is about to feel it. You can see it in meetings: someone demos a feature built in hours, and everyone claps because it looks polished. Then three weeks later, the integrations start failing. Performance craters and nobody knows why. The person who built it stares at their Claude history like an instruction manual for a device they do not actually own. They never learned to think in systems: they learned to request outcomes.
Teaching To The Prompt
In the early 2000s, NCLB (No Child Left Behind) was the law of the land in US education. It was designed to close the "achievement gap" by holding schools accountable through standardized testing. The goal was noble: make sure every kid hit a baseline level of proficiency. But the execution became a cautionary tale for any industry, including software, that tries to automate success.
The primary critique of NCLB was that it turned schools into "test-prep factories." Because federal funding was tied to test scores, teachers stopped teaching subjects deeply and started teaching "to the test." Students got very good at picking the right bubble on a Scantron, but they lost the ability to think critically. The system rewarded the appearance of understanding rather than the presence of it. Success became about producing the correct output, not developing the reasoning that leads to it.
Districts would even lower the “passing” bar just to show progress. Everyone could point to improved scores, but the underlying literacy didn’t actually change. Proficiency became a number that looked good in a report while masking the fact that real comprehension had not improved.
Subjects that weren’t tested, like music, art, or deep history, were slowly pushed aside to make more room for test preparation. The educational experience narrowed to whatever could be measured, and everything else that built context, creativity, and deeper thinking was treated as expendable.
The same pattern is starting to appear in software. AI has become our standardized test. If the code compiles and the feature works in a demo, it looks like success. Cue the applause. But just like the students who could pass exams without being able to read a complex passage, we are seeing builders who can ship features but struggle to read logs when something breaks.
Instead of learning how memory management works or how a database handles concurrent connections, people are learning how to prompt. They are learning to “teach to the test” of getting the model to produce a working snippet, without developing the understanding of what happens between the input and the output.
What disappears in this process is the "middle layer" of understanding: the messy space between the idea and the result where real engineering intuition is built. The vending machine produces an answer, but the mental model that explains why the answer works never forms.
The Pain Gap
The reason critics hated NCLB is that it removed the struggle required for true mastery. Real learning happens in the frustration of a difficult problem. When you take that away, you create a fragile workforce.
In the old world, the "pain" of a bug that took three days to solve was actually a three-day masterclass in systems architecture. You had to interrogate every layer of the stack to find the culprit. You emerged from that "pain" victorious.
If the AI fixes it, you learn nothing. If the AI can't fix it, you are stranded. You have "passed" every module, but you understand none of the system.
That is the cliff we are walking toward. An industry full of people wearing pilot uniforms who have only ever flown in a simulator, now standing in a real cockpit at 30,000 feet, staring at instruments they were never taught to read. This loss of understanding happens because AI is incredible at producing output but terrible at teaching intuition.
This is a business survival problem. Businesses fail when systems cannot be understood, maintained, or repaired. The industry is celebrating speed while quietly amputating understanding.
Execution used to mean you knew what you were building. Now execution means you know how to ask the system. That is not the same thing. The most valuable people in a company aren't the fastest coders: they are the ones who can diagnose failures and explain what systems do and why.
Execution is being reimagined whether we like it or not. The question is whether we continue to produce operators who just push buttons, or engineers who use the buttons to run systems they actually understand.