
The ceiling is the same. The floor just moved.

Dave Amit
Principal Architect with nearly two decades designing distributed systems and cloud-native platforms. I bring deep systems expertise, hands-on AI-assisted engineering workflows, and a track record of shipping things that actually scale. Currently writing Go and Rust, running on Kubernetes, and exploring what happens when strong architectural judgment meets modern AI tooling.

When ChatGPT launched, I played with it out of curiosity like everyone else. Impressive party trick. Confidently wrong about half the things I actually knew well. I wrote it off as a chatbot — a very fluent one, but still fundamentally a hallucination machine. I wasn’t about to let it near a production codebase.

That was the right call at the time. What’s changed in the last several months — particularly with the latest reasoning models and a handful of strong open-weight releases — is not incremental. The quality, accuracy, and reasoning consistency have moved to a different category. I don’t use these tools experimentally anymore. They’re how I work.

The tool is not the point

There’s an endless debate about which tool is best. Cursor vs Copilot vs OpenCode, one model vs another. I’ve watched it repeat for a year and I think it’s mostly a distraction.

The model is the ceiling. No scaffolding makes a weak model reason better or catch an architectural mistake it wouldn’t otherwise catch.

But the tool determines how close you get to that ceiling day-to-day. A great model in a bare chat window, with no file access, no shell, no memory of what you changed three files ago, is genuinely hobbled. The same model with a proper agentic loop around it is a different experience. Not because the model got smarter, but because it stopped losing context and stopped asking you to repeat yourself.
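That "agentic loop" is less mysterious than it sounds. In miniature, it is just: ask the model, and if it requests a tool, run the tool and feed the result back into the transcript before asking again. A sketch, with a scripted stand-in where a real tool would call an LLM API, and made-up file contents:

```python
# A minimal agentic loop. The "model" is a scripted stand-in for an LLM
# call; the tool and file contents are invented for illustration.

def scripted_model(transcript):
    """Stand-in for an LLM call: decides the next step from the transcript."""
    if not any(m["role"] == "tool" for m in transcript):
        # No tool output seen yet, so the model asks to read the file.
        return {"tool": "read_file", "arg": "config.yaml"}
    return {"answer": "config.yaml sets retries to 3"}

def run_agent(model, tools, task):
    transcript = [{"role": "user", "content": task}]
    for _ in range(10):  # hard cap so a confused model can't loop forever
        step = model(transcript)
        if "tool" in step:
            result = tools[step["tool"]](step["arg"])
            # Tool output goes back into context -- the thing a bare
            # chat window never gets to do.
            transcript.append({"role": "tool", "content": result})
        else:
            return step["answer"]
    raise RuntimeError("agent did not converge")

tools = {"read_file": lambda path: f"contents of {path}: retries: 3"}
print(run_agent(scripted_model, tools, "What is the retry count?"))
```

Everything a real agentic tool adds — shell access, edit application, error recovery — is elaboration on this loop, not a different mechanism.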

So: pick the tool you’re comfortable with, stay current enough to know when something genuinely better exists, and stop treating it like a football rivalry. The thing you’re actually renting is the model.

The “insider advantage” is the wrong question

People assume the model company’s own tool must have some secret edge — a private API, undocumented capabilities. It doesn’t work that way. The model sees a system prompt, some context, and tool definitions. The same API, the same tokens, available to everyone.
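Concretely, a chat-completion request is just structured data. The field names below are illustrative, not any one vendor's exact schema, but the three ingredients are the same everywhere:

```python
import json

# The general shape of what a model actually receives: a system prompt,
# prior context, and tool definitions. Names are illustrative only.
request = {
    "model": "some-model",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Why does this test fail?"},
    ],
    "tools": [
        {
            "name": "read_file",
            "description": "Read a file from the workspace",
            "parameters": {"path": {"type": "string"}},
        }
    ],
}
print(json.dumps(request, indent=2))
```

There is nowhere in that payload for a secret first-party channel to hide; a third-party tool can send exactly the same thing.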

Anthropic publishes their prompting best practices openly. Not out of generosity — because wide adoption across every tool benefits them. The protocol is deliberately open. The training process is not. That’s where the actual moat is: years of data curation, RLHF choices, alignment research that compounds with each model generation. You can read the docs. You can’t replicate what went into the weights.

The model quality gap between labs will close over time as approaches get published, reverse-engineered, or independently rediscovered. But right now the gap is real, and it lives in the model, not the wrapper around it.

This is not going to look like Windows or iOS

The instinct is to pattern-match to previous platform transitions. One company locks up the OS, another locks up the device, and the market consolidates around those chokepoints for twenty years.

AI doesn’t have those chokepoints. The hardware is commoditizing. There’s no distribution lock-in equivalent to the App Store. The gap between frontier labs has closed faster than anyone predicted — Google, Meta’s open releases, Chinese labs — all credible in ways they weren’t eighteen months ago. The likely outcome isn’t one company owning everything. It’s more like broadband: differentiated quality, real competition, no monopoly.

The bubble will burst. The technology won’t.

There’s a version of the current AI moment that ends badly — valuations collapse, companies fold, the hype cycle completes its arc. That will probably happen in some form. It has happened before.

But look at what actually survived the dot-com crash. The NASDAQ lost 78% of its value between 2000 and 2002. Pets.com died. Webvan died. The financial speculation around internet companies was genuinely delusional. And then — the internet ate the world anyway. Amazon, Google, eventually social media, e-commerce, streaming, remote work. The bubble was about the stocks. The technology was not the bubble. It was just early.

The same distinction applies here. The current AI investment frenzy probably has a correction coming. Some of these companies are worth a fraction of their current valuations. But the underlying capability — models that can reason, write code, synthesize information, operate tools autonomously — that’s not going away. The question isn’t whether AI stays. It’s which specific bets survive the shakeout.

Today it’s hard to imagine working without being online. In ten years, “I did this without AI” will sound like “I wrote this without a computer.”

Context is everything. Without it, you’re just feeding the hallucination machine.

Here’s the thing I tell everyone who says AI wrote bad code for them: what did you give it?

A model with no context is guessing. It doesn’t know your domain constraints, your existing architecture decisions, your team’s conventions, what you tried last week that didn’t work. It fills that gap with plausible-sounding output that is disconnected from your actual problem. That’s not a model failure — that’s garbage in, garbage out.

Asking a general-purpose chatbot to write production code with no context is like hiring a contractor and handing them a single-sentence brief. The output quality is entirely a function of what you put in.

What’s changed isn’t just model capability. It’s the infrastructure for giving models real context: agentic tools that read your actual codebase, files, recent changes, error outputs. When the model knows what it’s working with, the output changes fundamentally. The hallucinations don’t disappear — but they become much rarer and much easier to catch because the model’s reasoning is grounded in something real.

This is why context engineering is the most undervalued skill right now. Not prompt engineering in the superficial sense — magic words that produce better output. Actual, deliberate investment in giving the model an accurate picture of the problem — the right files, the right background, the right constraints. The model’s job is to reason. Your job is to make sure it’s reasoning about the right thing.
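Context engineering in miniature looks something like this: instead of pasting a bare question, assemble the constraints, files, and failure output the model needs before it sees the task. The file names, error text, and section layout here are made up for illustration:

```python
# Assemble a grounded prompt from the pieces the model needs.
# All names and contents below are invented for illustration.

def build_context(question, files, error_output, constraints):
    parts = [f"## Constraint\n{c}" for c in constraints]
    for name, body in files.items():
        parts.append(f"## File: {name}\n{body}")
    parts.append(f"## Failing output\n{error_output}")
    parts.append(f"## Task\n{question}")  # the question comes last,
    # after the model has everything it needs to reason about it
    return "\n\n".join(parts)

prompt = build_context(
    question="Fix the timeout handling in the retry loop.",
    files={"retry.go": "func retry(...) { /* ... */ }"},
    error_output="TestRetry failed: context deadline exceeded",
    constraints=["Target Go 1.22; no new dependencies."],
)
print(prompt)
```

Agentic tools automate this assembly by reading the workspace themselves; the principle is identical when you do it by hand.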

The reset that actually matters

I’ve been a principal architect for long enough to remember when the bottleneck was always the same thing: not enough people who could translate a clear idea into working, scalable software. That scarcity produced the compensation explosion of the early 2020s. It also produced a culture where implementation skill was so valuable that it could mask weak thinking about what to build.

That scarcity is going away.

Not because engineers become worthless. Because the floor on what an average engineer can ship in a day has moved dramatically upward. A mediocre engineer with good AI tooling now outships a strong engineer without it. That changes the math on headcount, on salaries, on what a team of five can accomplish.

The CAD analogy is overused but accurate: CAD didn’t eliminate architects, it eliminated drafters. The skill that got compressed wasn’t design — it was production of the artifact. The people who understood what should be built, and why, became more powerful. The people whose entire value was in knowing how to draw the lines got a harder decade.

The same thing is happening here. The “how to write the code” layer is getting cheaper. What’s left:

  • Knowing which problem is worth solving, which requires understanding the domain well enough to have an opinion
  • Systems thinking at the level where you’re reasoning about failure modes before they exist
  • Taste — recognizing a bad solution quickly, without having to build it first
  • Judgment under genuine uncertainty, where there’s no right answer to look up

These were always the scarce skills. They were just bundled with implementation because you needed both to ship. AI unbundles them. Implementation becomes cheap. Judgment doesn’t. I wrote about designing modular monoliths in Go recently — that kind of architectural thinking is exactly what AI can’t replace.

The part no one wants to say

A lot of people built their entire career on the implementation layer. Bootcamps, years of LeetCode prep, grinding toward a FAANG offer. That was a rational bet at the time. It is a worse bet now, and it will be an even worse bet in three years.

The transition is real and it won’t be painless. I don’t have a clean answer for it. But the honest thing is to name it rather than pretend the shift is purely additive.

The other side: when execution is cheap, bad ideas execute faster too. The ability to recognize which ideas are worth pursuing gets more valuable as the cost of pursuing bad ones drops. Clear thinking, once a nice-to-have on top of execution skill, becomes the thing itself.

What this looks like when it settles

Not one dominant platform. Not another Microsoft. More likely: a generation of small, fast builders who can ship things that previously needed a team of ten and a year of runway. The premium shifts from skill at writing code toward vision and judgment. Average output goes up. Average noise goes up too.

The people who do well in that environment think clearly about hard problems, make decisions with incomplete information, and know when to trust the tool and when to override it.

AI is a multiplier. If your thinking is fuzzy, it produces fuzzy output faster. If your thinking is sharp, it produces sharp output faster. The tool doesn’t change what you’re bringing to it.

That’s the only thing worth optimizing.

