AGI Won’t Kill Software Engineering — It Will Expose What Engineering Really Is


Every few years, software engineers are told—confidently—that this time is different.
That the next abstraction, the next framework, the next breakthrough will finally make us obsolete.

AGI is simply the latest version of that story.

The anxiety is understandable. Today’s AI models write code faster than most humans ever could. They refactor, debug, reason, and explain. If you freeze the world at code generation, the conclusion feels obvious: fewer engineers will be needed.

But that framing is deeply misleading.

The uncomfortable truth is this: software engineering has never primarily been about writing code. And AGI, far from eliminating developers, is going to make that fact impossible to ignore.


The AGI Definition Problem (and Why It Matters)

Before we even talk about jobs, we need to address a fundamental issue: no one agrees on what AGI is.

Depending on who you ask, AGI means:

  • A system that can pass any cognitive benchmark
  • A system that can do most economically useful work
  • A system that can do literally anything a human can do
  • Or simply “whatever today’s models can’t yet do”

The goalposts move because intelligence isn’t one thing. It’s a collection of abilities—reasoning, judgment, social awareness, accountability, creativity, long-term planning—each of which matures at a different rate.

By some narrow definitions, we have already crossed the AGI line. If you had shown today’s models to a working engineer in 2015, they would have thought the future had arrived.

But job displacement arguments rely on a much stronger definition: not “can generate code,” but “can replace a human in a messy organization, over time, with consequences.”

That is a very different bar.


Most Companies Are Not Bottlenecked by Code

One of the biggest blind spots in AGI discourse is where software is actually built.

Yes, elite tech companies exist. But a massive portion of the global economy looks nothing like Silicon Valley:

  • Legacy systems (COBOL is not a meme; it still runs in production)
  • Slow release cycles
  • Risk-averse cultures
  • Software treated as “IT,” not a core competency
  • Vague requirements and political constraints

In these environments, code generation is not the limiting factor.

You could give every engineer unlimited access to the best AI models on earth, and you would not see a 100× productivity explosion. Why?

Because the constraints are organizational:

  • unclear ownership
  • misaligned incentives
  • outdated processes
  • fear of breaking production
  • lack of product thinking

AI accelerates output. It does not magically fix dysfunctional systems. In fact, it often amplifies existing problems, producing more artifacts faster without improving decision quality.


Writing Code Is the Easy Part (It Always Was)

This is the part that makes people uncomfortable.

Typing code has never been the hard part of software engineering.
We just pretended it was, because it was visible and measurable.

The hard parts are:

  • deciding what to build
  • discovering that you’re wrong
  • iterating under incomplete information
  • negotiating trade-offs between speed, quality, cost, and risk
  • aligning with stakeholders who disagree with each other
  • maintaining systems long after the excitement is gone

These are not mechanical tasks. They are judgment-heavy, context-dependent, and deeply human.

Even with perfect code generation, someone still needs to:

  • define the problem
  • validate assumptions
  • decide when “good enough” is enough
  • take responsibility when things fail

That responsibility does not disappear just because the code was written by a model.


Taste Is the Scarce Skill Nobody Talks About

One of the most overlooked skills in engineering is taste.

Taste is the ability to:

  • recognize when something feels wrong
  • distinguish elegance from cleverness
  • choose simplicity over feature accumulation
  • know when not to build something

AI is excellent at generating plausible solutions. But plausibility is not quality.
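
As a hypothetical illustration (not drawn from any particular model’s output), consider the kind of Python snippet that reads cleanly and survives a quick demo, yet carries a trap an experienced reviewer would catch:

    # Plausible: short, readable, works in a one-off demo.
    def add_tag(item, tags=[]):
        tags.append(item)
        return tags

    print(add_tag("a"))  # ['a']
    print(add_tag("b"))  # ['a', 'b'] -- the default list is shared across calls

    # The tasteful fix costs one line and removes the trap:
    def add_tag_fixed(item, tags=None):
        tags = [] if tags is None else tags
        tags.append(item)
        return tags

Nothing here requires deep intelligence to fix. It requires having been burned before.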

Quality emerges from experience:

  • having seen systems fail
  • having maintained bad decisions for years
  • having felt the cost of shortcuts

Taste is not about intelligence alone—it is about lived context. And context is expensive to acquire.


The Engineer Role Will Change — That Is Not the Same as Disappearing

It is true that the traditional image of an engineer “punching every key by hand” is fading. But this kind of shift has happened repeatedly:

  • We stopped writing assembly.
  • We stopped managing memory manually.
  • We stopped reimplementing common data structures.

Each abstraction shift reduced mechanical effort and increased the value of systems thinking.

AI is just the next abstraction layer.

The emerging engineer is less typist, more orchestrator:

  • guiding AI tools
  • validating outputs
  • constraining behavior
  • integrating across systems
  • understanding failure modes

This raises expectations. It does not remove the role.
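
As a rough sketch of what that orchestration can look like, here is a minimal Python loop. Everything in it is illustrative: ask_model is a hypothetical stand-in for whatever tool generates the code, and the contract is the part the engineer still owns, because it encodes what “correct” means here.

    # Minimal, hypothetical orchestrator loop (a sketch, not a product API).
    def ask_model(prompt: str) -> str:
        # Stand-in: a real implementation would call a model or tool here.
        return "def solve(xs):\n    return sorted(xs)"

    def contract(solve) -> bool:
        # Human-defined acceptance criteria: edge cases, invariants, taste.
        return (
            solve([]) == []
            and solve([3, 1, 2]) == [1, 2, 3]
            and solve([1, 1]) == [1, 1]  # duplicates must survive
        )

    def orchestrate(prompt: str, attempts: int = 3):
        for _ in range(attempts):
            namespace = {}
            exec(ask_model(prompt), namespace)  # sandbox untrusted code in real life
            candidate = namespace.get("solve")
            if candidate and contract(candidate):
                return candidate  # still subject to human review before merge
        return None  # escalate: the bottleneck is judgment, not generation

The generation call is the cheap line. Writing the contract, and deciding what happens when three attempts fail, is the engineering.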


Accountability Is the Missing Ingredient

One question rarely asked in AGI debates is simple:

Who is responsible when things go wrong?

Software exists inside legal, financial, and social systems. Bugs cause outages. Outages cost money. Money involves liability.

An AI system does not:

  • attend postmortems
  • explain decisions to regulators
  • absorb blame
  • build trust over time

Until AI systems can meaningfully participate in accountability structures—not just generate artifacts—humans remain necessary.

And if AGI ever does reach that level, the conversation will no longer be about developer jobs. It will be about how society itself is organized.


Adoption Is Uneven, Slow, and Messy

Technological capability does not instantly translate into economic reality.

Most organizations:

  • adopt late
  • adopt partially
  • misuse new tools
  • take years to change processes

As long as:

  • banking apps remain fragile
  • airline systems remain archaic
  • enterprise software remains painful
  • “digital transformation” remains a multi-year effort

…the world is not short of software work. It is short of people who can navigate complexity under constraint.


Tools, Not Gods, Will Define the Near Future

Despite the hype, the next decade is unlikely to be about “AI replacing engineers.”

It will be about:

  • engineers who can use AI effectively replacing those who cannot
  • smaller teams shipping better software
  • faster iteration cycles
  • higher expectations for quality and judgment

This is a tooling revolution, not a labor extinction event.


What You Might Still Be Missing

A few additional considerations often overlooked:

  • Maintenance dominates creation: Most software work is about maintaining existing systems, not building new ones. AI helps, but legacy complexity still rules.
  • Trust is built, not generated: Teams trust people, not tools. Trust takes time.
  • Constraints define reality: Compliance, regulation, security, and data ownership do not disappear with better models.
  • Human coordination is the bottleneck: Meetings, alignment, incentives, and communication remain stubbornly human problems.


Finally: AGI Doesn’t End Engineering — It Reveals It

AGI does not threaten software engineers because engineers were never hired to write code.

They were hired to:

  • reduce uncertainty
  • make trade-offs
  • build systems that survive reality

AI makes parts of this easier, faster, and cheaper. It also raises the bar for what “good” looks like.

The role will change. The expectations will rise. The weak abstractions will fall away.

But until organizations themselves become intelligent—and accountable—the demand for great engineers will not shrink.

It will simply become harder to fake.
