
AI Made Writing Code Easier — Software Development Didn’t Get Easier


Over the last year, most conversations about AI in software engineering have revolved around speed. People ask whether engineers are now two, five, or ten times faster. The assumption behind that question is that writing code was the main thing slowing software delivery down.

In practice, that rarely matched my experience. Even before AI, most teams I worked with were not waiting on someone to type faster. They were waiting on decisions, coordination, and risk assessment. The work around the code mattered more than the code itself.

AI didn’t remove the bottleneck. It moved it.


We were rarely waiting on code

In a small team, code is a large part of the job. A few engineers understand the whole system, and progress mostly depends on someone sitting down and implementing the feature. But once systems and organizations grow, delivery depends on a chain of human processes. Someone needs to decide the feature is worth doing, multiple teams need to agree on behavior, operational risk has to be evaluated, and someone has to support the system after customers start relying on it.

Because implementation was expensive, it acted as a natural filter. If a feature required weeks of work, someone had to justify it. Discussions were more careful, and priorities were clearer. That friction was often frustrating, but it forced clarity.

AI changed that dynamic. Implementation became cheap enough that the filter weakened. The number of things that could be built suddenly increased, but the organization’s ability to evaluate them did not.


Cheap implementation increases volume

Lowering the cost of implementation does not automatically produce better outcomes. It produces more outcomes. Teams can prototype faster, experiment more, and try more variations of ideas. On paper this sounds like pure productivity, but most organizations are not structured to process a large volume of change.

Before, weak ideas often died early because they were costly. Now they survive longer because they are easy to implement. The result is not necessarily better software — it is more software entering the system. The constraint shifts from “can we build this?” to “should this exist at all?”

This is where much of the real work now lives.


The new work: integration

What I see in practice is not dramatically faster systems, but a higher number of partially correct ones. AI-generated code frequently looks reasonable and works locally. The problems appear when the code interacts with the rest of the system.

Real systems have expectations that are rarely explicit: data contracts, operational behavior, retry logic, monitoring, and ownership boundaries. Software rarely fails because a function was hard to write. It fails because multiple correct components interact in an incorrect way.
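A minimal sketch of that failure mode, with hypothetical names invented for illustration: a service that processes every request it receives, and a client that retries on timeout. Each is correct in isolation, but composed they can charge an order twice, because the retry cannot distinguish a lost reply from a failed charge.

```python
# Hypothetical example: two individually correct components that
# misbehave when combined. Names and numbers are illustrative only.
import random

class PaymentService:
    """Processes every charge it receives -- correct in isolation."""
    def __init__(self):
        self.charges = []

    def charge(self, order_id, amount):
        self.charges.append((order_id, amount))
        # Simulate an unreliable network: the charge succeeds,
        # but the response is sometimes lost on the way back.
        if random.random() < 0.5:
            raise TimeoutError("response lost")
        return "ok"

def charge_with_retry(service, order_id, amount, attempts=3):
    """Retries on timeout -- also correct in isolation. The implicit
    contract it violates: charge() is not idempotent."""
    for _ in range(attempts):
        try:
            return service.charge(order_id, amount)
        except TimeoutError:
            continue
    return "gave up"

random.seed(1)  # deterministic for the example
svc = PaymentService()
charge_with_retry(svc, "order-42", 100)
print(len(svc.charges))  # more than one charge recorded for a single order
```

Neither component contains a bug a reviewer would flag on its own; the defect only exists in the interaction, which is exactly the kind of implicit contract (idempotency, in this case) that generated code tends not to know about.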

AI handles the first 80% of implementation easily. The remaining 20% — understanding how a change behaves in production — remains difficult. Engineers spend less time creating code from scratch and more time validating, adapting, and stabilizing generated work so that it behaves predictably in a larger system.


The review bottleneck

One immediate consequence is that review capacity becomes a constraint. The number of changes increases faster than the organization’s ability to understand them. Teams did not suddenly gain more reviewers, deeper system knowledge, or better operational visibility. They simply gained more code.

As a result, engineers are often less limited by writing code than by reading it. Code review used to involve carefully reasoning about a focused change. Now it frequently involves evaluating large generated modifications whose correctness depends on context not visible in the diff.

Speed increased on the production side, but not on the understanding side. And most software failures originate from misunderstanding, not syntax errors.


AI amplifies existing problems

AI does not introduce entirely new dysfunctions. It amplifies what already exists. If prioritization is weak, more low-value work appears. If ownership is unclear, integration failures multiply. If coordination is slow, conflicts increase.

The surrounding organization — support teams, product processes, operations, training — still operates at human speed. Even if code generation accelerates dramatically, delivery remains constrained by alignment and understanding. The bottleneck simply relocates.


Why experience matters more

AI reduces the effort required to produce code. It does not reduce the effort required to reason about consequences. That changes which skills matter most.

The valuable skill shifts away from implementation speed and toward judgment: recognizing coupling, anticipating operational impact, and deciding when a feature should not yet exist. Maintaining system clarity becomes more important than producing additional code.

Software systems rarely collapse because code was difficult to write. They collapse because their behavior became too complex to reason about. AI increases code abundance, but understanding remains scarce.


What actually changed

AI is genuinely useful. It helps with exploration, scaffolding, and repetitive tasks. I use it regularly and it meaningfully improves parts of the workflow. But its main impact is not replacing engineers or eliminating effort. It changes the type of effort required.

There is less typing and more evaluation, less syntax work and more system reasoning. The teams that benefit most will not be those that generate the most code, but those that maintain a clear understanding of how their systems behave.

Software rarely fails because nobody could implement the solution. It fails because, over time, nobody fully understood the system they had built.

Leandro Maia
Notes on Backend Systems and Software Architecture