AI Is Writing More Code. Why Engineering Is Becoming Harder

Updated: 26 Mar, 2026 · 11 min read
Mark Avdi, CTO

AI is making code easier to produce.

That part is real. Developers can now generate boilerplate, tests, refactors, documentation, infrastructure definitions, and whole feature skeletons in seconds. For many teams, this reduces blank-page friction and speeds up local execution. It helps people move faster through obvious implementation work and lowers the effort needed to get from idea to first draft.

But software engineering is not the same thing as code generation.

That distinction is where many conversations on AI and developer productivity start to fall apart. Writing code is only one layer of engineering. The harder part has always been everything around the code: architecture, testing, deployment, security, observability, integration, ownership, maintainability, and operational resilience. As AI reduces the cost of producing code, those surrounding disciplines become even more important.

That is why the statement feels true:

AI is writing more code, but engineering is becoming harder.

Not because AI is inherently harmful. Not because engineers are suddenly less capable. But because code was never the full problem. It was only the most visible part of the system.

For companies investing in cloud engineering, enterprise modernization, platform engineering, cloud security, and AI-assisted software delivery, this shift matters a lot. The teams that benefit most from AI are not the ones producing the most code. They are the ones with the strongest engineering foundations.

Code is getting cheaper. Complexity is not.

When a resource becomes cheaper, organizations tend to consume more of it.

AI is doing exactly that to code. If a team can generate feature scaffolding, infrastructure templates, migration scripts, and helper functions at a fraction of the previous effort, it will naturally create more software. More experiments. More integrations. More services. More automation. More internal tools. More surface area.

That sounds like acceleration, and in one sense it is. But there is a second-order effect.

When code gets cheaper, complexity management gets more expensive.

This is the part many teams underestimate. The hard problem is no longer whether a developer can write the next service, workflow, or deployment file quickly. The hard problem is whether that new code should exist at all, how it fits the wider architecture, who owns it, how it is secured, how it is observed, what dependencies it introduces, and what operational burden it adds over time.

In other words, AI compresses implementation effort while increasing the premium on judgement.

That is especially important in cloud architecture and software delivery. In modern systems, shipping code is rarely the limiting factor. The limiting factor is making sure that code behaves well in production, integrates safely with other systems, and does not increase fragility faster than the organization can absorb it.

For engineering leaders, that is the real shift. AI removes friction from code creation, but it does not remove the consequences of bad design.

More code does not automatically mean more progress

One of the easiest traps in AI-assisted software development is confusing output with progress.

Code volume is visible. It looks measurable. It can feel satisfying. But experienced teams know that more code often means more review effort, more opportunities for defects, more duplicated logic, more dependencies, and more long-term maintenance work.

That concern is backed by real data. GitClear's 2025 research, based on 211 million changed lines of code from 2020 to 2024, found a rise in duplicate code and a decline in refactoring-related changes. The research notes that code classified as copy/pasted rose while refactoring-oriented moved lines fell sharply over the same period. That does not prove AI is bad. It does suggest that faster code generation can lead to more code accumulation without an equal increase in structural cleanup.
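
GitClear's exact classification method is proprietary, so the sketch below is only a toy illustration of the underlying idea: normalize source lines, hash every sliding window of lines, and flag windows that occur in more than one place. The function name, window size, and sample snippet are all invented for illustration.

```python
import hashlib
from collections import defaultdict

def duplicate_blocks(source: str, window: int = 4) -> dict:
    """Flag repeated runs of `window` normalized lines in a source file."""
    # Normalize: strip indentation and drop blank lines.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    seen = defaultdict(list)  # window hash -> starting indices
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window]).encode()
        seen[hashlib.sha1(chunk).hexdigest()].append(i)
    # Keep only windows that appear in more than one place.
    return {h: idxs for h, idxs in seen.items() if len(idxs) > 1}

# Two handlers with an identical four-line body -- classic copy/paste.
snippet = """
def load_user(key):
    row = db.fetch(key)
    if row is None:
        raise KeyError(key)
    return row

def load_order(key):
    row = db.fetch(key)
    if row is None:
        raise KeyError(key)
    return row
"""
print(len(duplicate_blocks(snippet)))  # -> 1 (the shared four-line body)
```

Real tools add token-level normalization, minimum-size thresholds, and cross-file indexing, but even this crude signal shows how quickly duplication becomes measurable once code volume grows.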

That is a serious engineering signal.

A team can feel more productive in the short term while quietly making the system harder to evolve in the long term. The result is a familiar pattern. Delivery feels faster at first, then quality drifts, architecture gets noisier, review becomes heavier, incidents rise, and engineering leaders start asking why velocity feels worse despite all the new tooling.

The answer is often simple: the organization improved code production without improving the system that receives the code.

AI helps local productivity. Engineering is a system problem.

This is where some of the strongest current research becomes useful.

DORA's 2024 findings reported that more than 75% of respondents relied on AI for at least one daily professional responsibility, with code writing among the most common use cases. The same report also linked a 25% increase in AI adoption to measurable improvements in documentation quality, code quality, and code review speed.

Those findings matter. They show that AI can clearly improve local developer workflows.

But DORA's 2025 report adds the more important point. AI acts as an amplifier of an organization's existing strengths and weaknesses. The greatest return does not come from the tool alone. It comes from the underlying system around the tool.

That is the core engineering lesson.

AI can make an individual developer faster. It does not automatically make the software delivery system healthier. A team still depends on architecture, testing strategy, deployment patterns, incident response, security controls, and clarity of ownership. If those areas are weak, faster code generation can simply push more pressure into the rest of the lifecycle.

This is why many organizations feel an odd tension right now. Developers genuinely feel faster. But leadership often feels more exposed. Both perceptions can be true at the same time.

The AI velocity paradox is already visible

Harness describes this problem well as the AI Velocity Paradox.

Their 2025 research says that AI-assisted coding is accelerating development at the top of the funnel, but downstream processes such as testing, deployment, security, and compliance have not matured at the same pace. In their survey, 63% of organizations said developers are shipping code to production faster. At the same time, the report warns that under-automated downstream processes are creating a dangerous gap between coding speed and delivery reliability.

That framing is useful because it explains why teams can feel faster without actually becoming more effective.

If AI speeds up code generation but environments remain inconsistent, CI/CD is fragile, security checks are manual, and observability is shallow, then the organization is not truly accelerating. It is simply moving the bottleneck. The saved time at the point of coding gets paid for later in QA, operations, security review, incident management, and rework.

This matters for any company serious about enterprise software delivery or cloud modernization. Speed is only valuable if the system can convert it into dependable outcomes. Otherwise, the speed gain becomes a form of hidden debt.

Platform engineering matters more, not less

One of the clearest implications of AI-driven development is that platform engineering becomes more strategic.

If developers can create code faster than before, then the quality of the paved road matters much more. The organization needs internal platforms, templates, golden paths, reusable infrastructure, secure defaults, and consistent deployment workflows that can absorb higher throughput without causing chaos.

DORA's 2025 AI Capabilities Model reports that 90% of organizations have adopted internal platforms and 76% now have dedicated platform teams. DORA's platform engineering guidance goes even further. It argues that a high-quality internal platform provides the automated, standardized, and secure pathways needed to turn AI's potential into systemic improvements. It also explicitly says that individual productivity gains from AI are often absorbed by downstream bottlenecks in deployment and testing processes.

That is exactly the point.

AI does not reduce the need for a good platform. It increases it.

A weak platform turns AI into a multiplier of inconsistency. A strong platform turns AI into leverage. The difference is huge.

For modern cloud engineering services, this is one of the most important strategic conversations to have with clients. The value is not just in helping teams write software faster. The value is in creating a delivery environment where speed does not destroy reliability.

Architecture becomes more important as implementation becomes easier

There is a broader shift happening here.

For years, engineering culture often rewarded implementation speed as the core marker of value. The best engineers were often seen as the people who could personally move code the fastest, solve the hardest technical issue alone, or out-execute everyone else in a short delivery window.

AI changes that weighting.

Implementation is still important, but it becomes less scarce when machines can generate much of the first draft. What becomes scarcer instead is architectural judgement.

That means questions like these move closer to the center of engineering value:

  • Should this be a separate service at all?
  • Is this the right integration pattern?
  • Does this architecture reduce or increase cognitive load?
  • Is the data ownership model clear?
  • What happens when this dependency fails?
  • Do we have the telemetry to support this in production?
  • Is this secure by default?
  • Will this be cheaper to run, or just easier to build?

Those are not code-generation questions. They are engineering questions.

For firms like Westpoint, that shift aligns closely with real market demand. Clients do not simply need more code. They need cloud systems that are well-structured, production-ready, secure, observable, and maintainable. As AI lowers the cost of implementation, senior engineer-led delivery, cloud architecture, and platform design become even more commercially valuable.

Security does not get easier because code arrives faster

Security is one of the strongest examples of engineering getting harder in the AI era.

The Secure Software Development Framework from NIST remains relevant here because it reinforces a simple principle: software still needs disciplined security practices across the lifecycle. AI-generated code does not remove that requirement. If anything, it raises the stakes, because more code can mean more opportunities for insecure defaults, dependency issues, exposed secrets, and poorly understood behavior.

At the same time, OWASP's Top 10 for Large Language Model Applications highlights risks such as prompt injection and insecure output handling. Those are not just model risks. They become engineering workflow risks when AI tools are embedded into development pipelines, internal platforms, or application logic.

This changes the delivery equation.

Security can no longer be treated as a final review stage after code is written. It has to be built into the architecture, the platform, and the deployment path. That includes identity and access design, secrets management, environment isolation, policy enforcement, dependency scanning, runtime controls, and review discipline.
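
One concrete way to move security into the delivery path is to run automated secret detection on every change before it merges. The sketch below is deliberately minimal and not a substitute for dedicated scanners such as gitleaks or TruffleHog; the patterns, function name, and sample diff are illustrative only (AKIA is the documented AWS access key ID prefix).

```python
import re

# Illustrative patterns only -- real scanners ship far broader rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"(?i)\bpassword\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_for_secrets(text: str) -> list:
    """Return (rule_name, line_number) pairs for every suspicious line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

diff = 'db_url = "postgres://app@host/db"\npassword = "hunter2hunter2"\n'
print(scan_for_secrets(diff))  # -> [('generic_password', 2)]
```

Wired into CI as a blocking check, this kind of gate scales with code volume automatically, which is exactly what manual security review cannot do.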

For organizations operating in AWS or Azure environments, this is where cloud security becomes inseparable from software delivery. Faster code generation with weak governance is not a productivity story. It is an exposure story.

Code quality is not the same as system quality

There is an easy mistake to make in this conversation.

If AI can improve readability, maintainability, or approval rates for individual code submissions, then it is tempting to assume the broader quality problem is solved. GitHub's Copilot research does show positive signals here. One study found code submissions created with Copilot scored better on readability, reliability, maintainability, and conciseness in structured review settings.

That is useful evidence. It shows AI can absolutely help produce better code artifacts.

But system quality is not the same as code quality.

A readable service can still be architecturally unnecessary. A maintainable function can still sit inside a delivery workflow that is brittle. A clean pull request can still add a dependency chain that increases operational risk. Well-structured code is valuable, but engineering success depends on how code behaves inside a larger socio-technical system.

That is why the real metrics remain bigger than the artifact itself:

  • lead time for dependable change
  • deployment reliability
  • change failure rate
  • recoverability
  • security posture
  • observability depth
  • clarity of ownership
  • maintainability over time
  • operational cost
  • developer cognitive load

This is the level where DevOps, platform engineering, cloud architecture, and software delivery consulting have real leverage. AI helps with production of code. It does not eliminate the need to design healthy systems around that code.
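
Several of these metrics fall straight out of data most teams already collect. As a sketch, assuming a hypothetical list of deployment records (merge time, deploy time, whether the change triggered an incident), change failure rate and lead time take only a few lines:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical records: (merged_at, deployed_at, caused_incident)
deployments = [
    (datetime(2026, 3, 1, 9),  datetime(2026, 3, 1, 15), False),
    (datetime(2026, 3, 2, 10), datetime(2026, 3, 3, 10), True),
    (datetime(2026, 3, 4, 8),  datetime(2026, 3, 4, 12), False),
    (datetime(2026, 3, 5, 9),  datetime(2026, 3, 5, 11), False),
]

def change_failure_rate(records) -> float:
    """Share of deployments that triggered a production incident."""
    return sum(1 for _, _, failed in records if failed) / len(records)

def median_lead_time(records) -> timedelta:
    """Median time from merge to production deploy."""
    return median(deployed - merged for merged, deployed, _ in records)

print(change_failure_rate(deployments))  # -> 0.25
print(median_lead_time(deployments))     # -> 5:00:00 (median of 2h, 4h, 6h, 24h)
```

The point is not the arithmetic but the framing: these numbers describe the delivery system, not any individual artifact, so they keep measuring the right thing even as AI inflates code output.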

Why this matters for enterprise modernization

This topic is especially relevant for organizations in the middle of enterprise modernization.

Modernization is often framed as a migration problem or a tooling problem. Move to the cloud. Improve automation. Adopt AI. Modernize the stack. But real modernization is about operating model quality. It is about whether the organization can change systems safely, repeatedly, and with enough control to support business growth.

AI changes the speed of one layer of that process. It does not change the fundamentals.

If anything, it makes the weaknesses of legacy delivery models more visible. Manual approvals, fragmented tooling, inconsistent environments, weak observability, and siloed ownership become more painful when code creation speeds up. Teams hit those limits faster. The friction is no longer hidden behind slow implementation. It becomes obvious.

That creates a practical challenge and a strategic opportunity.

The practical challenge is that organizations now need to strengthen architecture, delivery, and platform quality sooner than before. The strategic opportunity is that firms with strong cloud engineering, cloud security, platform engineering, and software modernization expertise can help clients convert AI enthusiasm into real operational capability.

What engineering leaders should do next

The answer is not to slow down AI adoption.

AI is already part of modern software delivery, and the upside is too important to ignore. The real question is how organizations respond intelligently.

A few priorities stand out.

1. Strengthen architectural decision-making

As implementation gets cheaper, architecture becomes more valuable. Teams need stronger judgement on service boundaries, integration patterns, state management, dependency decisions, and ownership models.

2. Invest in platform engineering

A mature internal platform is now one of the best ways to convert AI-assisted development into safe and scalable delivery. Standardization, reusable workflows, secure defaults, and golden paths matter more than ever.

3. Improve testing and deployment maturity

If downstream delivery remains brittle, AI speed will simply create more pressure. Teams need stronger CI/CD, more consistent environments, better resilience testing, and clearer release controls.

4. Move security further into the platform

Security should be embedded into infrastructure, access design, workflows, and policy. It should not rely on last-minute review after the code is already moving.

5. Increase observability and feedback loops

Faster change requires stronger visibility. Teams need deeper telemetry, clearer diagnostics, and faster feedback from production behavior.
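
At its simplest, deeper telemetry means every meaningful operation emits a structured, machine-parseable event with timing and outcome. The stdlib-only sketch below illustrates the principle; production systems would typically use OpenTelemetry or a vendor SDK, and the operation and field names here are invented.

```python
import json
import time
from contextlib import contextmanager

@contextmanager
def traced(operation: str, **fields):
    """Emit one structured JSON event per operation, with duration and outcome."""
    start = time.monotonic()
    event = {"operation": operation, **fields}
    try:
        yield event
        event["status"] = "ok"
    except Exception as exc:
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
        print(json.dumps(event))  # one parseable line per operation

with traced("load_profile", user_id="u-123") as ev:
    time.sleep(0.01)  # stand-in for real work
```

Because each event is a single JSON line, the same output feeds dashboards, alerts, and incident diagnostics without any change to application logic.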

6. Measure outcomes, not output volume

More code is not the KPI. Better outcomes are. Engineering leaders should focus on stability, maintainability, recovery, quality, and business-aligned delivery performance.

These are not abstract best practices. They are the practical response to the reality that AI has changed one layer of engineering economics without changing the full complexity of production systems.

Conclusion

So yes, AI is writing more code.

But that does not mean software engineering is becoming easier.

In many ways, it is becoming harder, because the easy part is getting easier while the hard part is becoming more exposed. AI reduces the friction of implementation. It does not reduce the need for strong architecture, disciplined software delivery, platform engineering, cloud security, observability, and senior technical judgement.

That is the real story.

The organizations that win in this next phase will not be the ones that generate the most code. They will be the ones that can turn faster code generation into stable, secure, maintainable, production-ready systems.

That is why engineering feels harder.

Not because AI failed, but because it revealed what engineering was really about all along.

Where Westpoint fits

At Westpoint, this is exactly where modern cloud engineering creates value.

The challenge is not just to help teams move faster. It is to help them build cloud architecture, software delivery systems, and platform foundations that allow speed without losing control. That includes production-ready AWS and Azure delivery, platform engineering, secure cloud design, observability, and modernization strategies built for real operating environments, not slideware.

As AI continues to lower the cost of writing code, that kind of engineering discipline only becomes more valuable.

Frequently asked questions

Does AI make cloud engineering easier?

AI is accelerating development workflows, but cloud engineering still depends on strong architecture, automation, and operational discipline to ensure systems remain reliable and scalable.

Why does AI increase the need for platform engineering?

AI increases the need for platform engineering by requiring standardized environments, reusable infrastructure, and consistent deployment pipelines to manage higher volumes of code changes.

What are the risks of AI-assisted software development?

Without proper review and architectural control, AI-assisted software development can lead to duplicated logic, inconsistent patterns, and increased long-term maintenance effort.

How does AI affect DevOps?

AI speeds up code generation, which puts more pressure on DevOps practices. Teams need robust CI/CD pipelines, automated testing, and stable environments to maintain delivery quality.

Why does cloud security matter more in the AI era?

As more code is generated and deployed faster, cloud security becomes essential to prevent vulnerabilities, manage access, and ensure compliance across distributed systems.
