The truth is that most coders are not engineers, even a lot of staff engineers and CTOs out there.
They don’t understand how errors compound, and they have no idea of statistical quality control, which, while not directly applicable as-is to software engineering, is an important discipline for an engineer to understand in order to develop a certain intuition.
Part of the problem is that the academic world in the 90s decided that we needed more computing science and less engineering, because this was in the professional and financial interests of the Academic Industrial Complex.
Can you develop on the last paragraph?
I think this is a very insightful way of looking at things. My only concern is that reviewing code is very difficult, time-consuming, and (at least for me) not enjoyable. How do we effectively review at the pace at which AI agents are able to spit out code? How do we internalize what is actually happening instead of just giving it a surface-level look and approving with a "looks good to me"? Without the work of building the system yourself, how do you create the contextual knowledge that AI might struggle with?
My experience so far with agents is less about reviewing code and more about reviewing results. In that respect, my work with coding agents draws far more on my senior skill set than on my coding skill set.
I’m always questioning the agents: how do you solve for X? Can you show me the results of Y? What were your pain points during the last session? Coding with agents requires a lot more observability than coding without them. You should instrument not only the software, but also the pipeline and even the code itself. By the latter, I mean you should constantly run any static analysis tools you can get your hands on: what cyclomatic complexity does your code have? Which modules are no longer referenced? A healthy code base is more important than ever.
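To give an idea of how cheap this kind of check can be, here is a minimal sketch of a rough McCabe-style cyclomatic complexity counter using only Python's standard ast module (the scoring is simplified; real tools count more node types and handle chained boolean operators more carefully):

```python
import ast

# Rough McCabe-style score: 1 + the number of branch points in the function.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Map each function name in `source` to a rough complexity score."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

code = """
def flat(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(cyclomatic_complexity(code))  # {'flat': 1, 'branchy': 4}
```

Wiring something like this into the pipeline means the agent (and you) get a complexity signal on every change, not just when someone remembers to look.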
Also, I started building a lot more tools into my pipeline. Tools for refactoring, collecting, querying, indexing… Not only do those tools help me understand a growing code base, they also help the agents, making their work more predictable and less costly. The funny thing is, a lot of those tools already exist… Remember ctags and lint? Fantastic tools for an agent (while waiting for more modern LSP integration).
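As a concrete example of the "which modules are no longer referenced?" kind of indexing tool, here is a crude sketch that builds an import graph for a flat directory of Python files (top-level imports only, no packages; the demo file names are made up):

```python
import ast
import tempfile
from pathlib import Path

def unreferenced_modules(root: Path) -> set[str]:
    """Return names of modules under `root` that no sibling module imports."""
    modules = {p.stem: p for p in Path(root).glob("*.py")}
    imported = set()
    for path in modules.values():
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                imported.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module.split(".")[0])
    return set(modules) - imported

# Demo with throwaway files: `core` is imported, `app` and `orphan` are not.
demo = Path(tempfile.mkdtemp())
(demo / "core.py").write_text("VALUE = 1\n")
(demo / "app.py").write_text("import core\nprint(core.VALUE)\n")
(demo / "orphan.py").write_text("LEFTOVER = True\n")
print(unreferenced_modules(demo))  # app (an entry point) and orphan
```

In practice you would whitelist entry points and handle package-relative imports, but even something this small gives an agent a queryable view of the code base.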
Finally, all of this would be overwhelming without some higher-level organisation. I started bringing back more DDD discipline. DDD was never out of the picture, but it was often done in a more ad-hoc manner. With agents, having clear and well-documented domains is more important than ever.
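One way to make those documented domains enforceable rather than aspirational is a small boundary check in the pipeline. A minimal sketch, with a hypothetical domain map and allowed dependency directions (all module and domain names here are illustrative):

```python
import ast

# Hypothetical domain map: module -> domain it belongs to.
DOMAIN_OF = {"billing.invoices": "billing", "billing.tax": "billing",
             "shipping.rates": "shipping", "shared.money": "shared"}
# Allowed cross-domain dependencies; anything else is a violation.
ALLOWED = {("billing", "shared"), ("shipping", "shared")}

def boundary_violations(module: str, source: str) -> list[str]:
    """List imports in `source` that cross a domain boundary not in ALLOWED."""
    src_domain = DOMAIN_OF[module]
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            targets = [a.name for a in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for target in targets:
            dst_domain = DOMAIN_OF.get(target)
            if dst_domain and dst_domain != src_domain \
                    and (src_domain, dst_domain) not in ALLOWED:
                violations.append(f"{module} -> {target}")
    return violations

print(boundary_violations(
    "billing.invoices",
    "from shared.money import Money\nimport shipping.rates\n"))
# ['billing.invoices -> shipping.rates']
```

Run on every change, this turns the domain documentation into a contract the agent cannot silently erode.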
And all of this shows one thing: what matters in software is less the individual line of code than the code base itself, including its documentation. A bug in the lines of code can be fixed by an agent, but a bad architecture? The agents won't even see it coming, but you should. Give yourself the means to do so.
This was great to read through! Makes me rethink the development workflow a bit more for sure.
Keep me updated on this. I'm still trying to figure out how to bring true Iterative Software Development, especially the discovery part, into these agent coding frameworks. ISD requires constant feedback and adaptation and seems somewhat irreconcilable with the persistent nature of written instructions (very similar to how traditional management is irreconcilable with self-managed teams). We need some kind of "psychological safety for agents" (as if it weren't already hard enough to get it for teams) that preserves the accountability of the user. I can only see that happening by requiring stronger transparency from the agents, which is a hard problem (see the challenges around mechanistic interpretability).
I agree with much of this framing, especially that execution has gotten cheap and that writing code was never the hardest part. The principles of engineering as a discipline haven’t changed, only the day-to-day mechanics have.
The one place I disagree is calling AI agents “junior engineers.” Engineering isn’t just producing code; it’s engaging in the practice of reasoning about systems. Junior engineers were always expected to think about how components interact across domains, often by asking questions rather than having the right answers.
Today’s agents don’t really question anything in a meaningful way. They generate solutions, but they don’t probe assumptions, surface uncertainty, or challenge whether a solution is appropriate in the broader system. That distinction matters.
I largely agree with this article, though I must admit to being among the contrarians who, five years ago, dismissed AI as little more than sophisticated autocomplete for coding. How wrong we were. How wrong I was. Today, I'm using coding agents that generate complex PostgreSQL triggers, assist with debugging, and refactor prototype code into production-ready React frontends and API endpoints.
The reality is that most software development (not all, but the vast majority) consists of CRUD operations, MVC patterns, and their variants. We're essentially building the same architectural patterns and plumbing repeatedly, but for different applications and business value. While I wouldn't trust AI agents with kernel-level programming and other niche software, for typical web applications, we've been overestimating the complexity of the actual engineering challenges.
The current landscape lacks crucial infrastructure: standardized practices for AI-generated code, established review processes, and frameworks for ensuring correctness. But this is temporary. Just as CI/CD pipelines, linting, Agile methodologies, and pair programming evolved from optional practices to industry standards, we'll inevitably develop robust frameworks for managing AI-generated code. The driving forces are identical: reducing technical debt, minimizing bugs, and alleviating developer burden while preventing costly mistakes.
The question isn't whether these standards will emerge, but how quickly we can establish them to harness AI's potential while maintaining code quality and reliability.