The truth is that most coders are not engineers, even a lot of staff engineers and CTOs out there.
They don't understand how errors compound, and they have no idea about statistical quality control, which, while not directly applicable as-is to software engineering, is an important discipline for an engineer to understand in order to develop a certain intuition.
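For intuition, here is a minimal back-of-the-envelope sketch (my own illustration, assuming independent per-step failure rates, which real systems rarely have) of how small per-step error rates compound across a multi-step process:

```python
# Illustrative only: assumes each step succeeds independently with
# probability p, a simplification, since real failures are often correlated.

def pipeline_success_rate(per_step_success: float, steps: int) -> float:
    """Probability that every one of `steps` independent steps succeeds."""
    return per_step_success ** steps

for p in (0.99, 0.999):
    for n in (10, 50, 200):
        print(f"per-step {p:.3f}, {n:3d} steps -> {pipeline_success_rate(p, n):.3f}")

# A "99% reliable" step repeated 50 times gives ~0.605: the end-to-end
# process now fails roughly four times out of ten.
```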
Part of the problem is that the academic world in the 90s decided that we needed more computing science and less engineering, because this was in the professional and financial interests of the Academic Industrial Complex.
Can you expand on the last paragraph?
I think this is a very insightful way of looking at things. My only concern is that reviewing code is very difficult, time-consuming, and (at least for me) not enjoyable. How do we review effectively at the pace at which AI agents can spit out code? How do we internalize what is actually happening, instead of just giving it a surface-level look and approving with a "looks good to me"? Without the work of building the system yourself, how do you create the contextual knowledge that AI might struggle with?
I largely agree with this article, though I must admit to being among the contrarians who, five years ago, dismissed AI as little more than sophisticated autocomplete for coding. How wrong we were. How wrong I was. Today, I'm using coding agents that generate complex PostgreSQL triggers, assist with debugging, and refactor prototype code into production-ready React frontends and API endpoints.
The reality is that most software development (not all, but the vast majority) consists of CRUD operations, MVC patterns, and their variants. We're essentially building the same architectural patterns and plumbing repeatedly, just for different applications and business value. While I wouldn't trust AI agents with kernel-level programming and other niche software, for typical web applications we've been overestimating the complexity of the actual engineering challenges.
The current landscape lacks crucial infrastructure: standardized practices for AI-generated code, established review processes, and frameworks for ensuring correctness. But this is temporary. Just as CI/CD pipelines, linting, Agile methodologies, and pair programming evolved from optional practices to industry standards, we'll inevitably develop robust frameworks for managing AI-generated code. The driving forces are identical: reducing technical debt, minimizing bugs, and alleviating developer burden while preventing costly mistakes.
The question isn't whether these standards will emerge, but how quickly we can establish them to harness AI's potential while maintaining code quality and reliability.