5 Comments
Esborogardius Antoniopolus

The truth is that most coders are not engineers, including many staff engineers and CTOs out there.

They don’t understand how errors compound, and they have no idea of statistical quality control, a discipline that, while not directly applicable to software engineering as-is, is important for an engineer to understand in order to develop a certain intuition.
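The compounding-errors point can be made concrete with a back-of-envelope calculation (the 99% per-step figure is illustrative, not from the comment): if a system depends on many sequential steps, even a high per-step reliability decays quickly.

```python
def pipeline_reliability(per_step: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, assuming independent failures."""
    return per_step ** n_steps

# A 99%-reliable step looks safe in isolation, but chained it erodes fast:
for n in (1, 10, 50, 100):
    print(f"{n:3d} steps -> {pipeline_reliability(0.99, n):.1%}")
```

At 100 chained steps, overall reliability has fallen to roughly 37% — the kind of intuition statistical quality control trains.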

Part of the problem is that the academic world in the 90s decided that we needed more computing science and less engineering, because this served the professional and financial interests of the Academic Industrial Complex.

🅱🅻🅰🆉🅴

Can you elaborate on the last paragraph?

The Prairie Programmer

I think this is a very insightful way of looking at things. My only concern is that reviewing code is very difficult, time-consuming, and (at least for me) not enjoyable. How do we effectively review at the pace that AI agents are able to spit out code? How do we internalize what is actually happening instead of just giving it a surface-level look and approving with a "looks good to me"? Without the work of building the system yourself, how do you create the contextual knowledge that AI might struggle with?

Ahmed Al-Hulaibi

I agree with much of this framing, especially that execution has gotten cheap and that writing code was never the hardest part. The principles of engineering as a discipline haven’t changed, only the day-to-day mechanics have.

The one place I disagree is calling AI agents “junior engineers.” Engineering isn’t just producing code; it’s engaging in the practice of reasoning about systems. Junior engineers were always expected to think about how components interact across domains, often by asking questions rather than having the right answers.

Today’s agents don’t really question anything in a meaningful way. They generate solutions, but they don’t probe assumptions, surface uncertainty, or challenge whether a solution is appropriate in the broader system. That distinction matters.

Carl Mueller

I largely agree with this article, though I must admit to being among the contrarians who, five years ago, dismissed AI as little more than sophisticated autocomplete for coding. How wrong we were. How wrong I was. Today, I'm using coding agents that generate complex PostgreSQL triggers, assist with debugging, and refactor prototype code into production-ready React frontends and API endpoints.

The reality is that most software development (not all, but the vast majority) consists of CRUD operations, MVC patterns, and their variants. We're essentially building the same architectural patterns and plumbing repeatedly, but for different applications and business value. While I wouldn't trust AI agents with kernel-level programming and other niche software, for typical web applications, we've been overestimating the complexity of the actual engineering challenges.

The current landscape lacks crucial infrastructure: standardized practices for AI-generated code, established review processes, and frameworks for ensuring correctness. But this is temporary. Just as CI/CD pipelines, linting, Agile methodologies, and pair programming evolved from optional practices to industry standards, we'll inevitably develop robust frameworks for managing AI-generated code. The driving forces are identical: reducing technical debt, minimizing bugs, and alleviating developer burden while preventing costly mistakes.

The question isn't whether these standards will emerge, but how quickly we can establish them to harness AI's potential while maintaining code quality and reliability.