Are we replaced by AI yet?
Sociologist Hartmut Rosa argues that modernity isn’t just about going faster: it’s about acceleration. We accelerate on three fronts:
- technology,
- pace of life,
- tempo of social change, driven by external economic and cultural forces.
LLM coding hits the technical acceleration pedal hard. But here’s the paradox: as speed rises, understanding falls.
We ship more, sooner, with agents and auto-patches, yet we accumulate semantic debt: code that works today but that nobody truly grasps tomorrow, when refactors, features, or bugs show up. Acceleration buys output, but it mortgages true software engineering. We end up feeling time-poor. Because we outsource more of the thinking, we don’t just move faster: we move further away from the design. The present gets shipped, but the understanding stays virtual. That missing mental model is the semantic debt.
My claim is simple:
- autocomplete = leverage. You stay in the loop, your model of the system deepens,
- agents = delegation. You move faster short-term, but the mental model stagnates,
- design = understanding. Without time to internalize models, you build up semantic debt.
If you buy Rosa’s acceleration story, the empirical picture is troubling. Mozannar et al. [1] model the time costs of AI-assisted programming and show what you feel in your fingers: speed-to-syntax improves, but reading and validating suggestions becomes a new bottleneck. In other words, the tool moves work from typing to understanding, which is exactly where teams already run hot. A separate lab study on Copilot [2] has people validate and repair LLM code across real projects. Simply knowing the code was AI-generated changes how they read and fix it.
Cognitive science adds another layer. Sparrow, Liu & Wegner’s classic “Google effect” [3] shows that when you expect information to be available externally, you remember less of the content and more of the location. Transactive memory over internal models. That’s exactly the failure mode of shipping a subsystem you never really grokked. Grinschgl et al. [4] push it further: offloading boosts immediate performance but lowers later recall on surprise tests, aka when it all falls apart at 3 AM on a Sunday. Speed now, understand later. A 2025 overview on “protecting cognition” [5] suggests the offloading loop correlates with lower critical-thinking ability if you lean on AI too much. Ship fast, and stall on non-obvious bugs.
Clark & Chalmers’ Extended Mind says our thinking stretches into tools and artifacts. Great. But extension is not the same as integration. If the agent does the moving and you never form the mental model, the system is partly in the tool and partly nowhere. This is, again, semantic debt. And you will pay it later.
What is vibe coding, actually?
Vibe coding, as far as I understand it, is when we lean on LLMs/agents (Claude Code, Copilot/Cursor, Jules, you name it) to generate not just lines, but decisions: file layouts, glue code across boundaries, retry semantics, schema migrations, the obvious shape of a feature. It feels great, you get the dopamine rush, because the tool outputs idiomatic code quickly. But it’s exactly where semantic debt piles up. Semantic debt is the interest you pay when a system’s behavior exists but the understanding of why it works does not.
You don’t notice semantic debt when the lights are green. You’ll feel the pain when:
- a refactor stalls because touching one thing breaks another,
- a new feature requires archaeology instead of extension,
- a heisenbug lives in a generated helper that nobody knows about,
- performance falls off a cliff due to an O(n^2) “obvious” path that survived automated review (see the sketch after this list),
- or even when security/compliance asks “where does this PII flow?”.
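To make that O(n^2) bullet concrete, here’s a minimal sketch. The helper and names are hypothetical, not from any real codebase, but it’s the shape of “obvious” generated code that sails through automated review:

```python
# A plausible LLM-generated helper: dedupe events while preserving order.
# It's idiomatic-looking, passes tests on small fixtures, and ships.
def dedupe_events(events: list[dict]) -> list[dict]:
    seen: list[str] = []
    out: list[dict] = []
    for event in events:
        # `not in seen` scans a list: O(n) per lookup, O(n^2) overall.
        if event["id"] not in seen:
            seen.append(event["id"])
            out.append(event)
    return out


# The fix is one data structure away, assuming event ids are hashable:
def dedupe_events_fast(events: list[dict]) -> list[dict]:
    seen: set[str] = set()  # O(1) membership checks
    out: list[dict] = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            out.append(event)
    return out
```

The fix is trivial, but only to someone who owns the mental model: where `events` comes from, and how big it gets in production. That’s the understanding the agent never handed you.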
Control = understanding curve
The farther you drift from the decisions, the less internal model you form:
- smart autocomplete: the value proposition is great. You are driving, and the tool fills in the boring stuff.
- snippet generation or small patches: you frame the intent, the tool expands patterns, and you still read the diff.
- multi-file agents: you describe, it rewires, and you skim the invariants.
- spec-to-subsystem: you get hyped (and, let’s face it, you get lazy and crave that dopamine rush), you hand over some kind of spec, and it magically builds a thing. Your understanding is zero. You never built a real mental model, never found the boundaries between components yourself. You’re not a software engineer anymore. And you’ll pay for it later.
Bifurcation of time
Deleuze (after Bergson) says every present forks: one branch is the actual, the other is the virtual. With agent-heavy workflows, the actual is your merged code, and the virtual is the missing design knowledge: assumptions, boundaries, and edge cases you never metabolized. You meet the fork later, and you’d better pray it’s not at 3 AM, when it returns as an incident. Good engineering shortens the distance between actual and virtual. Pairing, deliberate design, testing, and instrumenting are how we fold the fork back together.
What if you buy the hype anyway?
Well, that’s up to you, no judgment here. But you’re still reading, so let me give you some advice:
- prompt small: AI is not there to replace your brain. What are you, an interface between the system and an LLM?
- keep high standards: don’t let the model steal your agency. You are in charge, and this is your responsibility.
- review your code like you just wrote it.
- drive intent: make sure the tool understands your goals, not the other way around.
But what if agents get good enough to design too?
You wish. What a dystopia. Sure, they’ll get better at outputs. But software design is not an output; it’s a process of discovery that needs to happen in your brain. It’s about pushing on constraints until they push back, about negotiating boundaries with people, policies, and physics (latency, CAP stuff, tradeoffs, budget). Even as models improve, teams still need shared understanding to evolve systems safely for their business. No, sorry, you cannot outsource institutional memory.
Conclusion
Are we replaced by AI yet? Not while the bill for understanding still lands on a human desk. Read: not while business is still conducted by humans. Use the power tools. Keep your hands on the wheel. Stay fucking curious and have fun.
1. Mozannar et al. - Effective Human-AI Teams via Learned Natural Language Rules and Onboarding
2. Tang et al. - A Study on Developer Behaviors for Validating and Repairing LLM-Generated Code Using Eye Tracking and IDE Actions
3. Sparrow et al. - Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips
4. Grinschgl et al. - Consequences of cognitive offloading: Boosting performance but diminishing memory
5. Singh et al. - Protecting Human Cognition in the Age of AI