AInxiety
Part 1: Scared and excited
Learning to safely operate AI for software development requires facing my cognitive dissonance.
Many years ago I apprenticed as a stone mason. I learned how to load and unload 2-ton pallets of stone with a skid steer, how to cut boulders in half with a demolition saw. These are powerful, dangerous tools. If you're not paying attention or you lose control, you can make devastating mistakes. Like when I crushed a $1k saw driving around a job site. Or when my crewmate sliced through his leg one-handing a grinder.
The discourse around AI has split divisively. But both things can be true: AI is dangerous and incredibly useful.
There are legitimate reasons to distrust AI. A few that make me nervous:
- The industry-wide top-down push to force-feed adoption before it's been proven
- The promise of "infinite, on-demand knowledge work" gives Capital undue leverage over Labor
- Markets are all-in. With an uncertain profit model, who will be left holding the bag?
- Adding a Big Tech-owned external dependency into core workflows introduces a major new failure point, making everything more fragile
- LLMs trained on everyone's data: another moral degradation swept under the rug in the name of progress
- Systemic brain-rot, skill-rot
- Infinite slop machines targeting societies already teetering on the edge of attention bankruptcy
- Unaccounted negative externalities from intensive resource consumption
Nine months ago I was squarely in the anti-AI camp. I avoided the cognitive dissonance that comes from engaging with something dangerous. Simply put, it's easier to pick a side. But I put in the work to understand it, and now I use it in nearly every aspect of software development.
I write to know myself. AI is so far out of the loop on my personal context, and I've zero incentive to pollute my process.
The productivity gains do not diminish the care and craft I put into my work. I am still in control. My attention has just shifted. Instead of a conversation with the compiler, I'm having a conversation with the agent. The feedback loop is different, but the lens has widened. Planning, design, research, and validation all happen in-band with coding.
I'm used to shipping with a certain level of accountability. AI does not magic that away. Instead of focusing on the correctness of every line of code, I can focus more on solving the right problems, while figuring out the guardrails that ensure my teams can reap the efficiency gains of AI without sacrificing the reliability we're responsible for.
This shift in mental model is not without its tradeoffs. I've yet to find my flow state working this way; it involves a lot of multitasking. On one hand, at the end of the day, I can have a conversation with my wife without coming off like a robot from mirroring the compiler all day. On the other, I'm exhausted from communicating with the agent.
No doubt, our jobs as software developers are rapidly transforming, and that requires retraining on the fly. It's natural to feel uneasy. I'm both scared and excited. The efficiency gains do not diminish my macro concerns. Does this make me a hypocrite? Some would say yes. But I believe the best way to transform a technology into a force for good is to engage with it.
This is part 1 of a 3-part series I'm writing on AInxiety. Next up: some predictions on where I see this going. Subscribe below to get notified.