Push To Prod

AI Automation

What this means for tech workers

John Kim
Mar 24, 2026

Can’t Help Thinking…

We were at a Korean BBQ place near his office. My friend was flipping galbi and telling me about his Tuesdays. Every week, the same routine. Download bank statements from four accounts as PDFs, open Excel, categorize each transaction line by line, merge the statements, cross-reference totals against each statement’s closing balance, feed everything into a master template, and generate a report.

Three to four hours. Every single week.

I just kept grilling. But driving home, I couldn’t stop decomposing his workflow. Four PDF sources with consistent formatting. Rule-based categorization he’s been doing by hand for years. Closing balances right there on each statement to validate against.

This is an afternoon with Claude Code. You describe the workflow, point it at a few sample statements, and it writes the script for you. It parses the PDFs, categorizes transactions using the same rules he’s been applying manually, checks the totals against each statement’s closing balance, merges everything, and spits out the report. One command, runs while he makes coffee.
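
The heart of such a script is small. Here is a minimal sketch of the categorize-and-validate steps, assuming the transaction lines have already been extracted from the PDFs (a library like pdfplumber could do that part); the keyword rules, categories, and sample data are all hypothetical stand-ins for his real ones.

```python
# Sketch: categorize extracted transactions and check them against a
# statement's closing balance. PDF text extraction is assumed to have
# already happened; the rules and numbers below are made up.

# Hypothetical keyword rules, mirroring the categories he applies by hand.
RULES = {
    "grocery": "Food",
    "payroll": "Income",
    "uber": "Transport",
}

def categorize(description):
    """Return the first matching category, or a bucket for manual review."""
    desc = description.lower()
    for keyword, category in RULES.items():
        if keyword in desc:
            return category
    return "Uncategorized"

def validate(opening_balance, transactions, closing_balance):
    """Each transaction is (description, signed_amount).
    Confirms the running total lands on the statement's closing balance."""
    total = opening_balance + sum(amount for _, amount in transactions)
    return round(total, 2) == round(closing_balance, 2)

# Usage with made-up sample data:
txns = [("PAYROLL ACME CORP", 3000.00), ("GROCERY MART", -120.50)]
assert categorize(txns[0][0]) == "Income"
assert validate(1000.00, txns, 3879.50)
```

The validation step is what makes this trustworthy: every run is checked against the balances printed on the statements themselves, so a parsing error surfaces immediately instead of silently corrupting the report.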

He doesn’t need to learn to code. He just needs to know that a tool exists that can write the code for him, test it against his own data, and turn a weekly grind into something that runs in minutes.

I didn’t say any of this at dinner. Nobody wants to hear “actually, your job could be automated” over galbi.

The Thing That Bothered Me

It wasn’t pity. I don’t pity the guy. He’s smart, makes good money, has a career he’s proud of. What bothered me was the gap. Not between him and me, but between what he knows exists and what actually exists now.

His mental model for AI is ChatGPT. A box you type into, a box that types back. You paste a document, you ask a question, you get an answer. Maybe it’s right, maybe it’s not. You copy the output, fix the parts that are wrong, and move on. That’s what AI means to him, and honestly, that’s what it means to most people.

But the tools have already moved past that. The difference between ChatGPT and something like Claude Code or Codex isn’t incremental. It’s structural. One gives you information. The other does the work. It reads your files, writes code, runs tests, catches its own mistakes, and builds things that persist after the conversation ends.

I struggle to explain this to people outside of tech. Something seismic is happening in software engineering right now, and the hardest part isn’t the technology. It’s that the shift is invisible to anyone who isn’t living in it. My non-technical friends hear “AI is changing everything” and they think of chatbots still giving wrong answers. They have no frame of reference for what’s actually happening. And I don’t know how to bridge that gap over dinner without sounding like I’m either selling something or threatening their livelihood.

The Flood

There’s an old idea that every tool you create to solve a problem creates new problems of its own. I’m watching that happen in real time, and I’ve been thinking about it a lot.

When coding was expensive, it was a bottleneck. You had an idea, you needed an engineer, the engineer had a backlog, so your idea waited. That constraint shaped everything. Product roadmaps, hiring plans, how fast companies could move. Coding was the speed limit.

That speed limit is gone now. Or at least, it’s moving so fast that it’s no longer the binding constraint. Engineers with agentic tools are shipping at three, five, ten times their previous rate. Teams are running multiple agent instances in parallel, each one churning out code on different tasks simultaneously.

So what happens when you flood a system with code?

It reminds me of my macroeconomics class. What happens when you flood a system with money? Everything downstream changes. New bottlenecks appear that nobody planned for.

PRs explode. Your team used to merge maybe ten a week. Now it’s fifty. Who reviews them? The agents aren’t reliable enough yet for full code review at most companies, so humans still need to look. But the humans haven’t multiplied. You’ve got the same five engineers now drowning in review requests.

So you try hybrid review. Let the AI handle some, humans handle the rest. But which ones? How do you decide what the AI reviews versus what a human reviews? That question didn’t exist a year ago. Now someone has to answer it.
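
One way a team might answer that question is a cheap triage gate: small, tested, low-risk changes go to AI review, everything else to a human. A hypothetical sketch; the signals, paths, and thresholds are illustrative, not a real policy.

```python
# Hypothetical triage: decide whether a PR goes to AI review or a human.
# The signals and thresholds here are made up for illustration.

def route_review(pr):
    """pr is a dict of cheap-to-compute signals about the change."""
    risky_paths = ("auth/", "billing/", "migrations/")
    if any(f.startswith(risky_paths) for f in pr["files"]):
        return "human"   # sensitive areas always get human eyes
    if pr["lines_changed"] > 200:
        return "human"   # large diffs are where AI review is least reliable
    if not pr["has_tests"]:
        return "human"   # untested changes need human judgment
    return "ai"          # small, tested, low-risk: let the agent review it

# Usage:
pr = {"files": ["docs/readme.md"], "lines_changed": 12, "has_tests": True}
assert route_review(pr) == "ai"
```

The point isn’t this particular policy. It’s that someone now has to write one, maintain it, and own the failures when it routes wrong.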

And that’s just PR review. Pull the thread further.

Multiple agents running at once. How do you verify they’re all producing coherent work? How do you keep them from stepping on each other? Who manages the rules and context files they share? What happens when one agent’s changes break another agent’s assumptions? How do you govern who can modify the shared configuration? How do you prevent prompt injection through skills? When an AI-generated change causes a production incident, who owns it?
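
Even the simplest of these, keeping agents from stepping on each other, forces new machinery into existence. A hypothetical sketch of one crude answer, an exclusive path-ownership manifest; the agent names and paths are invented.

```python
# Hypothetical guard: parallel agents each own an exclusive set of
# path prefixes, and may only edit files under their own prefixes.
# The manifest below is made up for illustration.

OWNERSHIP = {
    "agent-frontend": ["web/", "styles/"],
    "agent-backend": ["api/", "db/"],
}

def may_edit(agent, path):
    """Allow an edit only if the path belongs to this agent and no other."""
    owners = {
        name
        for name, prefixes in OWNERSHIP.items()
        if any(path.startswith(p) for p in prefixes)
    }
    return owners == {agent}

assert may_edit("agent-frontend", "web/app.tsx")
assert not may_edit("agent-frontend", "api/routes.py")
```

And notice what this sketch doesn’t solve: shared files, cross-cutting refactors, who edits the manifest itself. Each answer spawns the next question.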

Every one of these is a new problem, and none of them is hypothetical. Teams are scrambling to figure them out right now.

The Scary Part Nobody Wants to Say Out Loud

I’m going to be honest about something that most people in my position dance around.
