Hands-Free Building

This post started as a voice dictation. The irony isn't lost on me.


A few years ago, my mornings looked like this: 30-60 minutes reviewing yesterday's pull requests, then daily standup at 8:30. Product owner updates, engineer updates, my feedback. Sometimes 10-15 pieces of feedback on a single page. "Change this, try that, I think this could look better." The engineer would disappear for hours—sometimes days—then come back with changes. More feedback. More iteration. Lather, rinse, repeat.

I'm not knocking that process. We shipped good product. We had great engineers. But that back-and-forth? It was the bottleneck nobody questioned.

The shift

Today I'm building v2 of (my)cards along with a stealth project we are cooking up, while running multiple Codex, Cursor, and Claude Code environments simultaneously. Some are iterating on ideas or problems I've thrown at them; others have been idle for 20-30 minutes. Some for days. And here's what changed everything: when I come back, the context is still there.

No "do you remember what we were working on?" No re-explaining the problem. I open a tab I forgot about, press proceed, and we pick up exactly where we left off. The agent didn't go eat lunch or get distracted by Slack. Its context is frozen at the exact point I paused.

That's not a small thing. That's a fundamental shift in how work gets done.

I'm still writing code

Let me be clear about something. When people hear "voice-coded" or "vibe coding," they picture someone saying "build me an app" and hoping for the best. That's not what this is.

I'm still writing the code. I'm still planning it in my head and putting those plans into Linear. I'm still scrutinising what comes back. I'm just not dictating my thoughts to my fingers anymore—I'm dictating to an agent that follows my plan.

The difference? When I spot a problem, I don't context-switch to an IDE. I hit a button to start recording, talk through my feedback while looking at the screen, stop recording, and it's compiled into actionable changes. One flow. No keyboard acrobatics.

Last week I pushed a new UI idea to a test device and it was crap. Slow, clunky, bad UX. So I recorded a screen capture, dictated what was wrong, and an agent took that feedback and rewrote the implementation. Then I benchmarked it against the previous version. Same process I would have done before—different interface.

The people doing this aren't amateurs

This isn't just indie hackers on Twitter. Look at who's going all-in:

Jarred Sumner (Bun): "In Bun's repo, we've merged at least 84 PRs from Claude since last month."

Thomas Paul Mann (Raycast): "Claude is now our top committer."

Karri Saarinen (Linear): "25% of Linear workspaces now use agents and 50%+ in enterprise."

And they're not alone: John O'Nolan at Ghost, Ryan Carson with Ralph at Ampcode, David Cramer at Sentry — all shipping AI-native workflows, and the list continues to grow.

These are people who, five years ago, would likely have been sceptical, or at the very least far less bullish. They've built at scale. They know what bad code costs. And they're all betting on this.

Karri put it well:

"As agents get better, the pressure shifts to the ends of the workflow—the context and intent—and reviews."

The security question

My friend Ian made a good point the other day: he's waiting for the wave of AI-induced security breaches. And he's probably right—someone will ship malware disguised as "build apps with AI!" and unsuspecting people will install it.

But that's not the whole picture. If you use these tools properly—if you still review code, if you understand security best practices, if you're not blindly trusting output—the risk profile isn't that different from hiring a junior engineer who doesn't know what they're doing.

We offload reviews to three agents now: Claude, Codex, and Copilot. Another tool combines those reviews, finds duplications, and flags anything suspicious. Sometimes they're overly cautious. Good. I'd rather deal with false positives than miss something real.
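To make the review-merging step concrete, here is a minimal sketch of what a tool like that could do: collapse near-duplicate comments from multiple agents into one entry each, note which agents raised them, and flag anything security-flavoured. The `ReviewComment` type, keyword list, and thresholds are all hypothetical illustrations, not the actual tool described above.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class ReviewComment:
    agent: str   # e.g. "claude", "codex", "copilot" (illustrative labels)
    file: str
    line: int
    body: str

# Hypothetical keyword list; a real tool would use something richer.
SUSPICIOUS_KEYWORDS = ("injection", "secret", "credential", "xss")

def merge_reviews(comments: list[ReviewComment],
                  similarity_threshold: float = 0.8) -> list[dict]:
    """Collapse near-duplicate comments (same file, nearby lines, similar
    text) into one entry, recording which agents raised each point and
    whether it looks security-relevant."""
    merged: list[dict] = []
    for c in comments:
        for entry in merged:
            same_spot = (entry["file"] == c.file
                         and abs(entry["line"] - c.line) <= 2)
            similar = SequenceMatcher(
                None, entry["body"].lower(), c.body.lower()
            ).ratio() >= similarity_threshold
            if same_spot and similar:
                entry["agents"].add(c.agent)  # duplicate finding: merge it
                break
        else:
            merged.append({
                "file": c.file,
                "line": c.line,
                "body": c.body,
                "agents": {c.agent},
                "suspicious": any(k in c.body.lower()
                                  for k in SUSPICIOUS_KEYWORDS),
            })
    # Findings raised independently by several agents float to the top,
    # followed by anything flagged as suspicious.
    merged.sort(key=lambda e: (-len(e["agents"]), not e["suspicious"]))
    return merged
```

The point of the sketch is the ordering: a comment that two or three agents raise independently is a much stronger signal than any single agent's opinion, which is exactly why fanning reviews out to multiple models pays off.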

The math changed

If you had five great engineers five years ago, they were shipping at maybe 20% of what's possible now. That's not hyperbole—that's my lived experience running teams.

Those same engineers, with agents handling the grunt work? 5x output isn't unrealistic. They're still reviewing, still architecting, still making judgment calls. They're just not spending hours on implementation that a model can do in seconds.

And here's the part nobody talks about: the human gets their life back.

I can start 20 tasks across different agents, walk away, have dinner with my kids and Jess, come back, and pick up each thread exactly where I left it. The agent didn't lose context. I did—but I can scroll up and reconstruct it in 30 seconds.

Try that with a human team across timezones.

What hands-free actually means

It's not about removing humans. It's about removing the friction between thought and implementation.

Right now I'm building reporting dashboards. I look at them, something feels off, I dictate my feedback: "When I click this, the data isn't refreshing. The layout feels cramped. I want the Group CEO to be able to ask 'how many downloads yesterday' and get an answer directly."

That feedback gets compiled, cross-referenced with what's in Linear and GitHub, and turned into a plan. Or just implemented directly. Either way, I didn't touch a keyboard until I wanted to.

This entire post was dictated the same way.

I'm a 2-decade veteran of this industry. I've been acquired. I've run engineering teams. I've shipped at some level of scale. And I've never been more productive than I am right now, building hands-free.

It's a wild time to be doing this work.


Footnote: As I was putting the finishing touches on this post this morning, I came across a video of Dario Amodei (Anthropic CEO) and Demis Hassabis (Google DeepMind CEO) debating "The Day After AGI" at Davos. Dario's comment landed: "I have engineers within Anthropic who say I don't write any code anymore. I just let the model write the code. I edit it." He estimates we're 6-12 months from models doing most of what software engineers do end-to-end. Whether that timeline holds or not, the direction is clear — and it's exactly what I've been experiencing.


Byron Rode is CEO of Ignis Labs and builder of (my)cards. Follow him on X/Twitter.