One to Two

Chapter 11: The First Hire


There’s a moment — and you’ll know it when it hits — where you look at your calendar, your issue board, your Slack, your monitoring dashboard, and your own reflection in the screen at 11pm, and you realize: every single one of these things routes to me.

Not because I failed to delegate. Because there's no one to delegate to.

If you’ve made it this far — built the thing, audited it, rebuilt the ugly parts, set up a pipeline — you are not hiring because you’re struggling. You're hiring because you succeeded. The product works. Customers showed up. And every one of those jobs you were doing solo is getting bigger. While the headcount stays at one.

The first hire isn’t about finding someone better than you. It’s about the fact that there needs to be two of you and there's only one.

I kept asking myself the wrong question: “Is the product ready for another engineer?” Wrong question. The right one is: “Am I the single point of failure for a product that people are paying for?” And the answer had been yes for months before I admitted it.

Here’s how I knew. Not from a framework. From symptoms. If I got food poisoning for a week, the product didn’t ship, bugs didn’t get fixed, customers didn’t get answers. Bus factor of one — that’s not resilience, that’s a liability wearing a hoodie. The midnight commits had come back, too. The pipeline was beautifully boring, but the volume of work had outgrown what one person could do between 8am and 6pm. Every morning I was choosing between foundation and features — not “which feature,” which is normal prioritization, but “do I fix the thing that needs fixing or build the thing we promised?” That's a bottleneck masquerading as a priority call.

Your First Hire Isn't Human

Here’s what nobody tells you about the first hire in the AI era: it doesn’t have to be a person.

At Apex, the support burden was the first thing that cracked. Not the codebase — I could still hold the architecture in my head, mostly. Not feature velocity — AI pairing kept that reasonable. Support. Every customer question, every “how do I configure this,” every “my instance shows a blank screen” — all of it came to me. Directly. There was no help desk, no knowledge base, no tier-one filtering. Just my inbox and my Slack and my growing sense of dread every time a notification pinged.

So I built a support agent. Issue #122. An AI that could answer routine questions about the product, check system status, walk customers through common configurations. It took about a week. Not because it was technically hard — the patterns were well-established by then — but because I had to actually write down what I knew. The agent forced me to externalize knowledge that had been living exclusively in my head. What are the common failure modes? What are the first three things to check? What does the customer actually mean when they say “it's not working”?

That was the first real act of letting go, though I didn't recognize it at the time.

Then came issue #192: giving the agent admin tools. Not just answering questions — actually doing things. Restarting services. Checking logs. Running diagnostics. The kind of stuff I’d been doing manually at 10pm because a customer’s instance was acting up. The moment I watched the agent handle a support ticket at 2am while I was asleep — correctly, completely, without waking me — something shifted. The product didn't need me for everything. It could run, at least partially, without my hands on the wheel.

The agent handled the bounded, repeatable stuff beautifully: tier-one support, known failure modes, configuration questions, status checks. It couldn’t handle the novel problems — the customer using the product in a way I hadn’t imagined, the bug that only appeared under a specific sequence of actions. Those still needed a human brain.
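That handle-versus-escalate boundary can be sketched as a simple triage rule. This is a minimal illustration, not the actual Apex agent: the intent names, runbook entries, and confidence threshold are all invented for the example.

```python
# Hypothetical sketch: a tier-one support agent that acts only on
# bounded, well-known intents and escalates everything else to a human.
# Intent names, answers, and the 0.8 threshold are illustrative.

RUNBOOK = {
    "blank_screen": "Check the browser console, then restart the instance.",
    "config_help": "Configuration lives under Admin > Settings.",
    "status_check": "All systems operational as of the last health probe.",
}

def handle_ticket(intent: str, confidence: float) -> tuple[str, str]:
    """Return (route, response). Escalate when the intent is novel
    or the classifier is not confident enough to act alone."""
    if intent in RUNBOOK and confidence >= 0.8:
        return ("agent", RUNBOOK[intent])
    return ("human", f"Escalating {intent!r} for founder review.")
```

The point of the sketch is the shape, not the code: the agent owns a closed set of known situations, and anything outside that set routes to a person by default.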

But offloading the bounded work gave me back ten to fifteen hours a week. That's the difference between shipping a feature and not.

The real sequence looks like this: first, an AI agent for a bounded task. Then a human. Then, eventually, a team. Each step is a different kind of letting go. The AI-first step matters because it’s low-stakes practice. The agent doesn’t have feelings about your code. It doesn’t need equity. It doesn’t quit. It's a way to prove to yourself — because this is really about you, not the technology — that the product can survive without your direct involvement in every interaction.

Most customers don't care who — or what — answers their question at 2am. They care that it gets answered.

Who to Hire (The Human)

Once the AI agent is handling the bounded work and you’ve bought yourself some breathing room, the human hire becomes clearer. Not easier — clearer. You know what the agent can’t do, and that's roughly the job description for your first person.

I have opinions about this and they’re all hard-won. Some of them are contrarian. I don’t care.

Hire someone who can read your codebase, not someone you’ll need to teach it to. The first hire into a one-person codebase needs to be productive fast. Hire for the codebase you have, not the one you wish you had. Make sure they’ve used AI tools — the codebase was built with AI, the workflow assumes AI, and hiring someone who’s never paired with Claude and expecting them to match your velocity is setting them up to fail. And find someone who asks “why” — in an AI-built codebase, the “why” is the thing that’s thinnest.

Now here's where I get contrarian.

Don't hire a senior engineer. For hire number four or five, sure. For hire number one into a vibe-coded, AI-built codebase? A senior engineer will look at your code and want to rewrite it.

The objection writes itself: “What about a senior with one-to-two scar tissue?” If you can find that person — someone who knows to build on top, not rewrite — hire them immediately. But that combination is rare. Most senior engineers earned their seniority in established codebases. The skill set you need — comfort with AI-generated patterns, willingness to build on unfamiliar architecture, ability to add structure without demanding perfection — is more often found in mid-level engineers who are hungry and adaptable.

Don’t hire a pure generalist, either. Not yet. You need someone who can go deep on the specific thing that’s bottlenecking you. If it’s frontend, hire a frontend person. If it’s infrastructure, hire for infrastructure. The “full-stack generalist who can do a bit of everything” sounds great in theory, but in practice they’ll be a slightly worse version of you — doing everything adequately, nothing excellently. You already have that person. It’s you.

And don’t hire for cultural fit. At a team of two, “cultural fit” means “similar to me,” and similar to you is exactly what you don’t need. If you’re a move-fast-break-things person, you need someone methodical. If you’re an architecture astronaut, you need someone who ships ugly code that works. At two, what matters is that you respect each other's judgment and can disagree without it becoming personal.

One more thing on hiring, and it’s the new one that didn’t exist three years ago: don’t hire someone who’ll resent AI-generated code. Some engineers — good engineers, experienced engineers — have a visceral reaction to AI-generated code. They see it as lesser. They don’t trust it. They want to rewrite it by hand to “really understand it.” I’m not here to argue whether that instinct is right or wrong in the abstract. I’m telling you that if your codebase is 60% AI-generated and your workflow involves daily AI pairing, hiring someone who resents that foundation is a disaster. They’ll spend their energy fighting the tools instead of building the product. Interview for this explicitly. Show them the codebase, tell them how it was built, and watch their reaction. If their face does that thing — you'll know the thing — move on.

The Hire That Sees the Product

Not every addition to the team is an engineer. Sometimes the person you need is someone who can see the product you can't.

Cody Miles comes into the Apex orbit to help with product development and UI/UX. He’s AI-forward — not just comfortable with AI tools, but thinking natively in terms of what AI makes possible. Fresh mind, no baggage from the months of building, no attachment to the decisions I’ve made.

First, the humbling part. Cody sees the user experience I’ve created — the screens, the flows, the way a customer actually interacts with the product — and it’s clear in about five minutes that I’ve built a shitshow. Not the architecture. The experience. I’ve been so deep in the code, so focused on making things work, that I’ve lost any ability to see what it feels like to use. Every screen makes sense to me because I know what every button does. To a customer encountering it fresh, it’s a maze built by someone who forgot other people would need to walk through it.

Second, the inspiring part. Cody doesn’t just see what’s wrong. He sees what Apex could be. And because we have a deep working relationship — we’d built agentic systems together, worked on Vega, my agent orchestration framework — I trust what he sees. Some of his ideas reflect concepts I’d explored in different contexts, which means the inspiration runs both ways. He’s not arriving with a foreign vision. He’s arriving with a vision that rhymes with mine but sees further. He’s inspired by the baseline — what I’ve built is real, it does real things, there’s something genuinely powerful underneath the rough surface. But his vision for what it should look and feel like is in a different league. Watching a fresh, brilliant mind apply itself to your product — someone who isn’t carrying the scar tissue of every architectural compromise, every 2am fix, every “good enough for now” decision — is like seeing your own house through an architect’s eyes. You’ve been living in it so long you stopped noticing the walls are crooked. The architect walks in and sees both the crooked walls and the cathedral it could become.

What Cody sees in five minutes — the mess and the potential — is something I couldn’t have generated on my own no matter how many hours I spent. Not because I lack vision. Because I’m too close. The builder who's been solo for months loses the ability to see the product as anything other than the sum of its technical decisions. Cody sees it as something a person uses, and that shift in perspective is worth more than another engineer writing code.

The Onboarding Gift You Already Gave

This is where Act 2 pays off in ways I didn't expect when I was doing the work. Onboarding into a vibe-coded repo is genuinely different from onboarding into a traditional codebase. In a traditional codebase, institutional knowledge lives in the team — you ask Janet why the auth module works that way and she tells you. In a solo vibe-coded repo, there is no Janet. The AI that helped build it has no memory. The builder was sprinting too hard to document decisions.

But if you did the work — if you actually did it — the issue history is the decision log. Not perfect, but it’s there. The test suite is behavioral documentation. Doesn’t explain why, but it shows what should happen. The observability layer shows the system in action, live, with real data. And the pipeline means your new hire can ship on day one. Day one. Not after two weeks of setup. Not after learning the secret deployment handshake. Push, pipeline runs, it’s live. That’s a gift you gave your future hire without knowing it.

The Money Question

I'm not going to tell you what to pay your first hire or how to structure the equity. Every situation is different — your runway, your revenue, your market, your geography. What I will say is that you need to actually think about this before you start interviewing, not after you find someone you like.

The contractor-versus-employee question matters. A contractor gives you flexibility and keeps your burn rate variable. An employee gives you commitment and, frankly, a different level of investment in the outcome. For the first human hire — the one who’s going to carry part of your shared understanding of the product — I lean toward employee. But I know founders who started with a three-month contract and converted, and that worked too. What doesn’t work is ambiguity. Don’t hire a “contractor” who works forty hours a week on only your product and pretend that’s not an employment relationship. The IRS has opinions about this, and they're not friendly ones.

On equity: if you’re asking someone to be the second person on a product that might fail, they should have skin in the game. Doesn’t have to be a lot. But some. The specific number depends on your stage, your funding, your cap table — and honestly, on the negotiation. I’d rather underprescribe here than give you a number that’s wrong for your situation. What I’ll say is: if you’re offering zero equity to your first hire, ask yourself what signal that sends about how much you value their commitment versus their labor.

Daniel Noel, who runs multiple AI-built products simultaneously, landed on a model he describes as “similar to actors on a movie” — one entity, with contracts for project-level profit sharing tied to milestones. He’d initially set up a corporation as a subsidiary so team members would feel co-ownership, then realized the structure trapped profits inside one entity when he needed flexibility to fund new ideas. His lesson: the organizational structures we’ve inherited — corporations, equity splits, cap tables — weren’t designed for an era where a builder can spin up a new product in a week and needs to reward the people who help make each one real. Stock grants for milestones worked for him, but he was honest about the limits: “Most people do not have the business owner frame of mind and only chase the money in the short term.” The compensation model for the AI era is still being invented. I don’t have the answer. But I know the old playbook — standard vesting, one company, one product — doesn't map cleanly to a world where one builder might be running three products with different teams on each.

On salary: early-stage founders love to rationalize below-market salaries with equity upside. Sometimes that’s genuine. Sometimes it’s just being cheap and dressing it up. Be honest with yourself about which one you’re doing. A first hire who’s stressed about rent is not going to do their best work on your product. Pay as close to market as you can afford, and be transparent about the tradeoffs if you can't.

The compensation conversation is also a test. How someone negotiates tells you a lot about how they’ll operate. Are they direct? Do they do their homework? Do they advocate for themselves without being adversarial? You’re about to be a team of two. These dynamics matter.

Letting Go

The hardest part isn't finding the right person.

It's letting go.

I don't mean letting go like some mindfulness exercise. I mean the specific, physical, almost primal sensation of watching someone else touch your code.

I remember the exact moment. It’s a Tuesday, about two weeks after the first hire started. I’m in the middle of something — probably a feature, probably half-distracted — when I get a Slack notification. PR merged to main. Not my PR. Someone else’s. I click through and look at the diff. They’ve refactored the notification service. Changed the function signatures. Renamed three variables. Moved a utility function to a different module. Added a test I hadn't thought to write.

And every change is fine. Not how I would have done it. But fine. Correct, even.

I sit there staring at the diff for longer than I should. What I feel isn’t rational. It isn’t “this code is wrong” — it’s obviously right. It isn’t “they broke something” — the tests all pass, the pipeline is green. What I feel is something closer to: this isn’t mine anymore. The codebase, which has been an extension of my brain for months — every function, every module, every weird naming convention — is no longer something I can hold complete in my head. Another person’s decisions are in there now. Another person’s style. Another person’s judgment calls.

It feels like the first time someone else drove your car. Nothing is wrong, exactly. But the seat is in a different position and there’s a coffee cup in the holder that isn’t yours and they’re taking a different route than you would take. You want to say something. You don’t say anything. Because saying something would make you the kind of person you don't want to be.

That feeling came back, smaller each time, for weeks. Every PR. Every architectural suggestion. Every “hey, what if we did it this way instead?” Each one a tiny test. Can you let this go? Can you let this go? Can you let this go?

The answer has to be yes, and it has to be genuine, not performed. If you say “great idea” through gritted teeth while silently planning to revert it later, you’re not letting go. You’re just adding a delay. Your first hire will feel it. They’ll feel the difference between a founder who’s actually sharing the product and one who's just pretending to while white-knuckling every decision.

There was also the Thursday where the hire made a call I genuinely disagreed with. Not a stylistic preference — a substantive decision. They restructured how we handled webhook retries, moving from our exponential backoff to a fixed-interval approach with a dead-letter queue. I thought it was wrong. I’d designed the exponential backoff for a reason — to avoid hammering failing endpoints. Their approach felt naive. I almost reverted it. Instead I asked them to walk me through their reasoning. Turns out they’d been investigating the customer support tickets I hadn't had time to look at — the ones where webhooks were failing after the backoff stretched to hours and customers thought the integration was broken. Their fixed-interval approach with the dead-letter queue handled it better. Not elegantly, but practically. The customers stopped complaining. My elegant solution was correct in theory and wrong in production.
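The tradeoff between the two retry schedules is easy to see in a sketch. The base delay, interval, and attempt count below are illustrative assumptions, not the actual Apex values:

```python
# Hypothetical comparison of the two webhook retry schedules.
# All numbers (30s base, 60s interval, 8 attempts) are invented for illustration.

def exponential_schedule(attempts: int, base: float = 30.0) -> list[float]:
    """Delays double each attempt: 30s, 60s, 120s, ... the tail stretches to hours."""
    return [base * (2 ** i) for i in range(attempts)]

def fixed_schedule(attempts: int, interval: float = 60.0) -> list[float]:
    """Retry every `interval` seconds; after the last attempt the event
    would go to a dead-letter queue for manual replay instead of waiting longer."""
    return [interval] * attempts

# With 8 attempts, exponential backoff's final gap alone is over an hour,
# which is why customers assumed the integration was broken.
print(exponential_schedule(8)[-1])  # 3840.0 (64 minutes)
print(sum(fixed_schedule(8)))       # 480.0 (8 minutes total, then dead-letter)
```

Both schedules protect the failing endpoint; the difference is where the unresolved failure ends up. Exponential backoff leaves it waiting silently, while the fixed schedule bounds the wait and surfaces the failure in a queue a human can inspect.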

That was harder than the notification refactor. That was someone being right about my code in a way I didn't want to hear.

What helped me through both moments wasn’t a framework or advice. It was watching the product get better in ways I wouldn’t have made it better. The notification refactor I was quietly upset about made the system faster. A test I wouldn't have written caught a bug two weeks later. The webhook retry approach I resisted turned out to be more robust than mine.

The product got better because someone else’s brain was on it. Not instead of mine. In addition to mine. That’s the thing you can’t understand until you experience it. You think you’re giving up quality when you share the codebase. You're actually adding a dimension.

The identity part is real too, and I don’t want to gloss over it. When you’ve been the solo builder — when your GitHub profile is the project’s entire commit history, when you’re the one who stayed up until 3am fixing the thing, when the product is, in a very literal sense, your work — bringing someone in changes who you are. You go from “I built this” to “we built this.” And “we built this” feels like less, at first. It feels like dilution. Like you're sharing credit for something you did alone.

Get over that too. But take a moment to grieve it first, because the grief is real even if it’s irrational. You built something from nothing, by yourself, with an AI assistant and stubbornness and whatever you had. That’s worth being proud of. And now you're doing the harder thing: turning what you built alone into something that can grow beyond you.

Your job shifts. Not from builder to manager — that’s the wrong frame entirely. From solo builder to lead builder. From the person who does everything to the person who does the things that matter most and enables someone else to do the rest. It’s a different skill. I'm still learning it.

The product stops being one person’s vision and starts being a shared system. That’s the shift. It's uncomfortable and necessary and it looks nothing like what I expected.