One Person, One AI, One Product: How I Ship Without an Engineering Team
- huangpf
- Feb 1
- 6 min read

Last Friday, John dropped a line that made half the room laugh and the other half go quiet:
“Don’t talk to the engineers yet. Give the requirements to Claude. Let’s see what happens.”
In November 2025, John ran the experiment with his team. He told his non-technical teammates exactly that. Two hours later, they had what used to take four engineers a full multi-week sprint to build. That moment changed how he builds software, and it might change how you think about it too.
The setup: production-level deliverables, not toy projects
We've all seen a ton of vibe-coded demos at hackathons and demo days by now. This isn't one of them. John runs Hasky Tech, building software for SMEs: scheduling, routing, CRM, workflow automation. The kind of boring, high-value systems that small businesses actually pay for.
His target market doesn't care about AI. They care about:
making a problem go away
paying a predictable monthly fee
not dealing with change requests forever
So his pitch to clients is simple:
"I'll do a design sprint for free. I'll build a POC for free. When you're using it with real data, you start paying. $2-5k/month, all-in. Hosting. Maintenance. Feature updates. Done."
That pricing works now because the build cost collapsed.
The old SDLC is optimized for teams. AI is optimized for collapse.
John started by naming the default playbook:
Product writes a spec → Design makes mocks → Engineering builds → QA tests → Security checks → DevOps deploys
Now, with Claude Code, writing code is no longer the slow part.
Communication still is (actually deciding what to build was always the bottleneck).
When you add more people, you don’t just add capacity. You add connections. Meetings. Misinterpretations. Handovers. Throwing more people at a delayed project will delay it even further due to the communications tax. So John’s thesis is simple: If you can run the whole SDLC with one operator and one AI, you’ve mathematically reduced coordination overhead. That speed compounds into a very real competitive advantage.
Introducing the “System PM” SDLC: not product, not engineering, but both.
John calls the new AI operator role a System PM. It is not a classic PM who can’t touch code, nor a technical PM who translates between teams. A System PM is someone who can:
hold the whole system in their head
make good tradeoffs quickly
use the right “process language” so the AI stays on rails
John’s point wasn’t “everyone must become an engineer.”
It was more like: If you can talk fluently across product, design, architecture, QA, and deployment, AI becomes your team.
The framework behind: macro loop + micro loop
John described a dual-layer structure.
Macro loop (market/product):
Discovery → Define the MVP → Build → Use it → Reflect (retro) → Iterate
Micro loop (engineering):
Requirements → Design → Architecture → Build → Test → Deploy
The trick is that every step is defined as a repeatable "skill": internal, reusable instructions for each phase of the build, modeled on how human teams would approach a similar problem:
00: discovery and requirements
01: product analysis (backlog, sprint plan)
02: design (wireframes)
03: architecture (specs, ADRs)
04: build
05: quality (end-to-end tests, code review)
Each skill uses the language of that discipline. Jobs to be done. Domain-driven design. C4 architecture. Smoke tests.

You don't need to be an expert. You just need to know enough of the vocabulary to guide the process. John showed how he chains skills together:
Use 00 product to write requirements.
After that, use 01 product to build the backlog and sprint plan.
After that, use 02 design to build the wireframe.
After that, use 03 architecture to write the architecture spec.
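As a rough mental model, the chain above is just sequential prompt dispatch: run one skill, review, then hand off to the next. A minimal sketch, where `run_skill` is a hypothetical stand-in for invoking a Claude Code skill (here it only records the prompt that would be sent), and the skill names and prompts are illustrative:

```python
# Illustrative sketch of chaining "skills" in sequence.
# run_skill is a hypothetical stand-in for invoking a Claude Code skill;
# here it just records the prompt it would send.

SKILLS = [
    ("00-product", "Write requirements for the overtime calculator."),
    ("01-product", "Build the backlog and sprint plan from the requirements."),
    ("02-design", "Produce ASCII wireframes for the main screens."),
    ("03-architecture", "Write the architecture spec and ADRs."),
]

def run_skill(name: str, prompt: str) -> str:
    """Hypothetical: send `prompt` to the named skill and return its output."""
    return f"[{name}] {prompt}"

def run_chain(skills):
    """Run each skill in order, collecting outputs as checkpoints to review."""
    return [run_skill(name, prompt) for name, prompt in skills]
```

The point of the sketch is the shape, not the code: each step's output becomes the input context for the next, and the human reviews at the seams rather than supervising every line.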
He doesn't babysit every line. He reviews at checkpoints. And when he hits the context limit (around 200k tokens per sprint), he asks Claude: "Give me a continuation prompt." Then he clears the context, pastes the prompt, and keeps going.
John prefers writing his own set of internal skills instead of collecting dozens of pre-written toolchains. This is not because toolchains are bad, but because:
fewer moving parts is easier to trust
you write it so you know what’s in your toolbox
it’s harder to hide “weird stuff” in the workflow
Demo 1: a routing system in two weeks
John showed one of the more complex client builds: a route planning system.
The brief:
upload CSV / Excel from container deliveries
geocode addresses
run route optimization
visualize trucks and routes on a map
allow planners to edit and re-order stops
A planning job that used to eat whole days can now be done in minutes.
And John’s bigger point was not “routing is easy.” Anyone who has tried routing systems knows it’s a headache to deal with geocoding, open source routing engines, cloud costs, constraints, runtime on small servers. But with AI, a lot of these have become more accessible.
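To make the shape of the problem concrete, here is a minimal sketch of the core step: ordering delivery stops with a greedy nearest-neighbor heuristic. This is an illustration, not John's actual system; a real build would use a geocoding API and a proper routing engine, and the coordinates below are invented.

```python
import math

# Illustrative nearest-neighbor stop ordering. Real systems geocode
# addresses and call a routing engine; the coordinates here are invented.

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def order_stops(depot, stops):
    """Greedy nearest-neighbor ordering, starting from the depot."""
    route, remaining, current = [], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda s: distance(current, s))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# From depot (0, 0), the greedy order is (1, 0) → (2, 2) → (5, 5).
print(order_stops((0, 0), [(5, 5), (1, 0), (2, 2)]))
```

Nearest-neighbor is a deliberately naive heuristic; the headaches John mentions (constraints, runtime on small servers, cloud costs) are exactly what the real engines handle.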
Demo 2: an overtime calculator from a photo
He then live-demoed for the room how the workflow actually works.
The brief came from a real client:
Workers fill in time cards by hand.
An HR person spends days every month doing math with a calculator.
Rules: lunch hour, overtime after 5pm pays 1.5x, overnight shifts that bleed into the next day pay 1.5x all day.
John took a photo of the handwritten time card, saved it to a file, and opened Claude Code.
He typed one prompt. Then queued the next skill. Then the next.
By the time he finished presenting, the backlog was written, the sprint was planned, and the build had started automatically.
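The pay rules as stated translate into surprisingly little code. A minimal sketch, with assumptions that are mine rather than the client's spec: times are decimal hours on a 24-hour clock, one unpaid lunch hour is deducted, day shifts start before 5pm, and the hourly rate is a made-up placeholder.

```python
# Minimal sketch of the time-card rules described above.
# Assumptions (mine, not the client's spec): decimal hours on a 24h clock,
# one unpaid lunch hour, day shifts start before 5pm.

BASE_RATE = 10.0   # hypothetical hourly rate
LUNCH_HOURS = 1.0  # unpaid lunch deducted from every shift
OT_START = 17.0    # overtime after 5pm pays 1.5x
OT_MULT = 1.5

def shift_pay(start: float, end: float, rate: float = BASE_RATE) -> float:
    if end <= start:
        # Overnight shift bleeding into the next day: 1.5x on all hours.
        worked = (24.0 - start) + end - LUNCH_HOURS
        return worked * rate * OT_MULT
    regular = min(end, OT_START) - start - LUNCH_HOURS
    overtime = max(0.0, end - OT_START)
    return regular * rate + overtime * rate * OT_MULT
```

Under these assumptions, a 9am-to-6pm shift is 7 regular hours plus 1 overtime hour: `shift_pay(9.0, 18.0)` returns 85.0. The point of the demo is that this logic, plus OCR of the handwritten card, is what used to cost an HR person days each month.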
Four learnings from building production systems himself
1. Where this breaks: deployment is still the boss fight
If code is the easy part, deployment is where reality comes back.
Some problems that still exist:
cloud platforms have different quirks
Claude has “deployment instincts,” but those instincts don’t always match the platform
MCP servers help sometimes, but can stall halfway
So his current bias is toward bare metal:
Spin up a cheap Ubuntu box.
Use a server manager.
Keep the deployment path predictable.
He gave a very specific cost anchor: around S$14/month per server for the stack he’s running.
2. Debugging and quality: separate the “build” sprint from the “quality” sprint
One of the best practical ideas was his debug cadence. Instead of asking Claude to build features, write tests, refactor, harden security all in one pass (and doing 75% of each), he splits it:
Build sprint: get it working end-to-end
Quality sprint: become extremely strict, write end-to-end tests, fix edge cases, document changes
That sounds obvious. But it’s exactly how small teams should work, and most people don’t.
3. The quality tradeoff: fast enough is good enough
John made a deliberate choice: speed over polish. (This is partly because his clients care more about outcomes than aesthetics.) He used to generate polished UI mockups in V0, screenshot them, and feed them back to Claude. Now he prefers to ask Claude to generate ASCII wireframes and build from those.
For his clients, the difference in output quality? Not much. The difference in effort? Huge.
For backend-heavy systems—routing, scheduling, CRM—"good enough" UI is actually good enough. We'll explore UI/UX-heavy systems in a future session at the Stage, given the high interest.

Let me know in the comments if this is something you'd like to learn about too.
4. The security question: “what do you actually control?”
Someone asked the real question: “Okay, but how secure is this?”
John’s answer was pragmatic:
Use reputable infrastructure so you inherit baseline protection.
Run a proper security audit framework (with AI, if you need to save cost).
Lock down the basics: networking, ports, secrets, encryption.
Be honest about what you can’t fully control.
And then make a call. Most SMEs don’t want a $100k compliance process. They want “safe enough” and shipped.
Two hot takes on the future of work
Hot take 1: The uncomfortable implication: smaller teams may outcompete bigger ones
A question near the end pushed beyond tooling into organization design. If 10 people can do the work of 100, what happens to companies built around 100-person structures?
John didn’t pretend to have the full answer. But he did point at a possible upside:
fewer layers
more ownership
more profit share
more agency
If you can thrive in a small, fast org, you might actually win.
Hot take 2: What this means for non-technical builders
John's background was finance, not engineering. He learned AI engineering six years ago, pivoted to solutions architecture, then product leadership.
His take:
"You don't need to become an engineer. You need to understand enough of the whole system to make tradeoffs."
If you can talk fluently about what users need (product), what it should look like (design), how it should be structured (architecture), how to verify it works (testing), how to get it live (deployment), then AI fills in the rest.
He calls this role the System PM—someone who holds the whole project in their head and strings the pieces together.
A challenge to try this week
Pick one “boring” workflow in your world:
scheduling
approvals
routing
reporting
onboarding
data entry
Then do this:
Write the requirements like you’re explaining it to a smart teammate.
Add one example input and one example output.
Ask your AI to propose the micro-SDLC plan.
Ship the v0.
Do a quality pass after you have a working loop.
Ship first. Then harden.
Missed out last week?
Don’t worry, these conversations happen every Friday.
Usually over laptops. Sometimes over pizza.
You’re welcome to join the next one.



