The Complexity Machine
- huangpf
- 5 days ago
- 6 min read

When Farhad, the speaker at last Friday's Co-work Friday, was a consultant, he was part of a team building a large enterprise from scratch, along with a digitally native platform on top of it. The ambition was large: own the entire customer journey, from the "why" all the way through booking and transaction.
He spent years reflecting on his experience on that project. He initially thought it was a technical problem. Then he realised it was a translation problem. And then he built a framework for it.
The enterprise exists to absorb complexity, not eliminate it.
Think about it like dinner. Without any business abstraction, you're exposed to the full complexity of eating: hunting, gathering, preparing, cooking, disposing. Then the grocery store comes along and absorbs the supply chain complexity for you. Then the restaurant absorbs preparation. Then UberEats absorbs location and time constraints.
Each layer is a new business that exists because it took on complexity so the customer didn't have to.
This reframe matters because most organizations are busy trying to reduce internal complexity: flattening teams, cutting tools, trimming processes. But the real question isn't "how do we simplify?" It's "what complexity are we absorbing, for whom, and at what cost?"
He pointed back to the enterprise project, where the engineering team kept throwing compute at a caching problem: more clusters, more regions, more money. They produced sublinear results every time. The breakthrough only came when they stopped optimizing at the technical layer and instead formalized the business objective the cache was supposed to serve.
The problem: Time to Live (TTL), one of the most important parameters in cache design, is a technical knob, not a business one. The people setting it were engineers who weren't close to the business strategy, and the executives whose priorities should have shaped this small decision were never involved.
The business objectives were much larger:
Which customer segments were high-value
Which channels had the highest margin
How much latency in price accuracy was actually tolerable vs. damaging
None of this was feeding into the TTL decision. The engineers were optimising for infrastructure costs, while the business was losing customers who got outdated prices. External aggregator platforms that served up their services, aware that the API might return stale data, began thrashing it with repeated requests instead of trusting a single response, creating millions of additional requests and making the problem worse.
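To make the failure mode concrete, here is a minimal TTL cache, a sketch with invented names rather than the project's actual system: once an entry is written, it is served unchanged until it expires, no matter how far the real price drifts underneath it.

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries are served as-is until they age out."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, inserted_at)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, inserted_at = entry
        if time.monotonic() - inserted_at > self.ttl:
            del self._store[key]  # expired: caller must refetch
            return None
        return value  # possibly stale relative to the real price
```

Nothing in this structure knows whether a price is volatile or a customer is high-value; the single `ttl_seconds` number silently encodes all of those business judgements.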
Farhad's team then built a layer that expressed the cache problem in business language:
Traffic tolerance: how much variability in request volume is acceptable?
Customer segmentation: which users are high-value and should get fresher data?
Route priority: which channels should be protected from stale pricing?
Eviction policy: given finite resources, what should be removed first, and when?
These are questions any business stakeholder can answer.
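A translation layer like this can be sketched as a small rule-based function. Everything below is hypothetical (the segment names, the margin threshold, the base TTL); the point is only that the cache parameter falls out of answers a stakeholder can give, rather than an engineer's guess.

```python
BASE_TTL_SECONDS = 300  # default freshness window (illustrative)

def ttl_for_request(segment: str, route_margin: float) -> int:
    """Derive a cache TTL from business-level answers.

    segment:      customer segment; "high_value" users get fresher data
    route_margin: channel margin in [0, 1]; high-margin routes are
                  protected from stale pricing
    """
    ttl = BASE_TTL_SECONDS
    if segment == "high_value":
        ttl //= 4           # high-value users see fresher prices
    if route_margin > 0.3:
        ttl = min(ttl, 60)  # high-margin channels tolerate little staleness
    return ttl
```

With this shape, changing "how stale can a high-value customer's price be?" is a one-line business decision, not an infrastructure project.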
This is not a new concept in software architecture. It's essentially what Domain-Driven Design argues for when it insists on a "ubiquitous language" shared between domain experts and engineers. The reason most enterprises don't do it: it requires people on both sides to spend time building that shared vocabulary, and in fast-moving organisations, that time is always the first thing cut.
In Farhad's case, the problem looked like "we need to handle more requests per second." The real problem was "we need a way to express business priorities in engineering terms." Those are different problems with very different solution spaces.
Alignment first, then execute
If you scale two misaligned teams independently, you get the sum of their individual outputs. But if you align them first and then scale, you get something super-linearly larger. It's the difference between (a + b)² and a² + b², and it's not even close.
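The gap between the two is just the binomial cross term: (a + b)² = a² + 2ab + b², so aligned scaling beats independent scaling by exactly 2ab, the part that only exists when the teams multiply each other's output. A quick check:

```python
a, b = 10, 12

aligned = (a + b) ** 2      # teams compound: (a + b)^2
independent = a**2 + b**2   # teams add: a^2 + b^2

# The difference is the cross term 2ab, the "alignment dividend".
gap = aligned - independent
assert gap == 2 * a * b
```

The bigger each team gets, the bigger the dividend: the cross term grows with both a and b.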
Farhad broke the room into groups to wrestle with a deceptively simple question: what does alignment actually mean?
The answers were all over the place:
One group landed on shared language. Not just agreeing on a direction, but agreeing on what the words mean. "We leave meetings thinking we're aligned," one participant said, "but nobody checks — did you understand the same thing I did?" Without a common glossary, teams drift apart while believing they're moving together.
Another group went straight to motivation and mission. A sports analogy surfaced: a rowing team where every stroke must be synchronized or the boat stalls. It's not enough for everyone to row hard. They have to row together.
Another brought up Steve Jobs and the idea of reality distortion: that sometimes alignment is just one person with a vision so strong that everyone else falls in line. Not gentle. Not democratic. But effective.
Farhad framed his understanding mathematically: alignment is a function that must be applied before execution. Like Jensen's inequality, where a convex function applied before or after averaging gives different results, the order of operations matters. Align first, execute second, and you unlock convex, super-linear results. Reverse the order, and you're just scaling chaos.
Most enterprises pour money into execution. The smart ones also invest in alignment first.
Questions from the floor
As expected of a group of AI builders, one participant floated a question that landed like a grenade: what if you can scale compute and AI agents so fast that alignment doesn't even matter anymore?
The idea goes like this. Instead of spending months aligning departments, you spin up dozens of autonomous agents, each exploring a different direction. Run them in parallel for a week. Keep the ones that work. Kill the rest. It's a genetic algorithm applied to business strategy. Brute force alignment through sheer volume of experimentation.
Farhad recognized the pattern immediately: "You're running gradient descent on your organization." And the logic holds. If compute is cheap and agents are capable, why bother aligning humans at all?
Another participant pushed back with a sharper edge: legacy enterprises have absorbed too much unnecessary complexity. They're bloated with institutional processes that once solved real problems but now just create drag. If you were building the same company from scratch today, you'd need maybe 10% of the headcount. Maybe 1%. The rest could be agents.
Take-away for me: Prior enterprises solved for a lower level of complexity. The opportunity now isn't to replicate what they built more efficiently, but to see what complexity your customers actually face today and build the abstraction for that.
So maybe the enterprise isn't dead. Maybe it just needs to stop absorbing yesterday's complexity and start absorbing tomorrow's. The builders who figure out that distinction are the ones who'll define what "scale" means next.
Complexity Is Not Temporary
The cache problem was a symptom of something larger.
Farhad's broader argument is that enterprise complexity is not a temporary condition that good engineering will eventually solve. It's a fundamental progression.
Supply chains are growing. Customer expectations are compounding (they want more, faster, personalised). Product cycles are compressing. Every business layer added is also a new surface for incompatibility. And if you plot the trajectory forward, the complexity doesn't plateau, it accelerates.
The Santa Fe Institute's work on complexity science has been making this argument about complex adaptive systems for decades: complex systems don't simplify under pressure, they evolve new layers of structure. Enterprises are no different.
The knee-jerk response to complexity is more compute. Bigger cache clusters. More regions. More redundancy. Farhad watched this happen at the enterprise he consulted with: spinning up more cache servers didn't address the root cause. The root cause was that engineers were optimising at the wrong level, because nobody had translated business strategy into technical parameters. The other cause was that the departments' goals were misaligned with one another.
What now?
The talk didn't have a clean ending. Farhad was clear that he hasn't fully solved this. It's a problem that's stayed with him for years, showing up differently in each new context. The best kind of problem to spend time on.
If you're building anything at scale — technical or organisational — the question of how to keep complexity from eating you alive is not going away. It's worth spending a Friday afternoon thinking about it seriously.
Look forward to part II of this ongoing series.
Missed out last week?
Don't worry, these conversations happen every Friday at SQ Collective.
Usually over laptops. Sometimes over pizza.
You're welcome to join the next one.



