Architecting the Factory
February 17, 2026

If you’ve spent any time reading about technology adoption, you’ve probably heard the story of steam and electricity. Edison built the first commercial generating station in 1882. Electric motors could drive factory machinery. But decades later, most factories still looked the same. The punchline usually lands on the timeline: it took 30-40 years for electricity to produce meaningful productivity gains. The lesson, as typically told, is about patience. Transformative technology takes time. Don’t expect instant results.
That’s fine as far as it goes, but it often skips over the essential part of the story: the change that finally happened after the long 40-year wait.
The reason factories didn’t improve when they adopted electricity is that they didn’t actually change anything. They ripped out the massive central steam engine and bolted an electric motor in its place. The factory itself stayed the same: the same multi-story layout organized around drive shafts and belts, the same workflow dictated by the physics of centralized power. New energy source, old design.
It shouldn’t have been surprising when productivity barely moved.
The breakthrough came when engineers stopped asking “how do we electrify this factory?” and started asking “what kind of factory does electricity make possible?” Those are fundamentally different questions. The first preserves the existing design. The second reimagines it.
And the answers were practical. Electricity didn’t need a central source. Each machine could have its own small motor, started and stopped independently. Machines didn’t need to cluster around a drive shaft. The factory could be arranged around the logic of production, the flow of materials from one step to the next, rather than the logic of power distribution. Single-floor layouts replaced multi-story buildings. Assembly lines became possible. Workers gained autonomy because they controlled their own machines instead of being governed by the pace of a central engine.
The economist Paul David documented this in his landmark 1990 paper, “The Dynamo and the Computer.” His central insight wasn’t about patience or timelines. It was that the productivity gains didn’t come from the new technology at all. They came from redesigning the entire system, the factory, the management, the training, the incentives, around what the new technology made possible.
The technology was the easy part. The factory was the hard part.
We’re Bolting on the Motor
I’ve been thinking about this story a lot as I watch organizations adopt AI agents. I wrote recently about the agentic maturity curve, the progression from using AI as spicy autocomplete to running what Dan Shapiro calls “the dark factory,” where agents handle entire workflows autonomously. And I wrote about the coordination crisis that emerges when production time compresses but decision-making doesn’t keep up.
Those two posts are really two halves of the same argument: the technology is moving faster than the organizations using it. And most organizations are responding exactly the way those early factory owners did. They’re ripping out the steam engine and bolting in an electric motor.
They’re plugging AI into existing workflows. Having agents do what humans used to do, in the same structure, with the same reporting lines, the same approval chains, the same coordination mechanisms. It’s the obvious move, but it only makes some parts of the system faster without changing anything else. The factory can’t move faster than its slowest choke point. This produces disappointing results and leads to the conclusion that AI is overhyped.
The AI isn’t overhyped. The implementation is underdesigned.
We’re no exception. Here’s what it looked like for us. We had an internal org chart tool that kept breaking. Periodic data syncs, brittle workflows, the kind of thing that took weeks to build and failed sporadically. When it broke again, we rebuilt it in a day using live internal APIs. No data syncs. Direct connection to the source of truth.
That same week, someone raised a different problem: finding employee headshots for client proposals, currently scattered across disorganized shared folders. Because we’d built around live APIs instead of static exports, adding employee profiles with searchable headshots was a few hours of work. The redesigned foundation made an unrelated problem solvable almost for free.
Then we hit a wall. Deploying the tool securely required several slow internal processes: provisioning a new Azure application and integrating it into our authentication. That took longer than building the tool itself. In the past, deploying anything meant a procurement request for a cloud instance or a back-and-forth with DevOps for Kubernetes. We’ve since adopted a deployment platform where security is baked into the infrastructure, not bolted on by each developer. The default is now secure, with teams requesting wider permissions when needed instead of the other way around.
One day to rebuild the tool. One day to extend it. Weeks waiting on the old governance infrastructure to catch up. That’s the gap.
The Factory Is the Organization
Transformative technology demands transformative design. Not just of the technical systems, but of the organizational systems those technical systems live inside.
When electricity enabled individual motors on each machine, it didn’t just change the factory floor. It changed management. The shift from centralized power to distributed power required a corresponding shift from centralized control to distributed authority.
The same is true for AI. When agents can produce work in hours instead of weeks, you haven’t just changed the production layer. You’ve changed the coordination, decision-making, and trust layers. Our org chart story is a small example: building the tool was the easy part. Every layer around it, security, deployment, governance, was still designed for the old speed. If you don’t redesign those layers, you get expensive new technology producing the same old results.
Designing the Factory, Not Just Installing the Machines
Architecting the factory requires being deliberate about both your technical architecture and your people architecture, and understanding that they have to move together.
On the technical side, this means designing your agent systems with the same care you’d give any critical infrastructure. Not every process needs an agent. Not every agent needs autonomy. The dark factory model works at FANUC because they manufacture standardized robots in a tightly controlled environment: the work is predictable, the inputs are consistent, and the quality criteria are well-defined. Most knowledge work doesn’t look like that. Good technical design means knowing where on the maturity curve each process belongs and building accordingly. Level 2 collaboration for ambiguous creative work. Level 4 specification-to-shipping for well-understood repeatable processes. Level 5 only where you’ve earned the right to turn off the lights.
Even FANUC’s fully autonomous lines have built-in conditions that halt production when something deviates from spec. The most important design decision in any autonomous system isn’t what it can do. It’s the conditions under which it stops and escalates. Agent design needs that same discipline. An agent that confidently produces wrong output is worse than one that stops and asks.
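The stop-and-escalate discipline can be sketched in a few lines of code. This is a hypothetical illustration, not any particular agent framework's API; the names (`AgentResult`, `confidence`, `failed_checks`, `CONFIDENCE_FLOOR`) are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent's result carries its own self-assessment,
# and the dispatch layer decides whether to ship or stop and escalate.
@dataclass
class AgentResult:
    output: str
    confidence: float                      # self-reported confidence, 0..1
    failed_checks: list = field(default_factory=list)  # validation rules not satisfied

CONFIDENCE_FLOOR = 0.8  # below this, a human reviews instead of shipping

def dispatch(result: AgentResult) -> str:
    """Ship autonomously only when every halt condition is clear;
    otherwise stop and ask rather than confidently produce wrong output."""
    if result.failed_checks:
        return f"ESCALATE: failed checks {result.failed_checks}"
    if result.confidence < CONFIDENCE_FLOOR:
        return "ESCALATE: low confidence, human review required"
    return f"SHIP: {result.output}"

print(dispatch(AgentResult("proposal draft", 0.95)))            # ships
print(dispatch(AgentResult("proposal draft", 0.95, ["tone"])))  # escalates
```

The point of the sketch is that the escalation path is designed first: shipping is the branch you reach only after every reason to stop has been ruled out.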
We rebuilt our client proposal process along these lines. For years, people assembled proposals from a massive master PowerPoint. Most had their own personal fork because working with the master was so painful. Same content, dozens of versions. We broke the deck into modular components, each described well enough for an AI agent to assemble. Now the agent pulls the right pieces, customizes them for the engagement, and outputs whatever format the client needs. As a bonus, those components became a searchable knowledge base for how we handle different types of work. The old process was organized around the constraints of PowerPoint. The new one is organized around the flow of the work.
On the people side, this is where most organizations underinvest. As I argued in Fast Work, Slow Decisions, the real bottleneck in an AI-augmented organization isn’t production capacity. It’s decision-making and alignment capacity. Addressing that requires building a stack: trust first, then clear ownership, then feedback mechanisms that separate authority from input, then measurement for alignment rather than control.
These aren’t separate workstreams. Autonomous agents require distributed authority. Distributed decisions require distributed context. Trust requires visibility. The technical architecture enables the people architecture and vice versa.
The Restraint Problem
Here’s the part that goes against every instinct in a hype cycle: good design requires restraint.
The temptation with any powerful new technology is to deploy it everywhere at once. Every team gets agents. Every workflow gets automated. Every process gets optimized. This is the organizational equivalent of electrifying every machine in the factory on the same day. Technically possible, practically chaotic.
The factories that successfully transitioned to electricity didn’t do it all at once. They started where the advantage was clearest, learned what worked, adapted their management, then expanded. The transformation took time not because the technology was slow, but because organizational learning is slow.
Organizations adopting AI agents need the same discipline. Start where the gap between current performance and potential is largest and the risk of failure is most manageable. Assign clear ownership, not just of the agents, but of the outcomes the agents are meant to produce.
Then treat each redesigned process as a learning opportunity, not just a delivery. What broke during the transition? Where did people resist, and were they right to? Where did approval chains or handoff points slow things down? That learning feeds the next redesign. The org chart rebuild taught us that our deployment and security infrastructure was the real bottleneck, which led directly to rebuilding our deployment platform. The proposal redesign taught us that modular components create value beyond the original use case. Each process you redesign should produce not just better output but better understanding of how your organization adapts to change.
This is harder than it sounds because restraint doesn’t make for exciting board presentations. “We automated three processes really well” is a less compelling story than “we’re deploying AI across the enterprise.” But the first approach builds the organizational muscle for sustained transformation. The second builds expensive shelf-ware.
The Path to the Dark Factory
The path to whatever level of agentic maturity your organization needs, whether that’s Level 3 code review management or Level 5 lights-out autonomy, isn’t a technology purchase. It’s a commitment to redesigning how you work.
That means changes in your people architecture: who makes what decisions, how authority is distributed, how trust is built, how feedback flows, how you measure success. And it means changes in your technical architecture: how agents are designed, what they can and can’t do autonomously, how they integrate with human workflows, how you monitor quality and catch failures.
Neither architecture works without the other. An organization with perfectly designed AI agents but a command-and-control management structure will bottleneck at every approval gate. An organization with beautifully distributed authority but no agent infrastructure will just be making faster decisions about slower work.
The companies that get this right will look like those redesigned electric factories of the 1920s, organized around the flow of value rather than the constraints of their power source, with workers empowered to operate autonomously because the systems support it and the culture trusts it. U.S. manufacturing productivity leapt in that decade, four decades after the commercialization of electricity. The gains came not from better motors, but from better factories.
We’re still in the “bolting on the motor” phase of AI adoption. The real gains are ahead of us. But they won’t come from better models or more capable agents. They’ll come from better organizations, ones that had the discipline to architect the factory, not just install the machines.
This post builds on ideas from The Agentic Maturity Curve and Fast Work, Slow Decisions. Paul David’s paper, “The Dynamo and the Computer,” was published in the American Economic Review in 1990 and remains one of the most cited works on technology adoption. If you’re thinking about how to redesign your organization for AI, not just deploy it, I’d love to hear what you’re learning. Reach out on LinkedIn or Bluesky.