Who bears the consequences?
What Block’s AI org chart gets right, and what it can’t answer yet
Jack Dorsey and Roelof Botha published a piece last week about replacing hierarchy with AI at Block. It’s ambitious, well-researched, and I think they’re asking the right question. I just think there’s a harder question underneath that the essay doesn’t fully resolve.
Their argument: hierarchy has always been an information routing system. Romans, Prussians, railroads, modern corporations - all of them solved the same problem of coordinating people through layers of human managers passing information up and down. AI can now do that routing instead. So you don’t need the layers anymore. Replace middle management with a “world model,” flatten the org, put everyone on the edge.
I’ve been chewing on this for years, not just as theory. I tried building a Sociocratic organization at Opera. Sociocracy replaces consensus with consent - you don’t need everyone to agree, you need nobody to have a principled objection. It’s elegant. It’s intellectually honest. And in practice, it works until it doesn’t. At a certain scale, someone still has to make the call that nobody wants to make. The mechanism for reaching agreement isn’t the bottleneck. The willingness to own the consequence is.
That’s where the essay gets interesting, and where it gets harder to pin down. The strongest idea in the piece is that Block’s transaction data is an honest signal. People lie on surveys, ignore ads, abandon carts - but when they spend, that’s truth. I genuinely think that’s right. That’s a real compounding advantage, and I can’t poke a hole in it.
But the leap from “we have great data” to “we can replace hierarchy” is where I start having questions. The essay proposes a “world model” that replaces what managers do. But who decides what the model optimizes for? Who intervenes when the model is wrong in a way that costs real trust? Who says no to a project that the data supports but judgment says is wrong?
I actually think you could encode principles for AI to manage by - something like a smart contract for organizational decision-making. A written-down set of rules, priorities, constraints. And if you think about my earlier posts on operators, there’s a version of this where the manager’s role shifts from information routing to making sure the vision actually gets carried out across hundreds of people. That’s not a lesser role - it’s a different one. And if AI can handle the routing part, managers can focus on the part that actually requires human judgment: developing people, navigating ambiguity, and making the calls the system can’t.
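To make that concrete, here’s a rough sketch of what encoded principles might look like. Everything in it is my invention - the rule names, the thresholds, the escalation fallback - none of it comes from Block’s essay. But it shows the shape of the idea: the system handles routine calls on its own, and anything above a certain stakes threshold gets routed to a named human.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    """A proposed action the world model wants to take."""
    description: str
    estimated_cost: float   # dollars at risk if this goes wrong
    reversible: bool        # can we cheaply undo it?
    data_confidence: float  # 0.0-1.0, how strongly the data supports it

@dataclass
class Principle:
    """One written-down rule: a constraint plus what happens when it trips."""
    name: str
    violated_by: Callable[[Decision], bool]
    action: str             # "reject" or "escalate"

# Hypothetical principles - names and thresholds invented for illustration.
PRINCIPLES = [
    Principle("no-irreversible-bets-on-thin-data",
              lambda d: not d.reversible and d.data_confidence < 0.8,
              action="escalate"),
    Principle("spend-cap-without-human-signoff",
              lambda d: d.estimated_cost > 100_000,
              action="escalate"),
    Principle("never-act-below-minimum-confidence",
              lambda d: d.data_confidence < 0.3,
              action="reject"),
]

def route(decision: Decision, human_owner: str) -> str:
    """The AI handles routine routing; a named human owns the exceptions."""
    for rule in PRINCIPLES:
        if rule.violated_by(decision):
            if rule.action == "reject":
                return f"rejected by rule '{rule.name}'"
            return f"escalated to {human_owner} under rule '{rule.name}'"
    return "auto-approved"  # within the encoded principles, the AI proceeds

# A risky, irreversible bet gets routed to a person, not auto-approved.
bet = Decision("sunset a legacy product line", estimated_cost=250_000,
               reversible=False, data_confidence=0.6)
print(route(bet, human_owner="the DRI"))
# -> escalated to the DRI under rule 'no-irreversible-bets-on-thin-data'
```

Notice what the sketch can’t escape: the escalation target is still a person. The rules can route and filter, but someone has to be the name at the end of the escalation path. That’s the accountability question in miniature.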
The essay proposes three roles: individual contributors, DRIs (directly responsible individuals), and player-coaches. No permanent middle management. The DRI has full authority to pull resources across teams for 90 days. That’s compelling on paper. But someone decides who becomes the DRI. Someone decides which problems matter enough. Someone evaluates whether they succeeded. I’m not sure that’s as different from hierarchy as it sounds.
Many have promised this revolution before. The internet was supposed to flatten organizations. Slack was going to eliminate information silos. Holacracy tried to remove management titles entirely. Spotify had squads. I tried Sociocracy. Every attempt taught me something, and every one eventually hit the same wall: coordination at scale needs someone willing to bear consequences, not just route information.
And that’s the question the essay doesn’t fully answer. AI can route information better than any human manager. I believe that. AI can probably maintain a world model of company operations that’s more accurate and more current than any executive’s mental model. I believe that too.
But can AI bear consequences? Can it be the one who says “this was my call and it was wrong”? Can it face the team after a failed bet and explain what it learned? Can it fire someone? Can it choose to take a risk that the data doesn’t support because something in the situation demands it?
Not yet. Maybe not ever. I genuinely don’t know.
What I do know is that the cost of being wrong about this is asymmetric. If you flatten too slowly, you lose some speed. If you flatten too fast and remove the humans who own the consequences, you get a system that optimizes beautifully until the moment it fails catastrophically and nobody owns the failure. Taleb would call that a fragile system disguised as an efficient one.
I think Dorsey is right that organizations are too slow. I think he’s right that AI can take over a huge chunk of what middle management does. But there’s a distinction worth drawing between AI as a better coordination mechanism (probably true, already happening) and AI as a replacement for human accountability (an extraordinary claim with no evidence yet). The first is an operational improvement. The second is an organizational revolution. They’re not the same thing.
It’s a great debate to have. I suspect the answer will be somewhere in between - not the hierarchy we have, and not the flat intelligence the essay describes, but something messier. Something where AI handles coordination and humans handle the consequences. Which, if you think about it, is just the operator model applied to organizations.

