TOM CRITCHLOW

Artificial Management

A Meditation on the Future of Knowledge Work and Finding Flow State

“The future ain’t what it used to be.” —Arthur C. Clarke

The famed sci-fi writer’s words have become a universal truth. Not only has the future changed, but there frequently is less of it than meets the eye.

  • From the 1980 Lowe’s annual report

https://corporate.lowes.com/sites/lowes-corp/files/Annual%20reports%20and%20proxy%20statements/Lowes_AR_1980.pdf

The office ain’t what it used to be.

The office isn’t the place it used to be. I mean, it hasn’t been a real place most of the time since COVID. But recently the coherence of the office has been peeling away. Like the clocks in a Dalí painting dripping to the floor, there is a sense of dislocation and decoherence.

**

Cybernetic organizations

https://corporate.lowes.com/sites/lowes-corp/files/Annual%20reports%20and%20proxy%20statements/Lowes_AR_1969.pdf

**

What is happening to knowledge work?

My central thesis is fourfold:

  1. Work is getting faster. A single individual can accomplish more now than they ever could before! We are cybernetic athletes sprinting through work, doubling and tripling output.
  2. This increased tempo is frying our brains. Studies suggest that people using AI end up “buzzy” and fried in a way that traditional work doesn’t produce.
  3. Everyone is a manager now, juggling multiple overlapping workstreams.
  4. Coordination is becoming harder as a result.

The Tempo of Work

Coordination Costs

The firm exists

Management is a temporal technology

Taylorism, clocks, punching in and out

Management is a social technology

The unsettling thing about AI at work is not only that it accelerates production. It is that it disturbs our old ways of reading one another. A polished document no longer signals the same effort. A sharp summary no longer proves the same understanding. A fast first draft no longer reveals the same preparedness. The social cues of knowledge work are being scrambled.

Which means management has to change too. Because management was always, in part, a way of interpreting signals: who gets it, who needs help, what is blocked, what is real, what is theater, what is promising, what is dangerous. In the model era, many of those signals become noisier.

And so the manager’s job rises a level. Less checking the work. More designing the conditions under which work can be trusted.

AI expands individual agency faster than collective coherence. The manager becomes a designer of systems, not just a supervisor of people. The core scarce resource shifts from labor to judgment.

Artificial Management is not a claim that machines will manage us. Not exactly.

It is a name for the growing recognition that management itself is becoming more artificial: more mediated by systems, more dependent on explicit protocols, more entangled with machine judgment, and more responsible for shaping how humans and models think together.

The old fantasy was that AI would remove the need for management. The more interesting possibility is the reverse: that AI makes management more important, more visible, and more intellectually serious than it has been in decades.

Not because there is more work to supervise. But because there is more intelligence to coordinate.

So, lines of inquiry for how knowledge work and management evolves in an age of AI:

  • Evals
  • Interfaces of coordination

Evals are a management skill

In an AI-rich organization, the core managerial skill is no longer just delegation. It is evaluation. Not “did the work get done?” but: Is this output any good? Against what standard? Compared to what? With what confidence? At what cost? Fast enough for the tempo of the work, but slow enough for trust?

Codified Judgment

When work happens at the speed of inference, you can’t rely on meetings, human judgment, and course-correction alone. Codifying judgment, belief, direction, and vision in a way that lets cybernetic workers retain some semblance of coherence becomes the central task.

Some Notes on Artificial Management

I keep coming back to one old line from Mintzberg: “brevity, variety, and fragmentation.” (source)

It is a very good line because it feels true even before you start analyzing it. You can feel it in your body. The clipped meeting. The half-finished memo. The five open tabs. The Slack thread that mutates into a decision, then into a task, then into a problem for someone else. It is one of those rare bits of management writing that actually describes the lived texture of work.

What has been rattling around my head recently is that AI seems to be intensifying all three conditions at once.

Brevity, because output comes faster.

Variety, because the range of possible actions expands.

Fragmentation, because every gain in local productivity seems to produce more branching, more supervision, more review, more integration work somewhere else.

This is the strange thing about the current moment. AI clearly makes individuals faster. That part is real. Brynjolfsson, Li, and Raymond found that access to AI assistance increases productivity “by 15% on average.” (source) That tracks with what a lot of us are seeing. Drafts come faster. Code comes faster. Research compresses. A single person can get through a surprising amount of work in an afternoon.

And yet team productivity still feels muddy.

Not always. Not everywhere. But enough that it is hard to ignore. People sound faster and more tired at the same time. Work moves with more velocity and less coherence. There is more output and, somehow, more management.

Maybe that’s the first thing to say clearly: AI is not removing management. It is distributing management everywhere.

Not management in the title-and-org-chart sense.

Management in the plain sense. Routing attention. Managing dependencies. Deciding when to intervene. Figuring out who owns what. Figuring out whether the thing is actually done. Figuring out whether the rest of the organization can absorb what just got produced.

This is where Coase starts to matter again.

The old Coase question is simple enough: why do firms exist at all? Because coordination is expensive. Markets are not frictionless. Every transaction has a cost. Every handoff takes time. Every dependency has to be managed somewhere. Firms exist, in part, as coordination machines.

That feels newly relevant because generative AI lowers one kind of cost very aggressively while leaving the others stubbornly intact.

It lowers the cost of producing drafts, options, analyses, code, mockups, summaries.

It does not automatically lower the cost of trust.

It does not automatically lower the cost of sequencing.

It does not automatically lower the cost of shared judgment.

It does not automatically lower the cost of getting five people, or fifty people, to move in a legible direction at the same time.

So you get a peculiar result: personal productivity rises while coordination costs remain annoyingly human.

Or maybe they get worse.

That possibility has been living in the back of my head for a while. Clay Parker Jones gets at one piece of it with his line that role clarity is “a symptom of relational poverty.” (source) I like that formulation because it shifts the question away from org-chart hygiene and toward the actual human infrastructure underneath work. Maybe the issue is not that people do not know their roles. Maybe the issue is that the work is becoming denser, faster, and more overlapping than the relationships around it can hold.

That would explain why AI feels so good in solo mode and so weird in team mode.

It would also explain why time has started to feel different.

This is the other thread I can’t shake. The AI story is usually told as a story about intelligence or labor or software. I think it is also a story about tempo.

One of the useful phrases in Moleitau’s “Gas Town and Bullet Hell” is “wall-clock time.” (source) An agent can burn through an absurd amount of computation in a day. The human still gets “the same twenty-four hours it always was.” (source) Same body. Same Tuesday. Same limits on attention and energy.

That gap matters.

The HBR “brain fry” piece sharpens this in a useful way by making a distinction that I think will matter more and more: some patterns of AI use reduce burnout, others create cognitive fatigue. (source) The problem is not AI in the abstract. The problem is a particular supervisory tempo. A person gets asked to review more branches, monitor more parallel actions, absorb more possible futures, all inside the same biological day. Or, as one early user put it, it was “moving too fast for me.” (source)

The machine’s time and the human’s time drift apart.

And once that drift appears, the feel of work changes.

The workday starts to break into more fragments. More maybes. More partial decisions. More little supervisory jolts. More time spent keeping things coherent. Mintzberg’s line comes back around here in a slightly darker register: brevity, variety, and fragmentation stop describing management as a specialized role and start describing knowledge work in general.

This is where I keep wanting to say that the bottleneck is synchronization.

Not production. Synchronization.

That phrase feels useful because it points at the thing that keeps not improving, even as everything around it accelerates. Individuals can move faster. The organization still has to align rhythms, resolve dependencies, absorb change, and maintain some shared sense of what is happening.

And that last piece feels especially unstable right now because the future itself has gotten harder to hold.

This is not just a question of shorter planning cycles, though that is part of it. Models improve weekly. Interfaces mutate monthly. Best practices decay before they can harden into process. It is hard to plan when the ground underneath the plan keeps shifting.

But there is something deeper going on too.

The future has collapsed.

Or maybe: the shared future has collapsed.

By that I mean the organization no longer has a stable, collective picture of what is coming next. Not because nobody is smart enough. Because the object itself is unstable. The horizon has shortened and multiplied at the same time. Product teams, executives, operators, finance people, researchers, vendors, and customers are all staring at different versions of the near future. Everyone can sense acceleration. Very few people feel like they are living inside the same timetable.

That is part of why work feels so jagged.

The old managerial imagination was built around the idea that organizations could create a common clock: quarter, review, planning cycle, launch date, maintenance window, fiscal year. Read enough late-60s and early-70s annual reports and you can feel how deep that assumption ran. Monsanto talks about “careful analyses of profit opportunities” (source) in exactly this register. Time was not just something the company moved through. It was something management could standardize, route, and discipline.

Apollo became the glamorous version of that faith.

This is one reason I keep circling back to the moon landing. Not because I want a nostalgic opening scene, but because Apollo condensed a whole management worldview into one image. If you listen to the Flight Director loop, you hear a system whose legitimacy comes partly from its timing. Every role named. Every handoff explicit. Every exception routed. Every voice moving inside a shared temporal order.

In other words, Apollo did not just dramatize coordination. It dramatized synchronized coordination.

And maybe that is the real contrast with AI work.

AI gives us local acceleration without shared tempo.

Apollo gave us shared tempo under extreme complexity.

Roger Launius is helpful here because he reminds us that the methods did not travel cleanly. James Webb and others looked at Apollo and asked some version of the obvious question: if we can do that, why can’t we solve cities, poverty, health care? But Apollo was not generic management genius. It was a very specific coordination environment with bounded goals, thick telemetry, explicit authority, and extraordinary consensus.

Still, the lesson lingers.

Not “copy NASA.”

Something more modest.

If the bottleneck is synchronization, then management comes roaring back into view. Not as bureaucracy, exactly. More like interface design for collective intelligence.

That phrase still feels right to me.

Because what is missing in a lot of AI-native work is not output. It is interface quality. Handoffs. Escalation paths. Shared context. Named roles. Clear points of translation. Some equivalent of CAPCOM, even if it looks nothing like a control room in Houston.

I don’t mean hierarchy for its own sake. I mean legibility under conditions of speed.

I mean designing work so that acceleration in one part of the system does not simply become fragmentation everywhere else.

I mean being much more deliberate about cadence. What has to be real-time? What should batch? What can wait overnight? Who gets to set the pace? What kinds of reversibility or slack make it possible to move quickly without frying the human beings in the loop?

This is why I keep wanting to use the phrase artificial management.

Not because it sounds futuristic.

Because it points at the real design problem: how to coordinate humans, models, agents, memory, and decision rights when they are all moving at different speeds and when the future they are supposedly moving toward is itself unstable.

I don’t think I have the full theory for this yet. That’s fine. This feels like the beginning of a line of inquiry, not the end of one.

But a few things feel sturdy enough to keep.

AI intensifies brevity, variety, and fragmentation.

Personal productivity gains do not dissolve coordination costs.

The lived experience of work is changing because tempo is changing.

The future has collapsed into a series of unstable near-futures.

And the bottleneck, more and more, looks like synchronization.

Which brings me back to the moon landing, one more time.

Apollo made coordination feel solved.

AI is making coordination strange again.

That doesn’t feel like a side effect to me.

It feels like the story.

Notes

What is happening to knowledge work?

Much like the rest of society, office work no longer has much to do with the office. We have moved from a physical office to a virtual one - the logic of work was once calendar days and is now an always-on stream of notifications.

Space has become a space of flows:

Our societies are constructed around flows: flows of capital, flows of information, flows of technology, flows of organizational interactions, flows of images, sounds and symbols. Flows are not just one element of social organization: they are the expression of the processes dominating our economic, political, and symbolic life. … Thus, I propose the idea that there is a new spatial form characteristic of social practices that dominate and shape the network society: the space of flows. The space of flows is the material organization of time-sharing social practices that work through flows. By flows I understand purposeful, repetitive, programmable sequences of exchange and interaction between physically disjointed positions held by social actors.

https://felix.openflows.com/html/space_of_flows.html

Knowledge work is the same - “purposeful, repetitive, programmable sequences of exchange and interaction” - mediated by streams of information: Slack, Zooms, notifications, pings, dings and rings.

Everyone is a manager now

  • “The manager’s work is characterized by brevity, variety, and fragmentation.” (Hib510 Week 9)

https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

Attention is all you need

Coordination Costs

https://komoroske.com/slime-mold/

Coordination becomes harder when everyone is managing not only other humans but agents in the mix too. The organization becomes less legible. Things move faster and break spontaneously.

https://www.cpj.fyi/essays/the-end-of-role-clarity/

Smaller teams

The 2026 Reality is Different Though

But for all the talk of using AI to make work more efficient and to replace cumbersome processes, the reality of 2026 is in fact characterized by:

  • Teams and individuals having to learn a brand new technological paradigm (how do we learn how to use AI tools?)
  • Companies having to build net new processes and governance (can the marketing team have their own database and AWS instance?)
  • Prototyping and building new tools and processes that fail in brand new ways (wait we have to build our own evals now?)

I can easily believe that early adopters and certain key individuals feel like this has helped them move faster, better. Get more work done. Do better work. But for any organization of any size, learning these new capabilities and building these new processes and teams has been a ton of work.

Work slop and slop cannons

https://x.com/danhockenmaier/status/2021617680525172840

https://newmba.co/2023/10/11/exec/

https://frankchimero.com/blog/2025/beyond-the-machine/

Wanting and ambition

https://www.snowbird.global/the-dodos-bargain-trading-flight-for-certainty/?ref=flyways-by-snowbird-newsletter

Here’s a provocative way to get there: in an “everyone has a team of agents” world, knowledge work stops being “doing work” and becomes “governing a small organization.” Your core job becomes: routing attention, managing dependencies, setting decision rights, and keeping the system legible.

Below is a blog-post blueprint built from real research you can cite.


A sharp thesis

The future knowledge worker is a micro-CEO. AI agents don’t remove management—they fractally distribute it. The org chart collapses into millions of tiny, shifting “agent orgs,” each with its own coordination problems.

Why this is plausible:

  • Managerial work is already dominated by fragmented, fast-switching information and decisions (Mintzberg-style fragmentation is the baseline condition). (Harvard Business School)
  • Firms behave as “attention allocation machines,” because what leaders do depends on what gets surfaced to them (attention-based view). (Wiley Online Library)
  • Org design is fundamentally about information processing capacity vs uncertainty—agents explode capacity, but also explode volume, variance, and interdependence. (jaygalbraith.com)

Provocation to put in the intro:

“When execution is cheap, coordination becomes the only scarce resource—and every knowledge worker becomes a coordinator-in-chief.”


The core move: reframe “managing agents” as classic org theory, miniaturized

1) The bottleneck isn’t intelligence — it’s attention routing

Use Ocasio + Bandiera to argue: agents create infinite “things you could pay attention to,” so the limiting factor becomes span of attention, not productivity. (Wiley Online Library)

Framework: The Attention Budget

  • Inputs: alerts, drafts, options, anomalies, asks
  • Filters: rules, norms, dashboards, escalation triggers (your “attention architecture”) (Wiley Online Library)
  • Output: decisions + delegations
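
To make the attention budget tangible, here is a minimal sketch of a routing filter in Python. Everything here (the `Item` shape, the `route` rules, the thresholds) is a hypothetical illustration, not a real system:

```python
from dataclasses import dataclass

# Hypothetical attention-budget router: inputs pass through filter rules
# before they are allowed to interrupt a human. All names and thresholds
# are invented for illustration.

@dataclass
class Item:
    kind: str        # "alert", "draft", "option", "anomaly", "ask"
    urgency: float   # 0.0 .. 1.0, as scored upstream
    reversible: bool # can the decision be undone cheaply?

def route(item: Item, interrupt_threshold: float = 0.8) -> str:
    """Apply simple filter rules: decide now or batch for a review window."""
    if item.kind == "anomaly" or item.urgency >= interrupt_threshold:
        return "decide-now"   # escalate to the human immediately
    if item.reversible:
        return "batch"        # reversible work can wait for a review window
    return "decide-now" if item.urgency > 0.5 else "batch"

inbox = [Item("alert", 0.9, True), Item("draft", 0.2, True), Item("ask", 0.6, False)]
print([route(i) for i in inbox])  # ['decide-now', 'batch', 'decide-now']
```

The point isn’t these specific rules. It’s that the filter layer - the “attention architecture” - is explicit, inspectable, and tunable, instead of living in someone’s notification settings.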

Hot take:

“Your calendar won’t be your strategy. Your notification policy will.”


2) Agents make coordination the central skill (and coordination is “managing dependencies”)

Bring in Malone & Crowston’s coordination theory: coordination = managing dependencies between activities. Agents mean more activities, more handoffs, more hidden coupling. (crowston.syr.edu)

Framework: Dependency Map (for agent swarms)

  • Shared resources (budget, data, customer truth)
  • Prerequisites (A must happen before B)
  • Simultaneity (parallel workstreams)
  • Conflicts (two agents optimizing different metrics)
  • Fit (outputs must “compose” into one narrative/product)
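
A hedged sketch of what a dependency map could look like as data, following the Malone & Crowston framing of dependency kinds. The task names and the `execution_order` helper are invented for illustration:

```python
from collections import defaultdict

# Illustrative dependency map for agent workstreams: an edge list tagged
# with dependency kinds. Prerequisites get ordered; conflicts get surfaced.

deps = [
    ("research", "draft", "prerequisite"),     # A must finish before B
    ("draft", "review", "prerequisite"),
    ("draft", "mockup", "shared-resource"),    # both touch the same data
    ("seo-agent", "brand-agent", "conflict"),  # optimizing different metrics
]

def execution_order(deps):
    """Topologically sort the prerequisite edges (Kahn's algorithm)."""
    edges = [(a, b) for a, b, kind in deps if kind == "prerequisite"]
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    out = defaultdict(list)
    for a, b in edges:
        out[a].append(b)
        indeg[b] += 1
    order, ready = [], sorted(n for n in nodes if indeg[n] == 0)
    while ready:
        n = ready.pop(0)
        order.append(n)
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return order

print(execution_order(deps))                            # ['research', 'draft', 'review']
print([(a, b) for a, b, k in deps if k == "conflict"])  # [('seo-agent', 'brand-agent')]
```

Prerequisite edges yield an execution order; conflict edges get flagged for human arbitration rather than resolved automatically. That split is the design choice.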

Provocation:

“In the agent era, the new literacy is dependency design.”


3) Delegation becomes weird: you keep “formal authority,” agents accumulate “real authority”

This is Aghion & Tirole’s killer lens: formal authority (right to decide) vs real authority (effective control via information/speed/context). Agents will often have the real authority because they see more and act faster—humans become a veto layer. (Duke People)

Framework: Authority Drift

  • The more agents pre-digest the world, the more you rubber-stamp.
  • Over time, the system optimizes for “minimize human interruptions,” and the human becomes ceremonial.

Provocation:

“If you don’t design decision rights, your agents will—by accident.”


4) Fast decisions won’t come from “faster thinking” — they come from principles

Two complementary pieces:

  • Eisenhardt: fast strategic decisions correlate with real-time information and multiple alternatives, not less information. (Super)
  • Oliver & Roos: in high-velocity environments, teams rely on guiding principles—shared heuristics that compress complexity and reduce thrash. (Imagilab)

Framework: The Principle Stack (Agent Constitution)

  • Aim: what “good” means (north star)
  • Constraints: what must never happen
  • Escalation triggers: when to interrupt the human
  • Default actions: what to do when uncertain
  • Audit rituals: how you check reality weekly
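
One way to treat the principle stack as more than a metaphor: write it down as data an agent consults before acting. A minimal sketch, with every field name and rule assumed purely for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical "agent constitution": aim, hard constraints, escalation
# triggers, and a default action, checked before any proposed action runs.

@dataclass
class Constitution:
    aim: str
    constraints: list = field(default_factory=list)          # must never happen
    escalation_triggers: list = field(default_factory=list)  # interrupt the human
    default_action: str = "pause-and-log"                    # when uncertain

    def decide(self, proposed_action: str, tags: set) -> str:
        if tags & set(self.constraints):
            return "refuse"     # hard constraint violated
        if tags & set(self.escalation_triggers):
            return "escalate"   # wake the human
        return proposed_action or self.default_action

c = Constitution(
    aim="grow qualified signups",
    constraints=["spend-over-budget", "delete-prod-data"],
    escalation_triggers=["external-send", "pricing-change"],
)
print(c.decide("ship-email", {"external-send"}))  # 'escalate'
print(c.decide("update-copy", {"internal"}))      # 'update-copy'
```

The constitution isn’t the agent’s intelligence; it’s the codified judgment that bounds it.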

Provocation:

“Prompting is not management. Principle-setting is management.”


5) The dark mirror: your agent team becomes your manager (algorithmic control)

Jarrahi et al. summarize how algorithmic management can shift power via surveillance, automated allocation, and evaluation—often reducing autonomy even when sold as “efficiency.” (iSchool UT Austin) Eurofound’s overview distinguishes rule-based vs AI-driven algorithmic management and flags traceability issues as systems get more adaptive. (Eurofound)

Framework: Algorithmic Hygiene

  • What gets measured?
  • Who can contest a metric?
  • Can you explain a decision?
  • Where are the “off ramps”?
  • What’s your “right to be offline”?

Provocation:

“The agent future has two paths: augmented autonomy or personal Taylorism.”


6) Human–AI teaming isn’t “tool use,” it’s teamwork (with all the messy stuff)

Human–autonomy teaming research emphasizes interdependence, coordination, trust calibration, and shared mental models. (PMC)

Framework: Trust Calibration Loop

  • Predict what the agent will do
  • Observe what it did
  • Update trust (not upward forever—calibrate)
  • Adjust autonomy level and escalation triggers (PMC)
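
The loop above can be sketched as a tiny calibration routine - assuming, purely for illustration, an exponential moving average of prediction agreement and three autonomy tiers:

```python
# Hypothetical trust-calibration sketch: predict -> observe -> update trust,
# then map trust to an autonomy level. The EMA update and the tier
# thresholds are assumptions, not an established method.

def update_trust(trust: float, predicted: str, observed: str, alpha: float = 0.2) -> float:
    """Nudge trust toward 1 on a correct prediction, toward 0 on a miss."""
    hit = 1.0 if predicted == observed else 0.0
    return min(1.0, max(0.0, (1 - alpha) * trust + alpha * hit))

def autonomy_level(trust: float) -> str:
    if trust >= 0.8:
        return "act-then-report"
    if trust >= 0.5:
        return "propose-then-act"
    return "ask-first"

trust = 0.6
for predicted, observed in [("A", "A"), ("A", "B"), ("A", "A")]:
    trust = update_trust(trust, predicted, observed)
print(round(trust, 3), autonomy_level(trust))  # 0.635 propose-then-act
```

Note that trust moves in both directions: the point of calibration is that a miss lowers autonomy, rather than trust ratcheting upward forever.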

Provocation:

“Your most important AI skill won’t be prompting. It will be trust governance.”


A suggested post structure (tight, readable, spicy)

  1. Cold open vignette: “It’s 2032. I manage 11 agents. None of them report to HR. All of them can ship.”
  2. Thesis: every knowledge worker becomes a micro-CEO; coordination becomes scarce. (Ocasio + Galbraith) (Wiley Online Library)
  3. Three new bottlenecks: attention, dependency management, authority drift. (Bandiera + Malone/Crowston + Aghion/Tirole) (Harvard Business School)
  4. Why principles beat prompts: guiding principles as the “constitution” for swarms. (Eisenhardt + Oliver/Roos) (Super)
  5. The fork in the road: autonomy vs algorithmic control. (Jarrahi + Eurofound) (iSchool UT Austin)
  6. Call to action: treat agent work as org design. “Your job is to build a legible system.”

A few closing lines you could steal

  • “When work is abundant and thinking is cheap, attention becomes capital.” (Wiley Online Library)
  • “We’re not adopting tools. We’re onboarding teammates.” (PMC)
  • “In the agent era, the most important document you write won’t be a strategy deck—it’ll be an escalation policy.” (Duke People)


Here are juicy, blog-ready quotes pulled straight from the cited papers, plus the framework punchline you can steal.


Mintzberg (managerial work as “interrupt-driven” reality)

  • “The manager’s work is characterized by brevity, variety, and fragmentation.” (Hib510 Week 9) Use it for: the claim that “focus” is a myth at exec level—work is a routing problem.

  • “…he is oriented to action and dislikes reflective activities.” (Hib510 Week 9) Use it for: why AI agents shouldn’t just “optimize attention,” they must manufacture reflection.


Eisenhardt (fast decisions aren’t “less info”—they’re different mechanics)

  • “Fast decision makers use more, not less, information than do slow decision makers.” (Super) Use it for: “speed comes from bandwidth + structure, not ignorance.”

  • “[They] develop more, not fewer, alternatives, and use a two-tiered advice process.” (Super) Use it for: an “agent council” pattern: many options, layered counsel, fast closure.

  • “Conflict resolution is critical to decision speed, but conflict per se is not.” (Super) Use it for: the idea of “productive dissent” + “rapid arbitration” as an agent-era exec skill.


Aghion & Tirole (delegation is a trade: initiative ↔ control)

  • “An increase in an agent’s real authority promotes initiative but results in a loss of control for the principal.” (Duke People) Use it for: the core tension of AI agents: autonomy buys speed, costs control.

  • “Real authority is determined by the structure of information…” (Duke People) Use it for: “who decides” is downstream of “who knows.” Your agents become information gatekeepers.

  • “…a principal who is overloaded… loses effective control and involuntarily endorses many suboptimal projects.” (Duke People) Use it for: overload as the hidden engine of “rubber-stamp leadership” (and how agents could worsen it).


Ocasio (attention is the firm’s real operating system)

  • “The central argument is that firm behavior is the result of how firms channel and distribute the attention of their decision-makers.” (IDEAS/RePEc) Use it for: a provocative premise: org charts matter less than attention architectures.

  • “What decision-makers do depends on what issues and answers they focus their attention on.” (IDEAS/RePEc) Use it for: a crisp definition of strategy as selective noticing.


Malone & Crowston (coordination = dependency management)

  • “coordination can be seen as the process of managing dependencies among activities.” (crowston.syr.edu) Use it for: the “agent swarm” framing: agents aren’t workers, they’re dependency routers.

  • “Further progress… [comes] by characterizing different kinds of dependencies…” (crowston.syr.edu) Use it for: your taxonomy section: handoffs, shared resources, sequencing, reciprocal coupling, etc.


Oliver & Roos (boundary objects as the bridge between tribes)

  • “The concept of boundary objects is used as a guiding principle for prototype development.” (Imagilab) Use it for: why “specs” fail—prototypes win because they travel across functions.

  • “Prototypes are in this respect regarded as boundary objects…” (Imagilab) Use it for: an agent-era idea: agents should produce boundary artifacts (not just answers).


Jarrahi et al. (algorithmic management is politics + tech, not “neutral automation”)

  • “Algorithmic management should rather be understood as a sociotechnical process…” (iSchool UT Austin) Use it for: your warning shot: the “AI manager” is an organizational design choice, not destiny.

  • “algorithmic opacity is not overcome without struggle, effort, and risk.” (iSchool UT Austin) Use it for: why “just make it transparent” is naïve (and why internal audit rituals matter).

  • “treating workers like mere ‘programmable cogs in machines’” (iSchool UT Austin) Use it for: the dark mirror version of “AI efficiency.”


Eurofound (the regulatory gap is real)

  • “There is currently no unitary legislative framework at EU level regulating the use of algorithmic management systems at work.” (Eurofound) Use it for: a punchy “governance is lagging deployment” beat.

Human–Autonomy Teaming (agents as teammates, not tools)

  • “Human–autonomy teamwork involves humans working interdependently toward a common goal along with autonomous agents.” (PMC) Use it for: the conceptual shift: stop calling them tools if they’re in the loop.

  • “they are beginning to be viewed as teammates rather than tools” (PMC) Use it for: the big cultural turn your post is trying to name.



9: https://pmc.ncbi.nlm.nih.gov/articles/PMC9284085/ “Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature”