TOM CRITCHLOW

The Flow of Work

A Meditation on The Future of Knowledge Work, Finding Flow State and Doing Work

What is happening to knowledge work?

Much like the rest of society, office work no longer has much to do with the office. We have moved from a physical office to a virtual one - where the logic of work was once calendar days, it is now an always-on stream of notifications.

Space has become, as Manuel Castells put it, a space of flows:

Our societies are constructed around flows: flows of capital, flows of information, flows of technology, flows of organizational interactions, flows of images, sounds and symbols. Flows are not just one element of social organization: they are the expression of the processes dominating our economic, political, and symbolic life. … Thus, I propose the idea that there is a new spatial form characteristic of social practices that dominate and shape the network society: the space of flows. The space of flows is the material organization of time-sharing social practices that work through flows. By flows I understand purposeful, repetitive, programmable sequences of exchange and interaction between physically disjointed positions held by social actors.

https://felix.openflows.com/html/space_of_flows.html

Knowledge work is the same - “purposeful, repetitive, programmable sequences of exchange and interaction” - mediated by streams of information: Slacks, Zooms, notifications, pings, dings and rings.

Everyone is a manager now

  • “The manager’s work is characterized by brevity, variety, and fragmentation.” (Hib510 Week 9)

https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

Attention is all you need

Coordination Costs

https://komoroske.com/slime-mold/

Coordination becomes harder when everyone is managing not only other humans but agents in the mix too. The organization becomes less legible. Things move faster and break spontaneously.

https://www.cpj.fyi/essays/the-end-of-role-clarity/

Smaller teams

The 2026 Reality is Different Though

But for all the talk of using AI to make work more efficient and to replace cumbersome processes, the reality of 2025 is in fact characterised by:

  • Teams and individuals having to learn a brand new technological paradigm (how do we learn how to use AI tools?)
  • Companies having to build net new processes and governance (can the marketing team have their own database and AWS instance?)
  • Prototyping and building new tools and processes that fail in brand new ways (wait we have to build our own evals now?)

I can easily believe that early adopters and certain key individuals feel like this has helped them move faster, better. Get more work done. Do better work. But for any organization of any size, learning these new capabilities and building these new processes and teams has been a ton of work.

Work slop and slop cannons

https://x.com/danhockenmaier/status/2021617680525172840

https://newmba.co/2023/10/11/exec/

https://frankchimero.com/blog/2025/beyond-the-machine/

Wanting and ambition

https://www.snowbird.global/the-dodos-bargain-trading-flight-for-certainty/?ref=flyways-by-snowbird-newsletter

Here’s a provocative way to get there: in an “everyone has a team of agents” world, knowledge work stops being “doing work” and becomes “governing a small organization.” Your core job becomes: routing attention, managing dependencies, setting decision rights, and keeping the system legible.



A sharp thesis

The future knowledge worker is a micro-CEO. AI agents don’t remove management—they fractally distribute it. The org chart collapses into millions of tiny, shifting “agent orgs,” each with its own coordination problems.

Why this is plausible:

  • Managerial work is already dominated by fragmented, fast-switching information and decisions (Mintzberg-style fragmentation is the baseline condition). (Harvard Business School)
  • Firms behave as “attention allocation machines,” because what leaders do depends on what gets surfaced to them (attention-based view). (Wiley Online Library)
  • Org design is fundamentally about information processing capacity vs uncertainty—agents explode capacity, but also explode volume, variance, and interdependence. (jaygalbraith.com)

Provocation to put in the intro:

“When execution is cheap, coordination becomes the only scarce resource—and every knowledge worker becomes a coordinator-in-chief.”


The core move: reframe “managing agents” as classic org theory, miniaturized

1) The bottleneck isn’t intelligence — it’s attention routing

Use Ocasio + Bandiera to argue: agents create infinite “things you could pay attention to,” so the limiting factor becomes span of attention, not productivity. (Wiley Online Library)

Framework: The Attention Budget

  • Inputs: alerts, drafts, options, anomalies, asks
  • Filters: rules, norms, dashboards, escalation triggers (your “attention architecture”) (Wiley Online Library)
  • Output: decisions + delegations
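To make the attention budget concrete, here’s a minimal sketch in Python of inputs flowing through filters to an output - the event categories, rules and thresholds are all invented for illustration, not drawn from the cited papers:

```python
# Illustrative sketch of an "attention budget": route incoming agent
# events either to the human or to an automatic default, based on
# simple filter rules. All categories and thresholds are invented.

def route(event, escalation_rules):
    """Return 'human' if any rule flags the event, else 'auto'."""
    for rule in escalation_rules:
        if rule(event):
            return "human"
    return "auto"

# Example filters: escalate anomalies and anything touching real money.
escalation_rules = [
    lambda e: e.get("kind") == "anomaly",
    lambda e: e.get("cost", 0) > 500,
]

inbox = [
    {"kind": "draft", "cost": 0},
    {"kind": "anomaly", "cost": 0},
    {"kind": "purchase", "cost": 1200},
]

decisions = [route(e, escalation_rules) for e in inbox]
print(decisions)  # ['auto', 'human', 'human']
```

The point of the sketch: the interesting design work is all in the filter list, not the routing function. That list is your notification policy.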

Hot take:

“Your calendar won’t be your strategy. Your notification policy will.”


2) Agents make coordination the central skill (and coordination is “managing dependencies”)

Bring in Malone & Crowston’s coordination theory: coordination = managing dependencies between activities. Agents mean more activities, more handoffs, more hidden coupling. (crowston.syr.edu)

Framework: Dependency Map (for agent swarms)

  • Shared resources (budget, data, customer truth)
  • Prerequisites (A must happen before B)
  • Simultaneity (parallel workstreams)
  • Conflicts (two agents optimizing different metrics)
  • Fit (outputs must “compose” into one narrative/product)
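A dependency map like this is computable. Here’s a small Python sketch - task and resource names are invented - that turns prerequisite edges into a valid work order and flags tasks that touch the same shared resource as candidate conflicts:

```python
# Illustrative sketch of a dependency map for a small agent swarm:
# prerequisite edges give a valid work order; a shared-resource index
# surfaces potential conflicts. All task names are invented.
from graphlib import TopologicalSorter
from collections import defaultdict

# Each task maps to the set of tasks that must happen before it.
prereqs = {
    "draft_copy": set(),
    "design_page": {"draft_copy"},
    "legal_review": {"draft_copy"},
    "publish": {"design_page", "legal_review"},
}

# Which shared resources each task touches.
resources = {
    "draft_copy": {"brand_voice_doc"},
    "design_page": {"cms"},
    "legal_review": {"brand_voice_doc"},
    "publish": {"cms"},
}

order = list(TopologicalSorter(prereqs).static_order())

# Tasks sharing a resource are candidate conflicts: they need
# sequencing or a lock, not just parallel dispatch.
by_resource = defaultdict(set)
for task, res in resources.items():
    for r in res:
        by_resource[r].add(task)
conflicts = {r: tasks for r, tasks in by_resource.items() if len(tasks) > 1}

print(order)
print(conflicts)
```

Note that the conflict check catches exactly the failure mode that pure prerequisite ordering misses: two agents that are “allowed” to run in parallel but are quietly coupled through a shared resource.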

Provocation:

“In the agent era, the new literacy is dependency design.”


3) Delegation becomes weird: you keep “formal authority,” agents accumulate “real authority”

This is Aghion & Tirole’s killer lens: formal authority (right to decide) vs real authority (effective control via information/speed/context). Agents will often have the real authority because they see more and act faster—humans become a veto layer. (Duke People)

Framework: Authority Drift

  • The more agents pre-digest the world, the more you rubber-stamp.
  • Over time, the system optimizes for “minimize human interruptions,” and the human becomes ceremonial.

Provocation:

“If you don’t design decision rights, your agents will—by accident.”


4) Fast decisions won’t come from “faster thinking” — they come from principles

Two complementary pieces:

  • Eisenhardt: fast strategic decisions correlate with real-time information and multiple alternatives, not less information. (Super)
  • Oliver & Roos: in high-velocity environments, teams rely on guiding principles—shared heuristics that compress complexity and reduce thrash. (Imagilab)

Framework: The Principle Stack (Agent Constitution)

  • Aim: what “good” means (north star)
  • Constraints: what must never happen
  • Escalation triggers: when to interrupt the human
  • Default actions: what to do when uncertain
  • Audit rituals: how you check reality weekly
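One way to see why principle-setting beats prompting: a principle stack can be made machine-checkable. Here’s a hedged Python sketch of an “agent constitution” - the field names, rules and thresholds are made up for illustration:

```python
# Illustrative sketch: a "principle stack" as a machine-checkable
# constitution. Field names and rules are invented for illustration.

CONSTITUTION = {
    "aim": "grow qualified signups",
    "constraints": [                      # must never happen -> block
        lambda a: a.get("sends_email_to") == "all_customers",
    ],
    "escalation_triggers": [              # interrupt the human
        lambda a: a.get("spend", 0) > 1000,
        lambda a: a.get("confidence", 1.0) < 0.6,
    ],
    "default": "proceed_and_log",         # what to do when uncertain
}

def decide(action, constitution=CONSTITUTION):
    """Check constraints first, then escalation triggers, then default."""
    if any(rule(action) for rule in constitution["constraints"]):
        return "block"
    if any(trig(action) for trig in constitution["escalation_triggers"]):
        return "escalate"
    return constitution["default"]

print(decide({"spend": 50, "confidence": 0.9}))    # proceed_and_log
print(decide({"spend": 5000}))                     # escalate
print(decide({"sends_email_to": "all_customers"})) # block
```

The ordering is the design choice: constraints outrank escalation triggers, which outrank defaults. That ranking is the constitution.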

Provocation:

“Prompting is not management. Principle-setting is management.”


5) The dark mirror: your agent team becomes your manager (algorithmic control)

Jarrahi et al. summarize how algorithmic management can shift power via surveillance, automated allocation, and evaluation—often reducing autonomy even when sold as “efficiency.” (iSchool UT Austin) Eurofound’s overview distinguishes rule-based vs AI-driven algorithmic management and flags traceability issues as systems get more adaptive. (Eurofound)

Framework: Algorithmic Hygiene

  • What gets measured?
  • Who can contest a metric?
  • Can you explain a decision?
  • Where are the “off ramps”?
  • What’s your “right to be offline”?

Provocation:

“The agent future has two paths: augmented autonomy or personal Taylorism.”


6) Human–AI teaming isn’t “tool use,” it’s teamwork (with all the messy stuff)

Human–autonomy teaming research emphasizes interdependence, coordination, trust calibration, and shared mental models. (PMC)

Framework: Trust Calibration Loop

  • Predict what the agent will do
  • Observe what it did
  • Update trust (not upward forever—calibrate)
  • Adjust autonomy level and escalation triggers (PMC)
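The loop above can be sketched as code. This is a toy model, not anything from the teaming literature - the update rule, thresholds and autonomy levels are invented - but it shows the key property: trust is calibrated up *and* down from prediction-vs-observation, and autonomy follows trust:

```python
# Illustrative trust-calibration loop: trust moves toward 1 when the
# agent does what you predicted, toward 0 when it surprises you, and
# the autonomy level follows trust. All thresholds are invented.

def update_trust(trust, predicted, observed, rate=0.2):
    """Nudge trust toward 1 on a match, toward 0 on a surprise."""
    target = 1.0 if predicted == observed else 0.0
    return trust + rate * (target - trust)

def autonomy_level(trust):
    if trust > 0.8:
        return "act_autonomously"
    if trust > 0.5:
        return "act_then_report"
    return "propose_only"

trust = 0.5
history = [("ship", "ship"), ("ship", "ship"), ("hold", "ship")]
for predicted, observed in history:
    trust = update_trust(trust, predicted, observed)

print(round(trust, 3), autonomy_level(trust))
```

Two matches and one surprise leave trust a little above where it started, not maxed out: calibration, not blind escalation.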

Provocation:

“Your most important AI skill won’t be prompting. It will be trust governance.”


A suggested post structure (tight, readable, spicy)

  1. Cold open vignette: “It’s 2032. I manage 11 agents. None of them report to HR. All of them can ship.”
  2. Thesis: every knowledge worker becomes a micro-CEO; coordination becomes scarce. (Ocasio + Galbraith) (Wiley Online Library)
  3. Three new bottlenecks: attention, dependency management, authority drift. (Bandiera + Malone/Crowston + Aghion/Tirole) (Harvard Business School)
  4. Why principles beat prompts: guiding principles as the “constitution” for swarms. (Eisenhardt + Oliver/Roos) (Super)
  5. The fork in the road: autonomy vs algorithmic control. (Jarrahi + Eurofound) (iSchool UT Austin)
  6. Call to action: treat agent work as org design. “Your job is to build a legible system.”

A few closing lines you could steal

  • “When work is abundant and thinking is cheap, attention becomes capital.” (Wiley Online Library)
  • “We’re not adopting tools. We’re onboarding teammates.” (PMC)
  • “In the agent era, the most important document you write won’t be a strategy deck—it’ll be an escalation policy.” (Duke People)


Here are juicy, blog-ready quotes pulled straight from the cited papers, plus the framework punchline you can steal.


Mintzberg (managerial work as “interrupt-driven” reality)

  • “The manager’s work is characterized by brevity, variety, and fragmentation.” (Hib510 Week 9) Use it for: the claim that “focus” is a myth at exec level—work is a routing problem.

  • “[He] is oriented to action and dislikes reflective activities.” (Hib510 Week 9) Use it for: why AI agents shouldn’t just “optimize attention,” they must manufacture reflection.


Eisenhardt (fast decisions aren’t “less info”—they’re different mechanics)

  • “Fast decision makers use more, not less, information than do slow decision makers.” (Super) Use it for: “speed comes from bandwidth + structure, not ignorance.”

  • “[They] develop more, not fewer, alternatives, and use a two-tiered advice process.” (Super) Use it for: an “agent council” pattern: many options, layered counsel, fast closure.

  • “Conflict resolution is critical to decision speed, but conflict per se is not.” (Super) Use it for: the idea of “productive dissent” + “rapid arbitration” as an agent-era exec skill.


Aghion & Tirole (delegation is a trade: initiative ↔ control)

  • “An increase in an agent’s real authority promotes initiative but results in a loss of control for the principal.” (Duke People) Use it for: the core tension of AI agents: autonomy buys speed, costs control.

  • “Real authority is determined by the structure of information…” (Duke People) Use it for: “who decides” is downstream of “who knows.” Your agents become information gatekeepers.

  • “[A] principal who is overloaded… loses effective control and involuntarily endorses many suboptimal projects.” (Duke People) Use it for: overload as the hidden engine of “rubber-stamp leadership” (and how agents could worsen it).


Ocasio (attention is the firm’s real operating system)

  • “The central argument is that firm behavior is the result of how firms channel and distribute the attention of their decision-makers.” (IDEAS/RePEc) Use it for: a provocative premise: org charts matter less than attention architectures.

  • “What decision-makers do depends on what issues and answers they focus their attention on.” (IDEAS/RePEc) Use it for: a crisp definition of strategy as selective noticing.


Malone & Crowston (coordination = dependency management)

  • “coordination can be seen as the process of managing dependencies among activities.” (crowston.syr.edu) Use it for: the “agent swarm” framing: agents aren’t workers, they’re dependency routers.

  • “Further progress… [comes] by characterizing different kinds of dependencies…” (crowston.syr.edu) Use it for: your taxonomy section: handoffs, shared resources, sequencing, reciprocal coupling, etc.


Oliver & Roos (boundary objects as the bridge between tribes)

  • “The concept of boundary objects is used as a guiding principle for prototype development.” (Imagilab) Use it for: why “specs” fail—prototypes win because they travel across functions.

  • “Prototypes are in this respect regarded as boundary objects…” (Imagilab) Use it for: an agent-era idea: agents should produce boundary artifacts (not just answers).


Jarrahi et al. (algorithmic management is politics + tech, not “neutral automation”)

  • “Algorithmic management should rather be understood as a sociotechnical process…” (iSchool UT Austin) Use it for: your warning shot: the “AI manager” is an organizational design choice, not destiny.

  • “algorithmic opacity is not overcome without struggle, effort, and risk.” (iSchool UT Austin) Use it for: why “just make it transparent” is naïve (and why internal audit rituals matter).

  • “treating workers like mere ‘programmable cogs in machines’” (iSchool UT Austin) Use it for: the dark mirror version of “AI efficiency.”


Eurofound (the regulatory gap is real)

  • “There is currently no unitary legislative framework at EU level regulating the use of algorithmic management systems at work.” (Eurofound) Use it for: a punchy “governance is lagging deployment” beat.

Human–Autonomy Teaming (agents as teammates, not tools)

  • “Human–autonomy teamwork involves humans working interdependently toward a common goal along with autonomous agents.” (PMC) Use it for: the conceptual shift: stop calling them tools if they’re in the loop.

  • “they are beginning to be viewed as teammates rather than tools” (PMC) Use it for: the big cultural turn your post is trying to name.



PMC: https://pmc.ncbi.nlm.nih.gov/articles/PMC9284085/ “Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature”