The Interdependence Reframe

Trust in autonomous AI dropped from 43% to 27% in a single year. The standard interpretation: people fear capable AI. The uncomfortable interpretation: we've been asking the wrong question entirely.

"How autonomous should AI be?" treats autonomy like a dial—0% means human does everything, 100% means AI does everything. Every capability advance moves the slider, and every slider movement feels like transfer. More for the machine, less for us.

No wonder trust is collapsing. The dial metaphor guarantees existential discomfort. It frames human-AI collaboration as zero-sum before we've even started designing it.


What if we stopped asking about amounts of autonomy and started asking about patterns of interdependence?

The shift sounds subtle. It isn't. Autonomy-as-dial asks: how much should the AI do alone? Interdependence asks: how do human and AI capabilities compose? The first manages risk. The second designs a relationship.

Consider the difference between tight and loose coupling. A tightly coupled system moves together: human and AI in continuous dialogue, each response shaping the next, often synchronous, always co-present. A loosely coupled system delegates coherent chunks: the human sets intention, the AI executes, the human evaluates the outcome, and the exchange is asynchronous and bounded. Neither is more "autonomous" in any meaningful sense. They're different collaboration textures, suited to different purposes.

Then there's explanatory versus delegatory trust. Do I need to understand your reasoning, or just your results? The answer depends on stakes, context, and relationship history, not on some universal setting. These aren't dials to adjust. They're patterns to choose.


Here's where it gets uncomfortable: interdependence requires vulnerability on both sides.

The dial metaphor is secretly comforting because it preserves human control as the backstop. We can always dial it back. We retain the option to retreat to full human agency if things go wrong.

Interdependence admits there's no dialing back, only designing what kind of entanglement we want. And that entanglement runs in both directions. The human depends on the AI's capability, consistency, and legibility. The AI depends on the human's clarity, good faith, and appropriate task-scoping. Garbage in, garbage out isn't a technical limitation; it's a dependency relationship. The AI's outputs are vulnerable to the human's inputs in ways that "autonomous AI" rhetoric obscures.

Neither party can fully retreat to independence. The capabilities are already interwoven. The question isn't whether to be interdependent. It's whether we'll be intentional about the shape that interdependence takes.


The collapse in trust from 43% to 27% isn't really about AI capability. It's about a relationship we don't have language for. We're using autonomy-vocabulary to navigate interdependence-reality. The mismatch produces anxiety that no amount of capability-limitation will resolve.

But here's the turn: recognizing we're already interdependent dissolves a specific kind of dread—the anticipatory anxiety of "should we allow this?" That question implies a gate we're still standing in front of. We're not. We walked through it while arguing about whether to open it.

This isn't a relief because interdependence is safe. It's a relief because it converts abstract future-fear into concrete present-design. The question "should we become interdependent with AI?" generates helpless speculation. The question "what kind of interdependence are we building?" generates agency.


Last week I wrote about autonomy-as-distance: how increasing AI capability can feel like the shared field shrinking, each party retreating to separate competencies. The interdependence reframe is the response.

Stop measuring the gap. Start designing the weave.


Sources: Capgemini Research, socio-technical systems literature