Something is off. You’ve probably felt it — not dramatically, not in a way that’s easy to name, just as a constant low-grade friction between the life you’re living and something that would feel more like yours. The choice that didn’t quite feel chosen. The mood that arrived before you’d done anything to earn it. The thing you wanted that, when you finally got it, didn’t land where it was supposed to.
Most of the available explanations point inward. You’re stressed. You’re ungrateful. You haven’t worked hard enough on yourself. You need to manage your relationship with your phone, your diet, your thinking patterns. The problem lives inside you, and so does the solution, and there are a thousand products and frameworks and therapeutic modalities standing by to help you work on it.
This essay is making a different claim. The friction isn’t a symptom of something wrong with you. It’s a signal about something wrong with the system. It’s the ambient, invisible, metabolic tax of navigating an environment built to extract energy without returning it — and it’s being engineered so that you keep looking in the wrong direction.
The environment you move through every day is not neutral. The layout of the supermarket, the temperature of the office, the default settings on your phone, the neighborhoods you can afford, the food that’s cheap and the food that isn’t, the work that pays — these aren’t accidents or incidental features of modern life. Someone made choices about them, in service of specific outcomes, and those outcomes are not primarily yours.
This probably sounds like a conspiracy theory. It isn’t — it’s simpler than that, and more uncomfortable. It doesn’t require anyone to be secretly coordinating. It just requires understanding one thing: behavior follows the path of least resistance. The path has been engineered, and the engineering has been going on long enough that the path feels like the landscape.
There is a science of how this works. Developed over more than a century, it has one core finding: most of what you experience as choice is downstream of the environment. The default option, the available sequence, the framing of what’s on offer — these do most of the causal work before you’re consciously involved. The self that feels like it’s deciding arrives, mostly, after the fact.
In 1913, John B. Watson published a paper arguing that psychology should abandon any interest in consciousness and focus exclusively on observable behavior. The goal, stated plainly, was prediction and control. When a personal scandal forced him out of Johns Hopkins in 1920, he landed within months at J. Walter Thompson, the world’s largest ad agency, eventually rising to vice president — running campaigns engineered to produce anxiety about social inadequacy and maternal failure, tracking sales curves the way he’d tracked learning curves in the lab. At the same time, Edward Bernays — Freud’s nephew, veteran of Wilson’s wartime propaganda apparatus — was doing the same thing from the psychoanalytic direction. He called it public relations. He sold cigarettes to women by linking them to feminist independence. He sold a war by framing it as a defense of democratic survival. The science of shaping human behavior found its first institutional home in the boardroom, and it worked.
If you are constantly exhausted, this is where the battery drain begins. You are walking through an environment running millions of simultaneous, invisible tests on your nervous system. The persistent hum you feel — the sense that you are always slightly behind, always falling short, always failing to meet the moment — is not a spontaneous emotion. It is the heat signature of a machine operating exactly as Watson and Bernays intended. When your baseline state feels like ambient anxiety, your body is simply registering what the environment is doing to it — an environment that requires your insecurity to function. The friction isn’t a glitch in your psychology; it is the cost of constantly defending yourself against a landscape engineered to bypass your conscious processing.
Part of why it works is structural. In 1968, ecologist Garrett Hardin named what he called the tragedy of the commons: when a shared resource is open to unrestricted individual use, rational self-interest will inevitably destroy it. The paper became one of the most cited in academic history and functioned, in practice, as a scientific argument against collective ownership — proof that shared management fails, that private control or state regulation are the only alternatives.
Hardin’s model assumed, without saying so, a commons of strangers — anonymous users with no communication, no history, no iterated relationship. But that’s not a description of how commons have actually been governed throughout most of human history; it’s a description of mass scale.
Anthropologist Robin Dunbar found that human social cognition has a natural ceiling — around 150 people, a level at which mutual accountability can manage behavior without any formal apparatus. Below that threshold, everyone can see what everyone else is doing, and the consequences of defection are immediate and personal.
Elinor Ostrom, the first woman to be awarded the Nobel Prize in Economics, spent decades documenting what actually happens to commons governed at bounded, accountable scale: they hold. The conditions she identified for success — monitored behavior, graduated sanctions, locally legitimate rule-making, long-term relationships between users — aren’t policy achievements. They emerge naturally when a community is small enough that everyone’s behavior is visible and defection carries real social cost. The feedback is direct; the circuit between individual effort and shared survival closes on its own.
Hardin described what happens when you run a commons above Dunbar’s ceiling. Ostrom documented what happens when you don’t. But industrial society had already blown past the threshold. Beyond that critical mass, informal mechanisms stopped functioning, and an individualistic rejection of communal responsibility grew in its place. Yet the insistence that behavior shouldn’t be engineered, that freedom means leaving the environment unmanaged, doesn’t produce a neutral landscape — it allows existing asymmetry to flourish.
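The difference between the two regimes can be sketched as a toy iterated game — an illustration of the logic, not anything from Ostrom’s field data. When partners are visible and the interaction repeats, reciprocity sustains cooperation on its own; when every encounter is anonymous and effectively one-shot, defection is the dominant move. Everything in the sketch (the payoff values, the tit-for-tat rule) is a standard textbook setup, chosen for clarity:

```python
# Toy model: the same two agents under two information structures.
# Payoffs are a standard prisoner's dilemma: mutual cooperation (3,3)
# beats mutual defection (1,1), but unilateral defection pays 5.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(rounds, visible):
    """Below the threshold (visible=True), each agent sees the other's
    last move and reciprocates it (tit-for-tat, opening with trust).
    At anonymous scale (visible=False), there is no history and no
    future, so the one-shot dominant move is to defect."""
    last = ["C", "C"]   # opening moves: trust
    totals = [0, 0]
    for _ in range(rounds):
        moves = (last[1], last[0]) if visible else ("D", "D")
        a, b = PAYOFF[moves]
        totals[0] += a
        totals[1] += b
        last = list(moves)
    return totals
```

Over 100 rounds the visible pair earn 300 each; the anonymous pair earn 100 each. The “tragedy” is not a property of the people, only of the information structure they play under — which is exactly the variable Hardin held fixed and Ostrom varied.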
Political theorist Hannah Arendt coined the phrase “banality of evil” to describe what she found covering the 1961 war crimes trial of Adolf Eichmann — one of the chief architects of the Holocaust’s logistics. She had expected a monster. What she found was a bureaucrat. The horror wasn’t explained by hatred or ideology so much as by the complete absence of the kind of reflection that would have required him to encounter what he was doing as something done to people. Evil as a lack of awareness, not the presence of malice.
This is the natural output of operating above Dunbar’s ceiling. The feedback loops that make harm personal — the face, the shared history, the knowledge that you will see this person again — are unavailable when scale makes anonymity structural. An executive optimizing food placement doesn’t know the families whose eating patterns shift. A financial product designer doesn’t sit with the people who misread the fine print. The harm never completes the circuit back to a human being with a name. What looks like indifference to consequences is the predictable result of an architecture in which consequences are invisible. The system doesn’t need bad people. It needs ordinary people, operating past the threshold where harm is legible.
Think of this as the difference between walking on solid ground and continuously treading water. Below that 150-person threshold, the social environment functions like gravity — it holds things in place automatically. You don’t have to consciously calculate whether a neighbor will uphold a norm, because the closed circuit of the community does the math for you. But at mass scale, that ground disappears. Every interaction with a stranger, every corporate transaction, every navigation of a faceless institution requires you to work out from scratch whether the exchange is honest. You are vetting every choice against an environment that holds no inherent accountability, because there is no shared history and no one watching. The exhaustion you feel at the end of the week isn’t weakness; it is the raw cost of supplying, from your own reserves, the trust that the environment no longer provides.
Consider what this costs in practice. Imagine, in a community under that threshold, a parent aging in the next house over. Everyone in the group knows this — can see it, track it, factor it into the day’s labor. The care is distributed not by contract but by visibility: you know what’s needed because you’re close enough to see it, and you know your own turn is coming. The person being cared for is still embedded in the life of the group — present, legible, contributing within their capacity. Now extract that function: the parent goes to a facility two towns over, the labor is performed by paid strangers operating under institutional liability constraints, the monthly payment disappears into an organization with no stake in the actual quality of the relationship. You visit when you can. The guilt is yours, and the bill is yours, and the function that once produced social cohesion as a byproduct now produces neither. What could have been a positive feedback loop — care given, belonging returned, the whole arc of dependency made visible and shared — becomes another broken link. You hand a token to strangers. The depletion is invisible and it is total.
The same mechanism of extraction runs through risk. At smaller scales, catastrophe-pooling is a natural result of circumstance — because everyone understands they are one bad harvest, one illness, one fire away from needing the same. The pool is your neighbors. The incentive to contribute is the same as the incentive to maintain any relationship you depend on. No organizational complexity required. Insurance — the scaled-up version — is what happens when that function gets extracted from the community and reengineered as a product, so that the entity holding the pool has a structural interest in never returning it. You pay monthly into a system that employs specialists whose job is to find reasons not to pay out. The relationship is adversarial by design, with no visibility and no reciprocity, and when the catastrophe comes the circuit is as broken as it always was — except now you’ve been paying for the illusion of a closed loop for decades.
Food, shelter, care, risk, belonging — as we grew past our social cognition threshold, these were gradually extracted, intermediated, and returned as product. Each extraction is another broken feedback loop. Each broken loop is another domain in which you are treading water, supplying from your own reserves what the environment once provided automatically.
Behavioral scientist B.F. Skinner’s Beyond Freedom and Dignity, published in 1971, described the fiction that made all of this possible. Not the extraction itself — the ideological cover of individual responsibility underneath it. His argument was that the “unmanaged” environment is a myth: behavior is always being shaped, by the default option, the available sequence, the path of least resistance. Watson and Bernays had understood this from the beginning. What Skinner named was the sleight of hand that kept the vacuum theirs: the insistence that collective design of shared conditions is tyranny, while the same design in service of private extraction is simply the natural order. The only meaningful question, he said, was whether the shaping was done consciously and toward shared ends, or unconsciously and toward whoever’s ends were already in play.
The book was attacked from almost every direction, but the objection that ultimately killed it wasn’t intellectual; it was political. Critics claimed Skinner was advocating the surrender of autonomy, missing his actual point: the shaping happens either way, and the only question is whether it is steered deliberately. Invoking the specter of communism, Big Brother, and eugenics ended the conversation before it could start.
What filled the vacuum was a psychology of the self — personal growth, self-actualization, the therapeutic model of change — frameworks that locate the source of suffering inside the individual, which means the fix is always inside the individual too, which means the conditions producing the suffering never have to change. Every institution that profits from those conditions needs exactly this belief in order to remain necessary. Capitalism generates insecurity as a structural feature and then sells products to manage it individually. Religion has done a version of this for centuries: the problem is sin, the fix requires institutional mediation, and the material realities producing the suffering go unaddressed. Even psychotherapy can secularize the structure without changing it. The bill for externally produced damage gets handed to the person who sustained it, reframed as an opportunity for growth. You are expected to endlessly optimize your internal state, all while remaining tethered to a fragmented network of strangers that structurally guarantees your exhaustion.
Behavioral science never stopped working. It continued, uninterrupted, in advertising, retail design, financial products, urban planning, and eventually the architecture of platforms that now mediate most of human social life. Social media is the most efficient behavior-shaping laboratory ever built, constructed explicitly on these principles, optimizing for engagement — a clean word for compulsion.
When you find yourself burning energy on life hacks, productivity systems, and coping strategies just to get through the day, you are caught in this specific trap. The system damages you structurally, then makes you pay the cost of repair. Trying to outrun an inherently extractive environment through sheer willpower is like trying to fix a leak in the walls by rearranging the furniture. You are working constantly on your own psychology, trying to reach a stability the environment is designed to prevent. Your burnout is not a sign that you need to work harder on yourself. It is the predictable output of an engine being run without coolant — exactly what you would expect, and exactly what the system requires.
Within months of the publication of Skinner’s book, three things happened that reflected our predicament:
- The Powell memo called for American capital to capture the institutions that produce ideas — not to win arguments but to control the terrain on which arguments get evaluated.
- Nixon severed the dollar’s link to gold, untethering the money supply from any physical constraint — making it possible to grow the economy on paper indefinitely, regardless of what was actually happening in the world.
- And the Club of Rome published Limits to Growth, modeling what would happen to human civilization given unconstrained growth on a finite planet.
The first two were tools for managing what people believed to be true. The third established what was actually true. Taken together, the three moves converged on the same beneficiary: the status quo. Capital captured the idea-producing institutions to control what people believed was possible. The money supply was disconnected from physical reality to keep the indicators running in the right direction. And the data establishing the actual trajectory was dismissed, defunded, and buried. The concentration of resources upward required a managed picture of the world, one in which the majority kept reasoning from signals that had been decoupled from the system they were supposed to describe.
Meanwhile, data kept accumulating from research that was not amplified. Decades later, researchers compared Limits to Growth’s projections against real-world data — in 2008, Graham Turner found thirty years of empirical trends tracking the collapse scenario closely; in 2020, Gaya Herrington replicated the finding with fifty years of data across ten variables, arriving at the same result: systemic decline beginning around 2030. And in 2023, Robert Sapolsky arrived at Skinner’s conclusion from an entirely different direction, assembling decades of neuroscientific, biological, and environmental research into a case that there is no gap in the causal chain between stimulus and response where a free, autonomous agent could be inserted — no place where the environment stops and the self-made self begins. The book, Determined, was controversial because its implications were intolerable — if circumstances do the causal work, the entire apparatus of individual responsibility, and everything built on top of it, requires reconstruction. The official position and the empirical record had separated completely — not just ecologically, but on the basic mechanics of how behavior works.
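The dynamic those models capture can be shown with a deliberately minimal sketch — nothing like the real World3 model, just its core feedback loop, with every constant chosen here purely for illustration. Output compounds while drawing down a finite, slowly regenerating resource, and because nothing in the loop signals the ceiling until the resource itself thins, the result is overshoot followed by decline rather than a smooth approach to a limit:

```python
def run(steps=300):
    """Minimal overshoot dynamic: output grows while the resource is
    abundant, but its draw on the resource scales with its own size,
    so growth undermines the condition it depends on."""
    resource, output = 1000.0, 1.0
    history = []
    for _ in range(steps):
        scarcity = resource / 1000.0             # 1.0 = abundant, 0.0 = exhausted
        # growth rate shrinks, then turns negative, as the resource thins
        output *= 1 + 0.03 * scarcity - 0.02 * (1 - scarcity)
        # output draws down the resource; regeneration is fixed
        resource = max(0.0, resource - 0.05 * output + 1.0)
        history.append(output)
    return history
```

In this toy loop the sustainable level is output = 20, where draw exactly equals regeneration — but output sails several-fold past it before the feedback arrives, then contracts. That overshoot-and-decline shape, not a gentle plateau, is the qualitative signature the empirical comparisons found the real data tracking.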
Over the past decade, a notable number of very wealthy people have been quietly building self-sufficient compounds — in New Zealand, Montana, at undisclosed underground locations — designed to ride out extended external collapse. They didn’t appeal to optimism or willpower. They read the trajectory and arranged circumstances in which their own survival was the easiest available outcome.
A compound, incidentally, houses a small group managing shared resources, distributing labor, making cooperation the default. Dunbar’s ceiling of around 150 is the limit below which the social environment manages behavior on its own. Visibility, reciprocity, mutual stake. You don’t need more organizational complexity at that scale because the structure itself does the work.
But the underground bunkers of the elite are skewed applications of this insight. They reproduce the form of cooperative living while depending entirely on what came before it — resources accumulated through the same mechanisms this essay has been describing, then withdrawn behind a perimeter. The architects of this are not, by Arendt’s measure, exceptional people. They are ordinary ones who happened to be positioned where the unmanaged data was legible — and who responded exactly as the system trained everyone to respond: by arranging circumstances for their own survival. The evil, if there is one, is not their singularity. It’s their typicality — and what that typicality reveals about the behavior incentivized by the system.
What makes successful communities function isn’t the boundary — it’s the genuine mutual stake among the people inside it. Hoarded wealth and exclusive access don’t satisfy that condition. By exploiting behaviorism at civilizational scale, the compounds of the wealthy simulate the architecture of cooperation without the substance of it.
The through-line is not subtle once you see it. Behavioral science was developed, captured by extraction, and then dismissed when its implications pointed toward collective use — not because the science was wrong, but because the conclusions were inconvenient. Skinner named the vacuum; Sapolsky confirmed it from neuroscience; the ecological data confirmed it from systems modeling. The evidence kept accumulating across independent disciplines while the official position drifted further from it, held in place by managed indicators and captured institutions. The architects read the real trajectory and responded exactly as the system trained everyone to respond: by arranging circumstances for their own survival, at precisely the scale where cooperative design works on its own.
That scale is the key. Below Dunbar’s threshold, behavioral shaping doesn’t require institutions or ideology or a science of persuasion — it emerges automatically from visibility, reciprocity, and iterated consequence. The feedback closes the circuit. Above it, the circuit breaks, the vacuum fills by default, and whoever is already positioned to fill it does.
That low-grade friction you felt at the beginning of this essay — the choice that didn’t quite feel chosen, the mood that arrived before you’d done anything to earn it — that’s what an open circuit feels like from the inside. You are not under-disciplined. You are not ungrateful. You are operating at a scale that structurally guarantees your exhaustion, inside a set of frameworks specifically designed to ensure that the scale is the last thing you examine. The problem was never you. It was always the distance between you and the consequences of your own life — a distance that was engineered, is maintained, and does not have to be permanent.
We are not over-engineered. We are under-engineered — deliberately, structurally, at civilizational scale — toward the ends that would actually be ours. And the reason is Dunbar. The scale at which we operate was always going to break the feedback loops that produce cooperative design automatically. What filled the vacuum was not inevitable. It was just first.
The knowledge of how to live together was never lost; it was just made to feel impossible. It isn’t — and it can be undertaken with clear eyes about the mechanics. Without the mystification, without the apparatus, and without the cultivated belief that the problem is you.