
Trapped by the Pursuit of Efficiency



The newly created Department of Government Efficiency (DOGE) aims to drastically cut the U.S. federal budget and workforce. Just this week, President Trump launched a callous, harmful, and likely illegal plan to slash the size of the civil service. I worked in government for nearly a decade. I’m not naive about bureaucracy and redundancy — but this is clearly a bad-faith attempt to address those issues. Hiding behind this obvious and chaotic overreach, though, is a subtler truth: ‘efficiency’ should never be the goal. The question should always be — what outcome are we trying to achieve, and for whom? Yet we keep shifting efficiency — and technologies themselves — from being one part of a process in service of a longer-term purpose into being the end goal. Why? The answer requires a deep dive into Langdon Winner’s concept of reverse adaptation. Let’s dig in.


In his 1977 book, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Langdon Winner lays out the concept of “reverse adaptation,” or “the adjustment of human ends to match the character of the available means.” In short, we lose sight of our purposes and goals and instead adapt them to align with new technologies. As Winner puts it:

“Abstract general ends — health, safety, comfort, nutrition, shelter, mobility, happiness, and so forth — become highly instrument specific. The desire to move becomes the desire to possess an automobile; the need to communicate becomes the necessity of having telephone service; the need to eat becomes a need for a refrigerator, stove, and convenient supermarket. … Once individual and social ends have become so identified, there is no avoiding this kind of affirmation.”

I see this phenomenon everywhere I look. ‘Civic tech’ lost sight of itself and became more focused on the tech than on its original civic-oriented goals. Or take the projects falling under the banner of ‘tech for good.’ ‘For good’ became an amorphous, abstract, self-justifying frame, and tech morphed into the ends. The ‘public interest tech’ space — of which I’m a part — is similarly at risk of centering ‘tech’ and de-centering the public interest. We tend to ask ‘how do we make AI systems more fair and responsible?’ and not ‘what would need to change in society to lead to more fair and responsible outcomes?’ (Hint: it might have zero to do with AI!) We also ask how we might integrate AI into education or healthcare, rather than which educational or health outcomes we want to improve — and whether AI should even play a role. The point is, time and again, the mechanism becomes the goal.


But that’s not all that happens, because a subsequent question can’t help but roll off the tongue: are these technologies even good? Right: if technologies become entangled with our ends, then it’s only practical to measure their effectiveness. For example, we create standards for AI and track its progress against performance benchmarks, which are themselves inherently flawed (e.g., if ChatGPT passed the LSAT, that still says nothing about how well it would perform as a lawyer, which is an entirely different context). But the benchmarks position AI’s progress as an inevitability, and we never stop to ask: what is it we’re trying to achieve, for whom, and in what context? This can become an unstoppable feedback loop, where the only way to understand the effectiveness of the technology is to use specific measurement techniques that, in turn, reinforce the future use of the technology. As Winner writes, “far from being neutral, uninvolved sensing devices, these technical ensembles have their own requirements that must be met if the measurement is to take place.” As a result, he continues, “Individuals and social institutions must adapt to these requirements or they cannot be adequately evaluated.”


Enter DOGE and the unthinking elevation of efficiency to a societal value. Sure, Winner writes, efficiency is important in technical systems. More output per input? Huzzah! But once it becomes the measure we use to evaluate social and public life, we’ve lost the plot. So how is this enforced? Why is it that technology becomes the goal, and measurement systems so naturally follow? Social psychologist Kenneth Keniston describes the rise of ‘the technological ego,’ and contends that “probably no other society in human history…has demanded such levels of specialized ego functioning as ours does.” He goes on to explain:

“The virtues of our technological society require a dictatorship of the ego rather than good government. The self denying potential of the ego is minimized: playfulness, fantasy, relaxation, creativity, feeling, and synthesis take second place to problem solving, cognitive control, work, measurement, rationality, and analysis. The technological ego rarely relaxes its control over the rest of the psyche, rarely subordinates itself to other psychic interests or functions.”

In short, as technology subtly slips from a possible input into the goal itself, “the technological ego” helps fix it in place. Keniston was writing in 1967 about the rise of teaching technological skills at home and in the workplace, and how it created an undue emphasis on rationality and efficiency and a disregard for feeling and emotion. If these observations don’t resonate with you, well, then, I have two things to say to you: 1) Um, wut? 2) Can we hang out?


Winner highlights a second phenomenon that enforces this sleight of hand: applying ‘instrumental concerns’ to all social and public situations. Winner writes, “the technological society tends to arrange all situations of choice, judgement, or decision in such a way that only instrumental concerns have any true impact.” Put differently, we ask far more ‘how’ questions than ‘what’ or ‘why’ questions when interrogating a problem. We ask how we might audit AI systems, not whether AI is even appropriate for our purposes. We ask “how do we scale AI systems?” not “why is scale a good thing? What will a world of scaled platforms look like, and do we want to live in that world?” We skip over interrogating the multiple, interacting, dynamic factors that underpin ‘why’ questions, and instead only ask ‘how.’


French philosopher Jacques Ellul puts this dynamic down to our social ends becoming abstract and amorphous just as our means have become clear and effective. Ellul put it this way in 1967:

“It is true that we still talk about ‘happiness’ or ‘liberty’ or ‘justice,’ but people no longer have any idea of the content of the phrases, nor the conditions they require, and these empty phrases are only used in order to take measures which have no relation to these illusions. These ends, which have become implicit in the mind of man, and in his thought, no longer have any formative power: they are no longer creative.”

We’ve trapped ourselves in an instrumental pursuit of efficiency and technology for its own sake, and lost sight of our values, commitments, and ends. This is why technology feels inevitable. This is why it feels implausible to push back against the advance of artificial intelligence, or to steer it toward an altogether different future. This is why initiatives like DOGE cause people to nod their heads in agreement even though they’re fundamentally unserious. Technology has become a sufficient stand-in for the future we want to create and the kind of society we want to live in. We’ve lost the ability to envision, imagine, and debate clear ends, and we’ve replaced that process with a technology. We wonder why technology feels inevitable, yet we let technologists twist technologies into markers of progress.


Sure, power tilts in their direction. But we can start to push back collectively by rejecting these premises. We can describe a crisp vision of the future. We can focus on outcomes that don’t presuppose technology as the approach. Start asking ‘why’ with the persistence of a 4-year-old and bend the world back to its natural state, where efficiency and technology are never ends in themselves!
