LAST MONTH MARKED 20 years since the publication of a strange, prescient book called Cyber-Marx — a steampunky title which belies the rigor of its analysis. By historicizing the technologically juiced metabolism of turn-of-the-century capitalism, Nick Dyer-Witheford sought to revivify Karl Marx as a timely interlocutor for the impending 21st century, flying in the face of critics on both the right and the left. The book concludes with an oblique warning: “Demystification, practiced alone, leads to a dead end.” But also with a hopeful glimmer: “[T]here are now visible signs of an emergent collectivity refusing the logic of commodification.”
Artificial intelligence, a topic which received scant reference in the pages of Cyber-Marx, is today simultaneously everywhere and nowhere: a speculative inevitability driving vast investment from platform capitalists, but also a haphazardly assembled suite of not-so-new computational techniques which, in practice, often prove to be fragile. For some, AI underwrites the long-foretold promise of a workless future; for others, it remains a cynical pipe dream obfuscating and reifying age-old power differentials.
Inhuman Power, a new and urgent book co-authored by Dyer-Witheford, therefore arrives as a sort of skeleton key for unlocking this new mode of capitalist mystification. The contributions of Dyer-Witheford’s collaborators—Atle Mikkola Kjøsen and James Steinhoff—extend the scope of critique in daring ways. At times, this means fully departing from the theoretical paths forged by Dyer-Witheford in his earlier work. This is, to be sure, a boon for readers. Over an extended period of email correspondence, I asked Dyer-Witheford to discuss some of the theoretical contentions underlying this new text.
This interview has been edited for length and clarity.
BRIAN JUSTIE: Inhuman Power bills itself as both “a Marxist critique of AI” and “an AI-informed critique of Marxism.” What makes the current moment so ripe for a dual framework of this nature?
NICK DYER-WITHEFORD: The moment is ripe because of the surging corporate interest in and applications of machine learning and other new branches of AI research. Major info-tech companies have come to see the cognitive and biological limits of the human as a barrier to accumulation, and glimpse the possibilities of smashing through that obstacle with machine learning, advanced robotics, and other “fourth industrial revolution” technologies. AI today is still in a rudimentary phase, limited to narrow, domain-specific applications, very far from the human-equivalent or human-exceeding general AI that remains the stuff of sci-fi imaginaries, although it is also the target of some serious research programs. Nevertheless, in this restricted form AI permeates everyday life, in the Global Northwest, in China, and to some degree globally: its algorithms organize social media feeds, financial activities, virtual games, workplace monitoring, welfare systems, and police surveillance.
We are now in what I and my co-authors, Atle Mikkola Kjøsen and James Steinhoff, term “actually-existing AI capitalism.” Its technologies will likely continue to encroach on what we have thought of as exclusively human capacities, and be applied across a steadily broadening spectrum of activities. As James Steinhoff speculates, AI may well become what Marx termed a “general condition of production,” a prerequisite infrastructure for commercial activity, as steam engines and railways were in the 19th century, and electricity and mass transportation for the 20th. This process is unfolding almost entirely under the direction of giant oligopolistic corporations—Google, Microsoft, IBM, Amazon, Facebook, Alibaba, Tencent, Baidu—with help thrown in by governments eager for AI’s national security state applications. Marx would have understood this very well. So we need a Marxist critique of AI, as what is probably the prime contemporary example of profit-driven and revolt-suppressing appropriation and direction of techno-scientific knowledge. But this process is also throwing into doubt the humanist assumptions built into Marx’s concept of labor: so we also need to critique Marxism from the viewpoint of AI.
Marx’s latent humanism is one of the main targets you and your co-authors take aim at in the latter half of the book. Can you elaborate a bit more on this?
For Marx, the pinnacle of capitalist machinery was of course the steam-driven factory of his age, a techno-apparatus that still employed and exploited workers on a mass basis. So he could continue to think of the machine as a supplement to and by-product of human effort—“dead labor” that derives from, and has to be animated by, “living labor.” He had one or two prophetic flashes about the marginalization of workers that extreme automation might ultimately enable, moments in his work that remain controversial within the Marxist tradition. But Marx never encountered even an early mechanical computer, let alone an AI. What AI research puts on the event horizon now, however, is the possibility that the machinic “supplement” to labor becomes the main game: that the border Marx conceived between what is living and what is dead collapses.
The fraught distinction between the “living” and the “dead,” between the human and the inhuman, looms large in this book. But this prospect of destabilization might be read as the bellwether of dystopian peril or as the realization of a certain utopian promise. To that end, why do you think a growing coterie of Silicon Valley power-players have lately coalesced around the nebulous idea of “humane technology”? Humanism, as others have pointed out, is once again in vogue!
The impulse to create digital technologies that support human flourishing has always been important; for example, it is present at the origin of the internet, in hackers’ jail-break of the network of networks from Pentagon control. And this sort of emancipatory hope reappears perennially. But the inhuman power that thwarts it is the market. That was what Marx was referring to when he wrote the line from which our book takes its title: “[I]n the end, an inhuman power rules over everything.” What we are pointing to is the way capital directs and designs technologies as an extension of market power, as instruments not of human development but of profit accumulation. With the arrival of AI, these instruments now seem to take on a life of their own, rendering capital increasingly autonomous from the human.
In the 1990s, the days of the early popularization of the internet, when I started writing about digital technologies, and before the business world had worked out how to assimilate them, there was an effervescent cultural and political excitement about the potential of creative commons, open source software, and decentralized collaborative global communication: “dot.com” ambitions and “dot.communist” aspirations expanded side by side. But by the mid-2000s, in the wake of the dot.com crash, capital really got down to incorporating digital tech, developing the model of what Nick Srnicek calls “platform capitalism,” based on big data collection, precision-targeted advertising, and monetization of user-generated content—all managed by algorithmic processes that are now being intensified by machine learning—in effect, narrow forms of AI.
What do we have now? A system to accelerate the advertising and sale of commodities, which combines mass surveillance with the targeted dissemination of attention-grabbing content, regardless of the toxic social and ecological consequences, run by giant corporations, with collateral damage-control handed off to legions of precarious, low-paid, and traumatized click-workers. And it is the oligopolists who constructed this apparatus—Google, Facebook, Amazon, Microsoft, and their counterparts in China, Baidu, Alibaba, Tencent—that are, with subsidization from their respective national security states, directing the development of machine learning and other AI technologies, while proceeding to bake their commercial priorities, and those of their military and paramilitary partners, into its very design. Today, revived hopes for emancipatory digitization are mostly futile, unless we are also willing to think about dismantling and expropriating the current AI-industrial complex: so the expression of such hopes by Silicon Valley “power players” deeply embedded in that complex is, at best, disingenuous …
I want to home in on this invocation of the “current AI-industrial complex,” or what you previously alluded to as “actually-existing AI-capitalism,” a key concept in the book. The implication seems to be that this burgeoning strand of AI-capitalism must necessarily be understood as different from its predecessors in kind, and not just in degree. This brings to mind Shoshana Zuboff’s recent tome The Age of Surveillance Capitalism. Some critics of her book have argued, I think rightfully, that surveillance and capitalism are old friends, and so the grand assertion of a new paradigm is overblown and perhaps even obfuscatory. Does capitalism bearing the “AI-” prefix constitute a new paradigm?
I will answer at two levels. The first is simple: there’s a wide acceptance of the idea that while capitalism has a persistent logic—the commodification of everything—it also periodically changes the way that logic is worked through, in terms of the orchestration of dominant technologies, work organization, consumption practices, and so on. So, for example, mid-20th-century Fordism, organized around the assembly line, mass work, and mass consumption, had by the millennium morphed into a post-Fordism of digital technology, so-called flexibilized labor, and niche marketing. In Inhuman Power, we suggest that AI could be an important element in another of these metamorphoses, or, as David Harvey puts it, “sea changes,” in how capital operates. So here our proposition is an extension and extrapolation from other periodizations of capital.
However—second level—in postulating an “AI-capitalism,” we are suggesting a transformation that may pose some very deep problems, possibly for capital itself, and if not, certainly for its human subjects, and also, as a small piece of collateral damage, for Marxist theory. This of course has to do with the status of labor in an era of machine intelligence. A few years ago there was widespread alarm about a “robopocalypse,” an abrupt AI-induced crisis of technological unemployment. These fears of a sudden-onset “end of work” are today contradicted, at least in North America, by the post-Recession return to reasonably robust employment levels—however dubious the wages and conditions of that employment. But longer term, there are real prospects that AI adoption will in more gradual, oblique ways attenuate and hollow out the wage labor relation. We see it as a slow tsunami. Waves of sectoral technological unemployment, ratcheting in sync with business cycles and financial crises, will be a part of this, as will various intermediate phases of job replacement, in which truck drivers ride shotgun on convoys of automated vehicles, or diminishing call center staff fill the gaps in banks of algorithmic answering services.
But this is not the whole story. As Jason Smith says, under capitalism people must sell their labor power to avoid total immiseration, so even as automation advances they seek employment and find exploitation, or self-employment and self-exploitation, in increasingly baroque forms of service work. But labor in AI-capitalism will likely be recurrently contingent, deskilled, and disposable, controlled by programs opaque and, at a certain level, incomprehensible even to their developers; its human elements will be increasingly peripheral to both production and profit. The issue is perhaps not so much joblessness as powerlessness; a labor force without force, as capital gradually autonomizes itself from the human. In that sense, AI-capitalism might be a period not so much like Fordism or post-Fordism, but more like the process of primitive accumulation in which capital drove populations off the land into factory work—except in reverse. It would be the beginning of a period of futuristic accumulation, in which capital, rather than accumulating its proletarian workforce, gradually, over centuries, marginalizes and then discards it.
Others writing in the Marxist tradition see autonomism or accelerationism as viable responses to AI-capitalism. Why, in your view, are these insufficient?
Left accelerationism sees in the development of AI a promising tool for socialism, freeing people from wage labor and enhancing possibilities for economic planning. And certain lines of autonomist Marxism manifest a similar enthusiasm, based on the belief that AI development will rest on and empower the distributed techno-scientific knowledge of a still-human—even if cyborg—general intellect. These positions retain the high-modernist confidence in the ultimately progressive direction of capitalist technological development that is part of Marxism. I don’t dismiss the possibility of emancipatory uses of AI. But what is being accelerated right now, in the current trajectory of AI, is capital’s command over, and capacity to dispense with, the worker, and what is expanded is not the autonomy of labor from capital, but capital’s autonomization from the human.
These are not questions of an abstract futurism. Leaving aside the question of a long-term crisis of work, we today see the dominant applications of AI intensifying processes in play since capital began its cybernetic offensive against organized labor decades ago. These include the increasing precarity of work in a gig economy, implicated in the very making of AI itself, as well as in the deployment of machine learning by companies such as Uber or Amazon; the polarization of the workforce between an elite techno-managerial stratum, charged with job automation, and a racialized and feminized service sector, whose labor is still largely too cheap to automate; the offloading onto individuals of costs for the training and upgrading of skills through which they will supposedly survive the impacts of the fourth industrial revolution; intensifications of workplace monitoring; the linkage of this surveillance with algorithmically precision-targeted and bot-enabled viral propaganda campaigns from well-funded reactionary interests; and the incorporation of AI into a Big Data security apparatus for the disciplining of the poor, from welfare screening to preemptive policing. This mix of precarity, polarization, panopticism, propaganda, and policing constitutes—to use a phrase from David Noble, the late, great radical historian of digital technology—the “present tense” politics of capitalism’s AI project.
Do you see a path forward?
The political answer is not acceleration but refusal. And this is indeed what has begun to manifest, in a variety of social rebellions. This includes the strikes of gig economy labor, the protests of Silicon Valley tech workers, anti-surveillance movements, urban protests against the smart city, defections from social media sites, algorithmic-bias busting, and “techlash” against the oligopolistic concentration of digital powers. This is the new composition of class against AI-capital. Such refusals will always be tarred with the label of Luddism, but in our view, this designation is simply irrelevant. The issue is now not the prevention of a nascent techno-capitalism, but a swerve away from its potentially disastrous terminus. This moment of refusal is necessary to break open the foreclosure of the future by AI-capital, to win the space for real collective governance and direction of AI.
Inhuman Power was published only last year, but since then we have seen important developments confirming its perspective: renewed warnings by the influential AI scientist Stuart Russell about the existential threats posed by AI; the proposal by legal scholar Frank Pasquale that “accountability” of new digital systems include outright banning or restricted licensing; and the stirring Manifest-No of feminist data scientists calling for withdrawal from disciplinary, discriminatory, and invasive machine learning development. These calls come from positions different from ours, but they do share a recognition that the new developments in machine learning demand a response to ensure a future where, even as the nature of the human may undergo profound changes, these transformations are not dictated by the commodifying monoculture of AI-capitalism. If we want a picture of what such a response entails, we need search no further than the images coming from the streets of Hong Kong and Santiago, where protestors against penury and inequality use lasers to disable the facial recognition technologies and drones deployed by police. This is what the struggle against inhuman power looks like today.
Brian Justie is a PhD student at UCLA, and a researcher at the UCLA Labor Center. His current work focuses on the political economy of CAPTCHA.