
Opaque algorithms are creating an invisible cage for platform workers

Originally published: Marxist Sociology Blog on December 9, 2021 by Hatim A. Rahman (Posted Dec 11, 2021)

This post is in collaboration with the Work in Progress Blog, sponsored by the American Sociological Association. It is based on Hatim Rahman, “The Invisible Cage: Workers’ Reactivity to Opaque Algorithmic Evaluations,” Administrative Science Quarterly, 2021.

We live in a world run by algorithms. Nowhere is this more apparent than with platform companies, such as Facebook, Uber, Google, Amazon, and Twitter. Platforms claim that their algorithms collect and use our data to optimize our experience with breathtaking speed and efficiency.

Recent reports from scholars, journalists, and policy makers, however, have revealed that platforms’ algorithms exacerbate bias and discrimination in ways that are difficult to audit.

In my recent study of workers on a labor platform, I found a broader concern about the way platforms use algorithms to control participants. Platforms’ algorithms create an invisible cage for platform users, because workers have no reliable way of knowing how their data are being processed or used to determine their success on the platform.

As a result, the platform’s algorithm claims to “know” the workers better than they know themselves, yet in ways that are inaccessible to them.

Algorithmic control 

Platforms have exploded onto the scene in recent years. Yet they have faced a persistent dilemma. They claim to have no legal relationship with workers, classifying them as independent contractors. This claim limits their liability toward workers because platforms are not legally required to provide workers with benefits or resources. At the same time, by classifying workers as independent contractors, platforms are not legally allowed to control workers, such as by dictating their schedules or how they should behave.

Instead, platforms have turned to algorithms to control workers. These algorithms assign tasks, monitor behavior, evaluate performance, and dictate workers’ success.

Yet, workers have found ways to game and manipulate these algorithms. For example, online sellers on Amazon, Alibaba, and eBay pay customers to receive positive reviews and use fake accounts to leave negative reviews on their competitors’ products. YouTube creators can pay third-party organizations to inflate their viewer counts, giving the YouTube algorithm the impression that their videos are trending.

In my research setting, the initial rating system was also easily gamed. Nearly all workers had close to a perfect five-star rating. As a result, the platform’s algorithms had difficulty differentiating workers and suggesting matches between users based on their ratings (imagine searching for a product on Amazon where every product has the same rating; the rating isn’t very useful for figuring out which product is best).

Introducing opacity

To address this problem, the platform introduced an opaque rating algorithm. After its implementation, some workers saw their rating go up when a project ended. Others saw their rating go down when a project ended, even though they could not detect that anything had gone wrong on the project. Still others saw no change at all.

Even the most successful, highest-rated workers could not figure out how the opaque algorithm worked. As one such worker told me, “imagine receiving a grade in a class, but the grade was based on criteria you didn’t know about . . . how are you supposed to improve?” When workers thought they had reverse engineered the algorithm through their experiences, it was hard to confirm: either the algorithm had changed, or other workers reported it operating differently.

What frustrated workers even more was that the platform provided no viable recourse options. There was no way to appeal or find out more information about how the algorithm was rating them. Workers were stuck inside the invisible cage.

Reactivity to opaque algorithms

My study identifies two mechanisms that determined how workers reacted to the opaque algorithm.

First, I found that the extent to which workers depended on the platform for work and income mattered.

Second, I found that whether workers experienced decreases in their evaluation scores also influenced their reactions.

Workers who were dependent on the platform to find work and experienced decreases in their scores, for example, tried to convince clients to transact with them off-platform. This tactic was risky because it could lead to their accounts being suspended. Workers who were not dependent on the platform tried to find gig work through other avenues. This was difficult because no other platform provided the same access to clients.

These outcomes were detrimental to the platform because off-platform transactions generated no revenue for it. Further, workers leaving the platform could weaken its user base in the long run.

The Invisible Cage

For decades now, the “iron cage” has dominated our understanding of control. My study suggests that, in platform settings and the gig economy, the invisible cage metaphor is becoming, or arguably has already become, the dominant way to understand control.

Platforms employ algorithms such that the criteria for success, and changes to those criteria, are unpredictable for workers and users alike. We have seen elements of the invisible cage on labor, social media, service, and product platforms, and now even at the national level.

There are signs of hope, however. Workers are beginning to create their own cooperatives, where they control the platform and can share the fruits of their labor more equitably. The current U.S. administration has appointed strong privacy advocates who have signaled that they intend to introduce measures to compel greater transparency and hold platforms accountable for the outcomes of their algorithms and policies. If history is a guide, it will take a combined, sustained effort to ensure a more equitable environment for workers.


Hatim A. Rahman is an Assistant Professor of Management and Organizations at the Kellogg School of Management, Northwestern University.

Monthly Review does not necessarily adhere to all of the views conveyed in articles republished at MR Online. Our goal is to share a variety of left perspectives that we think our readers will find interesting or useful. —Eds.