Is Algorithmic Management Too Controlling?
New research from Wharton’s Lindsey Cameron looks at how gig workers are dealing with strict managers who aren’t human.
The following article was originally published by the Mack Institute for Innovation Management.
In more and more workplaces, important decisions aren’t made by managers but by algorithms, which have ever greater access to and control over workers. While algorithmic management can boost efficiency and flexibility (as well as enabling a new class of quasi-self-employed workers on platforms like Uber and Instacart), critics warn of heightened surveillance and reduced autonomy for workers.
In a newly published paper, Wharton professor Lindsey Cameron examines how ride-hail drivers interact with the algorithmic management tools that make app-based work possible. In this interview, she shares insights from her research, along with tips for creating a more equitable future of work.
How Algorithmic Management Works in the Gig Economy
What is “algorithmic management” and what does it look like in the workforce?
Lindsey Cameron: Algorithmic management is when an algorithm makes the decisions that a human manager typically does. That could be anything to do with hiring, firing, evaluating, or disciplining workers.
These algorithmic systems are remarkably extensive. For example, in a typical shift, a ride-hail driver might complete only a dozen rides, but will have more than a hundred unique interactions with the algorithm. And while ride-hail driving — or other jobs in the app-based gig economy — is often seen as the exemplar of algorithmic management, many other jobs now have algorithms embedded within them.
Think about Amazon warehouse workers or the person at the checkout line at your grocery store. There’s probably an algorithm counting how fast they’re scanning items and evaluating their performance. Think about the emails and text messages you get asking you to rate a worker you interacted with. And let’s not forget how we are asked to tip now after every service transaction — you can be sure that information is being recorded and used as a performance indicator. In short, algorithms are becoming embedded in work across professions, industries, skill levels, and income levels.
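To make that concrete, here is a minimal, hypothetical sketch of the kind of throughput metric such a system might compute. The quota, the event data, and the function name are invented for illustration, not drawn from any actual retailer’s system.

```python
from datetime import datetime, timedelta

# Hypothetical scan log: one timestamp per item scanned (here, one every 20 s).
scan_times = [
    datetime(2024, 1, 5, 9, 0, 0) + timedelta(seconds=20 * i)
    for i in range(90)
]

TARGET_ITEMS_PER_MINUTE = 4.0  # assumed quota, invented for this example

def items_per_minute(times: list[datetime]) -> float:
    """Average scan rate between the first and last scan of the shift."""
    if len(times) < 2:
        return 0.0
    elapsed_minutes = (times[-1] - times[0]).total_seconds() / 60
    return (len(times) - 1) / elapsed_minutes

rate = items_per_minute(scan_times)
print(f"Scan rate: {rate:.2f} items/min; below quota: {rate < TARGET_ITEMS_PER_MINUTE}")
```

A few lines like these are enough to turn a raw event stream into a pass/fail performance flag, which is part of why such metrics spread so easily.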
Your research explores how ride-hail workers interact with their algorithmic managers — specifically, how they exert autonomy and make decisions. Initially, it may seem that these types of workers have little autonomy and choice. But you discovered how drivers devise ways to exert more control.
Lindsey Cameron: I look at each of a worker’s interactions with the algorithmic management system as an instance in which consent is produced. By consent, I mean, “How do you get people to enthusiastically comply with rules?” Through these small interactions, people feel like they have a sense of choice. It’s a small choice, but it is very real.
There are many tactics that drivers use to exercise autonomy. Some rigorously follow all the rules and suggestions of the algorithmic management system in the hopes that it will pay off financially. Others deviate from the rules and suggestions in an attempt to game the system into a more favorable outcome. This might include turning the app on and off, relocating to higher-demand areas, or declining multiple rides in a row in an attempt to inflate surge prices. All of these things may help individuals feel like they have a sense of choice, and ultimately these choices help the ride-hailing company: keeping workers online is one of the firm’s primary objectives.
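Ride-hail platforms do not publish their pricing logic, so as rough intuition for why tactics like declining rides or logging off might move prices, here is a toy surge model in which the multiplier scales with the demand-to-supply ratio. Every name and parameter is an assumption made for illustration.

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base: float = 1.0, cap: float = 3.0) -> float:
    """Toy surge model: price scales with the demand-to-supply ratio, capped."""
    if available_drivers == 0:
        return cap
    ratio = open_requests / available_drivers
    return min(cap, max(base, base * ratio))

# With ample supply there is no surge:
print(surge_multiplier(open_requests=10, available_drivers=20))  # 1.0

# If drivers decline rides or log off, effective supply shrinks and surge rises:
print(surge_multiplier(open_requests=10, available_drivers=4))   # 2.5
```

Whatever the real formula looks like, the direction of the effect in this sketch (fewer available drivers, higher multiplier) is the lever such tactics try to pull.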
Why Algorithmic Management Can Exacerbate the “Good Bad Job”
Typically, algorithmic management is associated with what academics call “bad jobs”: work that is dangerous or precarious and affords little room for advancement or autonomy.
On the face of it, app-based work would seem to be a clear example of that. But your research shows that, when it comes to people’s actual attitudes toward working these jobs, it’s not so simple. This insight led you to coin the term “good bad job.”
Lindsey Cameron: I define a “good bad job” as work that is attractive and meaningful in ways that mask structurally problematic elements, almost like a Trojan horse. Typically, in bad jobs, people have a negative subjective experience. But when you talk to most people who are doing ride-sharing or other app-based work, most of them enjoy it or at least think it’s better than their alternatives. What my research looks at is the tension of having a positive subjective (“good”) experience in a structurally problematic (“bad”) job.
Amazon is a great example — warehouse work is hard, but the wages are high, especially in rural areas. But it’s still algorithmically managed, and workers are often pushed to their physical and emotional limits. I mean, how do you reason with an algorithm? In the future, we’ll likely see more jobs like this — with higher pay or greater flexibility, but in difficult working conditions, with algorithms controlling much of the experience.
Currently, we associate algorithmic management with low-wage, precarious jobs like ride-hail driving or warehouse work. Do you think it’s in the future for white-collar workers?
Lindsey Cameron: It’s already here. We’re seeing a broad sweep of new tools, technology, and digitization under the banner of the future of work. Surveillance of at-home workers exploded during the pandemic, with the introduction of tools that could track your keystrokes or whether you were active at your computer or Bloomberg terminal. If you do any kind of customer-facing job, an algorithm keeps track of your ratings and reviews. There are algorithms that scrape your email to make sure you’re not committing corporate espionage or telling offensive jokes.
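As a hypothetical illustration of how simple such activity monitoring can be, the sketch below estimates the share of a shift a worker spends “active” from a log of keystroke timestamps. The idle threshold and the event data are invented for this example.

```python
from datetime import datetime, timedelta

IDLE_GAP = timedelta(minutes=5)  # assumed threshold for counting time as idle

def active_ratio(events: list[datetime],
                 shift_start: datetime, shift_end: datetime) -> float:
    """Fraction of the shift not spent in gaps longer than IDLE_GAP.

    `events` are timestamps of keyboard/mouse activity, sorted ascending.
    Only the portion of a gap beyond IDLE_GAP is counted as idle time.
    """
    checkpoints = [shift_start, *events, shift_end]
    idle_seconds = sum(
        ((b - a) - IDLE_GAP).total_seconds()
        for a, b in zip(checkpoints, checkpoints[1:])
        if b - a > IDLE_GAP
    )
    total_seconds = (shift_end - shift_start).total_seconds()
    return 1 - idle_seconds / total_seconds

start = datetime(2024, 1, 5, 9, 0)
end = datetime(2024, 1, 5, 17, 0)
events = [start + timedelta(minutes=m) for m in (1, 2, 3, 45, 46, 200, 201)]
print(f"Active share of shift: {active_ratio(events, start, end):.0%}")
```

Note what the metric cannot see: reading, thinking, or a phone call all register as idle, which is exactly the kind of blind spot that makes purely algorithmic evaluation contentious.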
Algorithms are endemic to this new world of work, regardless of the type of job you’re doing. I just chose to study a type of work — ride-hail driving and app-based work — in which they are very prevalent. There’s a great quote attributed to the science fiction writer William Gibson: “The future is here, it’s just not evenly distributed.” If you look at the most disenfranchised, vulnerable, marginalized workers, that is where new tools and technology get tested first. Predictive algorithms, for example, are now used in all kinds of fields, but they were first tested on parole decisions for prisoners, who are disproportionately African-American men. Safiya Noble, Ruha Benjamin, and others have done phenomenal work on this topic.
What should executives and business leaders know about algorithmic management and how it affects their workers?
Lindsey Cameron: You’ve got to have a human in the loop. You can’t have hard-and-fast evaluation cutoffs. In some companies, an algorithm can fire you if you’re not meeting your quota. Not only should that not happen, but there also needs to be an appeals process when decisions are made.
Basically, don’t let the algorithm be stupid. Think of the worst-case scenario — how the most marginalized person could be negatively affected — and design around that case, because that’s the only way to make this type of management as inclusive as possible.
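One way to read that advice as a design pattern: the algorithm may flag, but only a human decides, and an open appeal pauses any adverse action. The sketch below is a hypothetical policy, not any company’s actual system; all names and thresholds are invented.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    WARN = auto()
    HUMAN_REVIEW = auto()  # the system never terminates anyone automatically

@dataclass
class WorkerRecord:
    worker_id: str
    quota_attainment: float  # e.g. 0.82 means 82% of quota met
    appeal_pending: bool = False

def evaluate(record: WorkerRecord) -> Action:
    """Hypothetical human-in-the-loop policy.

    Adverse outcomes are routed to a person who reviews context before
    acting, and nothing proceeds while an appeal is open.
    """
    if record.appeal_pending:
        return Action.NO_ACTION      # appeals freeze all automated action
    if record.quota_attainment < 0.5:
        return Action.HUMAN_REVIEW   # a manager decides, not the algorithm
    if record.quota_attainment < 0.8:
        return Action.WARN
    return Action.NO_ACTION

print(evaluate(WorkerRecord("w-17", quota_attainment=0.43)))
# Action.HUMAN_REVIEW
```

The key design choice is that the most severe branch returns a review request rather than a termination, which is the structural difference between an algorithm that assists a manager and one that replaces them.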