The book The AI Con, by Emily Bender and Alex Hanna, is a clearly written and fairly concise examination of the reality of “Artificial Intelligence” and its use to intensify or degrade work and to eliminate jobs in a range of fields — from Amazon warehouse workers and delivery drivers to nurses in clinics, journalists in newsrooms, and writers and actors in the film industry.
The use of “intelligence” in the name is actually misleading — part of the hype. AI is a form of automation — a “synthetic text extruding machine,” as the authors call it. AI software is “trained” to extrude text that gives the appearance of being something a human would produce. When we see a sentence of English, we automatically understand it if it’s put together the way a human would put it together. As the authors say, “We can’t help ourselves!” We just understand the sentence as we read it. And once we understand a sentence, we tend to assume there must be an intent or consciousness behind it.
But AI text extruding machines merely mimic what a human might write. To produce plausible text, AI programs make use of a Large Language Model (LLM) built from a vast trove of text. Meta’s Llama 3.1, for example, was “trained” on some 15 trillion words of text. Given this dataset, the software models the probability of a word A following or preceding another word B. That’s why the text looks plausible. As the authors say,
“Simply modeling the distribution of words in text provides no access to meaning…[but] it is enough to produce plausible synthetic text, on just about any topic imaginable…” (p. 30)
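To make the mechanism concrete, here is a minimal sketch in Python (my own illustration, not the authors’ code, and vastly simpler than any production LLM) of how merely modeling which words follow which can extrude plausible-looking text without any access to meaning:

```python
import random
from collections import Counter, defaultdict

# Toy training corpus standing in for the trillions of words a real LLM sees.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def extrude(start, length=8):
    """Generate text by repeatedly sampling a likely next word.

    The output can look plausible, but nothing here represents meaning:
    it is purely the distribution of words in the training text.
    """
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(extrude("the"))  # e.g. "the cat sat on the rug the dog sat"
```

Scale this up from one-word contexts to enormous neural networks trained on trillions of words, and you get the “plausible synthetic text” the authors describe.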
A fundamental problem with the LLM basis for AI is that biases and hate speech are rife within the vast trove of text the AI machine is trained on. So how can AI companies prevent the software from generating hateful or biased text? Large numbers of humans are employed to rate the outputs. As the ratings are fed back into the system, they create a kind of “reinforcing” mechanism that discourages the more hateful or biased text. As the authors report:
“Time reported that OpenAI had subcontracted Kenyan workers making less than two dollars a day to filter out gore, hate speech, child sexual abuse material, and pornographic images from ChatGPT and OpenAI’s image generation tool DALL-E.”
The workers were lured by the prospect of breaking into a lucrative tech field, but the long hours of filtering hate speech and horrific images took a traumatic emotional toll — a form of PTSD. (p. 59)
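To see what that rating work feeds into, here is a toy sketch (again my own simplification; real systems train a separate “reward model” on such labels to steer the generator) of how human ratings can be aggregated to discourage flagged outputs:

```python
from collections import defaultdict

# Hypothetical labels from human raters: +1 = acceptable, -1 = hateful/biased.
# In production, labels like these train a reward model that steers the text
# generator; this sketch simply averages them per candidate output.
ratings = [
    ("output_a", +1), ("output_a", +1),
    ("output_b", -1), ("output_b", -1), ("output_b", +1),
]

scores = defaultdict(list)
for output, label in ratings:
    scores[output].append(label)

def discouraged(output, threshold=0.0):
    """An output is discouraged if its average human rating falls below threshold."""
    labels = scores.get(output, [])
    return bool(labels) and sum(labels) / len(labels) < threshold

print(discouraged("output_a"))  # False: raters approved it
print(discouraged("output_b"))  # True: net-negative ratings suppress it
```

Every one of those labels is a human judgment, which is why the system depends on so much hidden human labor.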
This type of work is sometimes called “crowd work,” and numerous companies do these “tweaking” tasks for AI platforms, such as Prolific, Qualtrics, Remotasks, and others. The landmark image dataset ImageNet would not have been possible without this exploitation of low-paid workers around the world. It was built using Amazon’s system for buying labor for small tasks, Mechanical Turk (MTurk). To hone the ImageNet dataset, its creators used MTurk to hire an estimated 67,000 workers across 167 countries, building up a dataset of over 14 million images labeled across 22,000 categories. What we see here is how the AI text and image extruding machines are really built on the backs of a vast number of poorly paid workers in poor countries.
“A Labor-Breaking Technology”
As the authors point out, “corporate executives in nearly every industry” and consulting firms like McKinsey, BlackRock, and Deloitte “want to ‘increase productivity’ with AI, which is consultant-speak for replacing labor with technology.” (p. 42) AI’s use in industry is the latest form of “scientific management,” which has been standard corporate job-design practice since it was introduced by “efficiency engineers” like Frederick Taylor in the early 1900s. The key to “scientific management” is the “task idea”: the analysis of jobs into the various tasks workers perform in doing them. Henry Ford’s “progressive production” system — implemented in his auto factories during World War I — was based on this kind of task analysis. Ford’s innovation was the first large-scale use of machinery to reorganize work and control workers.
We can see how the latest form of “progressive production” — supported by AI — is playing out in Amazon’s ongoing automation of its warehouse and delivery systems. The Amazon network consists of two types of facility: fulfillment centers, where orders are picked and packed for delivery, and delivery stations, where packages are loaded onto trucks, similar to UPS or FedEx facilities. Because Amazon doesn’t have physical retail stores, the cost savings have gone into a vast investment in automation. In the fulfillment centers, Kiva robots move stacks of goods around and robotic arms allow for automated picking and stowing. Between 2022 and 2024, employment at delivery stations grew by 20 percent, but as Benjamin Y. Fong writes, there’s been a 25 percent reduction in employment at Amazon Robotics Sortable fulfillment centers, where the largest automation investment has taken place. At other fulfillment centers Fong projects a 16 percent drop. As Fong points out, this refutes Amazon’s claim that robotics doesn’t kill jobs. The authors of The AI Con comment thus on Amazon:
“Today, Amazon’s warehouse robots force workers at fulfillment centers to keep up with a speed that is untenable, which has caused repetitive stress injuries,” as well as an OSHA investigation “into several warehouse deaths. Amazon drivers, meanwhile, are expected to keep up with a grueling schedule…tracked by ‘AI-enabled’ cameras in their trucks.” (pp. 46-47)
Delivery stations across the country are now also getting a form of automation called Auto Divert to Aisle (ADTA). Previously the work had two phases: a belt brought packages from the loading docks, pickers along the belt put the packages in racks for each neighborhood, and stowers then put packages in bags for particular city blocks. With ADTA, all this changes. Pickers are eliminated, and workers face a constant stream of packages at an inhuman rate. If they don’t keep up, packages flood the aisles and screeching alarms go off; most workers now wear headphones to cope. Overflowing aisles are a safety risk, since workers may trip over the packages, and the inhuman pace produces repetitive stress injuries.
But as Amazon pushes workers harder, resistance has grown, with workers intentionally working at a slower pace than the machines demand. These grievances open the possibility of further organizing against Amazon’s repressive and inhuman work environment.
Unions have been a source of resistance to AI in other areas. AI was a major issue in the 2023 entertainment industry strikes by the actors’ union (SAG-AFTRA) and the Writers Guild, the first time since 1960 that the two unions had struck together. Although IATSE (another entertainment industry union) was not on strike, many of its members joined the picket lines. As the authors say, “Hollywood studios are enticed by the promise of not having to pay writers by leveraging AI-generated content. Writers know what this means: They will be reduced to cleaning up AI-generated scripts.” Many writer positions could be eliminated. The actors walked out over similar demands from the studios: “They asked that actors’ likenesses could be digitally scanned once and be used in perpetuity, nearly eliminating the need for background actors and reducing job openings even for established character actors.” (p. 42)
In a study of AI’s impact on the industry, a number of writers, camera operators (members of IATSE’s cinematographers guild), costume designers, and actors on the 2023 picket lines were interviewed. Many commented on the “soulless” character of AI, contrasting the “inauthenticity” of AI-generated output with the ability to draw out the human complexities and emotions that are central to the theatrical arts. As one screenwriter put it: “What we do is…illuminate the emotional resonance of a script, and that’s not something AI can do.” Another screenwriter noted the lack of creativity, with script outputs that are “really generic and hacky.” A camera operator characterized the problem this way: “You can go into these AI-enhanced platforms and you could know nothing about cinematography and you can just put in the prompts…Someone who has spent [many] years really finessing the nuances of what it takes to light a scene properly, to write a story that’s compelling…there’s no way that technology can mimic that.”
In these interviews writers noted how their “pushback” against ideas put forth by executives improves the quality of a film. Total control by the executives — using AI to generate scripts or character ideas — “would be a disaster for the viewing public.” As a result of the 2023 strike, writers won protections restricting certain uses of AI and giving writers some decision-making power: AI cannot be credited as an author, and writers can’t be forced to use it. The actors’ union won protections over the use of digital replicas of actors, requiring consent and compensation.
AI is also having a major impact in publishing and journalism. A survey conducted by the Society of Authors “found that 26 percent of authors, translators, and illustrators surveyed had lost work due to generative AI, and 37 percent of them had lost income due to” AI. (p. 105) The danger is illustrated by an AI-generated guide to mushrooms whose mistakes put a family in the hospital. (p. 107) Many authors find that, not long after their book is published, a scammer puts out an AI-generated book with a similar title, in some cases even stealing the author’s name.
In the USA journalism has been going through a serious crisis, with the number of working journalists cut in half between 2004 and 2017. Since 2004, more than 20 percent of papers have closed. Many newspaper websites may look like newspapers, but they’re hollowed-out shells, filled with content built for Search Engine Optimization (SEO), that is, designed to game Google’s search algorithms. “At worst, media execs are leveraging the power of high-profile legacy…brands for clicks and forcing low-paid ghostwriters to fix the SEO-optimized crud that AI tools churn out for cheap advertising dollars that will be indexed on the first page of Google search results.” (p. 129)
Another area where AI is being applied is health care. For example, sepsis, an inflammatory response to infection, is a common problem in hospitals. Epic Systems, one of the largest providers of electronic health records, developed an algorithm for detecting it. A study in the Journal of the American Medical Association notes that the algorithm “failed to generate an alert for 1709” of the 2,552 patients in the study who developed sepsis, while at the same time generating alerts for thousands of patients who didn’t have sepsis. National Nurses United has warned that these AI prediction systems tend to generate excessive false positives and false negatives. (p. 86)
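A back-of-the-envelope calculation (mine, not the book’s) shows how weak that performance is. If the algorithm missed 1,709 of the 2,552 patients who developed sepsis, its sensitivity was only

$$\text{sensitivity} = \frac{2552 - 1709}{2552} = \frac{843}{2552} \approx 33\%.$$

In other words, the tool caught only about one in three actual sepsis cases, while simultaneously flooding staff with false alarms.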
The nH Predict algorithm is another example. Developed by UnitedHealth Group, the largest health insurer in the USA, it was used to determine the appropriate length of stay in a nursing home. It has led to a lawsuit on behalf of patients kicked out too early — “even as the company knew the system had an error rate of 90 percent.” The lawsuit alleges that UnitedHealth “banked on the elderly patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions.” UnitedHealth simply wanted to get “elderly patients out of nursing homes and hospitals as fast as possible, even against the advice of their doctors.” (p. 87)
The so-called “healthcare agents” are another example. These are “synthetically generated animation, a multicultural coterie of fake people wearing scrubs.” Hippocratic AI advertises these nurse substitutes as costing “less than $9 an hour.” (p. 90) The pitch is directed at employers who run health care facilities, proposing that they dispense with the skills of actual human nurses. Here we can see why National Nurses United is working to resist the use of AI in its field.
AI also poses the danger of turbocharging the effect of medical charlatans. For example, in the past year there have been growing measles outbreaks in the USA. At the same time, children’s health — and the use of childhood vaccinations — has become a significant political battleground. In this situation, the Department of Health and Human Services, headed by Robert F. Kennedy Jr., has initiated a campaign through a commission titled Make America Healthy Again (MAHA), aimed at combating childhood disease. The MAHA commission put out a report identifying what it sees as the principal threats to children’s health: pesticides, prescription drugs, and vaccines. Edna Bonhomme, writing in The Guardian, reports: “The most striking aspect of the report was the pattern of citation errors and unsubstantiated conclusions.” Various researchers and journalists suggested that these errors pointed to the use of ChatGPT to generate the report.
The report allegedly cited studies that don’t exist. As Bonhomme writes: “This coincides with what we already know about AI, which has been found not only to include false citations but also to ‘hallucinate,’ that is, to invent nonexistent material.” The epidemiologist Katherine Keyes, who was listed in the MAHA report as the first author of a study on anxiety and adolescents, said: “The paper cited is not a real paper that I or my colleagues were involved with.”
Worker Resistance
AI is being pushed as an automation tool to reduce workforces in health care, social welfare case work, medical clinics and newsrooms. The “rush to implement AI ‘solutions’ to all problems of government services, law, health care, and education” is being driven by “Silicon Valley executives, and their philanthropy arms,” Bender and Hanna write.
But the capitalist investment in AI faces three basic challenges. First, there’s the existential threat posed by the New York Times copyright lawsuit. The New York Times notes that ChatGPT can reproduce the paper’s text verbatim — including whole articles. The venture capital firm Andreessen Horowitz has written that “Imposing the cost of actual or potential copyright liability on the creators of AI models would either kill or significantly hamper their development.” (p. 112) The problem lies in what the AI text extrusion machine is: trained on vast troves of text, it can only generate output based on that dataset.
Secondly, despite all the hype, AI automation has not actually done much to raise labor productivity. MIT labor economist Daron Acemoglu has projected that productivity gains from AI will be less than 0.53 percent over the next ten years. (p. 193) In 2016 an AI booster said we should stop hiring radiologists because AI could replace them; yet the Bureau of Labor Statistics notes a 6 percent growth in medical imaging jobs from 2022 to 2023. The big investment in AI is a bubble. It’s part of the general crisis of over-investment that has afflicted neoliberal capitalism over the past two decades.
And finally, there is the potential of pushback from unions such as the nurses’ union (concerned with nurse staffing ratios) and the entertainment industry unions. As Bender and Hanna point out, the best protection for workers lies in the power of unions.
Worker resistance to AI is going to continue. Like the authors of The AI Con, I’m not “anti-technology.” Some automation tools are useful — such as machine translation, or the heart rate, blood pressure, and blood oxygenation monitors used in health care. Rather, the larger issue is who controls the development and use of technology. The authors write: “We want to see technology that is designed with an understanding of both the needs and values of the people using it and of those it might be used on.” My own view is that this is not likely without workers gaining control over industry and over the development and application of technology.