Artificial Intelligence (AI) seems to be everywhere. Companies use powerful AI chatbots on their webpages or phone systems to handle customer questions. Newsrooms and magazines use them to write stories. Film studios use them to produce films. Tech companies use them to program. Students use them to write papers. It seems like magic. And with everything supposedly happening “in the cloud,” it is easy to believe that AI-powered systems are good for the environment. Unfortunately, things are not as they appear to be.
Chatbots are built on exploitation, use massive amounts of energy, and are far from reliable. And while it is easy to imagine them growing in sophistication and making life easier in some respects, companies are pouring billions of dollars into their creation to make profits with little concern about whether the results will be socially beneficial. In short, we need to take the corporate interest in AI seriously and develop strategies that can help us gain control over how AI is developed and used.
The Race Is On
The chatbot revolution began in 2022 with OpenAI’s introduction of ChatGPT. ChatGPT was capable of human-like conversation and could answer user questions with generated text as well as write articles and code. Best of all, it was free to use.
Other companies, responding to the public interest in ChatGPT, soon began introducing their own AI chatbots. The biggest and most widely used today are Google’s Gemini (formerly Bard) and Microsoft’s Copilot. There are others as well, including some designed to meet specific business needs. For example, GitHub Copilot helps software developers write code, and Anthropic’s Claude was designed for information discovery and document summarization.
And the race continues to create next-generation AI systems that can take in more information, process it more quickly, and provide more detailed, personal responses. According to Goldman Sachs economists, AI-related investment in the United States “could peak as high as 2.5 to 4 percent of GDP” over the next decade.
Chatbots need a large and diverse database of words, text, images, audio, and online behavior, as well as sophisticated algorithms that organize the material according to common patterns of use. When given a question or request for information, a chatbot identifies material in its database related to the pattern of words in the prompt and then assembles, again guided by algorithms, the set of words or images that best satisfies the inquiry, given the limits of its data. Of course, the process of identifying patterns and constructing responses takes enormous amounts of energy.
No matter how conversational and intelligent a chatbot might sound, it is important to remember, as Megan Crouse explains, that:
“The model doesn’t ‘know’ what it’s saying, but it does know what symbols (words) are likely to come after one another based on the data set it was trained on. The current generation of artificial intelligence chatbots, such as ChatGPT, its Google rival Bard and others, don’t really make intelligently informed decisions; instead, they’re the internet’s parrots, repeating words that are likely to be found next to one another in the course of natural speech. The underlying math is all about probability.”
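To make the “probability” point concrete, here is a minimal, purely illustrative sketch in Python. It uses a hypothetical hand-built table of word-pair probabilities rather than a neural network trained on billions of documents, but the basic move is the same: given the last word, pick the next word according to how likely it is to follow.

```python
# Toy illustration of next-word prediction by probability.
# Real chatbots use neural networks trained on vast corpora; this
# hand-written lookup table is a stand-in to make the mechanism concrete.
import random

# Hypothetical probabilities of which word follows which, as might be
# estimated by counting word pairs in a training corpus.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def generate(start_word: str, length: int = 5) -> str:
    """Repeatedly pick the next word according to its estimated probability."""
    words = [start_word]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no known continuation for this word
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat" -- fluent-sounding, but no understanding involved
```

The output can sound fluent, yet nothing in the process involves knowing what the words mean; the system is only reproducing statistical patterns in its training data.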
Different chatbots will produce different results because of their programming and because they have been trained on different data sets. For example, in addition to scraping whatever public data is available on the web, Google’s Gemini can draw on data from Google’s apps, while Microsoft’s Copilot uses data generated by the Bing search engine.
Chatbots have gone through a number of upgrades since their introduction. Each generation has a more complex software package that allows it to make more nuanced connections and to expand its database by incorporating data from users’ questions and requests. In this way, chatbots learn and improve over time through use.
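As a rough illustration of how user interactions might be folded back into a training set, a logging step could look something like the sketch below. The field names and file format are hypothetical; actual company pipelines are proprietary and not public.

```python
# Hypothetical sketch: recording user exchanges so they can later be added
# to a training corpus. Field names and format are illustrative only.
import json
from datetime import datetime, timezone

def log_interaction(prompt: str, response: str, path: str = "interaction_log.jsonl") -> None:
    """Append one user exchange to a file of candidate training examples."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("How do I request a bereavement fare?", "You can submit a request form...")
```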
This perspective highlights the fact that while we may talk about these things as happening “in the cloud,” the ability of chatbots to respond to prompts or questions depends on processes that are firmly rooted in the ground. In the words of the tech writer Karen Hao:
“A.I. has a supply chain like any other technology; there are inputs that go into the creation of this technology, data being one, and then computational power or computer chips being another. And both of those have a lot of human costs associated with them.”
The Supply Chain: Human Labour
AI systems need data, and that data comes from people in one form or another. Technology companies are therefore continually on the hunt for new and diverse data to enhance the operation of their AI systems. With our blog and website posts, published books and articles, searches, photographs, songs, and videos freely scraped from the internet, we are helping to underwrite highly profitable companies in their pursuit of yet greater profits. As Lauren Leffer notes,
“Web crawlers and scrapers can easily access data from just about anywhere that’s not behind a login page… This includes anything on popular photograph-sharing site Flickr, online marketplaces, voter registration databases, government webpages, Wikipedia, Reddit, research repositories, news outlets and academic institutions. Plus, there are pirated content compilations and Web archives, which often contain data that have since been removed from their original location on the Web. And scraped databases do not go away.”
In fact, a significant share of the scraped material was copyrighted and taken without permission. In response, a number of publishers, writers, and artists are now seeking to stop the theft. For example, in August 2023, The New York Times updated its “Terms of Service” to prohibit any use of its text, photos, images, and audio/video clips in the development of “any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system.” But while some major companies have the leverage or legal power to either prohibit or negotiate financial compensation for use of their material, most businesses and individuals do not. As a result, they are still at risk of having their “intellectual property” taken from them for free and turned into AI training material in the service of corporate money-making activity.
Without minimizing the personal losses associated with AI data collection, there is a far greater problem with this method of acquisition. Scraping the public internet means that AI chatbots are being trained using material that includes widely different perspectives and understandings about science, history, politics, human behavior, and current events, including postings and writing by members of extreme hate groups. And problematic data can easily influence the output of even the most sophisticated chatbots.
For example, chatbots are increasingly being used by companies to help with job recruiting. Yet, as Bloomberg News discovered, “the best-known generative AI tool systematically produces biases that disadvantage groups based on their names.” Bloomberg’s own study found that “When asked 1000 times to rank eight equally-qualified resumes for a real financial analyst role at a Fortune 500 company, ChatGPT was least likely to pick the resume with a name distinct to black Americans.”
Chatbots are dependent on the quality of human labour in yet another way. Chatbots cannot make direct use of much of the data gathered by web crawlers and scrapers. As Josh Dzieza explains, “behind even the most impressive AI system are people – huge numbers of people labeling data to train it and clarifying data when it gets confused.”
Major AI companies generally hire other, smaller companies to find and train the workers needed for the data labeling process. And these subcontractors, more often than not, find their workers, called annotators, in the Global South, often in Nepal and Kenya. Because the annotation process as well as the items being annotated are considered trade secrets, annotators rarely know who their ultimate boss is and can be fired if they are found discussing their work with others, even co-workers.
Dzieza describes some of the work annotators must do to enable chatbots to make use of the data gathered for them. For example, annotators label items in videos and photos. This needs to be done to ensure that AI systems will be able to connect specific configurations of pixels with specific items or emotions. Companies building AI systems for self-driving vehicles need annotators to identify all the critical items in videos taken of street or highway scenes. That means “identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of – frame by frame and from every possible camera angle.” As Dzieza reports, this is “difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which [the annotator] was paid about $10.”
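For a sense of what this output looks like, here is a hedged sketch of the annotations for a single video frame. The field names, labels, and coordinate format are hypothetical, since real annotation schemas are proprietary; the point is that every object in every frame must be recorded by hand.

```python
# Hypothetical example of the record an annotator might produce for one
# video frame in a self-driving dataset. Structure is illustrative only.
frame_annotation = {
    "video_id": "dashcam_0042",
    "frame_number": 118,
    "camera": "front_left",
    "objects": [
        # Each object gets a label and a bounding box in pixel coordinates.
        {"label": "pedestrian", "bbox": [412, 220, 470, 360]},
        {"label": "cyclist", "bbox": [610, 240, 700, 390]},
        {"label": "vehicle", "bbox": [120, 250, 380, 420]},
        # Edge cases matter: a person walking a bike must be labeled
        # consistently, or the trained system may fail to recognize the
        # situation on the road.
    ],
}
```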
This kind of work, although low paid, is critical. If the annotation process is poorly done or the database is limited, the system can easily fail. A case in point: in 2018, a woman was struck and killed by a self-driving Uber car. The AI system failed because although “it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street.”
Annotators are also hired to label items in social-media photos. This might involve identifying and labeling every visible shirt and recording whether they were “polo shirts, shirts being worn outdoors, shirts hanging on a rack,” etc.
Other jobs involve labeling emotions. For example, some annotators are hired to look at pictures of faces, including selfies taken by the annotators, and label the perceived emotional state of the subject. Others are hired to label the emotions of customers who phoned in orders to stores owned by a pizza chain. Another job has annotators labeling the emotions of Reddit posts. This task proved challenging for one group of Indian workers, primarily because of their lack of familiarity with US internet culture. The subcontractor decided, after a review of their work, that some 30 percent of the posts had been mislabeled.
Perhaps the fastest growing segment of AI training work involves direct human interaction with a chatbot. People are hired to discuss topics with a chatbot, which is programmed to give two different responses in each conversation. The hired “discussant” must then select the response they think “best.” This information is then fed back into the system to help it sound more “human.”
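A hedged sketch of what one such preference record might look like is given below. The field names are hypothetical; this kind of comparison data is commonly used in what the industry calls reinforcement learning from human feedback.

```python
# Hypothetical record of one human preference judgment: a prompt, two
# candidate chatbot replies, and the hired discussant's choice.
# Real pipelines use proprietary formats and far more metadata.
preference_example = {
    "prompt": "Explain in one sentence why the sky is blue.",
    "response_a": "Sunlight scatters off air molecules, and blue light scatters the most.",
    "response_b": "Because blue is the sky's favourite colour.",
    "chosen": "response_a",  # the worker's pick
}

# Many such records are collected and used to nudge the model toward
# responses that human raters prefer.
```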
In short, AI systems are heavily dependent on the work of humans. These are not magical systems, operating unaffected by human biases or emotions. And their activity does not take place in some imaginary cloud. This latter point becomes even more obvious when we consider the infrastructure required for their operation.
The Supply Chain: Data Centers
The growth in AI has been supported by a vast build-out of data centers and a steadily rising demand for electricity to run the computers and servers they house as well as the air conditioners that must run continuously to prevent their overheating. In fact, “the Cloud now has a greater carbon footprint than the airline industry. A single data center can consume the equivalent electricity of 50,000 homes.”
According to the International Energy Agency, the 2,700 data centers operating in the US were responsible for more than 4 percent of the nation’s total electricity use in 2022, and their share is likely to hit 6 percent by 2026. Of course, such estimates are rough, both because the major tech companies are unwilling to share relevant information and because AI systems are continually being trained on new data and upgraded with more skills, meaning greater energy use per activity.
Even now, there are signs that the energy demands of data centers are taxing the US power grid. As the Washington Post notes: “Northern Virginia needs the equivalent of several large nuclear power plants to serve all the new data centers planned and under construction. Texas, where electricity shortages are already routine on hot summer days, faces the same dilemma.”
The Pacific Northwest faces a similar challenge. As the Oregonian newspaper points out:
“Data centers proliferating across Oregon will consume dramatically more electricity than regional utilities and power planners had anticipated, according to three new forecasts issued [in] summer [2023].
“That’s putting more pressure on the Northwest electrical grid and casting fresh doubt on whether Oregon can meet the ambitious clean energy goals the state established just two years ago…
“The Bonneville Power Administration now expects that, by 2041, data centers’ electricity demands in Oregon and Washington will grow by two-and-a-half times, drawing 2,715 average megawatts. That’s enough to power a third of all the homes in those two states today.”
This skyrocketing demand, fueled largely by the rapid growth of AI, represents a major threat to our efforts to combat global warming. For example, power concerns have already led Kansas, Nebraska, Wisconsin, and South Carolina to delay closing coal plants. A 2024 report by several climate action groups on the climate threat posed by AI finds that the doubling of energy use by data centers, which the International Energy Agency estimates will happen over the next two years, will lead to an 80 percent increase in planet-heating emissions. This is a severe price to pay for new AI services that are being rolled out regardless of their ability to meet real, rather than created, needs.
“Persuasive Not Truthful”
Clearly major tech companies are betting that AI will generate huge profits for them. And leaving nothing to chance, they are doing all they can to embed AI systems in our lives before we have the opportunity to consider whether we want them. Already, AI systems are being promoted as a way to improve health care, provide mental health advice, give legal advice, educate students, improve our personal decision-making, and increase workplace efficiency; the list goes on.
Seemingly forgotten is the fact that AI systems are only as good as the data entered and the software written to use it. In other words, their operation depends on humans. And, perhaps even more importantly, no one really knows how AI systems use the data they have been trained on; it is impossible to trace their “reasoning process.” The warning signs that these systems are being seriously oversold are already visible.
For example, in 2022 a customer contacted Air Canada to find out how to get a bereavement fare. The airline’s AI-powered customer service chatbot told him he only needed to submit a form within 90 days of the date the ticket was issued to get a refund on his trip. But when he submitted the form after completing his trip, airline personnel told him that there would be no fare reduction because the form had to be completed before the trip. When he showed the airline the screenshots he had taken of what the bot told him, the airline countered that it was not responsible for what the bot said.
The customer sued Air Canada and won. The judge noted that:
“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.”
Leaving aside whether companies might in fact seek to have chatbots declared separate legal entities so that they can disassociate themselves from their actions if desired, the airline has yet to explain why its chatbot gave out wrong information.
Then there is the NYC chatbot, developed with the help of Microsoft, which the city promoted as a “one-stop-shop” for businesses to help them stay current on city rules and regulations. Here are some examples of the questionable advice given in response to inquiries:
“the chatbot falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks…
“Asked if a restaurant could serve cheese nibbled on by a rodent, it responded: ‘Yes, you can still serve the cheese to customers if it has rat bites,’ before adding that it was important to assess ‘the extent of the damage caused by the rat’ and to ‘inform customers about the situation.’”
Perhaps not surprisingly, both Microsoft and the mayor of NYC responded by saying such problems will eventually get corrected. In fact, they helpfully added, users will speed the needed fine-tuning of the system by pointing out its errors.
These kinds of problems, as serious as they are, pale in comparison to the problem of AI “hallucinations.” A hallucination occurs when an AI system fabricates information, which could include names, dates, books, legal cases, medical explanations, even historical events. There have already been several court proceedings in which chatbots invented cases that lawyers then cited in their filings.
A case in point: The lawyers representing a plaintiff in a June 2023 lawsuit against a Colombian airline submitted a brief that included six supportive cases “found” by a chatbot. Unfortunately, these cases never existed; some even mentioned airlines that did not exist. The judge dismissed the case and fined the lawyers for using fake citations. The lawyers, disagreeing with the judge’s assertion that they had acted in bad faith, said in their defense that “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
Even the most sophisticated chatbots can suffer hallucinations. When asked about odds for betting on the 2024 Super Bowl that was to take place the following day, Google’s chatbot announced it was too late to bet since the game had already taken place, with the San Francisco 49ers beating the Kansas City Chiefs by a score of 34 to 28. It even included some player statistics. The game, when it was played, was won by Kansas City. Microsoft’s chatbot did the same, asserting the game was over even though it had not yet been played. It, however, declared that the Kansas City Chiefs had won.
Now imagine what the costs might be if a chatbot giving medical advice suffered a hallucination. The US military is rapidly increasing its use of AI technology in a variety of ways, including to identify threats, guide unmanned aircraft, gather intelligence, and plan for war. Imagine the potential disaster that could result from inadequate or incomplete data training of the system or, even worse, a hallucination. The obvious point is that these systems are far from foolproof, and for a variety of reasons. An internal Microsoft document captures this best when it declares that the new AI systems are “built to be persuasive, not truthful.”
What Is To Be Done?
So far public concern about AI has largely focused on the unsanctioned use of personal data by AI systems. People want protection against unauthorized web scraping of their material. And they don’t want their interactions with AI systems to become a data-generating activity that could expose them to fraud, discrimination, or harassment. Various state and local governments are now considering ways to achieve this. And in 2023, President Biden issued a federal executive order that seeks to ensure that new “foundational” AI systems are adequately tested for flaws before public release. These are helpful first steps.
The sharpest struggle over the use of AI is taking place in the workplace. Companies are using AI systems to keep tabs on worker organizing, monitor worker performance, and, when possible, get rid of workers. Not surprisingly, unionized workers have begun to fight back, proposing limits on company use of AI systems.
For example, the Writers Guild of America (WGA), representing some 12,000 screenwriters, struck a number of major production companies – including Universal, Paramount, Walt Disney, Netflix, Amazon, and Apple – for five months in 2023, seeking wage increases, employment protections, and restrictions on AI use. Significantly, as Brian Merchant, a columnist for the LA Times, describes:
“concerns over the use of generative AI such as ChatGPT were not even top of mind when the writers first sat down with the studios to begin bargaining. The WGA’s first proposal simply stated the studios would not use AI to generate original scripts, and it was only when the studios flatly refused that the red flags went up.
“That was when the writers realized studios were serious about using AI – if not to generate finished scripts, which both sides knew was impossible at this juncture, then as leverage against writers, both as a threat and as a means to justify offering lowered rewrite fees. That’s when the WGA drew a line in the sand, when we started seeing signs on the picket lines denouncing AI go viral on social media and headlines that touted the conflict gracing the newspapers like this one.”
In fact, the growing awareness of the need to gain control over the use of AI systems led the Writers Guild to hold, during the strike, several meetings on AI for workers in related industries, including those employed in digital media shops. Many of the attendees ended up on the picket line supporting the striking screenwriters.
The strike produced major gains for the writers. In terms of AI, the new contract prohibits the use of large language model AI systems to write or rewrite scripts or for source material. Writers, on the other hand, will be allowed to make use of them if they desire. The contract also rules out using any writers’ material to train AI systems. As one analyst commented, “The fear that first drafts would be done through ChatGPT and then handed to a writer for lower rewrite fees has been neutered. This may be among the first collective-bargaining agreements to lay down markers for AI as it relates to workers.”
The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) went on strike against the major film and television producers two months after the start of the WGA strike. Not surprisingly, AI policy was one of the major issues motivating the decision to strike. Perhaps most importantly, the actors succeeded in winning a new contract that will force producers to bargain over future uses of AI.
For example, the agreement requires that if a producer plans to use a “synthetic performer” (a digitally created, natural-looking individual that is “not recognizable as any identifiable natural performer”), they must notify and bargain with the union over the decision not to hire a natural performer, with the possibility of fees being paid to the union. If a producer wants to use a “recognizable synthetic performer” (a digitally created, natural-looking individual that is recognizable as a natural performer), they must first bargain with the performer and obtain their consent.
Other workers, journalists for example, are also engaged in hard bargaining with their bosses over the use of AI technology, both to protect their jobs and to defend professional standards. These labour struggles are an important start toward developing needed guardrails for AI use. They can be a foundation upon which to build a broader labour-community alliance against the corporate drive to use AI technology to diminish human connections and human agency – in our medical system, educational institutions, transportation, news reporting, and communications with public agencies and providers of goods and services, and the list goes on. Our chances of success will greatly improve if we can help working people see through the hype to accurately assess the full range of costs and benefits associated with AI technology. •
Martin Hart-Landsberg is Professor Emeritus of Economics at Lewis and Clark College, Portland, Oregon; and Adjunct Researcher at the Institute for Social Sciences, Gyeongsang National University, South Korea. His areas of teaching and research include political economy, economic development, international economics, and the political economy of East Asia. He maintains a blog Reports from the Economic Front.