Among the many questions about artificial intelligence (AI), one of the more common has been: how big are the dangers?
So far, much of the debate about the future of work with AI has been led by neoliberal (or worse: mainstream) economists and adjacent “experts” and business prophets.
Historically, following the false predictions of mass unemployment in the wake of the steam engine, the railway, the motor car, the electric engine, the computer, etc., one cohort of experts still believes that AI will lead to significant job losses.
Their main argument: in the neoliberal-economist model of AI firms, companies will use AI products to cut labor costs.
Banks, corporate think-tanks (read: lobbyists), corporate consulting firms, and the business press offer no shortage of “estimated” numbers – often with exorbitantly high figures.
As long as media capitalism fuels a society of the spectacle in service of media empire profits, these numbers must be sensational, must be scary, and must be click-bait.
Goldman Sachs, for example, picked 300 million out of thin air as “the” number of potential job losses in the wake of AI transformation. It could have been 600 million, or any number in between.
These are full-time jobs worldwide that might be lost. Worse, a quarter of all current jobs “could” (the magic word in all this) be completely replaced by AI.
A similarly devious outfit called McKinsey predicted that 30% of all hours currently worked in the U.S. “could” be automated – though it could just as easily be 25%, 35%, or 40%.
Let’s not forget: management consulting firms live (read: profit) from selling “big ideas.” More often than not, this boils down to simplistic common sense, packed in a sensationalist format for simple managers.
Their selling point? “Transition to AI – now!” Or FOFO will bite you: Fear of Falling Behind. Others will use AI to outpace you. And we [consultancy X] will help you… for a fee. Remember, we helped you with that Y2K problem!
Just, please don’t remember that nothing really happened on the night of December 31, 1999. What Time Magazine (Dec. 30, 2019) later called “a joke” was not a joke for consulting firms – they had already cashed in.
Big ideas like Y2K are often not much more than managerial fads, shifting like fashion trends. These “big ideas” range from TQM (remember that?), to lean manufacturing, to competitive advantage, to the ever-favorite SWOT analysis, to synergies, to corporate diversification, or its opposite: “focus on your core business” (a few years later), and now, sustainability.
Jumping on the environmental bandwagon also translates into lucrative consultancy fees. For consultants, the motto is: whatever the issue, we sell it.
Predicting dire consequences for AI pays handsomely. And yet, corporate consulting companies are racing to use AI themselves and are striving to develop their own AI platforms. Their dream? To offer a platform where “AI agents” autonomously execute multi-level tasks.
Amid all the doom and gloom, AI will inevitably also create new jobs, labor economists argue. A job can be seen as a bundle of tasks. In many industries, it’s unlikely that AI tools will replace all or even most of them.
Meanwhile, an estimated 60% of today’s jobs did not even exist in 1940 – a transition from blacksmiths to software developers.
New jobs – and job titles – keep appearing. Many of them arise from new technologies: drone pilot, textile chemist, Chief Automation Officer (CAO), human–machine matchmaker, carbon emissions manager, online content manager.
Some reflect IT changes; others include gig and platform jobs. AI will also create things we can’t even imagine today. In most cases, AI is likely to complement – not replace – human work.
Throughout history, old jobs have always been replaced by new types of work. Is this reassuring? Two points matter:
- No Predictions: There is simply not enough research data on “AI and Work” to make predictive, let alone sensible, statements about how work will evolve. We’ve already seen major shifts from agriculture (read: feudalism), to industrial production (read: capitalism), to the service economy (read: more capitalism). But these transitions don’t necessarily tell us what AI will bring.
- Continuous Process: These earlier transitions aren’t necessarily comforting. In many countries, these processes are still incomplete. Some nations are still on the “farming-to-industry” path (e.g., BRICS), while others are shifting from manufacturing to service industries (e.g., Sweden, Canada). These transitions still shape societies – and will continue to do so – with or without AI.
Crucially, in countries dominated by neoliberal ideology, economics and politics remain shaped by an ideologically driven absence of the state, which fails to manage these transitions adequately (read: more equitably for workers and the environment).
Mainstream economists prefer to take a so-called “nuanced” position (read: opportunistic). Above all, they warn that so-called “technology entrepreneurs” (read: Big Tech monopolies) will dominate the introduction of AI.
Some economists admit their models are bound to fail. Too often, their simplistic neoliberal assumption – “technological progress = more prosperity” – remains unchallenged. What they avoid asking is:
how is this prosperity achieved and who benefits?
Meanwhile, inequality is increasing as so-called “innovation drivers” (read: corporations) generate surpluses. Profits flow upward.
In quasi-monopolized AI markets, innovation affects product prices and labor demand. AI could reduce wages and lead to redistributions favoring capital. But AI won’t alter capitalism’s fundamentals – as outlined by Smith, Ricardo, Marx, or Richard D. Wolff.
Capital only adopts AI if it’s cheaper than the cheapest available human labor. If outsourcing a call center to a low-wage country is cheaper than building AI, then the call center stays. But when AI becomes cheaper, capital will automate wherever possible.
Many argue that this time, semi-skilled and skilled work will be the first to go. Whether capital automates still depends on cost.
Currently, AI companies are underpricing algorithmic tools to hook corporate clients. Once dependency is created, prices rise. The logic is clear: eliminate labor, increase dependency, and extract value (read: profits).
There are serious doubts about the profitability of AI business models. Companies that over-invested in IT and now AI may struggle to generate revenue.
Some critics even warn of a coming “subprime AI crisis” – echoing the 2008 mortgage collapse. Many companies are building products and services on unprofitable AI models – driven by hype or, as Emily M. Bender and Alex Hanna argue, the AI Con.
Worse still, AI could strike at the heart of capitalist stability: the relatively well-off middle class. Skilled workers may be replaced or displaced. These workers – once somewhat protected – might find themselves part of the growing precariat.
Capitalists who want to replace human labor will watch AI’s costs carefully. Automation progresses along a six-level scale:
1 = no AI
2 = AI assists work
3 = parts of work done by AI
4 = some decisions made by AI
5 = almost all decisions made by AI
6 = AI runs everything.
One might think alarm bells would ring: what is our labor worth if it can be replaced by AI?
Historically, rapid technological change under capitalism creates a “reserve army” of the unemployed and underemployed – what Marx called the industrial reserve army of labor.
In cities, this looks like people collecting cans for a few cents each. They’ve lost jobs to automation or AI and face what Angus Deaton and Anne Case call “Deaths of Despair.”
Even temporary job loss is dangerous today, with dismantled welfare states and growing inequality. This surplus population can be used to discipline still-employed workers – keeping their demands and power in check.
The automation of “cognitive work” could trigger a proletarianization of the professional–managerial class – a downward slide into what Guy Standing calls the “precariat”.
Even if some workers re-skill, transitions are rarely smooth. Especially under neoliberalism’s “the free market will fix it” ideology, instabilities grow, inviting today’s surveillance capitalism to step in.
The existence of a middle class has long been used to refute capitalism’s tendency toward polarization. AI may change that.
AI could be a significant source of instability over the coming decades. It might hit the middle class hard – those who’ve enjoyed autonomy and decent wages.
If they’re replaced or deskilled, their alignment with capital could shift. Historically, the middle class has grown or stabilized. AI might end that.
AI-based deskilling could erode the advantage of specialists, making once-valuable qualifications obsolete. The wage gap between skilled and precarious workers could shrink – creating a new form of solidarity.
Still, most skilled workers – like all workers – are wage-dependent. And they, too, are disposable under capital.
Most recently, AI’s poisoned record was highlighted by the 2023 Hollywood writers’ strike, which lasted 148 days. It symbolized growing global resistance to AI.
Many have realized: AI won’t just change work tools; it changes the very structures and rules of the workplace. Key questions now emerge:
- Who determines the direction of this change?
- Whose voices are silenced?
- Are workers just passive observers?
- Do workers and unions have the opportunity – and responsibility – to shape AI’s future?
AI resistance is already happening, across industries and continents. Though different in form and motive, these movements share a critical stance toward AI’s pace, purpose, and power structures.
Resistance to new technologies is nothing new. From the 19th-century Luddites, to today’s debates over workplace surveillance and algorithmic discrimination, resistance is part of tech’s history.
While governments and corporations invest heavily in AI, worker and public resistance is rising.
People, unions, and civil society raise objections to how AI is designed, used, and regulated. AI resistance takes many forms:
- Open or subtle
- Organized or spontaneous
- Individual or collective
- Liberating (against the system) or reform-oriented (within the system)
From a worker’s perspective, resistance targets not just AI itself, but the social structures determining how AI is deployed. AI can reinforce domination – especially in workplaces, where management holds power.
Resistance manifests in many ways: public protests, lawsuits, digital subversion, sabotage, boss-napping, critical scholarship, and grassroots advocacy.
Historically, workers sabotaged machines. Legend has it that in early 20th-century France, saboteurs threw wooden shoes (sabots) into gears. Today, data poisoning is a modern form of sabotage: workers subtly alter data so that AI systems learn incorrect patterns, resisting the unauthorized use of their data.
Technical resistance aside, classic tactics like strikes remain powerful. The 2023 Hollywood writers’ strike demanded contractual protections against AI in creative work – shutting down a major industry for nearly five months.
There are also regulatory approaches. The EU’s AI Act, for example, bans systems designed to manipulate people in targeted ways.
Examples of AI resistance abound:
- Protests over data centers’ environmental impact
- Tech worker opposition to military AI
- UK outrage after the 2020 automated A-level exam grading debacle
AI resistance is most visible in six areas:
- Creative industries
- Migration and border control
- Medical AI
- Higher education
- Defense and security
- Environmental activism
Unions and civil society are central to this resistance. The public must demand accountability. Five key reasons for AI resistance:
- Socio-economic concerns – fear of job losses and wage cuts.
- Ethical problems – opacity, bias, and discrimination.
- Safety risks – e.g., faulty diagnostics in healthcare.
- Threats to democracy – large-scale manipulation of elections.
- Environmental impact – CO₂ emissions from training large AI models.
AI resistance is vital. It exposes societal, labor, and environmental concerns hidden – or intentionally erased – in technical debates.
By sharing resistance stories, workers can co-develop standards and safeguards, ensuring AI respects dignity, justice, and sustainability.