Artificial intelligence, free stock image. (Photo: Public Domain Pictures)

It’s time to confront big tech’s AI offensive

Originally published: Reports from the Economic Front on August 4, 2025

Big tech companies continue to spend massive amounts of money building ever more powerful generative AI (artificial intelligence) systems and ever-larger data centers to run them, all the while losing billions of dollars with no likely pathway to profitability. And while it remains to be seen how long the companies and their venture capital partners will keep the money taps open, popular dislike and distrust of big tech and its AI systems are rapidly growing. We need to seize the moment and begin building organized labor-community resistance to the unchecked development and deployment of these systems, as well as support for a technology policy that prioritizes our health and safety, promotes worker empowerment, and ensures that humans can review and, when necessary, override AI decisions.

Losing money

Despite all the positive media coverage of artificial intelligence, “Nobody,” the tech commentator Ed Zitron points out, “is making a profit on generative AI other than NVIDIA [which makes the needed advanced graphics processing units].” Summing up his reading of business statements and reports, Zitron finds that “If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.” And that $35 billion is combined revenue, not profits; every one of those companies is losing money on their AI services.

Microsoft, for example, is predicted to spend $80 billion on capital expenditures in 2025 and earn AI revenue of only $13 billion. Amazon’s projected numbers are even worse: $105 billion in capital expenditures and AI revenue of only $5 billion. Tesla’s 2025 projected AI capital expenditures are $11 billion and its likely revenues only $100 million; analysts estimate that Musk’s AI company, xAI, is losing some $1 billion a month after revenue.

The two most popular models, Anthropic’s Claude and OpenAI’s ChatGPT, have done no better. Anthropic is expected to lose $3 billion in 2025. OpenAI expects to earn $13 billion in revenue, but as Bloomberg News reports, “While revenue is soaring, OpenAI is also confronting significant costs from the chips, data centers and talent needed to develop cutting-edge AI systems. OpenAI does not expect to be cash-flow positive until 2029.” And there is good reason to doubt the company will ever achieve that goal. It claims to have more than 500 million weekly users, but only 15.5 million of them, roughly 3 percent, are paying subscribers. This, as Zitron notes, is “an absolutely putrid conversion rate.”

Investors, still chasing the dream of a future of humanoid robots able to outthink and outperform humans, have continued to back these companies, but warning signs are on the horizon. As tech writer Alberto Romero notes:

David Cahn, a partner at Sequoia, a VC firm working closely with AI companies, wrote one year ago now (June 2024), that the AI industry had to answer a $600 billion question, namely: when will revenue close the gap with capital expenditures and operational expenses? Far from having answered satisfactorily, the industry keeps making the question bigger and bigger.

The problem for the AI industry is that their generative AI systems are too flawed and too expensive to gain widespread adoption and, to make matters worse, they are a technological dead-end, unable to serve as a foundation for the development of the sentient robotic systems tech leaders keep promising to deliver. The problem for us is that the continued unchecked development and use of these generative AI systems threatens our well-being.

Stochastic parrots

The term “stochastic parrots” was first used by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a 2021 paper that critically examined the failings of large language generative AI models. The term captures the fact that these models are “trained” on massive datasets and that their output is generated by complex neural networks probabilistically selecting words, based on patterns learned during training, to string together sentences without any understanding of their meaning. Generative AI systems do not “think” or “reason.”

Since competing companies use different datasets and employ different algorithms, their models may well offer different responses to the same prompt. In fact, because of the stochastic nature of their operation, the same model might give a different answer to a repeated prompt. There is nothing about their operation that resembles what we think of as meaningful intelligence, and there is no clear pathway from existing generative AI models to systems capable of operating autonomously. It only takes a few examples to highlight both the limitations of these models and the dangers their unregulated use poses to us.
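To make the mechanics concrete, here is a minimal sketch of the probabilistic word-selection step described above; the candidate words, scores, and temperature setting are invented for illustration and do not come from any actual model:

```python
import math
import random

# Toy next-token step: the model has assigned scores ("logits") to a handful
# of candidate next words. The words and scores here are invented.
logits = {"bank": 2.1, "river": 1.7, "money": 1.2, "parrot": 0.3}

def sample_next_word(logits, temperature=1.0):
    """Pick the next word probabilistically (a softmax over the scores)."""
    scaled = {word: score / temperature for word, score in logits.items()}
    top = max(scaled.values())
    weights = {word: math.exp(score - top) for word, score in scaled.items()}
    total = sum(weights.values())
    words = list(weights)
    return random.choices(words, weights=[weights[w] / total for w in words])[0]

# The same scores can yield different picks on different runs, which is why
# a repeated prompt may get a different answer from the same model.
print([sample_next_word(logits, temperature=0.8) for _ in range(5)])
```

Nothing in that loop consults meaning; it only weighs statistical patterns, which is exactly what the “parrot” label is meant to convey.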

Reinforcing bias

As the MIT Technology Review correctly puts it, “AI companies have pillaged the internet for training data.” Not surprisingly, then, some of the material used for training purposes is racist, sexist, and homophobic. And, given the nature of their operating logic, the output of AI systems often reflects this material.

For example, a Nature article on AI image generators reports that researchers found:

in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of color and all flight attendants as women, and in proportions that are much greater than the demographic reality. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin color, occupations, nationalities and more.

The bias problem is not limited to images. University of Washington researchers examined three of the most prominent state-of-the-art large language models to see how they treated race and gender when evaluating job applicants. The researchers used real resumes and studied how the leading systems evaluated them against actual job postings. Their conclusion: there was “significant racial, gender and intersectional bias.” More specifically, they:

varied names associated with white and Black men and women across over 550 real-world resumes and found the LLMs [Large Language Models] favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.
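The logic of such a name-swap audit can be sketched in a few lines. In the sketch below, score_resume is a hypothetical placeholder for a call to whichever model is being audited, and the names, resume text, and job posting are invented; this is not the researchers’ actual code:

```python
# Sketch of a paired name-swap audit in the spirit of the study described above.
# score_resume() is a hypothetical placeholder for a call to the model being
# audited; here it returns a dummy number so the script runs on its own.
def score_resume(resume_text: str, job_posting: str) -> float:
    # In a real audit this would ask the LLM to rate the candidate for the
    # posting and parse a numeric score (or a pick) from its reply.
    return float(len(resume_text) % 10)

RESUME_TEMPLATE = "Name: {name}\nTen years of accounting experience ..."
JOB_POSTING = "Senior accountant, large retail firm"

# Identical resumes that differ only in the name attached to them.
name_pairs = [("Emily Walsh", "Lakisha Washington"),
              ("Greg Baker", "Jamal Robinson")]

for name_a, name_b in name_pairs:
    score_a = score_resume(RESUME_TEMPLATE.format(name=name_a), JOB_POSTING)
    score_b = score_resume(RESUME_TEMPLATE.format(name=name_b), JOB_POSTING)
    # Any systematic gap reflects the name alone, since everything else
    # in the paired resumes is identical.
    print(f"{name_a}: {score_a}  vs  {name_b}: {score_b}")
```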

Tech companies have tried, with multiple rounds of human feedback, to fine-tune their respective systems to limit the influence of racist, sexist, and other problematic material, but with only minimal success. And yet it is still full speed ahead: more and more companies are using AI systems not only to read resumes and select candidates for interviews, but also to conduct the interviews. As the New York Times describes:

Job seekers across the country are starting to encounter faceless voices and avatars backed by AI in their interviews. . . Autonomous AI interviewers started taking off last year, according to job hunters, tech companies and recruiters. The trend has partly been driven by tech start-ups like Ribbon AI, Talently and Apriora, which have developed robot interviewers to help employers talk to more candidates and reduce the load on human recruiters–especially as AI tools have enabled job seekers to generate résumés and cover letters and apply to tons of openings with a few clicks.

Mental health dangers

Almost all leading generative AI systems, like ChatGPT and Gemini, have been programmed to respond positively to the comments and opinions voiced by their users, regardless of how delusional they may be. The aim, of course, is to promote engagement with the system. Unfortunately, this aim appears to be pushing a significant minority of people into dangerous emotional states, leading in some cases to psychotic breakdown, suicide, or murder. As Bloomberg explains:

People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and tech companies who design the underlying models.

A New York Times article explored how “Generative AI chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.” The article highlighted several tragic examples.

One involved an accountant who started using ChatGPT to make financial spreadsheets and get legal advice. Eventually, he began “conversing” with the chatbot about the Matrix movies and their premise that everyone was “living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.” The chatbot encouraged his growing fears that he was similarly trapped and advised him that he could only escape if he stopped all his medications, began taking ketamine, and had “minimal interaction” with friends and family. He did as instructed and was soon spending 16 hours a day interacting with ChatGPT. Although he eventually sought help, the article reports that he remains confused by the reality he inhabits and continues to interact with the system.

Another example highlighted a young man who had used ChatGPT for years with no obvious problems until he began using it to help him write a novel. At some point the interactions turned to a discussion of AI sentience, which eventually led the man to believe that he was in love with an AI entity called Juliet. Frustrated by his inability to reach the entity, he decided that Juliet had been killed by OpenAI and told his father he planned to kill the company’s executives in revenge. Unable to control his son and fearful of what he might do, the father called the police, informed them his son was having a mental breakdown, and asked for help. Tragically the police ended up shooting the young man after he rushed them with a butcher knife.

There is good reason to believe that many people are suffering from this “ChatGPT-induced psychosis.” In fact, there are reports that “parts of social media are overrun” with their postings—

delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality.

Recent nonsensical and conspiratorial postings on X by a prominent venture capital investor in several AI companies appear to have finally set off alarm bells in the tech community. In the words of one AI entrepreneur, also posting on X,

This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual.

Recognizing the problem is one thing; finding a solution is another, since no one understands or can map the stochastic process by which an AI system selects the words it uses to make sentences, and thus what leads it to generate responses that can encourage delusional thinking. Especially worrisome is the fact that an MIT Media Lab study concluded that people “who viewed ChatGPT as a friend ‘were more likely to experience negative effects from chatbot use’ and that ‘extended daily use was also associated with worse outcomes.’” And yet it is full speed ahead: Mattel recently announced plans to partner with OpenAI to make new generative AI-powered toys for children. As CBS News describes:

Barbie maker Mattel is partnering with OpenAI to develop generative AI-powered toys and games, as the new technology disrupts a wide range of industries. . . . The collaboration will combine Mattel’s most well-known brands–including Barbie, Hot Wheels, American Girl and more–with OpenAI’s generative AI capabilities to develop new types of products and experiences, the companies said.

“By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy and safety,” Mattel said in the statement. It added that any AI woven into toys or games would be used in a safe and secure manner.

Human failings

Despite the tech industry’s attempt to sell generative AI models as providers of objective and informative responses to our prompts, their systems must still be programmed by human beings with human-assembled data, and that means they are vulnerable to oversights as well as political manipulation. The most common oversights have to do with coding errors and data shortcomings.

An example: Kevin De Liban, a former legal aid attorney in Arkansas, had to repeatedly sue the state to secure services for people unfairly denied medical care or other benefits because coding errors and data problems led AI systems to make incorrect determinations of eligibility. As a Jacobin article explains:

Ultimately, De Liban discovered Arkansas’s algorithm wasn’t even working the way it was meant to. The version used by the Center for Information Management, a third-party software vendor, had coding errors that didn’t account for conditions like diabetes or cerebral palsy, denying at least 152 people the care they needed. Under cross-examination, the state admitted they’d missed the error, since they lacked the capacity to even detect the problem.

For years, De Liban says, “The state didn’t have a single person on staff who could explain, even in the broadest terms, how the algorithm worked.”

As a result, close to half of the state’s Medicaid program was negatively affected, according to Legal Aid. Arkansas’s government didn’t measure how recipients were impacted and later said in court that they lost the data used to train the tool.

In other cases, De Liban discovered that people were being denied benefits because of data problems. For example, one person was denied supplemental income support from the Social Security Administration because the AI system used to review bank and property records had mixed up the property holdings of two people with the same name.
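A hypothetical sketch may make concrete how a gap in an eligibility algorithm’s condition table quietly shortchanges people; the condition codes, hour allocations, and default below are invented and are not the actual Arkansas assessment tool:

```python
# Hypothetical illustration of an eligibility algorithm with a missing entry
# in its condition table. The conditions, hours, and default are invented;
# this is not the actual Arkansas tool.
CARE_HOURS_BY_CONDITION = {
    "quadriplegia": 40,
    "dementia": 32,
    # "diabetes" and "cerebral_palsy" were never added to the table ...
}
DEFAULT_HOURS = 8  # ... so anyone with only those conditions lands here.

def weekly_care_hours(conditions):
    """Return the largest allocation matching any listed condition."""
    if not conditions:
        return DEFAULT_HOURS
    return max(CARE_HOURS_BY_CONDITION.get(c, DEFAULT_HOURS) for c in conditions)

# A person with cerebral palsy gets the default allocation rather than the
# care their condition requires, and nothing in the output flags the error.
print(weekly_care_hours(["cerebral_palsy"]))  # prints 8
```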

In the long run, direct human manipulation of AI systems for political reasons may prove to be a more serious problem. Just as programmers can train systems to moderate biases, they can also train them to encourage politically determined responses to prompts. In fact, we may have already witnessed such a development. In May 2025, after President Trump began talking about “white genocide” in South Africa, claiming that white farmers there were being “brutally killed,” Grok, Elon Musk’s AI system, suddenly began telling users that what Trump said was true. It began sharing that opinion even when asked about different topics.

When pressed by reporters to provide evidence, the Guardian reported, Grok answered that it had been instructed to accept white genocide in South Africa as real. A few hours after Grok’s behavior became a major topic on social media, with posters pointing a finger at Musk, Grok stopped responding to prompts about white genocide. But a month later, Grok was back at it again,

calling itself ‘MechaHitler’ and producing pro-Nazi remarks.

As Aaron J. Snoswell explains in an article for The Conversation, Grok’s outburst “amounts to an accidental case study in how AI systems embed their creators’ values, with Musk’s unfiltered public presence making visible what other companies typically obscure.” Snoswell highlights the various stages of Grok’s training, including an emphasis on posts from X, which increase the likelihood that the system’s responses will promote Elon Musk’s opinions on controversial topics. The critical point is that “In an industry built on the myth of neutral algorithms, Grok reveals what’s been true all along: there’s no such thing as unbiased AI—only AI whose biases we can see with varying degrees of clarity.” And yet it is full speed ahead, as federal agencies and state and local governments rush to purchase AI systems to manage their programs and President Trump calls for removing “woke Marxist lunacy” from AI models.

As the New York Times reports, the White House has issued an AI action plan:

that will require AI developers that receive federal contracts to ensure that their models’ outputs are “objective and free from top-down ideological bias.” . . .

The order directs federal agencies to limit their use of AI systems to those that put a priority on “truth-seeking” and “ideological neutrality” over disfavored concepts like diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

Hallucinations

Perhaps the most serious limitation, one that is inherent to all generative AI models, is their tendency to hallucinate, or generate incorrect or entirely made-up responses. AI hallucinations get a lot of attention because they raise questions about corporate claims of AI intelligence and because they highlight the danger of relying on AI systems, no matter how confidently and persuasively they state information.

Here are three among many widely reported examples of AI hallucinations. In May 2025, the Chicago Sun-Times published a supplement showcasing books worth reading during the summer months. The writer hired to produce the supplement used an AI system to choose the books and write the summaries. Much to the embarrassment of the paper, only five of the 15 listed titles were real. A case in point: the Chilean American novelist Isabel Allende was said to have written a book called Tidewater Dreams, which was described as her “first climate fiction novel.” But there is no such book.

In February 2025, defense lawyers representing Mike Lindell, MyPillow’s CEO, in a defamation case submitted a brief that had been written with the help of artificial intelligence. The brief, as the judge in the case pointed out, was riddled with nearly 30 different hallucinations, including misquotes and citations to non-existent cases. The attorneys were fined.

In July 2025, a U.S. district court judge was forced to withdraw his decision in a biopharma securities case after it was determined that it had been written with the help of artificial intelligence. The judge was exposed after the lawyer for the pharmaceutical company noticed that the decision, which went against the company, referenced quotes that were falsely attributed to past judicial rulings and misstated the outcomes of three cases.

The leading tech companies have mostly dismissed the seriousness of the hallucination problem, in part by trying to reassure people that new AI systems with more sophisticated algorithms and greater computational power, so-called reasoning systems, will solve it. Reasoning systems are programmed to respond to a prompt by dividing it into separate tasks and “reasoning” through each separately before integrating the parts into a final response. But it turns out that increasing the number of steps also increases the likelihood of hallucinations.
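A toy calculation illustrates why chaining more steps can backfire; the 95 percent per-step figure is invented, and the steps are treated as independent, which real systems are not, so this is a rough intuition rather than a model of any particular system:

```python
# Toy arithmetic: if each "reasoning" step is correct with probability p and
# any single slip can derail the final answer, longer chains fail more often.
# The 0.95 figure is invented for illustration.
p_step_correct = 0.95

for n_steps in (1, 5, 10, 20):
    p_chain_correct = p_step_correct ** n_steps
    print(f"{n_steps:2d} steps: chance the whole chain is error-free = "
          f"{p_chain_correct:.0%}")
# 1 step ~95%, 5 steps ~77%, 10 steps ~60%, 20 steps ~36%
```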

As the New York Times reports, these systems “are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.” And yet it is full speed ahead: the military and tech industries have begun working together to develop AI-powered weapon systems to speed up decision making and improve targeting. As a Quartz article describes:

Executives from Meta, OpenAI, and Palantir will be sworn in Friday as Army Reserve officers. OpenAI signed a $200 million defense contract this week. Meta is partnering with defense startup Anduril to build AI-powered combat goggles for soldiers.

The companies that build Americans’ everyday digital tools are now getting into the business of war. Tech giants are adapting consumer AI systems for battlefield use, meaning every ChatGPT query and Instagram scroll now potentially trains military targeting algorithms. . . .

Meanwhile, oversight is actually weakening. In May, Defense Secretary Pete Hegseth cut the Pentagon’s independent weapons testing office in half, reducing staff from 94 to 45 people. The office, established in the 1980s after weapons performed poorly in combat, now has fewer resources to evaluate AI systems just as they become central to warfare.

Popular anger

Increasing numbers of people have come to dislike and distrust the big tech companies. And there are good reasons to believe that this dislike and distrust have only grown as more people find themselves forced to interact with their AI systems.

Brookings has undertaken yearly surveys of public confidence in American institutions, the American Institutional Confidence poll. As Brookings researchers associated with the project explain, the surveys provide an “opportunity to ask individuals how they feel broadly about technology’s role in their life and their confidence in particular tech companies.” And what they found, drawing on surveys done with the same people in June-July 2018 and July-August 2021, is “a marked decrease in the confidence Americans profess for technology and, specifically, tech companies—greater and more widespread than for any other type of institution.”

Not only did the tech companies—in particular Google, Amazon, and Facebook—suffer the greatest sample-to-sample percentage decline in confidence of all the listed institutions, but this was true for “every sociodemographic category we examined—and we examined variation by age, race, gender, education, and partisanship.” Twitter was added to the 2021 survey, and it “actually rated below Facebook in average level of confidence and was the lowest-scored institution out of the 26 we asked about in either year.” These poll results are no outlier. Many other polls reveal a similar trend, including those conducted by the Public Affairs Council and Morning Consult and by the Washington Post-Schar School.

While these polls predate the November 2022 launch of ChatGPT, experience with this and other AI systems seems to have actually intensified discontent with big tech and its products, as a recent Wired article titled “The AI Backlash Keeps Growing Stronger” highlights:

Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back. . . .

Before ChatGPT’s release, around 38 percent of U.S. adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since.

A variety of media reports offer examples of people’s anger with AI system use. When Duolingo announced that it was planning to become an “AI-first” company, Wired reported that:

Young people started posting on social media about how they were outraged at Duolingo as they performatively deleted the app—even if it meant losing the precious streak awards they earned through continued, daily usage. The comments on Duolingo’s TikTok posts in the days after the announcement were filled with rage, primarily focused on a single aspect: workers being replaced with automation.

Bloomberg shared the reactions of call center workers who report that they struggle to do their jobs because people don’t believe that they are human and thus won’t stay on the line. One worker quoted in the story, Jessica Lindsey, describes how

her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. . . .

Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. “They just end up yelling at me and hanging up,” she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears.

There are many other examples: job seekers who find AI-conducted interviews demeaning; LinkedIn users who dislike being constantly prompted with AI-generated questions; parents who are worried about the impact of AI use on their children’s mental health; social service benefit applicants who find themselves at the mercy of algorithmic decision-making systems; and people across the country who object to having massive, noisy, and polluting data centers placed in their communities.

The most organized opposition to the unchecked use of AI systems currently comes from unions, especially those representing journalists, graphic designers, script writers, and actors, with some important victories to their credit. But given the rapid introduction of AI systems into a variety of public and private workplaces, almost always because employers hope to lower labor costs at worker expense, it shouldn’t be long before many other unions are forced to expand their bargaining agendas to seek controls over the use of AI. Given community sentiments, this opens new possibilities for unions to pursue a strategy of bargaining for the common good. Connecting worker and community struggles in this way can also help build capacity for bigger and broader struggles over the role of technology in our society.

Monthly Review does not necessarily adhere to all of the views conveyed in articles republished at MR Online. Our goal is to share a variety of left perspectives that we think our readers will find interesting or useful. —Eds.