In the age of live-streamed genocide, many things that once seemed urgent, or at least worth a few moments of your time, have receded into the background; chores to be completed as illusions burn in the spreading fire. And yet, my ‘beat’, as journalists once said long ago, is what’s marketed as ‘AI’ or, more precisely, the lies used by the software wing of capitalism to convince us they’re building intelligence.
Jacobin magazine, which describes itself as “a leading voice of the American left, offering socialist perspectives on politics, economics, and culture” (shall we add modesty to its list of supposed accomplishments?), is little more than a distraction. After all, it was in Jacobin that we were presented with an essay by Ben Burgis, ostensibly about degrowth, that was actually an amateur musing on the supply chains of bananas. Jacobin is, in the grand and indeed even small scheme of things, not worth much thought, but a recently published article by Holly Buck and Matt Huber titled “AI-Driven Worker Displacement Is a Serious Threat” is so absurd, so poorly structured (like a castle made of mismatched Legos), and so reliant on tech industry propaganda that it offended me into a response, a return to my beat.
Once more into the breach.
Of the essay’s two authors, Holly Buck and Matt Huber (hereafter referred to as Buck and Huber, which sounds like an ambulance-chasing law firm whose office is in a decaying strip mall, next to the Chick-fil-A), I’m most familiar with Matt Huber. Huber came to my attention in 2023 when I read his essay (alas, also published in Jacobin – apparently mendacity, like misery, loves company) titled “The Problem with Degrowth”. This was a strange article because, contrary to its title, it wasn’t about actually existing degrowth as a school of thought but rather a cartoon version of John Bellamy Foster that Huber conjured to chase, like Wile E. Coyote after the Road Runner, with a similarly unsuccessful outcome.
At the time, in 2023, I wrote a piece about Huber’s errors; well, wait, I wrote ‘errors’ but what I meant is dishonesty, a critical distinction. To set the scene for this autopsy of Buck and Huber’s approach to ‘AI’, here’s an excerpt from my notes about that 2023 piece:
Huber’s argument is very simple and can be broken down into three parts:
- Marxism (classically) argues that once productive forces are liberated from capitalist constraints, true human flourishing can begin
- According to Huber, citing an article by John Bellamy Foster, degrowthers’ counter-argument is that we have reached or exceeded the Earth’s capacity for absorbing industrial activity
- Huber’s position is that degrowthers are wrong about limits and, in any event, further development and expansion of productive capacity are required to address climate change (for example, more nuclear plants, heat pumps, etc.)
Huber’s article is ostensibly about the argument of degrowthers generally and yet is really focused on countering the arguments of one person: John Bellamy Foster. Foster is mentioned 14 times in Huber’s piece. Is Foster’s argument representative of the degrowth position overall? Maybe, but we can’t tell from Huber’s piece.
[…]
From this two-year-old essay (and most other Jacobin articles) we can discern a tactic familiar to anyone who has tried to debate a stubborn toddler: I will pretend your evidence and point of view don’t matter.
We see this in action in the recent essay on ‘AI’. Here is how Buck and Huber begin:
By many estimates, the increasing use of artificial intelligence is set to produce significant job losses. The prospect of serious disruption demands that we start formulating egalitarian policy solutions right now.
Creeping anxiety about AI-driven job loss has spilled into public consciousness.
A decade ago, there were conversations at Silicon Valley house parties about universal basic income as a fix for the impending wave of automation. A year ago, computer scientists began elaborating their predictions not just on the open-access archive arXiv but as elegantly formatted self-standing websites, such as Situational Awareness (recommended by Ivanka Trump) and Gradual Disempowerment, followed by AI 2027 (read by J. D. Vance).
Invoking Ivanka Trump and J.D. Vance, whose only area of expertise is in evil, as sources on computer technology is like asking Matt Huber about degrowth: you should know the opinions offered, no matter how wordily presented, will be wrong.
We can go further to mark the lower layer: Situational Awareness (which I analyzed – see references below), praised by Ivanka, promotes the notion that Large Language Models, statistical text calculators, are on the verge of surpassing human cognition and taking over the world. A grift promoted by a grifter. It’s revealing that Buck and Huber chose to build their house on such a dubious foundation.
Revealing, yes but what is uncovered?
…
In the 1990s, the hydrocarbon cartel, facing growing concern about climate change, altered its propaganda tactics. Instead of stubbornly denying the role of fossil fuels in heating the atmosphere, the industry, with the eager help of the capitalist media class, began a campaign of ‘presenting both sides’. Every time a climate scientist appeared on television or was quoted in print media, an ‘alternative’ view was given equal time. As propaganda techniques go, this is particularly effective because it makes what should be clear murky. Most of us aren’t experts in climate science and so, pressed for time or uninterested in pursuing claims to their sources, we are convinced – or lulled into acquiescence – through the deliberate use of obfuscation.
This came to mind as I read Buck and Huber’s ‘AI’ article, which employs an identical technique: criticisms are presented only to be quickly hand-waved away.
Consider this excerpt, as an example:
Another serious left critique of the AI job displacement threat comes from Aaron Benanav, whose 2020 book, Automation and the Future of Work, explains that rates of job creation slow as economic growth decelerates, and that this, rather than technology-induced job destruction, is what has depressed the global demand for labor over the past fifty years. The main story, he argues, is economic stagnation due to deindustrialization. In a recent New York Times op-ed, Benanav notes that the productivity gains from generative AI have been limited, that it’s hard to see how it would create sweeping improvements for core services, and that its advancements appear to be already slowing.
Benanav’s analysis – based on a careful review of real-world uses and supported, in the years since the 2020 release of ‘Automation and the Future of Work’, by the reports of people working in companies across the globe (the site Pivot to AI and Ed Zitron’s detailed reporting on the state of the ‘AI’ business are two examples of the vital, up-to-the-minute, grounded work being done) – is dismissed with a few sentences by Buck and Huber:
While we agree with some of this — economic stagnation needs to be addressed as a broader underlying issue — it would be a mistake to deny AI progress just because capitalists always hype their products, or because they haven’t been able to monetize the achievements yet. Moreover, despite general stagnation (particularly for the working class), capitalist profitability has been substantially restored since the economic crisis of the 1970s, and some of the most profitable companies today are investing heavily in AI.
“It would be a mistake”, Buck and Huber advise, “to deny AI progress just because capitalists always hype their products”. This is like being told, accurately, that building ever taller ladders is not a method for reaching the moon and replying ‘well, let’s not deny progress on the lunar ladder program just because there’s no evidence it’s working, or will ever work’. The point of Benanav’s analysis is that despite the pouring of vast amounts of computing power and capital into ‘AI’, results have not matched marketing. Apparently unfamiliar with the sunk cost fallacy (many supposed Marxists seem to know as much about how businesses actually function as they do about Marxism beyond quotations), Buck and Huber ask us to equate the investment decisions of corporations – owned and managed by the same people who ran to blockchain, the metaverse, and other boondoggles with abandon (Ed Zitron’s essay on business idiots, a topic I’m familiar with from experience, is helpful here) – with a technology’s realistic prospects.
Earlier in the article, Buck and Huber mention, then fail to understand or acknowledge, the implications of a research paper recently released by Apple:
We’ve also seen a counterreaction to [AI created job loss] anxiety emerge. Another recent study finding that open-source developers worked more slowly when using AI tools than when they didn’t, bolstered the position that forecasts of AI progress may be overblown.
It’s odd to describe a research document (one unusually sponsored by a corporation once high on large language models – a signal of a bubble about to burst) as merely part of a “counterreaction”. Apple’s paper received attention because it concludes that reasoning cannot be achieved via ever larger, probability-driven word-assembly systems such as ChatGPT.
Here’s an excerpt from the paper’s abstract:
Recent advancements in Large Language Models (LLMs) have sparked interest in their formal reasoning capabilities, particularly in mathematics. The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade-school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics.
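The paper’s core move can be sketched with a toy example: take one grade-school problem template, generate many surface variants of it (different names, different numbers), and check whether a model’s accuracy survives the cosmetic changes. A genuine reasoner wouldn’t care whose apples are being counted; the paper reports that model performance does vary. The template, names, and numbers below are my own invention for illustration, not drawn from the paper:

```python
import random

# Toy illustration of the perturbation idea behind GSM-Symbolic:
# one problem template, many surface instantiations. The ground-truth
# answer is computed symbolically, so only the model under test
# (not shown here) could get it wrong.
TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on "
            "Tuesday. How many apples does {name} have?")

def make_variant(rng):
    # Vary only cosmetic details; the underlying arithmetic is identical.
    name = rng.choice(["Sofia", "Liam", "Noor", "Mateo"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y  # ground truth, independent of any model
    return question, answer

rng = random.Random(0)  # fixed seed for reproducibility
variants = [make_variant(rng) for _ in range(3)]
for q, a in variants:
    print(q, "->", a)
```

If a system truly reasoned, scoring it across such variants would change nothing; the paper’s finding is that accuracy shifts when only the surface details do.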
In short, while computing performance (speed and scale, for example) has improved, the one thing everyone is selling – machine cognition – has not been achieved and is unlikely to be realized. It was once natural for claims to be vetted as part of assessing scientific and engineering projects (and what is so-called ‘AI’ if not exactly this?) but for Buck and Huber, this is solely a reaction to hype. Although their paragraph ends by acknowledging the ways critical positions have been ‘bolstered’, the duo then move on to offer this assertion in the next passage:
We think that worker displacement by AI is a real problem. And it’s a problem that needs our focus and attention right now — not in ten years or in some distant future. It represents a looming threat but also a political opportunity. It will likely be a salient issue in election cycles in the near term, and the Left needs to be ready with policy proposals to address it.
Note the sleight of hand; there is indeed danger posed by the use of so-called ‘AI’ systems, but it isn’t the replacement of workers by thinking machines capable of doing the labor of warehouse workers, doctors, delivery drivers, and many more jobs besides. The true danger is the marginalization of labor power through the promotion of the idea, relentlessly pushed by capitalists, that workers can be replaced. Waymo, despite creating semi-autonomous vehicles that can operate in bounded spaces, will never replace professional drivers. But its vehicles can be used to diminish the perceived value of these workers, increasing their precarity (even as remote drivers work behind the scenes to control Waymo cars).
…
Cybernetics pioneer Stafford Beer (famous for, among other things, designing the Cybersyn project, which Chilean President Allende – from 1970 until his overthrow by the CIA in 1973 – hoped would create a feedback loop between the people and their government, enabled via locally staged, centrally connected, networked computers) once observed, “the purpose of a system is what it does.”
What is Jacobin’s purpose?
This excerpt answers the question, if unintentionally:
We argue that [job displacement by ‘AI’] is a “right-now” problem. We don’t have robust evidence of massive displacement, but there are plenty of warning signs. Companies like Shopify are sending memos about becoming “AI first” companies where employees will have to justify why head count on projects can’t be replaced by AI, and CEO Marc Benioff of Salesforce — San Francisco’s largest private employer — says that AI now does 30-50 percent of the company’s work. It’s not just Silicon Valley: Ford Motor CEO Jim Farley just declared that “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.
“We don’t have robust evidence…”, Buck and Huber tell us, yet they ask their readers to believe anyway, like gullible tourists watching a Vegas magician’s show. It’s telling that the examples offered (in the case of Salesforce, hilariously and precisely sliced apart by David Gerard) are taken straight from industry marketing statements.
Jacobin’s purpose, judging by what it does, is to convince us that capitalist imperatives are, in fact, socialist. By ignoring the ecological impact of the sprawling data centers built to host ‘AI’ platforms and sowing ‘both sides’ doubt while sprinkling a few Marx quotes here and there (even as they ignore Marx’s concept of the ‘metabolic rift’, which better describes what’s really at stake), Buck and Huber, following a well-trod Jacobin path, present industry propaganda as critique.
At this rate, they might as well become McKinsey consultants. They’ve already mastered the most critical skill: doublespeak.
…
References
AI-Driven Worker Displacement Is a Serious Threat
https://jacobin.com/2025/07/artificial-intelligence-worker-displacement-jobs
The Problem with Degrowth (Matt Huber)
This is the essay in which Huber attempts to besmirch John Bellamy Foster and degrowth overall as an analysis of capitalism’s growth fetish.
https://jacobin.com/2023/07/degrowth-climate-change-economic-planning-production-austerity
John Duncan dissects this in the following video essay:
Degrowth is not Austerity
How to Read AI Hype
In this video, I walk through the document, ‘The Decade Ahead’ by Leopold Aschenbrenner published at the Situational Awareness website Buck and Huber reference. In the document, Aschenbrenner makes the usual bold assertions about ‘AGI’ (artificial general intelligence) equaling and soon, exceeding human cognition. How do you critically read such hype?
Aaron Benanav – ‘Automation and the Future of Work’ in New Left Review
Here is the first part of a two-part essay Benanav wrote for the New Left Review, presenting the arguments that are in his book:
https://newleftreview.org/issues/ii119/articles/aaron-benanav-automation-and-the-future-of-work-1
Ed Zitron – The Era Of The Business Idiot
In this essay, journalist Ed Zitron, who carefully tracks the state of the ‘AI’ business, describes the idiocy of the current business management class. Buck and Huber build their argument on an implied, and sometimes explicit, deference to this class:
https://www.wheresyoured.at/the-era-of-the-business-idiot/
Apple Research – GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
The research paper Buck and Huber mention, then blithely dismiss, not understanding or not wanting to understand its argument that large language models do not reason:
https://machinelearning.apple.com/research/gsm-symbolic
David Gerard – Salesforce: AI agents don’t work — but we’re charging 6% more for AI anyway
Buck and Huber approvingly quote Salesforce’s CEO as a source. Unfortunately for their argument, the ‘agentic AI’ services Salesforce promotes (and which Buck and Huber also promote) do not work:
Stafford Beer (Wikipedia entry)
At the end of this essay, I mention cyberneticist Stafford Beer whose work I suggest getting acquainted with.
https://en.wikipedia.org/wiki/Stafford_Beer
Beer is also prominently discussed in the book ‘The Cybernetic Brain’ by Andrew Pickering, a history of the early British cybernetics researchers:
https://press.uchicago.edu/ucp/books/book/chicago/C/bo8169881.html
Kohei Saito – Marx in the Anthropocene
In the book, Kohei Saito builds on Marx’s recognition of a ‘metabolic rift’ between nature as a whole and human activity, caused by capitalist, profit-driven ‘growth’:
https://www.cambridge.org/core/books/marx-in-the-anthropocene/D58765916F0CB624FCCBB61F50879376#