PSF-Adjacent · Companion Paper

The AI Alibi: How a Self-Reinforcing Narrative Is Displacing Accountability on Both Sides of the Enterprise Technology Relationship

Vikram Bapat, Neeti Gupta, Florian Urmetzer

V6 Draft · Target: MIT Sloan Management Review · IfM, University of Cambridge · 2025–2028

Core Argument

Corporations are deploying AI as rhetorical cover for workforce decisions driven by financial pressure, not demonstrated AI capability. The AI alibi works because it converts a discretionary strategic decision into an environmental condition: workers cannot contest a decision framed as a force of nature. Simultaneously, the vocabulary executives reach for in these announcements is not independently constructed. It is produced and distributed by the hyperscaler ecosystem, which has $600 billion in annual infrastructure investment requiring enterprise commitments to justify it.

THE ALIBI IN ONE SENTENCE

A chosen strategic decision, driven by financial pressure and years of expensive strategic bets that did not pay off, is reframed as a response to an external technological force that arrived and left no alternative.

The Double Displacement

Accountability is displaced at both ends of the enterprise technology relationship simultaneously:

Enterprise side: Executives attribute workforce decisions to AI capability rather than financial pressure. Stock surges. Analysts reward the framing. When legal consequences attach (New York WARN Act), zero companies check the AI box. 59% of hiring managers admit emphasizing AI because it "plays better with stakeholders."

Vendor side: Hyperscalers produce and distribute the vocabulary of AI transformation through partner incentives, consulting alliances, and mandatory adoption programs. $600 billion in annual infrastructure investment generating $25 billion in current revenue requires enterprises to publicly commit to AI transformation in ways that justify the infrastructure spend, whether or not the productivity gains materialize.

Cases

Block / Dorsey
Same CEO, same company, opposite framings of the same kind of decision, separated by 11 months. March 2025: "not trying to hit a specific financial target." February 2026: full AI framing, stock surged 23%. Mizuho analyst: "the vast majority of these cuts were probably cost savings disguised as AI layoffs."
Klarna
Aggressively replaced customer service staff with AI chatbots (2023–2024), claimed system performed work of 700 agents. Customer complaints surged. Now rehiring human agents. Full cycle: alibi → deployment → failure → reversal.
Baker McKenzie
Global law firm citing AI for restructuring. Above the Law asked whether this "gave BigLaw permission to blame AI for mass layoffs." The professional services variant of the alibi.
Accenture
Plans to "exit" staff who cannot be reskilled on AI. Simultaneously holds OpenAI Frontier Alliance partnership. The consulting firm advising on AI transformation is also the partner profiting from the technology being recommended.
Commonwealth Bank
Cut 45 customer service roles citing AI voice bot. Finance Sector Union challenged the claim at tribunal. Decision reversed. The alibi dissolved under legal scrutiny.
Amazon
Among the top firms citing AI for job cuts in 2025. Part of the broader pattern where hyperscalers both produce the AI transformation narrative and execute workforce decisions under its cover.

Paper Structure

Opening: The Block Memos
Two memos from Dorsey, 11 months apart, as the cleanest illustration of the alibi in action.
The Reversals Tell the Story
Klarna, Commonwealth Bank, Careerminds survey (35.6% rehired more than half of AI-attributed cuts). The full cycle from alibi to reversal.
Where the Vocabulary Comes From
The supply-side explanation. Hyperscaler CapEx ($600B), partner incentives, consulting alliances (OpenAI Frontier Alliances with McKinsey, BCG, Accenture, Capgemini). How the narrative ecosystem operates.
What the Evidence Actually Shows
Activity-level AI footprint (Cai et al. 2026 MIT ontology), occupational-level exposure gap (Massenkoff & McCrory), sector-level salary divergence and pipeline hollowing (O'Connor/Burn-Murdoch FT). The three-pattern framing: partial validity, zero-presence overreach, differential burden. NBER CEO survey, HBR executive survey, task-level research.
The Governance Cost
How the alibi erodes organizational capacity to evaluate its own decisions. The feedback loop between public narrative and private evaluation. Connection to PSF mechanism.
The Loop, Closed
How the system displaces accountability at both ends simultaneously and why there is no clean exit.

Key Evidence

Scale of AI-attributed layoffs: 55,000+ U.S. job cuts citing AI in 2025 (12x the 2024 figure). Challenger, Gray & Christmas data.

Admission of framing: 59% of hiring managers admitted emphasizing AI because it "plays better with stakeholders" (Resume.org). Zero companies checked the AI box on NY WARN Act filings.

Anticipation vs. demonstration: 60% of headcount reductions based on anticipated future AI capability, only 2% on demonstrated returns (HBR).

Activity-level footprint: MIT analysis of 13,275 AI applications finds top 20 activities account for 35%+ of all applications, concentrated in content generation. Governance, authorization, and judgment activities appear at or near zero (Cai et al. 2026).

Productivity reality: Acemoglu estimates 0.53–0.66% TFP increase over ten years. METR RCT: developers expected +24%, got −19%. UK government Copilot trial: no measurable productivity improvement.

Reversal rate: 35.6% of companies rehired more than half of AI-attributed cuts (Careerminds). Gartner predicted half would reverse by 2027.

Pipeline damage: Stanford ADP data: early-career workers (22–25) in AI-exposed occupations experienced 13% relative employment decline. FT analysis shows top-decile software salaries up 15%, bottom-decile flat since ChatGPT launch (O'Connor/Burn-Murdoch).

CONNECTION TO PSF

The AI Alibi and PSF share an evidence base but make different arguments. PSF argues AI engagement constitutes proxy metrics that erode evaluative capacity through sincere belief. The AI Alibi argues AI is deployed as rhetorical cover for financial decisions through strategic framing.

The papers converge in the governance cost section: when leadership frames decisions using the alibi, it progressively narrows the organization's capacity to evaluate those decisions on their actual merits. Layoffs justified by AI rhetoric often eliminate precisely the institutional knowledge needed to evaluate whether AI engagement is working. That is PSF's evaluative capacity erosion operating through the alibi's accountability displacement.

The AI Alibi is the macro-level narrative. PSF is the micro-to-meso-level mechanism. The alibi creates the discursive environment in which proxy seduction can operate unchallenged.

Target and Status

Venue: MIT Sloan Management Review. Portal reopened mid-March 2026. Target: 2,500 words, 4,000-word maximum including references.

Co-authors: Vikram Bapat (Cambridge, IfM), Neeti Gupta (Cambridge, IfM, hyperscaler ecosystems), Florian Urmetzer (Cambridge, IfM, supervisor).

Current status: V6 draft complete. V6 updates: three-pattern framing paragraph added to "What the Evidence Actually Shows"; Cai et al. 2026 MIT activity ontology integrated as an activity-level evidence layer; O'Connor/Burn-Murdoch FT salary divergence analysis added, with the pipeline consequence moved up from the conclusion; endnotes 30–31 added.

In March 2025, Block eliminated 931 employees. Jack Dorsey's internal memo was explicit about what the cuts were not: "None of the above points are trying to hit a specific financial target, replacing folks with AI, or changing our headcount cap."1 Eleven months later, in February 2026, Block eliminated more than 4,000 employees, roughly 40 percent of its workforce. This time Dorsey's public framing was total. On X, he compressed the logic to an equation: "100 people + AI = 1,000 people." In his company statement, he wrote that "intelligence tools have changed what it means to build and run a company."2

Between those two announcements, Block's stock had fallen 16 percent year-to-date, Afterpay had accumulated $12.2 billion in writedowns, and Tidal had required a $132 million goodwill impairment. Nothing in Block's underlying business had changed. What changed was the vocabulary Dorsey used to describe the same kind of decision.

Block's stock surged 23 percent in after-hours trading on the February announcement, and Goldman Sachs raised its price target.3 Mizuho's analyst stated that the vast majority of the cuts were probably not related to AI. Block's former head of communications, writing in the New York Times, observed that when you examine the specific cuts (shrinking the policy team, eliminating diversity roles), the reorganization looks like standard cost management.4

A chosen strategic decision, driven by financial pressure and years of expensive bets that did not pay off, was reframed as a response to an external technological force. The framing rewarded the executive, reassured the market, and foreclosed the questions that should follow any large workforce reduction. What failed? Who decided? What alternatives existed?

New York State's updated WARN Act gives employers the option to check a box attributing layoffs to technological innovation or automation. Of 160 companies filing notices after March 2025, including Amazon and Goldman Sachs, zero checked the box.5 The same executives who attributed workforce reductions to AI in press releases and earnings calls declined to make that attribution in a legal document. The public framing is a strategic communication choice, not a causal description.

The Reversals Tell the Story

If AI were genuinely driving the workforce reductions that companies attribute to it, the organizations that acted most aggressively on that premise should be the ones reporting the strongest results. The opposite is happening.

Klarna is the most visible case. After replacing customer service staff with AI chatbots and publicly claiming the system performed the work of 700 agents, customer complaints surged and satisfaction declined. In May 2025, CEO Sebastian Siemiatkowski acknowledged the company "went too far" and that AI had resulted in "lower quality," announcing the rehiring of human agents.6 The AI replacement that had been announced as a structural transformation turned out to be a quality-for-cost trade that customers rejected.

Commonwealth Bank of Australia followed a similar trajectory. CBA cut 45 customer service roles in July 2025, attributing the decision to an AI voice bot that had reduced call volumes. The Finance Sector Union challenged the claim at the Fair Work Commission with data showing that call volumes were actually increasing and that CBA was simultaneously hiring similar roles in India. CBA reversed the decision and admitted an "error."7

These are not isolated cases. A February 2026 Careerminds survey of 600 HR professionals found that 35.6 percent of companies had already rehired more than half the roles they eliminated, with 52.1 percent rehiring within six months. Only 21.4 percent reported AI had fully replaced eliminated roles without operational problems.8 Gartner predicted in February 2026 that by 2027, half of companies attributing headcount reductions to AI would rehire staff for similar functions, often under different job titles.9

The reversals share a common structure. The AI framing allowed the organization to announce a transformation. The transformation did not deliver. The organization corrected without revisiting the original narrative. The accountability that the alibi displaced on the way in did not return on the way out.

A Harvard Business Review study surveying more than 1,000 executives quantifies the mechanism driving these outcomes. Sixty percent had reduced headcount based on AI's anticipated future capability. Two percent acted on demonstrated performance.10 Organizations are making structural workforce decisions based on what the technology might eventually deliver, not on what it has delivered. The alibi does not require the technology to have worked. It requires the expectation to be credible.

Where the Vocabulary Comes From

The 60–2 split raises an obvious question. If the technology has not yet delivered the efficiency gains that justify workforce reduction, why has the framing become so consistent across industries, business models, and actual levels of AI capability?

The answer is supply-side, and it is one that press coverage of AI layoffs has not yet examined. The vocabulary executives reach for in these announcements is not independently constructed in each boardroom. It is produced and distributed by a commercial ecosystem with hundreds of billions of dollars at stake in its acceptance.

The five major hyperscalers (Amazon, Microsoft, Google, Meta, and Oracle) are projected to spend $600 to $690 billion on AI infrastructure in 2026, approximately 75 percent of their total capital expenditure.11 Much of that investment is funded by debt. Hyperscalers raised approximately $108 billion in 2025 alone, with Google issuing a 100-year bond in February 2026 to fund AI infrastructure commitments.12

Yet AI-related services generated only approximately $25 billion in enterprise revenue in 2025. Bain calculates that to justify current capital expenditure, AI needs $2 trillion in annual revenue by decade's end. Best-case forecasts project $1.2 trillion, leaving an $800 billion gap.13

An infrastructure investment of that scale requires enterprises to publicly commit to AI transformation in ways that validate the investment thesis, sustain hyperscaler stock valuations, and justify the commercial relationships that will eventually close the revenue gap. The enterprise AI narrative is not a byproduct of hyperscaler strategy. It is a core deliverable.

The commercial architecture that produces the narrative operates through specific, documented mechanisms. Microsoft increased partner incentives for its Copilot product by approximately 50 percent (covering rebates, pilots, and net new licenses), creating a distribution channel financially rewarded for driving engagement metrics regardless of demonstrated productivity returns. Despite only 15 million paid Copilot seats after two years, representing 3.3 percent of the commercial M365 base, Microsoft claims 90 percent of the Fortune 500 as Copilot customers, generating social-proof pressure on holdouts.14

Salesforce reframed its entire addressable market as a "$6 trillion digital labor market" and positioned AI agents not as software tools but as a "digital workforce," with pricing structured to replace human headcount metrics with agent-seat metrics.15 The commercial model is explicitly designed to make AI engagement look like workforce transformation.

The consulting ecosystem amplifies the vocabulary at the C-suite level. In February 2026, OpenAI announced formal Frontier Alliances with McKinsey, BCG, Accenture, and Capgemini.16 These are revenue-sharing partnerships in which the firms that advise boards on AI transformation strategy hold direct financial relationships with the AI platform vendors whose products they recommend.

The consequences of those partnerships are visible inside the firms themselves. McKinsey's managing partner revealed at CES 2025 that the firm now tracks 25,000 AI agents alongside 40,000 human employees as a single workforce metric, a framing that arrives in client presentations about what organizational transformation looks like.17 Accenture began tracking individual employee AI tool logins as an explicit input for promotion decisions, embedding the narrative inside its own workforce before distributing it to clients.18

Each actor is responding rationally to its own incentive structure. The cumulative effect, however, is a narrative environment in which "AI transformation is inevitable" arrives at the boardroom already validated by the vendor, confirmed by the consulting firm, and benchmarked against the competitor who announced a similar transformation last quarter. When an executive reaches for the AI framing to explain a workforce reduction, the vocabulary has been professionally prepared for precisely that use.

What the Evidence Actually Shows

The announcements examined here share a common structure. Each rests on a real but narrow AI capability: document review, code generation, transaction processing. That narrow capability is then extended into a claim about the organizational function as a whole. The problem is that the activities constituting core professional judgment in each sector, including advising, authorizing, deciding, and negotiating, are precisely where AI has the least footprint. A recent MIT analysis of over 13,000 AI software applications found that the top 20 activities account for more than a third of all applications, concentrated in content generation. The governance and judgment activities that define professional services appear at or near zero.30

The workers who bear the headcount consequences of these announcements are overwhelmingly entry-level and early-career professionals. That is not incidental. Reducing junior intake hollows the pipeline that produces the senior judgment these organizations depend on. Entry-level professionals are not simply cheaper labor doing codified tasks. They are the institution's capacity to develop the tacit judgment that AI cannot yet replicate, maturing through years of supervised practice into the senior professionals who can actually evaluate whether AI outputs are any good.

Financial Times analysis of millions of software job advertisements finds pay growth diverging sharply since late 2022, with top-decile salaries rising 15 percent while bottom-decile salaries remain flat. Software job advertisements overall are growing, consistent with Jevons paradox effects. The pattern is compositional, not aggregate: seniors amplified, juniors squeezed, the pipeline quietly hollowing. None of the announcements examined here discloses that consequence.31

The supply-side architecture explains why the narrative is so consistent. The demand-side evidence explains why it remains detached from operational reality.

An NBER study of approximately 6,000 CEOs and CFOs across four countries found that more than 80 percent of firms report no measurable impact of AI on either employment or productivity over the past three years.19 The PwC 2026 Global CEO Survey of more than 4,400 leaders found 56 percent reporting zero financial benefit from AI, neither revenue gains nor cost reductions.20 McKinsey's own State of AI report identified just 6 percent of organizations as high performers where AI contributes more than 5 percent to EBIT.21

Task-level research does show real gains in specific contexts. A study of 5,172 customer support agents found a 14 to 15 percent increase in issues resolved per hour for workers using AI assistance.22

But organizational translation of those task-level gains proves consistently elusive. The UK government's cross-department Microsoft Copilot trial of 20,000 civil servants found no robust evidence that self-reported time savings translated to improved productivity.23 A METR randomized trial found that experienced open-source developers using AI tools were 19 percent slower than those working without them, while believing AI had made them faster.24

The gap between perceived and measured performance is itself a governance problem. Organizations are making workforce decisions based on what their people believe AI is doing, not what it is measurably delivering.

A Resume.org survey found that 59 percent of hiring managers admitted emphasizing AI in layoff messaging because it "plays better with stakeholders." Only 9 percent reported AI had fully replaced eliminated roles.25 In February 2026, Sam Altman acknowledged at India's AI Impact Summit that companies are "AI washing" by blaming layoffs on AI that would have happened regardless.26 When the primary vendor of the technology confirms the alibi is in use, the pattern has crossed from allegation to acknowledgment.

The Governance Cost

The evidence gap and the perception gap converge on a problem that extends well beyond reputational risk.

When leadership frames strategic decisions as responses to external technological forces, the framing progressively narrows the organization's capacity to evaluate those decisions on their actual merits. If the cause of a workforce reduction is AI, there is no productive question to ask about whether the reduction was the right size, the right scope, or executed in the right sequence. The technology has already answered the question. What remains is implementation.

Committing publicly to an AI-efficiency framing creates pressure on internal teams to surface evidence confirming the narrative. The criteria by which decisions get evaluated shift toward metrics that are legible and AI-attributable. The METR finding (experienced developers 19 percent slower with AI while believing the opposite) illustrates this organizational feedback loop in miniature. The framing shapes what people measure and what they report, not just what they announce.

A Stanford study using ADP payroll records covering millions of U.S. workers found that early-career workers aged 22 to 25 in AI-exposed occupations have experienced a 13 percent relative employment decline since late 2022, even as workers over 30 in the same roles saw growth.27 The mechanism matters. AI approximates the codified knowledge that entry-level workers rely on. Organizations cutting those roles are eliminating the pathway through which the next generation develops the tacit judgment that AI cannot yet replicate.

The layoffs justified by AI rhetoric often eliminate precisely the institutional knowledge needed to evaluate whether the AI engagement is actually working. Vendor relationships structured around inevitability narratives systematically crowd out the internal scrutiny needed to build that judgment.

The organization that has publicly committed to AI transformation, signed a multiyear infrastructure relationship, and reduced its headcount in the name of efficiency has strong structural reasons to avoid finding out that the transformation is not delivering. The alibi protects not just the executive who made the announcement, but the entire organizational apparatus that has been built around it.

The Loop, Closed

Every major technology transition produces a version of this pattern. A capability arrives, gets deployed in the service of existing financial pressures, and the financial pressures get attributed to the capability. The enterprise software wave of the 1990s had its own version, when ERP implementations justified headcount reductions that were largely about cost structure, not software.

What is different now is the scale of the capital commitment and the sophistication of the commercial ecosystem built to validate it. A $600 billion annual infrastructure bet does not sit quietly waiting for the evidence to come in. It generates the narrative conditions for its own justification.

Accountability is displaced at both ends of the enterprise technology relationship simultaneously. The vendor attributes transformation to the technology rather than to the partnership structure and pricing incentives driving engagement. The executive attributes workforce decisions to the transformation rather than to the financial strategy the technology is serving.

Challenger, Gray & Christmas tracked more than 55,000 U.S. job cuts explicitly citing AI across 2025, twelve times the 2024 figure.28 Those 55,000 represent 4.5 percent of the 1.2 million total U.S. job cuts that year.29 The narrative prominence of AI layoffs vastly exceeds their share of actual workforce reductions, and that disproportion is itself the clearest evidence that the AI alibi is doing rhetorical work, not describing a causal relationship.

Klarna found out. Commonwealth Bank found out. The 35.6 percent of companies that have already rehired more than half the roles they eliminated are finding out. The alibi protects nobody in the long run, but it does something worse than failing. It eliminates the organizational capacity to learn from the failure, because the framing has already determined the answer.

Vikram Bapat spent 25 years in product and platform experience leadership at Microsoft and Google, including work on Visual Studio, Android Studio, and Google's advertising infrastructure. He is a PhD researcher at the Institute for Manufacturing, University of Cambridge, where his research examines how organizations evaluate transformative technologies.

Neeti Gupta is a PhD researcher at the Institute for Manufacturing, University of Cambridge, where her research examines AI partnership structures, hyperscaler ecosystems, and the governance of enterprise AI relationships.

Florian Urmetzer is a lecturer and head of the Research and Development Challenges course at the Institute for Manufacturing, University of Cambridge, where he focuses on technology management, open innovation, and digital transformation in industrial contexts.

Endnotes
1. J. Dorsey, internal memo, March 2025; "Jack Dorsey Blamed AI for 4,000 Layoffs. A Former Block Exec Says That's Not the Real Story," Inc., March 2026.
2. J. Dorsey, public statement on X and Block company announcement, Feb. 26, 2026; "Block Lays Off Nearly Half Its Staff Because of AI," CNN Business, Feb. 26, 2026.
3. "Block Shares Soar as Much as 24% as Company Slashes Workforce by Nearly Half," CNBC, Feb. 26, 2026; J. Bersin, "Is Block's Decision to Layoff 40% of Its Workforce a Bellwether or Not?" Josh Bersin, March 2026.
4. "Jack Dorsey's 4,000 Job Cuts at Block Arouse Suspicions of AI-Washing," Bloomberg, March 1, 2026; A. Zamost, "I Worked for Block. Its AI Job Cuts Aren't What They Seem," New York Times, 2026.
5. "Zero Companies Admit AI-Driven Layoffs Under NY Law After Year," TechBuzz, 2026.
6. "Klarna Plans to Hire Humans Again, as New Landmark Survey Reveals Most AI Projects Fail to Deliver," Fortune, May 9, 2025.
7. "Commonwealth Bank of Australia Reverses Move to Replace 45 Jobs with AI," Bloomberg, Aug. 21, 2025; "CBA Reverses AI-Driven Job Cuts, Admits 'Error,'" Information Age, 2025.
8. "AI-Led Layoffs: What HR Leaders Wish They Knew Before Making Job Cuts," Careerminds, 2026.
9. Gartner prediction reported in "Half of AI-Driven Layoffs Will Reverse by 2027," Metaintro, 2026.
10. T.H. Davenport and L. Srinivasan, "Companies Are Laying Off Workers Because of AI's Potential — Not Its Performance," Harvard Business Review, 2026.
11. "Hyperscaler CapEx Hits $600B in 2026," Introl Blog, January 2026; "AI CapEx 2026: The $690B Infrastructure Sprint," Futurum Group, 2026.
12. Introl Blog, "Hyperscaler CapEx Hits $600B in 2026."
13. Bain revenue gap analysis reported in "AI CapEx 2026: The $690B Infrastructure Sprint," Futurum Group, 2026.
14. "Microsoft's AI Money Machine: The Real Economics of Copilot Deployment," Samexpert, 2026.
15. Salesforce, "Building the Agentic Enterprise," Salesforce.com, 2025; J. Eriksvik, "Salesforce Agentforce and the Magical Return of the Seat License," Medium, 2025.
16. "OpenAI Partners with McKinsey, BCG, Accenture, and Capgemini to Push Its Frontier AI Agent Platform," Fortune, Feb. 23, 2026.
17. "Adopt AI or Miss Out on Promotion: The New Rule for Major Global Consulting Firms," Vocal Media, 2026.
18. Ibid.
19. "Thousands of CEOs Just Admitted AI Had No Impact on Employment or Productivity," Fortune, Feb. 17, 2026; "Firm Data on AI," NBER Working Paper 34836, 2026.
20. PwC, "PwC 2026 Global CEO Survey," press release, January 2026.
21. McKinsey & Company, "State of AI 2025," reported in "McKinsey's State of AI 2025: What Separates High Performers from the Rest," CoLab Software, 2025.
22. E. Brynjolfsson, D. Li, and L.R. Raymond, "Generative AI at Work," NBER Working Paper 31161, April 2023, revised 2024.
23. "M365 Copilot Fails to Up Productivity in UK Government Trial," The Register, Sept. 4, 2025.
24. "METR's Study on How AI Affects Developer Productivity," DX Newsletter, 2026.
25. "'AI-Washing' Rises as Companies Blame AI for Layoffs: What to Know," Quartz, 2026.
26. "Sam Altman Says the Quiet Part Out Loud, Confirming Some Companies Are 'AI Washing' by Blaming Unrelated Layoffs on the Technology," Fortune, Feb. 19, 2026.
27. E. Brynjolfsson, N. Chandar, and J.Y. Chen, "Canaries in the Coal Mine? Early Signals from AI's Impact on the Labor Market," Stanford Digital Economy Lab, 2025.
28. "AI Was Behind Over 50,000 Layoffs in 2025 — Here Are the Top Firms to Cite It for Job Cuts," CNBC, Dec. 21, 2025.
29. Ibid.; D. Acemoglu, "The Simple Macroeconomics of AI," Economic Policy 40, no. 121 (2025): 13.
30. A. Cai, I. YeckehZaare, S. Sun, V. Charisi, X. Wang, A. Imran, R. Laubacher, A. Prakash, and T.W. Malone, "Where Can AI Be Used? Insights from a Deep Ontology of Work Activities," arXiv:2603.20619 [cs.AI], March 21, 2026. The analysis covers 13,275 AI software applications classified against a 40,000-activity ontology derived from the U.S. Department of Labor's O*NET database. The top 20 activities account for 35%+ of all applications, concentrated in content generation. Activities involving governance, authorization, and professional judgment appear at 0% or near-0%.
31. S. O'Connor, LinkedIn, March 2026 (lnkd.in/eSfUKEiN); J. Burn-Murdoch, "The AI Shift" newsletter, Financial Times, March 2026. Lightcast job postings data shows software job advertisements growing faster than the wider market since late 2022, consistent with Jevons paradox effects. Top-decile software salaries have risen approximately 15% since ChatGPT's release while bottom-decile salaries remain flat. The pattern is consistent with AI complementing experienced workers while substituting for entry-level functions, generating what O'Connor describes as a collective action problem in talent pipeline investment.
Submission note: Per MIT SMR guidelines, the authors disclose that AI research and drafting tools were used in the preparation of this manuscript, including for evidence gathering, structural organization, and prose drafting. All claims, citations, and arguments have been reviewed, verified, and edited by the authors. Authors are fully responsible for the accuracy of all citations and content.