Augmentation Blues
Responsible AI updates from early February, 2026
I. Agentic AI spooks Labor & Capital
It seems that agentic AI’s version of “there are decades where nothing happens, and there are weeks where decades happen” just happened.
To recap: OpenClaw inadvertently launched the category of consumer agents. Moltbook turned agentic intelligence into a social network. Anthropic released some new plugins for Claude Cowork and ended up wiping nearly a trillion dollars off software and data stocks over two sessions, hammering SaaS incumbents like Salesforce, Workday, and Monday.com in what was dubbed the “SaaSpocalypse.”
Anthropic subsequently improved Cowork’s underlying model, so we now have Claude Opus 4.6, launched with longer-horizon agentic task handling and a 1M-token context window in beta. OpenAI, for its part, released its own agentic coding model, GPT‑5.3-Codex, plus an agentic platform, Frontier. All of this landed on Feb. 5th.
We also learned that Goldman Sachs has been co-building autonomous agents (or, rather, “digital co-workers”) with Anthropic for automating workflows in areas like accounting and compliance, not just for engineering tasks. I wonder, if this is already underway in a highly regulated industry like financial services, how many other companies across industries are engaging in high-touch partnerships with model developers to build bespoke “digital co-workers”?
And then, “Something Big Is Happening,” the essay everyone has been talking about, went viral, consolidating nebulous fears of agentic AI’s possible impending reverberations across the labor market.
To be sure, experts disagree (and signals are legitimately unclear) about the current and future labor impacts of automation, but from my vantage point, the significant labor implications of generative and agentic AI for knowledge work have long been creeping up on us.
Despite the disarming “No, it’s just augmentation!” refrain and its soothingly plausible deniability, the primary promise of agentic AI, from its sellers to its buyers, has of course always been its potential to slash labor costs.
So far, unresolved technical limitations have rendered those promises little more than marketing pitches from frontier model developers desperately trying to justify their stratospheric valuations, and we have seen one false start after another from companies trying to cash in (most notably Klarna, and they’re back at it once again).
But capabilities, in a growing roster of specific contexts, are catching up with those promised intentions. And, importantly, the most disqualifying limitations of generative AI are methodically being overcome or dismissed — or at least are no longer enough of a liability to hold back the next wave of workplace automation. (Or, perhaps more accurately, are becoming less of a perceived liability than the alternative.)
In order for AI’s labor impacts to meaningfully default to augmentation, that goal must be pursued and cultivated, intentionally. It won’t happen by accident or wishful thinking.
Resource: Stanford HAI’s Erik Brynjolfsson gave a thought-provoking interview to the WSJ Leadership Institute on how AI is reshaping the labor market and the importance of augmentation.
This has all amounted to what feels like an actual vibe shift, and U.S. policymakers seem to be awakening to the regulatory potential of this moment. Lawmakers may be concerned about what they’re seeing from the markets, and what that portends for capital; or what they’re hearing about the viral essay on workforce automation, and what that portends for labor; or about their constituents’ anger and fear, and what that portends for their own political futures. Whatever the motivation, expect to see some action (or at least some talk about action) to address the most top-of-mind impacts of AI.
At the same time, all of those agentic-adjacent releases hitting virtually simultaneously at the start of February signaled something else, not about any one launch in particular, but about the collective acceleration: velocity as status.
Weeks like these push the pace of innovation for the entire field, collapsing the importance of every other priority. Enterprising builders “just doing things” (things like OpenClaw and Moltbook) are driving entire industries to move even faster (and break things even harder). Notably, OpenClaw founder Peter Steinberger has now been scooped up by Sam Altman to build agents for OpenAI.
We are firmly entrenched in a race where iteration speed is so clearly outrunning readiness work. Of course there are many unaddressed risks, but the most urgent red flag for many of us remains security, with persistent memory as a meaningful new risk surface for agentic systems, and open agent channels broadening distribution pathways for malicious payloads.
These are solvable problems, and the immediate vulnerabilities can eventually be patched, but that requires time, testing discipline, and deployment patience — all of which the current market dynamic does not reward.
The incentive gradient points the other way. The deeper governance issue is that the market rewards velocity and punishes caution, which means, most importantly for those of us who care about Responsible AI, that the accelerating governance deficit is very real.
II. International AI Safety Report 2026
Into this frenzy dropped the much-anticipated International AI Safety Report.
It’s designed to help policymakers act under uncertainty (“the evidence dilemma”). AI capabilities are moving quickly, as has been noted, while robust risk evidence arrives more slowly. The report’s framing is pragmatic — act too early and you may lock in weak interventions; act too late and serious harms may scale before institutions are ready.
To structure decision-making, it groups frontier AI risk into three buckets: malicious use, malfunctions, and systemic risks, which helps policymakers avoid treating “AI risk” as a single undifferentiated problem.
Systemic effects are already visible in labor and human agency, even if aggregate impacts are unsettled and hard to measure cleanly at economy-wide scale.
The report is clear that there is no silver bullet for safety. Because evaluations do not reliably predict real-world behavior, model internals remain poorly understood, and safeguards can be bypassed, it recommends layered controls rather than dependence on any single test or mitigation.
The entire report is a compelling read. Of particular interest are §1.3 (Capabilities by 2030), §3.3 (Technical safeguards and monitoring), and §3.5 (Societal resilience).
And all of this is the backdrop for this week’s India AI Impact Summit (more on that in the next newsletter).
Here’s what else we’re covering today:
✅ Agentic AI spooks Labor & Capital
✅ International AI Safety Report 2026
Data Centers: Not In My Troposphere
Deepfakes: “The liar’s dividend” allocates infinite doubt
Ads: SponCon has entered the chatbot
RAI Resources
What I’m Reading
III. Data Centers: Not In My Troposphere

SpaceX, with newly acquired xAI along for the ride, has officially filed with the FCC for a massive orbital compute concept of up to one million satellites (66 times as many as are currently in orbit around the planet), framed around space-based AI compute infrastructure.
Meanwhile, local and state resistance to terrestrial data center expansion is intensifying: Data Center Watch reports local communities have blocked or delayed around 20 U.S. projects. WIRED reports NY joining a wave of state pause bills (Georgia, Maryland, Oklahoma, Vermont, Virginia), making this a bipartisan statehouse trend rather than a niche local reaction.
Industry coalitions, on the other hand, are pushing harder for temporary federal preemption of state AI rules, creating a likely state–federal governance collision in 2026. But the underlying policy question is not actually about what level of government makes the decision, or even about “pro- vs anti-data-center.”
Yes, community resistance to data centers is partly about the power usage, the energy price hikes, the extensive freshwater demands for cooling, the overall environmental footprint, the corporate tax breaks, and the lack of long-term job opportunities.
But, it’s also very much about having a tangible vector — one clear fulcrum of impact, a lever of power, a voice of any kind — for affecting the seeming inevitability of a technological wave that feels like it’s taking over our collective lives, jobs, and futures.
IV. Deepfakes: The liar’s dividend allocates infinite doubt
A 15-second fight video of Brad Pitt and Tom Cruise generated with ByteDance’s Seedance 2.0 just went viral for its Hollywood-level quality (though everyone knows that, unfortunately, the only time those two have been on the big screen together was in 1994’s Interview with the Vampire).
This is one example among a growing litany of deepfakes and incidents undermining confidence in the digital information ecosystem at large.
We’re experiencing a transition from “can we detect fakes?” to “can shared reality survive at scale?” This degradation of our shared epistemic confidence creates a massive liar’s dividend where authentic evidence can be dismissed as fake whenever it’s politically convenient. We’ve hit the ambient plausibility crisis phase of synthetic media, and even with the prospect of content labels and watermarks, trust is hanging by a thread.
2024’s relative optimism suggested labeling could restore trust at scale, but real deployment is uneven and partial. Also, disturbingly, research shows that emotional persuasion often survives even after content is identified as synthetic. (h/t The Algorithm from MIT Technology Review)
So, that complicates things.
A technical solution alone won’t fix this. Public discourse is inevitably shifting toward default skepticism and source scrutiny, because provenance signals are proving insufficient in high-volume synthetic environments.
It’s clear that the technical challenge and the epistemic challenge are intertwined. Even when people learn that specific content is fake, the emotional and behavioral effects derived from that content often persist.
It’s no wonder Instagram head Adam Mosseri’s recent Authenticity after Abundance thread on the collective “default to skepticism” reflects this shift in public posture: from trusting what we see to evaluating source and motive first.
Resource: A recent episode of Decoder with Nilay Patel argues that content provenance tooling (like C2PA) is valuable but not sufficient in a high-volume generative media environment with incentive-aligned deception, where it faces hurdles like inconsistent adoption, metadata stripping, and weak user comprehension.
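To make the metadata-stripping hurdle concrete, here is a minimal, purely illustrative Python sketch (this is not real C2PA tooling, and the helper names are hypothetical): a provenance manifest verifies cleanly against the original asset, but a screenshot or platform re-encode silently discards it, leaving downstream viewers with “unverifiable” rather than a clean “real” or “fake.”

```python
# A minimal, hypothetical sketch (not real C2PA tooling; helper names are
# invented for illustration) of why provenance carried as embedded metadata
# often fails to survive ordinary re-sharing.
import hashlib

def sign_asset(pixels: bytes, issuer: str) -> dict:
    """Attach a toy provenance manifest alongside the media bytes."""
    return {
        "pixels": pixels,
        "manifest": {
            "issuer": issuer,
            "content_hash": hashlib.sha256(pixels).hexdigest(),
        },
    }

def reencode(asset: dict) -> dict:
    """Simulate a screenshot, crop, or platform re-encode:
    the pixels survive, but the attached metadata does not."""
    return {"pixels": asset["pixels"], "manifest": None}

def verify(asset: dict) -> str:
    manifest = asset.get("manifest")
    if manifest is None:
        # A missing manifest proves nothing either way: the asset may be
        # authentic-but-stripped or wholly synthetic. That ambiguity is the
        # gap the liar's dividend exploits.
        return "unverifiable"
    ok = manifest["content_hash"] == hashlib.sha256(asset["pixels"]).hexdigest()
    return "verified" if ok else "tampered"

original = sign_asset(b"...camera sensor data...", issuer="NewsWireCam")
print(verify(original))            # -> verified
print(verify(reencode(original)))  # -> unverifiable
```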
V. Ads: SponCon has entered the chatbot
OpenAI began testing clearly labeled ads in ChatGPT for the free and Go tiers, with higher paid tiers excluded, while insisting ads won’t influence model answers.
OpenAI has chosen a cost-per-view model instead of the search-ad standard of cost per click, with rates three times as high as Meta’s CPM and comparable to targeted streaming and premium TV inventory like NFL games, according to Digiday.
Initially, the company sought advertisers prepared to spend at least $250,000, but more recently, according to trade reports, that figure has shifted to as much as $1 million a month. The initial list of advertisers that have said yes reportedly includes agencies representing companies like Williams-Sonoma, Bed Bath & Beyond, Adobe, Ford, Mazda, and Target.
This matters for Responsible AI because monetization design directly shapes interface trust, user autonomy, and disclosure norms.
OpenAI researcher Zoë Hitzig pointed to these ads as her reason for resigning, saying in a NYTimes op-ed, “people believed they were talking to something that had no ulterior agenda....Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Resource: Center for Democracy & Technology - “Risky Business.” If you have yet to read last month’s release examining model developer business models, give it a look.
VI. RAI Resources:
International AI Safety Report 2026: 200+ pages, 1,451 references, authored by over 100 AI experts, chaired by Yoshua Bengio, and backed by over 30 countries and international organizations. This is the second annual comprehensive review of the latest scientific research on the capabilities and risks of general-purpose AI systems.
OECD - “Exploring Possible AI Trajectories Through 2030”: Scenario-oriented analysis under uncertainty, with direct relevance to policy robustness and institutional planning, mapping plausible AI pathways and policy implications through the end of the decade.
Aspen Digital - “Defining Technologies of Our Time (AI)”: A useful terminology and framing resource for policymakers and cross-sector practitioners that situates AI governance choices in broader democratic and institutional contexts.
CDT - “Who Insures AI: Understanding the Roles of the Private Insurance Industry and How They Can Shape AI Governance”: An examination of how private insurance could function as a complementary governance mechanism for AI, shaping the development and deployment of systems by pricing risk, setting coverage conditions, and requiring specific risk management practices.
Partnership on AI - “Six AI Governance Priorities for 2026”: Reflecting on key questions from 2025, and in light of the variety of forums that will shape key AI policy decisions in 2026, PAI’s Steering Committee determines organizational priority areas in AI Governance for the year ahead.
IAPP - “Third-Party Resources for AI Governance”: A curated, practitioner-oriented hub of tools, templates, and trackers for teams building AI governance functions that need operational artifacts.
Data & Society Research Institute - “(404) Job Not Found”: A labor-focused intervention worth reading for how AI-mediated work transformation can destabilize employment pathways and worker bargaining power, and how AI transitions can reshape opportunity and credentialing.
Data & Society Research Institute - “Building Civic Strength for an AI Era”: This piece draws a parallel between “media literacy” placing the burden of combating disinformation on individual users rather than platform incentives or industry failures, and “AI literacy” similarly individualizing blame. D&S offers an “AI Civics” approach in contrast, starting from the perspective of workers to address broader systemic conditions rather than treating reskilling as a standalone solution.
The Future Society - “Athens Roundtable: AI & the Rule of Law”: Key takeaways from the seventh edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law (“Facing the stakes of AI together: from shared concerns to joint action”), held December 4, 2025, in London, UK.
Harvard BKC Institute for Rebooting Social Media - “Reboot, Rebuild, Reimagine”: A timely framework contribution on platform reform that argues for product, policy, and governance redesign in platform ecosystems increasingly shaped by synthetic media and recommender incentives.
VII. What I’m Reading:
“State of Mozilla” (Mozilla): Mozilla’s 2026 direction emphasizes public-interest AI, community infrastructure, and values-led product development.
Rebecca Finlay, “Corporate AI Governance Matters Now More Than Ever” (Partnership on AI): A concise reminder that governance maturity is a competitiveness issue, and a timely argument for operationalizing that maturity as deployment scales.
Alex Pentland, “A New Economic World Order May Be Based on Sovereign AI and Midsized Nation Alliances” (Stanford University HAI): A strategic lens on how geopolitical AI power may rebalance beyond superpower binaries and how coalition-based compute, standards, and procurement strategies may reconfigure the global AI order.
“Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions.” (NY Times): A scenario exercise mapping where cross-sector experts think AI trajectories may diverge next; useful for understanding where expert consensus converges, and where it fractures, on timelines and social impact.
Caitlin Andrews, “European Commission misses deadline for AI Act guidance on high-risk systems” (IAPP): The EU Commission’s failure to publish Article 6 guidance by the February 2, 2026 deadline introduces a period of legal and operational uncertainty for high-risk AI classification ahead of key enforcement milestones. Revised draft guidance and extended timelines are expected in the coming weeks or months.
What else are you reading?
Thanks for reading this newsletter,
Rebekah Tweed


