Friday, 3 April 2026
Welcome to today's roundup of the most interesting developments in AI and technology.
Top Stories
The Facebook insider building content moderation for the AI era
Moonbounce has raised $12 million to grow its AI control engine that converts content moderation policies into consistent, predictable AI behavior.
Introducing Sonnet 4.6
Claude Sonnet 4.6 is a full upgrade of the model’s skills across coding, computer use, long-reasoning, agent planning, knowledge work, and design.
Claude Opus 4.6
We’re upgrading our smartest model. Across agentic coding, computer use, tool use, search, and finance, Opus 4.6 is an industry-leading model, often by a wide margin.
Claude is a space to think | Anthropic
We’ve made a choice: Claude will remain ad-free. We explain why advertising incentives are incompatible with a genuinely helpful AI assistant, and how we plan to expand access without compromising user trust.
Australian government and Anthropic sign MOU for AI safety and research
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Anthropic invests $100 million into the Claude Partner Network
We’re launching the Claude Partner Network, a program for partner organizations helping enterprises adopt Claude.
OpenAI acquires TBPN, the buzzy founder-led business talk show
TBPN, Silicon Valley's cult-favorite tech podcast, will operate independently, even as it's overseen by chief political operative Chris Lehane.
Microsoft takes on AI rivals with three new foundational models
Six months after the group's formation, MAI released models that can transcribe voice to text as well as generate audio and images.
Google now lets you direct avatars through prompts in its Vids app
Google is adding a way to customize and instruct avatars for video creation in the Vids app.
Anthropic took down thousands of GitHub repos trying to yank its leaked source code — a move the company says was an accident
Anthropic executives said it was an accident and retracted the bulk of the takedown notices.
Industry
Inside the leaked Claude Code files
Docs as files, a new Markdown editor, and April Fools
One breach after another
Separate and sandbox your agent's access
Google announces Gemma 4 open AI models, switches to Apache 2.0 license
Gemma 4 brings the first major update to Google's open models in a year.
The gig workers who are training humanoid robots at home
When Zeus, a medical student living in a hilltop city in central Nigeria, returns to his studio apartment from a long day at the hospital, he turns on his ring light, straps his iPhone to his forehead, and starts recording himself. He raises his hands in front of him like a sleepwalker and puts a…
How did Anthropic measure AI's "theoretical capabilities" in the job market?
A 2023 study made many assumptions about future "anticipated LLM-powered software."
Shifting to AI model customization is an architectural imperative
In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every new model iteration. Today, those jumps have flattened into incremental gains. The exception is domain-specialized intelligence, where true step-function improv...
AI benchmarks are broken. Here’s what we need instead.
For decades, artificial intelligence has been evaluated through the question of whether machines outperform humans. From chess to advanced math, from coding to essay writing, the performance of AI models and applications is tested against that of individual humans completing tasks. This framing i...
Research & Products
AlphaGenome: AI for better understanding the genome — Google DeepMind
Ziga Avsec and Natasha Latysheva
A catalogue of genetic mutations to help pinpoint the cause of diseases — Google DeepMind
New AI tool classifies the effects of 71 million ‘missense’ mutations
AlphaEarth Foundations helps map our planet in unprecedented detail — Google DeepMind
The AlphaEarth Foundations team
Introducing TRIBE v2: A Predictive Foundation Model Trained to Understand How the Human Brain Processes Complex Stimuli
TRIBE v2 reliably predicts high-resolution fMRI brain activity — enabling zero-shot predictions for new subjects, languages, and tasks — and consistently...
SAM 3.1: Faster and More Accessible Real-Time Video Detection and Tracking With Multiplexing and Global Reasoning
Update March 27, 2026:
Four MTIA Chips in Two Years: Scaling AI Experiences for Billions
Serving a wide range of AI models on a global scale, while maintaining the lowest possible costs, is one of the most demanding infrastructure challenges...
Mapping the World's Forests with Greater Precision: Introducing Canopy Height Maps v2
Today, in partnership with the World Resources Institute, we’re announcing Canopy Height Maps v2 (CHMv2): an open source model and world-scale maps...
Reducing Government Costs and Increasing Access to Greenspaces in the United Kingdom with DINO
Meta's DINOv2 model is enhancing reforestation efforts around the world. Learn how the UK government is using DINO to help reduce costs and increase...
How Emotion Shapes the Behavior of LLMs and Agents: A Mechanistic Study
arXiv:2604.00005v1 Abstract: Emotion plays an important role in human cognition and performance. Motivated by this, we investigate whether analogous emotional signals can shape the behavior of large language models (LLMs) and agents. Existing emotion-aware studies mainly treat...
One Panel Does Not Fit All: Case-Adaptive Multi-Agent Deliberation for Clinical Prediction
arXiv:2604.00085v1 Abstract: Large language models applied to clinical prediction exhibit case-level heterogeneity: simple cases yield consistent outputs, while complex cases produce divergent predictions under minor prompt changes. Existing single-agent strategies sample from...
Open, Reliable, and Collective: A Community-Driven Framework for Tool-Using AI Agents
arXiv:2604.00137v1 Abstract: Tool-integrated LLMs can retrieve, compute, and take real-world actions via external tools, but reliability remains a key bottleneck. We argue that failures stem from both tool-use accuracy (how well an agent invokes a tool) and intrinsic tool accur...
A Safety-Aware Role-Orchestrated Multi-Agent LLM Framework for Behavioral Health Communication Simulation
arXiv:2604.00249v1 Abstract: Single-agent large language model (LLM) systems struggle to simultaneously support diverse conversational functions and maintain safety in behavioral health communication. We propose a safety-aware, role-orchestrated multi-agent LLM framework design...
Human-in-the-Loop Control of Objective Drift in LLM-Assisted Computer Science Education
arXiv:2604.00281v1 Abstract: Large language models (LLMs) are increasingly embedded in computer science education through AI-assisted programming tools, yet such workflows often exhibit objective drift, in which locally plausible outputs diverge from stated task specifications....
DySCo: Dynamic Semantic Compression for Effective Long-term Time Series Forecasting
arXiv:2604.01261v1 Abstract: Time series forecasting (TSF) is critical across domains such as finance, meteorology, and energy. While extending the lookback window theoretically provides richer historical context, in practice, it often introduces irrelevant noise and computatio...
Sven: Singular Value Descent as a Computationally Efficient Natural Gradient Method
arXiv:2604.01279v1 Abstract: We introduce Sven (Singular Value dEsceNt), a new optimization algorithm for neural networks that exploits the natural decomposition of loss functions into a sum over individual data points, rather than reducing the full loss to a single scalar befo...
Forecasting Supply Chain Disruptions with Foresight Learning
arXiv:2604.01298v1 Abstract: Anticipating supply chain disruptions before they materialize is a core challenge for firms and policymakers alike. A key difficulty is learning to reason reliably about infrequent, high-impact events from noisy and unstructured inputs - a setting w...
UQ-SHRED: uncertainty quantification of shallow recurrent decoder networks for sparse sensing via engression
arXiv:2604.01305v1 Abstract: Reconstructing high-dimensional spatiotemporal fields from sparse sensor measurements is critical in a wide range of scientific applications. The SHallow REcurrent Decoder (SHRED) architecture is a recent state-of-the-art architecture that reconstru...
An Online Machine Learning Multi-resolution Optimization Framework for Energy System Design Limit of Performance Analysis
arXiv:2604.01308v1 Abstract: Designing reliable integrated energy systems for industrial processes requires optimization and verification models across multiple fidelities, from architecture-level sizing to high-fidelity dynamic operation. However, model mismatch across fidelit...