Friday, 27 March 2026
Welcome to today's roundup of the most interesting developments in AI and technology.
Top Stories
Anthropic wins injunction against Trump administration over Defense Department saga
A federal judge has ordered that the Trump administration rescind recent restrictions it placed on the AI company.
Introducing Sonnet 4.6
Claude Sonnet 4.6 is a full upgrade of the model’s skills across coding, computer use, long-horizon reasoning, agent planning, knowledge work, and design.
Claude Opus 4.6
We’re upgrading our smartest model. Across agentic coding, computer use, tool use, search, and finance, Opus 4.6 is an industry-leading model, often by a wide margin.
Claude is a space to think
We’ve made a choice: Claude will remain ad-free. We explain why advertising incentives are incompatible with a genuinely helpful AI assistant, and how we plan to expand access without compromising user trust.
Anthropic invests $100 million into the Claude Partner Network
We’re launching the Claude Partner Network, a program for partner organizations helping enterprises adopt Claude.
Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
You can now transfer your chats and personal information from other chatbots directly into Gemini
Google is launching "switching tools" that, just as it sounds, will make it easier for users of other chatbots to switch to Gemini.
Wikipedia cracks down on the use of AI in article writing
The site has long struggled with AI-generated writing, and its policies on the issue continue to evolve.
OpenAI abandons yet another side quest: ChatGPT’s erotic mode
It's only the latest of several side projects that the AI startup has ditched over the past week.
Data centers get ready — the Senate wants to see your power bills
Senators Josh Hawley and Elizabeth Warren want the Energy Information Administration to gather more details about how data centers use power — and how that affects the grid.
Industry
A peek inside CLI tools
No more funny videos at OpenAI
Agents should interview you
Is Claude trying to become openclaw?
The debut of Gemini 3.1 Flash Live could make it harder to know if you're talking to a robot
Google's new conversational audio AI is rolling out in search, Gemini, and developer tools today.
Google's TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x
TurboQuant makes AI models more efficient without the loss of output quality that other compression methods incur.
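The blurb above doesn't describe TurboQuant's actual algorithm, but the general idea behind model compression via quantization can be sketched in a few lines: store weights in a narrower integer type plus a scale factor, trading a small amount of precision for a large memory reduction. This is a generic, hypothetical illustration (symmetric int8 quantization, not TurboQuant itself):

```python
# Toy post-training quantization sketch -- NOT TurboQuant's method, which
# is not detailed here. Symmetric int8 quantization: each float32 weight
# becomes one signed byte plus a shared scale, cutting storage 4x.
import array

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = array.array('b', (round(w / scale) for w in weights))
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = array.array('f', [0.5, -1.27, 0.003, 1.0])
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 1 byte per weight vs. 4 bytes for float32.
print(q.itemsize, weights.itemsize)  # 1 4
```

Real systems layer far more on top of this (per-channel scales, outlier handling, mixed precision), which is where quality-preserving ratios beyond the naive 4x come from.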
This startup wants to change how mathematicians do math
Axiom Math, a startup based in Palo Alto, California, has released a free new AI tool for mathematicians, designed to discover mathematical patterns that could unlock solutions to long-standing problems. The tool, called Axplorer, is a redesign of an existing one called PatternBoost that François...
Agentic commerce runs on truth and context
Imagine telling a digital agent, “Use my points and book a family trip to Italy. Keep it within budget, pick hotels we’ve liked before, and handle the details.” Instead of returning a list of links, the agent assembles an itinerary and executes the purchase. That shift, from assistance to executi...
The AI Hype Index: AI goes to war
AI is at war. Anthropic and the Pentagon feuded over how to weaponize Anthropic’s AI model Claude; then OpenAI swept the Pentagon off its feet with an “opportunistic and sloppy” deal. Users quit ChatGPT in droves. People marched through London in the biggest protest against AI to date. If you’re...
Electronic Frontier Foundation to swap leaders as AI, ICE fights escalate
Public interest in government tech abuses is peaking. EFF's new leader plans to build on that.
Read moreResearch & Products
AlphaGenome: AI for better understanding the genome — Google DeepMind
Ziga Avsec and Natasha Latysheva
A catalogue of genetic mutations to help pinpoint the cause of diseases — Google DeepMind
New AI tool classifies the effects of 71 million ‘missense’ mutations
AlphaEarth Foundations helps map our planet in unprecedented detail — Google DeepMind
The AlphaEarth Foundations team
Introducing TRIBE v2: A Predictive Foundation Model Trained to Understand How the Human Brain Processes Complex Stimuli
TRIBE v2 reliably predicts high-resolution fMRI brain activity — enabling zero-shot predictions for new subjects, languages, and tasks — and consistently...
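TRIBE v2's architecture isn't described in this blurb; the classic baseline this line of work builds on is a linear "encoding model" that maps stimulus features to per-voxel fMRI responses, typically fit with ridge regression. A minimal sketch on synthetic data (all names and sizes here are illustrative assumptions, not TRIBE v2 internals):

```python
# Hedged sketch of a linear fMRI encoding model: predict voxel responses
# from stimulus features via ridge regression. Synthetic data stands in
# for real stimuli and brain recordings.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 200, 16, 8

X = rng.standard_normal((n_timepoints, n_features))   # stimulus features
true_W = rng.standard_normal((n_features, n_voxels))  # ground-truth mapping
Y = X @ true_W + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

pred = X @ W
r2 = 1 - ((Y - pred) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(round(r2, 3))  # near 1.0 on this low-noise toy data
```

Foundation models like TRIBE v2 replace the hand-built feature matrix with learned representations, which is what enables the zero-shot transfer to new subjects and tasks the blurb mentions.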
Four MTIA Chips in Two Years: Scaling AI Experiences for Billions
Serving a wide range of AI models on a global scale, while maintaining the lowest possible costs, is one of the most demanding infrastructure challenges...
Mapping the World's Forests with Greater Precision: Introducing Canopy Height Maps v2
Today, in partnership with the World Resources Institute, we’re announcing Canopy Height Maps v2 (CHMv2): an open source model and world-scale maps...
Reducing Government Costs and Increasing Access to Greenspaces in the United Kingdom with DINO
Meta's DINOv2 model is enhancing reforestation efforts around the world. Learn how the UK government is using DINO to help reduce costs and increase...
How DINO and SAM are Helping Modernize Essential Medical Triage Practices
By leveraging advanced AI models, teams at the University of Pennsylvania are aiming to bring cutting-edge automation to emergency response.
PLDR-LLMs Reason At Self-Organized Criticality
arXiv:2603.23539v1 Announce Type: new Abstract: We show that PLDR-LLMs pretrained at self-organized criticality exhibit reasoning at inference time. The characteristics of PLDR-LLM deductive outputs at criticality are similar to second-order phase transitions. At criticality, the correlation lengt...
Environment Maps: Structured Environmental Representations for Long-Horizon Agents
arXiv:2603.23610v2 Announce Type: new Abstract: Although large language models (LLMs) have advanced rapidly, robust automation of complex software workflows remains an open problem. In long-horizon settings, agents frequently suffer from cascading errors and environmental stochasticity; a single...
Evaluating a Multi-Agent Voice-Enabled Smart Speaker for Care Homes: A Safety-Focused Framework
arXiv:2603.23625v1 Announce Type: new Abstract: Artificial intelligence (AI) is increasingly being explored in health and social care to reduce administrative workload and allow staff to spend more time on patient care. This paper evaluates a voice-enabled Care Home Smart Speaker designed to supp...
Can LLM Agents Be CFOs? A Benchmark for Resource Allocation in Dynamic Enterprise Environments
arXiv:2603.23638v1 Announce Type: new Abstract: Large language models (LLMs) have enabled agentic systems that can reason, plan, and act across complex tasks, but it remains unclear whether they can allocate resources effectively under uncertainty. Unlike short-horizon reactive decisions, allocat...
GTO Wizard Benchmark
arXiv:2603.23660v1 Announce Type: new Abstract: We introduce GTO Wizard Benchmark, a public API and standardized evaluation framework for benchmarking algorithms in Heads-Up No-Limit Texas Hold'em (HUNL). The benchmark evaluates agents against GTO Wizard AI, a state-of-the-art superhuman poker ag...
Beyond Accuracy: Introducing a Symbolic-Mechanistic Approach to Interpretable Evaluation
arXiv:2603.23517v1 Announce Type: new Abstract: Accuracy-based evaluation cannot reliably distinguish genuine generalization from shortcuts like memorization, leakage, or brittle heuristics, especially in small-data regimes. In this position paper, we argue for mechanism-aware evaluation that com...
Implicit Turn-Wise Policy Optimization for Proactive User-LLM Interaction
arXiv:2603.23550v1 Announce Type: new Abstract: Multi-turn human-AI collaboration is fundamental to deploying interactive services such as adaptive tutoring, conversational recommendation, and professional consultation. However, optimizing these interactions via reinforcement learning is hindered...
Upper Entropy for 2-Monotone Lower Probabilities
arXiv:2603.23558v1 Announce Type: new Abstract: Uncertainty quantification is a key aspect in many tasks such as model selection/regularization, or quantifying prediction uncertainties to perform active learning or OOD detection. Within credal approaches that consider modeling uncertainty as prob...
Synthetic Mixed Training: Scaling Parametric Knowledge Acquisition Beyond RAG
arXiv:2603.23562v1 Announce Type: new Abstract: Synthetic data augmentation helps language models learn new knowledge in data-constrained domains. However, naively scaling existing synthetic data methods by training on more synthetic tokens or using stronger generators yields diminishing returns...
Safe Reinforcement Learning with Preference-based Constraint Inference
arXiv:2603.23565v1 Announce Type: new Abstract: Safe reinforcement learning (RL) is a standard paradigm for safety-critical decision making. However, real-world safety constraints can be complex, subjective, and even hard to explicitly specify. Existing works on constraint inference rely on restr...