AI News Today — Part 22
May 7, 2026 · Sourced from 500+ daily AI sources, scored by relevance.
- Safebooks AI
AI agents for finance operations
- SaolaAI
Autonomous quality for engineering teams
- WINN.AI
Real-time sales copilot for customer calls
- TravelMaxing | AI-powered travel agency
The quality of travel agencies without the heavy fees
- Inkbox
Give your AI agents email, phone and an internet address
- tilde.run
Serverless sandbox for agents, with a versioned filesystem.
- Hyperflow
AI-Powered Intelligence For Enterprise On-Chain Data
- AiDesigns — AI creative workspace
Chat agent and canvas for top image, video & music models.
- Grepture
The gateway for every AI call your app makes.
- Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations
Abstract We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation. We jointly train the AV and AR with reinforcement learning to reconstruct residual stream activations. Although we optimize for activation reconstruction, the resulting NLA explanations read as plausible interpretations of model internals that, according to our quantitative evaluations, grow more informative over training. We apply NLAs to model auditing. During our pre-deployment audit of Claude Opus 4.6, NLAs helped diagnose safety-relevant behaviors and surfaced unverbalized evaluation awareness—cases where Claude believed, but did not say, that it was being evaluated. We present these audit findings as case studies and corroborate them using...
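The abstract above describes a two-module autoencoder trained with reinforcement learning: a verbalizer picks a discrete description of an activation, and a reconstructor maps it back. The following toy sketch illustrates that training loop only; the paper uses LLM modules producing free-form text, while here the "descriptions" are just a small discrete codebook, the verbalizer is a linear softmax policy updated with REINFORCE, and all names and hyperparameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, V = 4, 8                          # activation dim, codebook of "descriptions"
centers = rng.normal(size=(3, d))    # toy activations cluster around 3 prototypes

W = np.zeros((V, d))                           # verbalizer: logits over descriptions
E = rng.normal(scale=0.1, size=(V, d))         # reconstructor: one embedding per description

baseline, errs = 0.0, []
for step in range(4000):
    a = centers[rng.integers(3)] + 0.05 * rng.normal(size=d)
    logits = W @ a
    p = np.exp(logits - logits.max())
    p /= p.sum()
    v = rng.choice(V, p=p)                     # sample a discrete "description"
    err = float(np.sum((a - E[v]) ** 2))       # reconstruction error
    r = -err                                   # reward = negative error
    baseline = 0.95 * baseline + 0.05 * r      # running baseline for variance reduction
    E[v] += 0.1 * (a - E[v])                   # reconstructor: step toward the target activation
    grad_logp = (np.eye(V)[v] - p)[:, None] * a[None, :]
    W += 0.05 * (r - baseline) * grad_logp     # verbalizer: REINFORCE on log pi(v | a)
    errs.append(err)

early, late = np.mean(errs[:200]), np.mean(errs[-200:])
print(early, late)   # reconstruction error falls as the two modules co-adapt
```

Even in this stripped-down form, the reward signal alone is enough for the verbalizer to specialize descriptions to activation clusters, which is the mechanism the abstract relies on.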
- Mechanistic estimation for wide random MLPs
This post covers joint work with Wilson Wu, George Robinson, Mike Winer, Victor Lecomte and Paul Christiano. Thanks to Geoffrey Irving and Jess Riedel for comments on the post. In ARC's latest paper, we study the following problem: given a randomly initialized multilayer perceptron (MLP), produce an estimate for the expected output of the model under Gaussian input. The usual approach to this problem is to sample many possible inputs, run them all through the model, and take the average. Instead, we produce an estimate "mechanistically", without running the model even once. For wide models, our approach produces more accurate estimates, both in theory and in practice. Paper: Estimating the expected output of wide random MLPs more efficiently than sampling Code: mlp_cumulant_propagation GitHub repo We are excited about this result as an early step towards our goal of producing mechanistic estimates that outperform random sampling for any trained neural network. Drawing an analogy between...
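The simplest case of the problem described above can be worked out in closed form: for a one-hidden-layer ReLU network with fixed weights and x ~ N(0, I), each preactivation is exactly Gaussian, and E[relu(z)] = sigma / sqrt(2*pi) for z ~ N(0, sigma^2). The sketch below compares that closed-form "mechanistic" estimate against Monte Carlo sampling; it is an illustration under these simplifying assumptions, not the paper's multi-layer cumulant-propagation method, and all names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, width = 32, 256                        # input dim, hidden width
W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(width, d))
w2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)

# Mechanistic estimate, no forward passes: z_j = W1[j] @ x is N(0, ||W1[j]||^2),
# so E[relu(z_j)] = ||W1[j]|| / sqrt(2*pi), and the output expectation is linear in these.
sigmas = np.linalg.norm(W1, axis=1)
mech = float(w2 @ (sigmas / np.sqrt(2.0 * np.pi)))

# Sampling baseline: Monte Carlo average over Gaussian inputs, in chunks.
total, n = 0.0, 0
for _ in range(20):
    X = rng.normal(size=(10_000, d))
    total += float(np.sum(np.maximum(X @ W1.T, 0.0) @ w2))
    n += X.shape[0]
mc = total / n

print(mech, mc)   # the two estimates agree up to Monte Carlo noise
```

For one hidden layer the Gaussian integral is exact, so the mechanistic number needs zero forward passes to beat any finite sample average; the paper's contribution is making this kind of estimate work for deeper, wide MLPs.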
- Sustaining Cooperation in Populations Guided by AI: A Folk Theorem for LLMs
Large language models (LLMs) are increasingly used to provide instructions to many agents who interact with one another. Such shared reliance couples agents who appear to act independently: they may in fact be guided by a common model. This coupling can change the prospects for cooperation among age...
- AgenticPrecoding: LLM-Empowered Multi-Agent System for Precoding Optimization
Precoding is a key technique for interference management and performance improvement in multi-antenna wireless systems. However, existing precoding methods are typically developed for specific system models, objectives, and constraint sets, which limits their adaptability to the heterogeneous and ev...
- Independent Learning of Nash Equilibria in Partially Observable Markov Potential Games with Decoupled Dynamics
We study Nash equilibrium learning in partially observable Markov games (POMGs), a multi-agent reinforcement learning framework in which agents cannot fully observe the underlying state. Prior work in this setting relies on centralization or information sharing, and suffers from sample and computati...
- From Agent Loops to Deterministic Graphs: Execution Lineage for Reproducible AI-Native Work
Large language model systems are increasingly deployed as agentic workflows that interleave reasoning, tool use, memory, and iterative refinement. These systems are effective at producing answers, but they often rely on implicit conversational state, making it difficult to preserve stable work produ...
- Active Learning for Communication Structure Optimization in LLM-Based Multi-Agent Systems
Optimizing the communication structure of large language model based multi-agent systems (LLM-MAS) has been shown to improve downstream performance and reduce token usage. Existing methods typically rely on randomly sampled training tasks. However, tasks may differ substantially in difficulty and do...
- Retrieval-Conditioned Topology Selection with Provable Budget Conservation for Multi-Agent Code Generation
Multi-agent LLM systems for code generation face a fundamental routing problem: the optimal orchestration topology depends on the structural complexity of the code under modification, yet existing systems select topologies without consulting the codebase. We present Retrieval-Guided Adaptive Orchest...
- Optimizing Social Utility in Sequential Experiments
Regulatory approval of products in high-stakes domains such as drug development requires statistical evidence of safety and efficacy through large-scale randomized controlled trials. However, the high financial cost of these trials may deter developers who lack absolute certainty in their product's ...
- What to Play Next
An AI powered track recommendation app for DJs
- Jobs2Rely
Autonomous AI job agent for senior tech leaders
- Solen
Build, deploy, and self-heal apps from plain language
- Vext Labs, Inc.
We built a mind, not a model.
- Narriveo
AI-synced voice for polished product demos
- Longitude
See how the world sees the world, powered by AI corroboration
- VKHire
AI that interviews & evaluates candidates in Teams & Meet
- GuardiaAuto
https://guardia-auto-safe.com
- Culinary AI
Turn Recipes Into Cinematic Videos
- Playtree
Prepare for your career with AI-powered job readiness tools
- ImageSuite
23 tools, 14 free, 10 AI — $9 flat, no credits.
- Replish AI
Social Media OS trained on your voice + real performance
- ArchGuard
AI-assisted AWS architecture review for Terraform
- Syncly
AI Team Collaboration Platform for Daily Execution
- RecruitzAgency.ai
Recruiting, but the AI does 80% of the work
- SparKey - Intuitive Prompt Assistant
Turn messy thoughts into structured AI prompts
- ctfWithAi
New CVE dropped? Practice it in minutes.
- DemoCritic
AI buyers that shut you down if you pitch too early
- Rewindex
Auto-snapshot for AI agents
- SurveyCat
AI that turns messy survey text into actionable categories
- CodeAnalyticsPro.AI
See exactly how much your codebase is wasting, in 60 seconds
- Novo
Business Intelligence for your Agents
- AI井戸端会議 (AI Idobata Kaigi)
Watch AI characters chat casually, like neighbors at a well
- DbPaw
A fast, modern database client with optional AI assistance
- NEXT_STORM
The Neural Interface with Buffer Protocol. Speak, edit...
- Dora Video
AI video generator for Agents
- Bitcrowd
SEO at scale, generate, translate and repurpose content
- BasicSoccer AI
Upload your soccer training, and get instant feedback
- Birdly
Identify birds in seconds with AI
- JubarteAI
One shared brain for your coding agents
- Fuora Social
Automate Instagram growth, DMs, and content with AI
- metAIphora - AI Psychoanalysis (Android)
AI-powered psychoanalytic chat with 23 unique themes