The500Feed.Live

Everything going on in AI - updated daily from 500+ sources

📄 Research · May 13, 2026

Dense vs Sparse Pretraining at Tiny Scale: Active-Parameter vs Total-Parameter Matching

We study dense and mixture-of-experts (MoE) transformers in a tiny-scale pretraining regime under a shared LLaMA-style decoder training recipe. The sparse model replaces dense feed-forward blocks with Mixtral-style routed experts. Dense baselines are modestly width-resized to tightly match either ac...
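A quick way to see what "active-parameter vs total-parameter matching" means in practice is to count feed-forward parameters for both block types. The Python sketch below is illustrative only, assuming a LLaMA-style SwiGLU feed-forward block and Mixtral-style top-k routing; the dimensions and expert counts are made-up assumptions, not the paper's configuration.

```python
# Illustrative sketch (not from the paper): how active- vs total-parameter
# counts diverge when a dense SwiGLU feed-forward block is replaced by
# Mixtral-style routed experts. All dimensions below are assumptions.

def swiglu_ffn_params(d_model: int, d_ff: int) -> int:
    # LLaMA-style SwiGLU FFN: gate, up, and down projections.
    return 3 * d_model * d_ff

def moe_layer_params(d_model: int, d_ff: int, n_experts: int, top_k: int):
    router = d_model * n_experts                # linear routing layer
    expert = swiglu_ffn_params(d_model, d_ff)   # each expert is a SwiGLU FFN
    total = router + n_experts * expert         # all experts are stored
    active = router + top_k * expert            # only top-k experts run per token
    return total, active

if __name__ == "__main__":
    d_model, d_ff = 512, 1376        # assumed tiny-scale widths
    n_experts, top_k = 8, 2          # Mixtral-style routing
    total, active = moe_layer_params(d_model, d_ff, n_experts, top_k)
    dense = swiglu_ffn_params(d_model, d_ff)
    print(f"dense FFN params:        {dense:,}")
    print(f"MoE total params/layer:  {total:,}")
    print(f"MoE active params/layer: {active:,}")
    # A total-parameter-matched dense baseline widens d_ff toward `total`;
    # an active-parameter-matched one only needs to reach `active`.
```

Under these assumed settings, a dense baseline matched on total parameters must be widened far more than one matched on active parameters, which is the contrast the paper's dense baselines are resized to probe.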


Source

http://arxiv.org/abs/2605.13769v1