Recursion Beats Scale
The AI industry won't shut up about scale. Bigger models, more parameters, larger training datasets. Samsung just released a 7-million-parameter model that outperforms giants with 10,000 times as many parameters on abstract reasoning tasks.
The Tiny Recursive Model (TRM) doesn't try to solve problems in one pass. It loops: draft an answer, check the logic, rewrite, repeat up to 16 times. The model uses an internal "scratchpad" to critique its reasoning, catching mistakes before they compound.
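The loop is easier to see in code. Here's a minimal, hypothetical sketch of that draft-critique-revise cycle on a toy task (putting a list in order), not Samsung's actual architecture — TRM implements each step with a small neural network, while the function names `draft`, `critique`, and `revise` here are illustrative stand-ins:

```python
def recursive_refine(problem, draft, critique, revise, max_steps=16):
    """TRM-style recursion, sketched: draft an answer, critique it
    on a scratchpad, revise, and repeat up to max_steps times."""
    answer = draft(problem)
    scratchpad = []                       # internal critique history
    for _ in range(max_steps):
        issues = critique(problem, answer)  # "check the logic"
        if not issues:                      # answer stabilized: stop early
            break
        scratchpad.append(issues)
        answer = revise(problem, answer, scratchpad)
    return answer

# Toy instantiation: repair an out-of-order list one swap at a time.
def draft(xs):
    return list(xs)                       # first guess: the input as-is

def critique(xs, ans):
    # report positions where adjacent elements are out of order
    return [i for i in range(len(ans) - 1) if ans[i] > ans[i + 1]]

def revise(xs, ans, scratchpad):
    i = scratchpad[-1][0]                 # fix the first reported issue
    ans = list(ans)
    ans[i], ans[i + 1] = ans[i + 1], ans[i]
    return ans

print(recursive_refine([3, 1, 2], draft, critique, revise))  # → [1, 2, 3]
```

The point of the structure, not the toy task: each pass gets the critique of the previous attempt as extra input, so errors get caught and corrected instead of compounding.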
On the ARC-AGI benchmark—abstract reasoning tests that trip up even the best LLMs—TRM scored 45% on ARC-AGI-1 and 8% on ARC-AGI-2. Better than DeepSeek-R1, o3-mini, and Gemini 2.5 Pro, all with billions of parameters. On Sudoku-Extreme with just 1,000 training examples, it hit 87.4% accuracy compared to its predecessor's 55%.
Alexia Jolicoeur-Martineau, the Samsung researcher behind TRM, puts it bluntly: "The notion that one must depend on extensive foundational models trained for millions of dollars by major corporations to tackle difficult tasks is misleading."
When you're solving a hard problem, you don't write down the first answer that pops into your head. You draft something, spot the holes, rethink it, try again. TRM does this in a two-layer neural network. Traditional LLMs generate answers in a single forward pass. Make an early mistake? That error spreads through the entire response.
TRM won't write your emails or debug your code. It's built for structured, grid-based reasoning: Sudoku, mazes, abstract pattern recognition. Not every task needs a billion-parameter model. For specific problems, a 7-million-parameter model running locally can beat cloud-based giants — faster, cheaper, and more private.
I've been testing Apple's Foundation Models framework, and what I like most is the privacy. It runs entirely offline. Your prompts stay on your device. Apple doesn't use your data to train models. The inference is free and yours.
Cloud-based LLMs typically log every query you send. Apple's 3-billion-parameter model runs natively in iOS 26, iPadOS 26, and macOS 26. The model feels like it's mine, not something I'm renting access to.
Samsung's TRM is open source under an MIT license. Apple's Foundation Models framework gives developers access to on-device AI with three lines of Swift code. Both show that AI doesn't have to phone home. Sometimes recursion beats scale. Sometimes the best AI runs quietly on your device without sending your data anywhere.