Ai2

Latest research

February 11, 2026

MolmoSpaces, an open ecosystem for embodied AI

MolmoSpaces is our new open platform for embodied AI that provides physics-grounded scenes, objects, and grasp annotations to train and evaluate generalist robotic policies.
Read post
February 10, 2026

How2Everything: Mining the web to evaluate and improve LLMs on real-world procedures

How2Everything is an open framework for evaluating and improving how well LLMs generate step-by-step procedures.
Read post
February 4, 2026

Now in Nature: Synthesizing scientific literature with retrieval-augmented LMs

We're excited to share that our paper “Synthesizing scientific literature with retrieval-augmented language models” has been accepted to Nature.
Read post
January 28, 2026

Theorizer: Turning thousands of papers into scientific laws

Theorizer is a system that automatically reads scientific literature and synthesizes structured, testable theories.
Read post
January 27, 2026

Open Coding Agents: Fast, accessible coding agents that adapt to any repo

SERA is the first in our family of Open Coding Agents, achieving state-of-the-art performance at low cost.
Read post
January 21, 2026

HiRO-ACE: An accessible solution for kilometer-scale climate simulation

HiRO-ACE is an AI framework that makes kilometer-scale climate simulation dramatically more accessible, generating decades of precipitation data for any region of the globe.
Read post
December 15, 2025

Introducing Bolmo: Byteifying the next generation of language models

Bolmo is a new byte-level model family, built by adapting Olmo 3 into a fast, flexible byte-based model with a short additional training run.
Read post
December 12, 2025

NeuroDiscoveryBench: Benchmarking AI for neuroscience data analysis

NeuroDiscoveryBench is a benchmark to test how well AI systems can answer questions grounded in real-world neuroscience data.
Read post
December 11, 2025

Molmo 2: State-of-the-art video understanding, pointing, and tracking

Molmo 2, a new suite of state-of-the-art vision-language models with open weights, training data, and training code, can analyze videos and multiple images at once.
Read post