Ai2

Latest research

November 19, 2024

Scientific literature synthesis with retrieval-augmented language models

Ai2’s and UW’s new retrieval-augmented LM helps scientists navigate and synthesize the scientific literature.
Read post
November 12, 2024

How many Van Goghs does it take to Van Gogh? Finding the imitation threshold

Meet MIMETIC^2: Finding the number of images required by text-to-image models for imitation of a concept.
Read post
October 28, 2024

Hybrid preferences: Learning to route instances for human vs. AI feedback

We introduce a routing framework that combines inputs from humans and LMs to achieve better annotation quality.
Read post
October 2, 2024

Investigating pretraining dynamics and stability with OLMo checkpoints

We use data from our open pretraining runs to test hypotheses about training dynamics in OLMo checkpoints.
Read post
September 25, 2024

Molmo

A family of open state-of-the-art multimodal AI models
Read post
September 4, 2024

OLMoE: An open, small, and state-of-the-art mixture-of-experts model

Introducing OLMoE, the first model to be on the Pareto frontier of performance and size, released with open data.
Read post
August 12, 2024

Digital Socrates: Evaluating LLMs through explanation critiques

Digital Socrates is an evaluation tool that can characterize LLMs' explanation capabilities.
Read post
August 8, 2024

Open research is the key to unlocking safer AI

Ai2 presents our stance on openness and safety in AI.
Read post
July 3, 2024

Broadening the scope of noncompliance: When and how AI models should not comply with user requests

We outline a taxonomy of model noncompliance and then delve into how to implement it.
Read post