Recursive Language Model

A Recursive Language Model (RLM) addresses the long-context limitation of traditional LLMs, which can attend to only a fixed number of tokens at once. Instead of feeding the entire input into the model, an RLM treats the prompt as an external environment that the model programmatically inspects, decomposes, and processes. The model works inside a REPL, writing code to explore the data, issuing recursive sub-LLM calls on smaller chunks, and combining the results. This lets it handle arbitrarily long inputs more accurately and cheaply than standard long-context methods, and to substantially outperform base LLMs on complex long-context tasks.
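
As a rough illustration of the recursive decomposition step only, the Python sketch below answers a query over an arbitrarily long context by splitting it and combining partial answers. The helpers llm_query, estimate_tokens, and split_into_chunks are hypothetical stand-ins, not part of any specific RLM implementation.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic: roughly 4 characters per token

def split_into_chunks(text: str, max_tokens: int) -> list[str]:
    size = max_tokens * 4  # convert the token budget back to a character budget
    return [text[i:i + size] for i in range(0, len(text), size)]

def llm_query(prompt: str) -> str:
    raise NotImplementedError("wire this to an actual LLM API")

def rlm_answer(query: str, context: str, max_tokens: int = 8000) -> str:
    """Answer `query` over `context`, recursing when the context is too long."""
    if estimate_tokens(context) <= max_tokens:
        # Base case: the context fits in a single call, so ask the model directly.
        return llm_query(f"{context}\n\nQuestion: {query}")
    # Recursive case: answer the query over each chunk, then combine the partials.
    partials = [rlm_answer(query, chunk, max_tokens)
                for chunk in split_into_chunks(context, max_tokens)]
    return llm_query("Partial answers:\n" + "\n".join(partials)
                     + f"\n\nCombine these into one answer to: {query}")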

Reset

Clicking this will reset the application state and reload the page.

LONG CONTEXT
Click the Shuffle dataset button to load a dataset.
USER QUERY
Click the Shuffle dataset button to load a dataset.
Conversation Trace
system prompt
System Prompt

The system prompt defines the model's role, task, and constraints. It guides the model's behavior and ensures consistent responses.

user message
User Message

Direct input provided by the user.

assistant message
Assistant Message

Model-generated response based on context and tools. Hover over any assistant message to view its exact token counts and cost.

REPL Environment
repl_call
REPL Call

A code execution step that the model issues within an assistant message, to be run in the REPL environment.

repl_env_output
REPL Output

The execution result returned from the REPL environment, passed back to the parent LLM as a user message.

sub_llm_call
Sub LLM Call

A recursive call to a sub-LLM issued from within the assistant message.
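
For example, a single REPL call might look like the short sketch below; the preloaded variable context and the helper llm (which issues a sub-LLM call) are assumed names used only for illustration. Whatever the code prints is returned to the parent model as a REPL Output, and each llm(...) invocation is shown as a Sub LLM Call.

# Hypothetical code the parent model might submit as one REPL call.
sections = context.split("\n\n")                     # inspect and decompose the long prompt
relevant = [s for s in sections if "warranty" in s]  # keep only sections on the topic of interest
answers = [llm(f"Summarize the warranty terms in:\n{s}") for s in relevant]
print("\n".join(answers))                            # printed text comes back as the REPL Output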