Recursive Language Model
A Recursive Language Model (RLM) addresses the long-context problem of traditional LLMs, which can only attend to a fixed number of tokens at once. Instead of feeding the whole input into the model, an RLM treats the prompt as an external environment that the model programmatically inspects, decomposes, and processes. The model works in a REPL: it writes code to explore the data, makes recursive calls to itself on smaller chunks, and combines the results. This lets it handle arbitrarily long inputs more accurately and more cheaply than standard long-context techniques, and it substantially outperforms the same base LLM on complex long-context tasks.
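The loop below is a minimal Python sketch of that idea. The chat_completion helper, the <repl> tag convention, the llm() helper, and the "FINAL:" answer marker are all assumptions made for illustration; they are not the actual interface used by this application.

    import re

    def chat_completion(messages):
        # Placeholder for a call to the underlying LLM API (assumption).
        raise NotImplementedError

    def rlm_answer(question, long_context, max_steps=10):
        # The long context lives only inside the REPL environment;
        # the root model never sees it directly in its own prompt.
        env = {
            "context": long_context,
            # Recursive sub-LLM call: answer a sub-question over a smaller chunk.
            "llm": lambda q, ctx="": rlm_answer(q, ctx),
        }
        messages = [
            {"role": "system", "content": (
                "You can run Python by writing <repl>code</repl>. "
                "The variable `context` holds the full input. "
                "Call llm(question, chunk) to query a sub-LLM on a chunk. "
                "Set `result` to whatever output you want to see. "
                "Reply with FINAL: <answer> when you are done.")},
            {"role": "user", "content": question},
        ]
        for _ in range(max_steps):
            reply = chat_completion(messages)
            messages.append({"role": "assistant", "content": reply})
            if reply.strip().startswith("FINAL:"):
                return reply.strip()[len("FINAL:"):].strip()
            code = re.search(r"<repl>(.*?)</repl>", reply, re.S)
            if code:
                # Execute the model's code in the persistent REPL environment,
                # then return the result to the parent model as a user message.
                exec(code.group(1), env)
                messages.append({"role": "user",
                                 "content": "REPL output:\n" + str(env.get("result", ""))})
        return "No answer within the step budget."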
Clicking this will reset the application state and reload the page.
The system prompt defines the model's role, task, and constraints. It guides the model's behavior and ensures consistent responses.
Direct input provided by the user.
Model-generated response based on context and tools. Hover over any assistant message to view the exact tokens and cost.
A code execution step: code written by the model inside its assistant message, to be run in the REPL environment.
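For example, a code step might look like the snippet below. It is hypothetical and reuses the context variable name assumed in the sketch above.

    # Inspect the environment before reading any of its content.
    result = f"context length: {len(context):,} characters"

    # A later step might split the context into chunks and peek at the first one.
    chunks = [context[i:i + 20_000] for i in range(0, len(context), 20_000)]
    result = chunks[0][:500]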
The execution result returned from the REPL environment; it is passed back to the parent LLM as a user message.
Indicates that a sub-LLM was called from within the assistant message.
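Inside the REPL, such a call might look like the following hypothetical snippet, reusing the llm() helper and the chunks variable assumed in the earlier sketches. The query string is only an example.

    # Ask a sub-LLM about each chunk, then combine the partial answers.
    partial = [llm("Summarize any mention of budget figures.", chunk) for chunk in chunks]
    result = "\n\n".join(partial)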