tentime/rlm


local-rlm

Overview

RLM in this repository follows a simple loop:

  1. A root model produces Python code.
  2. Code executes in a persistent REPL environment.
  3. Code can call a sub-model (llm_query) for delegated work.
  4. The run terminates when Final is assigned.
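The loop above can be sketched in a few lines of Python. This is an illustrative toy, not the repository's actual API: root_model, run_rlm, and the use of a plain dict as the REPL environment are all assumptions here; only llm_query and Final are names taken from the loop description.

```python
# Minimal sketch of the RLM loop. root_model and run_rlm are
# hypothetical stand-ins; the real engine lives in rlm/core/rlm_engine.py.

def llm_query(prompt):
    # Stand-in sub-model: a real client would call a smaller LLM here.
    return f"sub-answer for: {prompt}"

def root_model(state):
    # Stand-in root model: emits Python source for the REPL to execute.
    if state["iteration"] == 0:
        return "partial = llm_query('summarize the context')"
    return "Final = partial.upper()"

def run_rlm(max_iterations=6):
    env = {"llm_query": llm_query}    # persistent REPL environment
    state = {"iteration": 0}
    while state["iteration"] < max_iterations:
        code = root_model(state)      # 1. root model produces Python code
        exec(code, env)               # 2. code runs in the shared environment
        state["iteration"] += 1
        if "Final" in env:            # 4. stop once Final is assigned
            return env["Final"], state["iteration"]
    return None, state["iteration"]

answer, iterations = run_rlm()
```

Because the environment persists across iterations, the second chunk of generated code can read the `partial` variable the first chunk created; this toy run finishes in two iterations with one sub-model call, mirroring the demo output below.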

Quick Start

Run the minimal deterministic demo (no model downloads):

cd rlm
python run_minimal.py

Expected output includes:

  • answer: ...
  • iterations: 2
  • sub_lm_calls: 1

Learning Resources

  • docs/LEARN_RLM.md
  • docs/PAPER_TO_CODE.md

Repository Layout

  • rlm/core/rlm_engine.py: core recursive execution engine
  • rlm/core/llm_client.py: client interfaces and local model client
  • rlm/run_minimal.py: minimal paper-loop demo
  • rlm/run_local_rlm.py: optional local model runner
  • rlm/tests/test_rlm_engine.py: behavioral tests for core loop

Development

Install test dependencies and run tests:

cd rlm
python -m pip install -r requirements.txt
python -m pytest -q

Optional: Local Model Execution

For local inference with Hugging Face models:

cd rlm
python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements-local.txt
python run_local_rlm.py \
  --fast \
  --device auto \
  --max-iterations 6 \
  --max-output-tokens 256 \
  --query "What is the project codename and release version?" \
  --context-file examples/quickstart_context.txt

Notes:

  • On Apple Silicon, --device auto defaults to CPU for stability.
  • --device mps is available, but MPS stability can vary by model/runtime versions.
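The device-selection behavior in the notes above can be sketched as follows. This is a hedged reconstruction, not the code in run_local_rlm.py: the function name resolve_device and the cuda_available parameter are assumptions made so the sketch stays self-contained.

```python
import platform

def resolve_device(requested, cuda_available=False):
    """Map a --device flag value to a concrete device string (hypothetical helper)."""
    if requested != "auto":
        return requested                 # honor an explicit choice, e.g. "mps"
    if cuda_available:
        return "cuda"
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "cpu"                     # Apple Silicon: default to CPU for stability
    return "cpu"
```

With this logic, passing --device mps always opts in to MPS, while --device auto falls back to CPU on Apple Silicon unless CUDA is available.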

License

MIT (LICENSE).
