REMAC (Real-Time Robot Execution with Masked Action Chunking) achieves real-time robot control through masked action chunking and asynchronous execution. This repository contains the official implementation and simulation experiments for the research paper Real-Time Robot Execution with Masked Action Chunking.
Before installation, ensure your system meets the following requirements:
- GPU: CUDA-compatible GPU with 12GB+ VRAM (recommended for training)
- OS: Tested on Ubuntu 22.04+
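A quick way to sanity-check the GPU requirement is to compare reported VRAM against the 12 GB recommendation. The helper below is a hypothetical sketch (not part of this repository); on a real machine the value would come from nvidia-smi.

```shell
# Hypothetical VRAM check against the 12 GB recommendation (not in the repo)
check_vram() {
  local vram_mib=$1          # total VRAM in MiB
  if [ "$vram_mib" -ge 12288 ]; then
    echo "OK: ${vram_mib} MiB VRAM"
  else
    echo "WARNING: ${vram_mib} MiB VRAM is below the recommended 12 GB"
  fi
}

# On a machine with NVIDIA drivers installed, feed it the real value:
#   check_vram "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)"
check_vram 24576   # example value for a 24 GB card
```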
Follow these steps to set up the REMAC-Kinetix environment:
```bash
# Clone the repository with submodules
git clone --recurse-submodules https://github.com/yourusername/REMAC-Kinetix.git
cd REMAC-Kinetix

# Alternatively, if already cloned, initialize the Kinetix submodule
git submodule update --init --recursive

# Install uv (fast Python package installer and resolver)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install all project dependencies
uv sync
```

Verify the installation:
```bash
# Check uv installation
uv --version

# Verify Python environment
uv run python --version
```

The REMAC training pipeline consists of two main stages: (1) training a base model following the RTC methodology, and (2) fine-tuning it with LoRA for improved performance.
This stage follows the Real-Time Chunking (RTC) methodology to produce base model checkpoints.
```bash
# Train expert policies using RL
# Outputs: checkpoints, videos, and training statistics
# Location: ./logs-expert/<wandb-run-name>
uv run src_lora/train_expert.py

# Generate imitation learning data using trained expert policies
# Data will be saved to: ./logs-expert/<wandb-run-name>/data/
uv run src_lora/generate_data.py --config.run-path ./logs-expert/<wandb-run-name>
```

Note: Replace <wandb-run-name> with the actual run name from your Weights & Biases dashboard.
```bash
# Train the base flow-matching policy via behavioral cloning
# Outputs: ./logs-bc/<wandb-run-name>
uv run src_lora/train_flow_base.py --config.run-path ./logs-expert/<wandb-run-name>

# Evaluate the base model performance before fine-tuning
uv run src_lora/eval_flow_no_lora.py \
    --config.run-path ./logs-bc/<wandb-run-name> \
    --output-dir <output-dir>

# Rename the base model checkpoint directory for the fine-tuning stage
mv ./logs-bc/<wandb-run-name> ./logs-bc/base_model
```

After preparing the base model, fine-tune it using the REMAC approach with LoRA adaptation:
```bash
# Run the complete fine-tuning and evaluation pipeline
# This script trains and evaluates on all 12 tasks sequentially
bash run_all.sh
```

The run_all.sh script will:
- Fine-tune the base model using LoRA on each task
- Evaluate the fine-tuned models
- Generate results and performance metrics
Important: The number of tasks (12) should be divisible by the number of available GPUs. For multi-GPU setups, adjust the parallelization settings in run_all.sh accordingly.
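The even split of tasks across GPUs can be sketched as below. This is an illustrative snippet, not code from run_all.sh; NUM_GPUS is an assumed example value.

```shell
# Illustrative sketch of splitting 12 tasks across GPUs (not from run_all.sh)
NUM_TASKS=12
NUM_GPUS=4                                # assumption: adjust to your machine
TASKS_PER_GPU=$((NUM_TASKS / NUM_GPUS))   # must divide evenly

for gpu in $(seq 0 $((NUM_GPUS - 1))); do
  start=$((gpu * TASKS_PER_GPU))
  end=$((start + TASKS_PER_GPU - 1))
  # In a real run, each range would be launched with CUDA_VISIBLE_DEVICES=$gpu
  echo "GPU $gpu -> tasks $start-$end"
done
```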
Submodule Initialization Fails:
```bash
# Force update submodules
git submodule update --init --recursive --force
```

uv Installation Issues:

```bash
# Verify uv is in PATH
export PATH="$HOME/.cargo/bin:$PATH"
source ~/.bashrc  # or ~/.zshrc for zsh users
uv --version
```

- Feb 12, 2026: Reorganized codebase structure. Merged base model training and LoRA fine-tuning code into a unified pipeline. Please report any issues via GitHub Issues.
- Jan 28, 2026: Initial code release
This project builds upon excellent prior work in robot learning and simulation:
- RTC (Real-Time Chunking): Real-time action chunking methodology
- OpenPI: Open-source robot learning framework
- Kinetix: High-performance physics simulation platform
We are grateful to the authors and maintainers of these projects for their contributions to the research community.
If you find this work useful for your research, please consider citing our paper:
```bibtex
@misc{wang2026realtimerobotexecutionmasked,
      title={Real-Time Robot Execution with Masked Action Chunking},
      author={Haoxuan Wang and Gengyu Zhang and Yan Yan and Yuzhang Shang and Ramana Rao Kompella and Gaowen Liu},
      year={2026},
      eprint={2601.20130},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.20130},
}
```

This project is released under the MIT License.