
🤗 x 🦾: Training ACT with LeRobot Notebook

Welcome to the LeRobot ACT training notebook! This notebook provides a ready-to-run setup for training imitation learning policies using the 🤗 LeRobot library.

In this example, we train an ACT policy using a dataset hosted on the Hugging Face Hub, and optionally track training metrics with Weights & Biases (wandb).

⚙️ Requirements

  • A Hugging Face dataset repo ID containing your training data (--dataset.repo_id=YOUR_USERNAME/YOUR_DATASET)

  • Optional: A wandb account if you want to enable training visualization

  • Recommended: GPU runtime (e.g., NVIDIA A100) for faster training
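If you want to confirm that the runtime actually exposes a GPU before starting, a minimal check like the sketch below works once PyTorch is available (it is pulled in by the LeRobot install later in this notebook).

import torch

# Quick sanity check: is a CUDA GPU visible to this runtime?
if torch.cuda.is_available():
    print("GPU detected:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; training will fall back to CPU and be much slower.")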

⏱️ Expected Training Time

Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU. On less powerful GPUs or CPUs, training may take significantly longer.

Example Output

Model checkpoints, logs, and training plots will be saved to the specified --output_dir. If wandb is enabled, progress will also be visualized in your wandb project dashboard.

Install conda

This cell uses condacolab to bootstrap a full Conda environment inside Google Colab.

!pip install -q condacolab
import condacolab
condacolab.install()

Install LeRobot

This cell clones the lerobot repository from Hugging Face, installs FFmpeg (version 7.1.1), and installs the package in editable mode.

!git clone https://github.com/huggingface/lerobot.git
!conda install ffmpeg=7.1.1 -c conda-forge
!cd lerobot && pip install -e .

Weights & Biases login

This cell logs you into Weights & Biases (wandb) to enable experiment tracking and logging.

!wandb login
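If the interactive prompt is inconvenient (for example in a scripted run), wandb also accepts an API key programmatically; the sketch below assumes the key is stored in the standard WANDB_API_KEY environment variable.

import os
import wandb

# Non-interactive alternative to `!wandb login`; falls back to the prompt if the variable is unset.
wandb.login(key=os.environ.get("WANDB_API_KEY"))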

Start training ACT with LeRobot

This cell runs the lerobot_train.py script from the lerobot repository to train a robot control policy.

Make sure to adjust the following arguments to your setup:

  1. --dataset.repo_id=YOUR_HF_USERNAME/YOUR_DATASET: Replace this with the Hugging Face Hub repo ID where your dataset is stored, e.g., pepijn223/il_gym0.

  2. --policy.type=act: Specifies the policy configuration to use. act refers to configuration_act.py, which will automatically adapt to your dataset’s setup (e.g., number of motors and cameras).

  3. --output_dir=outputs/train/...: Directory where training logs and model checkpoints will be saved.

  4. --job_name=...: A name for this training job, used for logging and Weights & Biases.

  5. --policy.device=cuda: Use cuda if training on an NVIDIA GPU. Use mps for Apple Silicon, or cpu if no GPU is available.

  6. --wandb.enable=true: Enables Weights & Biases for visualizing training progress. You must be logged in via wandb login before running this. Set it to false if you do not plan on using Weights & Biases (the example command below has it disabled).

!cd lerobot && python src/lerobot/scripts/lerobot_train.py \
  --dataset.repo_id=${HF_USER}/hf_act_record \
  --policy.type=act \
  --output_dir=outputs/train/hf_act_record0 \
  --job_name=hf_act_training_job \
  --policy.device=cuda \
  --wandb.enable=False \
  --policy.repo_id=${HF_USER}/hf_act_recordpolicy0
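Once training finishes, a quick way to see what was written is to walk the output directory. The sketch below assumes the --output_dir from the command above and the checkpoints/last/pretrained_model layout that the upload step further down relies on.

from pathlib import Path

# List everything the training run wrote under the output directory.
out_dir = Path("lerobot/outputs/train/hf_act_record0")
for path in sorted(out_dir.rglob("*")):
    if path.is_file():
        print(path.relative_to(out_dir))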

Log in to the Hugging Face Hub

Now that training is done, log in to the Hugging Face Hub and upload the last checkpoint. The cell below lists the files currently in your act-configs repository.

from huggingface_hub import HfApi

HF_USERNAME = "${HF_USER}"
HF_REPO_NAME = "act-configs"

api = HfApi()
repo_id = f"{HF_USERNAME}/{HF_REPO_NAME}"
files_in_repo = api.list_repo_files(repo_id=repo_id)

print(f"Files in {repo_id}:")
for file in files_in_repo:
    print(f"- {file}")
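If you prefer to stay in Python rather than use the hf upload CLI call shown further below, huggingface_hub can push the last checkpoint folder directly. The repo ID and local path below mirror the ones used elsewhere in this notebook, and the call assumes you are already authenticated (via hf auth login or an HF_TOKEN).

from huggingface_hub import HfApi

api = HfApi()
# Push the last checkpoint's pretrained_model folder to the Hub (Python equivalent of `hf upload`).
api.upload_folder(
    folder_path="/content/lerobot/outputs/train/hf_act_record0/checkpoints/last/pretrained_model",
    repo_id=f"{HF_USERNAME}/hf_act_record0",
    repo_type="model",
)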

Configure Hugging Face Token

Add your HF_TOKEN (stored as a Colab secret) to give Colab access to your Hugging Face repositories. This step is optional and may need modification if you are using another cloud provider.

from google.colab import userdata
import os

os.environ["HF_TOKEN"] = userdata.get("HF_TOKEN")

# Verify token is loaded (optional)
if os.getenv("HF_TOKEN"):
    print("Hugging Face token loaded successfully.")
else:
    print("Error: Hugging Face token not found. Please set it as a Colab secret named 'HF_TOKEN'.")
from google.colab import userdata
userdata.get('HF_TOKEN')
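With the token in the environment, huggingface_hub calls are authenticated automatically; if you prefer an explicit login, the equivalent programmatic call is sketched below.

import os
from huggingface_hub import login

# Explicit login using the token loaded from the Colab secret above.
login(token=os.environ["HF_TOKEN"])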

Download files from Hugging Face Hub into Colab

Now, you can use hf_hub_download to pull the files directly to your Colab environment. You can then download them from Colab to your local machine. This is optional.

from huggingface_hub import hf_hub_download

# Your Hugging Face repository details
HF_CONFIG_REPO_ID = "${HF_USER}/act-configs"  # Where train_config.json should be
HF_POLICY_REPO_ID = "${HF_USER}/hf_act_recordpolicy0"  # Where the trained model is

# Define the files to download
config_file_name = "train_config.json"
model_file_name = "model.safetensors"
tokenizer_processor_file_name = "tokenizer_processor.safetensors"

# Download train_config.json
train_config_path = hf_hub_download(repo_id=HF_CONFIG_REPO_ID, filename=config_file_name)
print(f"Downloaded {config_file_name} to: {train_config_path}")

# Download the trained model files
model_path = hf_hub_download(repo_id=HF_POLICY_REPO_ID, filename=model_file_name)
print(f"Downloaded {model_file_name} to: {model_path}")

tokenizer_processor_path = hf_hub_download(repo_id=HF_POLICY_REPO_ID, filename=tokenizer_processor_file_name)
print(f"Downloaded {tokenizer_processor_file_name} to: {tokenizer_processor_path}")

print("\nAll specified files have been downloaded to your Colab environment.")
print("You can find them in the paths printed above. To download them to your local machine, right-click on the files in the Colab file browser (left sidebar) and select 'Download'.")
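As an alternative to fetching files one by one, snapshot_download pulls an entire repository in a single call; the sketch below reuses the policy repo ID defined above.

from huggingface_hub import snapshot_download

# Download the whole policy repository at once instead of individual files.
local_dir = snapshot_download(repo_id=HF_POLICY_REPO_ID)
print(f"Policy repository downloaded to: {local_dir}")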

Verify that train_config.json exists locally. This is needed for restarting training and for testing the policy.

!ls -l /content/lerobot/outputs/train/hf_act_record0/
HF_USERNAME = "${HF_USER}" HF_REPO_NAME = "act-configs" !hf repo-files $HF_USERNAME/$HF_REPO_NAME
!hf auth login
!hf upload ${HF_USER}/hf_act_record0 \
  /content/lerobot/outputs/train/hf_act_record0/checkpoints/last/pretrained_model

Create a new repository on Hugging Face Hub

This command will create a new repository under your Hugging Face account.

HF_USERNAME = "${HF_USER}" HF_REPO_NAME = "act-configs" !hf repo create $HF_REPO_NAME --type model --private --organization $HF_USERNAME
Upload train_config.json to the new repository

HF_USERNAME = "${HF_USER}"
HF_REPO_NAME = "act-configs"
LOCAL_CONFIG_PATH = "/content/lerobot/outputs/train/hf_act_record0/train_config.json"
!hf upload $HF_USERNAME/$HF_REPO_NAME "$LOCAL_CONFIG_PATH" train_config.json