Reinforcement learning
RL Examples
This document provides a simple example of using reinforcement learning to control PND humanoid robots. It introduces how to train the robot's control policy on the Isaac Gym simulation platform.
Note
If you are using the URDF from the wiki, make sure to change the toe_left and toe_right collision elements to the following values:

collision name="toe_"
origin rpy="1.57 0 0" xyz="0 0 0"

In the RL code, find _config and change
flip_visual_attachments = False
to flip_visual_attachments = True
and change
replace_cylinder_with_capsule = False
to replace_cylinder_with_capsule = True
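As a sketch, the two flags above typically live in the asset section of a legged_gym-style robot config class. The class name below is a placeholder; the attribute layout follows the common legged_gym convention and may differ in the actual PND example code.

```python
# Hypothetical asset config sketch in the legged_gym style; the class
# name PndRobotCfgAsset is a placeholder for the _config in the RL code.
class PndRobotCfgAsset:
    # The wiki URDF's visual meshes load rotated in Isaac Gym, so flip them.
    flip_visual_attachments = True        # was False
    # Capsule collision shapes are more stable than cylinders in Isaac Gym.
    replace_cylinder_with_capsule = True  # was False

# Both flags must read True after the edit.
print(PndRobotCfgAsset.flip_visual_attachments,
      PndRobotCfgAsset.replace_cylinder_with_capsule)
```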
Download address
Hardware preparation
Since the Isaac Gym simulation platform requires CUDA, we recommend an NVIDIA RTX-series graphics card with more than 8 GB of video memory, with the corresponding graphics driver installed.
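Before installing anything, a quick way to confirm the graphics card and driver are visible is to run nvidia-smi, which ships with the NVIDIA driver:

```shell
# Lists the GPU model, driver version, and supported CUDA version.
# If this command fails, install the NVIDIA driver first.
nvidia-smi
```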
Environment preparation
It is recommended to configure the environment in a conda virtual environment.
- Create a virtual environment
- Activate the virtual environment
- Install CUDA and PyTorch
- Download the Isaac Gym Preview 3 simulation platform. After decompressing it, enter the python directory and install it using pip.
- Install the rsl_rl library
- Install the official PND example
- Install the remaining dependencies
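The steps above can be sketched as the following commands. The environment name, Python and PyTorch versions, and archive paths are assumptions; use the packages from the wiki's download address, and install the PND example the same way as rsl_rl.

```shell
# Sketch of the environment setup; names and versions are assumptions.
conda create -n pnd_rl python=3.8 -y   # create the virtual environment
conda activate pnd_rl                  # activate it

# Install PyTorch with CUDA support (match the CUDA version to your driver).
pip install torch

# Isaac Gym Preview 3: unpack the downloaded archive, then install
# the package found in its python directory.
cd isaacgym/python
pip install -e .
cd -

# Install the rsl_rl library in editable mode from its source checkout.
pip install -e ./rsl_rl

# Install the official PND example and any remaining dependencies
# from the wiki's download address in the same way.
```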
Model training and usage
- Activate the reinforcement learning virtual environment and switch to the legged_gym/scripts directory.
- Execute the training command to start training. Close the visualization window by adding --headless.
- Run the test command.
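In the legged_gym convention, training and testing are run from the scripts directory with train.py and play.py. The task name below is a placeholder; substitute the one defined by the PND example.

```shell
# Hypothetical train/test invocations; --task name is a placeholder.
cd legged_gym/scripts

# Start training; --headless disables the visualization window.
python train.py --task=pnd_humanoid --headless

# Test (play back) the trained policy with the viewer enabled.
python play.py --task=pnd_humanoid
```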