Example projects

More information about the example projects

When you sign up to Artefacts, you will be automatically added to the artefacts-demos organization, containing a number of example projects.

You can access the output of these projects at https://app.artefacts.com/artefacts-demos

Currently available examples:

1 - OpenRMF Example

Testing navigability in OpenRMF

Demo available here. Note: registration required.

Overview

OpenRMF is a framework for multi-robot interoperability in large buildings. One common configuration task in OpenRMF is to set up Places and navigation lanes for the robots in a building.

Here we run the OpenRMF demo on the office world.

As shown below, this map has 14 waypoints where a robot can go. It also comes with two “tinyRobot”s, each starting on its own charger.

Map of the office world.

The goal of this test demo is to send every robot to every waypoint, and assert that it arrives. Failure indicates that a robot is unable to reach a place on the map.

The test is parameterized to run through every robot and waypoint combination, resetting the simulation between runs, as sketched below.
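
A minimal sketch of what such a parameterized pytest test could look like; the robot and waypoint names match the office map, but the send_to_waypoint helper is a hypothetical stand-in for the project's actual task-dispatch code.

import itertools

import pytest

ROBOTS = ["tinyRobot1", "tinyRobot2"]
WAYPOINTS = ["patrol_A1", "patrol_B"]  # plus the remaining office waypoints


def send_to_waypoint(robot: str, waypoint: str, timeout_s: float) -> bool:
    """Hypothetical helper: dispatch an OpenRMF task and wait for arrival."""
    raise NotImplementedError("stand-in for the real task-dispatch code")


# One test case per (robot, waypoint) pair; the real suite also resets the
# simulation between runs.
@pytest.mark.parametrize("robot,waypoint", itertools.product(ROBOTS, WAYPOINTS))
def test_robot_reaches_waypoint(robot, waypoint):
    assert send_to_waypoint(robot, waypoint, timeout_s=120), (
        f"{robot} did not reach {waypoint} within 120 s"
    )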

Gif of the robots going everywhere

Quick analysis

“tinyRobot2” going to “patrol_A1”

“tinyRobot2” successfully reached “patrol_A1” in ~40 s. The video offers additional insight into how well the test executed.

“tinyRobot1” going to “patrol_B”

However, “tinyRobot1” could not reach “patrol_B” before the 120 s timeout. In the video, we can see a trashcan in the way, and the robot gets stuck on it. The distance graph also points towards the robot being stuck.

Data available after the tests

The tests record a few metrics:

  • Time to reach the goal (simtime)
  • Total distance traveled
  • Average speed

The tests also record the following data:

  • ROSbag.
  • Video of the active robot in Gazebo.
  • Graph of:
    • The XY position.
    • The distance to the goal.
    • The distance traveled over time.
  • Custom data dump from Python to record data not available in the ROSbag (a minimal sketch follows this list).
  • Debug log from the test files.
  • (Optional) stdout of the simulation and OpenRMF terminal.
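
As an illustration of the custom data dump above, the three metrics can be derived from timestamped XY positions and written as JSON with only the standard library. This is a minimal sketch; the metrics.json name and the output folder are assumptions, not fixed Artefacts conventions.

import json
import math
from pathlib import Path


def dump_metrics(trajectory, output_dir="output"):
    """trajectory: list of (sim_time_s, x, y) samples for the active robot."""
    # Total path length from consecutive position samples.
    distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(trajectory, trajectory[1:])
    )
    elapsed = trajectory[-1][0] - trajectory[0][0]
    metrics = {
        "time_to_goal_s": elapsed,  # simulation time
        "total_distance_m": distance,
        "average_speed_m_s": distance / elapsed if elapsed > 0 else 0.0,
    }
    out_dir = Path(output_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
    return metrics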

2 - RL Policy & Tron1

Testing different RL Policies using Tron1

Demo available here. Note: registration required.

Overview

The Tron1 robot is a multi-modal biped robot that can be used for humanoid RL research. Much of the software is open source and can be found here.

Here we run a movement test on the robot using two different RL policies: one using isaaclab, and another using isaacgym.

The test itself is relatively straightforward: we ask the robot to move forward 5 meters and rotate 150 degrees relative to its starting position.
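
The pass criterion can be pictured roughly as below. This is only a sketch; the pose format, tolerances, and function name are assumptions rather than the project's actual test code.

import math


def check_move_face(start_pose, end_pose, forward_m=5.0, dyaw_deg=150.0,
                    pos_tol_m=0.5, yaw_tol_deg=10.0):
    """Poses are (x, y, yaw_rad) tuples taken from ground truth (e.g. Gazebo)."""
    travelled = math.hypot(end_pose[0] - start_pose[0], end_pose[1] - start_pose[1])
    dyaw = math.degrees(end_pose[2] - start_pose[2])
    dyaw = (dyaw + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    assert abs(travelled - forward_m) <= pos_tol_m, "robot did not travel ~5 m"
    assert abs(abs(dyaw) - dyaw_deg) <= yaw_tol_deg, "robot did not rotate ~150 degrees"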

Parameterizing the Test

Using the artefacts.yaml file, we set up our test as follows:

policy_test:
    type: test
    runtime:
      framework: ros2:jazzy
      simulator: gazebo:harmonic
    scenarios:
      defaults:
        pytest_file: test/art_test_move.py
        output_dirs: ["test_report/latest/", "output"]
      settings:
        - name: move_face_with_different_policies
          params:
            rl_type: ["isaacgym", "isaaclab"]
            move_face:
              - {name: "dyaw_150+fw_5m", forward_m: 5.0, dyaw_deg: 150, hold: false, timeout_s: 35}

Key points from above:

  • The test is conducted using pytest (and so pytest_file points to our test file).
  • Artefacts will look in the two folders listed in output_dirs for files to upload to the dashboard.
  • We have two sets of parameters:
    1. rl_type: two values, “isaacgym” and “isaaclab”
    2. move_face: one parameter set specifying forward distance (5m), rotation angle (150 degrees), and timeout (35 seconds)

The test will run twice: once using the isaacgym policy, and again using the isaaclab policy. Both runs use the move_face parameters to determine how far the robot should move and how much it should rotate.
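
Purely as an illustration (and assuming the parameter lists combine as a Cartesian product, which matches the two runs described), the scenario expansion looks like this:

from itertools import product

rl_type = ["isaacgym", "isaaclab"]
move_face = [{"name": "dyaw_150+fw_5m", "forward_m": 5.0,
              "dyaw_deg": 150, "hold": False, "timeout_s": 35}]

# One run per (rl_type, move_face) combination.
for policy, move in product(rl_type, move_face):
    print(policy, move["name"])
# isaacgym dyaw_150+fw_5m
# isaaclab dyaw_150+fw_5m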

Quick analysis

isaacgym

When using the isaacgym policy, we can see (from a bird's-eye recording) the robot successfully rotating and then moving forwards:

The dashboard notes the test as a success:

And a CSV we created during the test, recording the ground-truth movement, is automatically converted into an easy-to-read chart by the dashboard.
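
For reference, a trajectory CSV like this can be produced with nothing more than the standard library; the column names and the output path below are assumptions about this project, not requirements of the dashboard.

import csv
from pathlib import Path


def write_trajectory_csv(samples, path="output/ground_truth.csv"):
    """samples: iterable of (t, x, y) ground-truth poses collected during the test."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "x", "y"])  # header row picked up by the chart
        writer.writerows(samples)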

isaaclab

With the isaaclab policy, we see there is still work to be done. The dashboard notes the test as a failure (and shows us the failing assertion), and the bird's-eye video shows the robot failing to reach its goal.

We also have a CSV (automatically converted to a chart) of the estimated trajectory (i.e. what the robot thinks it has done), which is wildly different from the ground truth:

Charts: estimated trajectory and ground truth.
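
One simple way to quantify the mismatch is to compare the final positions from the two CSVs; the file names and columns below carry the same assumptions as the sketch above.

import csv


def final_xy(path):
    """Return the last (x, y) row of a trajectory CSV with t, x, y columns."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return float(rows[-1]["x"]), float(rows[-1]["y"])


est_x, est_y = final_xy("output/estimated.csv")    # what the robot thinks it did
gt_x, gt_y = final_xy("output/ground_truth.csv")   # what actually happened in Gazebo
drift = ((est_x - gt_x) ** 2 + (est_y - gt_y) ** 2) ** 0.5
print(f"final-position drift: {drift:.2f} m")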

Data available after the tests

  • ROSbag
  • Video of the active robot in Gazebo, both bird's-eye and first-person
  • stdout and stderr
  • debug log
  • csv of the trajectory (estimated) automatically displayed as a graph in the dashboard
  • csv of the trajectory (ground truth) automatically displayed as a graph in the dashboard

Artefacts Toolkit Helpers

For this project, we used the following helpers from the Artefacts Toolkit: