Configuration Syntax

An artefacts.yaml file is required in your project in order to run tests.

The artefacts.yaml File

In order to run tests, you will need an artefacts.yaml file set up at the root of your project. The configuration in this file allows Artefacts to:

  • Connect to the corresponding organization and project on the Artefacts Dashboard.
  • Provide details about a given job, including:
    • The job name
    • How to build the project (if using run --in-container or run-remote)
    • Which framework and simulator the job requires
    • Metrics to collect
    • Parameters to use
    • Which launch file to use

Example Configuration

Below is an example artefacts.yaml configuration file taken from our nav2 example repo. Note that this configuration defines two jobs, named basic and nav2.

Each section will be explained in further detail on this page.

version: 0.1.0

project: artefacts/navigation2-ignition-example

jobs:

  basic: # Only checks that things are loading
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 #minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
      settings:
        - name: bringup
          pytest_file: "src/sam_bot_nav2_gz/test/test_bringup.py" # when using pytest or ros2 launch_pytest

  nav2:
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 #minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
        metrics:
          - /odometry_error
          - /distance_from_start_gt
          - /distance_from_start_est
        params:
          launch/world: ["bookstore.sdf", "empty.sdf"]
      settings:
        - name: reach_goal
          pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py" # when using pytest or ros2 launch_pytest
        - name: follow_waypoints
          launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py" # when using ros2 launch_test

To briefly summarize:

The first job basic:

  • Will be built using the Dockerfile named Dockerfile at the root of the project repository
  • Runs on ROS2 Humble, using Gazebo (Ignition) Fortress as the simulator
  • Will time out if the test(s) do not complete within 5 minutes
  • Will upload anything in the output directory to the Artefacts Dashboard after test completion
  • Will run one test (“scenario”), bringup, using the test_bringup.py pytest file

The second job nav2:

  • Will be built using the Dockerfile named Dockerfile at the root of the project repository
  • Runs on ROS2 Humble, using Gazebo (Ignition) Fortress as the simulator
  • Will time out if the test(s) do not complete within 5 minutes
  • Will upload anything in the output directory to the Artefacts Dashboard after test completion
  • Will display the three listed metrics in the Artefacts Dashboard
  • Has one parameter, launch/world, with two values (two different world files), which in this case will be used as ROS launch_arguments
  • Will run a total of 4 tests across two scenarios: reach_goal runs twice using the test_reach_goal.py pytest file (once per world listed in params), and follow_waypoints runs twice using the test_follow_waypoints.launch.py launch test file (again, once per world)

Configuration Breakdown

  • version Optional The artefacts.yaml format specification version.
  • project The name of the associated project. Must be in the format <organization>/<project>.
  • jobs A mapping of job names to job definitions. See Jobs
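
A minimal top-level skeleton, mirroring the example above, ties these three keys together:

version: 0.1.0
project: <organization>/<project>
jobs:
  <job_name>:
    ... # job definition, see Jobs below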

Jobs

jobs:

  <job_name>:
    type: test
    package:
      ...
    runtime:
      ...
    timeout: 5 #minutes
    scenarios:
      ...

Each Job has the following properties:

  • <job_name> The name of the job
  • type Defaults to test
  • package Optional Use when configuring how to build the job if running in a container (run --in-container or run-remote). See Package
  • runtime Contains runtime properties (the framework and simulator). See Runtime
  • timeout Optional Time, in minutes, before the job is marked as timed out
  • scenarios One job can contain multiple scenarios, usually a test suite linked to a particular environment. See Scenarios definition

Package

  • custom Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details

  • docker Can be used to provide a dockerfile for artefacts to use when building the test environment. See Packaging with Docker for details
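
For instance, the docker option as used by both jobs in the example at the top of this page:

package:
  docker:
    build:
      dockerfile: ./Dockerfile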

Runtime

Used to prepare and hook into the test environment.

  • framework Software framework. Supported values: ros2:humble, ros2:galactic, ros2:jazzy, null (experimental)

  • simulator Simulation engine. Supported values: turtlesim, gazebo:fortress, gazebo:harmonic
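
For instance, the runtime block shared by both jobs in the example above:

runtime:
  simulator: gazebo:fortress
  framework: ros2:humble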

Scenarios definition

Referring to the example from the top of the page:

scenarios:
  defaults: # Global to all scenarios, and overridden in specific scenarios.
    output_dirs: ["output"]
    metrics:
      - /odometry_error
      - /distance_from_start_gt
      - /distance_from_start_est
    params:
      launch/world: ["bookstore.sdf", "empty.sdf"]
  settings:
    - name: reach_goal
      pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py"
    - name: follow_waypoints
      launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py"

  • defaults Contains default scenario settings common to all scenarios, unless overridden by a scenario in settings. In the example, the output_dirs, metrics, and params configurations are shared by both the reach_goal and follow_waypoints scenarios

  • settings Contains the list of scenarios; each scenario can override any of the configurations inherited from defaults. See Scenario below for the settings available.

Scenario

  • name Name of the scenario

  • One of pytest_file / launch_test_file / run

    • pytest_file (When using pytest or ROS2 launch_pytest testing framework): Path to your test file.
    • launch_test_file (When using ROS2 launch_test testing framework): Path to your test file (typically xxx.launch.py)
    • run Command string used to start tests (executed via subprocess.run(command, shell=True)). Typically for power users.
  • output_dirs Optional List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (which can be created with plotly and will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).

  • launch_arguments Optional ROS only. Dictionary of name: value argument pairs to pass to the launch file. Typically used to configure execution behavior, such as whether to run headless or whether to record rosbags.

  • params List of parameters to set for the scenario. For each parameter, either a single value or a list of values can be specified. Scenario variants will automatically be run for every combination of parameter values (grid strategy), and all test results will be uploaded to the same Dashboard entry. See the sketch at the end of this page.

    • For the ROS2 framework, parameter names must follow the convention node_name/parameter_name (delimited by a single forward slash). The parameters are made available through the environment variable ARTEFACTS_SCENARIO_PARAMS_FILE, as well as through the artefacts toolkit, and can be used to control the behavior of nodes. Nested parameters are supported using dot notation (e.g. node_name/parameter_name.subparameter_name).

    • (experimental) For the null framework, parameter names will be set as environment variables (make sure that parameter names contain only letters, numbers, and underscores).

  • metrics Optional Specifies test metrics. Accepts a JSON file: its key-value pairs will be used as metric_name/metric_value. ROS projects can alternatively provide a list of topics; the latest value on each topic during a run will be the logged value.
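
To illustrate how these scenario settings fit together, here is a sketch (the scenario names, file paths, topic, and parameter names below are hypothetical, not taken from the nav2 example):

scenarios:
  settings:
    - name: patrol                        # hypothetical scenario name
      pytest_file: "test/test_patrol.py"  # hypothetical path to a pytest file
      output_dirs: ["output"]             # uploaded to the Dashboard after the run
      launch_arguments:                   # ROS only; passed to the launch file
        headless: "true"
      params:
        controller/max_speed: [0.5, 1.0]  # node_name/parameter_name; one variant per value (grid strategy)
        controller/gains.kp: 2.0          # nested parameter via dot notation
      metrics:
        - /odometry_error                 # ROS topic; the latest value during the run is logged
    - name: smoke
      run: "python3 test/smoke.py"        # hypothetical command, executed via subprocess.run(command, shell=True)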