Configuration Guides
Artefacts YAML Configuration: How to configure your projects for use with Artefacts.
To run tests, you need an artefacts.yaml file set up at the root of your project. The configuration in this file tells Artefacts how to build and run your tests, including when running in a container (run --in-container or run-remote).

Below is an example artefacts.yaml configuration file taken from our nav2 example repo. Note that this configuration defines two jobs, named basic and nav2.
Each section will be explained in further detail on this page.
```yaml
version: 0.1.0
project: artefacts/navigation2-ignition-example
jobs:
  basic: # Only checks that things are loading
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
      settings:
        - name: bringup
          pytest_file: "src/sam_bot_nav2_gz/test/test_bringup.py" # when using pytest or ros2 launch_pytest
  nav2:
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
        metrics:
          - /odometry_error
          - /distance_from_start_gt
          - /distance_from_start_est
        params:
          launch/world: ["bookstore.sdf", "empty.sdf"]
      settings:
        - name: reach_goal
          pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py" # when using pytest or ros2 launch_pytest
        - name: follow_waypoints
          launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py" # when using ros2 launch_test
```
To briefly summarize, the first job, basic:

- builds the test environment from the Dockerfile at the root of the project repository
- uploads the contents of the output directory to the Artefacts Dashboard after test completion
- runs a single scenario, bringup, defined in the test_bringup.py pytest file

The second job, nav2:

- builds the test environment from the Dockerfile at the root of the project repository
- uploads the contents of the output directory to the Artefacts Dashboard after test completion
- logs metrics in the Artefacts Dashboard
- runs a scenario variant for each parameter combination: reach_goal runs twice using the test_reach_goal test file (once for each world listed in params), and follow_waypoints runs twice using the test_follow_waypoints launch test file (again, once for each world)

The top-level properties are:

version Optional The artefacts.yaml format specification version.

project The name of the associated project. Needs to be in the format <organization>/<project>.

jobs A mapping of job names to Job definitions, see Job:

```yaml
jobs:
  <job_name>:
    type: test
    package:
      ...
    runtime:
      ...
    timeout: 5 # minutes
    scenarios:
      ...
```
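As an illustrative check of the rules above, a short script can verify the top-level keys. This is a sketch only: check_top_level is a hypothetical helper, not part of the Artefacts client, and the official schema validation may differ.

```python
import re

def check_top_level(config):
    """Illustrative sanity checks for top-level artefacts.yaml keys.

    Hypothetical helper mirroring the rules above; not the official schema.
    """
    problems = []
    # project must look like <organization>/<project>
    if not re.fullmatch(r"[^/]+/[^/]+", config.get("project", "")):
        problems.append("project must be in the format <organization>/<project>")
    # jobs must be a non-empty mapping of job names to Job definitions
    jobs = config.get("jobs")
    if not isinstance(jobs, dict) or not jobs:
        problems.append("jobs must be a mapping of job names to Job definitions")
    # version is optional, so only flag it when present with the wrong type
    if "version" in config and not isinstance(config["version"], str):
        problems.append("version should be a string such as '0.1.0'")
    return problems

# Mirrors the example configuration at the top of the page
config = {
    "version": "0.1.0",
    "project": "artefacts/navigation2-ignition-example",
    "jobs": {"basic": {"type": "test"}, "nav2": {"type": "test"}},
}
print(check_top_level(config))  # []
```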
Each Job has the following properties:
type Defaults to test

package Optional Use when configuring how to build the job if running in a container (run --in-container or run-remote). See Package

runtime Contains runtime properties (the framework and simulator). See Runtime

timeout Optional Time before the job gets marked as timed out

scenarios One job can contain multiple scenarios, usually a test suite linked to a particular environment, see Scenario

Package

Defined with a package block containing either a set of custom commands or a dockerfile:

custom Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details

docker Can be used to provide a dockerfile for artefacts to use when building the test environment. See Packaging with Docker for details

Runtime

Used to prepare and hook into the test environment:

framework Software framework. Supported values: ros2:humble, ros2:galactic, ros2:jazzy, null (experimental)

simulator Simulation engine. Supported values: turtlesim, gazebo:fortress, gazebo:harmonic
Scenarios

Referring to the example from the top of the page:
```yaml
scenarios:
  defaults: # Global to all scenarios, and overridden in specific scenarios.
    output_dirs: ["output"]
    metrics:
      - /odometry_error
      - /distance_from_start_gt
      - /distance_from_start_est
    params:
      launch/world: ["bookstore.sdf", "empty.sdf"]
  settings:
    - name: reach_goal
      pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py"
    - name: follow_waypoints
      launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py"
```
defaults Contains default scenario settings common to all scenarios unless overwritten by a scenario in settings. In the example, the output_dirs, metrics, and params configurations are shared across both scenarios, reach_goal and follow_waypoints.
settings Contains a list of scenarios; any configurations from defaults can be overwritten per scenario. See Scenario below for the available settings.
Scenario

name Name of the scenario.

One of pytest_file / launch_test_file / run:

pytest_file (when using the pytest or ROS2 launch_pytest testing framework): Path to your test file.

launch_test_file (when using the ROS2 launch_test testing framework): Path to your test file (typically xxx.launch.py).

run: Command string used to start tests (executed via subprocess.run(command, shell=True)). Typically for power users. When using run, where possible please output your test results to a JUnit XML file named tests_junit.xml saved in one of the output_dirs; otherwise the dashboard will not correctly display final test result statuses (pass / fail / error).
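When relying on run, the JUnit XML file can be produced with the Python standard library. A minimal sketch: write_junit_report and its pass/fail mapping are hypothetical, but the tests_junit.xml file name and its location inside an output directory follow the requirement above.

```python
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def write_junit_report(results, out_dir):
    """Write a minimal JUnit XML report named tests_junit.xml into out_dir.

    results maps test names to None (pass) or a failure message string.
    Hypothetical helper; only the file name and location follow the docs.
    """
    failures = sum(1 for msg in results.values() if msg is not None)
    suite = ET.Element(
        "testsuite",
        name="scenario_tests",
        tests=str(len(results)),
        failures=str(failures),
    )
    for name, msg in results.items():
        case = ET.SubElement(suite, "testcase", name=name)
        if msg is not None:
            ET.SubElement(case, "failure", message=msg)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / "tests_junit.xml"  # the file name the dashboard expects
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)
    return path

# Example: one passing and one failing test, written to a scratch directory
report = write_junit_report(
    {"reach_goal": None, "follow_waypoints": "goal not reached"},
    tempfile.mkdtemp(),
)
print(report.name)  # tests_junit.xml
```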
output_dirs Optional List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (these can be created with plotly and will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).
launch_arguments Optional ROS only. Dictionary of name: value argument pairs to pass to the launch file. Typically used to configure execution behavior, such as whether to run headless or whether to record rosbags.
params List of parameters to set for the scenario. Each parameter can be given a single value or a list of values. A scenario variant will automatically be run for each combination of the parameter values (grid strategy). All test results will be uploaded in the same Dashboard entry.
For the ROS2 framework, parameter names must follow the convention node_name/parameter_name (delimited by a single forward slash). The parameters are made available through the environment variable ARTEFACTS_SCENARIO_PARAMS_FILE, as well as being accessible to the artefacts toolkit, and can be used to control the behavior of nodes. Nested parameters are supported using dot notation (e.g. node_name/parameter_name.subparameter_name).
(experimental) For the null framework, parameter names will be set as environment variables (make sure that parameter names contain only letters, numbers, and underscores).
Launch arguments can also be set through the params section by using launch in place of <node_name> (as in launch/world in the example) and accessing the value via the artefacts toolkit's get_artefacts_param helper function.
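The grid strategy can be pictured in a few lines of Python. In this sketch the controller/max_speed parameter is hypothetical, and JSON is assumed for the params file format purely for illustration; inspect the file pointed to by ARTEFACTS_SCENARIO_PARAMS_FILE, or use the artefacts toolkit helpers, for the real format.

```python
import itertools
import json
import os
import tempfile

# Grid strategy: every parameter with a list of values multiplies
# the number of scenario variants.
params = {
    "launch/world": ["bookstore.sdf", "empty.sdf"],
    "controller/max_speed": [0.5],  # hypothetical single-valued parameter
}

# Normalize single values to one-element lists, then take the product.
grids = {k: v if isinstance(v, list) else [v] for k, v in params.items()}
variants = [dict(zip(grids, combo)) for combo in itertools.product(*grids.values())]
print(len(variants))  # 2 variants: one per world

# For each variant, the client exposes the chosen values to the test
# process via a file. JSON is an assumption for this sketch.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "scenario_params.json")
    with open(path, "w") as f:
        json.dump(variants[0], f)
    os.environ["ARTEFACTS_SCENARIO_PARAMS_FILE"] = path

    # Inside a test, read the current variant's parameters back:
    with open(os.environ["ARTEFACTS_SCENARIO_PARAMS_FILE"]) as f:
        current = json.load(f)
    print(current["launch/world"])  # bookstore.sdf
```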
metrics Optional To specify test metrics. Accepts a JSON file: the key-value pairs will be used as metric_name/metric_value. ROS projects can alternatively provide a list of topics; the latest value on each topic during the run will be the logged value.
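For file-based metrics, the test itself writes the JSON. A minimal sketch: the save_metrics helper and the metrics.json file name are hypothetical; the requirement from above is simply a flat JSON object of metric_name/metric_value pairs at the path the metrics entry points to.

```python
import json
import tempfile
from pathlib import Path

def save_metrics(metrics, out_dir):
    """Write flat metric_name -> metric_value pairs to a JSON file.

    Hypothetical helper: the artefacts.yaml metrics entry would point at
    the resulting file so the key/value pairs become dashboard metrics.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / "metrics.json"  # hypothetical file name
    path.write_text(json.dumps(metrics, indent=2))
    return path

metrics_path = save_metrics(
    {"odometry_error": 0.042, "distance_from_start_gt": 1.8},
    tempfile.mkdtemp(),
)
print(json.loads(metrics_path.read_text())["odometry_error"])  # 0.042
```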