The artefacts.yaml File
In order to run tests, you will need an artefacts.yaml file set up in the root of your project. The configuration in this file allows Artefacts to:
- Connect to the corresponding organization and project on the Artefacts dashboard.
- Provide details about a given job, including:
  - The job name
  - How to build the project (if using `run --in-container` or `run-remote`)
  - What framework and simulator the job requires
  - Metrics to collect
  - Parameters to use
  - What launch file to use
Example Configuration
Below is an example artefacts.yaml configuration file, taken from our nav2 example repo. Note that this configuration defines two jobs, named `basic` and `nav2`.
Each section will be explained in further detail on this page.
```yaml
version: 0.1.0
project: artefacts/navigation2-ignition-example
jobs:
  basic: # Only checks that things are loading
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
      settings:
        - name: bringup
          pytest_file: "src/sam_bot_nav2_gz/test/test_bringup.py" # when using pytest or ros2 launch_pytest
  nav2:
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
        metrics:
          - /odometry_error
          - /distance_from_start_gt
          - /distance_from_start_est
        params:
          launch/world: ["bookstore.sdf", "empty.sdf"]
      settings:
        - name: reach_goal
          pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py" # when using pytest or ros2 launch_pytest
        - name: follow_waypoints
          launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py" # when using ros2 launch_test
```
To briefly summarize:

The first job, `basic`:
- Will be built using the Dockerfile named `Dockerfile` at the root of the project repository
- Runs on ROS 2 Humble, using Gazebo (Ignition) Fortress as a simulator
- Will time out if the test(s) do not come to completion after 5 minutes
- Will upload anything in the `output` directory to the Artefacts Dashboard after test completion
- Will run one test ("scenario") using the `test_bringup.py` pytest file

The second job, `nav2`:
- Will be built using the Dockerfile named `Dockerfile` at the root of the project repository
- Runs on ROS 2 Humble, using Gazebo (Ignition) Fortress as a simulator
- Will time out if the test(s) do not come to completion after 5 minutes
- Will upload anything in the `output` directory to the Artefacts Dashboard after test completion
- Will display the three listed `metrics` in the Artefacts Dashboard
- Has two parameters (two different world files), which in this case will be used as ROS `launch_arguments`
- Will run a total of 4 tests across two scenarios: `reach_goal` runs twice using the `test_reach_goal.py` pytest file (once for each world listed in `params`), and `follow_waypoints` runs twice using the `test_follow_waypoints.launch.py` launch test file (again, once for each world)
Configuration Breakdown
- `version` (Optional): The artefacts.yaml format specification version.
- `project`: The name of the associated project. Needs to be in the format `<organization>/<project>`.
- `jobs`: A mapping of job names to job definitions. See Jobs.
Jobs
```yaml
jobs:
  <job_name>:
    type: test
    package:
      ...
    runtime:
      ...
    timeout: 5 # minutes
    scenarios:
      ...
```
Each Job has the following properties:
- The name of the job
- `type`: Defaults to `test`.
- `package` (Optional): Configures how to build the job if running in a container (`run --in-container` or `run-remote`). See Package.
- `runtime`: Contains runtime properties (the framework and simulator). See Runtime.
- `timeout` (Optional): Time before the job gets marked as `timed out`.
- `scenarios`: One job can contain multiple scenarios, usually a test suite linked to a particular environment. See Scenarios definition.
Package
- `custom`: Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details.
- `docker`: Can be used to provide a Dockerfile for Artefacts to use when building the test environment. See Packaging with Docker for details.
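For reference, a minimal sketch of the `docker` form, mirroring the example at the top of this page:

```yaml
package:
  docker:
    build:
      dockerfile: ./Dockerfile  # path to the Dockerfile, relative to the project root
```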
Runtime
Used to prepare and hook into the test environment:
- `framework`: Software framework. Supported values: `ros2:humble`, `ros2:galactic`, `ros2:jazzy`, `null` (experimental).
- `simulator`: Simulation engine. Supported values: `turtlesim`, `gazebo:fortress`, `gazebo:harmonic`.
Note
In many cases, the Artefacts CLI will still be compatible with a framework / simulator not listed above when running locally. However, when running in Artefacts cloud simulation, you must provide a `package` block, and either a set of custom commands or a Dockerfile.
Scenarios definition
Referring to the example from the top of the page:
```yaml
scenarios:
  defaults: # Global to all scenarios, and overridden in specific scenarios.
    output_dirs: ["output"]
    metrics:
      - /odometry_error
      - /distance_from_start_gt
      - /distance_from_start_est
    params:
      launch/world: ["bookstore.sdf", "empty.sdf"]
  settings:
    - name: reach_goal
      pytest_file: "src/sam_bot_nav2_gz/test/test_reach_goal.py"
    - name: follow_waypoints
      launch_test_file: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py"
```
- `defaults`: Contains default scenario settings common to all scenarios, unless overwritten by a scenario in `settings`. In the example, the `output_dirs`, `metrics`, and `params` configurations will be shared across both scenarios, `reach_goal` and `follow_waypoints`.
- `settings`: Contains a list of scenarios, with any configurations from `defaults` being overwritten. See Scenario below for the available settings.
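As an illustrative sketch (the scenario names and test file paths below are hypothetical), a scenario listed under `settings` can override a value from `defaults` for itself only:

```yaml
scenarios:
  defaults:
    output_dirs: ["output"]
  settings:
    - name: short_run  # hypothetical scenario; inherits output_dirs from defaults
      pytest_file: "test/test_short_run.py"
    - name: long_run   # hypothetical scenario
      output_dirs: ["output", "logs"]  # overrides the default for this scenario only
      pytest_file: "test/test_long_run.py"
```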
Scenario
- `name`: Name of the scenario.
- One of `pytest_file` / `launch_test_file` / `run`:
  - `pytest_file` (when using the `pytest` or ROS 2 `launch_pytest` testing framework): Path to your test file.
  - `launch_test_file` (when using the ROS 2 `launch_test` testing framework): Path to your test file (typically `xxx.launch.py`).
  - `run`: Command string used to start tests (executed via `subprocess.run(command, shell=True)`). Typically for power users.
Note
When using `run`, where possible please output your test results to a JUnit XML file named `tests_junit.xml`, saved in one of the `output_dirs`. Otherwise the Dashboard will not correctly display final test result statuses (pass / fail / error).
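As a sketch (the scenario name, command, and paths are hypothetical), a `run`-based scenario that writes its JUnit XML report into an `output_dirs` directory could look like:

```yaml
scenarios:
  defaults:
    output_dirs: ["output"]
  settings:
    - name: custom_runner  # hypothetical scenario name
      # the command string is executed via subprocess.run(command, shell=True)
      run: "python3 -m pytest tests/ --junitxml=output/tests_junit.xml"
```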
- `output_dirs` (Optional): List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (which can be created with Plotly and will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).
- `launch_arguments` (Optional, ROS only): Dictionary of `name: value` argument pairs to pass to the launch file. Typically used to configure execution behavior, such as whether to run headless or whether to record rosbags.
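A minimal sketch, assuming the launch file declares these arguments (the argument names here are hypothetical and depend on your launch file):

```yaml
launch_arguments:
  headless: "true"  # hypothetical argument: run the simulator without a GUI
  rosbag: "false"   # hypothetical argument: skip rosbag recording
```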
- `params`: List of parameters to set for the scenario. For each parameter, a single value or a list of values can be specified. Scenario variants will automatically be run for each combination of the parameters (grid strategy). All test results will be uploaded to the same Dashboard entry.
  - For the ROS 2 framework, parameter names must follow the convention `node_name/parameter_name` (delimited by a single forward slash). The parameters are made available through the environment variable `ARTEFACTS_SCENARIO_PARAMS_FILE`, and are also accessible to the artefacts toolkit; they can be used to control the behavior of nodes. Nested parameters are supported using dot notation (e.g. `node_name/parameter_name.subparameter_name`).
  - (Experimental) For the `null` framework, parameter names will be set as environment variables (make sure that parameter names contain only letters, numbers, and underscores).
Note
You can also list ROS launch arguments in the `params` section by replacing `<node_name>` with `launch`, and access them via the artefacts toolkit's `get_artefacts_param` helper function.
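Combining a launch parameter with a node parameter (the `launch/world` entry comes from the example at the top of the page; the node and parameter names below are hypothetical), the grid strategy would produce 2 × 2 = 4 scenario variants:

```yaml
params:
  launch/world: ["bookstore.sdf", "empty.sdf"]  # launch parameter (from the example above)
  controller_server/max_vel_x: [0.4, 0.6]       # hypothetical node_name/parameter_name entry
```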
- `metrics` (Optional): Specifies test metrics. Accepts a JSON file: its key-value pairs will be used as metric name / metric value. ROS projects can alternatively provide a list of topics; the latest value received on each topic during a run will be the logged value.
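As a sketch of the JSON-file form (the file contents and metric names here are hypothetical), each top-level key-value pair becomes a metric name and its value on the Dashboard:

```json
{
  "odometry_error": 0.12,
  "distance_from_start_gt": 3.4
}
```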