An `artefacts.yaml` file is required in your project in order to run tests.
Configuration Guides
- Artefacts YAML Configuration - How to configure your projects for use with Artefacts.
- Deprecated Configuration Syntax - Legacy syntax for ROS1 projects
In order to run tests, you will need an `artefacts.yaml` file set up in the root of your project. The configuration made in this file tells Artefacts how to run your tests, including how to package the project when running in a container (`run --in-container` or `run-remote`).

The below is an example `artefacts.yaml` configuration file taken from our nav2 example repo. Note that with this configuration there are two jobs, named `basic` and `nav2` respectively.
Each section will be explained in further detail on this page.
```yaml
version: 0.1.0
project: artefacts/navigation2-ignition-example
jobs:
  basic: # Only checks that things are loading
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
      settings:
        - name: bringup
          ros_testfile: "src/sam_bot_nav2_gz/test/test_bringup.launch.py"
  nav2:
    type: test
    package:
      docker:
        build:
          dockerfile: ./Dockerfile
    runtime:
      simulator: gazebo:fortress
      framework: ros2:humble
    timeout: 5 # minutes
    scenarios:
      defaults: # Global to all scenarios, and overridden in specific scenarios.
        output_dirs: ["output"]
        metrics:
          - /odometry_error
          - /distance_from_start_gt
          - /distance_from_start_est
        params:
          launch/world: ["bookstore.sdf", "empty.sdf"]
      settings:
        - name: reach_goal
          ros_testfile: "src/sam_bot_nav2_gz/test/test_reach_goal.launch.py"
        - name: follow_waypoints
          ros_testfile: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py"
```
To briefly summarize:

The first job, `basic`:

- builds the test environment from the `Dockerfile` at the root of the project repository
- uploads the contents of the `output` directory to the Artefacts Dashboard after test completion
- runs the `test_bringup.launch.py` launch file.

The second job, `nav2`:

- builds the test environment from the `Dockerfile` at the root of the project repository
- uploads the contents of the `output` directory to the Artefacts Dashboard after test completion
- logs the listed topics as `metrics` in the Artefacts Dashboard
- runs each scenario once per parameter value: `reach_goal` will run twice using the `test_reach_goal` launch file (once for each world listed in `params`), and `follow_waypoints` twice using the `test_follow_waypoints` launch file (again once for each world).

The top-level keys are:

- `version`: Optional. The artefacts YAML format specification version.
- `project`: The name of the associated project. Needs to be in the format `<organization>/<project>`.
- `jobs`: A mapping of job names to `Job` definitions; see Job.

```yaml
jobs:
  <job_name>:
    type: test
    package:
      ...
    runtime:
      ...
    timeout: 5 # minutes
    scenarios:
      ...
```
Each Job has the following properties:

- `type`: Defaults to `test`.
- `package`: Optional. Use when configuring how to build the job if running in a container (`run --in-container` or `run-remote`). See Package.
- `runtime`: Contains runtime properties (the framework and simulator). See Runtime.
- `timeout`: Optional. Time before the job gets marked as timed out.
- `scenarios`: One job can contain multiple scenarios, usually a test suite linked to a particular environment; see Scenario definition.

The `package` block accepts:

- `custom`: Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details.
- `docker`: Can be used to provide a Dockerfile for Artefacts to use when building the test environment. See Packaging with Docker for details.
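As a sketch, the Dockerfile variant (the same one used in the example at the top of the page) looks like this inside a job:

```yaml
package:
  docker:
    build:
      dockerfile: ./Dockerfile
```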
The `runtime` block is used to prepare and hook into the test environment:

- `framework`: Software framework. Supported values: `ros2:humble`, `ros2:galactic`, `ros2:jazzy`, `null` (experimental).
- `simulator`: Simulation engine. Supported values: `turtlesim`, `gazebo:fortress`, `gazebo:harmonic`.

Note: In many cases, the artefacts CLI will still be compatible with a framework / simulator not listed above when running locally. However, when running in Artefacts cloud simulation, you must provide a `package` block, and either a set of custom commands, or a dockerfile.
Referring to the example from the top of the page:
```yaml
scenarios:
  defaults: # Global to all scenarios, and overridden in specific scenarios.
    output_dirs: ["output"]
    metrics:
      - /odometry_error
      - /distance_from_start_gt
      - /distance_from_start_est
    params:
      launch/world: ["bookstore.sdf", "empty.sdf"]
  settings:
    - name: reach_goal
      ros_testfile: "src/sam_bot_nav2_gz/test/test_reach_goal.launch.py"
    - name: follow_waypoints
      ros_testfile: "src/sam_bot_nav2_gz/test/test_follow_waypoints.launch.py"
```
- `defaults`: Contains default scenario settings common to all scenarios unless overwritten by a scenario in `settings`. In the example, the `output_dirs`, `metrics`, and `params` configurations will be shared across both scenarios `reach_goal` and `follow_waypoints`.
- `settings`: Contains a list of scenarios, with any configurations from `defaults` being overwritten. See Scenario below for the settings available.
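To sketch the override behavior (the scenario names and paths here are hypothetical), a scenario in `settings` replaces any default it redeclares:

```yaml
scenarios:
  defaults:
    output_dirs: ["output"]
  settings:
    - name: quick_check # inherits output_dirs: ["output"]
      ros_testfile: "test/quick_check.launch.py"
    - name: full_run # overrides the default output_dirs
      ros_testfile: "test/full_run.launch.py"
      output_dirs: ["output", "plots"]
```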
- `name`: Name of the scenario.
- `output_dirs`: Optional. List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (can be created with plotly; they will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).
- `launch_arguments`: Optional, ROS only. Dictionary of `name: value` argument pairs to pass to the launch file. Typically used to configure execution behavior, like whether to run headless or not, whether to record rosbags…
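For instance (the argument names here are hypothetical and depend on what your launch file declares):

```yaml
settings:
  - name: reach_goal
    ros_testfile: "src/sam_bot_nav2_gz/test/test_reach_goal.launch.py"
    launch_arguments:
      headless: "true"
      record_bag: "false"
```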
- `params`: List of parameters to set for the scenario. For each parameter, a list of values or a single value can be specified. Scenario variants will automatically be run for each combination of the parameters (grid strategy). All test results will be uploaded in the same Dashboard entry.
  For the ROS2 framework, parameter names must follow the convention `node_name/parameter_name` (delimited by a single forward slash). They are made available through the environment variable `ARTEFACTS_SCENARIO_PARAMS_FILE`, as well as being accessible to the artefacts toolkit. They can be used to control the behavior of nodes. Nested parameters are supported using the dot notation (e.g. `node_name/parameter_name.subparameter_name`).
  (experimental) For the 'null' framework, parameter names will be set as environment variables (make sure that parameter names are only letters, numbers and underscores).
  Launch arguments can also be parametrized, by using `launch` in place of `<node_name>` in the `params` section and accessing the value via the artefacts toolkit's `get_artefacts_param` helper function.
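For example, the following grid (the node and parameter names are hypothetical) produces four scenario variants, one per combination of the two lists:

```yaml
params:
  controller_server/desired_linear_vel: [0.3, 0.5]
  launch/world: ["bookstore.sdf", "empty.sdf"]
```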
- `metrics`: Optional. To specify test metrics. Accepts a JSON file: the key-value pairs will be used as metric_name/metric_value. ROS projects can alternatively accept a list of topics; the latest value on each topic during a run will be the logged value.

For `framework: ros2:*`:
- `ros_testfile`: For ROS2: path to the launch_test file, typically a `.py` file.

For `framework: null` (experimental):

- `run`: Command string used to start tests (executed via `subprocess.run(command, shell=True)`).
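A minimal sketch of a `null`-framework job (the job name and test command are hypothetical):

```yaml
jobs:
  smoke:
    type: test
    runtime:
      framework: null
    scenarios:
      settings:
        - name: unit_suite
          run: "python -m pytest tests/"
```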
By default the file is expected to be named `artefacts.yaml`.

Deprecated Configuration Syntax

Legacy syntax for ROS1 projects.
- `version`: Optional. The artefacts YAML format specification version.
- `project`: The name of the associated project. Used for result tracking and authentication.
- `jobs`: A mapping of job names to `Job` definitions; see Job.

Each Job has the following properties:

- `type`: Defaults to `test`.
- `timeout`: Optional. Time before the job gets marked as timed out.
- `runtime`: Contains runtime properties; see Runtime.
- `scenarios`: One job can contain multiple scenarios, usually a test suite linked to a particular environment; see Scenario definition.

The scenario definition contains:

- `defaults`: Contains default scenario settings, common to all scenarios unless overwritten by a scenario.
- `scenarios`: Contains a list of `Scenario`; see Scenario.
Only for Artefacts Cloud Simulation:

- `custom`: Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details.
- `docker`: Can be used to provide a Dockerfile for Artefacts to use when building the test environment. See Packaging with Docker for details.
The `runtime` block is used to prepare and hook into the test environment:

- `framework`: Software framework. Supported values: `ros1:noetic`, `ros2:humble`, `ros2:galactic`, `null`.
- `simulator`: Simulation engine. Supported values: `turtlesim`, `gazebo:fortress`, `gazebo:11`.

Note: In many cases, the artefacts CLI will still be compatible with a framework / simulator not listed above when running locally. However, when running in Artefacts cloud simulation, you must provide a `package` block, and either a set of custom commands, or a dockerfile.
- `pre_launch`: Optional (and currently only implemented for `framework: ros1:noetic`). Bash command to be executed before launching each test. Use it to perform any setup that needs to be completed before the simulator and tests are launched. Note: if an absolute path to a script is needed, the environment variable `USER_REPO_PATH` will point to the root of your repository. Example `pre_launch` command: `source $USER_REPO_PATH/simulator/scripts/sim_setup.bash`.
- `params`: Optional (and currently only implemented for `framework: ros1:noetic`). List of parameters that will be dumped to a .yaml file (`/tmp/runtime_params.yaml`). At runtime, this file can be read by user scripts, such as those specified in the `pre_launch` key. Example use case: parametrize the simulator setup script.
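A sketch combining the two keys (the setup script path is the example from above; the parameter name is hypothetical, and this assumes both keys sit under `runtime`):

```yaml
runtime:
  framework: ros1:noetic
  simulator: gazebo:11
  pre_launch: "source $USER_REPO_PATH/simulator/scripts/sim_setup.bash"
  params:
    sim/gui: false
```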
- `name`: Name of the scenario.
- `output_dirs`: Optional. List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (can be created with plotly; they will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).
- `launch_arguments`: Optional, ROS only. Dictionary of `name: value` argument pairs to pass to the launch file. Typically used to configure execution behavior, like whether to run headless or not, whether to record rosbags…
- `params`: List of parameters to set for the scenario. For each parameter, a list of values or a single value can be specified. Scenario variants will automatically be run for each combination of the parameters (grid strategy). All test results will be uploaded in the same Dashboard entry.
  For the ROS1 framework, parameter names can include nested namespaces (e.g. `test/nested_ns/param1`); each forward slash corresponds to a nested namespace. They will be dumped into a yaml file (`/tmp/scenario_params.yaml`) and loaded at runtime during the test to the ROS1 rosparam server. They can be used to control the behavior of nodes.
  For the ROS2 framework, parameter names must follow the convention `node_name/parameter_name` (delimited by a single forward slash). They will be formatted into a yaml file (`/tmp/scenario_params.yaml`) and loaded at runtime during the test (reference). They can be used to control the behavior of nodes.
  For the `null` framework, parameters will be dumped into a yaml file (`/tmp/scenario_params.yaml`), dumped into a .json file (`/tmp/scenario_params.json`) and set as environment variables (make sure that parameter names are only letters, numbers and underscores).
- `metrics`: Optional. To specify test metrics. Accepts a JSON file: the key-value pairs will be used as metric_name/metric_value. ROS projects can alternatively accept a list of topics; the latest value on each topic during a run will be the logged value.
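For instance, topic-based metrics for a ROS project (topic names taken from the example at the top of the page); the file-based form shown in the comment is an assumption about the path:

```yaml
# Topic-based (ROS): log the latest value seen on each topic
metrics:
  - /odometry_error
  - /distance_from_start_gt

# File-based: key/value pairs from a JSON file written by the test
# metrics: "output/metrics.json"
```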
For `framework: ros1:noetic`:

- `ros_testpackage`: Name of the ROS package that holds the test files. (Not implemented yet for ROS2.)
- `ros_testfile`: For ROS1: name of the XML launch file within `ros_testpackage/launch` that specifies the user tech stack (a collection of arbitrary ROS nodes) plus the test node containing the logic for the tests (rostest compatible). The extension must be `.launch` or `.test`.
- `rosbag_postprocess`: Optional, currently only implemented for ROS1. Name of the script within `ros_testpackage/src` that specifies any additional computation to be performed after the test is finished. The extension is usually .py. This script must take two arguments: `--bag_path`, the rosbag created during the test, and `--out_folder`, the path to save all outputs created by the script. Artefacts will then upload every file in this folder to the Dashboard. Supported file formats are the same as the ones for `output_dirs`. Additionally, if the rosbag_postprocess script outputs a `metrics.json` file with key/value pairs, they will also be rendered as a table in the Dashboard.
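A sketch of a ROS1 scenario using post-processing (the package, launch file, and script names are hypothetical):

```yaml
settings:
  - name: nav_regression
    ros_testpackage: my_robot_tests
    ros_testfile: nav_regression.test
    rosbag_postprocess: compute_metrics.py
```

At runtime, the script would be invoked with `--bag_path` and `--out_folder` as described above.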
- `subscriptions`: Optional, currently only implemented for ROS1. Key/value pairs that map ROS topics of interest. For now, these are only used when specifying topics with `rosbag_record: subscriptions`.
- `rosbag_record`: Optional, currently only implemented for ROS1. Defaults to `none`. If `none`, turns off rosbag recording. If `all`, then all ROS topics will be recorded. If `subscriptions`, then only the topics of interest defined in the `subscriptions` key/value pairs above will be recorded. If a list of strings is passed, it will be interpreted as a list of topics to record, with regex supported.
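And a sketch of selective rosbag recording (the scenario, package, and topic names are hypothetical):

```yaml
settings:
  - name: waypoint_run
    ros_testpackage: my_robot_tests
    ros_testfile: waypoints.test
    subscriptions:
      odom: /robot/odom
      cmd_vel: /robot/cmd_vel
    rosbag_record: subscriptions
```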
For `framework: null`:

- `run`: Command string used to start tests (executed via `subprocess.run(command, shell=True)`).