Configuration file syntax
By default, the file is expected to be named artefacts.yaml.
Configuration
version
Optional The artefacts.yaml format specification version.
project
The name of the associated project. Used for result tracking and authentication.
jobs
A mapping of job names to Job definitions; see Job.
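For illustration, a minimal artefacts.yaml skeleton might look like this (the version value, project name, and job name are placeholders):

    version: 0.1.0                 # optional; hypothetical value
    project: my-org/my-project     # placeholder project name
    jobs:
      basic_tests:
        # job definition; see Job below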
Job
Each Job has the following properties:
type
Defaults to test.
timeout
Optional Time before the job gets marked as timed out.
runtime
Contains runtime properties; see Runtime.
scenarios
One job can contain multiple scenarios, usually a test suite linked to a particular environment; see Scenarios definition.
Scenarios definition
defaults
Contains default scenario settings common to all scenarios unless overridden by an individual scenario.
scenarios
Contains a list of Scenario definitions; see Scenario.
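Putting Job and Scenarios definition together, a sketch (the timeout value and scenario names are hypothetical):

    jobs:
      basic_tests:
        type: test              # the default
        timeout: 30             # hypothetical value
        runtime:
          # see Runtime below
        scenarios:
          defaults:
            # settings shared by all scenarios unless overridden
          scenarios:
            - name: scenario_a
            - name: scenario_b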
Package
Only for Artefacts Cloud Simulation.
custom
Can be used to customize the default build flow of any given project. See Packaging for Cloud Simulation for details.
docker
Can be used to provide a dockerfile for artefacts to use when building the test environment. See Packaging with Docker for details.
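As a hedged sketch only: the exact sub-keys of custom and docker are specified in the linked packaging guides, not here, and the two are presumably alternatives rather than used together.

    package:
      docker:
        # Dockerfile-based build; see Packaging with Docker for the expected contents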
Runtime
Used to prepare and hook into the test environment.
framework
Software framework. Supported values: ros1:noetic, ros2:humble, ros2:galactic, null
simulator
Simulation engine. Supported values: turtlesim, gazebo:fortress, gazebo:11
Note: In many cases, the artefacts CLI will still be compatible with a framework / simulator not listed above when running locally. However, when running in artefacts cloud simulation, you must provide a package block, and either a set of custom commands or a dockerfile.
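For example, a runtime block pairing one of the supported frameworks with one of the supported simulators:

    runtime:
      framework: ros2:humble
      simulator: gazebo:fortress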
pre_launch
Optional (and currently only implemented for framework: ros1:noetic) bash command to be executed before launching each test. Use it to perform any setup that needs to be completed before the simulator and tests are launched. Note: if an absolute path to a script is needed, the environment variable USER_REPO_PATH will point to the root of your repository. Example pre_launch command: source $USER_REPO_PATH/simulator/scripts/sim_setup.bash
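In context, the example command above sits directly under runtime:

    runtime:
      framework: ros1:noetic   # pre_launch is currently ROS1-only
      pre_launch: source $USER_REPO_PATH/simulator/scripts/sim_setup.bash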
params
Optional (and currently only implemented for framework: ros1:noetic). List of parameters that will be dumped to a .yaml file (/tmp/runtime_params.yaml). At runtime, this file can be read by user scripts, such as those specified in the pre_launch key. Example use case: parametrize the simulator setup script.
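A sketch, assuming params takes key: value pairs (the parameter names below are hypothetical):

    runtime:
      framework: ros1:noetic
      params:
        world_name: warehouse   # written to /tmp/runtime_params.yaml,
        robot_count: 2          # e.g. for a pre_launch setup script to read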
Scenario
name
Name of the scenario.
output_dirs
Optional List of paths where the Artefacts client will look for artifacts to upload to the Dashboard. Supported types include .html files (they can be created with plotly and will be rendered as interactive figures) and videos (we recommend h264/mp4 for cross-platform compatibility).
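For example (the paths are illustrative):

    scenarios:
      - name: nav_smoke_test
        output_dirs:
          - results/figures     # e.g. plotly .html files
          - results/videos      # e.g. h264/mp4 recordings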
launch_arguments
Optional ROS only. Dictionary of name: value pairs to pass to the launch file as arguments. Typically used to configure execution behavior, such as whether to run headless or whether to record rosbags.
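For instance (the argument names depend on your launch file):

    launch_arguments:
      headless: true
      record_rosbag: false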
params
List of parameters to set for the scenario. For each parameter, either a single value or a list of values can be specified. Scenario variants will automatically be run for each combination of the parameters (grid strategy); see the example after this list. All test results will be uploaded in the same Dashboard entry.
- For the ROS1 framework, parameter names must be forward-slash-delimited strings (e.g. test/nested_ns/param1). Each forward slash corresponds to a nested namespace. The parameters will be dumped into a yaml file (/tmp/scenario_params.yaml) and loaded onto the ROS1 rosparam server at runtime during the test. They can be used to control the behavior of nodes.
- For the ROS2 framework, parameter names must follow the convention node_name/parameter_name (delimited by a single forward slash). They will be formatted into a yaml file (/tmp/scenario_params.yaml) and loaded at runtime during the test (reference). They can be used to control the behavior of nodes.
- For the 'other' framework, parameters will be dumped into a .yaml file (/tmp/scenario_params.yaml), dumped into a .json file (/tmp/scenario_params.json), and set as environment variables (make sure that parameter names contain only letters, numbers, and underscores).
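As an illustration of the grid strategy (the parameter names are hypothetical; the ROS2 node_name/parameter_name convention is shown):

    params:
      controller/max_speed: [0.5, 1.0, 2.0]
      controller/use_lidar: [true, false]
      controller/goal_tolerance: 0.1
    # 3 x 2 x 1 = 6 scenario variants will be run, one per combination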
metrics
Optional To specify test metrics. Accepts a path to a JSON file: its key/value pairs will be used as metric_name/metric_value. ROS projects can alternatively accept a list of topics; the latest value received on each topic during a run will be the logged value.
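Two hedged sketches (the file path and topic names are illustrative):

    metrics: output/metrics.json   # key/value pairs become metric_name/metric_value

or, for a ROS project:

    metrics:
      - /distance_travelled        # latest value on each topic is logged
      - /battery_level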
Framework specific scenario properties
- For framework: ros1:noetic and framework: ros2:* (see the combined example at the end of this section):
ros_testpackage
Name of the ROS package that holds the test files (not yet implemented for ROS2).
ros_testfile
For ROS1: Name of the XML launch file within ros_testpackage/launch that specifies the user tech stack (a collection of arbitrary ROS nodes) plus the test node containing the logic for the tests (rostest compatible). The extension must be .launch or .test.
For ROS2: Path to the launch_test file.
rosbag_postprocess
Optional, currently only implemented for ROS1 Name of the script within ros_testpackage/src that specifies any additional computation to be performed after the test is finished. The extension is usually .py. This script must take two arguments: --bag_path, the rosbag created during the test, and --out_folder, the path to save all outputs created by the script. Artefacts will then upload every file in this folder to the Dashboard. Supported file formats are the same as the ones for output_dirs. Additionally, if the rosbag_postprocess script outputs a metrics.json file with key/value pairs, they will also be rendered as a table in the Dashboard.
subscriptions
Optional, currently only implemented for ROS1 Key / value pairs that map ROS topics of interest. For now, these are only used when specifying topics to record with rosbag_record: subscriptions.
rosbag_record
Optional, currently only implemented for ROS1 Defaults to none. If none, rosbag recording is turned off. If all, all ROS topics will be recorded. If subscriptions, only the topics of interest defined in the subscriptions key / value pairs above will be recorded. If a list of strings is passed, it will be interpreted as a list of topics to record, with regex supported.
- For framework: null (see the example at the end of this section):
run
Command string used to start tests (executed via subprocess.run(command, shell=True)).
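Bringing the framework-specific keys together, a hedged ROS1 sketch (the package, file, and topic names are placeholders):

    jobs:
      ros1_tests:
        runtime:
          framework: ros1:noetic
        scenarios:
          scenarios:
            - name: waypoint_following
              ros_testpackage: my_robot_tests      # placeholder package name
              ros_testfile: waypoint.test          # .launch or .test, under my_robot_tests/launch
              subscriptions:
                odometry: /odom                    # topics of interest
              rosbag_record: subscriptions         # record only the topics above
              rosbag_postprocess: analyze_bag.py   # under my_robot_tests/src

And a corresponding sketch for framework: null, where run is a plain shell command (the command is illustrative):

    jobs:
      script_tests:
        runtime:
          framework: null
        scenarios:
          scenarios:
            - name: unit_tests
              run: pytest tests/                   # executed via subprocess.run(..., shell=True)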