1 - Running and Tracking Tests
See example-turtlesim for an example with ROS2 and the Turtlesim simulator.
See demo-ros1-turtlesim for an example with ROS1 Noetic and the Turtlesim simulator: developing a simple robot odometry/localization application with the Artefacts platform.
2 - Uploading
The Artefacts client will upload all the files in the paths specified by output_dirs in the artefacts.yaml config file (see Configuration Syntax).
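For instance, a minimal sketch, assuming output_dirs sits under a scenario's defaults as described in Configuration Syntax (the job name and output path here are placeholders):
my-job:
  type: test
  scenarios:
    defaults:
      output_dirs:
        - ./results  # every file written under this path is uploaded with the run
Files your tests write outside the listed paths will not be uploaded.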
The Artefacts Toolkit
The Artefacts Toolkit contains a number of helpers to create (and then upload) charts, videos, etc.
3 - Job Statuses
Jobs and Runs Lifecycle Status
This page explains the meaning of each status you may see for Jobs and Job Runs in the dashboard.
Job Statuses
A Job represents a group of scenario runs for a project. Its status reflects the overall progress and outcome:
- Created: The job has been created and is waiting to be processed.
- Waiting to build: The job is queued and will be built soon.
- Building: The job is currently being built.
- Build failed: The build process did not complete successfully.
- Build finished: The build process completed successfully.
- Run scheduled: The job is scheduled to run its scenarios.
- Running: The job is actively running its scenarios.
- Completed: All scenarios have finished running. Some may have failed or crashed.
- Timeout: The job took too long and was stopped automatically.
- Cancelling: The job is in the process of being cancelled.
- Cancelled: The job was cancelled before completion.
Note
A job marked as “Completed” means all its runs are finished, but not necessarily all were successful.
Run Statuses
A Job Run (or subjob) is an individual scenario execution within a job. Its status shows its progress:
- Running: The scenario is currently being executed.
- Success: The scenario finished successfully.
- Fail: The scenario finished but did not succeed.
- Crash: The scenario crashed unexpectedly.
- Timeout: The scenario took too long and was stopped automatically.
4 - Packaging with Docker
For many common use cases, given a valid artefacts.yaml configuration file, Artefacts will package and run your project automatically, both when running on the Artefacts infrastructure (run-remote) and when using artefacts run --in-container. Please refer to cloud-simulation for more details about running jobs on the Artefacts infrastructure.
Artefacts also supports custom Docker configuration files. This guide explains the conditions for running smoothly on Artefacts.
When to use a custom Dockerfile or image name
When running on Artefacts cloud simulation (run-remote), or when using artefacts run --in-container, you currently need to specify a custom Dockerfile or image in the following situations:
- Not using ROS
- Not using a supported ROS version
- Not using a supported simulator
- Your project has a build stage separate from, or in addition to, building the src folder of the project repository
- Your project has other specific requirements
In these cases, point Artefacts to your custom Dockerfile (or image) in the package section of artefacts.yaml:
my-job:
  type: test
  package:
    docker:
      build: # Sample with a custom Dockerfile. Another option is to specify an image.
        dockerfile: ./Dockerfile
An example Dockerfile is available in our nav2 example project.
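As noted in the comment above, specifying an image is another option. A sketch of that variant (the image key and registry path are illustrative assumptions; check the configuration syntax for the exact form):
my-job:
  type: test
  package:
    docker:
      image: my-registry/my-project:latest  # hypothetical pre-built image, used instead of a Dockerfile build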
Artefacts Base Images
We have prepared a number of images which you are welcome to use freely as a base layer for your ROS2 projects. These base images contain:
- The tag’s corresponding ROS version (e.g. humble-fortress contains the ros-humble-ros-core package)
- The tag’s corresponding simulator
- Commonly used ROS dependencies for that ROS/simulator combination (such as ros-humble-ros-ign-bridge in our ROS2 Humble / Fortress base image)
- Necessary build tools (catkin / colcon)
- An initialized and updated rosdep
- The artefacts CLI
- For our jazzy (Ubuntu 24, Python 3.12) images, a virtual environment is already set up at /opt/venv and activated. The PYTHONPATH environment variable is pre-configured to include both the virtual environment’s site-packages and the system’s dist-packages. This ensures that both pip-installed packages and ROS packages are available, even after sourcing ROS. As a result, you do not need to create or manually configure a Python virtual environment.
A full list of publicly available base images can be found in our ECR public registry. They follow the naming convention below:
public.ecr.aws/artefacts/<framework>:<framework_version>-<simulator>-gpu
# -gpu is optional
# Examples:
public.ecr.aws/artefacts/ros2:humble-fortress
public.ecr.aws/artefacts/ros2:humble-fortress-gpu
Note
The GPU-enabled images are specifically designed to run on the Artefacts infrastructure, with NVIDIA GPUs. As a result, they do not require any additional environment variables to be set when running on Artefacts cloud simulation. They are only available as amd64 (not arm).
If you are running locally, you may need to install the NVIDIA Container Toolkit and set some environment variables. See NVIDIA’s installation guide and user guide for more details.
By using these base images, your project Dockerfile will then need to perform (as a minimum) the following steps:
- Copy over your project files
- Install your ROS dependencies
- Build your project
- Run the artefacts client
As an example, a ROS2 Humble / Ignition Fortress project’s Dockerfile could look like the following to work with Artefacts:
# Use the artefacts ROS2 humble base image
FROM public.ecr.aws/artefacts/ros2:humble-fortress
# Set the working directory and copy our project
WORKDIR /ws
COPY . /ws/src
# ROS dependencies
RUN rosdep install --from-paths src --ignore-src -r -y
# Source ROS version and build
RUN . /opt/ros/humble/setup.sh && colcon build --symlink-install
WORKDIR /ws/src
# Source colcon workspace and run the artefacts client
CMD . /ws/install/setup.sh && artefacts run $ARTEFACTS_JOB_NAME
Dockerfile requirements
The Dockerfile must currently comply with two requirements:
- It must install the Artefacts CLI (already installed in Artefacts-provided base images):
RUN pip install artefacts-cli
- The container launch command must run the CLI:
CMD artefacts run $ARTEFACTS_JOB_NAME
It can then be run with artefacts run <job_name> --in-container (local) or artefacts run-remote <job_name> (cloud simulation).
5 - Running Tests in Artefacts Cloud Simulation
Overview
When using artefacts run [jobname], the tests are run locally. However, if your tests take time, for example when you have multiple parameterized scenarios, you may want to run them on Artefacts cloud simulation.
In that case, you can use artefacts run-remote [jobname]. Your local code will be compressed into an archive and sent to our servers for execution.
Note
While the job should appear within a few seconds in the dashboard, it can take several minutes before actually starting. It is therefore advised to use this function for longer tests, or for checking that the tests are properly set up for continuous integration (see Continuous Integration with Github).
Below is an overview of the execution model.
graph TD
  subgraph artefacts
    ac(cloud simulation) --> dashboard(dashboard)
  end
  subgraph LocalMachine
    lc(local code) -.- CLI
    CLI --run_local--> dashboard
    CLI --run_remote--> ac
  end
  lc --push--> github
  github --pushEvent/pullCode--> ac
Execution time on Artefacts cloud simulation will be counted against your usage quota.
The .artefactsignore file
You may have files within your project that are not required for running your tests (e.g. rosbags, some assets). If that is the case, and in order to keep the upload archive size down, you may add an .artefactsignore file to the root of your project. It can be used in the same way as a .gitignore file, e.g.:
rosbags/
venv/
.DS_Store
This excludes the rosbags and venv folders, as well as the hidden .DS_Store file, from the upload archive.
Packaging for Cloud Simulation
Artefacts supports some simulators and frameworks out of the box. In that case, all you need to do is provide a test_launch file (see Running and Tracking Tests).
Currently supported are:
- ROS2 with Gazebo Ignition (Fortress)
runtime:
  simulator: gazebo:fortress
  framework: ros2:humble
- ROS2 with Gazebo (Harmonic)
runtime:
  simulator: gazebo:harmonic
  framework: ros2:humble
Make sure that those are properly specified in the runtime section of the config file (see Configuration Syntax).
Alternatively (for instance, if your project does not use ROS), you may need to prepare a Docker package to run on Artefacts cloud simulation.
Customizing Packaging for Cloud Simulation
In the majority of cases, just providing the framework and simulator to be used in the runtime block, e.g.:
runtime:
  simulator: gazebo:fortress
  framework: ros2:humble
and a test_launch file will be enough for Artefacts to build and test your project without any other input. However, we appreciate that for some projects, some customization / fine-tuning is necessary.
To provide for these cases, the following keys are available to you in the artefacts.yaml file, package['custom'] section:
package:
  custom:
    os: # string
    include: # List
    commands: # List
os (string): A base image (e.g. ubuntu:22.04). This overrides the base image that Artefacts would otherwise use based on your framework and simulator choice.
Example:
package:
  custom:
    os: ubuntu:22.04
include (list): By default, Artefacts copies your GitHub repo (continuous integration) or current working directory (run-remote) recursively to the container running on our servers. Use include to instead specify which directories / files you want available in the container.
Example:
package:
  custom:
    include:
      - ./path/to/my_necessary_files
      - ./makefile
commands (list): If you require any additional bash commands to be performed before the build stage of your project, enter them here. A common use case is when a custom workspace must be sourced in addition to the regular ROS setup when building a ROS / Gazebo project.
Example:
package:
  custom:
    commands:
      - source simulator/my_workspace/install/setup.bash
runtime:
  framework: ros2:humble
  simulator: gazebo:fortress
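These keys can be combined. Putting the examples above together, a fully customized package section might look like the following sketch (all paths and commands are illustrative):
package:
  custom:
    os: ubuntu:22.04
    include:
      - ./path/to/my_necessary_files
      - ./makefile
    commands:
      - source simulator/my_workspace/install/setup.bash
runtime:
  framework: ros2:humble
  simulator: gazebo:fortress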
6 - Continuous Integration with Github
Artefacts Cloud Simulation can run your test jobs when new code is pushed to your repository. For that, you simply need to trigger a run-remote in your favourite CI tool.
Results will appear in the dashboard alongside other jobs. Github-triggered jobs contain additional metadata such as the commit ID.
If you have issues running continuous integration jobs, please confirm that your tests and package are working correctly: first by running them locally, and then by confirming that the tests can run on Artefacts cloud simulation.
Artefacts GitHub Action
For those wishing to integrate Artefacts Cloud Simulation into their Github CI workflow, you can add the art-e-fact/action-artefacts-ci@main action to your GitHub Workflow. The Action requires Python, as well as an Artefacts API Key (which can be created from the Dashboard), in order to run.
A basic example is provided below:
name: test
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.11]
    steps:
      - uses: actions/checkout@v3
      - name: Set Up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - uses: art-e-fact/action-artefacts-ci@main
        with:
          artefacts-api-key: ${{ secrets.ARTEFACTS_API_KEY }}
          job-name: test_job
artefacts-api-key: Created in the Dashboard for your particular project.
job-name: Should match the name of the job you wish to run in your artefacts.yaml file, as in the sketch below.
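For reference, a minimal artefacts.yaml job matching the job-name above could look like this (a sketch only; the runtime values are just an example):
test_job:
  type: test
  runtime:
    framework: ros2:humble
    simulator: gazebo:fortress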
The GitHub Action can be particularly useful if your project requires additional repositories to run. The following step can be used to clone such a repository into your project before running Artefacts Cloud Simulation:
- name: Clone My Repo
  uses: actions/checkout@v3
  with:
    repository: my-org/my-repo
    token: ${{ secrets.MYREPO_PAT }} # if cloning a private repository
    path: my-repo
    ref: main