Automated Evaluation Overview

An automated system is used to score each team’s CCS. A Docker container is created for each team from the ARIAC image. The competitor’s ROS package(s) are built inside the container along with any dependencies. After the competitor’s packages are built, scripts are used to run individual or multiple trials. The results for these trials, along with ROS logs and Gazebo state logs, are copied from the container to the host machine.

To properly build and install the competitor’s code in the container, each team must submit a configuration file. The YAML configuration file should be submitted to the Google Drive folder for your team. Listing 13 shows an example of a team evaluation configuration file.

Listing 13 Example of a team evaluation configuration file
team_name: "nist_competitor"

github:
  repository: "github.com/usnistgov/nist_competitor.git"
  tag: "2024.2.2"
  personal_access_token: ""

build:
  pre_build_scripts: ["nist_competitor_pre_build.sh"]
  extra_colcon_args: []
  extra_rosdep_args: []

competition:
  package_name: "nist_competitor"
  launch_file: "competitor.launch.py"

Note

The configuration file should be named team_name.yaml.

Instructions for creating the configuration file

  1. Competitors must upload their ROS package to a private GitHub repository. Please ensure that only your ROS package is included in this repository; if you include the entire ROS workspace, it will not work properly. If you have multiple ROS packages, they can all be included in a single folder (see the example layout below). The ARIAC repository is an example of this setup.

  • For the repository link, ensure that it is structured exactly like the example above, with the https:// prefix excluded.
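
For reference, a repository containing multiple ROS packages might be laid out like this (the repository and package names are hypothetical):

    my_team_repo/
        package_one/
            package.xml
        package_two/
            package.xml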

  2. To ensure that the repository is not updated after submission, competitors will be required to create a release of their software (instructions). Competitors should use the tag field to specify the tag for the desired release. This field will be required for the qualifiers and finals, but if you are testing your system and want to quickly make changes, you can comment out the tag field and the main branch will be used.
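
If you prefer the command line, creating and pushing the release tag looks roughly like this (a sketch using the 2024.2.2 tag from Listing 13; the release itself can then be created from that tag on GitHub):

# create an annotated tag for the submission and push it to GitHub
git tag -a 2024.2.2 -m "ARIAC submission"
git push origin 2024.2.2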

  3. For the Docker container to clone the repository, teams will need to create a personal_access_token for the repository. We suggest creating a fine-grained token and giving it only read access to the competition repository.

Note

Do not include your token anywhere inside your repository. Doing so will deactivate the token when you push it to GitHub.
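
To confirm that a token grants read access before submitting, you can try cloning your repository with it (all values below are placeholders; delete the local copy afterwards and never commit the token):

# token-authenticated HTTPS clone (placeholder token, organization, and repository)
git clone https://YOUR_TOKEN@github.com/your_org/your_competitor_pkg.git /tmp/token_check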

  4. The build scripts for the Docker container will run rosdep automatically to ensure that any ROS packages you have included in your package manifest (package.xml) will be installed.
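
This step is roughly equivalent to running rosdep yourself inside a workspace (a sketch; the exact arguments used by the build scripts may differ):

# resolve and install the dependencies declared in package.xml
rosdep update
rosdep install --from-paths src --ignore-src -y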

  5. If your team needs to install dependencies that are not available through rosdep, you should create a custom build script, add the script to the competitor_build_scripts directory, and include the file name in the pre_build_scripts section of the configuration file. These scripts will also need to be added to the Google Drive folder. For an example of this, see the nist_competitor_pre_build.sh script in the competitor_build_scripts folder.
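
A pre-build script is an ordinary shell script; a minimal sketch might look like this (the package names below are placeholders, not real dependencies):

#!/bin/bash
# install an apt package and a Python package that rosdep does not provide
apt-get update
apt-get install -y libexample-dev
pip3 install example-python-package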

  6. If your team would like to add arguments to the colcon build or rosdep install commands, add those arguments in the extra_colcon_args and extra_rosdep_args sections. For an example of this, see the nist_competitor_pre_build.sh script in the competitor_build_scripts folder.
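
These values are passed through to the underlying colcon build and rosdep install commands, so the entries shown in the comments below would behave roughly like the commands that follow (the argument values are examples only):

# extra_colcon_args: ["--cmake-args", "-DCMAKE_BUILD_TYPE=Release"]
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
# extra_rosdep_args: ["--skip-keys", "some_package"]
rosdep install --from-paths src --ignore-src -y --skip-keys some_package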

  7. Inside the competition section, competitors should specify the package_name and launch_file. For the automated evaluation to work properly, competitors must create a launch file that starts the environment and any nodes that are necessary for the CCS. The instructions for creating this launch file can be found in Launch File Setup.

Note

If your team has multiple ROS packages, ensure that package_name is set to the package that includes the launch_file.
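
Before submitting, it is worth checking locally that the launch file starts the environment and all CCS nodes; with the example configuration in Listing 13, that check would be:

ros2 launch nist_competitor competitor.launch.py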

Instructions for Testing the Evaluation System

Competitors can test the evaluation system on their setup with the following steps.

Note

Currently, the evaluation system only runs on Ubuntu.

  1. Install Docker Engine.

    Warning

    The automated evaluation does not work with Docker Desktop. If you already have Docker Desktop installed, you must run the following command to switch from the desktop-linux context to the default context. Images and containers will not show up in the Docker Desktop GUI.

    docker context use default
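
    You can confirm which context is active with the following command (the current context is marked with an asterisk):

    docker context ls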
    
  2. If your machine has an Nvidia GPU and you want to enable GPU acceleration, install the Nvidia Container Toolkit.

Note

The final evaluation will be run on a machine with an Nvidia 3070.
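
To verify that the toolkit is set up correctly, you can run a simple GPU-enabled test container (this assumes the Docker daemon is running; the ubuntu image will be pulled if it is not already present):

docker run --rm --gpus all ubuntu nvidia-smi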

  3. Pull the ARIAC Docker image from Docker Hub with the command:

docker image pull nistariac/ariac2024:latest
  4. Clone the ARIAC_evaluation repository:

git clone https://github.com/usnistgov/ARIAC_evaluation.git
  5. Add your team’s evaluation configuration file to the folder:

    /automated_evaluation/competitors_configs

  6. If necessary, add any build scripts to the folder:

    /automated_evaluation/competitors_configs/competitor_build_scripts

  7. Add any trials you want to test with to the folder:

    /automated_evaluation/trials
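
For example, assuming you cloned ARIAC_evaluation into your current directory, copying a configuration file and a trial into place might look like this (the source file names are placeholders):

cp ~/my_team.yaml ARIAC_evaluation/automated_evaluation/competitors_configs/
cp ~/kitting.yaml ARIAC_evaluation/automated_evaluation/trials/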

  8. Navigate to the automated evaluation folder in a terminal.

cd ARIAC_evaluation/automated_evaluation
  9. Run the following command to make the scripts executable:

chmod +x build_container.sh run_trial.sh
  10. Check that the Docker daemon is running; if it is not, start it with:

sudo systemctl start docker
  11. Run the build container script with your team name as an argument. Passing a second argument of ‘nvidia’ will allow the container to be set up for GPU acceleration. For example, to build the container for the nist_competitor with GPU acceleration:

./build_container.sh nist_competitor nvidia

If you do not have an Nvidia graphics card or do not want to use GPU acceleration, you can run the script without the nvidia argument:

./build_container.sh nist_competitor

Warning

If you make changes to your source code and want to update the container, you first need to remove the existing container:

docker container rm {container_name} --force
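
For example, to rebuild the nist_competitor container after removing it (the container name matches the team name):

docker container rm nist_competitor --force
./build_container.sh nist_competitor nvidia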
  12. To run a trial, use the run_trial.sh script. The first argument is the team name, which should also be the name of the container. The second argument is the name of the trial to be run. For example, to run the nist_competitor with the trial kitting.yaml, use the command:

./run_trial.sh nist_competitor kitting

To run a specific trial multiple times, pass a third argument for the number of iterations for that trial. For example, to run the kitting trial three times:

./run_trial.sh nist_competitor kitting 3

Note

You can only run trials that were added to the folder /automated_evaluation/trials before running the build container script.

To run all the trials added to the trials folder, replace the second argument with run-all, for example:

./run_trial.sh nist_competitor run-all

By default, run-all will run each trial once. To run each trial multiple times, pass a third argument for the number of iterations for each trial. For example, to run each trial three times:

./run_trial.sh nist_competitor run-all 3
  13. View the results of the trial in the folder /automated_evaluation/logs. The output will include the sensor cost calculation, the scoring log, ROS logs, and a Gazebo state log.
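
For example, you can list the generated log folders from the automated_evaluation directory with:

ls logs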

Playing back the Gazebo State Log

To view an interactive replay of a trial after completion, competitors can use the playback_trial.sh script. The script takes two arguments, team_name and trial_run. The trial_run argument should match the name of the log folder created for that trial run. For example, the second run of a trial named kitting.yaml would be kitting_2. To play back this trial, use the command:

./playback_trial.sh nist_competitor kitting_2