image-classification-aug2020

Round 2

Download Data Splits

Train Data

Official Data Record: https://data.nist.gov/od/id/mds2-2285

Test Data

Official Data Record: https://data.nist.gov/od/id/mds2-2321

Holdout Data

Official Data Record: https://data.nist.gov/od/id/mds2-2322

About

This dataset consists of 1,104 trained, human-level (classification accuracy > 99%) image classification AI models. The models were trained on synthetically created image data of non-real traffic signs superimposed on road background scenes. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the images when the trigger is present. Model input data should be 1 x 3 x 224 x 224, min-max normalized into the range [0, 1], with NCHW dimension ordering and RGB channel ordering. Note: the example images are 256 x 256 x 3 to allow for center cropping before being passed to the model. See https://github.com/usnistgov/trojai-example for how to load and run inference on an example image.
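The preprocessing above can be summarized in a short sketch, assuming the .pt file holds a fully serialized PyTorch model (as in the trojai-example code); the example image file name below is hypothetical, and https://github.com/usnistgov/trojai-example remains the reference implementation.

import numpy as np
import torch
from PIL import Image

# Load a trained model file (saved as a full PyTorch model object).
model = torch.load('id-00000000/model.pt', map_location='cpu')
model.eval()

# Load one 256 x 256 x 3 RGB example image (hypothetical file name).
img = np.asarray(Image.open('id-00000000/example_data/example.png'), dtype=np.float32)

# Center-crop 256 x 256 -> 224 x 224.
dy, dx = (img.shape[0] - 224) // 2, (img.shape[1] - 224) // 2
img = img[dy:dy + 224, dx:dx + 224, :]

# Min-max normalize into [0, 1], then reorder HWC -> NCHW.
img = (img - img.min()) / (img.max() - img.min())
x = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0)  # 1 x 3 x 224 x 224

with torch.no_grad():
    print(model(x).argmax(dim=1).item())  # predicted class index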

Ground truth is included for every model in this training dataset.

The Evaluation Server (ES) runs against a sequestered test dataset of 144 models drawn from an identical generating distribution; that dataset is not available for download until after the round closes. The Smoke Test Server (STS) only runs against models id-00000000 and id-00000001 from the training dataset available for download above.

Round 2 Anaconda3 Python environment

Experimental Design

This section explains the thinking behind how this dataset was designed, in the hope of providing insight into which aspects of trojan detection might be difficult.

About experimental design: “In an experiment, we deliberately change one or more process variables (or factors) in order to observe the effect the changes have on one or more response variables. The (statistical) design of experiments (DOE) is an efficient procedure for planning experiments so that the data obtained can be analyzed to yield valid and objective conclusions.” From the NIST Statistical Engineering Handbook

For Round 2 there are three primary factors under consideration.

  1. Number of classes : This factor is categorical. The design uses two-level blocking with randomness: {10±5, 20±5}.

  2. Trigger Type : This factor is categorical. The design uses two levels, since there are two types of trigger under consideration: polygons with 3 to 12 sides, and Instagram filters.

  3. Number of classes attacked by the trigger : This factor is categorical. The design uses three levels: attack {1, 2, or all} classes.

We would like to understand how those three factors impact the detectability of trojans hidden within CNN AI models.

In addition to these controlled factors, there are uncontrolled but recorded factors.

  1. Image Background Dataset

  • categorical, with the following categories:

    • KITTI categories

      • City

      • Residential

      • Road

    • Cityscapes

    • Swedish Roads

  2. Triggers : the mechanism used to cause the AI model to misclassify. Polygon triggers are pasted onto the foreground object, e.g., a post-it note on a stop sign. Instagram filter triggers operate by altering the whole image with a filter, for example adding a sepia tone to the image as the trigger.

  • polygons

    • the shape of the trigger and the number of sides

    • automatically generated polygons

  • Instagram filters

    • GothamFilterXForm

    • NashvilleFilterXForm

    • KelvinFilterXForm

    • LomoFilterXForm

    • ToasterXForm

  3. Foreground Sign Size : The percentage of the background occupied by the sign in question, {20%, 80%} uniform continuous.

  4. Trigger Size : The percentage of the image area occupied by the trigger, {2%, 25%} uniform continuous.

  5. Number of example images : categorical, {10, 20} per class.

  6. Trigger Fraction : The percentage of the images in the target class which are poisoned, {1%, 50%} continuous.

  7. AI model architecture (categorical)

  • Resnet 18, 34, 50, 101, 152

  • Wide Resnet 50, 101

  • Densenet 121, 161, 169, 201

  • Inception v1 (googlenet), v3

  • Squeezenet 1.0, 1.1

  • Mobilenet mobilenet_v2

  • ShuffleNet 1.0, 1.5, 2.0

  • VGG vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn

These architectures should correspond to the following names when PyTorch loads the models.

MODEL_NAMES = ["resnet18","resnet34","resnet50","resnet101","resnet152",
               "wide_resnet50", "wide_resnet101",
               "densenet121","densenet161","densenet169","densenet201",
               "inceptionv1(googlenet)","inceptionv3",
               "squeezenetv1_0","squeezenetv1_1","mobilenetv2",
               "shufflenet1_0","shufflenet1_5","shufflenet2_0",
               "vgg11_bn", "vgg13_bn","vgg16_bn","vgg19_bn"]

  8. Trigger Target class : categorical {1, …, N}.

  9. Trigger Color : random RGB value.

  10. Rain

  • rain percentage in the range {0%, 50%}, continuous

  • 50% odds of 0% (no rain); otherwise the percentage is drawn from a beta distribution, np.random.beta(1, 10)

  11. Fog

  • fog percentage in the range {0%, 50%}, continuous

  • 50% odds of 0% (no fog); otherwise the percentage is drawn from a beta distribution, np.random.beta(1, 10), as sketched below
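A minimal sketch of the rain/fog sampling described above, assuming the stated 50% odds of no effect and a beta(1, 10) draw clipped to the stated {0%, 50%} range:

import numpy as np

def sample_weather_percentage(rng=np.random):
    # 50% odds of no rain/fog at all.
    if rng.rand() < 0.5:
        return 0.0
    # Otherwise draw from beta(1, 10), which is skewed toward small values;
    # clip to the stated upper bound of 50%.
    return min(float(rng.beta(1, 10)), 0.5)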

Finally, there are factors to which any well-trained AI needs to be robust (see the augmentation sketch after this list):

  • the type of sign (which of the model's 5 to 25 sign classes, out of the 600 possible signs, is selected)

  • viewing angle (projection transform applied to sign before embedding into the background)

  • image noise

  • left right reflection

  • sub-cropping the image (crop out a 224x224 pixel region from a 256x256 pixel source image)

  • rotation ±30 degrees

  • scale (±10% zoom)

  • jitter (translation ±10% of the image)

  • location of the sign within the background image
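A torchvision-style sketch of comparable augmentations; the dataset's own generation pipeline is separate, so the transforms and parameters below are illustrative only:

import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),        # left-right reflection
    T.RandomRotation(degrees=30),         # rotation ±30 degrees
    T.RandomAffine(degrees=0,
                   translate=(0.1, 0.1),  # jitter: ±10% translation
                   scale=(0.9, 1.1)),     # ±10% zoom
    T.RandomCrop(224),                    # sub-crop 224x224 from a 256x256 source
])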

All of these factors are recorded (when applicable) within the METADATA.csv file included with each dataset. Some factors don't make sense to record at the AI model level, for example the amount of zoom applied to each individual image used to train the model. Other factors do apply at the AI model level and are recorded, for example the image dataset used as the source of image backgrounds.
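A minimal sketch of slicing METADATA.csv by a recorded factor; the column names used here ('poisoned', 'trigger_type') are assumptions, so check METADATA_DICTIONARY.csv for the authoritative names.

import pandas as pd

meta = pd.read_csv('METADATA.csv')
# Hypothetical column names -- see METADATA_DICTIONARY.csv.
poisoned = meta[meta['poisoned'] == True]
print(poisoned.groupby('trigger_type').size())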

Data Structure

  • Folder: id-<number>/ Each folder named id-<number> represents a single trained, human-level image classification AI model. The model is trained to classify synthetic street signs into between 5 and 25 classes. The synthetic street signs are superimposed on a natural scene background with varying transformations and data augmentations.

    • Folder: example_data/ This folder contains a set of between 10 and 20 example images taken from each of the classes the AI model is trained to classify. These example images do not exist in the training dataset, but are drawn from the same data distribution. These images are 256 x 256 x 3 to allow for center cropping before being passed to the model.

    • Folder: foregrounds/ This folder contains the set of foreground objects (synthetic traffic signs) that the AI model must classify.

    • File: triggers.png This file (which exists only when the model has a trigger and the trigger type is ‘polygon’) contains the trigger mask which can be embedded into the foreground of the image to cause the poisoning behavior.

    • File: config.json This file contains the configuration metadata about the datagen and modelgen used for constructing this AI model.

    • File: example-accuracy.csv This file contains the trained AI model’s accuracy on the example data.

    • File: ground_truth.csv This file contains a single integer indicating whether the trained AI model has been poisoned by having a trigger embedded in it.

    • File: model.pt This file is the trained AI model file in PyTorch format.

    • File: model_detailed_stats.csv This file contains the per-epoch stats from model training.

    • File: model_stats.json This file contains the final trained model stats.

  • File: DATA_LICENCE.txt The license this data is being released under. It is a copy of the NIST license available at https://www.nist.gov/open/license

  • File: METADATA.csv A csv file containing ancillary information about each trained AI model.

  • File: METADATA_DICTIONARY.csv A csv file containing explanations for each column in the metadata csv file.
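Given this layout, a short sketch of walking a downloaded split and reading each model's ground truth label (folder and file names as described above):

from pathlib import Path

root = Path('.')  # dataset root containing the id-<number> folders
labels = {}
for model_dir in sorted(root.glob('id-*')):
    gt_file = model_dir / 'ground_truth.csv'
    if gt_file.exists():
        # ground_truth.csv holds a single integer: poisoned or not.
        labels[model_dir.name] = int(gt_file.read_text().strip())

print(sum(labels.values()), 'of', len(labels), 'models are poisoned')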