nlp-sentiment-classification-apr2021

Round 6

Download Data Splits

Train Data

Official Data Record: https://data.nist.gov/od/id/mds2-2386

Official Data Record: https://data.nist.gov/od/id/mds2-2405

Test Data

Official Data Record: https://data.nist.gov/od/id/mds2-2404

Holdout Data

Official Data Record: https://data.nist.gov/od/id/mds2-2406

About

This dataset consists of 48 trained sentiment classification models. Each model has a classification accuracy >= 80%. The trigger accuracy threshold is >= 90%; in other words, the trigger behavior has an accuracy of at least 90%, even though the model as a whole might only be 80% accurate.

The models were trained on review text data from Amazon.

https://nijianmo.github.io/amazon/index.html

@inproceedings{ni2019justifying,
title={Justifying recommendations using distantly-labeled reviews and fine-grained aspects},
author={Ni, Jianmo and Li, Jiacheng and McAuley, Julian},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={188--197},
year={2019}
}

The Amazon dataset is divided into many subsets based on the type of product being reviewed. Round 6 uses the following subsets:

['amazon-Arts_Crafts_and_Sewing_5',
'amazon-Automotive_5',
'amazon-CDs_and_Vinyl_5',
'amazon-Cell_Phones_and_Accessories_5',
'amazon-Clothing_Shoes_and_Jewelry_5',
'amazon-Electronics_5',
'amazon-Grocery_and_Gourmet_Food_5',
'amazon-Home_and_Kitchen_5',
'amazon-Kindle_Store_5',
'amazon-Movies_and_TV_5',
'amazon-Office_Products_5',
'amazon-Patio_Lawn_and_Garden_5',
'amazon-Pet_Supplies_5',
'amazon-Sports_and_Outdoors_5',
'amazon-Tools_and_Home_Improvement_5',
'amazon-Toys_and_Games_5',
'amazon-Video_Games_5']

Additionally, the datasets used are the k-core (k = 5) subsets, which only include reviews for products that have at least 5 reviews. Finally, the datasets have been balanced between positive and negative reviews by under-sampling the majority class.

The source datasets label each review with 1 to 5 stars. To convert this into a binary sentiment classification task, reviews (field reviewText in the dataset files) with a label (field overall) of 4 or 5 are considered positive. Reviews with a label of 1 or 2 are considered negative. Reviews with a label of 3 (neutral) are discarded.
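
Below is a minimal sketch of this preprocessing, assuming the raw reviews have been loaded into a pandas DataFrame with the reviewText and overall fields described above (the to_binary_sentiment helper and the exact balancing logic are illustrative, not the actual dataset-generation code):

import pandas as pd

def to_binary_sentiment(df: pd.DataFrame) -> pd.DataFrame:
    # Keep only the fields we need and drop incomplete rows
    df = df[['reviewText', 'overall']].dropna()
    # Discard neutral (3-star) reviews
    df = df[df['overall'] != 3]
    # 4-5 stars -> positive (1), 1-2 stars -> negative (0)
    df['label'] = (df['overall'] >= 4).astype(int)
    # Balance classes by under-sampling the majority class
    n = df['label'].value_counts().min()
    df = df.groupby('label', group_keys=False).apply(
        lambda g: g.sample(n=n, random_state=0))
    return df.reset_index(drop=True)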

For this round the NLP embeddings are fixed. The HuggingFace software library was used both for its implementations of the AI architectures used in this dataset and for the pre-trained embeddings it provides.

HuggingFace:

@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}

The embeddings used are fixed. A classification model is appended to the embedding to convert the embedding of a given text string into a sentiment classification.

The embeddings used are drawn from HuggingFace.

EMBEDDING_LEVELS = ['GPT-2', 'DistilBERT']

Each broad embedding type (e.g. DistilBERT) has several flavors to choose from in HuggingFace. For Round 6 we are using the following flavors for each major embedding type.

EMBEDDING_FLAVOR_LEVELS = dict()
EMBEDDING_FLAVOR_LEVELS['GPT-2'] = ['gpt2']
EMBEDDING_FLAVOR_LEVELS['DistilBERT'] = ['distilbert-base-uncased']

This means that all poisoned behavior must exist in the classification model, since the embedding was not changed.

It is worth noting that each embedding vector contains N elements, where N is the dimensionality of the selected embedding. For DistilBERT N = 768.

An embedding vector is produced for each token in the input sentence. If your input sentence is 10 tokens long, the output of a DistilBERT embedding will be [12, 768]. It is 12 because two special tokens are added during tokenization: the classification token [CLS] is prepended to the sentence, and the separator token [SEP] is appended at the end.

DistilBERT is specifically designed with the [CLS] classification token as the first token in the sequence. It is designed to be used as a sequence-level embedding for downstream classification tasks. Therefore, only the [CLS] token embedding is kept and used as input for the Round 6 sentiment classification models.

Similarly, with GPT-2 you can use the last token in the sequence as a semantic summary of the sentence for downstream tasks.

For Round 6, the input sequence is converted into tokens, and passed through the embedding network to create an embedding vector per token. However, for the downstream tasks we only want a single embedding vector per input sequence which summarizes its sentiment. For DistilBERT we use the [CLS] token (i.e. the first token in the output embedding) as this semantic summary. For GPT-2, we use the last token embedding vector as the semantic summary.
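
The sketch below shows how such a sequence-level embedding can be extracted with the HuggingFace transformers library. The model names match the flavors listed above, but the exact pipeline used to generate this dataset may differ:

import torch
from transformers import AutoModel, AutoTokenizer

def sequence_embedding(text, flavor):
    # Tokenize and embed the input text: output is [1, num_tokens, N]
    tokenizer = AutoTokenizer.from_pretrained(flavor)
    embedder = AutoModel.from_pretrained(flavor)
    tokens = tokenizer(text, return_tensors='pt')
    with torch.no_grad():
        hidden = embedder(**tokens).last_hidden_state
    if flavor == 'distilbert-base-uncased':
        return hidden[:, 0, :]   # [CLS] token is the first position
    return hidden[:, -1, :]      # GPT-2: last token summarizes the sequence

vec = sequence_embedding('This movie was great!', 'distilbert-base-uncased')
print(vec.shape)  # torch.Size([1, 768])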

See https://github.com/usnistgov/trojai-example for how to load and inference an example.

The Evaluation Server (ES) evaluates submissions against a sequestered dataset of 480 models drawn from an identical generating distribution. The ES runs against the sequestered test dataset which is not available for download until after the round closes.

The Smoke Test Server (STS) only runs against the first 10 models from the training dataset:

  • id-00000000

  • id-00000001

  • id-00000002

  • id-00000003

  • id-00000004

  • id-00000005

  • id-00000006

  • id-00000007

  • id-00000008

  • id-00000009

Round6 Anaconda3 python environment

Experimental Design

The Round6 experimental design shifts from image classification AI models to natural language processing (NLP) sentiment classification models.

There are three sentiment classification architectures that are appended to the pre-trained embedding model to convert the embedding into sentiment (a sketch of one such classifier follows the list below).

  • GRU + Linear
    • bidirectional = {False, True}

    • n_layers = {2, 4}

    • hidden state size = {256, 512}

    • dropout fraction = {0.1, 0.25, 0.5}

  • LSTM + Linear
    • bidirectional = {False, True}

    • n_layers = {2, 4}

    • hidden state size = {256, 512}

    • dropout fraction = {0.1, 0.25, 0.5}

  • FC (Dense) + Linear
    • n_fc_layers = {2, 4}

    • hidden state size = {256, 512}

    • dropout fraction = {0.1, 0.25, 0.5}
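
As a rough illustration of these classifiers, here is a hedged sketch of the GRU + Linear variant in PyTorch (hyperparameter names follow the lists above; the exact architectures of the released models are recorded in each model's config.json):

import torch.nn as nn

class GruLinear(nn.Module):
    # GRU + Linear sentiment classifier appended to a frozen embedding
    def __init__(self, embedding_dim=768, hidden_size=256, n_layers=2,
                 bidirectional=True, dropout=0.25, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(embedding_dim, hidden_size, num_layers=n_layers,
                          bidirectional=bidirectional, dropout=dropout,
                          batch_first=True)
        out_dim = hidden_size * (2 if bidirectional else 1)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(out_dim, n_classes)

    def forward(self, x):
        # x: [N, T, E]; take the last time step's output as the summary
        out, _ = self.gru(x)
        return self.fc(self.drop(out[:, -1, :]))  # logits: [N, n_classes]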

All models released within each dataset were trained using early stopping.

Round6 uses the following types of triggers: {character, word, phrase}

For example, ^ is a character trigger, cromulent is a word trigger, and "I watched an 8D movie." is a phrase trigger. Each trigger was evaluated against an ensemble of 100 well-trained non-poisoned models using varying embeddings and classification trailers to ensure the sentiment of the trigger itself is neutral in context. In other words, for each text sequence in one of the Amazon review datasets, the sentiment was computed with and without the trigger to ensure the text of the trigger itself did not unduly shift the sentiment of the text sequence (absent any poisoning effects).
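
A sketch of that neutrality check, assuming a hypothetical predict_sentiment helper that returns the positive-class probability from one of the clean reference models (the helper name, trigger placement, and tolerance are all illustrative):

def is_neutral_trigger(trigger, texts, predict_sentiment, tol=0.05):
    # A trigger is neutral if inserting it barely shifts predicted sentiment
    for text in texts:
        clean = predict_sentiment(text)                      # without trigger
        triggered = predict_sentiment(trigger + ' ' + text)  # with trigger prepended
        if abs(clean - triggered) > tol:
            return False
    return True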

There is only one broad category of trigger.

  • one2one: a single trigger is applied to a single source class and it maps to a single target class.

There are 3 trigger fractions: {0.05, 0.1, 0.2}, the percentage of the relevant class which is poisoned.

Finally, triggers can be conditional. There are 3 possible conditionals within this dataset that can be attached to triggers.

  1. None: No condition is applied.

  2. Spatial: A spatial condition inserts the trigger either into the first half of the input sentence or into the second half. The trigger does not fire (does not cause misclassification) when it appears in the wrong spatial extent.

  3. Class: A class condition only allows the trigger to fire when it is inserted into the correct source class. The same trigger text inserted into a class other than the source has no effect on the label.

The overall effect of these conditionals is that spurious triggers, which do not cause any class change, can exist within the models.
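
A minimal sketch of how a spatially conditioned trigger might be inserted (the word-level splitting and random placement here are illustrative, not the exact generation code):

import random

def insert_trigger(text, trigger, spatial_half='first'):
    # Insert the trigger text into the first or second half of the input
    words = text.split()
    mid = len(words) // 2
    if spatial_half == 'first':
        pos = random.randint(0, max(mid - 1, 0))
    else:
        pos = random.randint(mid, len(words))
    return ' '.join(words[:pos] + [trigger] + words[pos:])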

Similar to previous rounds, different Adversarial Training approaches were used:

  1. None (no adversarial training was utilized)

  2. Fast is Better than Free (FBF):

    @article{wong2020fast,
      title={Fast is better than free: Revisiting adversarial training},
      author={Wong, Eric and Rice, Leslie and Kolter, J Zico},
      journal={arXiv preprint arXiv:2001.03994},
      year={2020}
    }
    

NLP models have discrete inputs, so one cannot compute a gradient with respect to the model input to estimate the worst-case perturbation for a given set of model weights. Therefore, in NLP, adversarial training cannot be thought of as a defense against adversarial inputs.

Adversarial training is instead performed by perturbing the embedding vector before it is used by downstream tasks. Because the embedding is a continuous input, the model can be differentiated with respect to its input. However, this raises another question: what precisely do adversarial perturbations in the embedding space mean for the semantic knowledge contained within that vector? For this reason, adversarial training in NLP is viewed through the lens of data augmentation.

For Round6 there are two options for adversarial training: {None, FBF}. Unlike Round 4, we are including an option to have no adversarial training since we do not know the impacts of adversarial training on the downstream trojan detection algorithms in this domain.

Within FBF there are 2 parameters:
  • ratio = {0.1, 0.3}

  • eps = {0.01, 0.02, 0.05}

During adversarial training the input sentence is converted into tokens, and then passed through the embedding network to produce the embedding vector. This vector is a list of N FP32 numbers, where N is the dimensionality of the embedding. This continuous representation is then used as the input to the sentiment classification component of the model. Normal adversarial training is performed starting from the embedding, allowing the adversarial perturbation to modify the embedding vector in order to maximize the current model loss.
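
A hedged sketch of one such training step in the style of FBF (random initialization plus a single FGSM step on the embedding input; the use of ratio as the fraction of perturbed batches is an assumption about how that parameter is applied):

import torch

def fbf_batch_loss(classifier, embeddings, labels, loss_fn, eps=0.01, ratio=0.1):
    # Perturb only a fraction of batches, controlled by ratio
    if torch.rand(1).item() < ratio:
        # Random start within the eps-ball, then one FGSM step up the loss gradient
        delta = torch.zeros_like(embeddings).uniform_(-eps, eps).requires_grad_(True)
        loss = loss_fn(classifier(embeddings + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = torch.clamp(delta + eps * grad.sign(), -eps, eps).detach()
        embeddings = embeddings + delta
    return loss_fn(classifier(embeddings), labels)  # caller backprops and steps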

All of these factors are recorded (when applicable) within the METADATA.csv file included with each dataset.

Hypothesis

The central hypothesis being tested during this round is that the following two factors will increase the trojan detection difficulty compared to Round 5.

  1. Reducing the number of training data points (used for calibrating trojan detectors) to just 48. In real-world deployment situations, trojan detectors won't have copious amounts of i.i.d. AI models trained with and without triggers to calibrate against. This more closely aligns the research situation with a real-world application.

  2. Increasing the number of different possible triggers to about 1400, which makes it impossible to simply enumerate all triggers seen in the training data to obtain a signal from the model. While in operational situations the number of possible triggers is effectively infinite, a subset of about 1400 neutral-sentiment triggers was generated to simulate the variety in potential triggers.

Data Structure

The archive contains a set of folders named id-<number>. Each folder contains the trained AI model file in PyTorch format named "model.pt", the ground truth of whether the model was poisoned (ground_truth.csv), and a folder of example text for each class the AI was trained to classify the sentiment of.

The trained AI models expect NTE-dimension inputs: N is the batch size (1 if only a single example is being inferenced), T is the number of time points being fed into the RNN (1 for all models in this dataset), and E is the length of the embedding (768 for DistilBERT). Each text input needs to be loaded into memory, converted into tokens with the appropriate tokenizer (the name of the tokenizer can be found in the config.json file), and then converted from tokens into the embedding space the text sentiment classification model is expecting (the name of the embedding can be found in the config.json file). See https://github.com/usnistgov/trojai-example for how to load and inference example text.
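
A hedged sketch of that inference flow (the DistilBERT flavor, example folder, and [CLS] extraction follow the descriptions above; consult each model's config.json and the trojai-example repository for the authoritative version):

import torch
from transformers import AutoModel, AutoTokenizer

# Load the trained sentiment classifier for one model folder
classifier = torch.load('id-00000000/model.pt', map_location='cpu')
classifier.eval()

# Tokenize and embed the text, then keep only the [CLS] embedding
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
embedder = AutoModel.from_pretrained('distilbert-base-uncased')
tokens = tokenizer('The product broke after one day.', return_tensors='pt')
with torch.no_grad():
    cls_vec = embedder(**tokens).last_hidden_state[:, 0, :]  # [N=1, E=768]
    logits = classifier(cls_vec.unsqueeze(1))  # reshape to [N, T=1, E]
print(logits)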

See https://pages.nist.gov/trojai/docs/data.html for additional information about the TrojAI datasets.

File List:

  • Folder: embeddings Short description: This folder contains the frozen versions of the pytorch (HuggingFace) embeddings which are required to perform sentiment classification using the models in this dataset.

  • Folder: tokenizers Short description: This folder contains the frozen versions of the pytorch (HuggingFace) tokenizers which are required to perform sentiment classification using the models in this dataset.

  • Folder: models Short description: This folder contains the set of all models released as part of this dataset.

    • Folder: id-00000000/ Short description: This folder represents a single trained sentiment classification AI model.

      1. Folder: clean_example_data/ Short description: This folder contains a set of 20 example text sequences taken from the training dataset used to build this model.

      2. Folder: poisoned_example_data/ Short description: If it exists (only applies to poisoned models), this folder contains a set of 20 example text sequences taken from the training dataset. Poisoned examples only exist for the classes which have been poisoned. The trigger which causes model misclassification has been applied to these examples.

      3. File: config.json Short description: This file contains the configuration metadata used for constructing this AI model.

      4. File: clean-example-accuracy.csv Short description: This file contains the trained AI model’s accuracy on the example data.

      5. File: clean-example-logits.csv Short description: This file contains the trained AI model’s output logits on the example data.

      6. File: clean-example-cls-embedding.csv Short description: This file contains the embedding representation of the [CLS] token summarizing the semantic content of each example text sequence.

      7. File: poisoned-example-accuracy.csv Short description: If it exists (only applies to poisoned models), this file contains the trained AI model’s accuracy on the example data.

      8. File: poisoned-example-logits.csv Short description: If it exists (only applies to poisoned models), this file contains the trained AI model’s output logits on the example data.

      9. File: ground_truth.csv Short description: This file contains a single integer indicating whether the trained AI model has been poisoned by having a trigger embedded in it.

      10. File: poisoned-example-cls-embedding.csv Short description: If it exists (only applies to poisoned models), this file contains the embedding representation of the [CLS] token summarizing the semantic content of each poisoned example text sequence.

      11. File: log.txt Short description: This file contains the training log produced by the trojai software while the model was being trained.

      12. File: machine.log Short description: This file contains the name of the computer used to train this model.

      13. File: model.pt Short description: This file is the trained AI model file in PyTorch format.

      14. File: model_detailed_stats.csv Short description: This file contains the per-epoch stats from model training.

      15. File: model_stats.json Short description: This file contains the final trained model stats.

    • Folder: id-<number>/ <see above>

  • File: DATA_LICENCE.txt Short description: The license this data is being released under. It is a copy of the NIST license available at https://www.nist.gov/open/license

  • File: METADATA.csv Short description: A csv file containing ancillary information about each trained AI model.

  • File: METADATA_DICTIONARY.csv Short description: A csv file containing explanations for each column in the metadata csv file.
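
For example, the metadata can be loaded to separate clean from poisoned models (the poisoned column name is an assumption; check METADATA_DICTIONARY.csv for the actual schema):

import pandas as pd

meta = pd.read_csv('METADATA.csv')
# Column name 'poisoned' is assumed; see METADATA_DICTIONARY.csv
poisoned = meta[meta['poisoned'] == True]
print(len(poisoned), 'of', len(meta), 'models are poisoned')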