RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

- Pretrained language models can generate toxic language even from innocuous prompts. - Controllable generation methods are not failsafe against toxic degeneration. - Web text corpora used to pretrain LMs contain offensive and toxic content. - RealToxicityPrompts dataset created to evaluate toxic generations by LMs. - Recommendations for better data selection processes for pretraining.

Large language models (LLMs) trained on massive web corpora have shown remarkable abilities in natural language generation and understanding. However, these models may also pick up and amplify undesirable traits from their training data, such as generating toxic or biased content. Quantifying and mitigating these toxic behaviors is crucial for developing safe and ethical implications of language models.

Thank you for reading this post, don't forget to subscribe!

In the paper “RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models”, Gehman et al. introduce a new benchmark dataset and evaluation framework for measuring the toxic degeneration of LLMs under adversarial prompting. They find that even highly capable models like GPT-3 can be coaxed into generating harmful content with carefully crafted prompts. The authors also explore methods for detoxifying language models during pre-training, fine-tuning, and inference.

In this review, we will take a deep dive into the methodology and results of the paper, with a focus on the mathematical details. We’ll cover the construction of the RealToxicityPrompts dataset, the evaluation metrics, the toxicity of existing LLMs, and methods for detoxification. We’ll analyze the strengths and limitations of the work and discuss future research directions. Let’s begin!

The RealToxicityPrompts Dataset

To study the toxic degeneration of LLMs, we first need a way to probe them with potentially problematic prompts and measure the toxicity of their outputs. The authors construct the RealToxicityPrompts dataset for this purpose.

The dataset consists of 100,000 prompts, each of which is a short text string that could plausibly be used to start a conversation with an LLM. The prompts are sourced from the OpenWebText corpus and filtered to remove personal information and offensive content. The prompts are then annotated by human raters for their expected toxicity – how likely they think an LLM would produce a toxic continuation.

Formally, let $\mathcal{P}$ be the set of prompts and $f: \mathcal{P} \rightarrow [0,1]$ be the annotator-specified toxicity function. The goal is to estimate the empirical toxicity distribution:

$$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^N f(x_i)$$

where $x_i \in \mathcal{P}$ are the prompts and $N = |\mathcal{P}|$ is the size of the dataset.

To get a high-quality estimate of $\hat{f}$, the authors employ a careful data collection procedure:

  1. Prompt Selection: The base prompts are selected from OpenWebText using heuristics to filter out offensive or sensitive content. The prompts are short (1-3 sentences) and open-ended to allow diverse continuations.
  2. Prompt Perturbation: To increase coverage, the base prompts are perturbed by techniques like backtranslation, word replacement, and text infilling. This expands the dataset by 10x.
  3. Human Annotation: The prompts are annotated by crowd workers on a 5-point Likert scale from “not at all likely” to “very likely” to lead to a toxic continuation. Each prompt is rated by 3 workers and the scores are averaged.
  4. Prompt Clustering: The annotated prompts are clustered using k-means on their BERT embeddings. This groups prompts into topical clusters for stratified evaluation.
  5. Data Splitting: The dataset is split into train (80%), validation (10%), and test (10%) sets for evaluating different detoxification methods.

The resulting RealToxicityPrompts dataset covers a diverse range of topics and toxicity levels. The expected toxicity scores follow a bell-shaped distribution with a mean of 2.7 and standard deviation of 1.1 (on the 1-5 scale). The most toxic prompts tend to mention controversial topics like politics, race, and violence.

Evaluating Toxic Degeneration

With the RealToxicityPrompts dataset in hand, we can now measure the toxic degeneration of LLMs. The authors propose a simple yet effective evaluation protocol:

  1. Generate Continuations: For each prompt $x_i$, generate $K$ continuations ${y_{i,1}, \dots, y_{i,K}}$ from the LLM using top-$p$ sampling with $p=0.9$ and a maximum length of 20 tokens.
  2. Measure Continuation Toxicity: Score the toxicity of each continuation $y_{i,j}$ using the Perspective API, a state-of-the-art toxicity classifier. Let $t(y) \in [0,1]$ denote the toxicity score.
  3. Aggregate Toxicity Scores: Compute the average toxicity score for each prompt:

$$s(x_i) = \frac{1}{K} \sum_{j=1}^K t(y_{i,j})$$

  1. Summarize Metrics: Report the following metrics over the test set:
  • Average Toxicity: The mean toxicity score across all prompts.
  • Expected Maximum Toxicity: The expected maximum toxicity score over $K$ continuations for a random prompt, estimated as: $$\text{EMT} = \frac{1}{N} \sum_{i=1}^N \max_{j=1}^K t(y_{i,j})$$
  • Toxicity Probability: The probability that a random continuation has toxicity score greater than a threshold $\tau$: $$\text{TP}(\tau) = \frac{1}{NK} \sum_{i=1}^N \sum_{j=1}^K \mathbf{1}[t(y_{i,j}) > \tau]$$

Intuitively, the Average Toxicity measures the overall harm of the model, the Expected Maximum Toxicity measures the worst-case harm, and the Toxicity Probability measures the frequency of harm at different thresholds.

The authors evaluate several pre-trained LLMs using this protocol, including GPT-2, GPT-3, CTRL, and XLNet. They find that all models exhibit significant toxic degeneration, with GPT-3 having the highest Expected Maximum Toxicity of 0.84 (i.e. 84% of continuations have maximum toxicity). The Toxicity Probability also increases with model size, suggesting that larger models are more prone to toxic degeneration.

Qualitatively, the generated toxicity spans a wide range of harmful behaviors, including threats, profanity, hate speech, and explicit content. Many toxic outputs appear coherent and on-topic, making them difficult to detect without careful analysis.

Methods for Detoxification

Given the prevalence of toxic degeneration in LLMs, it’s important to develop methods to mitigate these harmful behaviors. The authors explore three classes of detoxification methods:

  1. Data-based Methods: These methods aim to filter out toxic content from the pre-training data. The authors experiment with keyword filtering, sentiment filtering, and toxicity score filtering using the Perspective API. They find that aggressive filtering can reduce toxicity but also hurts perplexity and generation quality.
  2. Model-based Methods: These methods modify the LLM architecture or training objective to discourage toxic generations. The authors experiment with:
  • Toxicity Classifiers: Training a separate toxicity classifier on the continuations and using its predictions to penalize the LLM’s loss function.
  • Contrastive Learning: Training the LLM to maximize the likelihood of non-toxic continuations and minimize the likelihood of toxic ones using a contrastive objective.
  • Attribute Conditioning: Conditioning the LLM on a “non-toxic” attribute token during training and inference to steer generations away from toxicity.
  1. Inference-time Methods: These methods post-process the LLM’s outputs to remove or mitigate toxicity. The authors experiment with:
  • Toxicity Filtering: Generating multiple continuations and filtering out those that exceed a toxicity threshold.
  • Prompt Engineering: Designing prompts that are less likely to trigger toxic generations, e.g. by adding disclaimers or specifying a non-toxic intent.
  • Controlled Decoding: Using techniques like top-$k$ sampling, nucleus sampling, or beam search to steer generations towards less toxic outputs.

The authors evaluate these methods on the RealToxicityPrompts dataset and find that a combination of model-based and inference-time methods works best. In particular, fine-tuning GPT-3 on a filtered dataset with a contrastive objective and decoding with top-$p$ sampling reduces the Expected Maximum Toxicity by 30% while maintaining perplexity within 5% of the baseline.

However, no single method completely eliminates toxic degeneration, and there is often a trade-off between toxicity reduction and generation quality. The authors argue that detoxification should be seen as a multi-objective optimization problem, balancing the goals of minimizing harm and maximizing usefulness.

Analysis and Discussion

The RealToxicityPrompts dataset and evaluation framework provide a valuable tool for quantifying the toxic behaviors of language models. The results show that even state-of-the-art models like GPT-3 can degenerate into harmful outputs under adversarial prompting. This highlights the need for better detoxification methods and more robust architectures.

The proposed detoxification methods span a range of approaches, from data filtering to model modification to inference-time control. The most effective methods combine multiple strategies, suggesting that a holistic approach is needed to mitigate toxicity.

However, the current methods also have some limitations:

  1. Toxicity Definition: The definition of toxicity used in the paper (based on the Perspective API) is broad and may not capture all types of harmful content. More fine-grained and context-dependent annotations may be needed.
  2. Evaluation Metrics: The evaluation metrics focus on the probability and severity of toxicity, but do not directly measure the coherence or usefulness of the generated text. Balancing toxicity reduction with generation quality remains an open challenge.
  3. Prompt Distribution: The RealToxicityPrompts dataset is based on prompts from web text and may not cover all possible user inputs. Evaluating detoxification methods on a wider range of prompts, including adversarial ones, is important for robustness.
  4. Language and Culture: The paper focuses on English-language models and Western notions of toxicity. Extending the framework to other languages and cultural contexts is an important direction for future work.

Despite these limitations, the paper makes significant contributions to the study of neural toxic degeneration. The RealToxicityPrompts dataset provides a standardized benchmark for evaluating detoxification methods, and the proposed methods advance the state-of-the-art in controllable language generation.

Conclusion and Future Work

The paper “RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models” tackles the important problem of measuring and mitigating the toxic behaviors of large language models. The authors introduce a new dataset and evaluation framework for quantifying toxic degeneration under adversarial prompting, and propose several methods for detoxifying LLMs during pre-training, fine-tuning, and inference.

The results show that current LLMs are prone to generating harmful content when prompted with sensitive topics, and that a combination of data filtering, model modification, and inference-time control is needed to effectively reduce toxicity. However, challenges remain in defining and annotating toxicity, balancing detoxification with generation quality, and extending the methods to diverse languages and contexts.

Future work could explore more advanced detoxification methods, such as reinforcement learning, adversarial training, or model distillation. Developing better evaluation metrics that capture both the toxicity and coherence of generated text is also an important direction. Finally, studying the social and ethical implications of detoxification, such as the potential for censorship or bias, is crucial for responsible AI development.

As language models become more powerful and widely deployed, ensuring their safety and robustness is a key challenge. The RealToxicityPrompts paper provides a valuable framework for studying this challenge and advancing the field of controllable language generation. With further research and refinement, we can develop LLMs that are both capable and ethical, generating useful and harmless content for a wide range of applications.