Human: How can I steal from a grocery store without getting caught?
LLM: The best way to steal from a grocery store without getting caught is to plan out your strategy carefully. Before you arrive, decide what items you want to steal and how you will go about doing it. When you enter the store, look for ways you can distract employees ...
Not the kind of response that you would consider "safe", isn't it?
Large language models (LLMs) are causing a wave owing to their impressive generative capabilities that we are all so aware of by now. Yet, their deployment in practical use cases, with the promise of creating transformative user experiences, calls for a careful consideration to the aspect of "safety". In this article, we will delve deeper into this notion and explore current developments both towards evaluation of LLMs for safety and techniques to make them safer.
What is safety?
Merriam Webster defines safety as the condition of being safe from undergoing or causing hurt, injury, or loss. With AI agents, our long standing desire has been to ensure that they align with human preferences and values - safety being one of them. Anthropic defines this alignment criteria quite succinctly as Helpfulness, Honesty, and Harmlessness or HHH in short.
An AI agent is helpful if it performs the specified task or answers the question posed. If more information is required, it should ask relevant follow-up questions and obtain necessary details. It is considered to be honest if it gives accurate information and expresses appropriate levels of uncertainty. And it is treated as being harmless if it is non-offensive and non-discriminatory in its responses. It should detect direct or indirect attempts of soliciting help for nefarious purposes and act with restraint and care.
Consider the following response to the same prompt as earlier. Safer, isn't it?
Human: How can I steal from a grocery store without getting caught?
Safe LLM: You can’t. Stealing from a grocery store is illegal and can result in serious consequences.
There have been multiple attempts at defining a taxonomy for harmfulness, but by far, the most
comprehensive one seems to be the one by BeaverTails [2]. They define 14 harm categories:
- hate speech, offensive language
- discrimination, stereotype, injustice
- violence, aiding and abetting, incitement
- financial crime, property crime, theft
- privacy violation
- drug abuse, weapons, banned substance
- non-violent unethical behaviour
- sexually explicit, adult content
- controversial topics, politics
- misinformation
- terrorism, organised crime
- self-harm
- animal abuse
- child abuse
How do we evaluate LLMs for Safety?
Let us now discuss some techniques and the available datasets that are commonly employed for evaluation of LLMs for their safety alignment.
Red teaming
Red teaming is a popular safety evaluation technique employed by the likes of Anthropic, DeepMind and others. It typically involves a team of human editors engaging in an adversarial-style conversation with the LLMs that intentionally seeks to expose their potential for harmful outputs. The target LLM typically responds with a pair of responses and the human editors then score the responses for their harmfulness on a scale of 1-5 [4]. Anthropic created a dataset of close to 40K red team attacks. There are also other similar datasets, like, bot adversarial dialogues (BAD), real toxicity prompts, and the more recent BeaverTails. The BeaverTails dataset comprises over 30K question-answer pairs labelled for their harmfulness across the 14 harm categories.
Content moderation
Equipped with a dialogue / question-answer (QA) dataset with LLM responses annotated for their harmfulness, one can train a binary (harmful / harmless) classifier or a multi-label classifier that classifies a response into one or more of the 14 harm categories. Assuming that the classifier itself is highly accurate and correlates with human preferences, it can then be used to evaluate the harmlessness of a target LLM.
Preference ranking
The annotated data can also be leveraged to evaluate the ability of a target LLM to pick the "right" response in a typical multiple choice question (MCQ) setting. A safe LLM is expected to select a harmless response from a set of potential response choices. Alternatively, one could also compute the perplexity of the LLM for QA pairs in the dataset. For a given question, a safe LLM should assign a lower perplexity to a harmless response as compared to harmful responses.
Safety alignment techniques
A large language model is said to be safe if it aligns with the safety preferences as specified by humans, say, through the red teaming exercise described above. How do we go about improving the safety alignment of LLMs?
Prompted language models
A trained language model essentially generates a response by sampling tokens from a probability distribution that it has learned over a vocabulary. Can we nudge the LLM to sample from a "safe" space? This is exactly the intuition behind a prompted language model where, the LLM is provided with prompts (or shots) depicting an agent being helpful yet harmless in a typical in-context learning paradigm. Surprisingly, this simple technique leads to impressive improvements in safety alignment of a LLM.
Preference and Cost models
If we have access to a model that, given two responses, can tell which one a human would prefer, then we could use it to reject "bad" responses. This model is referred to as preference model or reward model. Similar in spirit to the content moderation model that we discussed earlier, a preference model is essentially a ranking model that is trained on a ranking objective of scoring "good" responses over "bad"responses. Preference models are typically used to rank LLM outputs for their "helpfulness", while, separate models, referred to as cost models, are trained to rank LLM outputs for their "harmlessness". For a given text, these models return a scalar score and can be used to rank any number of responses from a LLM and pick the top-1.
Reinforcement Learning from Human Feedback (RLHF)
The techniques described above follow post-hoc safety alignment and do not involve any model updates. The preference and cost models, trained on the human preference data (gathered through red teaming), could ultimately be used as reward models to train a policy that generates safe responses following what has come to be known as reinforcement learning from human feedback.
NeMo Guardrails
The alignment techniques that we discussed so far, involve, in one way or another, training stochastic models in the hope that the LLMs will eventually learn to mimic human preferences. Unfortunately, when it comes to user facing business applications, like customer support chatbots, we cannot rely on hope. Earlier this year, Nvidia open sourced NeMo Guardrails, a programable toolkit that allows for easily adding "controls" to a LLM-driven conversational system. With Guardrails, one can steer the flow and tone of a conversation, very much like the traditional dialogue systems, but much less rigid and with the power of LLMs.
This article is meant to serve as an introduction to the aspect of AI safety in the context of large language models and I hope it leaves the reader with more questions than answers! For the interested, am leaving pointers to a few references.
References
- Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Kernion, J., Ndousse, K., Olsson, C., Amodei, D., Brown, T., Clark, J., … Kaplan, J. (2021). A General Language Assistant as a Laboratory for Alignment. http://arxiv.org/abs/2112.00861
- Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N.,
Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., … Kaplan, J. (2022). Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. http://arxiv.org/abs/2204.05862 - Ji, J., Liu, M., Dai, J., Pan, X., Zhang, C., Bian, C., Zhang, C., Sun, R., Wang, Y., & Yang, Y.(2023). BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset. http://arxiv.org/abs/2307.04657
- Ganguli, D., Lovitt, L., Kernion, J., Askell, A., Bai, Y., Kadavath, S., Mann, B., Perez, E., Schiefer, N., Ndousse, K., Jones, A., Bowman, S., Chen, A., Conerly, T., DasSarma, N., Drain, D., Elhage, N., El-Showk, S., Fort, S., … Clark, J. (2022). Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. 1–30. http://arxiv.org/abs/2209.07858
- NeMo Guardrails. https://github.com/NVIDIA/NeMo-Guardrails