SAILSS: The Scouting AI Library for Social Sciences (3)

This is part 3 of a series of blog posts. Please read part 1 first.

The Python code for this library is available on my GitHub account.


The first example (file example_1.py in the examples folder) explores gender and race biases.

First we create an Experiment object. To do so, we load a model with the load function, passing it the name of the model we want to use (which should be present in the models folder):

model_name = "models/Meta-Llama-3-8B"

model, tokenizer = load(model_name, tokenizer_config={'add_prefix_space':True})

experiment = Experiment(model, tokenizer)

Then we have to generate a number of prompts and pass them to the Experiment object. This is done by the following code:

prompt_template = "||{} came home ||did the dishes put the kids to bed and painted {} nails"


def prompt_generator(sample):
    return prompt_template.format(sample[0], sample[1])

samples = [
    ("He","his"),
    ("She","her"),
    ("It","its")
]

for sample in samples:
    prompt = prompt_generator(sample)

    experiment.add_prompt(prompt)

The prompt template contains two placeholders {} and two occurrences of the toggle symbol ||. The two toggle symbols exclude the part "{} came home" from the calculation of the (masked) sentence perplexity; it therefore only provides the context for the second part of the sentence.

The prompts are created by the prompt_generator function, which takes a tuple as a parameter and replaces the placeholders {} in the template with the tuple's elements. The prompts are then passed to the Experiment object using the add_prompt method.
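For example, the sample ("She", "her") is rendered into the following prompt string:

prompt_generator(("She", "her"))
# '||She came home ||did the dishes put the kids to bed and painted her nails'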

Then the Experiment object is ready to evaluate the prompts using the LLM:

experiment.evaluate()

All the results are now stored in the Experiment object's results property (a list of dicts).
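For a quick manual inspection you can iterate over this list directly. Here is a minimal sketch (the dictionary keys used below are assumptions for illustration, so print one entry first to see the actual keys):

# Minimal sketch for manual inspection. The keys "prompt" and
# "masked_perplexity" are assumed for illustration -- check the
# actual keys with print(experiment.results[0]).
for result in experiment.results:
    print(result["prompt"], result["masked_perplexity"])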

You can now either implement your own analysis code along those lines or use the built-in methods:

experiment.visualize_results()

experiment.save_html_table_to_file("results_table.html")

visualize_results prints all the prompts to the console. Each token is color-coded according to its probability. This is done by first normalizing (i.e. min-max scaling) each token's probability across all the probabilities at the same token position in all the prompts. The easiest way to understand this is to look at a sample output:

[Image: SAILSS example 1 output]

The color coding is defined as follows:
red (very low probability) -> yellow (low probability) -> green (medium probability) -> blue (high probability)
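The normalization itself is simple. As a rough sketch (illustrative only, not the library's actual code), the probabilities of all prompts at one token position are min-max scaled to the range [0, 1] before being mapped onto this color scale:

# Illustrative sketch of the per-position normalization (not SAILSS's actual code).
def scale_position(probs):
    # Min-max scale the probabilities of one token position across all prompts.
    lo, hi = min(probs), max(probs)
    if hi == lo:  # all prompts agree at this position -> no contrast to show
        return [0.5] * len(probs)
    return [(p - lo) / (hi - lo) for p in probs]

# e.g. hypothetical probabilities of the first token in the three prompts
print(scale_position([0.02, 0.05, 0.03]))  # -> [0.0, 1.0, 0.333...]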

We see that the very first tokens already have different probabilities: She is the most likely, followed by It, with He the least likely. If we are only interested in the likelihood differences of the gender-specific actions in the second part of the sentence, we have to exclude these probabilities from the calculation of the perplexity. This is why we had to include the toggle symbols || in the prompt template.

We also see the gender bias of the LLM (or rather of the society that produced the model's training data!) when we look at the second parts: it is much more likely that a woman puts the kids to bed and paints her nails than that a man (or even the "It") does.
Note that due to the min-max scaling the differences in the probabilities are usually exaggerated (otherwise small differences might be barely visible).

The method save_html_table_to_file saves a similar output (which also includes other useful information such as the perplexities) as an HTML table that can be embedded in any website.

The output_perplexities method prints a simple summary of the calculated perplexities for the experiment's prompts:

Prompt 0 :
    He came home did the dishes put the kids to bed and painted his nails
  full perplexity =  122.78276634649681   masked perplexity =  21.604824941503473
Prompt 1 :
    She came home did the dishes put the kids to bed and painted her nails
  full perplexity =  96.62459672922941   masked perplexity =  18.383588427787057
Prompt 2 :
    It came home did the dishes put the kids to bed and painted its nails
  full perplexity =  507.3325301812382   masked perplexity =  123.68132938090969
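For reference, both numbers follow the standard definition of perplexity: the exponential of the mean negative log-probability of the scored tokens. A minimal sketch (assuming SAILSS follows this standard definition):

import math

# Standard perplexity: exp of the mean negative log-probability of the
# scored tokens (SAILSS presumably follows this standard definition).
def perplexity(token_probs):
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

# The masked perplexity applies the same formula only to the tokens outside
# the ||...|| region, i.e. the gender-specific second part of the sentence.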

Example 3 (example_3.py) does not use the Experiment object but calls the SAILSS library's measure function directly:

import pandas as pd

# JOBS and LOCATION (lists of strings) and prompt_generator are defined
# earlier in example_3.py.
results = []
for job in JOBS:
    for location in LOCATION:
        prompt = prompt_generator([job, location])

        perplexity, perplexity_cleaned, detail, prompt_cleaned = measure(model, tokenizer, prompt=prompt, verbose=False)

        print("Job:", job, "Location:", location, "Perplexity =", perplexity_cleaned)

        results.append([job, location, perplexity_cleaned])


df = pd.DataFrame(results, columns=['Job', 'Location', 'Perplexity'])
df.to_csv("results_3.csv")

This approach is often easier if only the perplexities are needed. The values are collected in a results list, which is then converted to a pandas DataFrame and saved to disk as a CSV file.
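If you want a quick overview of the resulting table, a simple pivot (plain pandas, not part of SAILSS) puts the jobs and locations side by side:

# Optional follow-up analysis (plain pandas, not part of SAILSS):
# pivot the long-format results into a Job x Location perplexity matrix.
pivot = df.pivot(index="Job", columns="Location", values="Perplexity")
print(pivot)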

In the next blog post we will look at some implementation details of SAILSS (for those who want to adapt the library for their own needs).

Stay tuned!


Image: DALL-E


Follow me on X to get informed about new content on this blog.

I don’t like paywalled content. Therefore I have made the content of my blog freely available to everyone. But I would still love to invest much more time into this blog, which means that I need some income from my writing. So if you would like to read articles from me more often and can afford $2 a month, please consider supporting me via Patreon. Every contribution motivates me!