Shots! Shots! Shots!

[Image: an array of shot glasses filled with a vibrant green drink]

Have you ever heard the chant "shots! shots! shots!" before? Well, in the world of Large Language Models, "shots" refers to the number of examples provided to the LLM in the prompt.

Zero-shot learning

Zero-shot learning refers to an LLM's ability to perform a task without being given any examples of that task in the prompt. The LLM relies solely on its pre-trained knowledge to complete the task. For example, say you have an LLM that has been pre-trained on a large corpus of text. You can ask it to perform a task like sentiment analysis on a new piece of text without providing any examples of sentiment analysis, and it will use its pre-trained knowledge of language to perform the task.

Zero-shot learning prompt

Review: Food is very bad
Sentiment:

Completion

Negative
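
To make this concrete, here is a minimal sketch of sending the zero-shot prompt above to a model. It assumes the openai Python client with an OPENAI_API_KEY set in the environment, and the model name is an assumption; any chat-style completion API works the same way.

# Minimal zero-shot sketch, assuming the `openai` Python package
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
    messages=[{
        "role": "user",
        "content": "Review: Food is very bad\nSentiment:",
    }],
)

print(response.choices[0].message.content)  # expected completion: "Negative"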

One-shot learning

One-shot learning involves providing the LLM with exactly one example of the task in the prompt. The LLM then uses this example to generalize and perform the task on new, unseen input.

One-shot learning prompt

Review: Food is very bad
Sentiment: Negative
Review: I liked the taste of every dish
Sentiment:

Completion

Positive
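
Because one-shot and few-shot prompts differ only in how many labeled examples precede the new input, they are easy to assemble programmatically. Below is a minimal Python sketch; build_prompt is a hypothetical helper written for this post, not part of any library.

# Hypothetical helper: builds an n-shot prompt from labeled examples.
# One (review, sentiment) pair yields a one-shot prompt; several yield few-shot.
def build_prompt(examples, new_review):
    lines = []
    for review, sentiment in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {sentiment}")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

one_shot_prompt = build_prompt(
    [("Food is very bad", "Negative")],
    "I liked the taste of every dish",
)
print(one_shot_prompt)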

Few-shot learning

Few-shot learning refers to providing the LLM with more than one example in the prompt. This can further improve the LLM's performance on the task, since it has more examples from which to learn the task's pattern.

Few-shot learning prompt

Review: Food is very bad
Sentiment: Negative
Review: I liked the taste of every dish
Sentiment: Positive
Review: The service was slow and the staff was rude
Sentiment: Negative
Review: The ambiance was cozy and the music was soothing
Sentiment: Positive
Review: The portions were small and the prices were high
Sentiment: Negative
Review: The food was average, nothing special but not bad either
Sentiment: Neutral
Review: The restaurant was clean and well-maintained
Sentiment: Positive
Review: The menu had limited options, but the food was decent
Sentiment: Neutral
Review: The wait time for our food was too long, but the taste made up for it
Sentiment:

Completion

Neutral
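
Using the same hypothetical build_prompt helper sketched in the one-shot section, a few-shot prompt is just a longer list of examples ahead of the query:

few_shot_prompt = build_prompt(
    [
        ("Food is very bad", "Negative"),
        ("I liked the taste of every dish", "Positive"),
        ("The service was slow and the staff was rude", "Negative"),
        ("The food was average, nothing special but not bad either", "Neutral"),
    ],
    "The wait time for our food was too long, but the taste made up for it",
)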

Tip

Unlike alcohol shots, which can reduce human cognitive performance, more shots for Large Language Models can actually improve their performance.

Adding Instruction

Adding an instruction to any of the above methods can further improve the LLM's performance by providing more context and guidance on what the LLM is expected to do. This helps the model better understand the task and generate more accurate and relevant responses.

For example, the original zero-shot learning prompt is:

Review: Food is very bad
Sentiment:

By adding an instruction, the prompt becomes:

Determine the sentiment of the given review (Positive, Negative, or Neutral)
Review: Food is very bad
Sentiment:

This added instruction explicitly asks the LLM to determine the sentiment of the review, making it clearer what the desired output should be. As a result, the LLM is more likely to generate the correct response, which in this case is "Negative". Clear instructions help the model focus on the specific task and reduce the chance of unrelated or ambiguous responses.
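
In code, adding an instruction is just prepending a line to the prompt. Reusing the hypothetical build_prompt helper from earlier (an empty example list reproduces the zero-shot prompt):

instruction = "Determine the sentiment of the given review (Positive, Negative, or Neutral)"
prompt = instruction + "\n" + build_prompt([], "Food is very bad")
print(prompt)
# Determine the sentiment of the given review (Positive, Negative, or Neutral)
# Review: Food is very bad
# Sentiment: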