Chapter 2 of 10

Basic Techniques: Zero-Shot, Few-Shot & Role Prompting

Master the three fundamental prompting techniques that form the basis of all advanced methods.


Zero-Shot Prompting

Zero-shot prompting means giving the model a task without any examples. You simply describe what you want and let the model figure out how to do it based on its training.

This is the most common way people use AI — just typing a question or instruction. It works well for straightforward tasks where the expected output format is obvious.

Zero-shot works best when the task is well-known (summarization, translation, Q&A), the expected format is standard, and the model has strong training data for that domain. It struggles with unusual formats, domain-specific jargon, or tasks where precision matters.

Zero-Shot Classification (Claude 4.6)

Prompt:

Classify the following customer review as Positive, Negative, or Neutral:

"The product arrived on time but the packaging was damaged. The item itself works fine though."

Classification:

Output:

Neutral

The review contains both positive elements (arrived on time, works fine) and negative elements (damaged packaging), making it a mixed/neutral sentiment overall.
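In application code, a zero-shot prompt is nothing more than the task description plus the input, with no examples in between. A minimal sketch (the function name and template are illustrative, not from any SDK):

```javascript
// Build a zero-shot classification prompt: task description + input, no examples.
function zeroShotClassify(review) {
  return [
    "Classify the following customer review as Positive, Negative, or Neutral:",
    `"${review}"`,
    "Classification:",
  ].join("\n");
}

const prompt = zeroShotClassify(
  "The product arrived on time but the packaging was damaged."
);
console.log(prompt);
```

The trailing "Classification:" cue nudges the model to answer with just the label rather than a free-form paragraph.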

Few-Shot Prompting

Few-shot prompting provides the model with one or more examples of the input-output pattern you want before giving it the actual task. This is one of the most powerful techniques because it shows the model exactly what you expect.

The key insight is that models are pattern-matching engines. When you show them examples, they detect the pattern and apply it to new inputs. This works even for custom formats the model has never seen in training.

Best practices for few-shot examples: Use 2-3 examples (more is not always better), make examples representative of the real task, keep the format consistent across examples, and place your actual task last.
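These practices are easiest to follow when the prompt is generated from structured example pairs, which keeps the format consistent automatically and always places the real task last. A sketch under those assumptions (all names are illustrative):

```javascript
// Build a few-shot prompt: instruction, 2-3 worked examples, then the real query last.
function buildFewShotPrompt(instruction, examples, query) {
  const shots = examples.map(
    (ex) =>
      `Review: "${ex.review}"\nProduct: ${ex.product}\nPrice: ${ex.price}\nSentiment: ${ex.sentiment}`
  );
  // The actual task goes last, ending where the model should continue the pattern.
  return [instruction, ...shots, `Review: "${query}"\nProduct:`].join("\n\n");
}

const prompt = buildFewShotPrompt(
  "Extract the product, price, and sentiment from each review.",
  [
    {
      review: "Love my new AirPods Pro for $249, best purchase this year!",
      product: "AirPods Pro",
      price: "$249",
      sentiment: "Positive",
    },
  ],
  "Just got the Sony WH-1000XM6 for $348, the noise cancellation is incredible."
);
```

Because every example flows through the same template, adding or swapping examples cannot accidentally break the format the model is supposed to imitate.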

Few-Shot Entity Extraction (GPT-5)

Prompt:

Extract the product, price, and sentiment from each review.

Review: "Love my new AirPods Pro for $249, best purchase this year!"
Product: AirPods Pro
Price: $249
Sentiment: Positive

Review: "The Samsung Galaxy S25 at $799 is overpriced for what you get."
Product: Samsung Galaxy S25
Price: $799
Sentiment: Negative

Review: "Just got the Sony WH-1000XM6 for $348, the noise cancellation is incredible."
Product:

Output:

Product: Sony WH-1000XM6
Price: $348
Sentiment: Positive

Role Prompting (System Prompts)

Role prompting assigns a persona or identity to the model before giving it a task. By saying "You are a senior Python developer" or "You are a medical researcher," you activate the model's relevant knowledge and adjust its communication style.

Role prompting works because models have learned from text written by people in different roles. When you set a role, the model weights its responses toward patterns from that domain.

Effective roles are specific, not generic. "You are an expert" is weak. "You are a senior backend engineer with 10 years of experience in Python, specializing in API design and database optimization" is strong because it narrows the knowledge domain.

Roles can also set tone and audience. "You are a patient kindergarten teacher explaining to a 5-year-old" produces very different output than "You are a PhD researcher writing for a peer-reviewed journal."
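In chat-style APIs, the role usually lives in a system message that persists across turns while the task goes in a user message. A sketch using the widely used system/user message shape (check your provider's docs for the exact field names):

```javascript
// Put a specific role in the system message and the task in the user message.
function withRole(role, task) {
  return [
    { role: "system", content: role },
    { role: "user", content: task },
  ];
}

const messages = withRole(
  "You are a senior backend engineer with 10 years of experience in Python, " +
    "specializing in API design and database optimization.",
  "Review this endpoint design for pagination issues."
);
```

Keeping the role in the system message means it applies to every subsequent turn of the conversation, not just the first question.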

Role Prompting for Technical Review (Claude 4.6)

Prompt:

You are a senior security engineer conducting a code review. You specialize in OWASP Top 10 vulnerabilities and have 12 years of experience with Node.js applications.

Review this Express.js route handler and identify security issues:

app.get("/user", (req, res) => {
  const userId = req.query.id;
  const query = `SELECT * FROM users WHERE id = ${userId}`;
  db.query(query, (err, result) => {
    res.json(result);
  });
});

Output:

Critical Issues Found:

1. SQL Injection (OWASP A03): Direct string interpolation of user input into the SQL query. An attacker can pass id=1 OR 1=1 to dump the entire users table. Fix: use parameterized queries: db.query("SELECT * FROM users WHERE id = ?", [userId])
2. No Input Validation: userId is not validated as a number. Add parseInt() or a validation library.
3. No Authentication/Authorization: No check that the requesting user has permission to access the requested user record.
4. Information Disclosure: Returning SELECT * exposes all columns, including potentially sensitive fields like password hashes.

Combining Techniques

The real power comes from combining these techniques. A production prompt often uses all three: a role sets the expertise level, few-shot examples show the expected format, and the actual task uses zero-shot within that established context.

For example, you might set the role as "You are a data analyst," provide two examples of how you want CSV data summarized, and then give the actual CSV data to analyze. The role ensures domain expertise, the examples lock in the format, and the model handles the new data.
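The data-analyst example above can be sketched as one chat-style request: the role in the system message, each few-shot example as a user/assistant pair the model can imitate, and the new task last. This pairing pattern is a common convention, not the only valid one; a single user message containing the examples also works:

```javascript
// Combine role + few-shot examples + task in one chat-style message list.
function combinedPrompt(role, examples, task) {
  const messages = [{ role: "system", content: role }];
  for (const ex of examples) {
    // Each worked example becomes a user/assistant pair showing the expected format.
    messages.push({ role: "user", content: ex.input });
    messages.push({ role: "assistant", content: ex.output });
  }
  messages.push({ role: "user", content: task }); // the real task goes last
  return messages;
}

const messages = combinedPrompt(
  "You are a data analyst.",
  [
    {
      input: "Summarize: region,sales\nEast,100",
      output: "East region: 100 total sales.",
    },
  ],
  "Summarize: region,sales\nWest,250"
);
```

The role sets the expertise, the assistant turns lock in the format, and the final user message is handled zero-shot within that established context.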

As a rule of thumb: start with zero-shot for simple tasks, add a role when domain expertise matters, and add few-shot examples when the output format matters. If you are still not getting good results, that is when you move to the advanced techniques covered in Chapter 4.

Key Takeaways

  • Zero-shot works for straightforward tasks with obvious output formats
  • Few-shot examples are the most reliable way to control output format and style
  • Role prompting activates domain-specific knowledge and sets communication tone
  • Specific roles outperform generic ones — include years of experience, specialization, and context
  • Combine all three techniques for production-quality prompts

Try It Yourself

Take a task you regularly use AI for. Write three versions: zero-shot, with a role, and with 2 few-shot examples. Compare the outputs to see which technique helps most.
