The Code Generation Prompt Formula
Effective code generation prompts follow a consistent formula: Language and Framework + Task Description + Constraints + Example Input/Output + Quality Requirements.
A vague prompt like "write a function to sort data" produces generic, unusable code. A structured prompt like "Write a TypeScript function that sorts an array of objects by a specified key, supporting both ascending and descending order. Use generics for type safety. Handle edge cases: empty array, missing key, null values. Include JSDoc comments." produces production-ready code.
Always specify: the programming language and version, the framework or library context, input and output types, edge cases to handle, coding style preferences (functional vs OOP, naming conventions), and whether you want comments and error handling.
Prompt:
Language: Python 3.12
Task: Create a rate limiter decorator
Requirements:
- Uses a sliding window algorithm
- Configurable max requests and time window
- Thread-safe using threading.Lock
- Raises a custom RateLimitExceeded exception when limit is hit
- Include type hints for all parameters and return values
- Add docstring with usage example
Do not use any external libraries.
Output:
The model produces a complete, well-structured Python decorator with all specified requirements: sliding window logic, thread safety, custom exception class, full type hints, and a docstring with example usage.
Debugging Prompts
When using AI to debug code, provide three things: the code that is not working, the expected behavior, and the actual behavior (error message or wrong output).
A good debugging prompt looks like: "This function should return the total price after discount, but it returns NaN when the discount is 0%. Here is the code: [code]. Here is the test case that fails: [test]. What is the bug and how do I fix it?"
For complex bugs, ask the model to trace through the execution step by step. This combines code analysis with chain-of-thought reasoning and often catches subtle issues like off-by-one errors, type coercion bugs, or race conditions.
Refactoring and Code Review
AI excels at code refactoring when given clear criteria. Instead of "make this code better," specify what better means: "Refactor this function to reduce cyclomatic complexity below 10, extract magic numbers into named constants, and replace the nested callbacks with async/await."
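Taking just the magic-number criterion, a hypothetical before/after (snippet and names invented for illustration) shows the kind of transformation such a prompt asks for:

```python
# Before: magic numbers obscure the pricing rule.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg > 20:
        return weight_kg * 1.5 + 10
    return weight_kg * 1.5


# After: named constants make the rule explicit and easy to change.
RATE_PER_KG = 1.5
HEAVY_THRESHOLD_KG = 20
HEAVY_SURCHARGE = 10


def shipping_cost_refactored(weight_kg: float) -> float:
    surcharge = HEAVY_SURCHARGE if weight_kg > HEAVY_THRESHOLD_KG else 0
    return weight_kg * RATE_PER_KG + surcharge
```

Because the criterion is concrete ("extract magic numbers into named constants"), you can verify mechanically that the refactored version preserves behavior.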
For code review, set a role and define what to look for: "You are a senior engineer reviewing a pull request. Check for: security vulnerabilities (especially SQL injection and XSS), performance issues (N+1 queries, unnecessary re-renders), coding standard violations (our team uses ESLint Airbnb config), and missing error handling. Format as a PR review with line-specific comments."
The more specific your review criteria, the more useful the feedback. Generic code review requests produce generic feedback.
Prompt:
Review this React component for:
1. Performance issues (unnecessary re-renders)
2. Accessibility violations (missing ARIA, keyboard nav)
3. TypeScript type safety (any types, missing generics)
4. React best practices (hooks rules, key props)

Format each issue as: [SEVERITY] Line X: Issue description → Suggested fix

```tsx
export default function UserList({ users }) {
  const [search, setSearch] = useState("")
  const filtered = users.filter(u => u.name.includes(search))
  return <div>
    <input onChange={e => setSearch(e.target.value)} />
    {filtered.map(u => <div onClick={() => alert(u.id)}>{u.name}</div>)}
  </div>
}
```
Output:
[HIGH] Line 1: Missing TypeScript types for props → Add interface UserListProps { users: User[] }
[MEDIUM] Line 3: Filter runs on every render → Wrap in useMemo
[HIGH] Line 6: Missing key prop on mapped elements → Add key={u.id}
[MEDIUM] Line 5: Input missing label and aria attributes → Add aria-label or an associated label
[LOW] Line 6: div with onClick not keyboard accessible → Use a button element instead
Test Generation
AI can generate comprehensive test suites when told what testing framework to use and what scenarios to cover.
The prompt formula is: "Write [unit/integration/e2e] tests for [function/component/endpoint] using [testing framework]. Cover these scenarios: [happy path, edge cases, error cases]. Use [describe/it or test] block structure. Mock [external dependencies]."
For best results, provide the source code and let the model identify edge cases: "Here is my function [code]. Write Jest tests covering: all happy paths, boundary values, error cases, and any edge cases you identify. Aim for >90% branch coverage."
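Applied in Python (using the standard-library unittest module in place of Jest, and a hypothetical `clamp` function as the code under test), the formula yields a suite like this:

```python
import unittest


def clamp(value: float, low: float, high: float) -> float:
    """Hypothetical function under test: constrain value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


class TestClamp(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_boundary_values(self):
        self.assertEqual(clamp(-1, 0, 10), 0)   # below range
        self.assertEqual(clamp(11, 0, 10), 10)  # above range
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on the lower bound

    def test_error_case(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)
```

Run with `python -m unittest <file>`. Each test method corresponds to one scenario category from the prompt formula: happy path, boundary values, and error cases.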
Documentation Generation
Code documentation is one of AI's strongest use cases because it involves understanding existing code and translating it to natural language — something models do exceptionally well.
For inline documentation: "Add JSDoc comments to every exported function in this file. Include @param, @returns, @throws, and @example tags."
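The Python analogue of that request targets docstrings rather than JSDoc tags. As a hypothetical illustration (function name and format invented), this is the kind of result such a prompt produces:

```python
import re


def parse_duration(text: str) -> int:
    """Convert a duration string such as "2h30m" into total minutes.

    Args:
        text: A duration in the form "<hours>h<minutes>m"; either part
            may be omitted (e.g. "45m" or "2h"), but not both.

    Returns:
        The total number of minutes represented by ``text``.

    Raises:
        ValueError: If ``text`` is empty or not in the expected format.

    Example:
        >>> parse_duration("2h30m")
        150
    """
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", text)
    if match is None or not text:
        raise ValueError(f"invalid duration: {text!r}")
    hours, minutes = (int(g) if g else 0 for g in match.groups())
    return hours * 60 + minutes
```

The docstring mirrors the JSDoc request one-for-one: Args for @param, Returns for @returns, Raises for @throws, and a doctest-style Example for @example.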
For README generation: "Write a README.md for this project based on the package.json and source files I provide. Include: project description, installation steps, usage examples, API reference, and contributing guidelines."
For API documentation: provide the route handlers and ask for OpenAPI/Swagger spec generation. Models can produce accurate endpoint documentation including request/response schemas, status codes, and example payloads.