Free AI Writing Prompts Cheat Sheets
Prompt builder guides and AI writing prompts based on official documentation. Build better prompts for ChatGPT, Claude, and Gemini.
1 10 Power Words for Better AI Prompts
Words That Transform AI Output Quality
Research shows that structured prompts consistently outperform free-form instructions. These words help signal clear intent to AI models.
1. "Step-by-step"
Activates Chain-of-Thought reasoning. Research by Wei et al. (2022) showed this dramatically improves performance on math, logic, and multi-step problems.
Instead of: "Solve this problem" Use: "Solve this step-by-step, showing your reasoning"
2. "Analyze"
Triggers systematic examination rather than surface-level summary.
Instead of: "Tell me about this data" Use: "Analyze this data for patterns, anomalies, and actionable insights"
3. "Be concise" / Word limits
OpenAI recommends explicit length constraints to control verbosity.
Instead of: "Explain quantum computing" Use: "Explain quantum computing in exactly 3 sentences"
4. "You are a [role]"
Role assignment activates domain-specific knowledge patterns. Anthropic documentation highlights this as a key technique.
Instead of: "Help me with my resume" Use: "You are an experienced tech recruiter. Review my resume for a senior developer position"
5. "Format as"
OpenAI's GPT-4.1 guide emphasizes explicit output formatting for reliable results.
Instead of: "List the benefits" Use: "Format the benefits as a markdown table with columns: Benefit | Impact | Effort"
6. "First... then... finally"
Structured sequencing helps models follow complex multi-part instructions.
Instead of: "Review and fix this code" Use: "First, identify bugs. Then, explain each issue. Finally, provide the corrected code"
7. "Compare"
Forces analytical, structured evaluation of multiple options.
Instead of: "Tell me about React and Vue" Use: "Compare React and Vue for enterprise applications, covering: learning curve, performance, ecosystem"
8. "Example:" or "Here's an example"
Few-shot prompting with examples significantly improves output consistency.
Instead of: "Write product descriptions" Use: "Write product descriptions like this example: Example: [your sample] Now write one for: [product]"
9. "Constraints:" or "Requirements:"
Explicit boundaries improve adherence to specifications.
Instead of: "Write an email" Use: "Write an email. Constraints: under 100 words, professional tone, include call-to-action"
10. "If [condition], then [action]"
Conditional instructions help handle edge cases and variations.
Instead of: "Translate this text" Use: "Translate to Spanish. If any terms are technical jargon, keep them in English with a translation note"
2 Chain of Thought Prompting Template
What is Chain of Thought (CoT)?
Chain of Thought prompting, introduced by Wei et al. (2022), enables complex reasoning by generating intermediate reasoning steps. Their research showed CoT significantly improves performance on arithmetic, commonsense, and symbolic reasoning tasks.
Zero-Shot CoT (Simplest Method)
Kojima et al. (2022) found that simply adding "Let's think step by step" activates reasoning capabilities without needing examples:
"Let's think step by step."
Or more explicitly:
"Before answering, think through this problem step by step."
Few-Shot CoT (With Examples)
Provide examples showing the reasoning process:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.
Q: [Your question here]
A: Let me work through this...
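When you reuse exemplars across many questions, a small builder keeps the Q/A format consistent. A sketch of the same pattern; `few_shot_cot` and the exemplar list are hypothetical:

```python
# Hypothetical builder (not from any library): prefix a new question
# with worked Q/A exemplars so the model imitates the reasoning format.
EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
        "5 + 6 = 11. The answer is 11.",
    ),
]

def few_shot_cot(question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in EXEMPLARS)
    return f"{shots}\n\nQ: {question}\nA: Let me work through this..."

print(few_shot_cot("A shelf holds 4 boxes of 6 mugs plus 5 loose mugs. How many mugs in total?"))
```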
Structured CoT for Complex Tasks
[Task description]
Work through this systematically:
1. Identify the key information given
2. Determine what we need to find
3. Break down the approach
4. Execute each step
5. Verify the result
6. State the final answer
CoT for Code Debugging
Debug this code step-by-step:
1. What is the expected behavior?
2. What is the actual behavior?
3. Trace through the code with a sample input
4. Identify where expected and actual diverge
5. Determine the root cause
6. Propose the fix
Code: [Your code here]
CoT for Decision-Making
Help me decide: [decision]
Think through this systematically:
1. What are the key criteria for this decision?
2. What are all the options?
3. How does each option perform against each criterion?
4. What are the risks and trade-offs?
5. What is your recommendation and why?
- CoT works best on large models (100B+ parameters)
- The reasoning steps themselves matter more than correctness of examples
- Relevance to the query and correct ordering of steps are crucial
- Works well for: math, logic, coding, multi-step analysis
3 Role-Based Prompt Formula
The Role-Based Prompt Formula
System prompts and role assignments help set the AI's behavior and expertise level. Both OpenAI and Anthropic recommend using clear, direct language at the "right altitude" - specific enough to guide behavior, general enough to allow flexibility.
Basic Formula (OpenAI Style)
You are a [role]. Your task is to [specific task].
Guidelines:
- [Guideline 1]
- [Guideline 2]
- [Guideline 3]
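A minimal sketch of this formula sent as a system message via the OpenAI Python SDK; the role, guidelines, and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# The basic formula delivered as a system message; fill in your own
# role, task, and guidelines.
system = (
    "You are a technical writer. Your task is to document internal APIs.\n"
    "Guidelines:\n"
    "- Write for intermediate developers\n"
    "- Keep a clear, professional tone\n"
    "- Include one short code example per endpoint"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Document the /users endpoint: [details here]"},
    ],
)
print(response.choices[0].message.content)
```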
Structured Formula (Anthropic Style with XML)
Claude was trained with XML tags, making this format particularly effective:
<role>
You are a [specific role] with expertise in [domain].
</role>
<context>
[Background information the AI needs to know]
</context>
<task>
[Clear description of what to do]
</task>
<constraints>
- [Constraint 1]
- [Constraint 2]
</constraints>
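Here is a minimal sketch of the XML-structured prompt sent through the Anthropic Python SDK, assuming an API key in the environment; the model name and prompt content are illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

user_message = """<role>
You are a senior software engineer with expertise in Python.
</role>
<context>
We maintain a small internal CLI tool used by about 20 developers.
</context>
<task>
Review the function below for correctness and readability.
</task>
<constraints>
- Keep suggestions under 200 words
- Flag anything that could raise at runtime
</constraints>
[code here]"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": user_message}],
)
print(message.content[0].text)
```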
Effective Roles by Domain
| Category | Effective Roles |
|---|---|
| Technical | Senior software engineer, Staff engineer, Security researcher, Data architect |
| Analysis | Data analyst, Business analyst, Research scientist, Strategic consultant |
| Writing | Technical writer, Editor, Copywriter, Documentation specialist |
| Review | Code reviewer, Peer reviewer, QA specialist, Technical editor |
Example: Code Review
You are a senior software engineer conducting a code review.
Focus on:
- Correctness and potential bugs
- Performance implications
- Security vulnerabilities
- Code maintainability
For each issue found, explain:
1. What the problem is
2. Why it matters
3. How to fix it
Code to review: [code here]
Example: Technical Writing
You are a technical writer creating documentation for developers.
Audience: Intermediate developers familiar with REST APIs
Tone: Clear, professional, helpful
Format: Step-by-step guide with code examples
Task: Write documentation for [feature/API]
- Use the system prompt or instructions parameter for role setup
- Be specific about expertise but avoid over-constraining
- Include tone and communication style expectations
- For Claude: XML tags help structure complex roles
- For GPT: JSON mode works well with structured outputs
4 GPT vs Claude vs Gemini: When to Use Which
2025 Model Comparison
With a few exceptions, flagship models from OpenAI, Anthropic, and Google are essentially at parity. Focus less on raw power and more on features and specialized use cases.
| Aspect | ChatGPT / GPT-4o | Claude (Opus 4.5 / Sonnet 4) | Gemini 2.5 Pro |
|---|---|---|---|
| Best For | All-in-one AI toolkit | Text/code depth, long docs | Multimodal, long context |
| Context Window | 128K tokens | 200K tokens | 1M+ tokens |
| Unique Features | Image/video gen, memory, voice chat, custom GPTs | Artifacts (live code viz), XML optimization | Native Google integration, grounding |
| Coding | Excellent | Excellent (best for complex projects) | Very Good |
| Writing Style | Adaptive, flexible | Natural, measured, nuanced | Functional, direct |
| Pro Plan | $20/mo | $20/mo ($18 annual) | $20/mo (via Google One) |
Use ChatGPT When:
- You want an all-in-one toolkit (image gen, voice, custom GPTs)
- You need the memory feature for personalized interactions
- Building with OpenAI's ecosystem (function calling, assistants API)
- You want the largest plugin/integration ecosystem
Use Claude When:
- Working on complex coding projects (real-time visualization with Artifacts)
- Analyzing long documents (legal contracts, codebases, research)
- Writing that needs specific voice/tone (more natural writing style)
- Tasks requiring careful reasoning and safety (Constitutional AI)
- You prefer XML-structured prompts for complex instructions
Use Gemini When:
- Processing very long documents (1M+ token context)
- Multimodal tasks combining text, images, and video
- Deep integration with Google Workspace
- You need real-time information with grounding
- Cost-effective API usage at scale
Prompting Tips by Model
- ChatGPT/GPT-4: keep instructions straightforward; use JSON mode and function calling for structured output
- Claude: structure complex prompts with XML tags; detailed system prompts work well
- Gemini: supply multimodal context where relevant; use grounding for real-time information
5 Common Prompt Mistakes & Fixes
10 Mistakes That Hurt AI Outputs
Based on official documentation from OpenAI, Anthropic, and Google, these are the most common issues and their fixes.
1. Vague Instructions
OpenAI emphasizes: "Write clear, specific instructions to help the model understand what you want."
Weak: "Write something about marketing" Better: "Write a 500-word blog post about email marketing best practices for e-commerce stores, targeting small business owners new to email marketing"
2. Missing Context
Anthropic calls this "context engineering" - providing the right information for the task.
Weak: "Fix this code" Better: "Fix this Python function. Expected: returns sum of even numbers. Actual: returns 0 for all inputs. Python 3.11. [code here]"
3. No Output Structure
All major providers recommend explicit output formatting for consistent results.
Weak: "Give me the pros and cons" Better: "List pros and cons as a markdown table: | Point | Type (Pro/Con) | Impact (High/Med/Low) |"
4. Unstructured Multi-Part Requests
Break complex requests into numbered steps or use clear delimiters.
Weak: "What's React and how do I install it and write an app?" Better: "Help me with React: 1. Brief explanation (2-3 sentences) 2. Installation steps for macOS 3. Simple counter component example"
5. No Audience Specification
IBM's guide emphasizes tailoring explanations to the intended reader.
Weak: "Explain machine learning" Better: "Explain machine learning to a business executive with no technical background. Use real-world analogies. Avoid jargon."
6. Assuming Shared Context
Each conversation starts fresh. Provide necessary background.
Weak: "Continue where we left off" Better: "Earlier we discussed [summary]. The current state is [state]. Now I need help with [specific next step]"
7. Missing Constraints
Explicit boundaries help the model meet your actual needs.
Weak: "Write a product description" Better: "Write a product description: - Length: 100-150 words - Tone: Professional but friendly - Must include: 3 key features, price point, CTA - Audience: Tech-savvy millennials"
8. Not Using Few-Shot Examples
For style/format consistency, show don't just tell.
Weak: "Write in our brand voice" Better: "Write in this brand voice. Example: '[paste actual example from your brand]' Now write [task] matching this style exactly."
9. Skipping Iteration
Treat outputs as drafts. Provide specific feedback for refinement.
First attempt not perfect? Follow up:
"Good start. Please adjust:
- More formal tone
- Add specific metrics from the data
- Cut the intro to 2 sentences"
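In API terms, iterating means resending the conversation so far with your feedback appended as a new user turn. A minimal sketch via the OpenAI Python SDK; the draft placeholder and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Resend the prior exchange plus targeted feedback as a new user turn.
history = [
    {"role": "user", "content": "Draft a launch announcement for our new API."},
    {"role": "assistant", "content": "[the model's first draft goes here]"},
    {"role": "user", "content": (
        "Good start. Please adjust: more formal tone, add specific "
        "metrics from the data, cut the intro to 2 sentences."
    )},
]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=history,
)
print(response.choices[0].message.content)
```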
10. Wrong Tool for the Job
Use CoT for reasoning, direct prompts for simple tasks, examples for style matching.
Don't use Chain-of-Thought for: "What's 2+2?"
Don't skip CoT for: "Analyze this business case and recommend whether we should expand to Europe"
- Is the task specific and unambiguous?
- Did I provide necessary context?
- Is the output format specified?
- Did I set constraints (length, tone, audience)?
- Should I include examples?
- Am I using the right technique for this task type?
6 Prompt Engineering Quick Reference
Core Principle
Structure beats verbosity. Whether using XML tags, markdown sections, or numbered lists, structured prompts consistently outperform free-form instructions across all major AI providers.
Universal Prompt Structure
# Context
[Background information the AI needs]

# Role (optional)
You are a [specific role] with expertise in [domain].

# Task
[Clear, specific instruction - what exactly to do]

# Requirements
- [Requirement 1]
- [Requirement 2]
- [Constraint: length, tone, audience, etc.]

# Output Format
[How to structure the response: list, table, JSON, etc.]

# Examples (if needed for style/format)
Input: [example]
Output: [example]
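If you reuse this structure often, a small helper can assemble it from named parts. The `build_prompt` function below is a hypothetical sketch, not from any library:

```python
# Hypothetical helper (not from any library): assemble the universal
# structure from named parts, skipping optional sections when empty.
def build_prompt(context: str, task: str, role: str = "",
                 requirements: list[str] | None = None,
                 output_format: str = "") -> str:
    parts = [f"# Context\n{context}"]
    if role:
        parts.append(f"# Role\nYou are {role}.")
    parts.append(f"# Task\n{task}")
    if requirements:
        parts.append("# Requirements\n" + "\n".join(f"- {r}" for r in requirements))
    if output_format:
        parts.append(f"# Output Format\n{output_format}")
    return "\n\n".join(parts)

print(build_prompt(
    context="We sell handmade ceramics online.",
    task="Write a product description for a stoneware mug.",
    role="a copywriter for artisan goods",
    requirements=["100-150 words", "warm but professional tone"],
    output_format="Two short paragraphs followed by a one-line CTA.",
))
```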
Key Techniques Summary
| Technique | When to Use | How to Apply |
|---|---|---|
| Chain of Thought | Complex reasoning, math, logic | "Let's think step by step" or numbered steps |
| Few-Shot Examples | Style/format consistency | Provide 1-3 input/output examples |
| Role Assignment | Expert-level output | "You are a [specific role]..." |
| XML Tags (Claude) | Complex multi-part prompts | <context>...</context> |
| JSON Mode (GPT) | Structured data output | Enable JSON mode + specify schema |
| Delimiters | Separating sections | Use ### or --- or XML tags |
Model-Specific Tips
| Model | Best Practice |
|---|---|
| ChatGPT/GPT-4 | Straightforward instructions, JSON mode for structure, function calling |
| Claude | XML tags for structure (trained on them), detailed system prompts |
| Gemini | Multimodal context, grounding for real-time info, Google-style docs |
Quick Fixes
| Problem | Fix |
|---|---|
| Too verbose | Add word/sentence limit: "in 3 sentences" or "under 100 words" |
| Too generic | Add specific context, constraints, and audience |
| Wrong format | Explicitly specify: "Format as a markdown table with columns..." |
| Inconsistent style | Add few-shot examples showing desired style |
| Missing details | Add "If any information is missing, ask before proceeding" |
| Poor reasoning | Add "Think step by step" for complex tasks |
Pre-Send Checklist
- Task is specific and unambiguous
- Necessary context is provided
- Output format is specified
- Constraints are explicit (length, tone, audience)
- Examples included if style matters
- Using right technique for task type