
Prompting Fundamentals: Communicating Effectively with AI

Master AI coding prompts — the core skill of vibecoding. Write clear, effective prompts that get great results from any AI coding assistant.

· VibeWerks

This is the #1 skill in vibecoding. Master prompting and everything else gets easier.

If vibecoding is a conversation with AI, then prompting is your vocabulary. A vague prompt gets vague results. A precise prompt gets precise results.

What you’ll learn:

  • The 5 elements of an effective prompt
  • How to provide context without overwhelming
  • When and how to iterate
  • Copy-paste templates for common tasks

Time: 30 minutes to read, a lifetime to master.

Let’s build your prompting foundation from the ground up.

The Anatomy of a Good Prompt

Every effective prompt contains some combination of these elements:

1. Context

What does the AI need to know about your situation?

Weak:

Write a login function

Strong:

I'm building a Next.js 14 app with App Router. I need a server action
for user login that validates credentials against a PostgreSQL database
using Prisma. We're using bcrypt for password hashing.

Context includes:

  • Technology stack
  • Existing code patterns
  • Business requirements
  • Constraints you’re working within

Tool Comparison: Providing Context

The same context prompt works differently across tools:

Claude Code (Terminal)

# Claude Code can read your files directly, so you often provide less explicit context
> Look at src/lib/auth.ts and add a login function that follows the same
> patterns used in the existing signup function. Use Prisma and bcrypt.

Claude Code reads your codebase automatically, so it discovers context from your files.

Cursor (IDE) Press Cmd+K in the file, type: “Add a login function following the signup pattern. Use Prisma and bcrypt.” Cursor sees the current file and uses codebase indexing for context.

Copilot Chat

@workspace I need a login function in src/lib/auth.ts that follows our
existing signup pattern. We use Prisma for database and bcrypt for hashing.

Use @workspace to search the codebase or @file to reference specific files.

2. Intent

What are you trying to accomplish? Be specific about the outcome.

Weak:

Make this code better

Strong:

Refactor this function to reduce its cyclomatic complexity.
The current implementation has too many nested conditionals,
making it hard to test. Aim for functions under 10 lines each.
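As an illustration, here is the kind of before/after this prompt might produce. The order shape is invented for the example; the technique shown is replacing nested conditionals with guard clauses:

```typescript
interface Order {
  paid: boolean;
  address?: string;
  items: string[];
}

// Before: three levels of nesting, every branch entangled with the others
function shippingLabelBefore(order: Order): string {
  if (order.paid) {
    if (order.address) {
      if (order.items.length > 0) {
        return `Ship ${order.items.length} item(s) to ${order.address}`;
      } else {
        return "No items to ship";
      }
    } else {
      return "No shipping address";
    }
  } else {
    return "Order not paid";
  }
}

// After: guard clauses flatten the logic; each path reads on its own line
function shippingLabel(order: Order): string {
  if (!order.paid) return "Order not paid";
  if (!order.address) return "No shipping address";
  if (order.items.length === 0) return "No items to ship";
  return `Ship ${order.items.length} item(s) to ${order.address}`;
}
```

Both versions behave identically; the second is easier to test because each early return covers exactly one condition.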

3. Constraints

What limitations or requirements must be respected?

Weak:

Add error handling

Strong:

Add error handling that:
- Logs errors to our existing logger (imported from @/lib/logger)
- Returns user-friendly messages (no stack traces in production)
- Preserves the original error for debugging
- Follows our existing try/catch pattern in other API routes

4. Examples (When Helpful)

Show the AI what you want through demonstration.

Convert these functions to use our new API client pattern.

Here's an example of the pattern:

// Before
const response = await fetch('/api/users')
const data = await response.json()

// After
const data = await apiClient.get<User[]>('/users')

Now apply this pattern to the following functions...

The CRISP Framework

For complex prompts, use the CRISP framework:

  • Context: Background information
  • Role: What perspective should AI take?
  • Intent: What outcome do you want?
  • Specifics: Detailed requirements and constraints
  • Proof: How will you verify success?

Example using CRISP:

CONTEXT: We have an e-commerce checkout flow that's currently a single
1200-line component. Users report it's slow and we've identified it's
causing performance issues due to unnecessary re-renders.

ROLE: Act as a senior React developer focused on performance optimization.

INTENT: Break this checkout component into smaller, optimized components
that minimize re-renders.

SPECIFICS:
- Use React.memo where appropriate
- Implement proper callback memoization with useCallback
- Split into logical sub-components: CartSummary, ShippingForm,
  PaymentForm, OrderReview
- Maintain all existing functionality
- Keep state management at the appropriate level

PROOF: After refactoring, React DevTools Profiler should show that
typing in one form field doesn't re-render unrelated components.

Tool Comparison: Complex Multi-Part Prompts

How you deliver a CRISP-style prompt varies by tool:

Claude Code (Terminal)

# You can type multi-line prompts directly, or reference a file
> I have a complex refactoring task. Read PROMPT.md for the full requirements,
> then refactor src/components/Checkout.tsx following those instructions.

Claude Code handles long prompts well and can read requirements from files you prepare.

Cursor (IDE) Use Composer (Cmd+Shift+I) for complex prompts. Paste the full CRISP prompt there. Composer handles multi-file changes and shows diffs for each file before you accept.

Copilot Chat

@workspace /explain the checkout component structure, then help me refactor
it following these requirements: [paste CRISP prompt]

Break very long prompts into a conversation: first /explain, then request changes.

Common Prompting Patterns

The Incremental Build Pattern

Start simple, then add complexity through follow-up prompts.

Prompt 1: "Create a basic Express server with a health check endpoint"

[Review output, then continue]

Prompt 2: "Add a /users endpoint that returns a list of mock users"

[Review output, then continue]

Prompt 3: "Connect this to a real database using the pg library"

This pattern:

  • Lets you catch errors early
  • Keeps each step manageable
  • Makes it easy to course-correct

The Rubber Duck Pattern

Explain your problem to get AI to help think through it.

I'm trying to implement real-time notifications but I'm stuck on the
architecture. Here's my thinking:

Option A: WebSockets for all clients
- Pro: Real-time, bidirectional
- Con: Need to manage connection state, may not scale

Option B: Server-Sent Events (SSE)
- Pro: Simpler, HTTP-based
- Con: One-way only, reconnection handling

Option C: Polling with long-poll fallback
- Pro: Works everywhere
- Con: More server load, not truly real-time

We expect 10,000 concurrent users. Most notifications are one-way
(server to client). What would you recommend and why?

The Code Review Pattern

Ask AI to review code before asking it to write code.

Before we add new features, review this authentication middleware.
Tell me:
1. Any security vulnerabilities
2. Edge cases not handled
3. Performance concerns
4. Suggestions for improvement

[paste code]

After your review, suggest specific changes with code examples.

Tool Comparison: Code Review Prompts

Each tool has different strengths for code review:

Claude Code (Terminal)

> Review src/middleware/auth.ts for security vulnerabilities. Run any
> tests you think are relevant. Check if there are existing security
> tests we should update.

Claude Code can read the file, run your test suite, and check related files autonomously.

Cursor (IDE) Select the code block, press Cmd+K, type: “Review this for security vulnerabilities, edge cases, and performance issues.” Cursor highlights specific lines with inline suggestions.

Copilot Chat

@file:src/middleware/auth.ts Review this authentication middleware for:
1. Security vulnerabilities
2. Edge cases not handled
3. Performance concerns

Use @file to focus the review on a specific file.

The Test-First Pattern

Use AI to write tests that define the behavior you want.

Write unit tests for a calculateShipping function that:
- Takes an order object with items and destination
- Returns shipping cost in cents
- Free shipping for orders over $100
- $5.99 flat rate for domestic orders under $100
- $15.99 for international orders
- Throws if order is empty or destination is missing

Write tests first using Jest. Don't implement the function yet.

Then, in a follow-up prompt: “Now implement the function to make all these tests pass.”
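Under those requirements, the resulting implementation might look like this. Plain assertions stand in for Jest so the sketch is self-contained, and two details the prompt leaves open are treated as assumptions: `"US"` means domestic, and free shipping over $100 applies to every destination:

```typescript
interface OrderItem {
  priceCents: number;
}

interface ShippingOrder {
  items: OrderItem[];
  // Assumption: "US" means domestic, anything else is international
  destination: string;
}

function calculateShipping(order: ShippingOrder): number {
  if (!order.items || order.items.length === 0) {
    throw new Error("Order is empty");
  }
  if (!order.destination) {
    throw new Error("Destination is missing");
  }

  const totalCents = order.items.reduce((sum, item) => sum + item.priceCents, 0);

  // Assumption: free shipping over $100 applies regardless of destination
  if (totalCents > 10_000) return 0;

  return order.destination === "US" ? 599 : 1599;
}
```

Writing the tests first surfaces exactly these ambiguities, which is the point of the pattern.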

What Makes Prompts Fail

Understanding failure modes helps you avoid them.

Ambiguity

Problematic:

Make the API faster

Faster how? Response time? Throughput? Which endpoints? What’s acceptable?

Better:

The /api/search endpoint takes 3 seconds on average. Profile the code
and identify the bottleneck. Target response time is under 500ms.

Assumed Knowledge

Problematic:

Use our standard error handling

The AI doesn’t know what “our standard” means unless you show it.

Better:

Use this error handling pattern:

[paste example from your codebase]

Apply this to the new endpoint.

Contradictory Requirements

Problematic:

Make this function simple but handle all edge cases and be performant
and fully documented and backward compatible.

Too many competing priorities. Pick the most important ones.

Better:

Prioritize readability over performance. Handle the three most common
error cases: null input, empty array, and duplicate values.
Skip documentation for now.

Massive Scope

Problematic:

Build me a complete e-commerce platform with user auth, product
management, shopping cart, checkout, payment processing, order
management, admin dashboard, analytics, and email notifications.

This is months of work compressed into one prompt.

Better: Start with one piece. Build incrementally. See the Incremental Build Pattern above.

Prompting for Specific Tasks

Debugging Prompts

This function throws "Cannot read property 'map' of undefined"
on line 23, but only sometimes.

[paste code]

Here's a case where it works: [paste example]
Here's a case where it fails: [paste example]

Find the bug and explain why it only happens sometimes.
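The classic shape of this intermittent bug, reduced to a sketch (the response type is invented for illustration): the code works whenever the optional field is present and crashes when the API omits it.

```typescript
interface SearchResponse {
  items?: { name: string }[]; // the API omits `items` when there are no results
}

// Buggy: works for populated responses, throws
// "Cannot read properties of undefined (reading 'map')" when `items` is omitted
function getNamesBuggy(response: SearchResponse): string[] {
  return response.items!.map((item) => item.name);
}

// Fixed: default to an empty array so both cases behave
function getNames(response: SearchResponse): string[] {
  return (response.items ?? []).map((item) => item.name);
}
```

Giving the AI one working input and one failing input, as in the prompt above, is what lets it spot the optional field.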

Refactoring Prompts

Refactor this function using the Extract Method pattern. The goal is
to make the main function read like documentation—each step should
be a descriptively named function call.

Current code:
[paste code]

Keep the same external behavior. Add no new dependencies.
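Applied to a small signup flow (invented for illustration), the output of such a prompt reads like this: the main function is a sequence of descriptively named calls.

```typescript
// After Extract Method: the main function reads like documentation
function processSignup(email: string, password: string): string {
  const normalizedEmail = normalizeEmail(email);
  assertPasswordLongEnough(password);
  return buildWelcomeMessage(normalizedEmail);
}

function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}

function assertPasswordLongEnough(password: string): void {
  if (password.length < 8) {
    throw new Error("Password must be at least 8 characters");
  }
}

function buildWelcomeMessage(email: string): string {
  return `Welcome, ${email}!`;
}
```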

Explanation Prompts

Explain this regex pattern like I'm familiar with regex basics
but haven't seen this specific pattern before:

/^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)[a-zA-Z\d@$!%*?&]{8,}$/

Walk through each part and explain what it matches and why.
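Running the pattern against a few strings confirms the explanation you should get back: three lookaheads require a lowercase letter, an uppercase letter, and a digit, and the anchored character class allows only letters, digits, and a handful of symbols, at least eight characters total.

```typescript
const passwordPattern = /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)[a-zA-Z\d@$!%*?&]{8,}$/;

const cases: [input: string, expected: boolean, why: string][] = [
  ["Abcdef12", true, "meets all three lookaheads, 8 chars"],
  ["abcdef12", false, "no uppercase letter"],
  ["Abcdefgh", false, "no digit"],
  ["Abc12", false, "shorter than 8 characters"],
  ["Abcdef12#", false, "# is outside the allowed character class"],
];

for (const [input, expected, why] of cases) {
  if (passwordPattern.test(input) !== expected) {
    throw new Error(`Unexpected result for "${input}" (${why})`);
  }
}
```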

Learning Prompts

I want to understand how React Server Components work. I know
traditional React (client-side) well.

1. Explain the mental model shift needed
2. Show a simple before/after example
3. Identify the three most common mistakes developers make
   when first using RSC
4. Give me an exercise to practice

Iterative Prompting: The Conversation

Vibecoding isn’t about single perfect prompts. It’s a conversation. Expect to iterate.

Round 1: Initial Request

Create a password validation function

AI produces something basic.

Round 2: Add Requirements

Good start. Also check for:
- Minimum 8 characters
- At least one uppercase and one lowercase
- At least one number
- No common passwords (check against a list)

AI updates the implementation.

Round 3: Refinement

The common password check is too slow. Use a Set instead of
array.includes() for O(1) lookup. Also, return specific error
messages indicating which requirement failed, not just true/false.

AI refines further.

Round 4: Polish

Clean. Now add JSDoc comments and TypeScript types for the
return value.

This conversation produced better code than any single prompt could have.
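The end state of that conversation might look roughly like this. The common-password list is truncated to three invented entries for illustration:

```typescript
// Tiny stand-in for a real common-password list
const COMMON_PASSWORDS: ReadonlySet<string> = new Set([
  "password1", "qwerty123", "letmein99",
]);

interface PasswordValidation {
  valid: boolean;
  /** One message per failed requirement; empty when valid */
  errors: string[];
}

/**
 * Validates a password against length, character-class,
 * and common-password requirements.
 */
function validatePassword(password: string): PasswordValidation {
  const errors: string[] = [];
  if (password.length < 8) errors.push("Must be at least 8 characters");
  if (!/[a-z]/.test(password)) errors.push("Must contain a lowercase letter");
  if (!/[A-Z]/.test(password)) errors.push("Must contain an uppercase letter");
  if (!/\d/.test(password)) errors.push("Must contain a number");
  if (COMMON_PASSWORDS.has(password.toLowerCase())) {
    errors.push("Password is too common"); // Set lookup is O(1), per Round 3
  }
  return { valid: errors.length === 0, errors };
}
```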

Prompt Templates for Common Situations

Keep these templates handy and customize them:

Bug Fix Template

BUG: [describe the bug]
EXPECTED: [what should happen]
ACTUAL: [what actually happens]
REPRODUCE: [steps to reproduce]
CODE: [relevant code]
ATTEMPTS: [what you've already tried]

New Feature Template

FEATURE: [what you're building]
CONTEXT: [relevant existing code/patterns]
REQUIREMENTS: [must-haves]
NICE-TO-HAVES: [optional enhancements]
CONSTRAINTS: [limitations, style guides, etc.]

Code Review Template

REVIEW THIS CODE FOR:
- [ ] Security vulnerabilities
- [ ] Performance issues
- [ ] Error handling gaps
- [ ] Code style consistency
- [ ] Test coverage gaps

CODE:
[paste code]

Prioritize findings by severity (critical/high/medium/low).

Practice Exercises

Prompting improves with practice. Try these exercises:

Exercise 1: The Rewrite

Take a vague prompt you’ve used before and rewrite it using the CRISP framework.

Exercise 2: The Decomposition

Take a large request and break it into 5 smaller prompts that build on each other.

Exercise 3: The Comparison

Write the same prompt three different ways. Use each and compare results.

Exercise 4: The Review

Ask AI to review your prompt before you ask it to execute. “Before implementing this, what clarifying questions would you ask?”

What’s Next

You now have the foundation of effective prompting. These skills apply across all AI tools and will improve with practice.

Next, learn how to apply these prompting skills to starting new projects in Project Scaffolding, where we’ll use prompts to set up entire codebases from scratch.

Key Takeaways

  • Context is king: The more relevant context you provide, the better the output
  • Be specific about intent: Vague prompts get vague results
  • Iterate freely: Great code comes from conversation, not single prompts
  • Use patterns: CRISP, incremental building, and templates save time
  • Show, don’t just tell: Examples clarify requirements better than descriptions
  • Expect to refine: Your first prompt is a starting point, not a final answer

Prompting is a skill. You’ll get better the more you do it. Start prompting, start learning.


Quick Reference Card

Screenshot this:

| Element | Question to Ask | Example |
| --- | --- | --- |
| Context | What does AI need to know? | “In this Next.js 14 app…” |
| Intent | What outcome do I want? | “Create a function that…” |
| Constraints | What must/must not happen? | “Don’t modify existing tests” |
| Format | How should output look? | “Return JSON with these fields” |
| Examples | Can I show what I want? | “Like this: [example]” |

The universal prompt structure:

[CONTEXT] I'm working on [project/file].
[INTENT] I need to [goal].
[CONSTRAINTS] Must [requirement]. Don't [restriction].
[FORMAT] Output as [format].

🎯 Your Next Step

Pick ONE prompt you’ve written recently that didn’t work well. Rewrite it using the elements above. See the difference.

Then: Get copy-paste templates →

