AI Testing Patterns
Prompt patterns for generating tests, what to test in AI-generated code, and a progression from smoke tests to integration tests.
AI-generated code needs more testing, not less. These patterns help you generate effective tests fast.
🧪 The Testing Progression
Build tests in this order — each layer catches different problems:
1. Smoke Tests — “Does it run at all?”
```jsx
// Does the app start?
test('app renders without crashing', () => {
  render(<App />);
  expect(document.body).toBeTruthy();
});

// Do API routes respond?
test('health check returns 200', async () => {
  const res = await fetch('/api/health');
  expect(res.status).toBe(200);
});

// Does the build succeed?
// npm run build (in CI)
```
Prompt: “Write smoke tests for my app. Test that: the main page renders, all API routes return non-500 responses, and critical components mount without errors.”
2. Unit Tests — “Does the logic work?”
```js
// Pure functions
test('calculateTotal applies tax correctly', () => {
  expect(calculateTotal(100, 0.08)).toBe(108);
  expect(calculateTotal(0, 0.08)).toBe(0);
  expect(calculateTotal(100, 0)).toBe(100);
});

// Validation
test('rejects invalid email', () => {
  expect(validateEmail('not-an-email')).toBe(false);
  expect(validateEmail('user@example.com')).toBe(true);
  expect(validateEmail('')).toBe(false);
});

// Edge cases
test('handles empty array', () => {
  expect(getFirstItem([])).toBeUndefined();
});
```
Prompt: “Write unit tests for [function/file]. Cover: happy path, edge cases (empty input, null, undefined, boundary values), and error cases. Use descriptive test names.”
3. Integration Tests — “Do the pieces work together?”
```js
test('user signup flow', async () => {
  // Create user
  const res = await request(app)
    .post('/api/auth/signup')
    .send({ email: 'test@example.com', password: 'SecurePass123!' });
  expect(res.status).toBe(201);

  // Verify in database
  const user = await db.user.findUnique({ where: { email: 'test@example.com' } });
  expect(user).toBeTruthy();
  expect(user.password).not.toBe('SecurePass123!'); // Should be hashed

  // Can log in
  const login = await request(app)
    .post('/api/auth/login')
    .send({ email: 'test@example.com', password: 'SecurePass123!' });
  expect(login.status).toBe(200);
  expect(login.body.token).toBeDefined();
});
```
Prompt: “Write integration tests for the [feature] flow. Test the full path from API request → database → response. Include setup/teardown for test data.”
🎯 What to Test in AI-Generated Code
AI code has specific weak spots. Prioritize testing these:
| Area | Why AI Gets It Wrong | What to Test |
|---|---|---|
| Error handling | AI focuses on happy path | Pass invalid/empty/null input to every function |
| Edge cases | AI doesn’t anticipate them | Empty arrays, 0 values, very long strings, special characters |
| Auth boundaries | AI often forgets auth on some routes | Hit every protected route without a token |
| Data validation | AI may skip server-side validation | Send malformed data to every API endpoint |
| Concurrency | AI writes code that assumes sequential execution | Test rapid duplicate submissions, race conditions |
| Type coercion | AI trusts input types | Send string “1” where number 1 is expected |
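As a concrete example of the type-coercion row, here is a framework-free sketch of the kind of strict check and test the table asks for. The `parseQuantity` helper is hypothetical, not from the sections above:

```typescript
// Hypothetical helper: an AI-written version might silently coerce "1" to 1.
// The strict version rejects anything that is not a non-negative integer.
function parseQuantity(input: unknown): number {
  if (typeof input !== 'number' || !Number.isInteger(input) || input < 0) {
    throw new TypeError(`expected a non-negative integer, got ${JSON.stringify(input)}`);
  }
  return input;
}

// Type-coercion tests: number 1 is accepted, string "1" must be rejected
console.assert(parseQuantity(1) === 1);
let rejected = false;
try {
  parseQuantity('1'); // the weak spot: string "1" where number 1 is expected
} catch {
  rejected = true;
}
console.assert(rejected, 'string "1" should throw, not coerce');
```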
🤖 Prompt Patterns for Test Generation
The Comprehensive Test Prompt
“Write tests for [file]. For each function, test:
- Normal operation with typical input
- Edge cases: empty, null, undefined, zero, negative, very large
- Error cases: what should throw/reject
- Return types: verify the shape of returned data

Use [vitest/jest/pytest]. Descriptive test names.”
The Security Test Prompt
“Write security-focused tests for [API route]:
- Request without auth token (expect 401)
- Request with expired token (expect 401)
- Request with valid token but wrong user (expect 403)
- SQL injection attempts in input fields
- XSS payloads in string fields
- Oversized request body”
The Bug-Hunting Prompt
“Look at this code and write tests that would catch potential bugs. Think about: off-by-one errors, null pointer exceptions, missing await, wrong variable names, and incorrect conditional logic.”
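One of those bug classes, the missing `await`, can be caught by a test like this sketch. `saveUser` is a hypothetical helper; the buggy version returned without awaiting, so a rejected write never reached the catch block:

```typescript
// Fixed version: with `await`, a rejected write lands in the catch block.
// A version that dropped the await would resolve to 'saved' and let the
// rejection escape, which is exactly what the test below catches.
async function saveUser(
  write: (name: string) => Promise<void>,
  name: string
): Promise<string> {
  try {
    await write(name); // removing this `await` reintroduces the bug
    return 'saved';
  } catch {
    return 'failed';
  }
}

// A failing write must resolve to 'failed', not 'saved'
saveUser(async () => { throw new Error('db down'); }, 'ada')
  .then((result) => console.assert(result === 'failed'));
```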
The Regression Test Prompt
“I just fixed this bug: [description]. Write a test that reproduces the bug (should fail without the fix) and passes with the fix. This prevents the bug from coming back.”
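For example, suppose the fix was rounding a floating-point total to cents (a hypothetical bug in the `calculateTotal` helper from the unit-test section):

```typescript
// Hypothetical bug: calculateTotal(19.99, 0.08) returned a float like
// 21.5892000000000002 instead of 21.59. The fix rounds to cents.
function calculateTotal(amount: number, taxRate: number): number {
  return Math.round(amount * (1 + taxRate) * 100) / 100; // fix: round to cents
}

// Regression test: fails against the unrounded implementation,
// passes with the fix, and keeps the bug from coming back.
console.assert(calculateTotal(19.99, 0.08) === 21.59);
console.assert(calculateTotal(100, 0.08) === 108); // existing behavior unchanged
```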
The Snapshot Prompt
“Write snapshot tests for these React components: [list]. Capture the rendered output for: default props, loading state, error state, and empty data state.”
🔧 Testing Setup Quick Reference
JavaScript/TypeScript (Vitest)
```bash
npm install -D vitest @testing-library/react @testing-library/jest-dom
```
```ts
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',
    setupFiles: ['./tests/setup.ts'],
  },
});
```

```jsonc
// package.json
{ "scripts": { "test": "vitest", "test:ci": "vitest run" } }
```
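The config points at `./tests/setup.ts`. One possible shape for that file, assuming the `@testing-library` packages from the install command (treat the exact import paths as an assumption for your installed versions):

```ts
// tests/setup.ts (sketch): register jest-dom matchers with Vitest and
// unmount rendered components between tests so state doesn't leak
import '@testing-library/jest-dom/vitest';
import { afterEach } from 'vitest';
import { cleanup } from '@testing-library/react';

afterEach(() => {
  cleanup();
});
```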
Python (Pytest)
```bash
pip install pytest pytest-cov pytest-asyncio
```
```ini
# pytest.ini
[pytest]
testpaths = tests
asyncio_mode = auto
```
✅ Testing Checklist
Before shipping AI-generated code:
- Smoke tests pass (app renders, API responds)
- Unit tests cover business logic functions
- Auth routes tested with and without valid tokens
- Invalid input tested on every API endpoint
- Error paths tested (what happens when things fail?)
- Tests run in CI on every push
- No tests that just verify AI code “matches itself” (tautological tests)
⚠️ Testing Anti-Patterns
Don’t let AI generate tautological tests:
```js
// ❌ This tests nothing — it just mirrors the implementation
test('adds numbers', () => {
  const result = 2 + 2;
  expect(result).toBe(4); // No function under test!
});
```
Don’t test implementation details:
```js
// ❌ Brittle — breaks when you refactor
test('calls setState exactly twice', () => { ... });

// ✅ Test behavior instead
test('shows updated count after clicking', () => { ... });
```
Don’t skip the unhappy path:
```js
// ❌ Only tests success
test('creates user', async () => { ... });

// ✅ Also test failure
test('rejects duplicate email', async () => { ... });
test('rejects invalid email format', async () => { ... });
test('rejects missing password', async () => { ... });
```