
Security for Vibecoders

AI-generated code has unique security risks. Learn to catch hardcoded secrets, injection attacks, and insecure defaults before they reach production.

· VibeWerks


AI writes code fast. It also writes vulnerabilities fast.

When you prompt an AI to “build a login system,” it’ll happily generate something that works — with hardcoded secrets, no rate limiting, and SQL injection waiting to happen. The AI optimizes for functional, not secure.

This isn’t a knock on AI. It’s a reality check. Security requires intentionality, and that intentionality has to come from you.

What you’ll learn:

  • Why AI-generated code has unique security blind spots
  • The most common vulnerabilities and how to spot them
  • A practical security checklist for every project
  • Tools that catch what you miss
  • Prompt patterns that produce more secure code

Why AI-Generated Code Has Unique Security Risks

Traditional developers learn security through painful experience — a breached database, a vulnerability disclosure, a senior dev’s code review. AI models learned from all code on the internet, including millions of insecure tutorials, Stack Overflow snippets, and beginner projects.

The core problems:

  1. Training data bias. Most code online prioritizes “getting it to work” over security. AI reflects this.
  2. Context collapse. AI doesn’t know your threat model. It doesn’t know if you’re building a toy or a banking app.
  3. Confident insecurity. AI presents insecure code with the same confidence as secure code. No warnings, no caveats.
  4. Example-driven patterns. AI loves reproducing patterns from tutorials — and tutorials almost never implement proper security.

A Stanford study (Perry et al., presented at CCS 2023) found that developers using AI assistants produced significantly less secure code than those coding manually — and were more confident in their code’s security. That’s the danger zone.


Common Vulnerabilities in AI-Generated Code

1. Hardcoded Secrets

The #1 sin. Ask AI to “connect to a database” and you’ll often get:

// ❌ AI-generated — secrets right in the code
const db = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'mypassword123',
  database: 'myapp'
});

This ends up in git, gets pushed to GitHub, and bots scrape it within minutes.

The fix:

// ✅ Environment variables
const db = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME
});

# .env (added to .gitignore)
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=mypassword123
DB_NAME=myapp

Prompt pattern: Always include “use environment variables for all secrets and credentials” in your prompts.

2. SQL Injection

AI frequently generates string-concatenated queries:

# ❌ SQL injection vulnerability
@app.route('/user/<username>')
def get_user(username):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    cursor.execute(query)
    return cursor.fetchone()

An attacker sends username = "'; DROP TABLE users; --" and your users table is gone.

The fix:

# ✅ Parameterized queries
@app.route('/user/<username>')
def get_user(username):
    query = "SELECT * FROM users WHERE name = %s"
    cursor.execute(query, (username,))
    return cursor.fetchone()

3. Cross-Site Scripting (XSS)

AI-generated frontend code often renders user input directly:

// ❌ XSS vulnerability
function Comment({ text }) {
  return <div dangerouslySetInnerHTML={{ __html: text }} />;
}

The fix:

// ✅ Let React handle escaping
function Comment({ text }) {
  return <div>{text}</div>;
}

If you must render HTML, sanitize it:

import DOMPurify from 'dompurify';

function Comment({ text }) {
  return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(text) }} />;
}

4. Insecure Authentication

AI loves generating auth that looks right but is fundamentally broken:

// ❌ Common AI auth mistakes
app.post('/login', (req, res) => {
  const { username, password } = req.body;
  const user = db.query(`SELECT * FROM users WHERE username = '${username}'`);
  
  if (user.password === password) {  // Plain text comparison!
    const token = username + Date.now();  // "Token" is just username + timestamp
    res.json({ token });
  }
});

Three vulnerabilities in one: SQL injection, plain-text passwords, and a guessable token.

The fix: Use established libraries. Don’t let AI build auth from scratch.

// ✅ Use bcrypt + JWT + parameterized queries
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const { rows } = await db.query('SELECT * FROM users WHERE username = $1', [username]);
  const user = rows[0];
  
  if (!user || !(await bcrypt.compare(password, user.password_hash))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  
  const token = jwt.sign({ userId: user.id }, process.env.JWT_SECRET, { expiresIn: '24h' });
  res.json({ token });
});

5. Overly Permissive CORS and Headers

// ❌ AI default: allow everything
app.use(cors({ origin: '*' }));

// ✅ Restrict to your domains
app.use(cors({ 
  origin: ['https://myapp.com', 'https://www.myapp.com'],
  credentials: true
}));

6. Missing Input Validation

AI rarely validates input unless you ask:

// ❌ No validation
app.post('/api/users', (req, res) => {
  db.createUser(req.body); // Whatever they send, we save
});

// ✅ Validate everything
import { z } from 'zod';

const UserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(1).max(100),
  age: z.number().int().min(13).max(150).optional(),
});

app.post('/api/users', (req, res) => {
  const result = UserSchema.safeParse(req.body);
  if (!result.success) return res.status(400).json({ errors: result.error.issues });
  db.createUser(result.data);
});

Security Checklist for AI-Generated Code

Run through this every time before shipping:

Secrets & Configuration

  • No hardcoded passwords, API keys, or tokens
  • All secrets in environment variables
  • .env is in .gitignore
  • No secrets in client-side/frontend code
  • Different secrets for dev/staging/production

Authentication & Authorization

  • Passwords hashed with bcrypt/argon2 (never plain text, never MD5/SHA)
  • JWT tokens signed with strong secrets and have expiration
  • Auth checks on every protected route (not just the frontend)
  • Rate limiting on login/signup endpoints
  • No auth logic built from scratch — using established libraries
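
Rate limiting on auth endpoints is easier than it sounds. In production you’d reach for a library like express-rate-limit, but here’s a minimal fixed-window sketch in plain JavaScript to show the idea (the limit and window values are arbitrary defaults, not recommendations):

```javascript
// Minimal fixed-window rate limiter — illustrative only;
// use express-rate-limit (or your gateway's limiter) in production.
function createRateLimiter({ limit = 5, windowMs = 60_000 } = {}) {
  const hits = new Map(); // key -> { count, windowStart }

  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

// Usage: key by IP (or IP + username) and reject over-limit logins with a 429
const allowLogin = createRateLimiter({ limit: 5, windowMs: 60_000 });
```

An in-memory Map is fine for a single process; once you run multiple instances, the counters need to live in shared storage like Redis.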

Input Validation

  • All user input validated on the server (not just client)
  • SQL queries use parameterized statements
  • HTML output escaped/sanitized
  • File uploads validated (type, size, name)
  • URL redirects validated against allowlist
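
The redirect allowlist item trips people up because string checks like startsWith are bypassable (https://myapp.com.evil.com passes a prefix check). A sketch that parses the URL and compares origins exactly — the myapp.com origins are placeholders for your own:

```javascript
// Validate a redirect target against an exact-origin allowlist.
// Prefix string checks are bypassable; parse and compare origins instead.
const ALLOWED_ORIGINS = new Set(['https://myapp.com', 'https://www.myapp.com']);

function safeRedirectTarget(input, fallback = '/') {
  // Allow same-site relative paths, but reject protocol-relative '//evil.com'
  if (input.startsWith('/') && !input.startsWith('//')) return input;
  try {
    const url = new URL(input);
    if (ALLOWED_ORIGINS.has(url.origin)) return url.href;
  } catch {
    // not a parseable absolute URL — fall through to the fallback
  }
  return fallback;
}
```

Anything that doesn’t parse or doesn’t match falls back to a safe default instead of throwing — an open redirect becomes a harmless trip to your homepage.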

Dependencies

  • npm audit / pip-audit shows no critical vulnerabilities
  • No unnecessary dependencies (each one is an attack surface)
  • Lock file committed (package-lock.json, poetry.lock)
  • No packages with suspicious names (typosquatting)

Deployment

  • HTTPS enforced
  • Security headers set (CSP, HSTS, X-Frame-Options)
  • Debug mode off in production
  • Error messages don’t leak stack traces or internal details
  • Database not publicly accessible
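
For the security-headers item, Express users can simply app.use(helmet()) — but it’s worth knowing what actually gets set. A dependency-free middleware sketch with a common baseline (the CSP here is deliberately strict; loosen default-src for your own scripts and assets):

```javascript
// Baseline security headers as plain Express-style middleware.
// In practice the helmet package sets these (and more) for you.
const SECURITY_HEADERS = {
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'no-referrer',
};

function securityHeaders(req, res, next) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  next();
}
```

Register it before your routes — app.use(securityHeaders) — so every response gets the headers, including error responses.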

Tools That Catch What You Miss

You don’t have to catch everything manually. These tools automate the boring parts:

Secret Detection

Gitleaks — Scans your git history for accidentally committed secrets:

# Install
brew install gitleaks

# Scan current repo
gitleaks detect

# Add as pre-commit hook
gitleaks protect --staged

Trufflehog — Deep scanning for high-entropy strings and known secret patterns:

trufflehog git file://. --since-commit HEAD~10

Dependency Scanning

# Node.js
npm audit
npm audit fix

# Python
pip-audit
safety check

# Universal
snyk test

Linting for Security

ESLint security plugin:

npm install --save-dev eslint-plugin-security

// .eslintrc.json
{
  "plugins": ["security"],
  "extends": ["plugin:security/recommended"]
}

Semgrep — Pattern-based scanning that catches actual vulnerability patterns:

# Run OWASP top 10 rules
semgrep --config=p/owasp-top-ten .

# Run all security rules
semgrep --config=p/security-audit .

GitHub Integration

Enable Dependabot (free) in your repo settings. It automatically opens PRs when dependencies have known vulnerabilities. Also enable code scanning with CodeQL for deeper analysis.
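
If you prefer configuring it in the repo rather than the settings UI, a minimal .github/dependabot.yml for an npm project looks like this (weekly is a common cadence; adjust to taste):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```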


Prompt Patterns for Security-Conscious Code Generation

The best time to prevent vulnerabilities is at generation time. These prompt patterns produce more secure code:

The Security-First Prompt

“Build a user registration endpoint in Express.js. Security requirements: use parameterized queries, hash passwords with bcrypt, validate all input with zod, use environment variables for secrets, add rate limiting, and return generic error messages (don’t leak internal details).”

The Threat Model Prompt

“I’m building a REST API that handles user payments. Assume hostile input on every endpoint. Generate the payment processing route with: input validation, authentication middleware, idempotency keys, audit logging, and proper error handling.”

The Review Prompt

“Review this code for security vulnerabilities. Check for: SQL injection, XSS, hardcoded secrets, missing auth checks, insecure defaults, and OWASP Top 10 issues. For each issue found, show the vulnerable line and the fix.”

The “Pretend It’s Production” Prompt

“Generate this as if it’s going into a production environment serving real users. Include: proper error handling, input validation, security headers, logging (without sensitive data), and environment-based configuration.”

The Incremental Security Prompt

After getting initial code from AI:

“Now harden this code for production. Add: rate limiting, input sanitization, proper CORS configuration, security headers, and remove any hardcoded values. Keep the same functionality.”


Building a Security Mindset

Security isn’t a one-time checklist — it’s a habit. Here’s how to build it:

  1. Always specify security requirements in prompts. AI won’t add security unless you ask.
  2. Run npm audit / pip-audit before every deploy. Make it part of your workflow.
  3. Set up pre-commit hooks. Gitleaks + ESLint security rules catch issues before they hit git.
  4. Use the review prompt. After generating any auth, payment, or data-handling code, ask AI to review it for security.
  5. Don’t roll your own crypto/auth. Use established libraries (Passport.js, NextAuth, Auth0, Supabase Auth).
  6. Treat AI code like code from a junior developer. It’s probably functional. It probably has security gaps. Review accordingly.
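
Habit 2 is easiest to keep when the tooling enforces it. One way, assuming an npm project with a deploy script, is to gate deploys on the audit in package.json — npm runs predeploy automatically before npm run deploy:

```json
{
  "scripts": {
    "audit:ci": "npm audit --audit-level=high",
    "predeploy": "npm run audit:ci"
  }
}
```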

The vibecoder advantage: you can use AI to find security issues, not just create them. Use the review prompt liberally. Ask AI to attack its own code. Generate security tests.

Security isn’t optional. The difference between a side project and a production app is how seriously you take the things that can go wrong. Start here.


Further Reading