What is "vibe coding" and why security teams care
"Vibe coding" is the practice of building software primarily through natural-language prompts and AI code generation instead of writing every line by hand. It boosts delivery speed and lowers the barrier to entry, but the same speed and abstraction can hide security risk. Major vendors explicitly call out the need for a security review step in the AI-assisted dev loop (Google Cloud, IBM).
The empirical risk
Independent measurements consistently show high rates of insecure completions: Veracode reports that nearly half of AI-generated code tasks contain security issues (~45%) (Business Wire), and an earlier academic evaluation of GitHub Copilot found roughly 40% of generated programs were vulnerable across high-risk CWEs (arXiv). These failures map directly to common OWASP categories: input validation (SQL injection, XSS), auth/authorization, insecure data handling, and dependency risks (Business Wire).
Failure modes you’ll actually see in PRs
- Input validation problems (SQLi, XSS): Unsanitized concatenated queries, improper HTML escaping, unsafe template rendering (TechRepublic, Snyk).
- AuthN/AuthZ omissions: Missing auth checks on new endpoints; role checks that can be bypassed (Cybernews, TechRepublic).
- Secrets in code and config drift: Hardcoded tokens, keys in .env committed by accident, and drift between local and prod configs (Software Mind, GitGuardian).
- Dangerous deserialization and memory-unsafe patterns: In Python, pickle loaders wired to untrusted input; in C/C++, buffer and integer overflows from unsafe APIs (Databricks, heise online).
- Supply-chain via hallucinated packages ("slopsquatting"): LLM suggests a non-existent dependency; an attacker registers it under that name to get code execution in your build or runtime (TechRadar, SecurityWeek).
Common frontend vulnerabilities in AI-generated code
AI code generation excels at creating functional interfaces quickly, but often misses critical security fundamentals that experienced developers take for granted. Let's explore the most common frontend security gaps you'll encounter.
1. Insecure HTTP connections
Vulnerability: Missing HTTPS enforcement
AI-generated applications frequently default to HTTP during development and fail to enforce HTTPS in production. This leaves all communication vulnerable to eavesdropping, man-in-the-middle attacks, and credential theft. Even worse, many developers deploy these applications without configuring proper TLS redirects or security headers.
// Vulnerable: No HTTPS enforcement
app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
How to fix this vulnerability
Always enforce HTTPS in production and set proper security headers. Configure automatic redirects from HTTP to HTTPS, and implement HTTP Strict Transport Security (HSTS) to prevent protocol downgrade attacks.
// Secure: Enforce HTTPS with proper headers
import helmet from 'helmet';

app.enable('trust proxy');
app.use((req, res, next) => {
  const isHttps = req.secure || req.headers['x-forwarded-proto'] === 'https';
  if (isHttps) return next();
  res.redirect(301, 'https://' + req.headers.host + req.url);
});
app.use(helmet.hsts({ maxAge: 31536000, includeSubDomains: true, preload: true }));
Other tips for preventing insecure connections:
- Use Let's Encrypt or cloud provider certificates for free TLS
- Test your HTTPS configuration with SSL Labs
- Never mix HTTP and HTTPS resources on the same page
2. Cross-Site Scripting (XSS) vulnerabilities
Vulnerability: Insufficient input validation
AI-generated forms and search functionality often lack proper input validation and output encoding. This creates XSS vulnerabilities where malicious scripts can be injected and executed in other users' browsers, potentially stealing credentials, session tokens, or performing actions on behalf of victims.
// Vulnerable: Direct insertion without validation
app.get('/search', (req, res) => {
  const query = req.query.q;
  // Dangerous: directly inserting user input into response
  res.send(`<h1>Results for: ${query}</h1>`);
});
How to fix this vulnerability
Implement comprehensive input validation on all user-provided data. Validate data types, formats, and ranges before processing. Use established validation libraries rather than rolling your own.
// Secure: Proper validation and sanitization
import { z } from 'zod';
import escapeHtml from 'escape-html';

const querySchema = z.object({
  q: z.string().min(1).max(100).regex(/^[a-zA-Z0-9\s]+$/),
  page: z.coerce.number().int().min(1).max(100).default(1)
});

app.get('/search', (req, res) => {
  const parsed = querySchema.safeParse(req.query);
  if (!parsed.success) {
    return res.status(400).json({ error: 'Invalid search parameters' });
  }

  const safeQuery = escapeHtml(parsed.data.q);
  res.send(`<h1>Results for: ${safeQuery}</h1>`);
});
Other tips for preventing XSS:
- Use Content Security Policy (CSP) headers to restrict script execution (see the sketch after this list)
- Encode output appropriately for the context (HTML, JavaScript, CSS, URL)
- Never insert user data directly into script tags or event handlers
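To make the CSP tip concrete, here is a minimal sketch using helmet's contentSecurityPolicy middleware; the directives shown are illustrative and need to be tuned to the scripts and assets your application actually loads.

// Minimal CSP sketch (illustrative directives; adjust to your app's real sources)
import helmet from 'helmet';

app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],      // only load resources from our own origin by default
    scriptSrc: ["'self'"],       // no inline or third-party scripts
    objectSrc: ["'none'"],       // block plugin content entirely
    upgradeInsecureRequests: []  // rewrite http:// subresource requests to https://
  }
}));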
3. Insecure client-side data storage
Vulnerability: Sensitive data in browser storage
AI-generated authentication flows commonly store JWT tokens, API keys, or user data in localStorage or sessionStorage. This data is accessible to any JavaScript running on the page, making it vulnerable to XSS attacks and browser extensions with malicious intent.
// Vulnerable: Sensitive data exposed to scripts
function loginUser(credentials) {
  fetch('/api/login', { method: 'POST', body: JSON.stringify(credentials) })
    .then(res => res.json())
    .then(data => {
      // Dangerous: Token accessible to any script
      localStorage.setItem('authToken', data.token);
      localStorage.setItem('userInfo', JSON.stringify(data.user));
    });
}
How to fix this vulnerability
Use secure, HttpOnly cookies for authentication tokens. Store only non-sensitive data in browser storage, and validate all sensitive operations server-side.
// Secure: Server sets HttpOnly cookies
// Client-side: No token handling needed
function loginUser(credentials) {
  fetch('/api/login', {
    method: 'POST',
    credentials: 'include', // Include cookies
    headers: { 'Content-Type': 'application/json' }, // JSON content type so the server can parse the body
    body: JSON.stringify(credentials)
  })
    .then(res => res.json())
    .then(data => {
      // Only store non-sensitive display data
      localStorage.setItem('username', data.username);
      // Server automatically sets secure cookie
      window.location.href = '/dashboard';
    });
}
// Server-side: Secure cookie configuration
app.post('/api/login', async (req, res) => {
  // ... validate credentials ...

  res.cookie('session', jwtToken, {
    httpOnly: true,   // Not accessible to JavaScript
    secure: true,     // Only sent over HTTPS
    sameSite: 'Lax',  // CSRF protection
    path: '/',
    maxAge: 60 * 60 * 1000 // 1 hour
  });

  res.json({ username: user.username });
});
Other tips for secure data storage:
- Never store passwords, tokens, or API keys in browser storage
- Use secure cookie flags: HttpOnly, Secure, SameSite
- Implement proper session management with server-side validation (a minimal sketch follows this list)
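To complement the HttpOnly-cookie login above, the server still has to validate the session cookie on every request. Below is a minimal sketch that assumes the cookie-parser and jsonwebtoken packages and a SESSION_SECRET environment variable; the cookie name mirrors the earlier example and the /api/profile route is illustrative.

// Server-side session validation (sketch; assumes cookie-parser and jsonwebtoken)
import cookieParser from 'cookie-parser';
import jwt from 'jsonwebtoken';

app.use(cookieParser());

function requireSession(req, res, next) {
  const token = req.cookies.session; // HttpOnly cookie set at login
  if (!token) {
    return res.status(401).json({ error: 'Authentication required' });
  }
  try {
    // Verify signature and expiry server-side; never trust client-side state
    req.user = jwt.verify(token, process.env.SESSION_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired session' });
  }
}

app.get('/api/profile', requireSession, (req, res) => {
  res.json({ username: req.user.username });
});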
Critical backend vulnerabilities in vibe-coded applications
Backend security is where AI-generated code shows its most dangerous gaps. While AI excels at creating functional APIs, it consistently misses authorization checks, proper authentication, and secure data handling practices that prevent serious breaches.
4. Weak password storage and authentication
Vulnerability: Insecure password handling
AI-generated authentication systems frequently use weak hashing algorithms like MD5 or SHA-1, or even store passwords in plain text. Some implement custom authentication schemes that bypass established security practices, creating easy targets for credential-based attacks.
// Vulnerable: Weak password storage
app.post('/register', async (req, res) => {
  const { username, password } = req.body;

  // Dangerous: Plain text storage
  const user = await db.users.create({
    username,
    password: password // Stored in plain text!
  });

  res.json({ message: 'User created' });
});
How to fix this vulnerability
Always use modern, robust password hashing algorithms designed for password storage. Argon2id is currently the gold standard, offering excellent protection against both online and offline attacks.
// Secure: Proper password hashing
import argon2 from 'argon2';

app.post('/register', async (req, res) => {
  const { username, password } = req.body;

  // Validate password strength first
  if (password.length < 12) {
    return res.status(400).json({ error: 'Password must be at least 12 characters' });
  }

  try {
    // Secure: Argon2id hashing with automatic salt
    const passwordHash = await argon2.hash(password, {
      type: argon2.argon2id,
      memoryCost: 2 ** 16, // 64 MB
      timeCost: 3,
      parallelism: 1
    });

    const user = await db.users.create({
      username,
      passwordHash // Store hash, never plaintext
    });

    res.json({ message: 'User created successfully' });
  } catch (error) {
    res.status(500).json({ error: 'Registration failed' });
  }
});

// Verification during login
app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await db.users.findOne({ username });

  if (!user || !await argon2.verify(user.passwordHash, password)) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  // Set secure session...
});
Other tips for secure authentication:
- Never roll your own crypto - use established libraries
- Implement account lockout after failed attempts (see the sketch after this list)
- Require strong passwords and consider implementing MFA
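For the account-lockout tip, a deliberately simple in-memory sketch is shown below; the thresholds are illustrative, and a real deployment would keep the counters in Redis or the database so they survive restarts and are shared across instances.

// Simple account lockout sketch (in-memory; use Redis or the database in production)
const failedLogins = new Map(); // username -> { count, lockedUntil }
const MAX_ATTEMPTS = 5;
const LOCKOUT_MS = 15 * 60 * 1000; // 15 minutes

function isLocked(username) {
  const entry = failedLogins.get(username);
  return Boolean(entry && entry.lockedUntil > Date.now());
}

function recordFailure(username) {
  const entry = failedLogins.get(username) || { count: 0, lockedUntil: 0 };
  entry.count += 1;
  if (entry.count >= MAX_ATTEMPTS) {
    entry.lockedUntil = Date.now() + LOCKOUT_MS; // start the lockout window
    entry.count = 0;                             // reset the counter for the next window
  }
  failedLogins.set(username, entry);
}

function clearFailures(username) {
  failedLogins.delete(username);
}

// In the login handler:
//   if (isLocked(username)) return res.status(429).json({ error: 'Account temporarily locked' });
//   on a wrong password: recordFailure(username);
//   on success: clearFailures(username);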
5. Missing authorization checks
Vulnerability: Broken function-level authorization
AI-generated API endpoints often assume that authentication equals authorization. They verify that a user is logged in but fail to check whether that specific user should be allowed to perform the requested action, leading to privilege escalation vulnerabilities.
// Vulnerable: No authorization checks
app.delete('/api/users/:userId', authMiddleware, async (req, res) => {
  const { userId } = req.params;

  // Dangerous: Any authenticated user can delete any user
  await db.users.delete({ id: userId });
  res.json({ message: 'User deleted' });
});
How to fix this vulnerability
Implement proper role-based access control (RBAC) that checks both authentication and authorization for every protected action. Always verify that the current user has permission to access the requested resource.
// Secure: Proper authorization with role checks
function requireRole(allowedRoles: string[]) {
  return (req: any, res: any, next: any) => {
    if (!req.user) {
      return res.status(401).json({ error: 'Authentication required' });
    }

    const userRoles = req.user.roles || [];
    const hasPermission = allowedRoles.some(role => userRoles.includes(role));

    if (!hasPermission) {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    next();
  };
}

// Protected endpoint with proper authorization
app.delete('/api/users/:userId',
  authMiddleware,
  requireRole(['admin']),
  async (req, res) => {
    const { userId } = req.params;

    const targetUser = await db.users.findById(userId);
    if (!targetUser) {
      return res.status(404).json({ error: 'User not found' });
    }

    // Additional check: admins can't delete other admins without super-admin role
    if (targetUser.roles.includes('admin') && !req.user.roles.includes('super-admin')) {
      return res.status(403).json({ error: 'Cannot delete admin user' });
    }

    await db.users.delete({ id: userId });
    res.json({ message: 'User deleted successfully' });
  }
);
Other tips for preventing authorization bypasses:
- Implement the principle of least privilege
- Use deny-by-default policies (whitelist permissions); see the sketch after this list
- Regularly audit user permissions and roles
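One way to make deny-by-default concrete is an explicit permission map checked by middleware, so anything not listed is refused. The permission names and role assignments below are illustrative.

// Deny-by-default permission check (sketch; permission names are illustrative)
const rolePermissions = {
  admin: ['users:read', 'users:delete', 'reports:read'],
  analyst: ['reports:read']
  // roles with no entry have no permissions at all
};

function requirePermission(permission) {
  return (req, res, next) => {
    const roles = (req.user && req.user.roles) || [];
    const allowed = roles.some(role => (rolePermissions[role] || []).includes(permission));
    if (!allowed) {
      // Default is deny: unknown roles and unlisted permissions are rejected
      return res.status(403).json({ error: 'Insufficient permissions' });
    }
    next();
  };
}

// Usage:
// app.delete('/api/users/:userId', authMiddleware, requirePermission('users:delete'), deleteUserHandler);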
6. SQL injection vulnerabilities
Vulnerability: Dynamic SQL construction
AI-generated database queries often concatenate user input directly into SQL strings, creating classic SQL injection vulnerabilities. This happens most frequently in search functionality, filtering, and dynamic query construction.
// Vulnerable: String concatenation in SQL
app.get('/api/users/search', async (req, res) => {
  const { name, department } = req.query;

  // Dangerous: Direct string interpolation
  const query = `SELECT * FROM users WHERE name LIKE '%${name}%' AND department = '${department}'`;
  const users = await db.raw(query);

  res.json(users);
});
How to fix this vulnerability
Always use parameterized queries or prepared statements that separate SQL logic from user data. This prevents injection attacks by ensuring user input is treated as data, never as executable code.
// Secure: Parameterized queries
app.get('/api/users/search', async (req, res) => {
  const { name, department } = req.query;

  // Validate input first
  if (!name || typeof name !== 'string' || name.length > 100) {
    return res.status(400).json({ error: 'Invalid name parameter' });
  }

  if (!department || !['engineering', 'sales', 'marketing'].includes(department)) {
    return res.status(400).json({ error: 'Invalid department parameter' });
  }

  // Secure: Parameterized query with placeholders
  const users = await db.query(
    'SELECT id, name, email, department FROM users WHERE name ILIKE $1 AND department = $2',
    [`%${name}%`, department]
  );

  res.json(users);
});
Other tips for preventing SQL injection:
- Use ORM libraries that provide built-in parameterization (a query-builder sketch follows this list)
- Never use dynamic SQL construction with user input
- Apply input validation and sanitization before database operations
- Use stored procedures where appropriate
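To illustrate the query-builder tip, here is roughly how the same search could look with Knex, which binds values as parameters instead of concatenating them; the connection setup and schema are assumed to match the earlier example.

// Query-builder sketch using Knex (values are bound as parameters, never concatenated)
import knex from 'knex';

const db = knex({ client: 'pg', connection: process.env.DATABASE_URL });

async function searchUsers(name, department) {
  return db('users')
    .select('id', 'name', 'email', 'department')
    .where('name', 'ilike', `%${name}%`) // Knex sends this as a bound parameter
    .andWhere('department', department);
}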
Operational security gaps in vibe-coded projects
Beyond code-level vulnerabilities, AI-generated applications often miss crucial operational security practices. These gaps can turn minor issues into major breaches.
7. Information disclosure through error handling
Vulnerability: Verbose error messages
AI-generated error handling frequently exposes sensitive information like database schema details, file paths, internal server configuration, or stack traces. This information helps attackers understand your system's architecture and identify additional attack vectors.
// Vulnerable: Exposing internal details
app.post('/api/users', async (req, res) => {
  try {
    const user = await db.users.create(req.body);
    res.json(user);
  } catch (error) {
    // Dangerous: Exposes database errors and internal paths
    res.status(500).json({
      error: error.message,
      stack: error.stack,
      query: error.sql
    });
  }
});
How to fix this vulnerability
Implement proper error handling that logs detailed information server-side while returning generic, non-revealing messages to clients. Use error tracking services for debugging without exposing internals.
// Secure: Safe error handling with logging
import crypto from 'crypto';

app.post('/api/users', async (req, res, next) => {
  try {
    const user = await db.users.create(req.body);
    res.json(user);
  } catch (error) {
    next(error); // Let error middleware handle it
  }
});

// Error-handling middleware, registered after the routes so it catches their errors
app.use((err: any, req: any, res: any, next: any) => {
  const requestId = crypto.randomUUID();

  // Detailed logging server-side for debugging
  console.error({
    requestId,
    error: err.message,
    stack: err.stack,
    url: req.url,
    method: req.method,
    userAgent: req.get('User-Agent'),
    ip: req.ip
  });

  // Generic response to client
  res.status(500).json({
    error: 'Internal Server Error',
    requestId // For support correlation only
  });
});
Other tips for secure error handling:
- Never expose stack traces, SQL queries, or file paths in responses
- Use centralized error handling middleware
- Implement proper logging and monitoring for security events
8. Uncontrolled file uploads
Vulnerability: Insufficient file validation
AI-generated file upload features often accept any file type and size, creating opportunities for malware uploads, storage exhaustion attacks, and potential code execution if files are served from the same domain.
// Vulnerable: No file validation
app.post('/upload', async (req, res) => {
  const file = req.files.upload;

  // Dangerous: No validation, any file type/size accepted
  await file.mv(`./uploads/${file.name}`);
  res.json({ message: 'File uploaded successfully' });
});
How to fix this vulnerability
Implement comprehensive file validation including type checking, size limits, content scanning, and secure storage practices. Never trust client-provided file information.
// Secure: Comprehensive file upload security
import multer from 'multer';
import path from 'path';
import crypto from 'crypto';
import fs from 'fs/promises';

const storage = multer.diskStorage({
  destination: './uploads/',
  filename: (req, file, cb) => {
    // Generate secure random filename
    const ext = path.extname(file.originalname);
    const filename = crypto.randomUUID() + ext;
    cb(null, filename);
  }
});

const upload = multer({
  storage,
  limits: {
    fileSize: 5 * 1024 * 1024, // 5MB limit
    files: 1 // Only one file per request
  },
  fileFilter: (req, file, cb) => {
    // Whitelist allowed MIME types
    const allowedTypes = ['image/png', 'image/jpeg', 'application/pdf'];

    if (!allowedTypes.includes(file.mimetype)) {
      return cb(new Error('File type not allowed'));
    }

    // Additional extension check
    const allowedExtensions = ['.png', '.jpg', '.jpeg', '.pdf'];
    const ext = path.extname(file.originalname).toLowerCase();

    if (!allowedExtensions.includes(ext)) {
      return cb(new Error('File extension not allowed'));
    }

    cb(null, true);
  }
});

app.post('/upload', authMiddleware, upload.single('file'), async (req, res, next) => {
  if (!req.file) {
    return res.status(400).json({ error: 'No file uploaded' });
  }

  try {
    // Additional security: scan file content
    // In production, integrate with antivirus API
    const fileBuffer = await fs.readFile(req.file.path);

    // Store file metadata securely
    const fileRecord = await db.files.create({
      userId: req.user.id,
      filename: req.file.filename,
      originalName: req.file.originalname,
      mimetype: req.file.mimetype,
      size: req.file.size,
      uploadedAt: new Date()
    });

    res.json({
      message: 'File uploaded successfully',
      fileId: fileRecord.id
    });
  } catch (error) {
    // Clean up file on error
    await fs.unlink(req.file.path).catch(console.error);
    next(error);
  }
});
Other tips for secure file uploads:
- Store files outside the web root or use cloud storage
- Implement virus scanning for uploaded files
- Use Content-Disposition headers to prevent execution (see the download sketch after this list)
- Set up proper access controls for stored files
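For the Content-Disposition tip, a download endpoint along these lines forces the browser to save uploads instead of rendering them. The db.files lookup is carried over from the upload example above and is a placeholder for whatever persistence layer you use.

// Serve uploads as attachments so the browser never renders or executes them (sketch)
app.get('/files/:fileId', authMiddleware, async (req, res, next) => {
  try {
    const fileRecord = await db.files.findById(req.params.fileId);
    if (!fileRecord || fileRecord.userId !== req.user.id) {
      return res.status(404).json({ error: 'File not found' });
    }

    res.setHeader('X-Content-Type-Options', 'nosniff'); // block MIME sniffing
    // res.download sets Content-Disposition: attachment with the original filename
    res.download(`./uploads/${fileRecord.filename}`, fileRecord.originalName);
  } catch (error) {
    next(error);
  }
});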
9. Lack of security monitoring
Vulnerability: No security event tracking
AI-generated applications rarely include security monitoring, logging of suspicious activities, or alerting mechanisms. This makes it difficult to detect attacks in progress or investigate security incidents.
// Vulnerable: No security logging
app.post('/api/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await db.users.findOne({ username });

  if (!user || !await verifyPassword(password, user.hash)) {
    // No logging of failed attempts
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  // No logging of successful logins
  res.json({ token: generateToken(user) });
});
How to fix this vulnerability
Implement comprehensive security logging and monitoring that tracks authentication events, authorization failures, and suspicious activities. Set up alerting for potential attacks.
// Secure: Comprehensive security monitoring
import rateLimit from 'express-rate-limit';

function logSecurityEvent(event: string, details: any, req: any) {
  const logEntry = {
    timestamp: new Date().toISOString(),
    event,
    ip: req.ip,
    userAgent: req.get('User-Agent'),
    url: req.url,
    ...details
  };

  // Log to security monitoring system
  console.log('[SECURITY]', JSON.stringify(logEntry));

  // Send to SIEM/monitoring service in production
  // await securityLogger.log(logEntry);
}

app.post('/api/login', rateLimit({ windowMs: 15 * 60 * 1000, max: 5 }), async (req, res) => {
  const { username, password } = req.body;
  const user = await db.users.findOne({ username });

  if (!user) {
    logSecurityEvent('LOGIN_ATTEMPT_INVALID_USER', { username }, req);
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  if (!await verifyPassword(password, user.hash)) {
    logSecurityEvent('LOGIN_ATTEMPT_WRONG_PASSWORD', {
      userId: user.id,
      username: user.username
    }, req);
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  // Log successful authentication
  logSecurityEvent('LOGIN_SUCCESS', {
    userId: user.id,
    username: user.username
  }, req);

  res.json({ token: generateToken(user) });
});

// Monitor for suspicious patterns
app.use('/api', (req, res, next) => {
  // Log all API access with user context
  if (req.user) {
    logSecurityEvent('API_ACCESS', {
      userId: req.user.id,
      endpoint: req.path,
      method: req.method
    }, req);
  }
  next();
});
Other tips for security monitoring:
- Implement dependency scanning and automated updates
- Set up intrusion detection systems
- Monitor for unusual access patterns and data access
- Create incident response procedures
Guardrails that actually work in vibe-coding workflows
- Automated security review in the pipeline: Tools such as Vidoc Security Lab automatically detect the vulnerability patterns covered in this guide, from hardcoded secrets and broken authorization to SQL injection and supply-chain risks.
- Ownership and mandatory review for risky areas: Auth flows, payments, and data-handling code should never merge on AI output alone. Require approval from the relevant domain owners (Software Mind).
- Secret hygiene: Organization-wide secret scanning (repos, wikis, tickets, chat), automatic revocation/rotation SLAs, and pre-commit hooks (GitGuardian).
- Dependency policy: Allowlist new packages, require integrity (hashes/signatures), prefer verified publishers, monitor SBOM for drift, and alert on trust changes (TechRadar, SecurityWeek).
- Security step in the vibe loop: Treat AI output like a junior developer's draft. Write tests, fuzz critical inputs, validate threat models, and only then deploy (Google Cloud, IBM). The test sketch below is one cheap starting point.
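As one concrete example of the "write tests" step, the sketch below uses Jest and supertest to check that the user-deletion endpoint from earlier rejects unauthenticated and non-admin callers; it assumes the Express app is exported from ./app, and getSessionCookieFor is a placeholder test helper.

// Security regression tests (sketch; getSessionCookieFor is a placeholder test helper)
import request from 'supertest';
import { app } from './app';

describe('DELETE /api/users/:userId', () => {
  it('rejects unauthenticated requests', async () => {
    const res = await request(app).delete('/api/users/123');
    expect(res.status).toBe(401);
  });

  it('rejects authenticated users without the admin role', async () => {
    const cookie = await getSessionCookieFor('regular-user'); // placeholder helper
    const res = await request(app)
      .delete('/api/users/123')
      .set('Cookie', cookie);
    expect(res.status).toBe(403);
  });
});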
Secure your vibe-coded applications with automated security reviews
If you've made it this far, you understand that vibe coding requires security guardrails to be truly effective. Vidoc Security Lab addresses exactly this challenge with automated security code review designed for AI-accelerated development.
Our AI Security Engineer integrates directly into your development pipeline, automatically detecting the vulnerability patterns covered in this guide, from hardcoded secrets and broken authorization to SQL injection and supply-chain risks.
Try Vidoc Security Lab for free and secure your code at the speed of vibe coding.