How to Triage Security Vulnerability Reports Without Dropping the Ball
Someone just emailed you about a security vulnerability. Here's how to handle it correctly, whether it's a real critical bug or a scanner false positive.
The Worst Way to Handle a Security Report
Ignore it. Let it sit in a support queue for two weeks. Then the researcher publishes it on Twitter, and you're scrambling.
This happens more often than you'd think. Security reports arrive through weird channels: the general support inbox, a LinkedIn message, a GitHub issue, a random form submission. If your support team doesn't know what a security report looks like, it gets treated like a normal ticket and waits its turn.
Don't let this happen.
Set Up a Dedicated Intake Channel
Create security@yourcompany.com. Put it on your website. Add a security.txt file to your domain (/.well-known/security.txt) with your contact info, preferred language, and PGP key if you have one.
The security.txt standard (RFC 9116) is simple. Here's what it looks like:
```
Contact: mailto:security@yourcompany.com
Preferred-Languages: en
Policy: https://yourcompany.com/security-policy
```
Researchers look for this. Having it signals that you take security seriously and won't ignore their report.
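If you want to sanity-check your own file (or read someone else's), the format is easy to parse. Below is a minimal Python sketch that collects the `Field: value` lines from a security.txt body; it is simplified relative to RFC 9116 (it ignores PGP signatures and expiry validation, for example).

```python
# Minimal sketch: parse the key fields out of a security.txt body.
# Field names follow RFC 9116; parsing here is deliberately simplified
# (no signature or Expires validation).

def parse_security_txt(text: str) -> dict[str, list[str]]:
    """Collect 'Field: value' lines into a dict of lists (fields may repeat)."""
    fields: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        name, _, value = line.partition(":")
        if value:
            fields.setdefault(name.strip().lower(), []).append(value.strip())
    return fields

example = """\
Contact: mailto:security@yourcompany.com
Preferred-Languages: en
Policy: https://yourcompany.com/security-policy
"""

parsed = parse_security_txt(example)
print(parsed["contact"])  # ['mailto:security@yourcompany.com']
```

Note that `partition(":")` splits on the first colon only, so values like `mailto:` URLs survive intact.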
The First Response: Speed Matters
Acknowledge the report within 24 hours. Ideally within a few hours. The content of your acknowledgment doesn't need to be detailed:
"Thank you for reporting this. We've received your report and our security team is reviewing it. We'll follow up within [X business days] with an initial assessment."
That's it. You don't need to confirm or deny the vulnerability yet. You just need to show the reporter that a human read their email and it's being handled.
Why speed matters: security researchers have a disclosure timeline in their head. Most follow a 90-day responsible disclosure window. If they don't hear back from you, they may assume you're ignoring them and publish sooner. A quick acknowledgment buys you the full window.
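To keep that window visible internally, it helps to compute the disclosure deadline the moment a report lands. A minimal sketch, assuming the common 90-day window (adjust if the reporter states a different timeline):

```python
# Minimal sketch: track the disclosure clock for an incoming report.
# 90 days is the common responsible-disclosure window; the reporter's
# stated timeline, if any, takes precedence.

from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_deadline(reported_on: date) -> date:
    """Date by which a fix (or a negotiated extension) should be in place."""
    return reported_on + DISCLOSURE_WINDOW

print(disclosure_deadline(date(2024, 1, 15)))  # 2024-04-14
```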
Triage: Real Threat or Noise?
Not all security reports are equal. You'll get everything from critical remote code execution vulnerabilities to someone running an automated scanner and forwarding every finding without reading them.
High Priority (Investigate Immediately)
- Remote code execution
- SQL injection
- Authentication bypass
- Exposed credentials, API keys, or secrets
- Unauthorized data access (someone can see other users' data)
- Privilege escalation
These get escalated to engineering within hours, not days.
Medium Priority (Investigate This Week)
- Cross-site scripting (XSS) that requires user interaction
- CSRF vulnerabilities
- Information disclosure (server versions, stack traces, internal paths)
- Insecure direct object references
- Missing rate limiting on sensitive endpoints
Low Priority (Acknowledge and Schedule)
- Missing security headers (X-Frame-Options, CSP, etc.)
- SSL/TLS configuration weaknesses that don't enable practical attacks
- Theoretical attacks that require unlikely preconditions
- Findings from automated scanners with no proof of exploitability
Not a Vulnerability
- "Your website doesn't have a CAPTCHA on the login page" (annoying, not a vulnerability) - SPF/DKIM/DMARC configuration suggestions - Reports about third-party services you don't control - Social engineering "vulnerabilities" (e.g., "I called your support team and they gave me information")
Be polite when closing these. The reporter took time to write it up. Thank them, explain why it doesn't qualify, and invite them to report future findings.
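The tiers above can be encoded as a first-pass triage map. This is a minimal sketch with illustrative category names (not a prescribed taxonomy); a human still needs to verify the categorization, especially for scanner output with no proof of exploitability.

```python
# Minimal sketch: map a finding's category to the triage tiers above.
# Category names are illustrative; real triage needs human judgment.

TRIAGE_TIERS = {
    "high": {"rce", "sql_injection", "auth_bypass", "exposed_secrets",
             "unauthorized_data_access", "privilege_escalation"},
    "medium": {"xss", "csrf", "info_disclosure", "idor", "missing_rate_limit"},
    "low": {"missing_headers", "tls_config", "theoretical", "scanner_only"},
}

def triage(category: str) -> str:
    """Return 'high', 'medium', 'low', or 'not_a_vulnerability'."""
    for tier, categories in TRIAGE_TIERS.items():
        if category in categories:
            return tier
    return "not_a_vulnerability"

print(triage("sql_injection"))    # high
print(triage("captcha_missing"))  # not_a_vulnerability
```

Anything that falls through to `not_a_vulnerability` gets the polite close described above, not silence.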
The Investigation
For real vulnerabilities, assign an engineer to reproduce and assess the issue. Document:
- Can you reproduce it?
- What's the actual impact? (Not theoretical, actual)
- How many users are affected?
- Is there evidence of exploitation in the wild?
- What's the fix?
- What's the timeline for the fix?
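That checklist can double as a record structure, so every confirmed report produces the same set of documented answers. A minimal sketch; the field names are illustrative, not a prescribed schema:

```python
# Minimal sketch: the investigation checklist as a record, so every
# confirmed report is documented the same way. Field names are
# illustrative, not a prescribed schema.

from dataclasses import dataclass

@dataclass
class Investigation:
    reproduced: bool
    actual_impact: str        # observed impact, not theoretical
    users_affected: int
    exploited_in_wild: bool
    fix_description: str
    fix_eta_days: int

    def ready_to_report_back(self) -> bool:
        """Don't update the reporter without a fix plan and a timeline."""
        return bool(self.fix_description) and self.fix_eta_days > 0

record = Investigation(
    reproduced=True,
    actual_impact="Authenticated users can read other tenants' invoices",
    users_affected=1200,
    exploited_in_wild=False,
    fix_description="Add tenant check to invoice lookup",
    fix_eta_days=14,
)
print(record.ready_to_report_back())  # True
```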
Communicating with the Reporter
Keep them updated. At minimum:
1. Initial acknowledgment (within 24 hours)
2. Assessment result ("we've confirmed this is a valid vulnerability and are working on a fix" or "we've determined this doesn't pose a security risk because...")
3. Fix notification ("this has been patched in version X" or "this has been fixed in production")
4. Credit ("would you like to be credited in our security acknowledgments?")
Be specific about timelines. "We're working on it" without a timeframe is frustrating. "We expect to have a fix deployed within two weeks" is concrete.
Bug Bounty Programs
If you receive enough security reports (more than a few per month), consider a formal bug bounty program. Platforms like HackerOne and Bugcrowd manage the intake, triage, and payment process.
You don't need a massive budget. A clear scope (what's in bounds, what's out of bounds) and fair payouts ($50-500 for most findings, more for critical issues) attract quality researchers.
Even without a formal program, consider offering something for valid reports. A thank-you note and public credit costs nothing and builds goodwill with the security research community.
Train Your Support Team
Your frontline support agents need to recognize security reports and route them correctly. Train them to look for keywords: "vulnerability," "security," "exploit," "injection," "XSS," "authentication bypass," "data exposure."
When a support agent sees these signals, the ticket should be escalated to your security contact immediately, not after the normal SLA.
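The keyword check itself is simple enough to automate as a first filter. Below is a minimal sketch using the keyword list above; the destination address is illustrative, and a real system would also handle obfuscation, attachments, and non-English reports.

```python
# Minimal sketch of keyword-based routing for inbound tickets.
# The keyword list mirrors the training guidance above; the routing
# destination is an illustrative placeholder.

SECURITY_KEYWORDS = {
    "vulnerability", "security", "exploit", "injection", "xss",
    "authentication bypass", "data exposure",
}

def is_security_report(subject: str, body: str) -> bool:
    """True if the ticket text contains any security keyword."""
    text = f"{subject} {body}".lower()
    return any(keyword in text for keyword in SECURITY_KEYWORDS)

def route(subject: str, body: str) -> str:
    # Security reports skip the normal queue and go straight to the
    # security contact; everything else follows the standard SLA.
    if is_security_report(subject, body):
        return "security@yourcompany.com"
    return "support-queue"

print(route("Found an XSS on your login page", "Details attached."))
# security@yourcompany.com
```

Keyword matching over-triggers by design: a false escalation costs a few minutes of review, while a missed security report can cost you the disclosure window.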
Supp's intent classification can detect security-related messages automatically and route them with high priority. Classification takes 100-200ms, so it happens before anyone reads the ticket, and security reports never sit in the general queue.