Creating Your VDP

A program policy is essential for any VDP and needs to be crafted carefully. The program policy is the first thing security researchers see when they consider participating in your VDP. It sets the tone for the program, defines expectations, and states your commitment to the researchers who choose to participate.

How to create and host your program policy

Use the guidelines below to draft the program policy for your VDP. Program policies are usually only 1-3 pages long and typically include the following topics:

  • A researcher promise
  • Testing guidelines
  • The scope of the program

The program policy needs to be available to all potential researchers. If you plan on privately launching the VDP to only a few invited researchers, then the program policy needs some form of access control so that it's available to the researchers you've invited but hidden from everyone else. Researchers also need a way to submit reports, such as a web form or an email alias connected to a ticketing system for tracking the reports. Consider this while setting up the VDP's online resources.
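
If you host the policy yourself and need that access control, the following is a minimal sketch of one way to gate a private policy page. It assumes Flask and HTTP Basic authentication; the route, the per-researcher credentials, and the policy file path are illustrative placeholders rather than features of any particular platform.

# Minimal sketch: serve a private program policy behind HTTP Basic auth.
# Assumes Flask is installed; the invited-researcher credentials and the
# policy file path are hypothetical placeholders.
import secrets
from flask import Flask, Response, request

app = Flask(__name__)

# Hypothetical per-researcher credentials issued with each invitation.
INVITED_RESEARCHERS = {
    "researcher1": "replace-with-a-generated-token",
}

def authorized(auth) -> bool:
    if auth is None or auth.username not in INVITED_RESEARCHERS:
        return False
    expected = INVITED_RESEARCHERS[auth.username]
    return secrets.compare_digest(auth.password or "", expected)

@app.route("/policy")
def policy():
    if not authorized(request.authorization):
        return Response("Invited researchers only.", 401,
                        {"WWW-Authenticate": 'Basic realm="VDP"'})
    with open("program_policy.txt", encoding="utf-8") as f:
        return Response(f.read(), mimetype="text/plain")

if __name__ == "__main__":
    app.run(port=8000)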

Third-party vulnerability disclosure and bug bounty platforms generally offer capabilities such as:

  • A way for you to create, edit, and publish a policy
  • Access controls to create a private program
  • Automated invitations that bring hackers in at a comfortable pace
  • Inbox functionality to facilitate processing incoming reports

Third-party platforms also offer a variety of consulting services to help ease the process of creating and launching a VDP. Typically, third-party platforms and consulting services come at a cost. Consider the costs and benefits of using a third party versus building and managing your program in-house to determine the best path forward for your organization.

For additional inspiration on what to include in your program policy, read the United States Department of Justice's "A Framework for a Vulnerability Disclosure Program for Online Systems".

Program policy stakeholders

As you draft your program policy, consider how to work with your stakeholders. Various teams may provide input on considerations to build into your policy.

Considerations by stakeholder:
Legal
  • Work with your legal team to draft your program policy and terms under which hackers will participate.
  • Researchers aren't compensated, so there's no good reason to subject them to extensive onboarding requirements or burdensome terms.
IT
  • Work with your IT team to help develop testing requirements and scope, such as a requirement not to create denial of service conditions.
Engineering
  • Engineering may have input on testing requirements and scope, including what types of vulnerabilities are the most or least interesting.
PR
  • Work with your PR team to review policy language on disclosure.
Security
  • The security team typically leads the creation of the policy.
  • The security team will likely receive feedback from hackers and will iterate on the policy over time with other stakeholders.

Researcher promise

The researcher promise explains your organization's commitments to participating researchers who act in good faith and follow the testing guidelines outlined in the policy. For example, you might commit to responding to all incoming security reports within a specific timeframe and to communicating which vulnerability reports you accept and intend to fix.

Example:

<Name of your organization> is committed to working with security researchers to help identify and fix vulnerabilities in our systems and services. As long as you act in good faith and abide by the guidelines outlined in this policy, we will make our best effort to do the following:
  • Provide an initial response to your vulnerability report within three business days
  • Determine if we will accept (intend to fix) or reject (identify your report as a false positive or acceptable risk) your vulnerability report within ten business days
  • Keep you up to date on progress towards remediation of reports we accept from you
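
These commitments are easier to keep when the deadlines are computed rather than remembered. Below is a minimal sketch, assuming the three- and ten-business-day targets from the example above, that calculates both deadlines from a report's submission date by skipping weekends; holidays are ignored for simplicity.

# Sketch: compute the response deadlines promised in the example policy above.
# Assumes the 3-business-day initial response and 10-business-day accept/reject
# decision; holidays are not handled.
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

received = date(2024, 3, 1)  # hypothetical report submission date
print("Initial response due:", add_business_days(received, 3))
print("Accept/reject decision due:", add_business_days(received, 10))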

Adopting safe harbor language in your program policy helps assure researchers that legal action will not be taken against them for testing against your systems, as long as they act in good faith and follow all the guidelines explained in the policy.

Testing guidelines

The testing guidelines describe the security testing that is in scope of the VDP, as well as testing that isn't in scope and should be avoided by researchers. If there are specific types of vulnerabilities you'd like researchers to focus on, this section is a good place to highlight them.

Example:
When performing security testing, please adhere to the following guidelines:

  • Only test against your own accounts and data (e.g. create test accounts). If you identify a vulnerability that may result in access to other users' data, please check with us first before testing further.
  • If you inadvertently access other users' data in your testing, please let us know, and do not store any such user data.
  • Do not perform testing that results in denial of service conditions or degradation of our production services.
  • Social engineering is out of scope for this program; do not attempt to socially engineer our organization or our users.


We're particularly interested in the following types of vulnerabilities and impacts:

  • Remote code execution
  • XSS resulting in access to sensitive data (e.g. session info)
  • SQL injection resulting in access to sensitive data or functionality
  • Business logic flaws that result in access to sensitive data or functionality


We are less interested in the following types of vulnerabilities, which are more likely to be rejected as false positives or accepted risks:

  • Lack of the X-Frame-Options header on pages without state-changing functionality
  • Unverified automated scanner results
  • Issues that are unlikely to be exploitable and/or that do not have realistic security impact

Scope

The scope defines the assets that researchers can test against, as well as which assets aren't considered part of the VDP. Consider your scope carefully: make it as expansive as possible without overloading your team. The more you're willing to put in scope, the more likely you are to get engagement from security researchers, but don't make the scope so expansive that your team can't keep up with the incoming reports. Start with a few assets in scope and expand as you get a better idea of the report volume you'll receive. If you plan to open your VDP to the public over time, aim to have everything in scope by the time you do.

When defining your scope in your program policy, adding detail about each asset or area helps security researchers know what's important to you and where to focus their efforts. You can also include tips on how to safely test against your assets. Here's an example:

Asset: mail.example.com
Description: Primary domain for users to access their email.
Interesting Vulnerabilities and Impacts:
  • Vulnerabilities that result in unauthorized access to other users' email.
  • Ability to irrecoverably delete another user's email or entire account.
Issues Likely to be Rejected:
  • SPF record issues
  • Phishing or issues that facilitate phishing
  • Ability to send potentially malicious attachments
Testing Guidelines: Only test against accounts you own or have express consent to test against. When creating test accounts, please include "vdptest" somewhere in the username. You can create test accounts at mail.example.com/new.

This is a fairly detailed breakdown. Alternatively, you could include a simple list of in scope and out of scope assets:

In Scope

  • mail.example.com
  • example.com

Out of Scope

  • blog.example.com
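
A simple list like this can also double as a quick triage aid. The sketch below assumes the example hostnames above and does exact-match checking only; real scopes often need wildcard or path-level rules, so treat it as an illustration rather than a complete scope checker.

# Sketch: check whether a reported asset falls inside the example scope above.
# The hostname sets mirror the illustrative lists; adjust them to your own policy.
IN_SCOPE = {"mail.example.com", "example.com"}
OUT_OF_SCOPE = {"blog.example.com"}

def in_scope(hostname: str) -> bool:
    host = hostname.strip().lower()
    if host in OUT_OF_SCOPE:
        return False
    return host in IN_SCOPE

for asset in ("mail.example.com", "blog.example.com", "dev.example.com"):
    print(asset, "->", "in scope" if in_scope(asset) else "out of scope")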

Resourcing your VDP

You'll need certain resources in place before launching a VDP, including resources for:

  • Reviewing incoming vulnerability reports
  • Communicating with hackers
  • Finding asset owners and filing bugs
  • Fixing bugs
  • Vulnerability management / following up on remediation

Revisit key stakeholders

If you haven't already, revisit conversations with the key stakeholders you discussed your VDP with earlier to make sure they're aligned on the launch timeline and to queue up any resources needed for the launch. For example, you may want to work with engineering leadership to ensure their teams are ready for a potential influx of security bugs to work on in the first few weeks after launch. Within your security team, make sure those who triage alerts in your detection and response systems are aware of the VDP launch date, and consider allocating more time and resources for when testing begins. You'll also need to build a team to help support the daily operations of your VDP.

Build your team

Running a VDP requires a decent amount of operational, interrupt-driven work. If you try to review, technically validate, and respond to every vulnerability report that comes in, as well as file every bug, keep track of statuses, and communicate updates to researchers all by yourself, you might burn out. Even if you don't have a large security team, find security-minded volunteers to build a team that can operationalize and run your VDP. You'll still want a defined "owner" or "leader" who is ultimately responsible for the VDP's success, but you'll also need a team to support that leader.

Build an on-duty schedule

Once you've got resources on board and willing to help with your VDP, put some structure behind it by setting up an on-duty schedule. You can create this however you like, but a weekly rotation is fairly common practice. When you're on duty for the week, it's your responsibility to:

  • Triage - review incoming vulnerability reports
    • Technically validate the report and make an "accept" or "reject" decision
    • Communicate your decision to the hacker that reported the issue
    • If necessary, ask for more information from the hacker if you're unable to reproduce the issue
    • If the vulnerability is valid, file a groomed bug with the right owner
  • Vulnerability management - push forward existing vulnerabilities
  • Communicate - provide updates to security researchers on existing reports
    • Researchers may proactively ask for updates on existing reports; check for this and respond as needed
    • If a vulnerability is fixed, communicate this back to the researcher so they know their hard work resulted in positive change at your organization. You can even include template language that asks the researcher to let you know if you've missed anything in your fix, or if your fix could be bypassed in some way.

Depending on how many reports you receive, the complexity of those reports, and the skills and knowledge of the person on duty, being on duty could take anywhere from a few hours to the entire week. Tips for a successful on-duty rotation include:

  • Ensure your team is ready to step in and help support the person on duty during particularly heavy weeks.
  • Have a good handoff process in place; if there are issues that might require immediate attention from the next person on-duty, write up some handoff notes or have a live conversation at the end of the week.
  • Create automated scheduling to ensure everyone knows when they are on duty. This can be as simple as creating recurring calendar entries for each individual (see the scheduling sketch after this list).
  • Especially towards the start of your VDP, double check with the person on-duty to make sure they remember it's their week, as well as to see if they need any help. If you have more junior resources on the rotation, have more senior resources work with them to ensure they feel comfortable and can ask questions as they ramp up.
  • Have a flexible process for swapping weeks. Inevitably someone will have an emergency and need to take time off during their week, or someone will take vacation, etc. When this happens, encourage the team to swap weeks as needed to accommodate everyone's schedules.
  • Create an on-duty "cheat sheet" that outlines what duties must be covered, including documentation on how to carry them out.
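
As one illustration of the scheduling tip above, here is a minimal sketch that prints a weekly rotation for a hypothetical team. The names, start date, and number of weeks are placeholders; the same output could just as easily be turned into recurring calendar entries.

# Sketch: generate a simple weekly on-duty rotation so everyone knows their
# week in advance. Team members and the start date are hypothetical placeholders.
from datetime import date, timedelta

TEAM = ["alice", "bob", "carol", "dan"]  # hypothetical rotation members
ROTATION_START = date(2024, 1, 1)        # a Monday
WEEKS_TO_SCHEDULE = 8

for week in range(WEEKS_TO_SCHEDULE):
    start = ROTATION_START + timedelta(weeks=week)
    end = start + timedelta(days=6)
    person = TEAM[week % len(TEAM)]
    print(f"{start} to {end}: {person} is on duty")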

Decide on in-house vs. third party

Most of the guidance thus far has assumed you'll build and run your VDP in-house. However, a variety of consulting services and platforms can assist you with creating and running a VDP. These third parties typically come at a cost, but they can be useful in guiding you through creating, launching, and running your VDP. Some even offer triage services that review incoming vulnerability reports for you, handle communication with hackers, and escalate only valid reports to your team. Whether you build this process in-house or use a third-party platform depends on your requirements and available resources. If you have a large budget but not a lot of headcount, leveraging a third party to help run your program can make sense. If it's the other way around, it may be worth investing the time to build your program yourself.

Receiving reports

If you decide to use a third-party platform, it should provide a way for hackers to submit reports directly to you. If you build your program in-house, you'll need to build this yourself. This could be an email address that automatically creates a ticket or bug in your issue tracker (e.g. security@example.com), or it could be a web form with required fields that's either linked from or on the same page as your program policy. Whatever form it takes, this is your best chance to inform hackers of the format you'd like to receive reports in. Keep in mind that asking hackers to submit reports in a certain format doesn't guarantee they will, but it doesn't hurt to ask. Here's an example of what you might ask for in a report submission form:

Title: [Please add a one-line description of the issue, e.g. "XSS in mail.example.com results in session theft"]

Summary: [Please add a brief description of the vulnerability and why it matters, e.g. "Due to a lack of escaping, you can send an email to another user containing an XSS payload that would enable an attacker to steal that user's cookies containing session information. This would allow the attacker to log in to the victim's account."]

Reproduction Steps: [Please add step-by-step instructions on how to reproduce the vulnerability.]
1.
2.
3.

Attack Scenario and Impact: [How could this be exploited? What security impact does this issue have?]

Remediation Advice: [Optionally, if you have any advice on how this issue could be fixed or remediated, add it here.]
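
If you build the intake form in-house, the following is a minimal sketch of what the server side of such a web form might look like. It assumes Flask, uses field names taken from the template above, and leaves the "file a ticket" step as a placeholder since that depends entirely on your own issue tracker.

# Minimal sketch of an in-house report intake endpoint using Flask.
# Field names mirror the submission template above; where the report goes
# (ticket, bug, or email) depends on your own issue tracker.
from flask import Flask, request

app = Flask(__name__)

# Remediation advice is optional, so it isn't listed as required.
REQUIRED_FIELDS = ["title", "summary", "reproduction_steps",
                   "attack_scenario_and_impact"]

@app.route("/report", methods=["POST"])
def submit_report():
    report = {field: request.form.get(field, "").strip()
              for field in REQUIRED_FIELDS}
    report["remediation_advice"] = request.form.get("remediation_advice", "").strip()

    missing = [field for field in REQUIRED_FIELDS if not report[field]]
    if missing:
        return {"error": "Missing required fields: " + ", ".join(missing)}, 400

    # Placeholder: hand the report to your ticketing system here, e.g. create
    # an issue via your tracker's API or forward it to security@example.com.
    print("New vulnerability report received:", report["title"])
    return {"status": "received", "title": report["title"]}, 201

if __name__ == "__main__":
    app.run(port=8000)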