Choosing the right metrics for your project

This guide helps organizations understand which kinds of problems better documentation can solve and how to choose appropriate metrics for documentation projects.

State your problem

Before jumping into choosing a metric, make sure you have a good understanding of the problem you're trying to solve. Be as specific as possible.

  • "Pull requests for our onboarding documentation take too long to merge. Contributors give up and go away."
  • "We see too many issues opened for help understanding error codes."
  • "Our CI/CD pipeline is flaky. Too many tests fail for poorly understood reasons."
  • "People seem grumpy in our weekly meetings."

Develop a hypothesis

Look for cause and effect. What could be causing the problem you've stated? Keep in mind that problems can have multiple or overlapping causes.

  • "It takes so long to merge pull requests for onboarding documentation because we don't have clear guidance about style. Reviewers either put off reviewing the PR because they don't know what to do, or they go back and forth with contributors about formatting."
  • "Users have to open issues because they can't find information about error codes in the documentation."
  • "Our CI/CD tests fail because we run into plan limitations and timeouts from our provider."
  • "People are grumpy in our weekly meetings because the meetings are at 5:30 AM in their time zone."

Propose a solution

Is this a problem that could be solved with new or better documentation?

  • "If we had a style guide, committers could check it before submitting their PRs. Reviewers would know what to check for. Reviewers and contributors wouldn't have to argue about formatting, tone, and style."
  • "If we had error code documentation, users could find their answers there, instead of opening issues."
  • "Hmm, it doesn't seem like better documentation would solve our CI/CD problem."
  • "We could start each meeting with a knock-knock joke! Creating a collection of knock-knock jokes would help us start our meetings with a smile."

Get specific

Can you quantify the problem? One way to get a baseline number is sketched after these examples.

  • "What does 'it takes too long to merge PRs' really mean? Two months? Two weeks? How long will contributors wait for review before giving up?"
  • "How many error-code-related issues are 'too many issues'?"
  • "Hmmm … how grumpy is 'too grumpy'?"

Check measurability

How would you check your proposed metric? Can it be measured easily and accurately? Does the measurement depend on who is doing the measuring? (The two measurable checks here are automated in a sketch after these examples.)

  • "We can easily measure how long a pull request has been open, and how long since a review was requested. We can't really measure exactly when a contributor gives up."
  • "We can count how many issues are tagged 'error-code' or search within issues for error code text."
  • "We can't really measure people's grumpiness in a tactful or accurate way."

Add a secondary metric

Are there other metrics that would help you understand whether your documentation is solving your problem? Is your target metric the same in every case? Bucketing by PR size, for example, is sketched below.

  • "Longer PRs take more time to review; we should have different thresholds for different sizes of PRs. We want to measure the time to merge for small, medium, large, and gigantic PRs."
  • "We could check how many visits our error code documentation receives and see if that number correlates with fewer issues opened."

Choose a time period

  • "We think two weeks is a reasonable time to take to merge small-to-medium PRs; and all PRs should be merged within a month. So we'll measure every two weeks."
  • "It doesn't make sense to update the number of error-code-related issues daily, because our typical time to close an issue is one week. We'll measure it weekly."

Set goals

How much change would you need to see in your selected metric to say that the project was a success? Consider setting quantitative goals for the metrics you chose. A sketch after these examples turns such goals into explicit checks.

  • "If we met our goal of closing every new PR in less than a month, that would be a success. If our average time to close large PRs decreased by two weeks, that would be a huge success."
  • "Ideally, we would see no new error-related issues. But we would consider our project successful if we saw a 50% decline in error-related issues opened."