The scores that you see at the top of your Lighthouse report represent the page's score for that particular category. This guide explains how Lighthouse calculates those scores.
The Performance score
Lighthouse returns a Performance score between 0 and 100. 0 is the lowest possible score; it usually indicates an error in Lighthouse, so if you see a 0 score repeatedly, please file a bug on the Lighthouse repo. 100 is the best possible score. Typically, a score above 90 represents the top 5 percent of pages.
Which Performance audits contribute to your score
In general, only the audits in the Metrics section of the Performance category contribute to your score. See Scoring Details for the complete list. The audits under Diagnostics and Opportunities do not contribute to your Performance score.
How each Performance audit is scored
Each Performance audit that contributes to your score has its own scoring methodology. Lighthouse maps each audit's raw metric value to a score between 0 and 100. The scoring curve is a log-normal distribution derived from the performance metrics of real websites.
For example, the First Meaningful Paint (FMP) audit measures when a user perceives that the primary content of a page is visible. The raw score for FMP represents the time duration between the user initiating the page load and the page rendering its primary content. Based on real website data, top-performing sites render FMP in about 1,223 ms, so that raw score is mapped to a Lighthouse score of 99.
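This kind of mapping can be sketched as the complementary CDF of a log-normal curve: very fast metric values score near 100, the median scores 50, and slow values approach 0. The `median_ms` and `shape` values below are illustrative control points, not Lighthouse's actual per-metric constants.

```python
import math

def lognormal_score(value_ms, median_ms, shape=0.6):
    # Complementary CDF of a log-normal distribution.
    # `median_ms` and `shape` are illustrative control points,
    # not Lighthouse's actual per-metric constants.
    standardized = (math.log(value_ms) - math.log(median_ms)) / (shape * math.sqrt(2))
    return round(100 * (1 - math.erf(standardized)) / 2)
```

With a hypothetical median of 4,000 ms, a raw value equal to the median maps to a score of 50, while much faster values map close to 100.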
How the Performance score is weighted
The audits that contribute to the Performance score are not equally weighted. See Scoring Details to see how each Performance audit is weighted. The heavier-weighted audits have a larger impact on your overall Performance score. The weightings are based on heuristics. The Lighthouse team is working on formalizing this approach through more field data.
The overall Performance score is a weighted average of these audits. See Weighted Averages Without Percentages to learn how weighted averages work. See Scoring Calculator to experiment with how getting different scores in each audit affects your overall Performance score.
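The weighted average described above can be sketched as follows; the audit names, scores, and weights here are hypothetical, chosen only to show the arithmetic.

```python
def weighted_score(scores, weights):
    # Weighted average: each audit's score times its weight,
    # divided by the total weight.
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in weights) / total_weight

# Hypothetical audit scores and weights, for illustration only:
weights = {"first-meaningful-paint": 5, "speed-index": 4, "interactive": 5}
scores = {"first-meaningful-paint": 99, "speed-index": 80, "interactive": 70}
overall = weighted_score(scores, weights)
```

In this example the heavier-weighted audits (weight 5) pull the overall score toward their values more strongly than the weight-4 audit does.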
How the Performance score is color-coded
The color-coding maps to these Performance score ranges:
- 0 to 44 (poor): Red
- 45 to 74 (average): Orange
- 75 to 100 (good): Green
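The ranges above amount to a simple threshold lookup, which can be sketched as:

```python
def score_color(score):
    # Thresholds taken from the ranges listed above.
    if score <= 44:
        return "red"
    if score <= 74:
        return "orange"
    return "green"
```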
How to reduce fluctuations in your Performance score
When running Lighthouse on real sites, some variability in the Performance score is to be expected. On each visit, a site may load different ads or scripts, and network conditions may vary.
Anti-virus scanners, extensions, and other programs that interfere with page load can cause large variations. Run Lighthouse without these programs to get more consistent results. Consider running Lighthouse from a continuous integration system, or from a hosted service such as WebPageTest.
The Progressive Web App score
Lighthouse returns a Progressive Web App (PWA) score between 0 and 100. 0 is the worst possible score, and 100 is the best.
The PWA audits are based on the Baseline PWA Checklist, which lists 14 requirements. Lighthouse has automated audits for 11 of the 14 requirements. The remaining 3 can only be tested manually. Each of the 11 automated PWA audits is weighted equally, so each one contributes approximately 9 points to your PWA score.
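The equal weighting means the PWA score is effectively the fraction of automated audits passed, which can be sketched as:

```python
AUTOMATED_PWA_AUDITS = 11

def pwa_score(audits_passed):
    # Each automated audit is worth 100 / 11, about 9 points.
    return round(100 * audits_passed / AUTOMATED_PWA_AUDITS)
```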
The Accessibility score
The Accessibility score is a weighted average of all the accessibility audits. See Scoring Details for a full list of how each audit is weighted. The heavier-weighted audits have a bigger impact on your score.
Each accessibility audit is pass or fail. Unlike the Performance audits, a page doesn't get points for partially passing an accessibility audit. For example, if some elements have screenreader-friendly names, but others don't, that page gets a 0 for the screenreader-friendly-names audit.
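Combining these pass/fail results into a weighted average can be sketched like this; the audit IDs and weights are illustrative, not Lighthouse's actual values.

```python
def accessibility_score(results, weights):
    # results maps audit id -> True (pass) or False (fail).
    # A failed audit earns none of its weight, no matter how
    # close the page came to passing.
    total = sum(weights.values())
    earned = sum(weights[audit] for audit, passed in results.items() if passed)
    return round(100 * earned / total)

# Hypothetical audit IDs and weights, for illustration only:
weights = {"image-alt": 10, "label": 10, "color-contrast": 3}
results = {"image-alt": True, "label": True, "color-contrast": False}
```

Here failing the lighter-weighted `color-contrast` audit costs only its 3-point share of the total, but the page still gets zero credit for it.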
The Best Practices score
Lighthouse returns a Best Practices score between 0 and 100. 0 is the worst possible score, and 100 is the best.
The Best Practices audits are equally weighted. To calculate how much each audit contributes to your overall Best Practices score, count the number of Best Practices audits, then divide 100 by that number.
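That per-audit contribution can be sketched as a one-line division; the audit count of 16 below is hypothetical, so check your report for the actual number.

```python
def points_per_audit(audit_count):
    # With equal weighting, each audit contributes 100 / N points.
    return 100 / audit_count

# With a hypothetical 16 Best Practices audits, each is worth 6.25 points:
contribution = points_per_audit(16)
```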