About PageSpeed Insights

PageSpeed Insights (PSI) reports on the performance of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved.

PSI provides both lab and field data about a page. Lab data is useful for debugging performance issues, as it is collected in a controlled environment; however, it may not capture real-world bottlenecks. Field data is useful for capturing true, real-world user experience, but has a more limited set of metrics. See How To Think About Speed Tools for more information on the two types of data.

Performance score

At the top of the report, PSI provides a score that summarizes the page’s performance. This score is determined by running Lighthouse to collect and analyze lab data about the page. A score of 90 or above is considered good, 50 to 89 needs improvement, and below 50 is considered poor.
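
For illustration only, these score bands map to a trivial classifier like the sketch below (the function name is ours, not part of PSI):

    def classify_score(score: int) -> str:
        """Map a 0-100 Lighthouse performance score to PSI's label."""
        if score >= 90:
            return "good"
        if score >= 50:
            return "needs improvement"
        return "poor"

    assert classify_score(92) == "good"
    assert classify_score(50) == "needs improvement"
    assert classify_score(49) == "poor"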

Real-World Field Data

When PSI is given a URL, it will look it up in the Chrome User Experience Report (CrUX) dataset. If available, PSI reports the First Contentful Paint (FCP), First Input Delay (FID), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS) metric data for the origin and potentially the specific page URL.
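
As a minimal sketch of how this data can be retrieved programmatically, the public PSI API (v5) returns the field data alongside the lab report; the loadingExperience and originLoadingExperience objects carry the URL-level and origin-level CrUX data respectively (in practice you may also need an API key for more than occasional use):

    import json
    import urllib.parse
    import urllib.request

    API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

    def fetch_psi(url: str) -> dict:
        """Run PSI against a URL and return the parsed JSON response."""
        query = urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(f"{API}?{query}") as resp:
            return json.load(resp)

    report = fetch_psi("https://example.com/")
    page_field = report.get("loadingExperience", {})          # this specific URL
    origin_field = report.get("originLoadingExperience", {})  # whole origin
    print(sorted(page_field.get("metrics", {})))
    # Typically: CUMULATIVE_LAYOUT_SHIFT_SCORE, FIRST_CONTENTFUL_PAINT_MS,
    # FIRST_INPUT_DELAY_MS, LARGEST_CONTENTFUL_PAINT_MS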

Classifying Good, Needs Improvement, Poor

PSI also classifies field data into three buckets, describing experiences deemed good, needs improvement, or poor. PSI sets the following thresholds for good / needs improvement / poor, based on our analysis of the CrUX dataset:

        Good           Needs Improvement   Poor
FCP     [0, 1000ms]    (1000ms, 3000ms]    over 3000ms
FID     [0, 100ms]     (100ms, 300ms]      over 300ms
LCP     [0, 2500ms]    (2500ms, 4000ms]    over 4000ms
CLS     [0, 0.1]       (0.1, 0.25]         over 0.25
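
Since the good and needs-improvement intervals are closed on the right, a sketch of the bucketing logic (threshold values taken from the table above; the time-based metrics are in milliseconds, CLS is unitless):

    THRESHOLDS = {
        "FCP": (1000, 3000),   # ms
        "FID": (100, 300),     # ms
        "LCP": (2500, 4000),   # ms
        "CLS": (0.1, 0.25),    # unitless
    }

    def classify(metric: str, value: float) -> str:
        """Bucket a field metric value per the thresholds above."""
        good, needs_improvement = THRESHOLDS[metric]
        if value <= good:
            return "good"
        if value <= needs_improvement:
            return "needs improvement"
        return "poor"

    assert classify("LCP", 2500) == "good"   # boundaries are inclusive
    assert classify("CLS", 0.3) == "poor"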

Distribution and selected metric values

PSI presents a distribution of these metrics so that developers can understand the range of FCP, FID, LCP, and CLS values for that page or origin. This distribution is also split into three categories: Good, Needs Improvement, and Poor, denoted with green, orange, and red bars. For example, seeing 14% within FCP's orange bar indicates that 14% of all observed FCP values fall between 1000ms and 3000ms. This data represents an aggregate view of all page loads over the previous 30 days.

Above the distribution bars, PSI reports the 75th percentile for all metrics. The 75th percentile is selected so that developers can understand the most frustrating user experiences on their site. These field metric values are classified as good/needs improvement/poor by applying the same thresholds shown above.
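
Reusing the hypothetical fetch_psi helper from the earlier sketch, the per-metric distribution and p75 can be read straight out of the v5 response; each metric object carries a percentile field and a distributions array of buckets with min, max, and proportion:

    def summarize_metric(field_data: dict, key: str) -> None:
        """Print the 75th percentile and the bucket split for one metric."""
        metric = field_data["metrics"][key]
        print(f"{key}: p75 = {metric['percentile']}")
        for bucket in metric["distributions"]:
            # The last bucket is open-ended and omits "max".
            share = round(bucket["proportion"] * 100)
            print(f"  {bucket.get('min', 0)}..{bucket.get('max', 'inf')}: {share}%")

    summarize_metric(page_field, "FIRST_CONTENTFUL_PAINT_MS")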

Core Web Vitals

Core Web Vitals are a common set of signals critical to all web experiences. The Core Web Vitals metrics are FID, LCP, and CLS, with their respective thresholds. A page passes the Core Web Vitals assessment if the 75th percentiles of all three metrics are Good. Otherwise, the page does not pass the assessment.
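
A sketch of the assessment as a client-side check, again reusing page_field from the earlier fetch example. One v5 quirk worth noting: the API reports the CLS percentile scaled by 100, so a good CLS of 0.10 appears as 10:

    # Good-at-p75 thresholds, in the units the v5 API reports.
    CWV_GOOD = {
        "FIRST_INPUT_DELAY_MS": 100,
        "LARGEST_CONTENTFUL_PAINT_MS": 2500,
        "CUMULATIVE_LAYOUT_SHIFT_SCORE": 10,  # CLS of 0.10, scaled by 100
    }

    def passes_cwv(field_data: dict) -> bool:
        """True only if all three Core Web Vitals are Good at p75."""
        metrics = field_data.get("metrics", {})
        return all(
            key in metrics and metrics[key]["percentile"] <= good
            for key, good in CWV_GOOD.items()
        )

    print("Passes Core Web Vitals:", passes_cwv(page_field))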

Differences between Field Data in PSI and CrUX

The field data in PSI differs from the Chrome User Experience Report dataset on BigQuery in that PSI’s data is updated daily and covers the trailing 30-day period, whereas the BigQuery dataset is updated only monthly.

Lab data

PSI uses Lighthouse to analyze the given URL, generating a performance score that estimates the page's performance on different metrics, including: First Contentful Paint, Largest Contentful Paint, Speed Index, Cumulative Layout Shift, Time to Interactive, and Total Blocking Time.
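
These lab metrics live in the lighthouseResult portion of the same v5 response; a sketch of pulling them out (the audit IDs are Lighthouse’s own, and report comes from the earlier fetch sketch):

    LAB_AUDITS = [
        "first-contentful-paint",
        "largest-contentful-paint",
        "speed-index",
        "cumulative-layout-shift",
        "interactive",
        "total-blocking-time",
    ]

    lh = report["lighthouseResult"]
    # The category score is reported on a 0-1 scale.
    print("Performance score:", round(lh["categories"]["performance"]["score"] * 100))
    for audit_id in LAB_AUDITS:
        audit = lh["audits"][audit_id]
        print(f"{audit['title']}: {audit['displayValue']}")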

Each metric is scored and labeled with an icon:

  • Good is indicated with a green check mark
  • Needs Improvement is indicated with an orange informational circle
  • Poor is indicated with a red warning triangle

Audits

Lighthouse separates its audits into three sections:

  • Opportunities provide suggestions on how to improve the page’s performance metrics. Each suggestion in this section estimates how much faster the page will load if the improvement is implemented (see the sketch after this list).
  • Diagnostics provide additional information about how a page adheres to best practices for web development.
  • Passed Audits lists the audits the page has passed.
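
As one way to pull the Opportunities out of the raw report (a sketch reusing report from the earlier fetch example; opportunity audits carry their estimated savings in details.overallSavingsMs):

    lh = report["lighthouseResult"]
    opportunities = [
        a for a in lh["audits"].values()
        if a.get("details", {}).get("type") == "opportunity"
        and a["details"].get("overallSavingsMs", 0) > 0
    ]
    # Largest estimated savings first.
    for audit in sorted(opportunities,
                        key=lambda a: a["details"]["overallSavingsMs"],
                        reverse=True):
        print(f"{audit['title']}: ~{audit['details']['overallSavingsMs']:.0f} ms")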

Frequently asked questions (FAQs)

What device and network conditions does Lighthouse use to simulate a page load?

Currently, Lighthouse simulates a page load on a mid-tier device (Moto G4) on a mobile network.

Why do the field data and lab data sometimes contradict each other?

The field data is a historical report about how a particular URL has performed, and represents anonymized performance data from users in the real-world on a variety of devices and network conditions. The lab data is based on a simulated load of a page on a single device and fixed set of network conditions. As a result, the values may differ.

Why is the 75th percentile chosen for all metrics?

Our goal is to make sure that pages work well for the majority of users. Focusing on the 75th percentile ensures that pages provide a good user experience even under the most difficult device and network conditions.

Why does the FCP in v4 and v5 have different values?

As of November 4, 2019, FCP in v5 reports the 75th percentile; previously it reported the 90th percentile. In v4, FCP reports the median (50th percentile).

Why does the FID in v5 have different values?

As of May 27, 2020, FID in v5 reports the 75th percentile; previously it reported the 95th percentile.

What is a good score for the lab data?

Any green score (90+) is considered good.

Why does the performance score change from run to run? I didn’t change anything on my page!

Variability in performance measurement is introduced via a number of channels with different levels of impact. Several common sources of metric variability are local network availability, client hardware availability, and client resource contention.
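
PSI itself runs once per request, but a common client-side tactic (not a PSI feature) is to take several measurements and use the median score; a sketch reusing the hypothetical fetch_psi helper:

    import statistics

    def median_score(url: str, runs: int = 3) -> float:
        """Median Lighthouse performance score across several PSI runs."""
        scores = [
            fetch_psi(url)["lighthouseResult"]["categories"]["performance"]["score"] * 100
            for _ in range(runs)
        ]
        return statistics.median(scores)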

Why is the real-world Chrome User Experience Report speed data not available for a URL?

The Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that a URL be public (crawlable and indexable) and have a sufficient number of distinct samples to provide a representative, anonymized view of the URL’s performance.

Why is the real-world Chrome User Experience Report speed data not available for an origin?

The Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that an origin’s root page be public (crawlable and indexable) and that the origin have a sufficient number of distinct samples to provide a representative, anonymized view of its performance across all URLs visited on that origin.

More questions?

If you've got a question about using PageSpeed Insights that is specific and answerable, ask your question in English on Stack Overflow.

If you have general feedback or questions about PageSpeed Insights, or you want to start a general discussion, start a thread in the mailing list.
