About PageSpeed Insights

PageSpeed Insights (PSI) reports on the performance of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved.

PSI provides both lab and field data about a page. Lab data is useful for debugging performance issues, as it is collected in a controlled environment. However, it may not capture real-world bottlenecks. Field data is useful for capturing true, real-world user experience - but has a more limited set of metrics. See How To Think About Speed Tools for more information on the two types of data.

Real-user experience data

Real-user experience data in PSI is powered by the Chrome User Experience Report (CrUX) dataset. PSI reports real users' First Contentful Paint (FCP), First Input Delay (FID), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS) experiences over the previous 28-day collection period.

In order to show user experience data for a given page, there must be sufficient data for it to be included in the CrUX dataset. A page might not have sufficient data if it has been recently published or has too few samples from real users. When this happens, PSI will fall back to origin-level granularity, which encompasses all user experiences on all pages of the website. Sometimes the origin may also have insufficient data, in which case PSI will be unable to show any real-user experience data.
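
You can see this fallback behavior directly in the PSI v5 API response. The following TypeScript sketch queries the API and reports whether the field data it returned is page-level or an origin-level fallback; the response field names (`loadingExperience`, `origin_fallback`) reflect the v5 response shape as commonly documented, so treat them as assumptions and verify against the current API reference.

```ts
// Minimal sketch: query the PSI v5 API and check the granularity of the
// real-user (CrUX) field data it returns. Field names are assumptions
// based on the v5 response shape; verify against the API reference.

interface PsiResponse {
  loadingExperience?: {
    id: string;                 // URL or origin the field data describes
    origin_fallback?: boolean;  // true when PSI fell back to origin-level data
  };
  originLoadingExperience?: { id: string };
}

async function describeFieldData(url: string, apiKey: string): Promise<string> {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
    `?url=${encodeURIComponent(url)}&key=${apiKey}`;
  const psi = (await (await fetch(endpoint)).json()) as PsiResponse;

  if (!psi.loadingExperience) {
    return 'No real-user experience data for this URL or its origin.';
  }
  return psi.loadingExperience.origin_fallback
    ? `Origin-level field data only (${psi.loadingExperience.id}).`
    : `Page-level field data for ${psi.loadingExperience.id}.`;
}
```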

Assessing quality of experiences

PSI classifies the quality of user experiences into three buckets: Good, Needs Improvement, or Poor. PSI sets the following thresholds in alignment with the Web Vitals initiative:

Metric   Good           Needs Improvement    Poor
FCP      [0, 1800ms]    (1800ms, 3000ms]     over 3000ms
FID      [0, 100ms]     (100ms, 300ms]       over 300ms
LCP      [0, 2500ms]    (2500ms, 4000ms]     over 4000ms
CLS      [0, 0.1]       (0.1, 0.25]          over 0.25
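
For illustration, the bucketing in the table above can be expressed in a few lines of code. This sketch applies those thresholds; the `classify` function and `THRESHOLDS` names are hypothetical, not part of PSI itself.

```ts
// Sketch of the Good / Needs Improvement / Poor bucketing described above.
// FCP, FID, and LCP values are in milliseconds; CLS is unitless.

type Bucket = 'good' | 'needs-improvement' | 'poor';

// Upper bounds of the "Good" and "Needs Improvement" ranges per metric.
const THRESHOLDS: Record<string, [number, number]> = {
  FCP: [1800, 3000],
  FID: [100, 300],
  LCP: [2500, 4000],
  CLS: [0.1, 0.25],
};

function classify(metric: keyof typeof THRESHOLDS, value: number): Bucket {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}

// classify('LCP', 2400) === 'good'; classify('LCP', 4100) === 'poor'
```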

Distribution and selected metric values

PSI presents a distribution of these metrics so that developers can understand the range of experiences for that page or origin. This distribution is split into three categories: Good, Needs Improvement, and Poor, which are represented by green, amber, and red bars. For example, seeing 11% within LCP's amber bar indicates that 11% of all observed LCP values fall between 2500ms and 4000ms.

Screenshot of the distribution of real-user LCP experiences

Above the distribution bars, PSI reports the 75th percentile for all metrics. The 75th percentile is selected so that developers can understand the most frustrating user experiences on their site. These field metric values are classified as good/needs improvement/poor by applying the same thresholds shown above.
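
PSI derives these figures from the CrUX dataset, but the arithmetic behind the bars and the reported 75th percentile is straightforward. The sketch below only illustrates that arithmetic on a hypothetical array of observed LCP values; it is not how PSI itself computes them.

```ts
// Illustration of the distribution-bar shares and the 75th percentile,
// computed from a hypothetical list of observed LCP values in ms.

function percentile(values: number[], p: number): number {
  // Nearest-rank percentile over the sorted samples.
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

function lcpDistribution(lcpValues: number[]) {
  const total = lcpValues.length;
  const good = lcpValues.filter((v) => v <= 2500).length;
  const needsImprovement = lcpValues.filter((v) => v > 2500 && v <= 4000).length;
  const poor = total - good - needsImprovement;
  return {
    good: good / total,                         // green bar share
    needsImprovement: needsImprovement / total, // amber bar share
    poor: poor / total,                         // red bar share
    p75: percentile(lcpValues, 75),             // value reported above the bars
  };
}
```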

Core Web Vitals

Core Web Vitals are a common set of performance signals critical to all web experiences. The Core Web Vitals metrics are FID, LCP, and CLS, and they may be aggregated at either the page or origin level. For aggregations with sufficient data in all three metrics, the aggregation passes the Core Web Vitals assessment if the 75th percentiles of all three metrics are Good. Otherwise, the aggregation does not pass the assessment. If the aggregation has insufficient data for FID, it will pass the assessment if the 75th percentiles of both LCP and CLS are Good. If either LCP or CLS has insufficient data, the page or origin-level aggregation cannot be assessed.
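
The pass/fail logic above can be summarized in a short sketch. The function and type names here are our own; each input is the 75th-percentile rating for that metric, or undefined when there is insufficient data for it.

```ts
// Sketch of the Core Web Vitals assessment rules described above.

type Rating = 'good' | 'needs-improvement' | 'poor';
type Assessment = 'passed' | 'failed' | 'not-assessable';

function assessCoreWebVitals(
  lcp?: Rating,
  cls?: Rating,
  fid?: Rating,
): Assessment {
  // Without LCP or CLS data the aggregation cannot be assessed at all.
  if (lcp === undefined || cls === undefined) return 'not-assessable';

  // With FID data, all three 75th percentiles must be Good to pass.
  if (fid !== undefined) {
    return lcp === 'good' && cls === 'good' && fid === 'good'
      ? 'passed'
      : 'failed';
  }

  // Without FID data, LCP and CLS alone decide the assessment.
  return lcp === 'good' && cls === 'good' ? 'passed' : 'failed';
}
```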

Differences between Field Data in PSI and CrUX

The difference between the field data in PSI and the CrUX dataset on BigQuery is that PSI's data is updated daily, while the BigQuery dataset is updated monthly and is limited to origin-level data. Both data sources represent trailing 28-day periods.

Performance diagnostics

PSI uses Lighthouse to analyze the given URL, generating a performance score that estimates the page's performance on different metrics, including: First Contentful Paint, Largest Contentful Paint, Speed Index, Cumulative Layout Shift, Time to Interactive, and Total Blocking Time.

Each metric is scored and labeled with an icon:

  • Good is indicated with a green circle
  • Needs Improvement is indicated with an amber informational square
  • Poor is indicated with a red warning triangle

Performance score

At the top of the section, PSI provides a score that summarizes the page’s simulated performance. This score is determined by running Lighthouse to collect and analyze diagnostic information about the page. A score of 90 or above is considered good, 50 to 89 needs improvement, and below 50 is considered poor.
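
As a small sketch, mapping a Lighthouse performance score onto those labels looks like this (the function name is hypothetical):

```ts
// Sketch of the score-to-label mapping described above (0-100 scale).
function scoreLabel(score: number): 'good' | 'needs improvement' | 'poor' {
  if (score >= 90) return 'good';
  if (score >= 50) return 'needs improvement';
  return 'poor';
}
```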

Audits

Lighthouse separates its audits into three sections:

  • Opportunities provide suggestions on how to improve the page’s performance metrics. Each suggestion in this section estimates how much faster the page will load if the improvement is implemented.
  • Diagnostics provide additional information about how a page adheres to best practices for web development.
  • Passed Audits lists the audits that the page has passed.

Frequently asked questions (FAQs)

What device and network conditions does Lighthouse use to simulate a page load?

Currently, Lighthouse simulates the page load conditions of a mid-tier device (Moto G4) on a mobile network for mobile, and an emulated desktop with a wired connection for desktop. PageSpeed Insights also runs from a Google datacenter, which can vary depending on network conditions; you can check the location the test was run from by looking at the Lighthouse report's environment block:

Screenshot of the throttling information tooltip.

Note: PageSpeed will report running in one of: North America, Europe, or Asia.

Why do the field data and lab data sometimes contradict each other?

The field data is a historical report about how a particular URL has performed, and represents anonymized performance data from users in the real world on a variety of devices and network conditions. The lab data is based on a simulated load of a page on a single device and a fixed set of network conditions. As a result, the values may differ. See Why lab and field data can be different (and what to do about it) for more info.

Why is the 75th percentile chosen for all metrics?

Our goal is to make sure that pages work well for the majority of users. Focusing on the 75th percentile values for our metrics ensures that pages provide a good user experience under the most difficult device and network conditions. See Defining the Core Web Vitals metrics thresholds for more info.

Why does the FCP in v4 and v5 have different values?

FCP in v5 reports the 75th percentile (as of November 4th, 2019); previously it reported the 90th percentile. In v4, FCP reports the median (50th percentile).

Why does the FID in v5 have different values?

FID reports the 75th percentile (as of May 27th, 2020); previously it reported the 95th percentile.

What is a good score for the lab data?

Any green score (90+) is considered good, but note that having good lab data does not necessarily mean real-user experiences will also be good.

Why does the performance score change from run to run? I didn’t change anything on my page!

Variability in performance measurement is introduced via a number of channels with different levels of impact. Several common sources of metric variability are local network availability, client hardware availability, and client resource contention.

Why is the real-user CrUX data not available for a URL or origin?

The Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that a URL be public (crawlable and indexable) and have a sufficient number of distinct samples that provide a representative, anonymized view of the performance of the URL or origin.

More questions?

If you've got a question about using PageSpeed Insights that is specific and answerable, ask your question in English on Stack Overflow.

If you have general feedback or questions about PageSpeed Insights, or you want to start a general discussion, start a thread in the mailing list.

If you have general questions about the Web Vitals metrics, start a thread in the web-vitals-feedback discussion group.
