
Use tools to measure performance

There are several core objectives for building a performant, resilient site with low data cost.

For each objective, you need an audit.

Objective: Ensure privacy, security and data integrity, and enable powerful API usage
Why? See Why HTTPS Matters.
What to test for: HTTPS implemented for all site pages/routes and assets.

Objective: Improve load performance
Why? 53% of users abandon sites that take longer than three seconds to load.
What to test for: JavaScript and CSS that could be loaded asynchronously or deferred. Set goals for time to interactive and payload size: for example, TTI on 3G. Set a performance budget.

Objective: Reduce page weight
Why? Reduce data cost for users with capped data plans; reduce web app storage requirements (particularly important for users on low-spec devices); reduce hosting and serving costs; improve serving performance, reliability and resilience.
What to test for: Set a page weight budget: for example, first load under 400 kB. Check for heavy JavaScript. Check file sizes to find bloated images, media, HTML, CSS and JavaScript. Find images that could be lazy loaded, and check for unused code with coverage tools.

Objective: Reduce resource requests
Why? Reduce latency issues; reduce serving costs; improve serving performance, reliability and resilience.
What to test for: Look for excessive or unnecessary requests for any type of resource: for example, files that are loaded repeatedly, JavaScript that is loaded in multiple versions, CSS that is never used, and images that are never viewed (or could be lazy loaded).

Objective: Optimize memory usage
Why? Memory can become the new bottleneck, especially on mobile devices.
What to test for: Use the Chrome Task Manager to compare your site against others for memory usage when loading the home page and using other site features.

Objective: Reduce CPU load
Why? Mobile devices have limited CPU, especially low-spec devices.
What to test for: Check for heavy JavaScript. Find unused JavaScript and CSS with coverage tools. Check for excessive DOM size and scripts that run unnecessarily on first load. Look for JavaScript loaded in multiple versions, or libraries that could be avoided with minor refactoring.
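A page weight budget like the one above can be checked mechanically. A minimal sketch, assuming you've collected per-resource transfer sizes (for example from a HAR export or the Resource Timing API); the 400 kB default mirrors the example budget:

```javascript
// Sum per-resource transfer sizes (in kB) and flag a budget overrun.
function checkPageWeightBudget(sizesKB, budgetKB = 400) {
  const totalKB = sizesKB.reduce((sum, size) => sum + size, 0);
  return { totalKB, overBudget: totalKB > budgetKB };
}
```

A check like this can run in CI, so a first-load regression fails the build instead of shipping.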

There is a wide range of tools and techniques for auditing websites:

  • System tools
  • Built-in browser tools
  • Browser extensions
  • Online test applications
  • Emulation tools
  • Analytics
  • Metrics provided by servers and business systems
  • Screen and video recording
  • Manual tests

Below you'll learn which approach is relevant for each type of audit.

For most web pages, images account for more weight and more requests than any other type of resource.

Latency gets worse as connectivity gets worse, so excessive image requests are an increasing problem as the web goes mobile. Images also consume power: more image requests mean more radio usage and more flat batteries. Even rendering images takes power, in proportion to their size and quantity.

Likewise for memory: because decoded size grows with an image's area, small increases in pixel dimensions result in big increases in memory usage. With images on mobile — especially on low-spec devices — memory can become the new bottleneck. Bloated images are also problematic for users on capped data plans.

Remove redundant images! If you can't get rid of them, optimize: increase compression as much as possible, reduce pixel dimensions, and use the format that gives you the smallest file sizes. Optimizing 'hero images' such as banners and backgrounds is an easy, one-off win.

Record resource requests: number, size, type and timing

A good place to start when auditing a site is to check pages with your browser's network tools. If you're not sure how to do this, work through the Chrome DevTools network panel Get Started Guide. Similar tools are available for Firefox, Safari, Internet Explorer and Edge.

Remember to keep a record of results before you make changes. For network requests, that can be as simple as a screenshot — you can also save profile data as a JSON file. There's more information below about how to save and share test results.

Before you begin auditing network usage, make sure to disable the browser cache to ensure you get accurate statistics for first-load performance. If you already do caching via a service worker, clear Cache API storage. You may want to use an Incognito (Private) window, so that you don't have to worry about disabling the browser cache or removing previously cached entries.
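Clearing Cache API storage can also be scripted from the DevTools console. A minimal sketch, assuming a browser context where the Cache API (`caches`) is available:

```javascript
// Delete every cache this origin has created via the Cache API, so the
// next load behaves like a first visit.
async function clearAllCaches() {
  const keys = await caches.keys();
  await Promise.all(keys.map((key) => caches.delete(key)));
  return keys; // the names of the caches that were removed
}
```

Run it as `await clearAllCaches()` before reloading; unregistering the service worker itself (from the Application panel) is a separate step.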

Here are some core features and metrics you should check with browser tools:

  • Load performance: Lighthouse provides a summary of load metrics. Addy Osmani has written a great summary of key user moments for page load.
  • Timeline events for loading and parsing resources, and memory usage. If you want to go deeper, run memory and JavaScript profiling.
  • Total page weight and number of files.
  • Number and weight of JavaScript files.
  • Any particularly large individual JavaScript files (over, say, 100KB).
  • Unused JavaScript. You can check using the Chrome coverage tool.
  • Total number and weight of image files.
  • Any particularly large individual image files.
  • Image formats: are there PNGs that could be JPEGs or SVGs? Is WebP used with fallbacks?
  • Whether responsive image techniques (such as srcset) are used.
  • HTML file size.
  • Total number and weight of CSS files.
  • Unused CSS. (In Chrome, use the coverage panel.)
  • Check for problematic usage of other assets such as Web Fonts (including icon fonts).
  • Check the DevTools timeline for anything that blocks page load.
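The coverage checks above produce per-script data that is easy to post-process. A sketch, assuming entries shaped like Chrome's coverage output (the script text plus the non-overlapping character ranges that actually executed):

```javascript
// Given a coverage entry ({ text, ranges: [{ start, end }, ...] }), report
// how many bytes of the script never ran.
function unusedBytes(entry) {
  const usedBytes = entry.ranges.reduce(
      (sum, range) => sum + (range.end - range.start), 0);
  return entry.text.length - usedBytes;
}
```

Puppeteer's `page.coverage.startJSCoverage()` / `stopJSCoverage()` can gather data in this shape automatically, which makes the check repeatable in CI.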

If you're working from fast wifi or a fast cellular connection, test with low bandwidth and high latency emulation. Remember to test on mobile as well as desktop — some sites use UA sniffing to deliver different assets and layouts for different devices. You may need to test on actual hardware using remote debugging, not just with device simulation.

You can often use browser tools to spot problems simply by checking network responses and ordering by size.

For example: the 349KB PNG here looked like it could be a problem:

[Image: Chrome DevTools Network panel showing a large file]

Sure enough, it turned out the image was 1600px wide, whereas the maximum display width of the element was only 400px. Decompressed, the image needed over 4MB of memory, which is a lot on a mobile phone.

Resaving the image as an 800px wide JPEG (to cope with 400px display width on 2x screens) and optimizing with ImageOptim resulted in a 17KB file: compare the original PNG with the optimized JPEG.

That's a 95% improvement!
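These figures are easy to sanity-check. Decoded image memory is roughly width × height × 4 bytes (one byte per RGBA channel); the ~700px height below is an assumed value for illustration, since only the width is given:

```javascript
// Approximate decoded (uncompressed) memory for an image:
// width × height × 4 bytes per pixel (RGBA).
function decodedImageBytes(width, height) {
  return width * height * 4;
}

// 1600px wide, height assumed ~700px:
// 1600 × 700 × 4 = 4,480,000 bytes, matching "over 4MB" above.
const decodedMB = decodedImageBytes(1600, 700) / 1e6;

// File-size saving from the 349KB PNG to the 17KB optimized JPEG:
function percentSaved(originalKB, optimizedKB) {
  return Math.round(((originalKB - optimizedKB) / originalKB) * 100);
}
```

Note that the decoded memory cost depends only on pixel dimensions, not on how well the file was compressed — which is why reducing dimensions matters as much as recompressing.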

Check memory and CPU load

Before you make changes, keep a record of memory and CPU usage.

In Chrome you can access the Task Manager from the Window menu. This is a simple way to check a web page's requirements.

[Image: Chrome Task Manager showing memory and CPU usage for the four open browser tabs]
Chrome's Task Manager — watch out for memory and CPU hogs!

Test first and subsequent load performance

Lighthouse, WebPagetest and PageSpeed Insights are useful for analyzing speed, data cost and resource usage. WebPagetest will also check static-content caching, time to first byte, and whether your site makes effective use of CDNs.

Enabling static-content caching is simple: configure your server to include appropriate headers, and browsers can cache assets the first time they're requested.

If a browser can cache resources, it won't need to retrieve them from the network on subsequent visits. This improves load speed, cuts data cost and reduces network and server load — even for browsers that don't support caching via a service worker. Even if you're using the Cache API it's important to enable browser caching.
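What "appropriate headers" look like depends on the asset. A sketch of one common policy — the file-extension list and max-age values here are assumptions, and in practice this logic usually lives in your web server or CDN configuration rather than in application code:

```javascript
// Long-lived, immutable caching for fingerprinted static assets, and
// revalidation for everything else so users pick up new releases promptly.
function cacheControlFor(path) {
  const isStaticAsset = /\.(js|css|png|jpg|jpeg|webp|svg|woff2?)$/.test(path);
  return isStaticAsset ? 'public, max-age=31536000, immutable' : 'no-cache';
}
```

This assumes asset filenames are fingerprinted (e.g. `app.3f2a1c.js`), so a changed file gets a new URL rather than waiting for the old one to expire from caches.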

To find out more, take a look at PageSpeed Tools and the resources on Web Fundamentals (in particular, the 'Invalidating and updating cached responses' section).


Test for core Progressive Web App requirements

Lighthouse helps you test security, functionality, accessibility, performance and search engine performance. In particular, Lighthouse checks if your site successfully implements PWA features such as service workers and a Web App manifest.

Lighthouse also tests whether your site can provide an acceptable offline experience.

You can download a Lighthouse report as JSON or, if you're using the Lighthouse Chrome Extension, share the report as a GitHub Gist: click on the share button, select Open in Viewer, then click on the share button again in the new window and Save as Gist.

[Image: exporting a Chrome Lighthouse report as a gist]
Export a report to a gist from the Lighthouse Chrome Extension — click the share button

Use analytics, event tracking and business metrics to track real-world performance

If you can, keep a record of analytics data before you implement changes: bounce rates, time on page, exit pages, and whatever else is relevant to your business requirements.

If possible, record business and technical metrics that might be affected, so you can compare results after making changes. For example: an e-commerce site might track orders-per-minute or record stats for stress and endurance testing. Back-end storage costs, CPU requirements, serving costs and resilience are likely to improve if you cut page weight and resource requests.

If analytics aren't implemented, now is the time! Business metrics and analytics are the final arbiter of whether or not your site is working. If appropriate, incorporate event tracking for user actions such as button clicks and video plays. You may also want to implement goal flow analysis: the paths by which your users navigate towards 'conversions'.

You can keep an eye on Google Analytics Site Speed to check how performance metrics correlate with business metrics. For example: compare 'how fast did the home page load?' with 'did entry via the home page result in a sale?'

[Image: Google Analytics Site Speed]

Google Analytics uses data from the Navigation Timing API.

You may want to record data using one of the JavaScript performance APIs or your own metrics, for example:

const subscribeBtn = document.querySelector('#subscribe');

subscribeBtn.addEventListener('click', (event) => {
  // Event listener logic goes here...

  // How long after the click fired did this handler actually run?
  const lag = performance.now() - event.timeStamp;
  if (lag > 100) {
    ga('send', 'event', {
      eventCategory: 'Performance Metric',
      eventAction: 'input-latency',
      eventLabel: '#subscribe:click',
      eventValue: Math.round(lag),
      nonInteraction: true,
    });
  }
});
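The Navigation Timing API offers another source of real-user numbers. A sketch — the metric names are illustrative, and `perf` is a parameter only so the function is easy to exercise outside a browser:

```javascript
// Pull a few load metrics from the Navigation Timing Level 2 entry.
// Returns null when no navigation entry is available.
function loadMetrics(perf = performance) {
  const [nav] = perf.getEntriesByType('navigation');
  if (!nav) return null;
  return {
    ttfbMs: nav.responseStart - nav.requestStart,
    domContentLoadedMs: nav.domContentLoadedEventEnd - nav.startTime,
    loadMs: nav.loadEventEnd - nav.startTime,
  };
}
```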

You can also use ReportingObserver to check for browser deprecation and intervention warnings. This is one of many APIs for getting real-world, live measurements from actual users.
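A minimal sketch of that pattern (browser-only; `onReport` is a callback you supply):

```javascript
// Observe deprecation and intervention reports, including any buffered
// from before the observer was created.
function watchBrowserReports(onReport) {
  const observer = new ReportingObserver((reports) => {
    for (const report of reports) {
      onReport(report.type, report.body && report.body.message);
    }
  }, { types: ['deprecation', 'intervention'], buffered: true });
  observer.observe();
  return observer;
}
```

Forwarding these reports to your analytics endpoint gives you early warning of APIs your site uses that browsers are about to remove.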

Real-world experience: screen and video recording

Make a video recording of page load on mobile and desktop. This works even better at a high frame rate, and with a timer display visible in the recording.

You may also want to save screencasts. There are many screencast recording apps for Android, iOS, and desktop platforms (and scripts to do the same).

Video-recording page load works much like the filmstrip view in WebPagetest or Capture Screenshots in Chrome DevTools. You get a real-world record of page component load speed: what's fast and what's slow. Save video recordings and screencasts to compare against later improvements.

A side-by-side before-and-after comparison can be a great way to demonstrate improvements!

What else?

If relevant, get a Web Bloat Score. This is a fun test, but it can also be a compelling way to demonstrate code bloat — or to show you've made improvements.

What Does My Site Cost?, shown below, gives a rough guide to the financial cost of loading your site in different regions.

[Image: whatdoesmysitecost.com]

Many other standalone and online tools are available: take a look at perf.rocks/tools.