With so many different metrics available to measure dozens of different aspects of a web page, it can be a struggle to know how best to quantify that page’s overall web performance. In this post, we discuss why there are so many metrics, explore what “the best” metric is, and show how you can use the Google Lighthouse Score to improve your own performance.
Performance Metrics – Thick as Pea Soup
In the early days of performance monitoring, there were far fewer metrics than we have today. As our understanding of web performance matured, a light mist of metrics has turned into a thick fog. New classes of metrics that measure different aspects of performance have been developed, such as:
- Network metrics, like Time to First Byte, which measure how quickly a site responds to the initial request.
- Browser event metrics, like DOM Interactive or Onload, which measure when the browser has handled loading various resources like CSS or images.
- Paint metrics, like First Meaningful Paint or Visually Complete, which measure when content is drawn to the screen.
- Interactivity metrics, like Time to Interactive or First CPU Idle, which measure when a page is able to respond to user actions.
With so many ways to quantify performance, you may wonder which metric is “the best.” Sadly, no single metric is better than the others across the board. Each metric exists because it provides insight into a unique aspect of your site’s performance. If you care about back-end performance, or about how your site performs for a globally distributed audience, you need to watch Time to First Byte. If you need to track and enforce performance budgets, you need to track content sizes and request counts.
How Do You Know What Is “Best”?
If you are looking at multiple metrics, it can become difficult to compare the overall performance of different parts of your site or compare yourself to competitors. Consider the following example:
Here we have two pages, Page A and Page B, and three metrics that are important to you: First Paint (FP), First Meaningful Paint (FMP), and Time to Interactive (TTI).
- Both A and B start rendering the page at 1 second, so they have the same FP time.
- Page A has a better FMP time (2 seconds) but a slower TTI (5 seconds).
- Page B has a slower FMP time (3 seconds) but a faster TTI (4 seconds).
Which page is “the best” in terms of performance? The answer depends on your end goal. Is FMP more important? If this is a media site that wants to load content quickly so the user can start reading, you could reasonably say Page A is the best. However, the subscriptions department for that same media site might say Page A is terrible because they want people to be able to click the “buy a subscription” call-to-action button. They care about a fast Time to Interactive, so they consider Page B to be the best. Neither view is right or wrong, but the situation can be frustrating.
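One way to make this trade-off concrete is to blend the metrics with weights that reflect what each team cares about. The sketch below uses hypothetical weights (they are not a standard) and the FP/FMP/TTI times from the example above; because the values are load times, a lower blended number wins.

```javascript
// Page A and Page B from the example above; metric values in seconds.
const pages = {
  A: { fp: 1, fmp: 2, tti: 5 },
  B: { fp: 1, fmp: 3, tti: 4 },
};

// Weighted sum of metric times; the weights express a team's priorities.
function blended(metrics, weights) {
  return Object.entries(weights)
    .reduce((sum, [name, w]) => sum + w * metrics[name], 0);
}

// Hypothetical weightings: editorial cares about FMP, subscriptions about TTI.
const editorialWeights = { fp: 0.2, fmp: 0.6, tti: 0.2 };
const subscriptionWeights = { fp: 0.2, fmp: 0.2, tti: 0.6 };

// Lower is better: editorial's weighting favors Page A (2.4 vs 2.8),
// while the subscriptions weighting favors Page B (3.2 vs 3.6).
console.log(blended(pages.A, editorialWeights), blended(pages.B, editorialWeights));
console.log(blended(pages.A, subscriptionWeights), blended(pages.B, subscriptionWeights));
```

Both conclusions are “correct” for their respective goals, which is exactly why a shared, single number is so appealing.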
Often, you need to boil down performance to a single number so you can rank a set of pages by the best overall performance or experience. This number can also be helpful if you are trying to compare different versions of the same page to see if performance is improving, such as when you need to determine if a development version of a page has better overall performance than one in production.
How can you take something as multifaceted as performance and user experience and distill it into a single score that is easy to compare?
Using a Composite Score: Google Lighthouse
One way to solve this problem is to use a composite score, which combines multiple performance metrics into a single number, allowing you to compare overall performance. The most common example of this is the Google Lighthouse Performance score.
Lighthouse’s performance score is a weighted average of five key performance metrics: Time to Interactive, Speed Index, First Contentful Paint, First CPU Idle, and First Meaningful Paint. Lighthouse collects each of these metrics, grades each on a curve relative to other sites from the broader Internet, and then combines those grades using a weighted average.
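The shape of that calculation can be sketched as follows. The weights, thresholds, and measured values below are illustrative only, and the linear grading curve is a simplification; the real Lighthouse implementation grades each metric on a log-normal curve fit to data from the broader web.

```javascript
// Illustrative weights for the five metrics (not Lighthouse's actual values).
const weights = { fcp: 3, speedIndex: 4, fmp: 1, tti: 5, firstCpuIdle: 2 };

// Toy grading curve: 100 at or below `good`, 0 at or above `poor`,
// linear in between. Lighthouse's real curves are log-normal, not linear.
function grade(value, good, poor) {
  if (value <= good) return 100;
  if (value >= poor) return 0;
  return 100 * (poor - value) / (poor - good);
}

// Hypothetical thresholds (ms) and measured values for one page.
const thresholds = {
  fcp: [1000, 4000], speedIndex: [2000, 6000], fmp: [1500, 5000],
  tti: [3000, 8000], firstCpuIdle: [2500, 7000],
};
const measured = { fcp: 1800, speedIndex: 3400, fmp: 2400, tti: 5200, firstCpuIdle: 4100 };

// Grade each metric, then combine the grades with a weighted average.
function performanceScore(values) {
  let total = 0, weightSum = 0;
  for (const [metric, w] of Object.entries(weights)) {
    const [good, poor] = thresholds[metric];
    total += w * grade(values[metric], good, poor);
    weightSum += w;
  }
  return Math.round(total / weightSum); // a single 0-100 number
}

console.log(performanceScore(measured)); // → 64 with these toy inputs
```

Whatever the exact curves, the key idea is the same: many metrics in, one comparable number out.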
The net result of these calculations is a single number, from 0 to 100, that represents the overall performance of a page. This score-as-single-number can be very helpful when you want to:
- Create a list of all your key pages, ranked by best overall performance
- Graph the overall performance of a page over time to determine if it is getting better or worse
The Lighthouse performance score is designed to give you an idea of your site’s performance in relation to all other sites, and it has become an industry-leading way to represent overall performance. However, it is important to remember that the score is a simplified model based on only five metrics and does not take into account any other metrics or User Timings.
For example, many of Rigor’s media customers use User Timings like Time until first ad is displayed or Time until video starts autoplaying. For these companies, the Lighthouse performance score is less useful because those User Timings are more closely aligned to their business goals.
The Lighthouse performance score is simply a helpful way to blend five key metrics into a single number. Your organization may have more complex needs and may need to combine the Lighthouse Score with other metrics for a complete picture.
Lighthouse Performance Score: Now in Rigor
Given the value of distilling overall performance down to a single number, Rigor offers a Performance Score that mirrors the methodology behind the Lighthouse score.
Within the Rigor platform, you can measure, graph, trend, report, and alert on the Performance Score just as you can any of the other 40+ metrics and User Timings Rigor tracks. This additional insight makes it easier to understand and improve the performance and user experiences of your sites.
Want to learn more about how Rigor can help shine a light through the fog of performance metrics? Reach out now for your free trial.