How fast should your website be in 2020?
We are often asked the question of how fast a website should be.
That is a great question, and we are happy to answer it. The answer is actually quite complicated, but in this post we will give you concrete numbers you can use to evaluate your own website, along with some background knowledge you can put to practical use.
Before we get into the concrete numbers, we want you to keep in mind that performance should be seen in relation to the business. As a rule of thumb, a faster website means better business. Don’t just optimise the performance for the sake of better performance, optimise your website’s performance to create a better business.
The easiest way to gauge if a performance optimisation is worth it is to compare your website to your competitors. And if you want to stand out on performance, you should set a goal to be 20% faster than your fastest competitor.
But let's get to it: here are the 10 most important website performance metrics for 2020.
Top 10 Web Performance Metrics
| # | Metric | Target | How to measure* |
| --- | --- | --- | --- |
| 1 | Largest Contentful Paint | 2.5 s | **RUM**, Lab |
| 2 | Server response time | 200 ms | **RUM**, Lab |
| 3 | Total Page Weight | 3 MB | **Lab** |
| 4 | Total Blocking Time | 300 ms | **Lab** |
| 5 | Cumulative Layout Shift | 0.1 | **RUM**, Lab |
| 6 | Total Number of Requests | 75 | **Lab** |
| 7 | Number of Third-party Requests | 40 | **Lab** |
| 8 | Speed Index | 4 s | **Lab** |
| 9 | First Input Delay | 100 ms | **RUM** |
| 10 | First Contentful Paint | 1 s | **RUM**, Lab |
| Bonus | Lighthouse Performance Score | 95 | **Lab** |
RUM = Real User Monitoring (aka. field data)
Lab = Laboratory testing and not real users (aka. Synthetic testing)
Bold = recommended measuring technique
Which metric is the most important is one of the trickier parts of web performance optimisation. It turns out that the right answer is highly contextual and depends on what the immediate symptoms are, where in the lifecycle of the development process you are, and what kind of data you have available.
We have ordered the list so that you learn the most from the first two or three points, with the later points adding more and more detail to your performance optimisation work.
1. Largest Contentful Paint
The Largest Contentful Paint (LCP) is the most important metric for how fast your website loads. If you only track one metric, this is the one.
The recommended target for LCP is 2.5 seconds for the 75th percentile.
The LCP is part of Google’s Core Web Vitals and is a user-centric performance metric. User-centric performance metrics aim to quantify the user experience of the website’s performance. The LCP measures the time from when the user requests the page until the most important visual element (text or media) is presented to the user.
Understanding how you measure the LCP is crucial when you interpret the result. The recommended target of 2.5 seconds should be what the majority of your users experience. We recommend that you follow Google’s definition and define the 2.5-second goal for the 75th percentile.
The recommended strategy for measuring the LCP is to monitor and collect the performance measurements from your actual users using a Real User Monitoring (RUM) tool.
We want to note that the LCP is a relatively new performance metric and is currently only available in Chrome-based browsers.
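In Chromium-based browsers, the LCP can be observed with a `PerformanceObserver`. Below is a minimal sketch that also shows how collected RUM samples might be checked against the 2.5-second target at the 75th percentile; the sample values and the backend wiring are hypothetical:

```javascript
// 75th-percentile helper (nearest-rank) for RUM samples in milliseconds.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// In the browser: record the latest LCP candidate as it is reported.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1];
    // In a real setup you would send latest.startTime to your RUM backend.
    console.log('LCP candidate (ms):', latest.startTime);
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}

// Hypothetical RUM samples evaluated against the target:
const lcpSamples = [1800, 2100, 2300, 2400, 4000];
console.log(percentile(lcpSamples, 75) <= 2500); // true (p75 = 2400 ms)
```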
2. Server response time
A low server response time is a prerequisite for all other metrics. If the content is not delivered to the user in a timely fashion, none of the other optimisations stand a chance.
The recommended target for server response time is 200 milliseconds for the 75th percentile.
The server response time, also known as time to first byte (TTFB), is perhaps the oldest metric in the field of web performance. It is simply a measurement of how fast the page is served to the user, but many factors come into play when measuring and optimising server response times.
The most important factor for server response time is usually the web application serving the pages, often the CMS, but the geographical location of the users can also affect server response time quite considerably.
Our recommended target of 200 ms is based on the definition of a “fast” website used by the 2019 HTTP Archive Web Almanac. The data showed that only 2% of all websites have a server response time of less than 200 ms. We understand that this is an aggressive target, but since the server response time is the foundation that all else depends upon, this is not where you can afford to relax.
The recommended strategy for measuring the server response time is to monitor and collect the performance measurements from your actual users using a Real User Monitoring (RUM) tool. Having RUM data from the field is especially important for server response time, due to many factors that affect this for the specific user.
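In supporting browsers, the TTFB of the current page can be read from the Navigation Timing API. A minimal sketch, with a small helper that mirrors the 200 ms target:

```javascript
// Classify a TTFB measurement against the 200 ms target.
function classifyTtfb(ttfbMs) {
  return ttfbMs <= 200 ? 'fast' : 'needs improvement';
}

// In the browser: responseStart is the time of the first byte,
// measured from the start of the navigation.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const [nav] = performance.getEntriesByType('navigation');
  if (nav) {
    console.log(`TTFB: ${Math.round(nav.responseStart)} ms (${classifyTtfb(nav.responseStart)})`);
  }
}
```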
3. Total Page Weight
This is a metric that simply measures how much data the user needs to download when viewing your page. The total page weight is a very good indicator of the performance of a web page.
The recommended maximum for total page weight is 3 MB.
There is a strong correlation between the amount of data a user must download and how fast the page loads, but the actual impact on performance depends on the user’s type of connection and the capabilities of the user’s device. This is a metric where you could choose to have different goals for mobile and desktop users.
The recommendation of a single maximum of 3 MB is based on two data points. Firstly, the HTTP Archive states that the median desktop webpage weighs 2,062.4 KB, while the median mobile webpage is about 10 % lighter at 1,891.6 KB. Secondly, Google Lighthouse’s upper limit is 5 MB.
The simplest way to evaluate the total page weight is to use the developer tools in the Chrome browser, where the Network panel shows how much data is transferred. But we recommend that you use a performance monitoring service that regularly visits your website to continuously monitor the page weight.
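If you want a number instead of eyeballing the Network panel, the Resource Timing API exposes the over-the-wire size of the document and every subresource. A minimal sketch (`transferSize` is the compressed size in bytes):

```javascript
// Sum the compressed transfer sizes (bytes) of a list of timing entries.
function totalTransferred(entries) {
  return entries.reduce((sum, e) => sum + (e.transferSize || 0), 0);
}

// In the browser: the document itself plus every subresource.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const entries = [
    ...performance.getEntriesByType('navigation'),
    ...performance.getEntriesByType('resource'),
  ];
  if (entries.length > 0) {
    const mb = totalTransferred(entries) / (1024 * 1024);
    console.log(`Total page weight: ${mb.toFixed(2)} MB (target: 3 MB)`);
  }
}
```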
4. Total Blocking Time
The Total Blocking Time (TBT) measures how long the main thread was blocked during loading and thereby unable to react to user input.
The recommended target for the Total Blocking Time is 300 milliseconds.
Our recommendation of aiming for 300 ms is based on Google’s Lighthouse definition of a “fast” TBT.
This metric can only be measured in the lab, for example with Lighthouse via Google’s PageSpeed Insights. We recommend that you use a performance monitoring service that regularly visits your website to continuously monitor the Total Blocking Time.
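The mechanics behind the TBT are simple enough to sketch: every long main-thread task (over 50 ms) contributes its excess over 50 ms to the total. In a real measurement, only tasks between First Contentful Paint and Time to Interactive are counted:

```javascript
// Sum the blocking portion (everything beyond 50 ms) of each long task.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, duration) => sum + Math.max(0, duration - 50),
    0
  );
}

// Tasks of 120 ms, 90 ms and 40 ms block for 70 + 40 + 0 = 110 ms,
// which is comfortably under the 300 ms target.
console.log(totalBlockingTime([120, 90, 40])); // 110
```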
5. Cumulative Layout Shift
The Cumulative Layout Shift (CLS) measures how visually stable a webpage is while it loads. On an unstable page, the various elements move around as the page loads.
The recommended target for Cumulative Layout Shift is 0.1 for the 75th percentile.
Who hasn’t tried to tap or click on a menu item or button, only to have it move on the screen just before you hit it? The CLS is a relatively new metric (also part of Google’s Core Web Vitals) that measures how often the user’s experience is affected by elements moving around on the screen.
The goal is to have a CLS score of 0 (i.e. all elements stay where they are at all times), but our recommendation follows Google’s with 0.1 or below as ‘good’. You should not worry about how the score is calculated for now, but you can see a visualisation of CLS on your website using this CLS Gif creator.
As with all user-centric performance metrics, the recommended strategy for measuring the CLS is to monitor and collect the performance measurements from your actual users using a Real User Monitoring (RUM) tool.
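For the curious, the 2020 CLS score is a running sum of `layout-shift` entry values, excluding shifts that happen right after user input. A minimal sketch:

```javascript
// Sum layout-shift values, ignoring shifts caused by recent user input.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// In the browser: accumulate the score as entries are reported.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('layout-shift')) {
  let cls = 0;
  new PerformanceObserver((list) => {
    cls += cumulativeLayoutShift(list.getEntries());
    console.log('CLS so far:', cls.toFixed(3));
  }).observe({ type: 'layout-shift', buffered: true });
}
```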
6. Total Number of Requests
This is a metric that simply measures how many requests a user needs to perform when viewing your page. The total number of requests is a good indicator of the performance of a web page, especially for discovering the performance regressions that accumulate from day-to-day work.
The recommended target for the total number of requests is 75.
To be honest, the number of requests doesn’t necessarily determine the performance of the webpage, but we have found a high correlation between them. Even though it is a rough proxy metric, we like it because it is both simple to evaluate and easy to communicate. Everyone understands that the more work you ask the browser to do, the slower the experience will be.
The recommended maximum is derived from years of experience combined with the HTTP Archive’s statistic showing that the median web page makes 68 requests on mobile (October 2020). With 75, you are in the middle of the pack; below 60 is good, but when the number gets above 95 requests, it becomes critical.
The simplest way to evaluate the total number of requests is to use the developer tools in the Chrome browser.
But we recommend that you use a performance monitoring service that regularly visits your website to continuously monitor the number of requests.
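The thresholds above can be expressed as a simple budget check, and in the browser the Resource Timing API gives you the count (the +1 accounts for the HTML document itself):

```javascript
// Budget check based on the thresholds discussed above:
// below 60 is good, above 95 is critical.
function requestBudget(count) {
  if (count < 60) return 'good';
  if (count <= 95) return 'average';
  return 'critical';
}

// In the browser: count the document plus every subresource.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const total = performance.getEntriesByType('resource').length + 1;
  console.log(`Requests: ${total} (${requestBudget(total)})`);
}
```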
7. Number of Third-party Requests
Another request-based metric is the number of third-party requests. Third-party requests come from the various scripts used for analytics and marketing tools. Third-party scripts are typically easy to install, but restraint must be shown, as they are like empty calories and can weigh down your website.
The recommended maximum of third-party requests: 40.
Third-party scripts – such as analytics, marketing tools, or chats – are a double-edged sword; on one hand, they provide an easy way to include functionality, but on the other, you give up control over page weight and the number of requests. We often see a single third-party tool cause more than 10 new requests and pull in large legacy frontend libraries.
According to the latest 2019 Web Almanac, 49 % of all requests are third-party, and the trend is towards more third-party scripts. This makes sense, as the usual advice is not to re-invent the wheel. Nonetheless, we recommend that less than 50 % of your requests come from third parties.
To quickly evaluate your own website, we recommend using Google’s PageSpeed Insights, which runs a check for third-party code.
But we recommend that you use a performance monitoring service that regularly visits your website to continuously monitor the number of third-party requests.
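A rough way to count third-party requests yourself is to compare each request’s hostname with the page’s own hostname. Note that this simple sketch also counts your own subdomains (such as a CDN on a subdomain) as third-party; the URLs below are hypothetical:

```javascript
// Count requests whose hostname differs from the page's hostname.
function countThirdParty(urls, pageHostname) {
  return urls.filter((url) => new URL(url).hostname !== pageHostname).length;
}

// Hypothetical request list for a page on www.example.com:
const requestUrls = [
  'https://www.example.com/app.js',
  'https://www.googletagmanager.com/gtm.js',
  'https://fonts.googleapis.com/css2?family=Inter',
];
console.log(countThirdParty(requestUrls, 'www.example.com')); // 2
```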
9. First Input Delay
First Input Delay (FID) is also part of the Core Web Vitals of 2020 and measures how quickly the web page reacts when the user first interacts with it.
The recommended target for the First Input Delay is 100 ms for the 75th percentile.
Our recommended target is based on Google’s Core Web Vitals assessment criteria.
As the First Input Delay requires a user to actually do something, this metric can only be measured with Real User Monitoring. With the Chrome User Experience Report data, you can see the FID data from your own site. A theoretical FID is measured by Google’s Lighthouse as Max Potential First Input Delay.
The recommended strategy for measuring the FID is to monitor and collect the performance measurements from your actual users using a Real User Monitoring (RUM) tool.
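In the field, the FID can be captured with a `first-input` PerformanceObserver: the delay is the gap between the moment the user interacted and the moment the browser could begin handling the event. A minimal sketch:

```javascript
// The delay between the user's input and the start of event processing.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// In the browser: report the FID once the first interaction happens.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('first-input')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // In a real setup you would send this to your RUM backend;
      // target: 100 ms at the 75th percentile.
      console.log('FID (ms):', firstInputDelay(entry));
    }
  }).observe({ type: 'first-input', buffered: true });
}
```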
10. First Contentful Paint
The First Contentful Paint metric answers the question “when is the first content displayed?”
The recommended target for the First Contentful Paint is 1 second for the 75th percentile.
It’s a critical point in the loading process when a page goes from “nothing” to “something”. When there is more than 1 second between the user performing an action and the website providing feedback, the sense of flow is lost.
The recommended strategy for measuring the FCP is to monitor and collect the performance measurements from your actual users using a Real User Monitoring (RUM) tool. For now, FCP is only available in Chrome-based browsers, but the metric is under development to also be reported from Safari browsers (macOS and iOS).
Bonus: Lighthouse Performance Score
One of the most popular metrics of website performance is the Lighthouse Performance Score. This is the one number that scores your website performance from 0-100 when you use Google’s PageSpeed Insights.
The recommended target for the Lighthouse Performance Score is 95.
The Lighthouse Performance Score is a weighted composite of the following metrics (weights in parentheses):
- FCP – First Contentful Paint (15%)
- SI – Speed Index (15%)
- LCP – Largest Contentful Paint (25%)
- TTI – Time to Interactive (15%)
- TBT – Total Blocking Time (25%)
- CLS – Cumulative Layout Shift (5%)
Each metric scores from 0-100 on the “Lighthouse scoring distribution”. A score of 50 is derived from the 25th percentile as collected by the HTTP Archive.
Our recommendation of a score of 95 is because the Lighthouse scoring distribution is log-normal. This means that you will reach a point of diminishing returns; according to the documentation, “taking a score from 99 to 100 needs about the same amount of metric improvement that would take a 90 to 94”.
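To make the weighting concrete, here is a sketch of how the individual 0-100 metric scores combine into the overall score; the per-metric scores in the example are hypothetical:

```javascript
// Lighthouse v6 metric weights, as listed above.
const WEIGHTS = { fcp: 0.15, si: 0.15, lcp: 0.25, tti: 0.15, tbt: 0.25, cls: 0.05 };

// Weighted average of the individual 0-100 metric scores.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total);
}

// Hypothetical per-metric scores:
console.log(performanceScore({ fcp: 98, si: 95, lcp: 92, tti: 97, tbt: 90, cls: 100 })); // 94
```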
We recommend that you use a performance monitoring service that regularly runs a Lighthouse test on your website to continuously monitor the Lighthouse Performance Score.
Web performance metrics are a complex topic, and rarely does one size fit all. This top 10 is our current 2020 answer that you can use as a pretty good guide. Reach out to us at Enterspeed to learn more about how your website performs and what it means to your business.