
A common question I hear, and help answer, at Rigor is "we have all these technologies, how do we get the most out of what we already have?" The answer, for most of my customers, is higher frequency performance monitoring. Even though the technologies and vendors in the web performance industry vary widely, a solid web performance strategy ultimately comes down to three basic elements.

Real User Measurement (RUM) tells you what is going on in the wild, while synthetic monitoring checks pre-production environments, benchmarks you against competitors, and serves as the "laboratory" for hypotheses to improve performance. Lastly, analytics tools like Google and Adobe tie it all back to business metrics. You can't do much to get more data out of RUM or Google Analytics. It's useful to see what users did on your site, but you can't change those results. You can only change your page and wait to see if behavior changes.

The benefit of synthetic monitoring is that you can change the scope of the test itself to collect better data on how your site is performing and where potential issues lie. This post will explore three key benefits of increasing the frequency of synthetic monitoring to improve actionable performance insights.

Benefit 1: Eliminate Regional Outliers

It’s not uncommon for someone to ask me, “Stephen, what is going on with this test? I only have bad load times in Oregon. Everywhere else is within the normal range.” The majority of the time this comes from customers who are only running tests once every 30 minutes, or, even less often, once every 60 minutes. In scenarios like this, it’s entirely within reason for performance anomalies to fall (or appear to fall) on just one or a handful of geographic locations. This can send the user on a wild goose chase looking at CDN issues in the area or irregular network latency on regional ISPs. While these aren’t bad things to look at, at these frequencies you typically gather too few data points per hour to get to the true cause.

High frequency monitoring from relevant regions helps suss out whether or not the outlier is really a regional issue. Often, but not always, it turns out that the issue is not regional and is more related to intermittent latency on the same request or requests across geographies.

In any case, having a greater number of data points per hour clearly indicates whether you should look at region-specific issues or at issues that span geographies and originate in the code itself.
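To make that concrete, here is a minimal sketch of the kind of check that more samples per hour enables. The data shape (one record per synthetic run with a region and a load time in milliseconds) and the thresholds are assumptions for illustration, not Rigor's API or export format.

```python
from collections import defaultdict

# Hypothetical shape: one dict per synthetic run, e.g.
# {"region": "us-west", "load_time_ms": 2140}
def slow_share_by_region(runs, threshold_ms=3000):
    """Fraction of runs per region that exceeded the load-time threshold."""
    slow, total = defaultdict(int), defaultdict(int)
    for run in runs:
        total[run["region"]] += 1
        if run["load_time_ms"] > threshold_ms:
            slow[run["region"]] += 1
    return {region: slow[region] / total[region] for region in total}

def looks_regional(runs, threshold_ms=3000, gap=0.25):
    """Flag a regional problem only when one region's slow-run share clearly
    exceeds every other region's; with one or two runs per hour this
    comparison is mostly noise."""
    shares = slow_share_by_region(runs, threshold_ms)
    if len(shares) < 2:
        return False
    ranked = sorted(shares.values(), reverse=True)
    return ranked[0] - ranked[1] > gap
```

With dozens of runs per hour per region, the slow-run shares are stable enough for this comparison to mean something; at one run per hour, a single unlucky sample can make any region look like the culprit.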

Benefit 2: Eliminate Request-Specific Outliers

High frequency website monitoring can help identify other outliers, too. For instance, outliers that come from specific requests.

Imagine you’re in charge of website performance for a media company. You log in to Rigor on Tuesday morning and notice that the waterfall chart associated with your subscription page looks a little off.

You note this anomaly and go on with your day. When you log in the next morning, you note that the long bar you discovered is gone.

At lower monitoring frequencies, intermittent performance issues can look random or location-bound. This might be what you’re experiencing when you see the fluctuations in your waterfall chart for the subscription page.

By using a high frequency performance monitoring strategy, you can ensure you are collecting enough data points to determine whether or not the same requests are the culprit for bad load times.
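As a rough illustration, assuming a hypothetical export where each run yields a waterfall as a list of entries with a URL and a duration, you could group timings by request across runs and see whether the same URL keeps producing the long bar:

```python
from collections import defaultdict
from statistics import median

def suspect_requests(waterfalls, factor=3.0, min_runs=5):
    """URLs whose worst observed duration is far above their own median.

    `waterfalls` is a list of runs; each run is a list of entries shaped
    like {"url": "...", "duration_ms": 123.0} (a hypothetical format).
    """
    durations = defaultdict(list)
    for run in waterfalls:
        for entry in run:
            durations[entry["url"]].append(entry["duration_ms"])
    return {
        url: {"median_ms": median(samples), "worst_ms": max(samples)}
        for url, samples in durations.items()
        if len(samples) >= min_runs and max(samples) > factor * median(samples)
    }
```

The point of the `min_runs` guard is the same as the point of high frequency monitoring: without enough observations of the same request, you cannot tell a one-off blip from a recurring offender.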

This concept also applies to the performance of third party scripts and pixels on your site. Performance degradations may be tied directly to a third party provider. If you are using low frequency monitoring, you may not collect enough data to prove that the third party was the cause of the issue. This can present problems both internally and externally.

For many companies, third parties can be a significant source of performance issues. Using our earlier example, you may have an initiative to identify which third parties are the worst performance offenders. You may suspect that a specific ad vendor is causing slow load times, but a single spike might not provide enough data. Before convincing other lines of business that this ad provider is damaging user experience, you need sufficient evidence. High frequency performance monitoring can help you collect adequate data to validate your point. This data can be used internally, but it can also be leveraged to hold external vendors accountable to SLAs.
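A similar, hedged sketch, using the same hypothetical waterfall shape as above, can aggregate time by third-party host. This is the kind of evidence that travels well, both to other lines of business and into SLA conversations. The first-party host list below is an assumption you would replace with your own domains.

```python
from collections import defaultdict
from statistics import mean
from urllib.parse import urlparse

FIRST_PARTY_HOSTS = {"www.example.com", "static.example.com"}  # assumed domains

def third_party_time_per_run(waterfalls):
    """Average milliseconds per run attributable to each third-party host."""
    per_host = defaultdict(list)
    for run in waterfalls:
        totals = defaultdict(float)
        for entry in run:
            host = urlparse(entry["url"]).hostname
            if host and host not in FIRST_PARTY_HOSTS:
                totals[host] += entry["duration_ms"]
        for host, total_ms in totals.items():
            per_host[host].append(total_ms)
    return {host: mean(samples) for host, samples in per_host.items()}
```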

Benefit 3: Validate Hypotheses and Ship Better Fixes

One of the best uses for synthetic monitoring is pre-production testing. With the holidays rapidly approaching, many of my ecommerce clients don’t have time to wait for “benchmark” style monitoring to let them know if a performance bug fix is going to do the trick. Getting just one or two data points per hour in pre-prod seriously holds up a CI/CD pipeline.

This is why we see people opting for 60 data points per hour, at least for a brief period. With high frequency monitoring they are able to collect enough run data to validate hypotheses. For instance, say you optimized your JS files, but you want to determine if it made the gains you thought it would. High frequency monitoring can clarify the outcome of this optimization. This gives you the confidence to ship an improved version of your web app to production, or go back to your list of potential optimizations and try for one that might be more impactful.
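As a sketch of what that gate might look like in a pipeline, assuming you can pull the last hour of high frequency run results for the candidate build (the numbers below are illustrative, not real measurements):

```python
from statistics import median

def passes_gate(candidate_load_times_ms, baseline_median_ms, budget=0.95):
    """True if the candidate's median load time beats the baseline by at least 5%."""
    return median(candidate_load_times_ms) <= baseline_median_ms * budget

# Illustrative usage: an hour of one-minute runs against a staging build.
candidate_runs = [2150, 2080, 2230, 1990, 2120, 2060, 2170, 2040]
if passes_gate(candidate_runs, baseline_median_ms=2600):
    print("Ship it: the JS optimization held up under high-frequency testing.")
else:
    print("Back to the list of potential optimizations.")
```

With one or two data points per hour, that median is a coin flip; with sixty, it is a number you can defensibly gate a deploy on.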

Key Takeaways

The main benefit of high frequency web performance monitoring is eliminating outliers. Whether you’re seeing regional or request-specific outliers, a higher test frequency provides the data you need to either pinpoint the problem or better understand its intermittent nature.

From there, a high frequency performance monitoring strategy allows you to quickly tell, either pre- or post-deploy, whether your hypothesis about the fix was accurate. If not, you have the data to take action on a different performance bug; if so, you can get back to doing what you love: building a better experience for your customers.

If you’re worried that higher frequency testing will lead to higher costs, that concern is only valid if your web performance monitoring vendor sells monitoring services on a commodity model. Rigor’s pricing is based on active tests, not the frequency at which they run. As a result, our users can take full advantage of frequent testing.
