
With hundreds of possible front-end performance optimizations you can make to a site, the number of options can feel overwhelming, which makes it difficult to know where to start. The best strategy is to start small and define your own list of internal performance best practices. That way you can consistently improve your site by following the optimizations you already know you should be making, and then expand your internal best practices over time to include optimizations that make sense for you.

In this post I show you how to define your own internal list of performance best practices and then use Rigor Optimization to ensure that all of your web sites and web applications follow those rules.

Step 1: Define your own internal best practices

I recently worked with a customer, let’s call them BigCorp, to help them define a list of internal best practices. The best way to come up with a list is just by talking to the team: Are we using a CDN? Should we be? How many JavaScript libraries are we loading? What number are we OK with? Often members of your team will have their own personal ideas or guidelines they try to follow. Talking about it as a team is the best way to gather all of these ideas and to decide how bad something needs to be before it is considered a problem.

Below is an abbreviated version of the internal best practices that BigCorp created:

  1. All non-HTML resources must be served via the CDN.
  2. All static resources must be cached for at least 7 days.
  3. No 404 errors, 500 errors, or other quality issues are allowed.
  4. All first party hosts use SSL/TLS and support HTTP/2.
  5. All images are losslessly optimized.
  6. Everything that can be compressed should be compressed.
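A checklist like this can be expressed as simple programmatic checks against each resource's URL, status, and response headers. The sketch below is a hypothetical illustration (the helper names `check_response` and `max_age_seconds` are my own, not Rigor's implementation) covering several of BigCorp's rules:

```python
# Sketch: checking a few of BigCorp's rules against one HTTP response.
# Hypothetical example -- not how Rigor implements its checks.

SEVEN_DAYS = 7 * 24 * 60 * 60  # rule 2: cache for at least 7 days

def max_age_seconds(cache_control):
    """Extract max-age from a Cache-Control header, or 0 if absent."""
    for part in cache_control.split(","):
        part = part.strip()
        if part.startswith("max-age="):
            try:
                return int(part.split("=", 1)[1])
            except ValueError:
                return 0
    return 0

def check_response(url, status, headers, cdn_hosts):
    """Return a list of best-practice violations for one resource."""
    violations = []
    host = url.split("/")[2]
    if host not in cdn_hosts:                        # rule 1: served via CDN
        violations.append("not served via CDN")
    if max_age_seconds(headers.get("Cache-Control", "")) < SEVEN_DAYS:
        violations.append("cached for less than 7 days")  # rule 2
    if status >= 400:                                # rule 3: no 4xx/5xx errors
        violations.append(f"error status {status}")
    if not url.startswith("https://"):               # rule 4: SSL/TLS everywhere
        violations.append("not served over HTTPS")
    encoding = headers.get("Content-Encoding", "")
    if "gzip" not in encoding and "br" not in encoding:   # rule 6: compression
        violations.append("response not compressed")
    return violations
```

A fully cached, compressed, CDN-served HTTPS resource produces an empty violations list; anything else returns a human-readable list of which rules it broke.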

This is a pretty good list. It’s better to start with a small number of very specific rules and work your way out. BigCorp started with core rules about serving resources from a CDN and worked outward to include more traditional optimizations.

Step 2: Create a Custom Defect Policy

Once you have your list of best practices, the next step is to turn it into a defect policy in Rigor Optimization. A defect policy is the set of checks Rigor looks for, as well as the thresholds defining how bad something has to be before Rigor flags an issue.

First, create a new policy by clicking on Defect Policies under Settings. Now, select all checks, and click the Mute button. This gives us a policy with all the problems muted. Rigor won’t flag any issues with your site using this policy.

Next, we unmute the checks that correspond to our best practices. For example, to look for BigCorp’s best practice #1, we unmute the Content Served without a Content Delivery Network check. We do this for all the checks that make sense. In total we ended up with about 25 different checks that covered BigCorp’s best practices, as shown below:

[Screenshot: the policy with just the 25 unmuted checks]

Next, we change the severity of all these checks to Critical. This way we can easily see when Rigor Optimization finds something that is violating our best practices. You can use the filter drop downs to only show the Visible checks and then change the severity for all of them in bulk.

Finally, if any of your best practices have the concept of a threshold, you can adjust the threshold of a Rigor check to match. For example, BigCorp’s #2 best practice is to flag anything that is cached for less than 7 days. So we set a custom threshold for our Resource without far future caching check to match, as shown below:

[Screenshot: changing the check’s threshold]
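If you are translating a day-based rule into a header-based threshold by hand, note that Cache-Control expresses lifetimes in seconds, so 7 days works out to 604,800. A tiny sketch of that comparison (the helper name is hypothetical, for illustration only):

```python
# Matching a "cached for at least N days" rule against a max-age value.
# Cache-Control: max-age is expressed in seconds, so convert days first.
def meets_threshold(max_age_seconds, threshold_days=7):
    """True if the cache lifetime covers the configured threshold."""
    return max_age_seconds >= threshold_days * 24 * 60 * 60

meets_threshold(604800)  # exactly 7 days -> True
meets_threshold(86400)   # only 1 day -> False
```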

Now we have a defect policy that looks only for our internal best practices and flags only when they are bad enough.

Step 3: Enforce Your Best Practices

To test a site against these internal best practices, BigCorp simply runs a test. If any part of a page is violating BigCorp’s best practices, Rigor Optimization flags that as a Critical defect and shows it on the top tab, as shown below:


Here we can see there are actually four different violations of the internal best practices. Specifically, we see that a substantial number of resources aren’t using the CDN! It turns out that secondary pages on BigCorp’s site were loading resources from a different hostname, which wasn’t mapped to a CDN.


By muting all of the checks in Rigor Optimization except those related to best practices, and then marking those checks as critical, it was much easier for BigCorp to see exactly where they were failing to follow their own internal guidelines.

Step 4: Bonus! Automatic Enforcement!

Once you have a system in place to check for violations of best practices, the bonus step is to automate it! You can hook Rigor Optimization into your build or CI system so that new problems are discovered right away.
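As a sketch of what such a build gate might look like, the script below fails a CI job when a scan report contains any Critical defects. The JSON shape here is an assumption for illustration; consult Rigor’s actual API documentation for the real report format:

```python
# Hypothetical CI gate: fail the build when a performance scan report
# contains any Critical defects. The report JSON shape is assumed for
# illustration and is not Rigor's actual format.
import json
import sys

def count_critical(report):
    """Count defects marked Critical in a parsed scan report."""
    return sum(1 for d in report.get("defects", [])
               if d.get("severity") == "Critical")

def gate(report_json):
    """Return an exit code: 0 if clean, 1 if the build should fail."""
    report = json.loads(report_json)
    critical = count_critical(report)
    if critical:
        print(f"FAIL: {critical} best-practice violation(s) found")
        return 1
    print("OK: no best-practice violations")
    return 0

if __name__ == "__main__":
    # e.g.  rigor-scan --json | python gate.py   (pipeline is hypothetical)
    sys.exit(gate(sys.stdin.read()))
```

Because the script exits non-zero on any Critical defect, most CI systems will mark the build as failed automatically, so a best-practice regression blocks the deploy instead of slipping into production.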


Looks like Sam just changed the site and caused 3 new best practice violations! To learn more, read our post on Integrating Performance Analysis into Continuous Deployment.


While there are a large number of possible performance optimizations you can make to a site, organizations tend to evolve their own internal list of performance best practices. After all, Yahoo’s problems are not your problems. Discuss what optimizations are important to you as an organization and define a standard list of best practices. Once you have this, it’s easy to enforce your best practices by integrating something like Rigor Optimization directly into your build or publishing systems, ensuring that all of your sites stay fast as they are created and updated.

Interested in defining an internal set of performance best practices and enforcing them across your company? Then you will love Rigor’s performance monitoring and optimization platform. Our continuous monitoring provides 24/7 visibility into your website’s performance and availability and can identify hundreds of performance defects that are slowing it down. To learn more about Rigor’s web performance products, contact us today.
