“I use a CDN that does this performance stuff automatically. Why do I need to worry about monitoring and optimizing my site myself?” It’s a fair question and one we hear often. Presumably users invest in these systems to automatically address some of the “lowest hanging fruit” types of performance bugs. More power to them!
However, as a rule, no automated system fully eliminates the need for flesh-and-blood humans. When was the last time you completed a transaction at your favorite big-box store's self-checkout without hearing "An attendant has been notified to assist you"? Automation is only as powerful as what it enables your team to do.
A CDN that can handle some of the most common bugs is great, provided none of them slip through the cracks and your team can maintain a high level of productivity on their core projects. But it doesn't prevent the worst-case scenario: finding out that a CDN or other system isn't living up to its promises because users are tweeting about an outage. That example alone makes the case for proactive monitoring: identify a problem before your users do and publicly degrade your brand. There are plenty of other good reasons why monitoring matters regardless of the automation other systems promise, but the central one is this: if you're paying for a system to do a job, you need to know that it's doing that job faithfully. If you're paying for it, monitor it. This post explores why monitoring and optimizing for end-user performance is critical to maximizing the ROI of a CDN.
Briefly: What is a CDN?
Considering that Content Delivery Network usage is widespread on the modern web, this will be brief. (Check out Wikipedia for more.) In the olden days (when I was writing my first HTML), a website was served from a single server, no matter where the end-user was requesting it from. Network latency and the limits of the speed of light meant that users geographically farther from that server waited longer for the site to load. Enter the Content Delivery Network (CDN). With a CDN, the website is uploaded to an origin and then distributed to edge servers around the country or around the globe, cutting down on network latency and the time it takes for information to travel over the wire. Content closer to your end-user = faster experience. Done deal.
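One quick way to see an edge node at work is to inspect response headers for a cache-status hint. The sketch below is a minimal, illustrative helper; the header names checked (`X-Cache`, `CF-Cache-Status`) are common vendor conventions, not a universal standard, and real CDNs vary.

```python
# Hypothetical helper: guess from response headers whether a request was
# answered by a CDN edge cache rather than the origin. Header names are
# vendor-specific; the two checked here are assumptions based on common
# conventions ("X-Cache" on many CDNs, "CF-Cache-Status" on Cloudflare).

def served_from_edge_cache(headers):
    """Return True if the headers suggest a CDN edge-cache hit."""
    normalized = {k.lower(): v.lower() for k, v in headers.items()}
    for name in ("x-cache", "cf-cache-status"):
        if "hit" in normalized.get(name, ""):
            return True
    return False

# Headers as they might appear on a cached edge response vs. an origin fetch:
print(served_from_edge_cache({"X-Cache": "HIT from edge-dfw"}))  # True
print(served_from_edge_cache({"X-Cache": "MISS from origin"}))   # False
```

In practice you would feed this the headers from a real request (for example, the output of `curl -sI` or a `requests` response), but the parsing logic is the same.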
Modern CDNs are beginning to do something much cooler than just moving content closer to the user. Services like Akamai's Ion actually work to adapt your content to the user as they request it. This includes not only responsive design for mobile sites, but also image compression and script minification. These are some of those "lowest hanging web perf fruit" items I mentioned above. Ultimately, systems like these, which automate the optimization of these bugs, are a good thing.
That said, a CDN is not a magic bullet for performance.
You Still Need to Monitor and Optimize Even with a CDN
Location Specific Issues
The very nature of a CDN is one of the most critical reasons for monitoring. Because there's no longer just one server blinking happily along in your basement running your site, it's impossible to understand user experience in different geographies without monitoring. With a CDN, it's entirely possible for the origin server to be in good health while one of the network's edge nodes is down. It's also possible for network performance in a densely populated region like California to be robust while performance in rural areas is spotty. CDNs changed the name of the game.
Real Rigor data showing failures tied to a single location, the result of a system outage that affected only users in the Dallas, TX region.
If you only know response times in one region, you may incorrectly assume that performance is good everywhere, thus neglecting a wide swath of your users. Monitoring over a stable connection from each region where your users are based is the best way to understand if you’re getting what you paid for with a CDN.
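The logic above can be sketched in a few lines: given response-time samples collected by probes in each region, flag any region whose median is far above the fleet-wide median. The region names, sample values, and the 2x threshold are illustrative assumptions, not Rigor's actual methodology.

```python
# Minimal sketch of region-aware monitoring. Assumes you already have
# response-time samples (in ms) per region; flags regions whose median
# response time exceeds a multiple of the overall median.
from statistics import median

def flag_slow_regions(samples, factor=2.0):
    """samples: dict mapping region name -> list of response times (ms).
    Returns a sorted list of regions slower than factor * overall median."""
    overall = median(t for times in samples.values() for t in times)
    return sorted(
        region for region, times in samples.items()
        if median(times) > factor * overall
    )

samples = {
    "us-east": [110, 120, 115],
    "us-west": [130, 125, 140],
    "dallas":  [900, 1200, 1100],  # e.g. an edge-node outage
}
print(flag_slow_regions(samples))  # ['dallas']
```

With single-region monitoring, the healthy `us-east` numbers would be all you saw; comparing regions against each other is what surfaces the Dallas outage.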
Your monitoring needs to reach each region of your CDN to make the most of it. #webperf Click To Tweet
Garbage in Garbage Out
It's the old computing adage: you get out what you put in. This is a critical point when it comes to getting the biggest ROI on your CDN spend. Ultimately, moving content closer to the end-user isn't all that meaningful if the content itself is unoptimized. This is a bit like putting a turbo in a '93 Honda Accord. Sure, the horsepower will go up a few ticks, but it's still not a Tesla.
My late, beloved ‘93 Accord, which may or may not have featured ‘wishful’ performance enhancements
Again, systems like Akamai Ion (and dozens of others) work to head off this problem by taking dynamic action on whatever content you provide them in order to create the best experience. But what if human error led to these services being misconfigured? Worse yet, an outage at your vendor of choice could have an even bigger impact.
Finally, if you are relying on a service that isn't dynamically reconfiguring your content, you may end up spending more than you have to if you're requiring the CDN to host oversized images, unminified scripts, and so on. The ability to validate in staging that you have the most performant configuration for your site before you ship it to the CDN is key. This way, even if you have a service like Ion, you can rest assured that your site is optimal even if something doesn't work as intended.

Pro tip: Even if your #CDN has automated #webperf features, ship the most optimized content possible. Click To Tweet
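A staging-time validation pass might look something like the sketch below: flag scripts that don't look minified and assets over a size budget before they ever reach the CDN. The line-length heuristic and the byte budget are illustrative assumptions; real build pipelines use dedicated tools for this.

```python
# Rough pre-ship checks, as might run in CI against a staging build.
# Both thresholds are illustrative assumptions, not recommended values.

def looks_minified(js_source, min_avg_line_length=200):
    """Heuristic: minified JS tends to pack code into very long lines."""
    lines = [ln for ln in js_source.splitlines() if ln.strip()]
    if not lines:
        return True
    avg = sum(len(ln) for ln in lines) / len(lines)
    return avg >= min_avg_line_length

def oversized(asset_bytes, budget_bytes=200_000):
    """Flag any asset larger than a per-file byte budget."""
    return len(asset_bytes) > budget_bytes

pretty = "function add(a, b) {\n    return a + b;\n}\n"
packed = "function add(a,b){return a+b}" + ";x=1" * 60  # one long line
print(looks_minified(pretty))  # False -> fail the build, minify first
print(looks_minified(packed))  # True
```

Checks like these cost seconds in CI and mean that, even if an automated optimization layer misfires, the content you handed the CDN was already in good shape.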
Conclusion: It’s all about the Benjamins
If you're paying for a CDN, it should work. Plain and simple. There is no way to ensure that without monitoring performance. Building the case to maintain your CDN spend next fiscal year will probably require proving a tangible ROI to your executives. There is no way to prove that without monitoring performance, and you maximize that ROI by shipping the best site possible to the CDN.
Ultimately, a CDN or any other system you have in place is a form of automation meant to improve performance for your users and improve productivity for your team. Automation is good, but it’s meant to improve human decisions and actions, not replace them. One of the smart people I follow on Twitter thinks of automation like this and I couldn’t agree more:
Automation is meant to improve productivity and decrease human errors, it’s not meant to be “set and forget”…
— Ryan Kulla (@rkulla) September 3, 2017
Your CDN, while a great component of an overall performance strategy, is not a magic bullet. You need to stay proactive about managing site performance and user experience. If you want to see how Rigor can help you do that, get in touch with a member of our team.