What is application performance monitoring?
Before we dive into application performance monitoring (APM), it’s important to establish that APM is part of a broader market of products that monitor software performance. This market began with tools for identifying the bottlenecks associated with the various components of an application’s infrastructure.
Over the last two decades of the Web, the evolution of infrastructure and content has greatly increased the need to test and monitor the performance of everything involved in delivering website content and application services. As the hardware that supports the Web has become more powerful and robust, the content and the systems that deliver it have grown more complex. That complexity touches every aspect of software, including operating systems, application servers, third-party utilities, and the “middleware” that glues them together.
At its core, the APM market consists of tools that track the efficiency of the servers, networks, software, and storage that make up modern applications. The goal of these tools is to identify any issues that arise when data is transferred or computed within any of these systems.
Many APM tools measure the response times of server-based applications and pinpoint issues that need to be resolved. These tools fall into two primary groups: back-end monitoring and front-end monitoring.
Back-end monitoring focuses on databases, web server software (along with the HTTP and SSL/TLS traffic it handles), and application code written in languages such as PHP, Ruby, or Java. Its main emphases include the performance of database software, third-party APIs, internal API services, and the other pieces of code that ensure information gets from the server to the user and vice versa.
The purpose of front-end monitoring is to ensure that all user-facing elements are present and performant when a user is active on a site. These include the layout of the web pages being downloaded, the network that carries information to and from users (mobile or terrestrial broadband), the location of the user (rural or urban), their browser (Chrome, Firefox, Safari, Internet Explorer), and so on.
What should I be monitoring?
Let’s look at a common use of APM in monitoring the performance of a website.
In the past, a website was made up of a collection of files that rarely changed and were hosted on a single web server. As sites grew and needed to keep up with visitor demand, more servers were added. Troubleshooting became a priority whenever performance dropped below a certain threshold.
If a static website had problems, it was probably the web server software that needed a few tweaks. If that didn’t work, maybe there were images that needed to be optimized or some messy HTML code. If tweaking those few things didn’t yield better performance, you could always check and upgrade the hardware (disks, CPU, motherboard, etc.) of the server itself.
As websites got more complex with user accounts, logins, user-initiated file uploads, and other advanced features, the need arose to serve different content to each user. Along came the content management system (CMS), which changes the content depending on which user is logged into the website and which pages they access.
With a CMS you have all kinds of moving parts to monitor and troubleshoot, a stark contrast to the old static web servers. These moving parts are compounded if everything runs “in the cloud”. However, a few things stayed unchanged: there are still files and a web server, of course, but there is also a database, an application server, a caching server, a load balancer, a cloud services system, third-party software integrations, and a number of other pieces involved in producing the final page that your users see.
What problems do different monitoring techniques solve?
Back-end monitoring is most useful when seeking to resolve bugs in your code, software bottlenecks, hardware issues such as disk or CPU failures, or system issues such as the OS or security layers.
Overall, the use of back-end monitoring, and the use of application performance monitoring specifically, solves the problem of pinpointing areas in your physical servers, storage, software, network, cloud servers, and third-party applications that are limiting efficiency and speed. With these tools you first establish a baseline of efficiency for various system components and then track overall performance in relation to the baseline.
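The baseline idea described above can be sketched in a few lines. This is a minimal illustration, not a description of any particular APM product: the sample values are invented, and the three-standard-deviation threshold is an arbitrary choice for demonstration.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize historical response times (in ms) as mean and standard deviation."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a measurement that deviates more than `threshold` std devs from baseline."""
    avg, sd = baseline
    return abs(value - avg) > threshold * sd

# Hypothetical historical response times for a database query, in milliseconds
history = [120, 118, 125, 130, 122, 119, 127, 124]
baseline = build_baseline(history)

print(is_anomalous(123, baseline))  # a typical reading
print(is_anomalous(450, baseline))  # a clear regression
```

Real APM tools do far more (per-component baselines, seasonality, alert routing), but the core loop is the same: record normal behavior first, then compare live measurements against it.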
Front-end monitoring is most useful when seeking to resolve problems with how the web page reaches the user: how many requests it takes to complete a user action, how many bytes are downloaded per request, issues of mobile or responsive design, and problems related to the browser being used, the network, or even the location of each user.
For many organizations, it makes sense to emphasize front-end monitoring tools over back-end monitoring. This is because in most cases the majority of the time a user spends on your website is waiting for the frontend to complete its loading tasks rather than waiting on the back-end to complete its tasks. Steve Souders, one of the recognized pioneers in web performance, has produced an illustrative summary with examples of major websites, of 10,000 top-ranking websites, and of startup websites. All the examples show that “80-90% of the end-user response time is spent on the frontend.” He terms this the Performance Golden Rule.
Now that we understand some of the fundamentals of application performance monitoring and how it is segmented, we’ll dive deeper into two specific forms of front-end performance monitoring: synthetic monitoring and real user monitoring.
Synthetic Monitoring (Active Monitoring)
[Image: Rigor.com Real Browser performance test history]
What does synthetic monitoring do?
Synthetic monitoring allows you to test and measure the experience of your web application by simulating traffic with set test variables (network, browser, location, device).
As a type of front-end monitoring, synthetic monitoring provides the finished view of the performance of your web application from the perspective of an end user and encompasses all third-party content.
Vendors provide remote (often global) infrastructure that visits a website periodically and records the performance data for each run. The measured traffic is not of your actual users, it is traffic synthetically generated to collect data on page performance.
Behavioral scripts (or paths) are created to simulate an action or path that a customer or end-user would take on a site. Those paths are then continuously monitored at specified intervals for performance, such as: functionality, availability, and response time measures.
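A single synthetic test run can be reduced to a very small sketch, assuming a plain HTTP GET stands in for a scripted user path. A real synthetic monitoring service would run scripted, multi-step paths from many locations and browsers on a schedule; here, for the sake of a self-contained example, the target is a throwaway local server rather than a production URL.

```python
import http.server
import threading
import time
import urllib.request

def run_check(url, timeout=10):
    """One synthetic test run: fetch the page, record availability and response time."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except OSError:  # covers connection errors and timeouts
        ok = False
    return {"available": ok, "response_ms": (time.perf_counter() - start) * 1000}

# Stand-in for the monitored site: a local server on an ephemeral port
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

result = run_check(f"http://127.0.0.1:{server.server_port}/")
print(result)
server.shutdown()
```

Scheduling this check at a fixed interval and alerting when `available` is false or `response_ms` exceeds a baseline is, in miniature, what synthetic monitoring vendors operate at global scale.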
Why should I use synthetic monitoring?
When it comes to alerting on performance problems or major service disruptions, testing pre-production environments, or baselining performance, synthetic monitoring is the undisputed way to go. For synthetic monitoring to be valuable, you must understand and monitor all of your business-critical web pages, services, and transactions. Its value lies in enabling a webmaster to identify problems and determine whether a website or web application is slow or experiencing downtime before those problems affect actual end users or customers. Because synthetic monitoring simulates typical user behavior or navigation through a website, it is best used to monitor commonly trafficked paths and critical business processes.
What problems does synthetic monitoring solve?
As I’ve noted elsewhere, there are various benefits to synthetic monitoring. You can be alerted whenever there are performance problems or major service disruptions. Testing in pre-production environments becomes more viable than releasing a public beta to gauge how well the site will perform at scale, and a performance baseline is easy to establish. The external nature of synthetic monitoring makes it an excellent tool for benchmarking the performance of competitors. Synthetic monitoring is also great for A/B testing and for isolating the performance impact of various third-party technologies.
However, there are problems that synthetic monitoring doesn’t tend to solve. It can be cost-prohibitive to gain visibility across your entire site, since each page or user flow requires its own group of synthetic tests to be built and managed. Synthetic monitoring also does not give you insight into how fast your site should be, which leads nicely into the next section.
Real User Monitoring (Passive Monitoring)
[Image: SOASTA mPulse monitoring]
What does real user monitoring do?
Real user monitoring (RUM) passively collects performance data from the browsers of actual visitors as they use your site, rather than from simulated traffic. Every real page view becomes a measurement, reflecting the user’s actual device, browser, network, and location.
Why should I use real user monitoring?
RUM is valuable because, unlike synthetic monitoring, it captures the performance of actual users of your website or web application regardless of their devices, browsers, or geography. In this sense, it is great for the business’s understanding of performance. As users move through the application, all of the performance timings are captured, so no matter which pages they see, performance data will be available.
This is particularly important for large sites or complex apps, where the functionality or content is constantly changing. Monitoring actual user interactions for a website or an application is important to operators to determine if users are being served quickly and without errors and, if not, which part of a business process is failing.
Of course, what this translates to is a financial return on investment. By measuring and monitoring the impact of performance, you can determine the impact on revenue. Using real user monitoring tools can help you correlate the drivers of revenue, such as specific web campaigns, in real time. You’ll know what content should be optimized and by how much. Using RUM to inform your synthetic monitoring practices allows you to more efficiently allocate budget for testing business-critical pages and user flows, while still getting the granularity of reporting offered by synthetic tools.
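Because a handful of slow sessions can skew an average, RUM data is typically summarized in percentiles rather than means. The following is a hypothetical sketch of aggregating user timing beacons; the field names and values are invented for illustration and do not reflect any particular RUM product’s data model.

```python
from statistics import quantiles

# Hypothetical page-load beacons (ms) reported by real users' browsers
beacons = [
    {"page": "/checkout", "load_ms": 850},
    {"page": "/checkout", "load_ms": 920},
    {"page": "/checkout", "load_ms": 4100},  # one slow mobile session
    {"page": "/home", "load_ms": 600},
    {"page": "/home", "load_ms": 640},
]

def summarize(beacons, page):
    """Median and 95th-percentile load time for one page."""
    times = sorted(b["load_ms"] for b in beacons if b["page"] == page)
    cuts = quantiles(times, n=100, method="inclusive")
    return {"median_ms": cuts[49], "p95_ms": cuts[94]}

print(summarize(beacons, "/checkout"))
```

Note how the one slow session barely moves the median but dominates the 95th percentile; that gap between the typical and the worst-case experience is exactly what RUM reporting is meant to surface.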
What problems does real user monitoring solve?
Real user monitoring solves the problem of otherwise not knowing how users interact with your website or web application. It captures performance data for actual users across multiple devices for large sites and complex apps. Reporting and trend analysis solves the problem of presenting data to non-technical stakeholders, including the geographic and channel distribution trends of users.
As mentioned above, synthetic monitoring and real user monitoring aren’t mutually exclusive technologies. There are benefits to using RUM alongside synthetic monitoring, chief among them that combining both gives you a better representation of the user experience. You might use RUM to identify where on your site you should make improvements, then leverage synthetic tools to baseline, track, and diagnose the performance bottlenecks of those pages over time to measure and report on improvements. In the long run you get the best of both when you use both together, as this Gilt case study shows.
Deploying Performance Tools
Deploying a combination of monitoring tools will create the ideal system for monitoring and maintaining a culture of web performance excellence. That being said, it is rare and time-consuming (not to mention expensive) to use tools that test the entire application stack and all of the underlying system components.
We recommend first focusing on where the majority of the latency lies. For many SaaS and on-premise applications this means investing first in backend APM tools that can help you reduce latency at the server code and database levels. However, for most websites and consumer facing web applications like e-retail sites, the majority of the latency (80-90%) is on the frontend. These organizations should invest in a combination of front-end monitoring tools such as synthetic monitoring and RUM. These tools have relatively simple implementations (particularly synthetic monitoring), offer a general baseline of performance from the perspective of the end-user, and can provide immediate clarity in the event of minor or severe performance problems and service disruptions.
For those of you who don’t have commercial performance budgets there are a variety of web tools that leverage many of the techniques discussed in this article. We recommend:
- WebPageTest.org – free synthetic monitoring testing
- Zoompf Performance Report – web performance optimization tool
- Free Tools by Rigor Labs – variety of free performance tools built by @TeamRigor
- YSlow – the original web performance optimization tool
- Google PageSpeed – developer tools by Google
- Boomerang – open source RUM library