
Something’s changing. We’re seeing a shift from traditional availability monitoring to web performance monitoring for user experience. In the past we used data primarily to answer questions like, “Is our site up?” and “How fast is our server?” We’re still answering those questions, and we’re also encountering new questions like:

“How long does it take for this page to render on mobile?”

“Where is the bottleneck in our product search and checkout process?”

“How can we leverage web performance data to create better experiences for users and customers?”

As companies fight for human attention on the web, it’s more important than ever to consider speed and performance in user experience design. At some point latency doesn’t just disrupt the user’s experience; it hurts the bottom line. Those charming pre-loader animations only go so far.


Leverage Performance Monitoring for UX

In the field of UX (user experience) research, we often go straight to real users in their environments to gather data. This is great for understanding nuanced factors like intent or reactions to design aesthetics that might not be evident from analytics alone. But what if we need data on speed? Our real users and customers arrive at our sites and web apps from different devices, on different networks with different bandwidths, and with different traits that can affect their tolerance for slowness. Can we measure the impact of web performance on UX without conducting formal, controlled lab experiments on ideal human subjects?

Absolutely.

One benefit of synthetic, external web performance monitoring – the type of monitoring traditionally used to report on availability – is that there are built-in constraints that eliminate noise in data results. We can leverage controlled, automated systems to simulate user behaviors and test the impact of speed and performance.
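
To make that concrete, here is a minimal sketch of a scripted synthetic check, assuming Playwright driving Chromium (your monitoring provider will have its own scripting interface, and the device profile and throttling numbers are illustrative). Because every run uses the same device, network conditions, and browser, run-to-run differences reflect the site rather than the tester:

```ts
// Minimal sketch of a synthetic check: hold device, network, and browser
// constant so that run-to-run differences reflect the site, not the tester.
// Playwright, the device profile, and the throttling numbers are all
// illustrative assumptions, not a specific vendor's scripting API.
import { chromium, devices } from "playwright";

async function timedVisit(url: string): Promise<number> {
  const browser = await chromium.launch();
  // Same device profile on every run
  const context = await browser.newContext({ ...devices["Pixel 5"] });
  const page = await context.newPage();

  // Same network conditions on every run (Chrome DevTools Protocol)
  const cdp = await context.newCDPSession(page);
  await cdp.send("Network.emulateNetworkConditions", {
    offline: false,
    latency: 150, // ms of added round-trip time
    downloadThroughput: (1.6 * 1024 * 1024) / 8, // ~1.6 Mbps down
    uploadThroughput: (750 * 1024) / 8, // ~750 Kbps up
  });

  const start = Date.now();
  await page.goto(url, { waitUntil: "load" });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

timedVisit("https://www.example.com/").then((ms) =>
  console.log(`Load took ${ms} ms under fixed conditions`)
);
```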

User-centric web performance monitoring provides specialized data that we can use to drive design and development decisions, and putting together a strategy doesn’t have to be daunting.


Identify Key User Flows

First, identify key user flows to monitor for performance. User flows represent tasks or paths that users complete on your website or in your web app. Not sure where to start? Here are some ideas:

  1. Look to web analytics for patterns in real user data and write user flows to match common paths. Maybe we can see that most visitors come to a specific page from a tracked referring URL shared on social media. The key user flow is: Start on social media > click tracked URL and follow redirect > load page > wait for content to load
  2. Write user flows that match business goals. If our marketing team has a goal to increase registrations to our service from a landing page then the key user flow might be: Start on landing page URL > complete registration form > submit
  3. Discover common problems reported by users or customers. Seeing an increase in complaints to customer service that the shopping cart doesn’t work on mobile? Write a test from a mobile viewport to: Search for a product > add to cart > complete a transaction (see the sketch after this list)
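
As a rough illustration of the third idea, here is how that mobile shopping cart flow might be scripted. Playwright, the URLs, and the selectors (shop.example.com, button#add-to-cart, and so on) are placeholders; a real check would use your site’s own selectors and a test account for the purchase step:

```ts
// Hypothetical end-to-end check for a mobile shopping cart flow:
// search for a product > add to cart > complete a transaction.
// URLs and selectors below are placeholders for illustration only.
import { chromium, devices } from "playwright";

async function shoppingCartFlow(): Promise<void> {
  const browser = await chromium.launch();
  const context = await browser.newContext({ ...devices["Pixel 5"] }); // mobile viewport
  const page = await context.newPage();

  // Step 1: search for a product
  await page.goto("https://shop.example.com/");
  await page.fill("input[name=q]", "running shoes");
  await page.press("input[name=q]", "Enter");

  // Step 2: add the first result to the cart
  await page.click(".product-card >> nth=0");
  await page.click("button#add-to-cart");

  // Step 3: complete a transaction with a test account
  await page.goto("https://shop.example.com/checkout");
  await page.click("button#place-order");
  await page.waitForSelector("text=Order confirmed", { timeout: 10_000 });

  await browser.close();
}

shoppingCartFlow().catch((err) => {
  console.error("Shopping cart flow failed:", err); // counts against the flow's uptime
  process.exit(1);
});
```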

Note: Rigor does not charge multiplier fees for additional steps in transactions. We can run a Real Browser check on a key user flow from start to finish for the same price as running a Real Browser check on a single URL. This allows us to focus on replicating the end user’s experience instead of cutting out steps to stay in our budget. Not all web performance monitoring solutions have this pricing model, so you may want to double-check with your provider.

Also note: Because synthetic external monitoring doesn’t require any installation or JavaScript to run on a page, we don’t necessarily have to limit our monitoring to sites that we own and manage. This opens us up to monitoring some real user paths that we might not be able to capture with RUM (real user monitoring).

[Image: user flow by page load]

Define Measurable Performance Goals

Once we’ve identified key user paths to monitor, we should set some goals around these paths that relate directly to the user experience and define what metrics we’ll use to measure success. For example, we could set a goal for page content to render in less than 2 seconds on mobile and measure the DOMContentLoaded event for a page over time. Or, we could say that our shopping cart experience should be available at all times and rely on the ‘uptime’ of that key user flow test to report on any incidents where the shopping cart flow is interrupted by poor performance.
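
The first of those goals could be checked with something like the sketch below, which loads a page under a mobile device profile and reads the DOMContentLoaded timing from the browser’s Navigation Timing API. The 2-second budget matches the example above; the URL and device profile are placeholder assumptions:

```ts
// Sketch of checking one measurable goal: DOMContentLoaded under 2 seconds
// on a mobile profile. The budget, URL, and device are assumptions.
import { chromium, devices } from "playwright";

const BUDGET_MS = 2000;

async function checkDomContentLoaded(url: string): Promise<void> {
  const browser = await chromium.launch();
  const context = await browser.newContext({ ...devices["Pixel 5"] });
  const page = await context.newPage();
  await page.goto(url, { waitUntil: "domcontentloaded" });

  // Read the DOMContentLoaded timing straight from the Performance API
  const dcl = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    return nav.domContentLoadedEventEnd; // ms since navigation start
  });

  console.log(`DOMContentLoaded: ${Math.round(dcl)} ms (budget: ${BUDGET_MS} ms)`);
  if (dcl > BUDGET_MS) {
    console.error("Performance goal missed");
    process.exitCode = 1; // surface as a failed check so it can trigger an alert
  }

  await browser.close();
}

checkDomContentLoaded("https://www.example.com/");
```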

Now that we’ve thoughtfully designed our tests and defined measurable goals, we can use our automated monitoring service to collect actionable data about how performance impacts user experience.



This post belongs to a series. Up next: How to Write Tests to Monitor User Flows and Improve User Experience with Web Performance Data.
