Category Archives: Web Performance

Is speed becoming a commodity? UX is dead. Long Live UX!

#include <std_disclaimer.h>

The other day I tweeted about how, as I learn more, my definition of User Experience (UX) continues to expand. For many years, as part of end-user monitoring businesses, I spoke about UX as if it depended entirely on what I have come to call Feelsfast.

Feelsfast is the mythical metric that measures how long it takes for a user to feel like your page has loaded. This is a basic requirement, a need that sits low on the hierarchy of needs. Without Feelsfast, the rest of UX doesn’t matter because no one will use your website or mobile application.

UX = Feelsfast is dead. Long live UX!

My newer, more cultivated view of user experience goes something like…

UX = Feelsfast + Usability + MarketingPs

What do I mean when I say Usability?
It’s the ease of use, learnability and enjoyment that can come from a digital interaction. This is the stuff of the Carnegie Mellon Human-Computer Interaction model that Forrester frequently discusses; Aberdeen presents a refined interpretation below:

[Image: Aberdeen Research interpretation of Andrew’s CX hierarchy]

BUT… design, style and grace are akin to store design, layout and optimization in the retail world. In the SaaS and application world, though, they are part of the Product itself, because the user interface IS the product.

Usability still leaves out something more essential: the substance.
The MarketingPs are the substance. This is the traditional Price, Promotion, Product, Placement stuff that we all learned in our Principles of Marketing class back in college. This is what people are buying.

The amazing opportunity for businesses moving forward is to leverage the latter two in my UX equation to reimagine the customer journey and more effectively engage users at every touchpoint of the customer experience.

Digital, as a virtual experience, supports endless experimentation, promoting a culture of innovation that allows businesses to create real competitive advantages. Combine experimentation with big-data-powered multivariate analytics, and our systems can create a safety net that lets our employees take risks without risk!

So where does that leave Feelsfast?
We can’t deliver the rest of UX without it! And yet it is really the stuff of infrastructure. It’s the rails our train of digital experience rides on.

For modern, well-designed software applications, we are getting to the point where you should just be able to drop more coins into the computing machine to buy more speed.

Is SPEED becoming a commodity?
We are not quite there yet, but that’s where we are going :)

Exploring the methods of end-user experience monitoring for APM

#include <std_disclaimer.h>

Today’s application performance management (APM) marketplace is maturing, and the best solutions bring a wide-ranging set of capabilities for measuring performance and understanding behavior across many aspects of the application delivery stack. One of the cornerstones of APM is end-user experience monitoring (EUM).

As defined by Gartner for the APM Magic Quadrant, EUM is:
“The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.”

But what does that mean? What are those capabilities?

There are a number of methods for end-user monitoring. Each has advantages, and no single one is enough. It is important to look at end-user experience through several different sides of the prism to really understand how the metrics match up against user experience. As I was cataloging them for myself, I thought it would be good food for thought to share my definitions.

Synthetic monitoring
Web performance monitoring started with synthetic monitoring in the 1990s. A synthetic monitor is not a real user of your application but an artificial robot user, thus synthetic. The robot periodically executes an interaction with your website, API or web application to verify availability and measure performance. It is one of the easiest monitoring methods to set up and provides almost immediate value by delivering visibility and hard data without having to install or configure anything within the application. An example of a synthetic monitor would be a web transaction monitor that ensures an online store is working by visiting the home page, searching for a product, viewing the product detail, adding it to the cart, and checking out. This is very similar to the pile of functional tests that should run with every build.
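
As a rough illustration (not any vendor’s actual product), the heart of a synthetic monitor can be sketched in a few lines of Node.js: fetch the page on a fixed interval and record status and elapsed time. The URL and interval here are placeholders.

```javascript
// Minimal synthetic availability/latency probe (Node.js 18+, global fetch).
// TARGET_URL and INTERVAL_MS are illustrative placeholders.
const TARGET_URL = "https://www.example.com/";
const INTERVAL_MS = 5 * 60 * 1000; // a 5-minute monitor

async function probe() {
  const start = Date.now();
  try {
    const res = await fetch(TARGET_URL, { redirect: "follow" });
    console.log(`${new Date().toISOString()} ${res.status} ${Date.now() - start}ms`);
  } catch (err) {
    console.log(`${new Date().toISOString()} DOWN ${err.message}`);
  }
}

probe();
setInterval(probe, INTERVAL_MS);
```

A real service layers multi-step transactions, geographic distribution, and alerting on top of this basic loop.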

Although Gartner has relegated synthetic monitoring to an availability role, it still offers a lot of performance-monitoring value that passive methods do not. No other method can measure service delivery when real users are not on the system, which makes it ideal for measuring SLAs. It is also the only way to see individual page resources (a la the waterfall report), as this is not quite yet a real user monitoring (RUM) capability. Synthetics eliminate many of the independent variables that make real user monitoring data difficult to compare. Finally, the synthetic connection to the DevOps tool chain of tests run at build or in QA provides a continuous reference point from development environments, through test, and into production.

Web real-user monitoring (RUM)
When I first saw real-user monitoring back in 2008, I knew it was going to change the way we measure web performance. RUM works by extracting performance values using JavaScript. As actual users visit web pages, performance metrics are beaconed back to the great reporting mothership. Originally, the only metric RUM could capture was a basic page load number, but modern browsers now collect a slew of detailed performance metrics thanks to the W3C timing standards, and will soon even provide access to page-resource-level detail.
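
A minimal sketch of the beaconing idea (the /rum-collector endpoint is hypothetical, not any particular product’s API):

```javascript
// Minimal RUM beacon sketch; "/rum-collector" is a hypothetical endpoint.
window.addEventListener("load", function () {
  // Defer one tick so loadEventEnd has been populated.
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    if (!t) return; // browser lacks Navigation Timing support
    var payload = JSON.stringify({
      page: location.pathname,
      pageLoad: t.loadEventEnd - t.navigationStart,
      ttfb: t.responseStart - t.navigationStart
    });
    if (navigator.sendBeacon) navigator.sendBeacon("/rum-collector", payload);
  }, 0);
});
```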

RUM’s great advantage vs. synthetic is that it can see what’s happening for all of your actual users on all of the web pages they visit. This means you can understand web performance metrics by page, geography, browser or mobile device type. While this provides a broader understanding of general performance, it also has many, many more independent variables, making specific trending and comparison more challenging. RUM is also the front-end method by which transactions are “tagged” so they can be traced and correlated through the back-end, for greater understanding of how software and infrastructure work together to deliver end-user experience and for root-cause analysis.

RUM’s greatest, and perhaps least exploited, value to business today is that it captures business activity information representing WHY we have a website to begin with. It is this business outcome data that should be our first canary in the coal mine for determining whether something needs attention.

Mobile real-user monitoring
Mobile web applications can be monitored with traditional RUM; however, today’s native mobile apps require a different mechanism to measure the application experience. That is typically accomplished by adding an extra library into your mobile application that beacons mobile application performance data back for reporting. Like traditional RUM, this is also how transactions are “tagged” for mapping through delivery software and infrastructure.

With mobile web traffic now reaching 25% of total traffic and mobile being the #1 method for brands to engage consumers, mobile RUM will be of increasing importance to most organizations.

Network real-user monitoring
Hardware appliances that plug into a network switch’s span port to passively listen to network traffic provide network-based RUM that very accurately represents the end-to-end network performance of the application. This type of packet sniffing leverages timestamps in the network packet headers to break performance down into client, server, and network components.

My own assessment is that network RUM is particularly good at monitoring HTTP API performance for services rather than the higher level end user experience of an application consumer.

Desktop agent based monitoring
A few tools focused on the enterprise measure end-user performance and usage by installing an agent on the Windows desktop. These agents often use technology similar to network RUM to inspect client network traffic by IP address and port. This method also provides visibility into usage of enterprise applications as well as general performance and availability.

How many sides of the prism is your organization looking at user experience through?

Hopefully, unless you are already a monitoring guru, you learned a little about the monitoring methods offered by today’s crop of APM tools for understanding end-user experience. What is also interesting to explore is what capabilities users get from the different tools that leverage these methods.

Perhaps good subject for a future post :)

Display w3c navigation timings for any web page

I’ve been trying to find a tool that will display all the W3C navigation timings for any web page I might be browsing. I was surprised not to find it in Chrome dev tools (really?), nor in Speed Tracer, nor in the available Chrome extensions in the format I wanted (*hint: product opportunity for someone).

I actually spent a whole afternoon, while creating my first “Hello World” extension, considering writing a Chrome extension myself to show the web performance data from the navigation timings for any web page. But I would be slow learning the necessary JavaScript and the messaging protocol Google requires for extensions.
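
For the curious, the core of such a tool is only a handful of lines. Here is a quick sketch you can paste into the dev tools console on any page to dump every Navigation Timing value relative to navigationStart:

```javascript
// Dump all W3C Navigation Timing values, in ms after navigationStart.
(function () {
  var raw = window.performance.timing;
  var t = raw.toJSON ? raw.toJSON() : raw; // toJSON yields a plain object in modern browsers
  var rows = [];
  for (var key in t) {
    if (typeof t[key] === "number" && t[key] > 0) {
      rows.push({ event: key, offsetMs: t[key] - t.navigationStart });
    }
  }
  rows.sort(function (a, b) { return a.offsetMs - b.offsetMs; });
  console.table(rows);
})();
```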

And then I came across this “adorable” bookmarklet from @kaaes that does just that.
[Image: breaking_down_onLoad]

All you have to do is drag the bookmarklet to your bookmarks and click it for any web page you are browsing and voila!

[Image: w3c_nav_timings]

So far, this is the handiest tool I have found to get a quick glimpse at the nav timings for your website. These web performance timings are becoming the de facto standard, and that will be even more true when browsers support the forthcoming resource timings (thus giving us the waterfall report as well).

Hope you find this a useful bit of kit for your web performance tool belt.
And a shout out to @kaaes for sharing this with everyone!

Ken

How to select the most important web performance metric as a KPI – #feelsfast

We all know intrinsically that website performance is important. It has a tremendous impact on all of the business KPIs that measure the success of our online endeavors. I think website performance gets so much attention for two reasons: (1) it’s the most obvious symptom of bad results and (2) it is easy to measure.

In my larger philosophical views on Customer Experience (CX) I’ve suggested…

PX > CX > UX = #usable + #feelsfast + #emotive

Feelsfast here represents “a lack of perceived latency.”

15 years ago, when I first started thinking about web performance, we only had network-oriented metrics to understand web page performance. Today, there is a larger set of collectible metrics measuring many aspects of the spectrum of User Experience (UX). And today’s web applications, because they are pretty fat clients, must take client-side performance into account as well.

We are constantly reminded of the importance of performance by vendors, the media and customers through their actions.

What is the most important metric to measure web performance as a KPI? It’s the one that best represents User Experience or a lack of perceived latency.

We have network metrics. These are old-school metrics that focus on how long it takes your server and the network to deliver web page resources to the browser’s network layer.

  • DNS lookup time – time to resolve DNS name
  • TCP connect time – time to TCP connect
  • SSL handshake time – time to perform SSL handshake
  • Time to first byte – time to receive the first packet of data
  • Time to receive the data – time to download the remaining response content
  • Fullpage time – time to load the web page and all its resources

Most of today’s modern web browsers supplement this with a richer set of data based on the W3C Navigation Timing standard (a short sketch deriving the classic metrics from these timestamps follows the list):

  • navigationStart – time that the action was triggered
  • unloadEventStart – time of start of unload event
  • unloadEventEnd – time of completion of unload event
  • redirectStart – time http redirection begins
  • redirectEnd – time http redirection completes
  • fetchStart – time that request begins
  • domainLookupStart – time of start of DNS resolution
  • domainLookupEnd – time DNS resolution completes
  • connectStart – time when TCP connect begins
  • connectEnd – time when TCP connect completes
  • secureConnectionStart – time just before secure handshake
  • requestStart – time that the browser requests the resource
  • responseStart – time that the browser receives first packet of data
  • responseEnd – time the browser receives the last byte of data
  • domLoading – time that the document object is created
  • domInteractive – time when the browser finishes parsing the document
  • domContentLoadedEventStart – time just before DomContentLoaded event
  • domContentLoadedEventEnd – time just after DOMContentLoaded event
  • domComplete – time when the load event of the document is completed
  • loadEventStart – time when the page load event is fired
  • loadEventEnd – time when the page load event completes
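
The old-school network metrics map directly onto these timestamps. Here is a quick sketch of the usual derivations, runnable in a browser console after the page has loaded:

```javascript
// Derive the classic network metrics from W3C Navigation Timing values.
var t = performance.timing;
console.table({
  dnsLookup: t.domainLookupEnd - t.domainLookupStart,
  tcpConnect: t.connectEnd - t.connectStart,
  // secureConnectionStart is 0 for pages not served over SSL/TLS
  sslHandshake: t.secureConnectionStart > 0 ? t.connectEnd - t.secureConnectionStart : 0,
  timeToFirstByte: t.responseStart - t.navigationStart,
  contentDownload: t.responseEnd - t.responseStart,
  pageLoad: t.loadEventEnd - t.navigationStart
});
```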

This is a nice visual of the W3C timings.

[Image: timing-overview — the W3C Navigation Timing processing model]

And we have visual timing metrics available from various tools (see the paint-timing snippet after this list):

  • IE11 brings us msFirstPaint as part of the browser timings
  • webpagetest.org gives us start render, filmstrip view, and the innovative speed index
  • AlertSite.com can provide visual capture and metrics for FirstPaint and Above the Fold using Firefox
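
As an aside, newer browsers have begun exposing paint milestones directly; in Chromium-based browsers the Paint Timing API makes a quick check trivial (support varies, so treat this as an assumption about your browser, not a universal capability):

```javascript
// List paint milestones (first-paint, first-contentful-paint) where supported.
performance.getEntriesByType("paint").forEach(function (entry) {
  console.log(entry.name + ": " + entry.startTime.toFixed(0) + "ms");
});
```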

How do you choose which web performance metric has the most value as a KPI when all of them have some value? The key is to identify, for any particular monitored application or web page, which metric best represents a user’s perception of latency; in other words, whether it feels fast. This is likely one of the more modern metrics: loadEventEnd, FirstPaint, Speed Index, or Above the Fold.

Once selected, this #feelsfast metric should become a critical business KPI, tracked and managed as such.

Are you giving the web performance component of UX enough attention?

Ken

8 Website Monitoring Services – Pricing Analysis

Yesterday I shared the 10 factors for choosing a website monitoring service and purposely kept the pricing discussion short. It’s a larger discussion and analysis, and we will do it justice here.

The marketplace for website monitoring services has changed over the last few years. Synthetic (aka fake user) monitoring is still a very important capability for ensuring web site and application availability and functionality, but it has become one of an ensemble of tools required to understand and manage web performance and user experience well. New and improved technologies and a maturing market have led to a commoditization of these website monitoring services.

An honest assessment of your monitoring needs is still at the top of my list, and you should think carefully about need-to-have vs. nice-to-have. Once you know what you need, you can begin assessing which service is right for your needs, skills, and budget.

Website monitoring services are typically sold in 3 ways.

  • single plans such as one 5-min Web transaction monitor
  • packages such as ten 5-min basic site monitors and one Web transaction monitor
  • usage pricing, such as “I need to monitor my home page every 5 mins” (12 times an hour x 24 hrs x 30.4 days/month)

My methodology for comparing pricing was to convert all of the services I researched to a common cost format. I did that by converting each vendor’s entry-level offering into the cost per test (or per test step for multi-step Web application transaction monitors). Here is a quick example. One 5-minute basic test would run (1 step x 12 intervals/hr x 24 hrs x 30.4 days) = 8,755.2 tests per month. If that has a price of $10 a month, then the cost per test would be $10.00 / 8,755.2, or about $0.00114 per test.
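
That conversion is easy to script; here is a small helper, using only the worked example’s numbers:

```javascript
// Cost per test = monthly price / tests executed per month.
// steps = test steps per run; intervalMin = minutes between runs.
function costPerTest(monthlyPrice, steps, intervalMin) {
  var runsPerMonth = (60 / intervalMin) * 24 * 30.4; // runs/hr x hrs/day x days/month
  return monthlyPrice / (runsPerMonth * steps);
}

console.log(costPerTest(10, 1, 5).toFixed(5)); // 0.00114
```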

This table summarizes my research across 8 different providers of website monitoring services.

Company           | Source  | Real-browser ($/test) | Basic monitor ($/test) | Price rating | RUM        | APM
------------------|---------|-----------------------|------------------------|--------------|------------|-----------
Neustar           | online  | 0.02750               | 0.00688                | $$$$         | Extra cost | Not avail.
Compuware / Gomez | hearsay | 0.01000               | 0.00200                | $$$          | Extra cost | Extra cost
Keynote           | hearsay | 0.01000               | 0.00100                | $$$          | Extra cost | Not avail.
AlertBot          | online  | 0.00456               | 0.00017                | $$           | Not avail. | Not avail.
Dotcom-monitor    | online  | 0.00487               | 0.00080                | $$           | Extra cost | Not avail.
Site24x7          | online  | 0.00046               | 0.00008                | $            | Not avail. | Extra cost
Pingdom           | online  | 0.000343              | 0.000034               | $ (1 Free)   | Included   | Not avail.
Uptimerobot       | online  | Not avail.            | Free                   | Free         | Not avail. | Not avail.

This list is sorted by cost, with the highest-cost providers at the top. Keynote and Gomez pricing is accumulated from information shared with me over the years. Also, Keynote will typically discount 20% if you just ask.

Frankly, unless you have advanced feature needs, I can’t see why you wouldn’t start with the free services from Pingdom and Uptimerobot.

Hope this data helps you make informed business decisions for your website monitoring needs.

Ken

Column definitions:

  • Company – the name of the monitoring service
  • Source – the source of the information gathered
  • Real-browser – the cost per test (or test step) of monitoring with a real web browser sensor
  • Basic monitor – the cost per test of monitoring with a basic protocol synthetic monitor
  • Price rating – the service’s relative price positioning vs. the others
  • RUM – whether the service provides real-user monitoring in addition to synthetic
  • APM – whether the service offers deeper APM monitoring for Java, .Net, PHP