Website monitoring services are a simple and direct way to begin measuring availability and performance for web applications. For the purpose of this post, a website monitoring service is defined as a service that interacts with your website as a robotic (or synthetic) end user in order to measure, diagnose, report on, and alert you to the level of service the website is delivering to users.
Website monitoring services were first introduced about 20 years ago and are currently represented by companies such as Pingdom, Keynote Systems and AlertSite. There’s quite a long list and a top 10 vendor directory will be published shortly. These services are typically SaaS, meaning that you configure them remotely using your web browser. They are purchased on a subscription basis and have a monthly or yearly fee that depends on the type of monitoring, frequency of monitoring and number of sites or web transactions that need monitoring.
The advantages of synthetic website monitoring are:
- collection of consistent and repeatable metrics, eliminating many outside variables
- simple to set up, providing meaningful performance metrics in just a few minutes
- provides useful data for reporting compliance with service level agreements (SLAs)
- captures rich performance information about all webpage content, including third-party content like ads and social media widgets
- can measure three perspectives of performance: network, browser, and visual
But there are some disadvantages too:
- the collected data is not from real users but rather samples taken periodically by a robot (synthetic user)
- even sampling every 5 minutes yields only 288 samples a day (24 × 60 ÷ 5), which might not be statistically significant
- performance metrics are collected only for the client configuration you monitor from, not for all of today's myriad browsers
There are two primary use cases for website monitoring services. The first is to ensure the web application or other connected service (such as an API) is available. The second is to collect and trend performance data in order to understand the end-to-end user experience. Tracking website or service availability and outside-in connectivity is probably the more important capability for synthetic web application monitoring today, as real-user monitoring (RUM) is beginning to usurp the job of monitoring performance; we will cover that in greater depth in another post. I am not implying that the performance data from website monitoring services is not valuable, but it is captured only for the pages being monitored directly, and only for the specific client configuration you have set up for monitoring. Website monitoring can be practical for monitoring key money-path transactions; RUM is a better solution for monitoring the performance of all of the pages on your website.
Today, most website monitoring services offer real-browser monitoring: the interactions are executed and timed using a real web browser. Many of the vendors use the Selenium technology for playback, but a couple have their own powerful record-and-playback technologies. Using a web browser to play back the scenario (or transaction, script, or whatever it is called) and then collecting the performance metrics provides a much higher-fidelity set of web performance data. It also makes creating the scripted interactions simpler, as it is much more straightforward to push user events and then observe what is happening in the browser than to try to mangle and hack the HTTP conversation. Besides, if your monitoring is not actually rendering the web page, how can those performance metrics be meaningful? Further, entering data into form fields, clicking buttons, and using menus means the monitoring scenario is functionally testing the web application each time it runs.
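To make the real-browser idea concrete, here is a minimal sketch, not any vendor's actual implementation. It assumes the third-party selenium package and a ChromeDriver binary are installed, and it reads the browser's own Navigation Timing marks after a page load; the timing summary is split out as a pure function so it can be exercised without a browser.

```python
try:
    # Assumption: the third-party selenium package (and a ChromeDriver
    # binary on PATH) are installed. This is an illustrative sketch only.
    from selenium import webdriver
    HAVE_SELENIUM = True
except ImportError:
    HAVE_SELENIUM = False

def summarize_timing(t: dict) -> dict:
    """Turn raw Navigation Timing marks (epoch milliseconds) into durations."""
    return {
        "ttfb_ms": t["responseStart"] - t["navigationStart"],
        "load_ms": t["loadEventEnd"] - t["navigationStart"],
    }

def time_page_load(url: str) -> dict:
    """Load a page in a real browser and report time-to-first-byte and load time."""
    if not HAVE_SELENIUM:
        raise RuntimeError("selenium is not installed")
    driver = webdriver.Chrome()
    try:
        driver.get(url)  # blocks until the page's load event has fired
        # Ask the browser itself for its Navigation Timing marks.
        raw = driver.execute_script(
            "var t = performance.timing;"
            "return {navigationStart: t.navigationStart,"
            "        responseStart: t.responseStart,"
            "        loadEventEnd: t.loadEventEnd};")
        return summarize_timing(raw)
    finally:
        driver.quit()
```

The commercial services layer scheduling, multi-location probes, alerting, and reporting on top of measurements like these; the point of the sketch is only that a real browser, not a raw HTTP client, is producing the numbers.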
Using what I have referred to as "not-the-browser" technologies for testing availability and basic HTTP performance can also be valuable. These are raw HTTP interactions: they can reliably measure availability and service performance for APIs, but they cannot capture end-user experience metrics for a web page the way a user would experience it, because they lack all of the work the browser does after it receives the text (HTML) of the web page.
Who in your organization really owns the user experience (UX)?
Application performance is serious business regardless of who your users are. Slow or unavailable web applications lose prospects, frustrate customers, and decrease employee productivity, meaning they hurt the business. Delivering fast, feel-good web experiences is critical to keeping users happy and driving successful business results.
I saw Fred Wilson (www.avc.com) of Union Square Ventures speak late last year at the Velocity Conference in New York City, and it reminded me of something he said back in 2010 about the 10 Golden Principles of Successful Web Applications. You can watch the video for yourself, but here's the opening:
“First and foremost, we believe that speed is more than a feature. Speed is the most important feature. If your application is slow, people won’t use it.”
What do you think?