Category Archives: Editorial

8 Website Monitoring Services – Pricing Analysis

Yesterday I shared the 10 factors for choosing a Website monitoring service and purposefully left the pricing discussion short. Pricing deserves a larger discussion and analysis, and we’ll do it justice here.

The marketplace for Website monitoring services has changed over the last few years. Synthetic (aka fake user) monitoring is still a very important capability for ensuring web site and application availability and functionality, but it has become one of an ensemble of tools required to understand and manage web performance and user experience well. New and improved technologies and a maturing market have led to a commoditization of these website monitoring services.

An honest assessment of your monitoring needs is still at the top of my list, and you should think carefully about need-to-have vs. nice-to-have. Once you know what you need, you can begin assessing which service is right for your needs, skills, and budget.

Website monitoring services are typically sold in 3 ways:

  • single plans such as one 5-min Web transaction monitor
  • packages such as ten 5-min basic site monitors and one Web transaction monitor
  • usage pricing such as “I need to monitor my home page every 5 mins” (12 times an hour x 24 hours x 30.4 days/month)

My methodology for comparing pricing was to convert all of the services I researched to a common cost format. I did that by converting each vendor’s entry-level offering into the cost per test (or test step for multi-step Web application transaction monitors). Here is a quick example. One 5-minute basic test would be (1 step x 12 intervals/hr x 24 hrs x 30.4 days) = 8,755.2 tests per month. If that has a price of $10 a month, then the cost per test would be $10.00 / 8,755.2, or $0.00114 per test.
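If you want to run the same conversion on other plans, here is a minimal Python sketch of that calculation. The $10 monthly price is just the placeholder from the example above, not any vendor’s actual rate.

```python
def cost_per_test(monthly_price, steps=1, interval_minutes=5,
                  hours_per_day=24, days_per_month=30.4):
    """Convert a monthly subscription price into a cost per test (or test step)."""
    tests_per_hour = 60 / interval_minutes  # e.g. 12 runs/hour at 5-minute intervals
    tests_per_month = steps * tests_per_hour * hours_per_day * days_per_month
    return monthly_price / tests_per_month

# One 5-minute basic test: 1 x 12 x 24 x 30.4 = 8,755.2 tests/month
print(round(cost_per_test(10.00), 5))  # -> 0.00114
```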

This table summarizes my research across 8 different providers of Website monitoring services.

| Company | Source | Real-browser ($/test step) | Basic monitor ($/test) | Price rating | RUM | APM |
| --- | --- | --- | --- | --- | --- | --- |
| Neustar | online | 0.02750 | 0.00688 | $$$$ | Extra cost | Not avail. |
| Compuware / Gomez | hearsay | 0.01000 | 0.00200 | $$$ | Extra cost | Extra cost |
| Keynote | hearsay | 0.01000 | 0.00100 | $$$ | Extra cost | Not avail. |
| AlertBot | online | 0.00456 | 0.00017 | $$ | Not avail. | Not avail. |
| Dotcom-monitor | online | 0.00487 | 0.00080 | $$ | Extra cost | Not avail. |
| Site24x7 | online | 0.00046 | 0.00008 | $ | Not avail. | Extra cost |
| Pingdom | online | 0.000343 | 0.000034 | $ (1 free) | Included | Not avail. |
| Uptimerobot | online | Not avail. | Free | Free | Not avail. | Not avail. |

This list is sorted by cost with the highest-cost providers at the top. Keynote and Gomez pricing is compiled from information shared with me over the years. Also, Keynote will typically discount 20% if you just ask.

Frankly, unless you have advanced feature needs, I can’t see why you wouldn’t start with the free services from Pingdom and Uptimerobot.

Hope this data helps you make informed business decisions for your website monitoring needs.

Ken

Column definitions:

Company – the name of the monitoring service
source – the source of the information gathered
real-browser – the cost per test or test step of monitoring using a real web browser sensor
basic monitor – the cost per test of monitoring using a basic protocol synthetic monitor
price rating – a positioning of the service’s price relative to the others
RUM – whether the service can provide real-user monitoring in addition to synthetic
APM – whether the service offers deeper APM monitoring for Java, .NET, or PHP

64% Chrome Browser Share! Are They Chrome Hipsters?

Ok, I’m a big fan of Chrome, and I’ve pretty much replaced Firefox on all my machines with it, but I’m still having a hard time believing the numbers in New Relic’s infographic. New Relic says this data is aggregated from over 3 million application instances.

The breakdown shows:

  • Chrome 64.3%
  • Firefox 16.3%
  • IE 14.8%
  • Safari 4.2%
  • Other 0.4%

Let me correct what I said before: I’m absolutely sure New Relic can correctly aggregate data like this from their massive collection. It’s just that I’m having a hard time believing these numbers represent the mainstream. I still know too many people who are using Windows on the desktop and are happy with the latest IE browsing experiences. And although I’m still a considerable Mac fan, the newest household addition is a Surface Pro, and on it IE provides a better user experience right now, even against the latest Chrome beta.

If this number is true for IE, it would spell almost certain doom for Microsoft. Maybe they are not the King Kong of the client operating system world anymore, but this data is just for desktop and laptop computers. I know a few people who really like their Windows phones, and after some use I think there is a fair amount to like about Windows 8.1 (of course I did have to install a 3rd party start button ;) In fact, I find myself swiping the screen on the MacBook Pro sometimes lately. But if it is doom for Microsoft, should I return my Surface Pro before the 30 days are up?

One reasonable explanation might be that New Relic targets a certain developer / startup type of user and maybe their applications are a little more targeted at – dare I say – Chrome Hipsters :)

Just ranting a bit.

Ken

10 Factors for Choosing the Right Website Monitoring for You

I’ve always thought many Website monitoring services can make it difficult to understand exactly what you are getting and exactly what you are paying for. Having spent eons building and delivering monitoring and caring for customers, I thought I would take some time to try and clear things up.

When I say website monitoring services, I’m specifically talking about online services (SaaS) that are subscribed to on a monthly or yearly basis to measure website performance and availability and alert when something is wrong. A check or test may run every 1, 5, or 15 minutes. The purpose of that test is to interact with your site the way a user would and verify your web application is working correctly. Since these are not “real” users, this type of monitoring is typically referred to as synthetic monitoring.

There are 10 factors that should be considered when choosing the right Website monitoring for YOU:

  • your needs
  • reliability of reported errors
  • basic testing or real-browser testing
  • monitoring locations
  • effort required to create and maintain monitoring
  • status and diagnostics
  • ability to create and share required reporting to constituents
  • pricing
  • support you may need to be successful
  • other services – load testing, RUM, APM

Your needs are of course of primary importance. Do you have just a single website or multiple? Are you mostly interested in site availability or is performance and user experience important? Do you need to test a single Web page or do you need a Web transaction monitor to verify your application is providing a good experience to users? Will monitoring performance from a single web browser like IE or Firefox be sufficient or do you need cross-browser monitoring?

The reliability of errors reported by the service is probably the 2nd most important. Nobody wants to be woken in the middle of the night by alarms saying the website is down when it’s just some spurious ad banner that didn’t display and wouldn’t have affected the end-user experience anyway.

Basic testing is an easy way to understand website availability and basic HTTP performance, but verifying and measuring the performance of meaningful Web application interactions requires a real-browser sensor. Basic testing just sends a simple protocol message asking for the web page. Real-browser monitoring fetches, renders, and measures each aspect of the performance from the perspective of the web browser. More sophisticated users are sometimes interested in monitoring from their top 2 or 3 different browsers, like IE and Chrome. There is nothing wrong with basic monitors, but they have limitations when collecting web performance metrics. Real browsers collect W3C timings and help you really understand web performance and user experience. This definitely ties back to your needs.
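To make the distinction concrete, here is a minimal sketch of what a “basic” protocol monitor amounts to, using Python’s requests library. It checks availability and the time until the response arrived, and nothing more; it does not render the page or collect the W3C timings a real browser would.

```python
import requests

def basic_check(url, timeout=10):
    """A simple protocol-level availability and response-time check."""
    try:
        response = requests.get(url, timeout=timeout)
        return {
            "url": url,
            "status": response.status_code,
            "ok": response.status_code < 400,
            # elapsed measures time until the response headers arrived
            "response_seconds": response.elapsed.total_seconds(),
        }
    except requests.RequestException as exc:
        return {"url": url, "ok": False, "error": str(exc)}

print(basic_check("https://example.com"))
```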

The geographic locations available for monitoring may also factor into your selection. Both availability and performance metrics can differ based on where they are sampled from. Knowing which geographies your most important site visitors come from will help you select the best geographic locations to choose for monitoring. You can probably get a pretty good idea of this from your Google Analytics account.

Does the vendor provide tools to make it easy and convenient to define and maintain the scripts (monitor definitions) that perform your desired Web interaction? Do they provide a handy transaction recorder to make it easy to produce and maintain your scripts? Does it actually do what it’s supposed to do? Some more sophisticated users prefer the monitoring services that support Selenium scripts so they can leverage the expertise they already have.
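For those Selenium-minded users, a transaction script is typically just a handful of driver calls. Here is a minimal sketch, assuming the selenium package and a locally installed Chrome driver; the login URL and element names are hypothetical placeholders, not any real site.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                      # step 1: load the page
    driver.find_element(By.NAME, "username").send_keys("demo")    # step 2: fill the form
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()                  # step 3: submit
    assert "Welcome" in driver.page_source                        # step 4: verify the result
finally:
    driver.quit()
```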

You’ll want a high quality status and diagnostic display. Is the data you’re viewing in the status screen real-time or 10 minutes delayed? Is it easy to see the monitors experiencing errors? Can you drill into the errors to understand the duration, type of error, and root symptom? I say root symptom because the root cause is often impossible to see from the outside looking in.

Can the service create and share the reporting views you need with end users? Can you create customized dashboards? Do you need to create reports that filter for only certain pieces of content to keep track of 3rd party SLAs? Or will you be the only one looking, in which case this might be less important?

Pricing is of course a big factor. I’m going to hold my comments for now. I like to break things down into a common cost model, and I’ll get one down on paper with some top vendors compared to share tomorrow.

Can the vendor provide support to help you be successful? This is partly related to your needs and skills and the complexity of the monitoring you are trying to perform. It may also be related to how well the service captures diagnostics and leads you through them when errors occur.

Lastly, do you need other services such as load testing, real-user monitoring, or perhaps even full-fledged application performance monitoring (APM)? If you’re looking to get a number of things from a single vendor, that may also help you limit the candidates. But don’t be afraid to get this here and that there if you really understand your needs.

Ken

PX > CX > UX = #usable + #feels_fast + #was_emotive <-- the battle for business supremacy

Over the last few years I have become inspired or perhaps possessed with a certain awe about how touch interfaces, Mobile, Cloud and Social have converged to change the focus of most successful organizations from delivering usable products to delivering meaningful and pleasurable experiences worth sharing. Digital experience influences more and more of our business landscape from how customers find us, to how they learn about and perceive our reputation, to their on-boarding experience. It’s the experience to date (hey I just made up a new term ETD), the sum of the whole experience delivered to the PEOPLE who are our users, that drives this.

Delivering experiences that people feel good about, find memorable and want to share is the next battle for business supremacy.

I’ve suggested previously that, because our users are people, and understanding the customer journey and how to deliver amazing experiences starts with people, this should not be the practice of customer experience (CX) but rather people experience (PX). Further, if we are focusing for today (and we are) on digital experiences, then we are really talking about user experience (UX).

I would suggest to you that this equation holds true:

PX > CX > UX = #usable + #feels_fast + #was_emotive

Of course, this is rooted in the fact that software systems are now systems of engagement and not just systems of record. We count on our customer-facing systems to improve our reputation with customers, and on our internal systems to help our employees do the same. Every software system built ultimately has an impact on People Experience. And I want to emphasize how important it is that even our internal systems provide pleasurable experiences to employees, because happy employees make for happy customers.

[Image: Aberdeen Research’s interpretation of Anderson’s CX hierarchy]

This concept is borrowed from a slideshare (slide 15) by Stephen Anderson in 2006, and the clarified graphic is courtesy of Aberdeen Research.

They are both a refinement of some previous research from Carnegie Mellon on human-computer interfaces in the early 90s.

This is what we all should be striving for in the software systems that drive our interactions with customers and prospects, and also in the software systems that support real-world interactions for our employees, inventory, or returns processes. What does this refined CX pyramid look like to you? Does it remind you of Maslow’s Hierarchy of Needs? Take a look.

[Image: Maslow’s Hierarchy of Needs]

Just like with Maslow’s Hierarchy the basic needs and basic tasks at the bottom are much easier to achieve than the needs at the top like self-actualization. And yet, that is what is required of everyone involved in designing customer experiences now.

How can we build applications that create pleasurable experiences? By understanding the PEOPLE who will be using them. That’s probably a lot easier than self-actualizing.

Understand the people who are your users and do more than help them get it done – strive for delight. Think about those smaller parts of the interaction that don’t require building the Starship Enterprise. The Kano model is a good strategy here. Where could you introduce parts of the interaction that are different and appealing?

Let’s look at one ingenious example in the travel aggregation space – Hipmunk.com. Search for any flight…go ahead. Notice that cute little button in the sort bar that says sort by “agony.” How can that not make you smile if you’ve ever travelled through airports?

What appealing little capabilities are you adding to your UX to help delight people?

Ken

Gartner’s Application Performance Management leaderboard likely to keep changing in 2014

Every year Gartner publishes the Magic Quadrant (MQ) for Application Performance Management (APM). It is one of the most comprehensive reports covering the APM vendors that can address all 5 of the dimensions Gartner has defined. There are many other tools and solutions that focus on specific areas, often more effectively than those in the research, but the list includes only those who provide a complete solution. A Magic Quadrant is an infographic that displays the competing players in a major technology market, broken down into leaders, visionaries, niche players, and challengers.

What’s so interesting are the changes from 2012 to 2013 and, maybe, how things are likely to shift again.

The APM marketplace is very dynamic! Two factors are making it so. The first, of course, is that digital experiences have become much more significant to our business strategies, driven by Cloud, Mobile and Social. The second, a more traditional story, is that the pace of innovation in application performance management is so breakneck that many of the traditional leaders have lost their footing to newer, more agile startups.

Let’s focus on the top right quadrant, the leaders quadrant. The 2013 research lists just 4 technology players on the leaderboard. They are Compuware, courtesy of their Gomez and Dynatrace acquisitions; Riverbed, courtesy of its OPNET acquisition; and two recently started APM innovators, AppDynamics and New Relic. Those two entrants are jazzy new startups hatched from the brain trust at Wily Technology (acquired by CA). The other two have invested $600M and around $1B in acquisitions to grow into the leaders quadrant.

This is a significant change from the 2012 Magic Quadrant, which listed IBM, CA, Quest (now Dell), and BMC in the leaders quadrant alongside the vendors from 2013. If we add HP and Microsoft to the list, not a single one of the BIG systems management players – HP, IBM, CA, BMC, MS or Dell for that matter – has innovated enough to be a leader. You know what that means :)

There has already been a significant amount of reporting about how New Relic is readying themselves for a likely 2014 IPO and the same can be said for AppDynamics.

Compuware’s business has been under fire for some time while they try to transform themselves into more relevant businesses. Even Riverbed has seen recent rumors of a private equity bid of over $3B.

How long will 6 ginormous systems management vendors be without leading products in the hottest part of the IT Ops marketplace?

I’m guessing that while we may see many of the same products in the 2014 leaders quadrant, at least a couple will be operating as part of IBM, HP, CA, BMC, Microsoft or Dell.

In fact, getting acquired again by CA might help New Relic’s CEO, Lew Cirne, out of his patent disputes over former Wily patents.

Ken

Related links:
What is a Gartner Magic Quadrant

See the 2013 Gartner Magic Quadrant for Application Performance Management from AppDynamics and register for a copy (scroll down below the form to see)

A glimpse of the 2012 Magic Quadrant for Application Performance Management