Monthly Archives: February 2014

10 Factors for Choosing the Right Website Monitoring for You

I’ve always thought many Website monitoring services make it difficult to understand exactly what you are getting and exactly what you are paying for. Having spent eons building and delivering monitoring and caring for customers, I thought I would take some time to try and clear things up.

When I say website monitoring services I’m specifically talking about online services (SaaS) that are subscribed to on a monthly or yearly basis to measure website performance and availability and alert when something is wrong. A check or test may run every 1, 5, or 15 minutes. The purpose of that test is to interact with your site, the way a user would, and verify your web application is working correctly. Since these are not “real” users, this type of monitoring is typically referred to as synthetic monitoring.
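
To make the idea concrete, here is a minimal sketch of what one of these checks boils down to, assuming Python with the requests library; the URL, interval, and alert handling are placeholder assumptions, not any particular vendor’s implementation.

    import time
    import requests   # third-party: pip install requests

    URL = "https://www.example.com/"   # placeholder site to monitor
    INTERVAL_SECONDS = 300             # a 5-minute check frequency

    def run_check():
        """One synthetic check: fetch the page and confirm it answered."""
        try:
            resp = requests.get(URL, timeout=10)
        except requests.RequestException as exc:
            return False, f"request failed: {exc}"
        if resp.status_code >= 400:
            return False, f"HTTP {resp.status_code}"
        return True, f"OK in {resp.elapsed.total_seconds() * 1000:.0f} ms"

    while True:
        ok, detail = run_check()
        print(("ALERT: " if not ok else "") + detail)   # a real service would notify you
        time.sleep(INTERVAL_SECONDS)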

There are 10 factors that should be considered when choosing the right Website monitoring for YOU:

  • your needs
  • reliability of reported errors
  • basic testing or real-browser testing
  • monitoring locations
  • effort required to create and maintain monitoring
  • status and diagnostics
  • ability to create and share required reporting to constituents
  • pricing
  • support you may need to be successful
  • other services – load testing, RUM, APM

Your needs are of course of primary importance. Do you have just a single website or multiple? Are you mostly interested in site availability or is performance and user experience important? Do you need to test a single Web page or do you need a Web transaction monitor to verify your application is providing a good experience to users? Will monitoring performance from a single web browser like IE or Firefox be sufficient or do you need cross-browser monitoring?

The reliability of errors reported by the service is probably the second most important factor. Nobody wants to be woken in the middle of the night by alarms saying the website is down when it’s just some spurious ad banner that didn’t display and wouldn’t have affected the end-user experience anyway.

Basic testing is an easy way to understand website availability and basic HTTP performance, but verifying and measuring the performance of meaningful Web application interactions requires a real-browser sensor. Basic testing just sends a simple protocol message asking for the web page. Real-browser monitoring fetches, renders, and measures each aspect of the performance from the perspective of the web browser. More sophisticated users are sometimes interested in monitoring from their top 2 or 3 browsers, like IE and Chrome. There is nothing wrong with basic monitors, but they have limitations when collecting web performance metrics. Real browsers collect W3C timings and help you really understand web performance and user experience. This definitely ties back to your needs.
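
For a flavor of what a real browser gives you beyond a protocol check, here is a minimal sketch that drives a real browser and reads the W3C Navigation Timing values, assuming Python with the Selenium bindings and Firefox installed; the URL is a placeholder.

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("https://www.example.com/")   # placeholder URL

    # Pull the W3C Navigation Timing marks out of the browser itself.
    timings = driver.execute_script("""
        var t = window.performance.timing;
        return {
            dns:      t.domainLookupEnd - t.domainLookupStart,
            connect:  t.connectEnd - t.connectStart,
            ttfb:     t.responseStart - t.requestStart,
            domReady: t.domContentLoadedEventEnd - t.navigationStart,
            pageLoad: t.loadEventEnd - t.navigationStart
        };
    """)
    print(timings)   # all values in milliseconds
    driver.quit()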

The geographic locations available for monitoring may also factor into your selection. Both availability and performance metrics can differ based on where they are sampled from. Knowing which geographies your most important site visitors come from will help you select the best geographic locations for monitoring. You can probably get a pretty good idea of this from your Google Analytics account.

Does the vendor provide tools to make it easy and convenient to define and maintain the scripts (monitor definitions) that perform your desired Web interaction? Do they provide a handy transaction recorder to make it easy to produce and maintain your scripts? Does it actually do what it’s supposed to do? Some more sophisticated users prefer the monitoring services that support Selenium scripts so they can leverage the expertise they already have.
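
For those who go the Selenium route, a monitor definition is often just a short script. Here is a hypothetical sketch of one: it walks through a user interaction the way a recorder would capture it and fails loudly if the application stops responding correctly. It assumes Python with the Selenium bindings; the URL, selectors, and search term are all placeholders.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Step 1: load the home page (placeholder URL).
        driver.get("https://www.example.com/")

        # Step 2: perform a search, the way a user would.
        box = driver.find_element(By.NAME, "q")   # placeholder selector
        box.send_keys("blue widgets")
        box.submit()

        # Step 3: verify the application responded meaningfully.
        results = driver.find_elements(By.CSS_SELECTOR, ".result")
        assert results, "transaction failed: no search results rendered"
        print("transaction OK")
    finally:
        driver.quit()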

You’ll want a high quality status and diagnostic display. Is the data you’re viewing in the status screen real-time or 10 minutes delayed? Is it easy to see the monitors experiencing errors? Can you drill into the errors to understand the duration, type of error, and root symptom? I say root symptom because the root cause is often impossible to see from the outside looking in.

Can the service create and share the reporting views your end users require? Can you create customized dashboards? Do you need to create reports that filter for only certain pieces of content to keep track of 3rd-party SLAs? Or will you be the only one looking, in which case this might be less important?

Pricing is of course a big factor. I’m going to hold my comments for now; I’m breaking pricing down into a model, with some top vendors compared, that I’ll share tomorrow.

Can the vendor provide support to help you be successful? This is partly related to your needs and skills and the complexity of the monitoring you are trying to perform. It may also be related to how well the service captures diagnostics and leads you through them when errors occur.

Lastly, do you need other services such as load testing, real-user monitoring, or perhaps even full-fledged application performance monitoring (APM)? If you’re looking to get a number of things from a single vendor, that may also help you limit the candidates. But don’t be afraid to get this here and that there if you really understand your needs.

Ken

Namebench – a tool for speeding up the DNS part of your browsing experience

Until now, all of the free tools I’ve shared have focused on measuring and optimizing the Web and Mobile user experience – the availability and performance your sites are delivering to desktop and mobile visitors. Let’s take a break from that and do something for ourselves.

Namebench is an open source DNS benchmark utility. It’s a tool for selecting the DNS servers that will give the best performance for your location. DNS is that magic piece of the internet infrastructure that lets us humans remember Yahoo.com rather than IP addresses (kinda like phone #s). It’s like the Yellow Pages for all of the addresses on the internet. And DNS plays a role in determining which servers every resource on a web page should be fetched from.
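
To see why resolver choice matters, here is a minimal sketch of the kind of comparison Namebench automates: timing the same lookup against a few public resolvers. It assumes Python with the third-party dnspython package; the resolver IPs are just well-known public examples.

    import time
    import dns.resolver   # third-party: pip install dnspython

    RESOLVERS = {
        "Google":  "8.8.8.8",
        "OpenDNS": "208.67.222.222",
        "Level 3": "4.2.2.2",
    }

    def time_lookup(nameserver, hostname="yahoo.com"):
        """Time a single A-record lookup against one resolver, in ms."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        start = time.perf_counter()
        resolver.resolve(hostname, "A")   # dnspython 2.x; use .query() on 1.x
        return (time.perf_counter() - start) * 1000

    for name, ip in RESOLVERS.items():
        print(f"{name:<8} {time_lookup(ip):7.1f} ms")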

Using the recommended settings from Namebench can significantly improve the browsing experience. A faster Web is a happier Web! If you are an administrator, you already understand how this could make for a better experience for everyone in the office, especially if your results look like mine. If you’re just a renegade desktop user, you could run a quick test and adjust your own personal settings.

After installing Namebench, start the application. It should detect good defaults. Mine look like this from an AT&T Uverse connection.

[Image: dns_performance_testing_site_screen]

I just clicked “start” to begin the test. It did run for several minutes on this old Core 2 Duo. After running through its paces, Namebench estimates that the default DNS settings can be improved by over 70%. Wow! (I’d like to do a little before and after testing using @sitespeedio at some point, but I’ll have to re-install it now that I put the new M500 SSD in the old Mac.)

[Image: dns_performance_benchmark]

Looking deeper into the comparison report, we are presented with a stack ranking of the DNS servers tested, by mean and fastest response.

[Image: dns_performance_graphs]

A detailed response time distribution is also reported.

[Image: response_time_distribution_chart]

Now go speed up the Web for you and some co-workers!

Ken

PX > CX > UX = #usable + #feels_fast + #was_emotive <-- the battle for business supremacy

Over the last few years I have become inspired, or perhaps possessed, with a certain awe about how touch interfaces, Mobile, Cloud and Social have converged to change the focus of most successful organizations from delivering usable products to delivering meaningful and pleasurable experiences worth sharing. Digital experience influences more and more of our business landscape, from how customers find us, to how they learn about and perceive our reputation, to their on-boarding experience. It’s the experience to date (hey, I just made up a new term: ETD), the sum of the whole experience delivered to the PEOPLE who are our users, that drives this.

Delivering experiences that people feel good about, find memorable and want to share is the next battle for business supremacy.

I’ve suggested previously that, because our users are people, and because understanding the customer journey and how to deliver amazing experiences starts with people, this should not be the practice of customer experience (CX) but rather people experience (PX). Further, if we are focusing for today (and we are) on digital experiences, then we are really talking about user experience (UX).

I would suggest to you that this equation holds true:

PX > CX > UX = #usable + #feels_fast + #was_emotive

Of course, this is rooted in the fact that software systems are now systems of engagement and not just systems of record. We count on our software systems to help improve our reputation with our customers and our software systems to help our employees improve our reputation with customers. Every software system built ultimately has an impact on People Experience. And I want to emphasize how important it is that even our internal systems provide pleasurable experiences to employees. Because happy employees make for happy customers.

[Image: Aberdeen Research interpretation of Anderson’s CX hierarchy]

This concept is borrowed from a slideshare (slide 15) by Stephen Anderson in 2006, and the clarified graphic is courtesy of Aberdeen Research.

They are both a refinement of some previous research from Carnegie Mellon on human-computer interfaces in the early ’90s.

This is what we all should be striving for in the software systems that drive our interactions with customers and prospects – and also in the software systems behind the real-world interactions that support our employees, inventory, or returns process. What does this refined CX pyramid look like to you? Does it remind you of Maslow’s hierarchy of needs? Take a look.

[Image: Maslow’s hierarchy of needs]

Just like with Maslow’s Hierarchy the basic needs and basic tasks at the bottom are much easier to achieve than the needs at the top like self-actualization. And yet, that is what is required of everyone involved in designing customer experiences now.

How can we build applications that create pleasurable experiences? By understanding the PEOPLE who will be using them. That’s probably a lot easier than self-actualizing.

Understand the people who are your users and do more than help them get it done – strive for delight. Think about those smaller parts of the interaction that don’t require building the starship enterprise. The Kano model is a good strategy here. Where could you introduce parts of the interaction that are different and appealing?

Let’s look at one ingenious example in the Travel aggregation space – Hipmunk.com. Search for any flight…go ahead. Notice that cute little button in the sort bar that says sort by “agony.” How can that not make you smile if you’ve ever travelled through airports?

What appealing little capabilities are you adding to your UX to help delight people?

Ken

Uptimerobot – An accurate, easy to use, and free website monitoring service

I am on a quest to share the best free tools and services to help maximize availability, performance and user experience for your critical customer facing applications. Uptimerobot is a nice looking, easy-to-use, basic website monitoring service. Even better, like all of the tools I’ve been sharing over the last few weeks, it’s FREE!

And these guys don’t skimp.
You get 50 – that’s right I said 50 – individual monitors that can run as often as every 5-minutes.

They offer a reasonably handsome dashboard to view summary statistics for all of your website monitors. The web monitors support HTTP, HTTPS, ping, port checking and keyword monitoring. I think they need to merge the keyword monitoring into the HTTP monitoring but that’s such a minor quibble.
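
A keyword check is really just an HTTP check that also inspects the response body, which is why merging them seems natural. Here is a minimal sketch of that combined check, assuming Python with the requests library; the keyword is a placeholder, and this is not Uptimerobot’s actual implementation.

    import requests   # third-party: pip install requests

    def check(url, keyword, timeout=10):
        """HTTP check that also verifies expected content is present."""
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException as exc:
            return f"DOWN: {exc}"
        if resp.status_code >= 400:
            return f"DOWN: HTTP {resp.status_code}"
        if keyword not in resp.text:
            return f"DOWN: keyword {keyword!r} not found"
        return f"UP in {resp.elapsed.total_seconds() * 1000:.0f} ms"

    print(check("https://apmexaminer.com/", "APM"))   # placeholder keyword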

[Image: uptime_robot_dashboard]

Here’s a quick example of the availability and performance data for the web monitor pointed at APMexaminer.com. Right now, I think you can only view this performance data for the last 24 hours.

[Image: uptime_robot_monitor_performance_uptime]

I’ve only had one outage on my Website so far. For some reason, I thought it was a good idea to turn on FastCGI on my Web server. WordPress or MySQL wasn’t happy, and the site crashed the next night and was down for almost 3 hours. Uptimerobot sent accurate and reliable notifications, albeit sparse on diagnostic info, indicating when my Website was down. After @dreamhost support helped me resurrect it, along with some advice for turning off the FastCGI option, I promptly received notification that the Website was back up, along with the duration of the downtime.

Uptimerobot doesn’t have a lot of advanced features, like web performance or transaction monitoring using real web browsers, real-user monitoring statistics for every site visitor, or even the ability to select which geographic locations perform the monitoring.

In my considerable experience, a lot of people are just looking for a good basic website monitoring service that provides reliable notification when their Website is unavailable and lets them see a little basic HTTP performance data, and that is something Uptimerobot does very decently.

Thanks for bringing something excellent to the community.

At this point in our journey together, please don’t tell me YOU don’t have basic Website monitoring in place.

Ken

Gartner’s Application Performance Management leaderboard likely to keep changing in 2014

Every year Gartner publishes the Magic Quadrant (MQ) for Application Performance Management (APM). It is one of the most comprehensive reports covering APM vendors that can address all 5 of the dimensions Gartner has defined. There are many other tools and solutions that focus on specific areas, often more effectively than those in the research, but the list includes only those who provide a complete solution. A Magic Quadrant is an infographic that displays the competing players in a major technology market, broken down into leaders, visionaries, niche players, and challengers.

What’s so interesting are the changes from 2012 to 2013 – and, maybe, how things are likely to shift again.

The APM marketplace is very dynamic! Two factors are making it so. The first, of course, is that digital experiences have become much more significant to our business strategies, driven by Cloud, Mobile and Social. The second, a more traditional story, is that the pace of innovation in application performance management is so breakneck that many of the traditional leaders have lost their footing to newer, more agile startups.

Let’s focus on the top right quadrant, the leaders quadrant. The 2013 research lists just 4 technology players on the leaderboard. They are Compuware, courtesy of their Gomez and Dynatrace acquisitions; Riverbed, courtesy of its OPNET acquisition; and two recently started APM innovators, AppDynamics and New Relic. Two of the entrants are jazzy new startups hatched from the brain trust at Wily Technology (acquired by CA) – New Relic and AppDynamics. The other two have invested $600M and around $1B in acquisitions to grow into the leaders quadrant.

This is a significant change from the 2012 Magic Quadrant, which listed IBM, CA, Quest (now Dell), and BMC in the leaders quadrant alongside the products from 2013. If we add HP and Microsoft to the list, not a single one of the BIG systems management players – HP, IBM, CA, BMC, MS or Dell for that matter – has innovated enough to be a leader. You know what that means :)

There has already been a significant amount of reporting about how New Relic is readying themselves for a likely 2014 IPO and the same can be said for AppDynamics.

Compuware’s business has been under fire for some time while they try to transform themselves into more relevant businesses. Even Riverbed has seen recent rumors of a private equity bid of over $3B.

How long will 6 ginormous systems management vendors be without leading products in the hottest part of the IT Ops marketplace?

I’m guessing that while we may see many of the same products in the 2014 leaders quadrant, at least a couple will be operating as part of IBM, HP, CA, BMC, Microsoft or Dell.

In fact, getting acquired again by CA might help New Relic’s CEO, Lew Cirne, out of his patent disputes over former Wily patents.

Ken

Related links:
What is a Gartner Magic Quadrant

See the 2013 Gartner Magic Quadrant for Application Performance Management from AppDynamics and register for a copy (scroll down below the form to see)

A glimpse of the 2012 Magic Quadrant for Application Performance Management