
Exploring the methods of end-user experience monitoring for APM

#include <std_disclaimer.h>

Today’s application performance management (APM) marketplace is maturing, and the best solutions bring a wide-ranging set of capabilities for measuring performance and understanding behavior across many aspects of the application delivery stack. One of the cornerstones of APM is end-user experience monitoring (EUM).

As defined by Gartner for the APM Magic Quadrant, EUM is:
“The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.”

But what does that mean? What are those capabilities?

There are a number of methods for end-user monitoring. Each has advantages, and one alone is not enough. It is important to look at end-user experience through a number of different sides of the prism to really understand how the metrics match up against user experience. As I was cataloging them for myself, I thought it would be good food for thought to share my definitions.

Synthetic monitoring
Web performance monitoring started with synthetic monitoring in the 1990s. A synthetic monitor is not a real user of your application but an artificial robot user, hence the name. The robot periodically executes an interaction with your website, API or web application to verify availability and measure performance. It is one of the easiest forms of monitoring to set up and provides almost immediate value by delivering visibility and hard data without having to install or configure anything within the application. An example of a synthetic monitor would be a web transaction monitor that ensures an online store is working by visiting the home page, searching for a product, viewing the product detail, adding it to the cart, and checking out. This is very similar to the pile of functional tests that should run with every build.
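To make the idea concrete, here is a minimal sketch of the robot's core loop in Python. The URL and the latency budget are illustrative assumptions, not values from any real monitoring product:

```python
import time
import urllib.request

# Hypothetical target and latency budget for this sketch.
CHECK_URL = "https://shop.example.com/"
LATENCY_BUDGET_S = 2.0

def evaluate(status, elapsed_s, budget_s=LATENCY_BUDGET_S):
    """Classify one check result the way a synthetic monitor would."""
    if status != 200:
        return "DOWN"
    return "OK" if elapsed_s <= budget_s else "SLOW"

def run_check(url=CHECK_URL):
    """Fetch the page once, timing the full request/response cycle."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()  # pull the whole body, like a real visitor would
        return evaluate(resp.status, time.monotonic() - start)
```

A real product would script the full multi-step transaction (search, cart, checkout) and run it on a schedule from multiple geographies; this shows only the check-and-classify core.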

Although Gartner has relegated synthetic monitoring to an availability role, it still has a lot of value for performance monitoring that passive methods do not address. No other method can help you measure service delivery when real users are not on the system, so it is ideal for measuring SLAs. It is also the only way to see individual page resources (a la the waterfall report), as this is not quite yet a real user monitoring (RUM) capability. Synthetics eliminate a lot of the independent variables that can make it difficult to compare real-user monitoring data. Finally, the synthetic connection to the DevOps tool chain of tests run at build or in QA provides a continuous reference point from development, through test, and into production.
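The SLA use case is simple arithmetic over the check results. A sketch, with a made-up log of one day of 5-minute checks (the 99.9% target is an assumed example):

```python
def availability(results):
    """Percentage of synthetic checks that succeeded."""
    return 100.0 * sum(results) / len(results)

def meets_sla(results, target_pct=99.9):
    return availability(results) >= target_pct

# One simulated day of 5-minute checks (288 total) with two failures.
day = [True] * 286 + [False] * 2
```

Because the robot runs on a fixed schedule whether or not anyone is visiting, this measurement stays meaningful at 3 a.m. on a holiday, which is exactly what passive methods cannot give you.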

Web real-user monitoring (RUM)
When I first saw real-user monitoring back in 2008, I knew it was going to change the way we measure web performance. RUM works by extracting performance values using JavaScript. As actual users visit web pages, performance metrics are beaconed back to the great reporting mothership. Originally, the only metric that could be captured by RUM was a basic page load number, but modern browsers now collect a slew of detailed performance metrics thanks to the W3C timing standards, and they will soon even provide access to page-resource-level detail.
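On the reporting side, the beacon is just a bag of timestamps that the mothership turns into metrics. A sketch of that arithmetic in Python, using field names that follow the W3C Navigation Timing spec (the values are invented):

```python
# A hypothetical RUM beacon: millisecond marks relative to navigationStart.
beacon = {
    "navigationStart": 0,
    "responseStart": 180,            # first byte arrives at the browser
    "domContentLoadedEventEnd": 900, # DOM is ready
    "loadEventEnd": 1400,            # page fully loaded
}

def derive_metrics(t):
    """Turn raw Navigation Timing marks into the metrics people chart."""
    base = t["navigationStart"]
    return {
        "ttfb_ms": t["responseStart"] - base,
        "dom_ready_ms": t["domContentLoadedEventEnd"] - base,
        "page_load_ms": t["loadEventEnd"] - base,
    }
```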

RUM’s great advantage vs. synthetic is that it can see what’s happening for all of your actual users on all of the web pages they visit. This means you can understand web performance metrics by page, geography, browser or mobile device type. While this provides a broader understanding of general performance, it also has many, many more independent variables, making specific trending and comparison more challenging. RUM is also the front-end method by which transactions are “tagged” so they can be traced and correlated through the back-end for greater understanding of how software and infrastructure work together to deliver end-user experience, and for root-cause analysis.
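Slicing by those dimensions is a grouping exercise over the beacon stream. A minimal sketch, with invented sample beacons:

```python
from collections import defaultdict
from statistics import median

# Hypothetical RUM beacons; real ones carry many more dimensions.
beacons = [
    {"browser": "Chrome", "page_load_ms": 1200},
    {"browser": "Chrome", "page_load_ms": 1400},
    {"browser": "IE8", "page_load_ms": 4100},
    {"browser": "IE8", "page_load_ms": 3900},
]

def median_by(dimension, samples):
    """Median page load time grouped by any beacon dimension."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[dimension]].append(s["page_load_ms"])
    return {key: median(values) for key, values in groups.items()}
```

Medians (or high percentiles) matter here precisely because of all those independent variables: a handful of users on slow connections will wreck an average long before they wreck a median.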

RUM’s greatest and perhaps least exploited value to business today is that it captures business activity information that represents WHY we have a website to begin with. It is this business outcome data that should be our first canary in the coal mine for determining if something needs attention.

Mobile real-user monitoring
Mobile web applications can be monitored with traditional RUM; however, today’s native mobile apps require a different mechanism to measure the application experience. That is typically accomplished by adding an extra library into your mobile application that beacons mobile application performance data back for reporting. Like traditional RUM, this is also how transactions are “tagged” for mapping through delivery software and infrastructure.

With mobile web traffic now reaching 25% of total traffic and mobile being the #1 method for brands to engage consumers, mobile RUM will be of increasing importance to most organizations.

Network real-user monitoring
Hardware appliances that plug into a network switch’s SPAN port to passively listen to network traffic provide network-based RUM that very accurately represents the end-to-end network performance of the application. This type of packet sniffing leverages timestamps in the network packet headers to break performance down into client, server, and network components.
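To illustrate the decomposition, here is a simplified sketch of how an appliance sitting on a server-side SPAN port might split response time from packet timestamps. The timestamps are invented, and real appliances handle retransmissions, pipelining and much more:

```python
# Hypothetical packet timestamps (ms) observed near the server.
packets = {
    "syn": 0.0,             # client SYN arrives
    "syn_ack": 1.0,         # server answers almost immediately
    "ack": 41.0,            # client ACK returns one round trip later
    "request_last": 42.0,   # last packet of the HTTP request
    "response_first": 192.0,  # first byte of the response leaves
    "response_last": 292.0,   # last byte of the response leaves
}

def decompose(p):
    """Split response time into network, server and transfer components."""
    # The SYN-ACK -> ACK gap is one client round trip over the network.
    rtt = p["ack"] - p["syn_ack"]
    return {
        "network_rtt_ms": rtt,
        # Seen from the server side, request-in to first-byte-out is server time.
        "server_ms": p["response_first"] - p["request_last"],
        "transfer_ms": p["response_last"] - p["response_first"],
    }
```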

My own assessment is that network RUM is particularly good at monitoring HTTP API performance for services rather than the higher level end user experience of an application consumer.

Desktop agent based monitoring
A few tools focused on the enterprise measure end-user performance and usage by installing an agent on the Windows desktop. These agents often use technology similar to network RUM to inspect client network traffic by IP address and port. This method also provides visibility into the usage of enterprise applications as well as general performance and availability.

How many sides of the prism is your organization looking at user experience through?

Hopefully, unless you are already a monitoring guru, you learned a little about the monitoring methods being offered by today’s crop of APM tools for understanding end-user experience. What is also interesting to explore is what capabilities users get from the different tools that leverage these methods.

Perhaps a good subject for a future post :)

We are in the great monitoring renaissance

#include <std_disclaimer.h>

Someone told me just yesterday that my head was in the clouds, that I was too much of a dreamer about monitoring. I really disagree. We are in the great business and application monitoring renaissance!

Today, monitoring systems, both open source and from leading vendors, are simpler to implement and distill better intelligence about application performance than ever before, and better capabilities are coming.

There are a pile of vendors that do all or most of the 5 APM dimensions described by Gartner. The future, though, is different. It’s something more, something with its own intuition to help us normal humans manage things well. And it will be more than a system that helps you become aware of and address technical performance issues like today’s APM. It will be a system that helps you manage Customer Experience across all channels.

Someday we may have the Internet of Things (IoT), where everything is a sensor, but we already have a lot of sensor data for managing business, applications, networks and platforms.

Many organizations already have sensors that collect performance and availability data from:
– synthetic end-user monitoring
– real user monitoring
– algorithm performance
– transaction tracing
– platform monitoring
– network performance monitoring
– database performance
– visitor analytics
– business performance statistics
– events like product releases

The bigger issue is that much of the above sensor data is still looked at in a non-integrated way.

What organizations need are business analytics and performance systems that give us the traditional shareable KPI dashboards with a layer underneath: a statistically powered, machine learning layer that analyzes the streams of “big data” coming from all those sensors in real time, identifies anomalous behavior, and correlates other anomalous events all the way from the technical stack, through the user experience layer, to business results.
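The anomaly-detection piece of that layer can be sketched very simply. Below is a rolling z-score detector over a single metric stream; the window size and threshold are assumptions for illustration, and production systems use far more sophisticated models:

```python
from statistics import mean, stdev

def anomalies(stream, window=10, threshold=3.0):
    """Return the indexes of points that deviate sharply from recent history."""
    flagged = []
    for i in range(window, len(stream)):
        history = stream[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Flag the point if it sits more than `threshold` deviations
        # from the rolling mean of the preceding window.
        if sigma > 0 and abs(stream[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

Run across hundreds of sensor streams at once, the timestamps of co-occurring flags are what let a system correlate a stack-level anomaly with a dip in business results.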

I was told that this is too complex. That it will never be mainstream.

Yes, performing streaming analysis of data in real time and correlating it across hundreds or thousands of metrics is complex, but so is a fingerprint sensor on a smartphone. It’s OK if something is complex inside as long as the user interaction is not complex. Well-designed products take very complex things and make them simple for users to leverage.

This isn’t anything as futuristic as AI. In fact, to me this seems like the maturation of business intelligence systems applied to customer experience. In the beginning there was the data. The data is big and raw and complex and hard to look at. Over the years we turned that data into information, delivering reports and dashboards that make it easy to understand and ask questions of the data, or to show KPIs over time. The fulfillment of the BI promise is that software systems can help us turn data into information and information into knowledge.

That’s really what we are striving for: operational systems smart enough to self-identify anomalous behavior anywhere in the business / technology stack. Machine-detected anomalies effectively create a warrant, which needs to be triaged before jumping into action. But isn’t that what we really want from our business monitoring systems?

Tell me when something unusual is happening and provide all the related things that could be causing it.

Just my 2-cents. It doesn’t seem like rocket science to me.

Ken

Gartner’s Application Performance Management leaderboard likely to keep changing in 2014

Every year Gartner publishes the Magic Quadrant (MQ) for Application Performance Management (APM). It is one of the most comprehensive reports covering APM vendors that can address all 5 of the dimensions Gartner has defined. There are many other tools and solutions that focus on specific areas, often more effectively than those in the research, but the list includes only those who provide a complete solution. Magic Quadrants are infographics that display the competing players in a major technology market, broken down into leaders, visionaries, niche players and challengers.

What’s so interesting are the changes from 2012 to 2013, and maybe, how things are likely to shift again.

The APM marketplace is very dynamic! Two factors are making it so dynamic. The first, of course, is that digital experiences have become much more significant to our business strategies driven by Cloud, Mobile and Social. The second, a more traditional story, is that the pace of innovation in application performance management is so breakneck that many of the traditional leaders have lost their footing to newer more agile startups.

Let’s focus on the top right quadrant, the leaders quadrant. The 2013 research lists just 4 technology players on the leaderboard. They are Compuware, courtesy of its Gomez and dynaTrace acquisitions; Riverbed, courtesy of its OPNET acquisition; and two recently started APM innovators, AppDynamics and New Relic. Two of the entrants are jazzy new startups hatched from the brain trust at Wily Technology (acquired by CA): New Relic and AppDynamics. The other two have invested $600M and around $1B in acquisitions to grow into the leaders quadrant.

This is a significant change from the 2012 Magic Quadrant, which listed IBM, CA, Quest (now Dell), and BMC in the leaders quadrant alongside the 2013 leaders. If we add HP and Microsoft to the list, not a single one of the BIG systems management players — HP, IBM, CA, BMC, Microsoft or Dell for that matter — has innovated enough to be a leader. You know what that means :)

There has already been a significant amount of reporting about how New Relic is readying themselves for a likely 2014 IPO and the same can be said for AppDynamics.

Compuware’s business has been under fire for some time while they try to transform themselves into more relevant businesses. Even Riverbed was recently rumored to have received a private equity bid of over $3B.

How long will 6 ginormous systems management vendors be without leading products in the hottest part of the IT Ops marketplace?

I’m guessing that while we may have many of the same products in the 2014 leaders quadrant, at least a couple will be operating as part of IBM, HP, CA, BMC, Microsoft or Dell.

In fact, getting acquired again by CA might help New Relic’s CEO, Lew Cirne, out of his patent disputes over former Wily patents.

Ken

Related links:
What is a Gartner Magic Quadrant

See the 2013 Gartner Magic Quadrant for Application Performance Management from AppDynamics and register for a copy (scroll down below the form to see)

A glimpse of the 2012 Magic Quadrant for Application Performance Management

Making Sense of Customer Experience (CX) , User Experience (UX) and Application Performance Management (APM)

One thing I have always been a little nit-picky about is clarity of communication. The terms customer experience, user experience and application performance are used in so much marketing and promotional language that it can be easy to lose sight of what they actually mean.

Customer experience, often abbreviated as CX, is a customer’s perception of their entire relationship with your organization. It’s their memory and emotional assessment of the history of each and every touchpoint they have had over the duration of your engagement together. CX can also refer to a single interaction when thinking more granularly.

Looking at it from the organizational perspective, CX is the planning, delivery and management of every aspect of each individual customer journey in support of customer behaviors such as discovery, evaluation, purchase, post-purchase evaluation, and experience sharing.

Who owns the customer experience in your organization?

User experience, often abbreviated as UX, is a subset of CX. It is the customer’s perception of their entire digital relationship with your organization. It is the sum of their feelings about the history of their online interactions with you through the web, smartphone apps, and social media. From an organizational perspective, UX also focuses on designing and managing the digital touchpoints, or perhaps just influencing them in the case of social media.

Digital is increasingly paramount and cannot be separated from your business, brand and CX strategies.

Why are CX and UX so important now?

Because businesses sell products and services, but people buy experiences! While the features and benefits of the products and services you sell can be commoditized, delivering memorable experiences worth sharing cannot. CX and UX are the moat that you build and use to defend your market position, take on new markets, or create entirely new ways of delivering the experiences people desire.

Perhaps it shouldn’t be called CX or UX but rather PX for people experience!

So where does application performance management (APM) fit in? It is a big part of managing the digital experience and of supporting most of the non-digital experiences people have with businesses. APM is the monitoring and management of software applications for availability and performance. The job of APM is to identify application problems and support quick diagnosis so expected service levels can be maintained.

Of course, software applications directly support and influence most digital journeys. But have you thought about how they indirectly influence many brick-and-mortar and supporting capabilities? If you talk to an employee at the customer service desk to ask if a product is in stock, they are using a software application to look it up, and let’s not even get started on the complex supply chain and logistics involved in stocking the shelves at any big box retailer.

APM is the monitoring and the processes your organization has in place to ensure the health of the systems that support the business, translating all those metrics into business value, or PX.

I had a bit of an awakening as it relates to – can I use my new word ;) – PX, but that’s for another post. That sounds like my next job – CPXO – Chief People Experience Officer!

Ken

ManageEngine Debuts Cisco AVC Monitoring, iPad App, Network Security Fortifications at Cisco Live Milan

NetFlow Analyzer, OpManager, DeviceExpert Demonstrate Upgrades at Cisco Show

NetFlow Analyzer supports Cisco Application Visibility and Control monitoring

OpManager rolls out iPad app to enable IT management while on the go
DeviceExpert fortifies network security through SIEM integration and session recording

MILAN and PLEASANTON, Calif. – January 27, 2014 – ManageEngine announced a suite of upgrades that are immediately available for key applications. NetFlow Analyzer, the real-time traffic and security analytics software, adds Cisco Application Visibility and Control (AVC) monitoring. OpManager, the company’s data center management software for large enterprises, gains an iPad app. DeviceExpert, the web-based, multi-vendor network change and configuration management solution, now supports security information and event management (SIEM) integration.

ManageEngine will be demonstrating the applications’ new features at Cisco Live, January 27-31, 2014, in Milan, Italy. At the show, ManageEngine will be in booth E43/E44.

“The new IT management capabilities we’re debuting at Cisco Live Milan improve IT teams’ abilities to provide superior, non-stop business services,” said Raj Sabhlok, president of ManageEngine, a division of Zoho Corp. “The OpManager iPad app lets IT admins resolve network device issues at any time, from anywhere. NetFlow Analyzer’s AVC monitoring ensures the right network applications get the right share of network resources. And the SIEM integration in DeviceExpert fortifies overall network security.”
ManageEngine Highlights at Cisco Live Milan 2014

At Cisco Live Milan, ManageEngine experts will be on hand to discuss and demonstrate the latest enhancements to its IT management portfolio, including:

NetFlow Analyzer – With the addition of Cisco AVC monitoring, NetFlow Analyzer now supports all major monitoring technologies from Cisco including NBAR, CBQoS, IP SLA, WAAS and Medianet. AVC monitoring lets IT teams segment, identify, monitor and manage over 1,000 applications with the help of NBAR2. In turn, AVC monitoring helps improve QoS monitoring and application response times. NetFlow Analyzer is IVT-tested and certified as ‘Cisco Compatible’ in various key areas.

OpManager – The new iPad app for OpManager lets admins view the availability and performance data of Cisco devices. It lists the alarms that are raised and lets the admin acknowledge, add notes, clear and delete them. The app includes various troubleshooting options such as ping, traceroute and IT workflow automation. Admins can also use the app to create custom dashboards and widgets.

DeviceExpert – With SIEM integration, DeviceExpert can now send Syslog messages to SIEM tools upon detecting a configuration change. In turn, the SIEM tools can analyze those events, correlate them with other network events, and provide insights on overall network activity. The latest release of DeviceExpert also gains session recording of Telnet and SSH connections launched to devices from the DeviceExpert GUI. Session recording caters to the audit and compliance requirements of organizations that mandate proactive monitoring of activities. The recorded sessions can also be archived and played back to support forensic audits. DeviceExpert also offers REST APIs to enable any third-party application or software to integrate with DeviceExpert directly and add, access and extract data.