Is speed becoming a commodity? UX is dead. Long Live UX!

#include <std_disclaimer.h>

The other day I tweeted that I love how, as I learn more, my definition of User Experience (UX) continues to expand. For many years, as part of end-user monitoring businesses, I spoke about UX as if it depended completely on what I have come to call Feelsfast.

Feelsfast is the mythical metric that measures how long it takes for a user to feel like your page has loaded. This is a basic requirement, a basic need that’s lower on the hierarchy of needs. Without Feelsfast the rest of UX doesn’t matter because no one will use your website or mobile application.

UX = Feelsfast is dead. Long live UX!

My newer, more cultivated view of user experience goes something like…

UX = Feelsfast + Usability + MarketingPs

What do I mean when I say Usability?
It’s the ease of use, learnability and enjoyment that can come from a digital interaction. This is the stuff of the Carnegie Mellon human-computer interaction model that Forrester frequently discusses; Aberdeen presents a refined interpretation below:

Aberdeen Research interpretation of Andrew's CX hierarchy


BUT… design, style and grace are akin to store design, layout and optimization in the retail world. In the SaaS and application world, though, they are part of the Product, because the user interface IS the product.

Usability still leaves something out, something more essential: the substance.
The MarketingPs are the substance. This is the traditional Price, Promotion, Product, Placement stuff that we all learned in our Principles of Marketing class back in college. This is what people are buying.

The amazing opportunity for businesses moving forward is to leverage the latter two in my UX equation to reimagine the customer journey and more effectively engage users at every touchpoint of the customer experience.

Digital, as a virtual experience, supports endless experimentation, promoting a culture of innovation that allows businesses to create real competitive advantages. Combine experimentation with big-data-powered multivariate analytics and our systems can create a safety net, allowing our employees to take risks without risk!

So where does that leave Feelsfast?
Yes, we can’t deliver the rest of UX without it! And yet it is really the stuff of infrastructure. It’s the rails our train of digital experience rides on.

For modern, well-designed software applications, we are getting to the point where you should just be able to add more coins to the computing machine to get more performance.

Is SPEED becoming a commodity?
We are not quite there yet, but that’s where we are going :)

5 things your business needs to do now to win the game of CX

#include <std_disclaimer.h>

I love the now and the place we are going. It’s so exciting for businesses and customers!

Trying to understand how business is changing technology and technology is changing business is so much fun. One of the themes I’ve been thinking a lot about lately is that IT and business have changed each other so much they are now one.

Call it the digital transformation that’s the driving force, ok, but that transformation is at least in part driven by our own over-indulged interactions with our digital devices and need for the constant pulse of engagement. I think everyone in my household could use a little 12-step digital addiction counseling including me :)

What a wonderful opportunity for brands to take advantage of human nature and pervasive connectedness!

Capitalizing on this opportunity for customer engagement will rely, at least in part, on how well your organization understands your conceptual value chain and can generate the three fuels that feed success.

The conceptual value chain looks something like this:

Business outcomes from
User behaviors because of
UX = Available + Feelsfast + Usable + Enjoyable (delivered by)
Applications depending on
Services running on
Servers residing in data centers

That value chain runs on these fuels:

Customer experience planning and design is customer experience management largely as defined by Forrester. And they do a very good job of publishing guidance on the processes and roles required to do this well. The concept of the truly empowered “product owner” from the Agile model is a much leaner interpretation for small teams. It really comes down to who owns the customer experience and the business outcomes generated.

Customer acquisition is primarily a sales and marketing function, and frankly there should be very little daylight between the two. Today, marketing is sales at scale. Brand message and core value proposition should be consistent in an omni-channel world. And it is the larger customer experience design – the what we have and why people care about it – that defines the product and experience. Sales and marketing almost become the way we project the emotional and business impact of the planned customer experience and its delightful fulfillment.

Fulfillment of the user experience has a hard and a soft component. The hard component is the service delivery of the application: is it available and does it feel fast? It’s the hierarchy of applications running on services that use servers in different data centers. The soft component is the result of customer experience design, driven by usability factors like utility, ease and enjoyment.

Here are 5 things your organization can do right now to create the fuel you need to win big:

1 Tie the value chain together so everyone can understand and own business outcomes.
This means the what and the why need to be very clearly stated. What is the desired business result? What user behaviors will get us to that business result? What user experience must be delivered to drive that user behavior? And so on. This means monitoring and metrics, metrics, metrics.

2 Get serious about UX
User experience is the contact patch with the customer. It’s the blender that mixes the “just right experience” by carefully combining innovative CX planning and operational service delivery. The importance of UX needs an executive voice as well as support at every level of the organization. Saying get serious about UX is almost like saying get serious about winning.

3 Be a team!
Stop having the business make requirements and IT produce deliverables. We – the business and IT people – need to use information and technology to collaborate and break through traditional business perimeters. To win big you have to win as a team!

4 Go faster.
Run projects with an Agile format and adopt a DevOps approach to your test-and-deploy methodology. Use the cloud to dev, build, test and deploy. It takes a lot less time than ordering, racking, and configuring hardware. These are big cultural changes, so pick a project to start with and show the rest of your teams how successful and fun the new ways can be.

5 Leverage analytics and big data.
We all talk about data-driven decisions, but a team of analysts spending a week gathering last week’s data in Excel from systems spread all over the company doesn’t cut it anymore. What’s needed is monitoring of both the service delivery systems and the customer acquisition systems at every step of the value chain, with real-time granular data. There are so many areas where this can impact success. Analytics are the thread of data that ties the value chain together and shows the business the moving parts as well as the whole. Analytics and anomaly detection are an important power tool so mere humans can get help knowing what might be changing, and worth paying attention to, in the sea of big data. Software analytics are the silent voice of the customer, pointing to product usage and frustration.

Let’s stop building software and start building amazing user experiences that people can’t wait to share!

Exploring the methods of end-user experience monitoring for APM

#include <std_disclaimer.h>

Today’s application performance management (APM) marketplace is maturing, and the best solutions bring a wide-ranging set of capabilities for measuring performance and understanding behavior across many aspects of the application delivery stack. One of the cornerstones of APM is end-user experience monitoring (EUM).

As defined by Gartner for the APM Magic Quadrant, EUM is:
“The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.”

But what does that mean? What are those capabilities?

There are a number of methods for end-user monitoring. Each has advantages, and one is not enough. It is important to look at end-user experience through a number of different sides of the prism to really understand how the metrics match up against user experience. As I was cataloging them for myself, I thought it would be good food for thought to share my definitions.

Synthetic monitoring
Web performance monitoring started with synthetic monitoring in the 1990s. A synthetic monitor is not a real user of your application but an artificial robot user, thus synthetic. The robot periodically executes an interaction with your website, API or web application to verify availability and measure performance. It is one of the easiest forms of monitoring to set up and provides almost immediate value by delivering visibility and hard data without having to install or configure anything within the application. An example of a synthetic monitor would be a web transaction monitor that ensures an online store is working by visiting the home page, searching for a product, viewing the product detail, adding it to the cart, and checking out. This is very similar to the pile of functional tests that should run with every build.
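The robot-user loop described above can be sketched in a few lines of Python. The step names, URLs, and the `run_synthetic_check` helper are all invented for illustration; a real product layers scheduling, alerting and full browser emulation on top of this idea.

```python
import time
from typing import Callable, List, Tuple

def run_synthetic_check(steps: List[Tuple[str, str]],
                        fetch: Callable[[str], str]) -> List[dict]:
    """Execute each (name, url) step of a scripted transaction,
    timing it and recording whether it succeeded."""
    results = []
    for name, url in steps:
        start = time.monotonic()
        try:
            body = fetch(url)          # e.g. urllib.request.urlopen(url).read()
            ok = len(body) > 0
        except Exception:
            ok = False                 # any error fails the step
        results.append({
            "step": name,
            "ok": ok,
            "elapsed_ms": (time.monotonic() - start) * 1000.0,
        })
    return results

# The online-store walk-through from the text, against a hypothetical shop:
steps = [
    ("home",     "https://shop.example.com/"),
    ("search",   "https://shop.example.com/search?q=widget"),
    ("detail",   "https://shop.example.com/product/42"),
    ("add_cart", "https://shop.example.com/cart/add/42"),
    ("checkout", "https://shop.example.com/checkout"),
]
```

Injecting `fetch` keeps the script testable; a scheduler would run it every few minutes and alert on any failed step or slow `elapsed_ms`.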

Although Gartner has relegated synthetic monitoring to an availability role, it still has a lot of value for performance monitoring that passive methods do not address. No other method can help you measure service delivery when real users are not on the system. Thus it is ideal for measuring SLAs. And it is the only way to see individual page resources (a la the waterfall report) as this is still not quite yet a real user monitoring (RUM) capability. Synthetics eliminate a lot of the independent variables that can make it difficult to compare real user monitoring data. Finally, the synthetic connection to the DevOps tool chain of tests run at build or in QA provides a continuous reference point from development environments, through test and production.

Web real-user monitoring (RUM)
When I first saw real-user monitoring back in 2008, I knew it was going to change the way we measure web performance. RUM works by extracting performance values using JavaScript. As actual users visit web pages, performance metrics are beaconed back to the great reporting mothership. Originally, the only metric RUM could capture was a basic page-load number, but modern browsers now collect a slew of detailed performance metrics thanks to the W3C timing standards, and soon will even provide access to page-resource-level detail.
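As a sketch of what happens once those beacons reach the mothership, here is how the standard W3C Navigation Timing timestamps (epoch milliseconds) reduce to the familiar sub-metrics. The attribute names are the real Navigation Timing ones; the beacon values below are made up for illustration.

```python
def derive_rum_metrics(t: dict) -> dict:
    """Turn raw W3C Navigation Timing timestamps (epoch ms) into durations."""
    return {
        "dns_ms":       t["domainLookupEnd"] - t["domainLookupStart"],
        "connect_ms":   t["connectEnd"] - t["connectStart"],
        "ttfb_ms":      t["responseStart"] - t["navigationStart"],
        "dom_ready_ms": t["domContentLoadedEventEnd"] - t["navigationStart"],
        "page_load_ms": t["loadEventEnd"] - t["navigationStart"],
    }

# A hypothetical beacon from one page view:
beacon = {
    "navigationStart": 1000, "domainLookupStart": 1005, "domainLookupEnd": 1030,
    "connectStart": 1030, "connectEnd": 1070, "requestStart": 1071,
    "responseStart": 1190, "responseEnd": 1250,
    "domContentLoadedEventEnd": 1600, "loadEventEnd": 2100,
}
```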

RUM’s great advantage vs. synthetic is that it can see what’s happening for all of your actual users on all of the web pages they visit. This means you can understand web performance metrics by page, geography, browser or mobile device type. While this provides a broader understanding of general performance, it also has many, many more independent variables, making specific trending and comparison more challenging. RUM is also the front-end method by which transactions are “tagged” so they can be traced and correlated through the back-end for a greater understanding of how software and infrastructure work together to deliver end-user experience, and for root-cause analysis.

RUM’s greatest, and perhaps least exploited, value to business today is that it captures business activity information that represents WHY we have a website to begin with. It is this business outcome data that should be our first canary in the coal mine for determining if something needs attention.

Mobile real-user monitoring
Mobile web applications can be monitored with traditional RUM; however, today’s native mobile apps require a different mechanism to measure the application experience. That is typically accomplished by adding an extra library into your mobile application that beacons mobile application performance data back for reporting. Like traditional RUM, this is also how transactions are “tagged” for mapping through delivery software and infrastructure.

With mobile web traffic now reaching 25% of total traffic and mobile being the #1 method for brands to engage consumers, mobile RUM will be of increasing importance to most organizations.

Network real-user monitoring
Hardware appliances that plug into a network switch’s SPAN port to passively listen to network traffic provide network-based RUM that very accurately represents the end-to-end network performance of the application. This type of packet smashing leverages timestamps in the network packet headers to break performance down into client, server, and network components.

My own assessment is that network RUM is particularly good at monitoring HTTP API performance for services rather than the higher level end user experience of an application consumer.
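A rough sketch of the timestamp arithmetic involved, assuming we have captured a TCP handshake and a single HTTP exchange. The field names and the breakdown are simplified for illustration; real appliances handle retransmits, pipelining, TLS and much more.

```python
def decompose_latency(ts: dict) -> dict:
    """Split one request's wall-clock time into network, server and transfer
    components from packet-capture timestamps (seconds since capture start).
    Expected keys: syn, syn_ack, request_sent, first_response_byte,
    last_response_byte."""
    rtt = ts["syn_ack"] - ts["syn"]  # network round trip from the TCP handshake
    # Time the server spent thinking: response wait minus one network round trip.
    server = (ts["first_response_byte"] - ts["request_sent"]) - rtt
    # Time spent streaming the response body across the wire.
    transfer = ts["last_response_byte"] - ts["first_response_byte"]
    return {
        "network_rtt_s": rtt,
        "server_s": max(server, 0.0),
        "transfer_s": transfer,
        "total_s": ts["last_response_byte"] - ts["syn"],
    }
```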

Desktop agent based monitoring
A few tools focused on the enterprise measure end-user performance and usage by installing an agent on the Windows desktop. These agents often use technology similar to network RUM, inspecting client network traffic by IP address and port. This method also provides visibility into usage of enterprise applications as well as general performance and availability.

How many sides of the prism is your organization looking at user experience through?

Hopefully, unless you are already a monitoring guru, you learned a little about the monitoring methods offered by today’s crop of APM tools for understanding end-user experience. What is also interesting to explore is what capabilities users get from the different tools leveraging these methods.

Perhaps a good subject for a future post :)

Unified Monitoring – the new monitoring renaissance has a moniker

#include <std_disclaimer.h>

I’ve been seeing a lot of marketing leveraging the term Unified Monitoring lately. At times it’s made me smirk but mostly smile. Let me explain.

It’s made me smirk because once again what’s old is new.

Many of the infrastructure components of Unified Monitoring have been a part of Enterprise Systems Management tools for more than 20 years. IBM, BMC, and CA products have offered dashboards, event management, correlation, reporting and service level management for as long as I can remember and I have a fair amount of gray hair :)

What’s so compelling is that, to maximize customer experience in the real-time digital enterprise, we are re-imagining traditional management systems. Existing network and systems management capabilities are being enhanced with easy-to-use web-based access, big-data-powered analytics, and more focused APM capabilities, including visitor behavioral data. Put all of that capability in a well-defined, self-service pricing model that brings it to hundreds of thousands of companies, not just blue-chip enterprises, and you can start to see the potential.

This is making me smile a big toothy grin!

I’ve suggested before that we are in the great monitoring renaissance and I think that the term Unified Monitoring is probably the arrowhead that all this is lining up behind.

For years we have heard the term business alignment to help IT do the right thing. In tomorrow’s successful digital enterprise there will be no clear lines between business and IT. There will just be teams of specialists all working on part of the customer experience, the business. And, those teams will include IT people and UX people and marketing people and customer support people.

Do you remember, back in middle school, the way the science book used to have those cellophane overlays of the human body’s systems? The skeletal, muscular, circulatory and organ layers. I’ve always had this vision that we could do the same for our customer experience delivery stack. Business results come from user behaviors that are the result of user experience, delivered by application performance, supported by the technology delivery stack.

Layering the business like this allows the team to focus on business results. And it lets the teams focus on building a user experience first and then the technology required to support it. Combine the above visualization of the layers with powerful anomaly detection and statistical algorithms and you now have a competent and logical artificial intellect helping you deliver, manage and optimize the customer experience. Add marketing analytics, financial and supply chain data and we might be able to imagine closed-loop, machine-learning-powered Business Resource Planning.

I’m excited about the future of Unified Monitoring and you should be too!

Am I being too utopian?

Anomaly Detection – What, why and now!

#include <std_disclaimer.h>

I’ve been doing a little research lately to learn about anomaly detection and wanted to share. I can’t think of a better way to start than with a visceral example.

If you are old enough to be a working professional you already inherently know what anomaly detection is. We all learned it watching Sesame Street as children!

Do you remember the “one of these things is not like the others” song? It brings back warm memories.

What is anomaly detection?

An anomaly is something that doesn’t belong. It’s an exception, an outlier, an aberration. It’s just peculiar. Not that there’s anything wrong with that.

In data mining, anomaly detection (or outlier detection) is “the identification of items, events or observations that do not conform to an expected pattern or other items in a dataset.” That’s the formal definition from Wikipedia. In plain English, anomaly detection solutions use software algorithms to understand the streams of operational metrics and their inter-relationships, automatically identifying events that shouldn’t be happening and their likely causes.

And how can we put this into perspective with things we already know? We can better relate this new machine learning and analytics to what we are familiar with by looking at the business intelligence (data analysis and reporting) maturity cycle:

Data     >>     Information     >>     Knowledge

In the early days of computing, we used computers primarily to collect data: to record each item sold, produced, or inventoried. Computer systems at the time were very transaction oriented. And at the end of each month, management could see a report summarizing the transactions: total sales, total units.

In the 1980s and 1990s, enterprise use of data became much more informational: examining sales by region by month or week to manage productivity. Businesses employed specialized analysts who used analytical reporting tools (OLAP) to perform interactive, multi-dimensional analysis of metrics, pivoting through the data to spot trends important to the business. Now we have those capabilities in Excel, and every accounting department is creating pivot tables. This is traditional, mainstream business intelligence.

Once reserved for giant telcos and financial services firms, knowledge bordering on foresight can now be gleaned by applying powerful data mining algorithms, on today’s ultrafast hardware, to the thousands of metrics and millions of data points most businesses collect about user experience and business results.

Data mining and analytics are augmented cognition.

Why is anomaly detection important to business?

We have entered the Age of the Customer. The iPhone’s consumerization of mobile touch technology brought always-connected computing, cloud services, and social networks to everyone, including our children and our parents. As a result, businesses can no longer survive by earning customers simply by being good closers. Content, relationship, trust: substance is required to succeed with today’s customers, whatever generation we want as customers.

Customers no longer want to buy products; they want experiences. And creating digital experiences that are memorable and worth sharing is what’s driving the way tomorrow’s successful businesses will engage users.

For IT, the world is different too. We are no longer creating systems of record but systems of engagement, which directly impact business results. Execution of the customer experience through application delivery has a bigger impact on business results than ever.

Here’s a quick video about how Snap Interactive uses Anomaly Detection to power the decision making behind real-time campaign management.

Leveraging the data the business generates from business and technical operations can yield insights that improve business operations and results delivered to stakeholders. 

Why is anomaly detection important to IT Operations?

With application delivery having such a big impact on customer engagement, these powerful new analytics are critical to managing the risk of IT operations. There are just too many metrics from too many tiers flowing out of today’s complex application delivery environments powered by cloud, SaaS, virtualization, and third-party content and components.

Businesses are functioning in real time, so the IT that supports the business must be much more real-time. Without our software systems, the business of our businesses often comes to a halt. Add the new Continuous Deployment scenarios to that and things are just happening a lot faster than they used to.

Today’s application and IT operations monitoring is generating terabytes of big data. With more frequent collection intervals, a lot more data is being produced; many sensors today emit data every few seconds. The complex infrastructures we are monitoring have many more data elements and are spread across more servers, app servers, networks, and third-party components. And the importance of business systems to the operation of the business means that all of the data we are collecting needs to be analyzed in real time, across all of its many dimensions.

The old methods are not enough.

Dashboards are insufficient to manage today’s complex, real-time businesses. How many metrics can a person really watch on a dashboard and understand? Maybe a dozen or so – not nearly enough. Threshold setting is also no longer effective. Manually setting static values that represent Red – Yellow – Green doesn’t work in dynamic environments and doesn’t scale. Besides, thresholds assume you already know what you are looking for.
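To make the contrast concrete, here is a minimal sketch of a self-adjusting baseline: instead of a hand-set red/yellow/green value, each point is compared against the rolling mean and standard deviation of its own recent history. The window size and the three-sigma rule here are arbitrary illustrative choices, not how any particular product works.

```python
from collections import deque
from statistics import mean, stdev

def rolling_anomalies(series, window=20, n_sigmas=3.0):
    """Flag points deviating more than n_sigmas from the rolling baseline.
    Unlike a static threshold, the baseline adapts as the metric drifts."""
    recent = deque(maxlen=window)
    flags = []
    for x in series:
        if len(recent) >= 5:                   # need a little history first
            mu, sigma = mean(recent), stdev(recent)
            if sigma == 0:
                flags.append(x != mu)          # any change from a flat line
            else:
                flags.append(abs(x - mu) > n_sigmas * sigma)
        else:
            flags.append(False)
        recent.append(x)
    return flags
```

A flat response-time series followed by a sudden spike gets flagged without anyone ever having chosen a threshold.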

Data mining and machine learning will augment our cognitive abilities to deliver user experiences with greater reliability and consistency.

Anomaly detection in action

The analytics behind anomaly detection use powerful statistical techniques such as k-nearest neighbors, cluster analysis and neural networks to understand the data and train the algorithms on what’s normal.

Here’s a quick example of a multivariate Gaussian outlier analysis.

The accompanying graph showed CPU and memory utilization fitted with an elliptical probability model that identifies an outlying point (the green X) using a formula something like this:

p(x) = exp( -1/2 (x - μ)ᵀ Σ⁻¹ (x - μ) ) / ( (2π)^(k/2) |Σ|^(1/2) )

Here μ is the mean vector, Σ the covariance matrix, and k the number of dimensions; points whose density p(x) falls below a small ε are flagged as anomalies.

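A minimal sketch of that elliptical model using only the Python standard library: fit a 2-D Gaussian to observed (CPU, memory) pairs, then flag any point whose fitted probability density falls below a small epsilon. The sample data and the epsilon value are invented for illustration.

```python
from math import exp, pi, sqrt

def fit_gaussian_2d(points):
    """Fit a 2-D Gaussian: per-dimension means plus the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def density(p, mean, cov):
    """The multivariate Gaussian density from the formula above, for k = 2."""
    (mx, my), ((a, b), (_, d)) = mean, cov
    det = a * d - b * b
    dx, dy = p[0] - mx, p[1] - my
    # (x - mu)^T Sigma^-1 (x - mu), with the 2x2 matrix inverse written out
    m2 = (d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return exp(-0.5 * m2) / (2 * pi * sqrt(det))

def is_outlier(p, mean, cov, eps=1e-4):
    """Points whose fitted probability falls below eps land outside the ellipse."""
    return density(p, mean, cov) < eps

# Hypothetical healthy (cpu%, mem%) samples clustered around (50, 60):
points = [(48, 58), (52, 62), (50, 61), (49, 59), (51, 60), (47, 62), (53, 58)]
```

In practice these models run over many dimensions and use more robust estimates, but the idea is the same: learn the normal ellipse, flag what falls outside it.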
What benefits can I expect from using these new anomaly detection tools?

Anomaly detection systems are automatic, able to process a nearly unlimited amount of data in real time and generate expert analysis. Anomaly detection turns unknown business and technical events into known events, giving you a better understanding of what’s going on with your business. These systems perform complex event correlation and are much better at isolating root cause, as opposed to just symptoms.

Analytics can see past broad averages and analyze trends by their dimensions, quickly spotting issues on a single cluster, or affecting only a particular region, that might normally go unnoticed. The alerts produced are both more proactive and more reliable. These systems can find bugs by quickly identifying troubling behavior during a 10% production deploy, or spotting sudden abandonment points from visitor analytics or RUM.

Finally, this is all about risk mitigation.

Anomaly detection is a safety system for your business just like Traction Control is for your car.

Not that long ago, traction control was only found on luxury cars, but now you wouldn’t consider purchasing an automobile without this important safety capability, which greatly mitigates the risk of an accident in slippery or changing conditions.

Leading vendors and market dynamics

There are several leading vendors in the analytics space, dubbed ITOA (IT Operations Analytics) by Gartner. Prelert, Netuitive and Sumo Logic come to mind, and these successful vendors are independently selling these analytics as add-ons to supplement the results from current tools. In fact, Prelert has a nice partnership with both Splunk and CA APM.

By now you can probably tell that I think anomaly detection and event correlation are capabilities of sensor systems, not something I believe can sustain a company, or many companies, in the long term.

Given the way technology product cycles are compressing, I wonder how long the market leaders will be independent?

The takeaway!

These powerful data mining algorithms, now accessible in commercially available products, are an important safety control system for your business.

Analytics turns unknown business and technical events into known events giving you a better understanding of what’s going on with your business.

You shouldn’t operate without them.