
5 things your business needs to do now to win the game of CX

#include <std_disclaimer.h>

I love the now and where we are going. It’s so exciting for businesses and customers!

Trying to understand how business is changing technology and technology is changing business is so much fun. One of the themes I’ve been thinking a lot about lately is that IT and business have changed each other so much they are now one.

Call digital transformation the driving force, OK, but that transformation is at least in part driven by our own over-indulged interactions with our digital devices and our need for the constant pulse of engagement. I think everyone in my household, including me, could use a little 12-step digital addiction counseling :)

What a wonderful opportunity for brands to take advantage of human nature and pervasive connectedness!

Capitalizing on this opportunity for customer engagement will rely, at least in part, on how well your organization understands your conceptual value chain and can generate the three fuels that feed success.

The conceptual value chain looks something like this:

Business outcomes from
User behaviors because of
UX = Available + Feels fast + Usable + Enjoyable (delivered by)
Applications depending on
Services running on
Servers residing in
Datacenters
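
To make the chain a little more concrete, here is a minimal sketch of how it might be modeled as a simple data structure, with an illustrative metric at each layer. The layer names come from the chain above; the metrics are just examples I picked for illustration, not a prescription.

```typescript
// Hedged sketch: the value chain as data, with example metrics that could tie it together.
// Layer names follow the post; the metrics are illustrative assumptions only.
interface ValueChainLayer {
  name: string;
  exampleMetric: string;
}

const valueChain: ValueChainLayer[] = [
  { name: "Business outcomes", exampleMetric: "conversion rate, revenue per visit" },
  { name: "User behaviors",    exampleMetric: "searches, add-to-cart, checkouts" },
  { name: "User experience",   exampleMetric: "availability, page load time, task completion" },
  { name: "Applications",      exampleMetric: "error rate, response time" },
  { name: "Services",          exampleMetric: "API latency, throughput" },
  { name: "Servers",           exampleMetric: "CPU, memory, I/O" },
  { name: "Datacenters",       exampleMetric: "capacity, network health" },
];

// Print the chain top to bottom, the way the post describes it.
valueChain.forEach((layer, i) =>
  console.log(`${" ".repeat(i * 2)}${layer.name}: ${layer.exampleMetric}`)
);
```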

That value chain runs on these fuels:

Customer experience planning and design is customer experience management, largely as defined by Forrester, and they do a very good job of publishing guidance on the processes and roles required to do this well. The concept of the truly empowered “product owner” from the Agile model is a much leaner interpretation for small teams. It really comes down to who owns the customer experience and the business outcomes it generates.

Customer acquisition is primarily a sales and marketing function, and frankly there should be very little light shining between the cracks. Today, marketing is sales at scale. The brand message and core value proposition should be consistent in an omni-channel world. And it is the larger customer experience design – the what we have and why people care about it – that defines the product and experience. Sales and marketing almost become the way we project the emotional and business impact of the planned customer experience and its delightful fulfillment.

Fulfillment of the user experience has a hard component and a soft component. The hard component is the service delivery of the application: is it available and does it feel fast? It’s the applications-running-on-services-running-on-servers-in-data-centers hierarchy. The soft component is the result of customer experience design and is driven by usability factors like utility, ease and enjoyment.

Here are 5 things your organization can do right now to create the fuel you need to win big:

1 Tie the value chain together so everyone can understand and own business outcomes.
This means the what and the why need to be very clearly stated. What is the desired business result? What user behaviors will get us to that business result? What user experience must be delivered to drive that user behavior? And so on. This means monitoring and metrics, metrics, metrics.

2 Get serious about UX
User experience is the contact patch with the customer. It’s the blender that mixes the “just right experience” by carefully combining innovative CX planning and operational service delivery. The importance of UX needs an executive voice as well as support at every level of the organization. Saying get serious about UX is almost like saying get serious about winning.

3 Be a team!
Stop having the business write requirements and IT produce deliverables. We – the business and IT people – need to use information and technology to collaborate and break through traditional business perimeters. To win big you have to win as a team!

4 Go faster.
Run projects with an Agile format and adopt a DevOps approach to your test and deploy methodology. Use the cloud to dev, build, test and deploy. It takes a lot less time than ordering, racking, and configuring hardware. These are big cultural changes, so pick a project to start with and show the rest of your teams how successful and how much fun the new ways can be.

5 Leverage analytics and big data.
We all talk about data-driven decisions, but a team of analysts spending a week gathering last week’s data in Excel from systems spread all over the company doesn’t cut it anymore. You need to monitor both the service delivery systems and the customer acquisition systems at every step of the value chain with real-time, granular data. There are so many areas where this can impact success. Analytics are the thread of data that ties the value chain together and shows the business the moving parts as well as the whole. Analytics and anomaly detection are an important power tool that helps mere humans know what might be changing and worth paying attention to in the sea of big data. Software analytics are the silent voice of the customer, pointing to product usage and frustration.
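
To illustrate the anomaly detection idea, here is a minimal sketch using a rolling mean and standard deviation. Real analytics products use far more sophisticated models; treat this as the concept only, with made-up example data.

```typescript
// Minimal sketch: flag metric values that drift far from a rolling baseline.
// Window size and threshold are illustrative defaults, not recommendations.
function detectAnomalies(series: number[], window = 20, threshold = 3): number[] {
  const anomalies: number[] = [];
  for (let i = window; i < series.length; i++) {
    const recent = series.slice(i - window, i);
    const mean = recent.reduce((a, b) => a + b, 0) / window;
    const variance = recent.reduce((a, b) => a + (b - mean) ** 2, 0) / window;
    const stddev = Math.sqrt(variance);
    // Flag points more than `threshold` standard deviations from the rolling mean.
    if (stddev > 0 && Math.abs(series[i] - mean) > threshold * stddev) {
      anomalies.push(i);
    }
  }
  return anomalies;
}

// Example: page load times in ms with a sudden spike at the end.
const loadTimes = [...Array(30)].map(() => 800 + Math.random() * 50).concat([2500]);
console.log(detectAnomalies(loadTimes)); // -> likely [30]
```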

Let’s stop building software and start building amazing user experiences that people can’t wait to share!

Exploring the methods of end-user experience monitoring for APM

#include <std_disclaimer.h>

Today’s application performance management (APM) marketplace is maturing, and the best solutions bring a wide-ranging set of capabilities for measuring performance and understanding behavior across many aspects of the application delivery stack. One of the cornerstones of APM is end-user experience monitoring (EUM).

As defined by Gartner for the APM Magic Quadrant, EUM is:
“The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.”

But what does that mean? What are those capabilities?

There are a number of methods for end-user monitoring. Each has advantages, and no single one is enough. It is important to look at end-user experience through several sides of the prism to really understand how the metrics match up against user experience. As I was cataloging them for myself, I thought it would be good food for thought to share my definitions.

Synthetic monitoring
Web performance monitoring started with synthetic monitoring in the 1990s. A synthetic monitor is not a real user of your application but an artificial robot user, thus synthetic. The robot periodically executes an interaction with your website, API or web application to verify availability and measure performance. It is one of the easiest forms of monitoring to set up and provides almost immediate value by delivering visibility and hard data without having to install or configure anything within the application. An example of a synthetic monitor would be a web transaction monitor that ensures an online store is working by visiting the home page, searching for a product, viewing the product detail, adding it to the cart, and checking out. This is very similar to the pile of functional tests that should run every time you build and deploy.
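
For illustration, here is a minimal sketch of the simplest possible synthetic check, written in TypeScript for Node. The URL, interval and report() collector are hypothetical placeholders; a real transaction monitor would drive the full search/cart/checkout flow in a scripted browser.

```typescript
// Minimal sketch of a synthetic availability/performance check (illustrative only).
const TARGET_URL = "https://shop.example.com/"; // hypothetical store URL
const INTERVAL_MS = 5 * 60 * 1000;              // run the robot every 5 minutes

async function runCheck(): Promise<void> {
  const start = Date.now();
  try {
    const response = await fetch(TARGET_URL);
    // Report availability (status code) and response time to your monitoring backend.
    report({ url: TARGET_URL, ok: response.ok, status: response.status, elapsedMs: Date.now() - start });
  } catch {
    // A network failure counts as an availability miss.
    report({ url: TARGET_URL, ok: false, status: 0, elapsedMs: Date.now() - start });
  }
}

// Placeholder for whatever collector you use (log file, time-series DB, APM API, ...).
function report(sample: { url: string; ok: boolean; status: number; elapsedMs: number }): void {
  console.log(JSON.stringify(sample));
}

setInterval(runCheck, INTERVAL_MS);
runCheck();
```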

Although Gartner has relegated synthetic monitoring to an availability role, it still offers performance-monitoring value that passive methods do not. No other method can help you measure service delivery when real users are not on the system, which makes it ideal for measuring SLAs. And it is the only way to see individual page resources (a la the waterfall report), as this is still not quite yet a real user monitoring (RUM) capability. Synthetics eliminate a lot of the independent variables that can make it difficult to compare real user monitoring data. Finally, the synthetic connection to the DevOps tool chain of tests run at build or in QA provides a continuous reference point from development environments, through test, and into production.

Web real-user monitoring (RUM)
When I first saw real-user monitoring back in 2008, I knew it was going to change the way we measure web performance. RUM works by extracting performance values using JavaScript. As actual users visit web pages, performance metrics are beaconed back to the great reporting mothership. Originally, the only metric that could be captured by RUM was a basic page load number, but modern browsers now collect a slew of detailed performance metrics thanks to the W3C timing standards, and they will soon even provide access to page-resource-level detail.
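
As a rough illustration of how a RUM beacon works, here is a minimal sketch using the W3C Navigation Timing API. The /rum collector endpoint is a hypothetical placeholder, not any vendor's API.

```typescript
// Minimal RUM sketch: read Navigation Timing values after page load and beacon them home.
window.addEventListener("load", () => {
  // Wait a tick so loadEventEnd is populated.
  setTimeout(() => {
    const t = performance.timing;
    const metrics = {
      page: location.pathname,
      dns: t.domainLookupEnd - t.domainLookupStart,
      connect: t.connectEnd - t.connectStart,
      ttfb: t.responseStart - t.navigationStart,               // time to first byte
      domReady: t.domContentLoadedEventEnd - t.navigationStart,
      pageLoad: t.loadEventEnd - t.navigationStart,
    };
    // Beacon the numbers back to the reporting mothership.
    // (navigator.sendBeacon is the modern way; a GET via an Image object was the classic one.)
    navigator.sendBeacon("/rum", JSON.stringify(metrics));
  }, 0);
});
```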

RUM’s great advantage vs. synthetic is that it can see what’s happening for all of your actual users on all of the web pages they visit. This means you can understand web performance metrics by page, geography, browser or mobile device type. While this provides a broader understanding of general performance, it also has many, many more independent variables, making specific trending and comparison more challenging. RUM is also the front-end method by which transactions are “tagged” so they can be traced and correlated through the back-end for a greater understanding of how software and infrastructure work together to deliver end-user experience, and for root-cause analysis.
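
To illustrate the tagging idea (this is a generic sketch, not any particular APM vendor's mechanism), a front-end wrapper might stamp each request with a hypothetical correlation header that back-end tracing can pick up.

```typescript
// Illustrative sketch of front-end transaction "tagging".
// The X-Transaction-Id header name is a hypothetical example.
function newTransactionId(): string {
  return Date.now().toString(36) + "-" + Math.random().toString(36).slice(2, 10);
}

async function taggedFetch(input: string, init: RequestInit = {}): Promise<Response> {
  const txnId = newTransactionId();
  const headers = new Headers(init.headers);
  headers.set("X-Transaction-Id", txnId); // back-end agents can log or propagate this id
  const start = performance.now();
  const response = await fetch(input, { ...init, headers });
  const elapsedMs = performance.now() - start;
  console.log(`txn ${txnId} -> ${input} took ${elapsedMs.toFixed(1)} ms`);
  return response;
}
```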

RUM’s greatest and perhaps least exploited value to business today is that it captures business activity information that represents WHY we have a website to begin with. It is this business outcome data that should be our first canary in the coal mine for determining if something needs attention.

Mobile real-user monitoring
Mobile web applications can be monitored with traditional RUM; however, today’s native mobile apps require a different mechanism to measure the application experience. That is typically accomplished by adding an extra library into your mobile application that beacons mobile application performance data back for reporting. Like traditional RUM, this is also how transactions are “tagged” for mapping through delivery software and infrastructure.

With mobile web traffic now reaching 25% of total traffic and mobile being the #1 method for brands to engage consumers, mobile RUM will be of increasing importance to most organizations.

Network real-user monitoring
Hardware appliances that plug into a network switch’s span port to passively listen to network traffic provide network-based RUM that very accurately represents the end-to-end network performance of the application. This type of packet smashing leverages timestamps in the network packet headers to break performance down into client, server, and network components.
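
As a rough illustration of that breakdown, here is a hedged sketch of how wire timestamps might be decomposed. The field names are hypothetical and the arithmetic is only an approximation of what appliances actually do.

```typescript
// Illustrative sketch: decompose captured wire timestamps into network, server,
// and transfer components. Field names are hypothetical, not any product's schema.
interface CapturedTransaction {
  synSent: number;           // client SYN seen on the wire (ms)
  synAckSeen: number;        // server SYN-ACK seen (ms) -> round-trip estimate
  requestLastByte: number;   // last byte of the HTTP request
  responseFirstByte: number; // first byte of the HTTP response
  responseLastByte: number;  // last byte of the HTTP response
}

function breakdown(t: CapturedTransaction) {
  const networkRtt = t.synAckSeen - t.synSent;                    // handshake round trip
  const serverTime = t.responseFirstByte - t.requestLastByte;     // approx. back-end "think" time
  const transferTime = t.responseLastByte - t.responseFirstByte;  // payload delivery
  return { networkRtt, serverTime, transferTime };
}

console.log(breakdown({
  synSent: 0, synAckSeen: 42, requestLastByte: 50,
  responseFirstByte: 230, responseLastByte: 310,
}));
```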

My own assessment is that network RUM is particularly good at monitoring HTTP API performance for services rather than the higher level end user experience of an application consumer.

Desktop agent based monitoring
A few tools focused on the enterprise measure end-user performance and usage by installing an agent on the Windows desktop. These agents often use technology similar to network RUM to inspect client network traffic by IP address and port. This method also provides visibility into usage of enterprise applications as well as general performance and availability.

How many sides of the prism is your organization looking at user experience through?

Hopefully, unless you are already a monitoring guru, you learned a little about the monitoring methods offered by today’s crop of APM tools for understanding end-user experience. What is also interesting to explore is what capabilities users get from the different tools that leverage these methods.

Perhaps good subject for a future post :)