
Why you should tear up your support SLAs


Guest blogger Dave O’Reardon returns today to explain ‘why you should tear up your support SLAs’. You can also check out Dave’s tips for the 2016 itSMF Industry Awards for Excellence in IT Service Management in last week’s blog post!

Have you heard of the Watermelon Effect? It’s a common problem where Service Level Agreement reports for IT support show everything as green, yet the customer is still unhappy: green statuses on the outside, a red (angry) customer on the inside.

Research from Forrester shows how prevalent this mismatch of perceptions is: roughly twice as many IT teams believe they provide great IT support as there are business customers who feel they actually receive it.

One of the causes of this problem is that the metrics used in Service Level Agreements are a deeply flawed way of measuring service quality. They mislead IT support teams into thinking they understand how the customer feels about the service they provide.

Typically, support service levels are measured on the basis of time – actual vs target time to respond, actual vs target time to resolve. But purely time-based measures are an ineffective indicator of the quality of IT support.
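To see why, it helps to look at what such a measure actually computes. Here is a minimal sketch, with made-up tickets and targets, of how a typical time-based SLA compliance figure is produced:

```python
from datetime import timedelta

# Hypothetical tickets: (time to respond, time to resolve).
tickets = [
    (timedelta(minutes=12), timedelta(hours=3)),
    (timedelta(minutes=45), timedelta(hours=30)),
    (timedelta(minutes=5), timedelta(hours=1)),
]

RESPONSE_TARGET = timedelta(minutes=30)
RESOLUTION_TARGET = timedelta(hours=8)

def sla_compliance(tickets, response_target, resolution_target):
    """Percentage of tickets that met both time-based targets."""
    met = sum(
        1 for respond, resolve in tickets
        if respond <= response_target and resolve <= resolution_target
    )
    return 100.0 * met / len(tickets)

print(f"{sla_compliance(tickets, RESPONSE_TARGET, RESOLUTION_TARGET):.0f}% green")
```

Notice what the calculation never sees: how the customer was treated, whether they understood what was happening, or whether they agreed the issue was actually fixed.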

Our customers’ experience of IT support is shaped by many things, not just how quickly we responded to or resolved their issue: how they were treated, whether they could understand what they were being told or asked to do, whether they felt well informed about what was going on and what would happen next (and when), and whether they were asked to confirm their issue was solved before the ticket was closed.

Even something like time is not absolute. From personal experience, we all know there are many factors that can make the same absolute wait time feel longer or shorter.

Ultimately, these experience factors are all about expectations and perceptions, not absolutes. The perceptions of those at the receiving end of the service – our customers. And the outcome of their judgement is their level of satisfaction.

David Maister, a researcher on the psychology of waiting times, described this rather succinctly with the formula S = P - E, where S stands for satisfaction, P for perception and E for expectation. Because P and E are both psychological in nature, a customer is satisfied only when their perceived experience of a service, P, meets or exceeds their expectations, E.
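In code, Maister’s formula is trivial; the insight is in what it implies. The scores below are illustrative survey-style values, not real data:

```python
def satisfaction(perception: float, expectation: float) -> float:
    """Maister's S = P - E. Positive means the service beat expectations."""
    return perception - expectation

# The same perceived service quality (7 out of 10), different expectations:
print(satisfaction(7, 5))  # beat a modest expectation: happy customer
print(satisfaction(7, 9))  # fell short of a high expectation: unhappy customer
```

Identical service, opposite outcomes: satisfaction is set by the gap, not by the absolute level of service delivered.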

If you want to measure service quality (and you work in Service Management, so you should, right?), the best way to do that is to ask your customers. Valarie Zeithaml put this rather nicely in her book, Delivering Quality Service: “Only customers judge quality. All other judgments are essentially irrelevant”.

We need to stop putting so much focus on traditional SLA metrics and start focusing on customer satisfaction. The extent to which you can keep your customers happy determines whether your customer trusts you or bypasses you, forgives your mistakes or hauls you over the coals, increases your budgets or squeezes them, keeps you as their service provider or outsources you.

And if you’re always asking your customers to not just rate your service, but to tell you what you need to do to improve (one of the principles behind the Net Promoter System), you’ll find this feedback to be a very powerful way to drive continual service improvement.
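If you go down the Net Promoter route, the headline score itself is simple arithmetic: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A quick sketch with made-up survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical 0-10 responses to "How likely are you to recommend us?"
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(f"NPS: {nps(responses):+.0f}")
```

The score is the easy part; the free-text “what should we do better?” answers that accompany it are where the improvement ideas come from.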

By all means measure response and resolution times for your own purposes, but never wave a green service level performance report in front of a customer and tell them they should be happy.

This post was based on an e-book, “Measuring the Quality of IT Support”, which can be downloaded here.

Dave O’Reardon helps IT support teams adopt Net Promoter practices and use customer feedback to drive continual service improvement. He’s the founder and CEO of Silversix, the company behind www.cio-pulse.com, and winner of the Service Management ‘Innovation of the Year Award’ in 2015. Dave can be reached on Twitter via @silversix_dave or LinkedIn.

Published July 14th, 2016 | Categories: guest blogger, ITSM, metrics, Net Promoter®, Netpromoter, Service Management 2016

4 critical components of successful IT metrics and reporting with Nikki Nguyen


Let’s do the numbers

In IT, we love to measure and report. We just can’t help ourselves. But in our efforts to track every statistic possible, we often lose focus. So let’s change that. Let’s start asking questions like: Who will use the metrics? Why do we need them? Are we setting the right performance goals to reinforce the goals of our business, or could we even be working against them? Today, we’ll look at four very practical guidelines for measuring and reporting on IT performance, and for setting the right goals from the start.

1: Make sure IT performance goals jibe with your business goals

I recently opened a ticket online with a hardware vendor to ask about repair service. They responded quickly, and answered many (but not all) of my questions. Most concerning, though, was the email that I received a few minutes later: “Your ticket has been successfully resolved.”

Had it? Says who? While I appreciated the fast response, my issue had not, in fact, been resolved. Did someone close a ticket just so they could say it had been closed? The front line support team was clearly being evaluated on time-per-ticket, or percentage of tickets successfully resolved, or both.

Certainly, time-per-ticket and percentage of tickets resolved are legitimate measurements for IT operations. But what about the underlying problem I reported? If you’re not tracking at the incident and problem level (to look for common, overarching problems and a high volume of incidents associated with them), you’re missing an opportunity to help your business solve problems proactively instead of just reacting to them. More importantly, what about customer satisfaction? I didn’t feel my issue had been resolved. Now, I had to open another ticket and waste more of my own time. I grew frustrated. I gave up on the product.
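Tracking at the incident and problem level can be as simple as counting incidents per underlying problem record. A minimal sketch, using hypothetical ticket data:

```python
from collections import Counter

# Hypothetical incidents, each tagged with its underlying problem record.
incidents = [
    {"id": "INC-1", "problem": "PRB-101"},  # e.g. VPN client crashes
    {"id": "INC-2", "problem": "PRB-101"},
    {"id": "INC-3", "problem": "PRB-202"},  # e.g. printer driver fault
    {"id": "INC-4", "problem": "PRB-101"},
]

# Problems driving the most incidents are candidates for a permanent fix,
# rather than resolving the same symptom ticket by ticket.
by_problem = Counter(i["problem"] for i in incidents)
for problem, count in by_problem.most_common():
    print(problem, count)
```

A problem generating three incidents a week is a proactive-fix opportunity that a per-ticket resolution metric will never surface.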

In their haste to meet their operational performance metrics, they lost sight of much more important business goals: make customers happy and encourage referrals and repeat business.

To avoid this trap in your own organization, look for ways to set meaningful goals and measurements that encourage behavior in line with company and organization-wide goals. Incentivizing a low-level support team to close or escalate tickets quickly can actually cost the company more, and HDI even has the math to prove it:

Source: HDI

So encourage your Level 1 support team to spend a bit longer collecting more information before escalating, and give them the training and resources they need to be more effective at resolving tickets, not just triaging them. The savings add up quickly.
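The arithmetic behind that claim is easy to sketch. The per-ticket costs and escalation rates below are illustrative placeholders (not HDI’s published figures), but they show how quickly escalation costs compound:

```python
# Illustrative per-ticket costs by support tier (placeholders, not HDI's figures).
COST = {"L1": 22.0, "L2": 91.0, "L3": 195.0}

def monthly_cost(tickets, l2_rate, l3_rate):
    """Total support cost when a share of tickets escalates past Level 1."""
    l2 = tickets * l2_rate
    l3 = tickets * l3_rate
    l1 = tickets - l2 - l3
    return l1 * COST["L1"] + l2 * COST["L2"] + l3 * COST["L3"]

# 1,000 tickets/month: a better-equipped Level 1 team cuts
# escalations from 30% (L2) / 10% (L3) down to 20% / 5%.
before = monthly_cost(1000, 0.30, 0.10)
after = monthly_cost(1000, 0.20, 0.05)
print(before - after)  # monthly saving
```

Because each escalated ticket costs several times a Level 1 resolution, even a modest drop in escalation rate outweighs the extra minutes Level 1 spends per ticket.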

2: Share different metrics with different stakeholders

Have you ever sat through one of those torturous meetings where one or more managers each deliver ten slides to share their key accomplishments and metrics for the quarter? The reason they are so torturous is simple: the reports lack context, and they aren’t relevant to you. There are two primary reasons you should tailor your reports to the individual stakeholder you are sharing them with:

  • To give stakeholders the information they need to do their own jobs better.
  • To keep them from meddling.

The first is pretty obvious. Different stakeholders care about different things: a front-line IT manager cares deeply about technical performance data, while a CTO cares much more about the bigger picture. Avoid distributing generic, tell-all reports to large audiences altogether, and instead, meet with your key stakeholders and agree on the right measurements to help them achieve their goals.

The second is less obvious, but equally important. People love to meddle. We all do. I’ve watched a very senior IT executive review a very low-level list of unresolved IT incidents. He didn’t need that data. In fact, he had directors and managers he completely trusted to achieve the goals he had put in place. Once he had the data in front of him, he couldn’t help but ask questions and get involved. Distraction ensued.

The moral? Don’t include data for data’s sake. Yes, you need to be completely transparent about your performance, what you’re doing well, and how you can improve. But you don’t want to give the entire sink to every person who asks for a drink of water.

3: Use visuals to make reports easier to understand

Excel spreadsheets full of raw data aren’t very effective as report-outs to your team members, peers, and leadership, because they require the viewer to interpret the data.

Fortunately, adding context to the data isn’t always so hard if you are already using a strong reporting dashboard. You want to provide clean, crisp, and easily understood reports that provide enough context to quickly communicate how you are doing against your goals, your benchmarks, and your history.

For practitioners and front-line managers, consider using daily reports to show the top 10 issue types over the last 24 hours. They’re easy to read and understand, and can help your staff quickly home in on any emerging categories that may be growing in volume.
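The logic behind such a report is a simple rolling count. A sketch with hypothetical tickets and issue types:

```python
from collections import Counter
from datetime import datetime, timedelta

now = datetime(2015, 7, 22, 9, 0)

# Hypothetical tickets: (opened_at, issue type).
tickets = [
    (now - timedelta(hours=2), "password reset"),
    (now - timedelta(hours=5), "vpn connectivity"),
    (now - timedelta(hours=6), "password reset"),
    (now - timedelta(hours=30), "email sync"),  # outside the 24-hour window
]

# Count issue types opened within the last 24 hours, most common first.
cutoff = now - timedelta(hours=24)
top_issues = Counter(kind for opened, kind in tickets if opened >= cutoff)
for kind, count in top_issues.most_common(10):
    print(kind, count)
```

Most service desk tools produce this chart for you; the point is that a short, ranked window like this is far easier to act on than a raw ticket export.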

Trending reports can be even more helpful, because you can compare your performance over a period of time and look for any anomalies that might be worth exploring further. If you looked at each month’s data in a vacuum, you would never notice that July and August showed a strong upward climb in the number of issues opened.

What caused that influx of new issues? Was a new software revision released? Did you ship a new product? Why were nearly a third of July’s issues unresolved, when most months the percentage was much lower? It’s important to look at the entire picture, and to understand the data you are looking at (and if possible, what caused it) before you share reports and discuss results.

4: Keep a scorecard

When a store clerk or passerby asks you how you are feeling, it’s customary to respond briefly with “I’m fine” or “A bit tired today.” It’s a quick way to summarize how you are feeling, without giving them the blow-by-blow account of every event over the last month or more that has led up to how you are feeling today.

The same principle should apply in IT metrics and reporting. If you’re not using a scorecard as a simple, high-level way to both evaluate and communicate your team’s performance, it’s time to start now. An effective scorecard will include the objective or measurement you are scoring yourself against, and an easy “traffic light” system to indicate your current progress: red (at risk), yellow (caution), or green (good).
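The traffic-light mapping itself can be a one-liner per metric. A minimal sketch, with illustrative objectives and thresholds (tune both to your own targets):

```python
def traffic_light(actual, target, caution_margin=0.10):
    """Green if the target is met, yellow if within 10% of it, red otherwise.
    Assumes higher is better (e.g. % of tickets resolved on time)."""
    if actual >= target:
        return "green"
    if actual >= target * (1 - caution_margin):
        return "yellow"
    return "red"

# Hypothetical scorecard rows: objective -> (actual, target).
scorecard = {
    "First-contact resolution %": (68, 75),
    "Customer satisfaction (1-5)": (4.4, 4.2),
    "Tickets resolved within SLA %": (80, 95),
}
for objective, (actual, target) in scorecard.items():
    print(f"{objective}: {traffic_light(actual, target)}")
```

The fixed 10% caution band is an assumption for illustration; in practice you would agree red/yellow thresholds with each stakeholder per objective.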

The most important thing about a scorecard is to be honest. Nobody performs perfectly at all times, so giving yourself a green smiley across every category at every reporting interval will likely cause more alarm and disbelief than praise. Plus, when something truly does go wrong, you are more likely to get support and understanding if you have been candidly assessing your performance and flagging the areas that are putting you at risk.

A basic scorecard for operational performance might look something like this, and is a great way to quickly update stakeholders without burying them in unnecessary technical data.

More advanced scorecards, like balanced scorecards, can measure IT’s contribution to larger business goals, and are effective at tracking the performance across entire organizations and companies.

Putting it all to use

The above are just guiding principles to help you narrow in on what you want to report, and how. To learn more about implementing SLAs and metrics in JIRA Service Desk, watch Lucas Dussurget’s killer presentation at Atlassian Summit 2014. It’s full of our own top tricks, examples, and best practices based on tons of customer implementations. And for a deep dive on figuring out what you should be measuring, be sure to check out another excellent presentation from Summit 2014, this one by John Custy.

This article was originally published on the Atlassian website.

ABOUT THE AUTHOR

Nikki Nguyen

Associate Product Marketing Manager, JIRA Service Desk

Although my life in IT is behind me, it’s not too far away. I’m now a recovering systems administrator evangelizing a better way for teams to work using JIRA Service Desk. I’ve found a love of combining customer service with technology.

Nikki is presenting at Service Management 2015.

Published July 22nd, 2015 | Categories: Atlassian, blog, guest blogger, metrics, reporting, Service Management 2015