Hello lovely people,

To those of you in the UK, I hope you had a lovely bank holiday and didn't go back to an inbox full of cc'ed emails, which are never that valuable. However, it's now 5:30pm of this short working week, and hopefully this email is full of goodness for you.

Today’s Focus:

  • Employee Effort Score (EES)

  • What it is

  • How to use it

TL;DR: HR has sophisticated measurement for almost everything except the friction cost of its own services. Employee Effort Score fixes that. It gives you a number that maps directly to where people are working harder than they should to get what they need from HR, with a clear starting point for designing that out.

Four days to replace a laptop…

There's a metric that CX teams have been running since 2010, one that predicts loyalty better than satisfaction scores and points directly at where a service is costing people more effort than it should, and yet HR has never applied it.

I find that gap interesting.

Not because HR needs to copy CX, but because the underlying logic is exactly right, the evidence behind it is fifteen years deep, and the version of this metric that applies to HR services doesn't yet exist in any meaningful way inside the profession.

I'm developing a broader approach to measuring HR (one linked to the wider SPIES approach) and will share it in full over time. Today I want to introduce one piece of it, the piece that matters most for how HR designs and evaluates its own services. It's called Employee Effort Score, and if you work in HR, there's a good chance you've never come across it.

A borrowed logic

In 2010, CEB published a piece in the Harvard Business Review that should have changed how every service function thought about measurement. The finding was counterintuitive: reducing customer effort predicts loyalty more reliably than delight does.

They called the metric Customer Effort Score.

It's one question, asked after a service or interaction, that simply asks to what extent you agree with the following statement:

[Company/Function/Person] made it easy for me to handle my issue

or sometimes it's asked on a 1-7 scale of how much effort the interaction felt like, where 7 is maximum effort and 1 is minimal or no effort.

Scored on a scale where the lower the number, the easier the interaction. The companies that built around cutting friction saw retention move. The ones still chasing satisfaction scores kept wondering why nothing was translating.

What EES actually measures

The question is the same logic, pointed inward. After any HR service interaction, whether a pay query, an onboarding step, a leave request, or an equipment replacement, one question: "How easy or difficult was it to get what you needed today?" Seven-point scale, where one is extremely easy and seven is extremely difficult.

As many of you lovely readers know, I believe EX and CX run on identical paths. Effort costs the same whether the person on the other side of the transaction is a customer or an employee, and friction compounds the same way regardless of which direction the interaction runs.

When a customer can't get what they need without working for it, they leave. When an employee can't get what they need without working for it, the same thing happens, just more slowly and with more compounding damage on the way out.

And here's what makes that worse. The person who spent four days chasing a laptop replacement doesn't come to their next HR interaction with a clean slate. They arrive already expecting friction. One bad interaction or service sets the temperature for everything that follows it, and that expectation doesn't show up anywhere on your current dashboard. It's doing damage before the next interaction has even started.

The broader measurement approach I'm developing looks at employee experience across a portfolio of SPIES. Services and Interactions sit at the centre of that portfolio, because they're where the gap between what HR believes it's delivering and what people actually experience tends to be largest.

EES is the metric that makes that gap visible and gives you a number to design, build and measure against.

When service friction compounds across an organisation, you're not measuring an inconvenience. You're measuring the early signal of Experience Leak.

Experience Leak = revenue leak

  • The talent that exits earlier than your retention data predicted

  • The leadership programme people are forced onto, even though your people keep telling you leadership is shocking and not human

High effort scores at the service layer are where that leak starts, and they're measurable before the business consequences show up in the numbers that actually get the board's attention.

The damage also isn't proportional. One high-effort interaction raises the perceived cost of every interaction that follows it. People don't experience HR services as isolated events. They experience them as a pattern, and one bad pattern sets the prior for everything that comes next.

It's the reason that, if you sit down and run a perception test, people will often tell you HR is terrible, and when you ask why, they base it all on one interaction. After all, perception is reality.

Which means your onboarding effort score isn't just an onboarding problem. It's also a pay query problem, a leave request problem, and a manager support problem that haven't happened yet.

That gap is where HR loses people, usually not in one big dramatic moment but across a hundred small interactions that each ask more than they should.

Here's what that looks like in reality.

Someone needs to replace a broken laptop. There's a "policy for it", so HR has technically solved the problem, but the actual journey looks like this:

  1. They search the intranet and find two documents written at different times.

  2. One of them links to a portal that no longer exists.

  3. It turns out a newer policy is stored on someone's hard drive and was never shared.

  4. A colleague points them to IT.

  5. IT tells them they need manager approval.

  6. The manager takes a day to respond.

  7. The approval form asks for an asset number they don't have.

  8. They call the helpdesk and get sent back to the portal.

  9. Four days later, they have a replacement laptop.

HR's dashboard: policy in place, request resolved, SLA met.

The employee's effort score for that interaction: probably a six or seven (high effort).

How EES is calculated

The calculation is straightforward.

  • EES = the sum of every score divided by the number of responses. The result lands somewhere between one and seven.

On a seven-point scale (1 being low effort and 7 being high effort):

  • A score below three means the service is genuinely easy to navigate.

  • Three to five signals friction worth looking at.

  • Above five, something is costing people real cognitive load.

  • Above six, that friction is systemic, baked into the process design itself rather than an occasional failure.

You can run EES at the level of a specific service, an interaction type, a team, or the function overall. The more specific you get, the more actionable the number becomes. A function-level score of 4.8 tells you something needs attention. An onboarding-specific score of 6.1 tells you exactly where to start.
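As a minimal sketch of the arithmetic above (the function names and sample responses are illustrative, not from any HR system), the straight average and the bands could look like this:

```python
def ees(scores):
    """Average of 1-7 effort responses for one service; lower means easier."""
    if not scores:
        raise ValueError("no responses collected yet")
    return sum(scores) / len(scores)

def band(score):
    """Map an EES value to the rough bands described above."""
    if score < 3:
        return "genuinely easy"
    if score <= 5:
        return "friction worth looking at"
    if score <= 6:
        return "real cognitive load"
    return "systemic friction"

# Hypothetical onboarding responses from an effort survey
onboarding = [6, 7, 5, 6, 7, 6]
print(round(ees(onboarding), 1), band(ees(onboarding)))  # → 6.2 systemic friction
```

The same two functions work at any level of specificity: feed them the responses for one service, one interaction type, or the whole function.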

Below is an example of what some of these questions could look like. Obviously they'll need tweaking depending on the format you choose, i.e. Likert, numeric, survey emojis, etc.

Using EES as a diagnostic

The first thing I'd tell any HR leader running EES for the first time: don't make it a permanent metric yet. Use it to see what you can't currently see.

Pick the three services or interactions with the highest employee volume. After the next interaction closes on each, send the one question and capture the score. Run it for thirty days, then look at the distribution.
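As an illustrative sketch of that thirty-day pass (the service names and response data below are entirely made up), aggregating per-service scores and surfacing the distribution might look like:

```python
from collections import Counter
from statistics import mean

# Hypothetical 1-7 effort responses collected over thirty days,
# one score per closed interaction, grouped by service
responses = {
    "onboarding": [6, 7, 5, 6, 7, 4, 6],
    "pay query": [2, 3, 2, 4, 2, 1],
    "leave request": [3, 4, 5, 3, 4],
}

for service, scores in responses.items():
    avg = mean(scores)
    dist = dict(sorted(Counter(scores).items()))  # responses at each effort level
    flag = "  <- walk this process yourself" if avg > 5 else ""
    print(f"{service}: EES {avg:.1f}, distribution {dist}{flag}")
```

The distribution matters as much as the average: a 4.0 made of fours is a different problem from a 4.0 made of ones and sevens.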

If a service is scoring above five, walk the process yourself, as if you were an employee going through it for the first time with no knowledge of where anything lives or who to call. Count every touch point, every redirect, and every place where the trail goes cold. And yes, the person who designed the process shouldn't be the one walking it, due to bias and internal knowledge.

What you find will almost always break down one of three ways.

The process has too many steps and unclear ownership. People can't find what they need because the information architecture is a mess. Or approvals and handoffs are adding delay without adding anything of value. Because EES is tied to a specific interaction, it doesn't just confirm that something is costing people effort. It shows you exactly where the effort accumulates, which is the thing twelve months of satisfaction data has never been able to do. Then you can link it to money metrics, which every CEO loves.

Here's the thing: that diagnostic pass tends to be uncomfortable. Not because the friction is new, since your people already know it's there, but because measuring it makes it yours to fix, and owning it brings its flaws into the light.

Using EES as a real-time metric

Once you've run the diagnostic and started designing against what you found, EES earns its place as a tracking metric.

Re-measure the same services after changes and the score should have moved. If it hasn’t then the redesign addressed the friction HR could see from its side of the process, not the friction the employee actually experiences going through it. That distinction matters more than most redesign efforts acknowledge.

If you turned to your CEO and said "we've reduced the effort cost of our onboarding from a six to a two," that maps directly to productivity, cognitive load, and time recovered across every person who goes through it. Compare that to telling the same CEO your engagement scores went up three points.

One of those numbers creates a real conversation that matters; the other just gets a polite nod.

Wrap Up

Why hasn’t HR asked this question before?

Well, it's not a data problem so much as a psychological one.

Satisfaction scores are safe to track because they measure how people feel about a programme after the fact, and there's always somewhere to redirect the accountability when the number is low.

  • The design wasn't right.

  • The timing was off.

  • People weren't ready.

Effort scores don't offer that easy get-out. They point directly at an interaction or service and say: this cost people this much in effort and frustration.

The reality is that satisfaction scores let you defend what you built, whereas effort scores hold you accountable for what it costs people to use it. Most functions, when faced with that choice, pick the metric that defends them. Which is exactly why effort scores barely exist inside HR yet, and exactly why they're worth introducing.

A high effort score on your onboarding process doesn't mean onboarding is bad. It means it's hard to navigate, and hard to navigate is a design problem with a design solution.

That reframe, from "our programme is working" to "our service costs too much for people to use," is the one that finally connects HR measurement to what the business actually cares about.

There's more to say on measurement, and the fuller measurement map framework is coming. But when it comes to measuring a service or interaction, this is the right place to start, and a simple one.

Thanks for reading. If you've got thoughts to share, just hit reply; I always enjoy hearing from you.

Speak soon,

Danny

P.S. Today's article took me around 5 hours to write. It means a lot if you could let me know how it was, and if you extra loved it, a share goes a long, long way 👇

How did you like today's newsletter?


The vault is open: the same tools I used to drive transformation at Dyson and GSK, now yours, free. These tactics powered millions in innovation and CX wins.

New deck landing soon

Want to know more about what we do? Click here.

Keep Reading