NN/g UX Podcast

The Nielsen Norman Group (NN/g) UX Podcast is a podcast on user experience research, design, strategy, and professions, hosted by Senior User Experience Specialist Therese Fessenden. Join us every month as she interviews industry experts, covering common questions, hot takes on pressing UX topics, and tips for building truly great user experiences. For free UX resources, references, and information on UX Certification opportunities, go to: www.nngroup.com

This is the Nielsen Norman Group UX Podcast.
I'm Therese Fessenden. Over the past few episodes, we've been interviewing members of our UX Master Certified community to learn how they're applying key UX principles in their work.
Today, we're featuring an interview we had with Dr. John Pagonis.
John is a Principal Qualitative and Quantitative Researcher at Zantian Labs in London.
He was one of the first to achieve a UX Master Certification with us, but has also given a number of talks on the importance of measuring and benchmarking UX work.
In this episode, we talk about the role of quantitative data in making wise design decisions,
the qual versus quant debate and the impact that's having on UX work, both good and bad,
and finally, how you can think more holistically about metrics to ensure you're actually moving in the right direction with design improvements.
In this episode, there are a few UX metrics mentioned, but not fully explained in the interest of brevity.
But if you want to learn more about metrics that John mentions, you can find links to articles that fully explain all of these in the show notes.
With that, here's Dr. John Pagonis.
So, John, welcome to our podcast.
Excited to have you here.
Excited to get to know you a little bit more.
Learn about how you got here and learn a bit more about what you've been doing lately.
How are you doing?
I'm happy to be here.
It's exciting to talk to you because you are amongst the earliest folks who have gotten our UX Master Certification.
And it's fun to kind of check in with the folks who have been part of this program and have really taken it and run with it and owned it.
So, you got your UX Master Certification, is it 2018?
Is that right?
Yes, actually, I was, I think, number 198 of the people that got the Master Certification.
And since then, I've been working, as I used to do, actually, before that, in mostly enterprise environments.
Some startups as well, but mostly enterprise environments.
Big companies with thousands of people that have a lot of big systems that don't talk to each other.
And serving a lot of people internally and serving a lot of people externally.
So, I've been working with them mostly as well as some startups.
The whole point of integrating UX Research is that of introducing and transforming the organization so that they can actually make use of the evidence that we provide in UX Research.
So that we can improve the product and people's lives.
I've been doing that, and I've also been measuring or helping people or organizations measuring the user experience as well.
And UX Metrics or UX Measurement, if you like, helps organizations make decisions and improve.
Yeah, and that's important for us practitioners and for the organization itself, yeah?
It helps direct the product, prove the ROI of UX, and a lot of practitioners in that organization want that.
And also helps improve the team and the product, yeah?
And, of course, the life of users.
So, a lot of the organizations and teams I've worked for wanted to figure out if they're doing a good job and how they can improve, yeah?
So, for example, when introducing UX Measurement, I ask a few questions, yeah?
Like, it's a transformation project always.
So, I start with questions.
And one of the questions I ask is, how good or bad is our work?
Think about sports.
I mean, you have to measure how fast you run, how much you lift, so you can improve it, yeah?
Have we done our job well enough?
That's another important question to ask in a bigger organization because there are a lot of teams that seek budget and allocation of budget.
So, you have to prove how good of a job you're doing.
And you can do that by measuring UX.
How do you compare against another system?
Let's say your team is building System X.
How do you compare against another team?
And if you can prove it, then how can we find information or evidence to actually decide where to pay attention?
So, these are the fundamental questions I ask, which are not easy to answer.
Yeah, it's like easy in theory, right?
Yes, exactly.
So, for so many teams, it's easy to ask these questions.
And actually, higher management do ask that.
They don't expect just a UXer to come in and ask these questions.
They ask these questions, which are fundamental, but they're not easy to answer.
And I think I have ideas about why.
Back in late 2020, beginning of 2021, I had some time off because we had a new child.
So, I decided enough with nappies, I need to do something.
So, as a researcher, what do you do?
You do research.
So, I started conducting research, qualitative research, yes, back then, with practitioners around the world.
I believe there were 30 or 31 practitioners or something, where I was investigating their process or their perceived process, if you like, of how they work with other members and other teams and organizations, etc., etc.
And one of the things I discovered or I got ideas about is the fact that so many people so many times do not close the loop.
Yeah, that's what the research indicated.
And I'll explain in a moment what closing the loop means.
And therefore, they cannot justify their existence and ask for budget in their organization.
So, so many times they give up and quit.
And by closing the loop, I mean, we do UX work, research, design.
We develop it.
We deliver it.
It goes out to users.
And then what?
There is no feedback mechanism to tell us if we've done a good job, how good our job is, what direction we're moving in, what we should do next.
Yeah?
Have we improved the user journeys?
The loop stays open.
I mean, I had this indication before conducting the research, but it seems that so many teams do not have a disciplined feedback mechanism.
And therefore, so many times they get challenged and they have no answers.
Mm-hmm.
So, I ask questions.
You know, it's my job to ask questions.
It doesn't mean I have the answers, but, you know, I ask the questions.
Yeah, absolutely.
And that's, I think, the most important first step.
And also, just the awareness of the questions is important, right?
Because I think, to your point, there is a lot of interest in research and what customers are doing.
What are they interested in doing?
What are some of the goals we have?
And I often see, like, one of two mistakes.
The first mistake being not taking a pulse of what's currently happening, right?
To check, have we actually gotten better?
Have we gotten worse?
Right?
They're just sort of like a blanket.
Here's the metrics we have.
It's just sort of like a, you're kind of picking through the toy basket and you're like, this is what we have.
And we're not necessarily picking specific ones or tracking them over time.
So, that's one issue, right?
We're not really being specific about the pulse that we're taking before versus after.
But then the second is not taking the pulse after, right?
Where it's like, we designed it.
We did it.
It's done.
And now what?
Right?
Now, how do we move forward?
I mean, if we're, are we improving?
Where should we focus next?
You know, fundamental questions that product management actually asks.
Yes.
And, I mean, the service UX research brings to product management is that of helping them make informed and hopefully good decisions.
That's our service.
That's what we provide.
Yeah.
So, we should be able to help them answer this.
Are we improving?
Where should we allocate our budget?
I mean, I've introduced the answers, if you like, to this a lot by using common measurement instruments, like, for example, the UMUX-Lite, the SUS, the SEQ, satisfaction, stuff like that.
But, of course, teams can use whatever they need to use that brings them value, obviously.
It's not always easy.
And sometimes, depending on the maturity of the organization, you have a lot of pushback or fear of metrics or measurement.
Yeah.
Yeah.
What do you think that fear is of metrics, of measurement?
It has to do with not knowing.
It goes back to education.
You typically fear something that you don't understand, and you try to avoid it.
So, it has to do with education.
People think that because you have to do basic statistics, because you have to understand the instrument, maybe how reliable it is, they say, no, I'm not going to do this.
So, they freeze.
But you can actually educate people about this.
Nobody was born knowing everything, you know?
We need both qualitative and quantitative research.
And you've got to start from somewhere.
That's fine.
And what can you do?
Education is paramount, because I see people interested.
They just don't know how.
And that's where you can actually help and assist by introducing them and running small experiments and showing here's the value.
And here's the data.
And this is what we can interpret from data.
By the way, interpretation is very important.
I have found it very useful to interpret quant data into narratives.
You can actually use an instrument like the UMUX-Lite, which measures utility and usability, and you can map it to adjectives.
For example, the Microsoft adjectives list.
And then you can explain to stakeholders that this is how people characterize our product.
Stuff like that, yeah?
Moving from quant to qual in terms of narration and giving a story is also very important.
And that's how you get people in.
Because the moment you can tell them a story, they're interested; it's in the nature of humans to try not to discredit it, but to scrutinize it.
And then you say, yeah, here's the data.
And we came to this narrative from this kind of data.
And they go like, ah, can we do this?
Yes, we can.
And therefore, you get them interested and involved.
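[A rough illustration of the quant-to-narrative move John describes, as a minimal Python sketch. It assumes the standard two-item, seven-point UMUX-Lite rescaling to a 0-100 score; the adjective cut-offs are invented for illustration and are not the Microsoft list or John's exact mapping.]

```python
# Sketch: turn raw UMUX-Lite responses into an adjective label.
# The adjective cut-offs are hypothetical, for illustration only.

def umux_lite_score(utility: int, ease_of_use: int, scale_max: int = 7) -> float:
    """Rescale the two 1..scale_max item responses to a 0-100 score."""
    return ((utility - 1) + (ease_of_use - 1)) / (2 * (scale_max - 1)) * 100

def adjective(score: float) -> str:
    """Map a 0-100 score to a rough adjective band (hypothetical cut-offs)."""
    for cutoff, label in [(85, "excellent"), (70, "good"), (50, "okay")]:
        if score >= cutoff:
            return label
    return "poor"

responses = [(6, 5), (7, 6), (4, 3)]  # (utility, ease-of-use) per respondent
scores = [umux_lite_score(u, e) for u, e in responses]
mean = sum(scores) / len(scores)
print(f"Mean UMUX-Lite: {mean:.1f} -- users would call it '{adjective(mean)}'")
```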
Yeah, what comes to mind as soon as you mention that: there's this concept that when you're sharing insights with somebody, something you've learned, you can share as many numbers as you want.
But those numbers are kind of meaningless.
Even if they're the worst numbers you've ever heard, you can relate it to awful things, awful tragedies in the world like war or hunger.
And all of these people are experiencing all these horrible things, and these are the numbers.
And the response would be like, okay, okay, yeah, that sounds like it's not good.
And then as soon as you tell the story about one of those people, and you can bring it to life in a meaningful way, then it's like, oh, my gosh, I need to do something right away.
It becomes something that has more meaning.
So yes, there's an importance there in tying those metrics to a narrative that actually has meaning for people, and that they can actually visualize, whether that's for their own work or for future work that maybe they have yet to do.
And there's actually two ways of visualizing this.
One is what you just described, the other one is to actually show them charts with progress of how their product is doing as perceived by users.
And see, for example, that when we move from version 1 to version 2 to version 5.6, this is what happened.
So if things are going badly, then you can go back and say, wait a minute, wait a minute, what did we introduce in that release that gave us this new score, which is statistically different to the previous score?
I'm not going to go into the details, but you can ask this question, say, okay, what did we do?
So let's say our utility, our functional adequacy, if you like, has deteriorated.
Did we remove a feature?
Let's say your usability went down.
Did we introduce a feature that actually makes it harder for people to find or do task X?
You can ask these questions, and then you can introduce qualitative research to find the why.
So you pick signals from the data, the what happens, like the board behind you says, to actually discover the why.
So quantitative or measurement of UX, if you like, gives us signals and benchmarks to actually ask more questions, which typically gets answered with qualitative research.
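[A minimal sketch of the release-over-release comparison John describes, assuming SUS-style 0-100 scores from two independent samples of respondents. Welch's t-test is one common choice for this check, not necessarily the test John uses; the scores and the 0.05 threshold are invented for illustration.]

```python
# Sketch: did a benchmark score shift between two releases?
# Scores are invented SUS-style 0-100 values for illustration.
from scipy import stats

v1_scores = [72, 68, 75, 80, 66, 71, 77, 69]  # respondents on release 1
v2_scores = [61, 65, 58, 70, 63, 60, 66, 62]  # respondents on release 2

# Welch's t-test: compare means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(v1_scores, v2_scores, equal_var=False)
if p_value < 0.05:
    print(f"Score changed significantly (p = {p_value:.3f}).")
    print("What did we ship in that release? Follow up with qualitative research.")
else:
    print(f"No reliable difference (p = {p_value:.3f}); keep tracking.")
```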
Yeah, I appreciate that point.
And for those listening, I have a board back here that says why is greater than what, and that's something that we often feel very strongly about at Nielsen Norman Group, because there's a signal, and then there's what causes that signal, right?
But I really appreciate what you're saying as well, which is, you know, treating these metrics like they are signals and to kind of follow the path, follow the rabbit hole and start to ask, use it as like a jump off point to ask questions.
And in a way, it's, I think, how qualitative data can often answer the questions of quantitative data and vice versa, right?
You might have things that come up in qualitative data, and then you have additional questions like, how often does that happen?
Or how significant is this issue? How severe is this issue?
And like those sorts of questions, maybe you can sort of answer qualitatively, but ultimately you need numbers to figure out, like the frequency or the magnitude, right?
So, in a way, fearing those numbers is sort of fearing the answers to questions and maybe reframing it as, well, we don't necessarily need perfect answers, but it helps to have some answers, maybe even some additional questions as part of it.
As always, yes.
Right, absolutely.
And I guess the other thing that is interesting too, like on the one hand, we're creating new knowledge, right?
And so, you kind of have to have an appetite for ambiguity or at least operating in ambiguity for a little bit.
Ambiguity?
No.
What's ambiguity?
What's that?
No way.
Yes.
Right?
There's a lot of living in this gray space while we wait for answers or while we look for answers, right?
And that can often be an uncomfortable place to be.
I say this myself, like it's an awful experience when you're there and you're like, I wish I knew the answer here and I have hints of an answer, but we kind of need to do a bit more analysis.
We need to dive really deeply.
Now, I'm wondering, you know, as far as the work that you've done with other organizations, like you've mentioned a few different types of metrics people can use.
How do you think people are currently applying these metrics versus how ideally they should be applied?
Like, do you think things are looking good, or are they looking like they could be better?
I do think, and I have observed, and you can see this in all corporate dashboards or startup dashboards, that people are generally interested in UX measurement.
So, when applied right, I've seen people staring at revelations, whether good or bad, yeah?
So, they're generally interested.
Okay.
That's the positive part of my answer.
However, however, I observe, I have observed low maturity.
And what do I mean by that?
So many times, metrics or measurement instruments, if you like, are applied the wrong way.
There's no good understanding of how to use the metrics, sorry, the instruments, what they are about, and how to deduce what you need to do from the data you get.
So, there's low maturity in terms of understanding the measurement instrument, where and how to apply it, and how to interpret it.
So, the problem with that is that people think it's magic, and they need a consultant like myself to help them.
The reality is that, yes, please do call me, but it's not hard with a bit of education.
It is not hard.
I mean, you can actually do a lot of good in your organization, in your team, by actually doing, well, a bit of UX measurement, for the reasons I actually discussed earlier.
So, I observed that as well, low maturity.
And lately, I've been seeing a lot of, how may I say this, UX influencers, thought leaders, publish a lot of negative things about UX measurement and quant UX in general,
which likely is negatively impacting the community, and therefore not helping people close the loop, because we need to close the loop, yeah?
On top of that, there is the debate between qual versus quant, which is naive, at least, because you need both, obviously.
So, yeah, I guess these are the main observations.
When you see teams using quant and qual research, it is more likely that they're more mature than organizations that just use qualitative research.
That's another observation about how they're applied.
And they're definitely more mature than organizations that only use quant.
Shall I repeat that?
Because a lot of people avoid contact with humans by actually doing only quant UX and actually don't do the right thing anyway.
So, that's another observation.
A lot of people think that just by sending a survey out or doing a bit of measurement online, you're doing UX research.
No, you're not.
And probably your maturity is low.
But that's my very biased, opinionated view of the world.
You know, I think you're onto something, though, with that, especially when it comes to what you just mentioned, right?
So, to kind of recap, you have qualitative and quantitative research.
If you're doing both, chances are it's a fairly mature organization.
And so, for us at Nielsen Norman Group, we have research on UX maturity, like this UX maturity model, which is the concept that you have some appetite for research, right?
You're interested in studying people in order to develop some sort of product or technology or service or whatever that experience is, right?
That aligns with people and how they actually behave in the real world, right?
So, the more you use that evidence, the more you rely on that evidence, and the more that you kind of use it as a core philosophy, the more mature your organization is.
Versus if you make decisions in maybe a more intuitive way or maybe even in a way that actively avoids doing research where there's actually hostility to doing research and it's seen as a waste of money, right?
That would be considered something like low maturity, right?
And the reason why there is low maturity and resistance is because if you only have data and you torture the data enough, it will tell you whatever you want to hear.
So, if you only have data, you can massage the data, transform the data to tell you exactly what you need to hear.
But if you have videos of people using the product and failing, then what do you do?
It's a big difference there, yeah?
It's funny.
So, yeah, when thinking about the use of the data, right?
So, if the most mature organizations are using qualitative and quantitative data, they're using both.
The least mature organizations, if they're using data, may be using quantitative data.
But to your point, you know, they can maybe massage the data.
What comes to mind, there's this quote from Mark Twain, and I'm going to paraphrase it here, but it's something along the lines of, there are three kinds of lies.
There's lies, and I think they say damned lies, and statistics.
And so, you can always kind of frame something as happening, you know, depending on how you use the statistic, right?
Or how you massage it.
Now, I think to your next point, if you use some qualitative research, it's a bit harder to argue that, right, because you're seeing what is happening, right, from the perspective of the person who's carrying out those tasks, or whatever it is.
You're starting to see these relationships between if this happens, then that also happens.
So, you have a little bit more maturity.
But if you also turn a blind eye to, or turn away from, those metrics, then you're also sort of purposefully avoiding, you know, the objective perspective.
Because, again, you can kind of massage qualitative data as well.
It kind of depends where do you pan the camera, or who's invited to the sessions, right?
So, yeah, not trying to advocate the manipulation of data, but there can often be a little bit of bias there, too.
So, it's not quite the most mature as if you were to combine both, right?
Right.
There is bias everywhere.
Yeah.
There is bias in how you select the pool of users.
There is bias in how you write the questions.
There is bias always in the instrument, in your setup.
There is bias everywhere.
That's why you combine things.
There is bias everywhere, but you need to know the bias.
Otherwise, you cannot conduct research.
And to your point about the qual and quant and the manipulation, okay, I've been helping organizations introduce UX research, and I've done a lot of measurement work.
However, most of my experience is actually in qualitative work.
The fact that I can do statistics, write some code, and analyze data is actually secondary, because if you're not good at qualitative research, in my not-so-humble opinion, you cannot be good at quant UX research.
There is no way.
What skill have you found to be most helpful when it comes to dealing with qualitative research?
What skill has been maybe the most helpful to make you a good quantitative researcher?
What skill?
Well, okay.
We have the measurement and benchmarking stuff, and then you have the surveys.
For the surveys, knowing how to ask questions is very important.
Otherwise, you're getting the wrong data.
In terms of the stats or the analysis of data there, it's trying to figure out the narrative, trying to explain that, as you would.
So, let's say you do thematic analysis and you summarize qual data, yeah?
You're trying to tell a story.
It's the same thing with data.
The same thing with signals you pick from analyzing, I don't know, thousands or tens of thousands of data points.
You still have to tell a story.
The storytelling is the most important part here.
It's mapping it to something that people can actually consume and therefore make decisions.
What is common in both is the ability to do clustering; whether it's unstructured textual data or numbers, the ability to cluster and classify is common in both, in my experience.
And I don't think I would ever be any good at quant if I couldn't do qual very well.
I don't think I would be able to because I wouldn't be able to tell the story.
I think that's very important.
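[As a rough illustration of the clustering skill John mentions, here is a sketch that groups open-ended survey answers into themes with TF-IDF and k-means from scikit-learn. The answers and the choice of two clusters are invented for illustration; this is one possible technique, not John's specific workflow.]

```python
# Sketch: cluster open-ended survey answers into rough themes.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

answers = [
    "I couldn't find the export button",
    "search results load too slowly",
    "export to CSV is hidden in a menu",
    "pages take forever to load",
]

vectors = TfidfVectorizer().fit_transform(answers)  # text -> numeric features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for answer, label in zip(answers, labels):
        if label == cluster:
            print(f"  - {answer}")
```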
Yeah, I think that's a crucial point.
Yeah.
And there's still going to be this element of categorization, right, where you have many different kinds of metrics, right?
And granted, you can't just – I mean, you could, I guess.
You could just throw whatever metrics you have and then see what changes.
But that would be kind of overwhelming.
It would be like me looking at 50 different sticky notes on my desk and thinking, okay, one of these sticky notes has the answer.
Which one is it, right?
And you can leave them all out there that way.
But if you can start to take a more systematic approach where you're thinking about the metrics that might tie best to the outcomes that you want or that might answer these questions more effectively,
then the ability to answer research questions improves, I feel like – and maybe that's what you're getting at here with this idea of clustering, right?
We're starting to look for specific answers to these questions and finding the right instrument to measure the right –
It's almost like picking the right tool for the right job, right?
And not only that, but, okay, my favorite instrument in most of the cases is actually measuring usefulness because it's fundamental.
As Jakob said, usefulness is utility and usability, okay?
However, where do you apply this?
Any instrument.
For example, let's say it's out in the wild and you know the persona, so you know your selection bias and everything,
and you pop up a questionnaire on a web page, and you ask about ease of use and utility, will you get the right answers?
We're going to get some answers.
Will you be able to interpret them?
I'm not really sure.
However, if you've done your task analysis, you understand what the journey of the user is and what the goals are for that particular persona,
then you can instrument your website to interrupt the user at the right time to get the right answer.
How could you do that if you're not good at qualitative research, if you're not good at observing people, at conducting task analysis, figuring out goals and needs?
You cannot.
It's one thing to measure, but it's when to measure, and where to measure, and how to measure, and with whom.
These things need you to understand the qual side of things more than anything else.
Yeah, that's a great point.
And I think that's what kind of gives you that high-level understanding of what's happening as a whole, right?
I mean, we can certainly – I think there's definitely benefit.
And I think it's funny you mentioned Dr. Jakob Nielsen, his usefulness equation, right?
Usefulness equals usability plus utility.
Utility meaning like does it have a purpose, right?
But the other thing was this concept of specializing, which is kind of – he mentions this in a different talk.
I'll include the link to the keynote or the talk where he mentions this.
But he talks about the concept of like generalists versus specialists and how like if you were to put a specialist in the Olympics, for example, right?
If you think about those – I wanted to say it was like a decathlon or something where you have like many different events, right?
And to do decathlon, you have to be good at a lot of things.
And then there's, you know, the individual events.
Within that, if you specialize in one of them, you'll probably win the gold medal, right?
Versus the generalists who kind of know a little bit about everything.
But, you know, there's sort of this trade-off at the same time.
And the specialist will probably be really terrible at other skills, not because it's a bad thing.
That's just what they focused on and where their attention lies.
And so depending on the team, right, maybe you have a lot of specialists, people who are really great at quantitative research and really great at qualitative research.
And they each are amazing at what they do.
Ultimately, you do have to kind of combine that knowledge somehow.
And I think that's where generalists can really shine is they may not have all the answers or all the tools.
And they may even turn to the people who are specialists saying, tell me, you know, do I use a hammer for this?
Do I use a wrench?
Do I use a hex key?
And then they can sort of tell you this is the best tool to use and this is why.
And it's really that kind of marriage of these specialists and generalists together that kind of lead to this better outcome as far as team composition goes, right?
That is why the infighting, which is being bred sometimes by those influencers, I think is very bad for what lies ahead for the profession.
I mean, again, you should not deter people from developing their skills and their capacity to actually do both qualitative and quantitative.
You cannot be a specialist in everything, but you can probably survive more and go further if you're good enough at both.
In most cases, I think there's more resilience.
And I think when people don't understand, as I said earlier, they fear and then they attack.
And that's an indication of not understanding.
And we have to help with education and doing more to actually explain why closing the loop is a good thing, et cetera, et cetera, et cetera.
Yeah. And actually, to your point earlier about the influencers kind of advocating maybe against quantitative versus, you know, thinking of them as sort of in conjunction with each other.
I think part of that has to do with the pushback, too, is like you mentioned, you know, when you rely only on quantitative data, then you're missing a huge piece of the puzzle.
So, in a way, I think these can sometimes be taken out of context, right, which is the context that chances are if you're not doing qualitative research, it's because you are in a UX-immature environment.
But it can be harmful.
It's almost like taking coaching advice, right?
Like, I can get coaching advice from anybody, but if I get the coaching advice for somebody who's in a very different, you know, stage of life or whatever versus my team or me personally, then I might be applying the wrong prescription, right, the wrong antidote for the problem.
And so, yeah, when thinking about more mature organizations, maybe the appetite for quantitative data is actually a good thing because that's going to help you get, you know, even further in your UX maturity than you may already be.
So, I think there's definitely something like a nugget of truth, perhaps, but maybe for a specific audience, and that's often taken out of context, right?
You can see and you can introduce both qual and quant at the same time and, therefore, accelerate the maturity of a team or organization.
If you do the basics right, you can move further for longer by just doing the basics.
So, it's not like you cannot have both if your maturity is low and you're trying to improve.
Recognize that you have low maturity and you can actually improve in both attributes at the same time if you have low maturity, as long as you understand that, of course, yeah?
And that's the context.
We understand that we are here.
We need to go over there.
What can we do?
Ah, actually, we can do both.
Right.
And I think that's a really important point that you don't have to choose, right?
You don't have to choose between these two.
They can be proficiencies or competencies that you improve, just like you might improve, I don't know, speaking skills.
It's not like if I choose to improve speaking skills, then suddenly I'm giving up on visual design skills, right?
It's just a different competency.
It's a different way of allocating your time and your energy.
And actually, it compounds.
It compounds exponentially.
How does it compound or what is, like, how do they enhance each other?
So, if you know both, if you understand and practice both qual and quant research, the benefits are exponential.
They're not linear.
You don't just do more and get more.
You do more and you get much, much more and much, much more.
Because, for example, you close the loop.
If you have a feedback mechanism, then you can amplify something.
It's systems, sorry, control theory.
You need a feedback mechanism to amplify what you're doing.
So, you amplify.
You did something.
Let's say you did qual research.
Therefore, you figure out that you designed the system in this way.
You designed it.
You benchmarked it.
Or you just let it out there in the wild.
You measured it.
And then you found that, oh, you missed something.
Then, therefore, you do more qual research.
And then you improve it more and more and more.
And then, not only that, that's internal to the team.
Now, more amplification.
Hey, boss, we're doing this well.
And our users tell us this and so on.
Oh, excellent, guys.
Continue doing this.
You have my trust, assuming.
Or you always have my trust.
But anyway, continue down this road.
Oh, how about we do this extra?
Oh, we need more budget.
How are we going to get more budget?
Oh, we're doing well and we can prove it.
Oh, we can prove it.
Let me go ask for more budget.
More budget?
Oh, let's do more good work.
And you help everyone in the organization.
You help the product.
You help the user.
Everyone's life is better.
That's amplification.
Yeah?
Got it.
Yeah, so basically, it's a feedback mechanism, not only for the immediate team, but it serves
as, like you're saying, amplification measure for the team to get more resources, to help
other teams to kind of expand the influence of the work that's being done.
Yeah, what also comes to mind, too, and I think kind of speaks to the importance of, like,
your choice in instruments, right, is I think you mentioned measuring the right, like, how
do you know when to measure, what to measure?
Obviously, we have lots of classes for that, and we're certainly not going to make this an
academic lecture here.
But I do think it's worth mentioning, you know, and I was just doing a course with a
client yesterday related to measuring the impact of work, you know, or proposing certain
things.
How do you benchmark?
And what comes to mind for me is you have things like outcome metrics, perception metrics,
and descriptive metrics.
And these are all really helpful to think about, because if you measure only one thing,
then you're technically only changing one thing, right?
So let's just say we're improving outcomes, or actually, the example I often like to rely
on is one about descriptive metrics.
So something descriptive, like, how long does it take someone to do a task, right?
That's something we can observe, we can literally set a timer and see it.
And then if we measure that time, and we say, okay, well, we want to incentivize our ability
to decrease that time on task.
So it could be something even like a phone call that someone makes to a call center, right?
We want to decrease the call time.
And so we might say, okay, call center employees, we're going to give you a bonus.
We're going to incentivize this, you know, see if you can decrease call time.
Now, the intended outcome might be that people get their calls resolved more quickly, but
because it's the time that we're incentivizing, it may actually not improve the resolution
rate, right?
It might actually make it worse, because now people are like, I'm going to transfer you
now, please hold, right?
Short call time, but no good resolution.
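[A minimal sketch of the counterbalance idea in this call-center example: read the incentivized metric, call time, only alongside a paired quality metric, resolution rate. All numbers are invented for illustration.]

```python
# Sketch: pair a speed metric with a quality metric before declaring victory.
# Each tuple: (call duration in seconds, was the issue resolved?)
calls_before = [(300, True), (420, True), (360, False), (390, True)]
calls_after = [(150, False), (180, False), (200, True), (160, False)]

def summarize(calls):
    mean_time = sum(t for t, _ in calls) / len(calls)
    resolution = sum(1 for _, ok in calls if ok) / len(calls)
    return mean_time, resolution

for label, calls in [("before", calls_before), ("after", calls_after)]:
    mean_time, resolution = summarize(calls)
    print(f"{label}: mean call {mean_time:.0f}s, resolution rate {resolution:.0%}")
# Call time dropped, but so did resolution -- the incentive made things worse.
```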
I think we're getting into a different discussion here.
But for example, it's a silly example, but to get the point across, how long people stay
on a page, on a web page?
I know it's silly, yeah?
Oh, we have huge engagement.
People stay on our page a lot of time.
Well, yeah, but is this good or bad?
Is it because they found or they didn't find what they're looking for?
You know, just improving one metric needs to have a counterbalance, typically, to figure out if you're moving in the right direction.
And okay, that's system level thinking and improvement, but that's one of the main things
you need to do as you advance down that path.
And that's why I always start with usefulness, because it's so fundamental.
If you improve usability and utility, you will improve the whole thing.
So that's the first one to introduce.
Right.
Yeah?
Right.
Right.
And I think those are, again, they're not easy questions to answer, but they're important.
They're fundamental ones, right?
So in a way, when you think about usability and utility, there's two parts.
There's the perception, right?
How useful is it?
Is it something that people want to use?
Okay.
That's one thing.
Do I want to pay taxes?
Right?
No, but I have to.
Right?
But then, you know, so then you kind of have to counterbalance.
It's like, there are going to be certain goals, certain perceptions, certain things where
you may want to improve perception, but it may be more practical or beneficial to improve
something like maybe the descriptive or outcome-based metrics, right?
And so you can kind of use these as like levers, right, as ways to kind of ensure that you're
working toward whatever good means in your context, right?
Yeah.
If you try to optimize, over-optimize for one thing, let's say you have a car.
Let's say you have a car, yeah?
And you build the best engine in the car, the fastest engine.
And it becomes so powerful that you cannot steer it.
Therefore, you crash.
Wouldn't it be better if you had designed the whole car so that it doesn't have the fastest engine, but it turns?
So you can take a fast turn, or it can actually brake.
It's important.
Over-optimizing only one part of the chain of the system is going to make the system break.
I appreciate the metaphor because that metaphor, I feel like, really resonates in the sense
that often when we're designing something, right, whether we are the designer or we're
the researcher, I use the term we design loosely.
But as a team, we are making things.
And we might have different KPIs or key performance indicators, like you were saying.
And depending on how we talk through those and who sets them, and maybe we're not in charge
of setting them, but maybe we can help to make sure that we're moving in the right direction
as intended, right?
Then that's a really important conversation to have.
So that way, if we do fall short of certain objectives, then we have a reason for it.
It's not just, well, I don't know, we didn't measure it or that's not something we tried to
do, but it's, hey, we did this to the best of our ability, given these constraints and
given these outcomes that we want to achieve, right?
So I think having those frank conversations, even though they're hard ones, can really help
shed light on the work that we're doing and demystify it so that it's not just, oh, design's
just doing design things, right?
But rather, we're making these decisions with these other business decision makers.
So it's a tough balance, right?
Because ultimately, a design decision is a business decision.
That's kind of what it is.
And that can often be a bit, you know, you can kind of butt heads sometimes when you make
recommendations that may challenge the way that we run our business or the way that we
work.
So I think you're right.
It's important to close that loop.
May challenge.
May.
May.
I'm just, I'm being optimistic here.
Absolutely.
And I think that that's a great place to kind of wrap up and kind of give people food for
thought as far as, you know, what they're going to do next to transform their organization.
So trust, I agree.
Trust is absolutely paramount to doing any of the work we do and also giving people a chance
to participate in that trust as well.
I think there can often be a bit of isolation in the research work we do because either
it's like, oh, no one's interested or like, I'll just do it, you know, to be a good citizen
to my team.
But sometimes just doing it ultimately keeps people out and doesn't give people the chance
to, as my colleague Tanner Kohler often puts it, put their fingerprints on it, right?
And when people put their fingerprints on something, they feel more invested.
They feel more engaged, more interested in what ultimately happens.
So, you know, while we can certainly offset the work, I think there's a way to build relationships
by giving people a chance to look at it and have a say.
And over-communicate it.
I cannot stress this enough.
You have to be patient.
You have to be transparent and over-communicate the research that happens.
You need to run experiments, communicate the results of the experiments and be really, really,
really patient because you're going to challenge people.
And obviously, don't be heroes because sometimes people just don't want to listen.
You cannot save the world.
Life's too short.
Move on.
Yeah.
That's my advice.
Pick your battles.
Yeah.
Pick your battles and be patient.
I think that's a really good way to look at it.
And close the loop.
Do not forget that.
And close the loop.
Don't forget to close the loop.
That was Dr. John Pagonis.
You can find links to his LinkedIn in the show notes.
By the way, there are plenty of quantitative research and metrics-related resources all available
for free at our website.
And if you want to stay up to date on the latest articles that we publish, we do have a weekly
email newsletter.
So sign up for that and you'll learn about all of our articles, videos, and upcoming online
courses.
You can find everything I just mentioned at www.nngroup.com.
And of course, if you enjoy this show in particular, please follow or subscribe on the podcast platform
of your choice.
This show is hosted and produced by me, Therese Fessenden.
All editing and post-production is by Jonas Zellner.
Music is by Tiny Music and Dresden the Flamingo.
That's it for today's show.
Until next time, remember, keep it simple.