
NN/g UX Podcast

The Nielsen Norman Group (NN/g) UX Podcast is a podcast on user experience research, design, strategy, and professions, hosted by Senior User Experience Specialist Therese Fessenden. Join us every month as she interviews industry experts, covering common questions, hot takes on pressing UX topics, and tips for building truly great user experiences. For free UX resources, references, and information on UX Certification opportunities, go to: www.nngroup.com



This is the Nielsen Norman Group UX Podcast.
I'm Therese Fessenden. Today we're going to cover a central tenet of digital design.
As a user experience professional, it's startling how often the conversation focuses on the
experience part of that label as opposed to the user part. Design is a wonderfully creative endeavor,
but at its best, it's the result of far more than just good guesswork.
I think the heart of my work is that research and observation that's key, a key factor in kind of everything.
I got the chance to chat with Alita Joyce, a fellow UX specialist at NNG.
So if I'm speaking, it's based on research. If I'm working with a client, I'm probably doing some research.
If I'm writing, it's based on research findings.
And we talked about the pitfalls of relying on assumptions or intuition a little too much
instead of using data when making design decisions.
I hope you enjoy this interview as much as I did.
Without further ado, here's Alita.
So just to give our listeners a little bit of background about you, you are a UX specialist.
And obviously, as a user experience professional, we care about the user experience.
We care about what users might need.
And I think after a while, like with reading and really diving into some of the literature and academic papers,
you kind of start to anticipate what users might need.
And I think to a certain extent, some of that can be useful, and I highly encourage reading and educating yourself on stuff.
But I also wonder, are we missing out on something when we kind of lean a little too heavily on some of these pieces of literature,
some of these papers and some of these articles?
What do you think of this? Where do you think the limit is in anticipating what people need?
Yeah, I think that's a really thoughtful question.
I mean, I got into this work from a coding boot camp, actual full stack development,
which during that I found I was doing a lot of testing with my own work and my instructor was kind of like,
you know, maybe you should look into UX.
You're a great developer, but it seems like this is where your interest might be leading you.
And that brought me to grad school at Northwestern, which was a fantastic program,
and they really teach you a lot about all of these guidelines and heuristics and ways to anticipate design.
Of course, some research, but not quite as much, you know, hands on research that you get when you're in practice.
And so I think you're so right that, you know, as people get deeper into their career in UX,
you learn a lot about your users and you start to make these connections
and similarities that you see between different users and different products.
And you start to apply some of the things that you know about psychology and human behavior.
But what's so important there is, like you said, not just to rely on kind of what you know or what you think,
you know, and anticipating those behaviors, but actually testing those assumptions.
I think that that's kind of the strategy that I take in a lot of different areas of, you know,
having a great understanding of psychology and human behavior.
But when it comes to products that I've been working on for years and years or even new products,
documenting those assumptions, you know, how do I think users are going to act?
And then testing it, trying to get those insights, testing my assumptions.
And what I found is that users always surprise me.
So I think absolutely having that foundational understanding and being able to anticipate some of those user behaviors
will definitely save you time and make you more efficient at testing potentially good solutions.
But nothing substitutes for research in ensuring good behavioral design.
Yeah, I couldn't agree more.
I know that you instruct, you know, The Human Mind and Usability, a very popular class, and I loved that class.
And I also teach the Persuasive and Emotional Design class.
So very much in line with what you're saying, which is that yes, there are some things,
some principles that we can sort of use as a starting point,
as a foundation for maybe the hypotheses that we have about our eventual research.
But yes, man, users are surprising.
There was this hilarious video I saw actually recently, and it was, you know, one of those kids play sets
where you have these different shapes and you put the little different shaped blocks into different shaped holes.
So you have the square block, right?
So the square block is supposed to go in the square hole.
And it's just this hilarious video of this person watching somebody else put different blocks into the different shaped holes.
And so this person would say, okay, square block, this goes into the square hole.
And then the next one is like the rectangle block, this one goes into the square hole.
And then the next one is like, okay, those seem pretty fair.
Now, what about the circle one?
This observer woman is like, oh, it's going in the circle hole.
And the person was like, guess where this goes?
The square hole.
It's just hilarious.
So if you happen to catch that video, it's been circulating the UX community.
But it is absolutely true.
Yeah, I saw that on TikTok.
Yeah, you did?
Absolutely.
Yeah.
Great one to include in show notes.
It is such a great example of that idea that, you know, we'll assume people would do the same thing as us.
And, you know, there are definitely people that are thinking it has to go in the appropriate, you know, slot: square to square, circle to circle.
And in reality, you'll see people do things in very different ways, and it continues to surprise you, even with products that you've been working on for years and years.
And even with the target audience that, you know, you're familiar with, man, they will just continuously surprise you with their approaches to solving problems.
Oh, my gosh, yes.
And it makes me wonder sometimes, like, there are some incredible workarounds that sometimes get developed.
Now, I wanted to pick your brain because there is a common mantra in the UX community, but particularly at our UX conferences.
And that is you are not the user.
And I think this is an important concept; it's one of those things that's either extremely straightforward and you get it right away, or you kind of disagree with it because you're like, but I am a user.
So I want to get your thoughts about this particular statement.
Like, what do you think of it?
I personally love this.
This is one of my favorite UX mantras.
And I will totally defend it.
You know, it's one of the soapboxes that I'll die on, for sure, that you are not the user.
And I think when we're thinking about this mantra or the slogan, whatever you want to call it, I think of it more as like a guiding principle.
And I think when people talk about you are not the user, a lot of times that sentiment is coming from, oh, we need to empathize with our users and things like that, which is true.
Absolutely. You know, empathy is at the heart of what we do.
But as you're explaining that to stakeholders, sometimes stakeholders will hear empathy and say, oh, that's soft. You know, we don't need to go down that soft, touchy-feely path of connecting with our users.
And so another way that I'll kind of think about you are not the user is in terms of this psychology study that backs up this exact concept.
And it's called the false consensus effect, which is basically a tendency to assume that others are going to share the same beliefs as us or act as we would in a particular context, which you can kind of see where that's going with our interfaces.
We'll assume people act like us.
So the research study that kind of came to this conclusion was done by two researchers, Ross and Greene, in the 1970s.
And what would happen is they would bring participants in and give them a scenario to place themselves in.
And at the end of walking them through the scenario, they had two decision points to make. So I want to walk you through one. It's my favorite. It's kind of this grocery shopping example, since that tends to be quite relatable.
So let's say, Therese, you're leaving a grocery store and a man in a nicely dressed suit comes up to you and he says, Therese, you know, I'd love to hear, how do you like shopping at the grocery store?
And you respond, you know, you like shopping at the store, you think they have reasonable prices and a good selection of meats and produce.
And so the man says, great. You know, my film crew, surprise, surprise, was actually recording this whole thing.
And we'd love to know if you would sign a release to be in a new upcoming commercial for the grocery store.
Now, with that scenario in mind, as the research participant, there are two questions that you need to ask.
They're in a very particular order. So, Therese, I'll ask you, what percentage of your peers do you think would sign the release to be in this commercial?
Probably like 60 to 75 percent. That would be if I was super, if I was super optimistic. But I have a feeling this is setting me up for a big reveal.
So now let me ask, would you yourself sign the release to be in the commercial?
Probably, if I really liked that store, like, I could see it. I mean, I do run a podcast, so I could be in a position to be in the spotlight.
But but yeah, I would I think if I genuinely liked that grocery store, like I could see myself doing that.
Yeah. So what's interesting is that, kind of similar to your responses, the researchers found that when participants completed this scenario,
if people said the general population was likely to sign the release, or to not sign it,
they also themselves stated, you know, that they were likely to sign the release, or not, respectively.
So in your case, yes, you rated it likely that the general population would sign the release.
And as you said, yes, you are a public figure, of course. And so you yourself would also sign the release.
I'm in the same boat, though I've done this thought experiment with many of my friends who are more engineering side
and less favorable of being a public figure.
And they always say, nope, not interested in being in a commercial like no YouTube,
no podcasting for me. And, you know, they're saying they also wouldn't sign it.
And this is just such an interesting manifestation, again, of this principle that we tend to assume others are like us.
And so we'll make judgments in the same way. And so it's so important to just realize that this effect exists.
And again, that study is really showing us this basis for you are not the user in a more scientific way.
We can link to that full study in the show notes. I think it's a really interesting one.
They have a couple of different scenarios. But, you know, this is why we want to put so much effort into observing users,
not just trusting kind of what we anticipate users to do or what a specific guideline says,
because our designs are very contextual. Our users change, their behaviors change.
And we want to make sure that we're testing those assumptions, not just relying on this book says this or this guideline says this,
but actually seeing how real users interact with those products in context.
On that topic, right, of being able to read and kind of have a good sense of these guidelines and principles and using that,
but not necessarily over-relying on it. I think the term that comes to mind for me is, like, heuristic evaluation.
So it's like, oh, we can evaluate a design based on some common usability heuristics and see how well, or not well, it can meet some universal user needs.
So I do think those can be useful. But I wanted to get your thoughts.
Like, do you think this is maybe useful in certain contexts more than other contexts?
Like, is there a point when we should like prioritize research over some of these kind of quicker shortcut ways?
I think that, I mean, heuristic evaluations are great.
They allow you to, like the name implies, evaluate an interface based on heuristics that have been determined and tend to apply in the majority of cases.
You know, this is Jakob Nielsen's 10 usability heuristics, which at the time was specific to more web applications.
I applied those same principles to video games in one of my articles and found it to be the same.
Later this year, I'm going to apply those principles to VR.
And they are a fantastic way to just evaluate and see how a system might be meeting some of those kind of core aspects of our design.
But only behavioral research and observational research is going to ensure good behavioral design.
And I'll give you kind of an example with a friend that lives in Seattle and works for a large enterprise ecosystem.
And one of the things that I found in talking with him about some of his work is that the users of the system and the designers of the system, you know, they kind of work together.
They're both using the systems, but the thing on the designer side is they are the designers of the system.
They know a lot more about the system than a regular user in, you know, an office somewhere.
But they're relying arguably a little too heavily on those heuristic evaluations, on what they know about the system and how they're using it.
And that can manifest itself in a lot of different ways that, you know, can be somewhat detrimental to regular users that aren't designing the system.
And the point is that the designers and developers and stakeholders that design these systems are very different, oftentimes very different than the real people that use those products.
And so I think it's so important to kind of balance out your own heuristic evaluations with that behavioral research. Thinking about the heuristic evaluations and, you know, relying on some of these guiding principles,
I think, can save us a lot of time in those early phases of design, or when you're first evaluating a design, and help you target areas to go in and focus on more behavioral research.
So I definitely see them working in tandem. They both have benefits.
But at the end of the day, I think that only behavioral research can ensure good, usable behavioral design.
I often find that when we're still learning about a situation or learning about a user, and, like, we're in those early discovery stages, I think conducting research is so important, actual, like, observational research, because it's amazing how much we can assume
that a design is mostly good because it meets some of these heuristics according to our understanding; you know, if we were users, it seems to meet our expectations.
And you're right, like, as designers. And when I say designers, I'm referring to anyone who is involved in the process of creating some sort of design, whether that's development or research.
We know too much. Like, we see how the sausage gets made, so to speak, so we have a very different perspective than the people who don't see this, or who don't have that context, or who maybe don't even have the vocabulary to say the things that
you're saying. So I a hundred percent agree that, you know, there's really no substitute. I think there's a way you can be efficient, and maybe, you know, prioritize your efforts and focus your research where you know the least about your users.
But that means you have to know something about your users before you can really start taking some of these, I don't even want to call it a shortcut, but for lack of a better term, shortcuts.
Absolutely. And it can be great because it will save you time, but again, you just have to go out and watch people using these products, because they continuously surprise us. And, you know, our own users are not trying to waste a lot of time; they
have their own shortcuts in our products and in their own workflows, and those can be so beneficial to uncover and research, and help us adapt our designs to better support some of those needs for efficiency.
Yeah, for sure. So this makes me think of another hot topic, hot take. Should we then have users design products? If we know that we might be biased, like, there's always this risk, right, that if we're making all these decisions, we're going to be
biased in all those decisions. Now, I think to some extent we can combat some of those biases by being aware of them, but I also wonder, perhaps maybe there's a benefit to involving the very people who are going to use this.
So what do you think of that? I know there's always risks to this, but I also think there's a huge benefit. So I want your thoughts.
This, I think, is such an interesting space, definitely something that Don Norman has been publishing more about. I think he just had a video with us about kind of community design and this changing role of researchers.
I also read Democratizing Innovation by Eric von Hippel; it's an MIT Press book, and it kind of looks at that same thought of, okay, you know, we are the designers, we're including users in our process, but they still know more about their experience than
we do, because they're living their experience. So, you know, we're the designers, but should they just be kind of coming up with their own solutions?
Yeah, I think that it can depend on the process. I think that my take on it is just involving users in the process through, you know, your discovery, into testing actual design solutions, and into evaluating your design, you know, once it's live and out there
with our users working with it. So I'm a huge fan of involving participants throughout the process. You know, I know, Therese, you and I both teach journey mapping and service blueprinting and design thinking, so we are no strangers to including users in our
process. And it's such a valuable tool to just see how they tackle, or anticipate they would tackle, some of those problems. So yeah, I'm a huge proponent for including users in the process in a lot of different ways.
But letting them experiment with, you know, how, how might you solve this problem.
I did a lot of social media research not too long ago and was analyzing... yes, so exciting. One of the questions that I asked people is, you know, what advice would you give to companies designing posts for, you know, the type of user that you are on
social media? And some of that advice was great and aligned with my other research findings, and other advice was maybe a little bit more emotionally targeted, a little more personal. And I think that working with users can be like that: they'll highlight
things and say things that you might have already uncovered in your own behavioral analysis, and other times they might say things and you evaluate it from a product standpoint and find, you know, I get the sentiment, but we couldn't apply it in that specific
way. So taking, you know, what users say with a grain of salt, and really combining that with observational research to get a holistic view, I think is so, so important. Yeah, on the one hand, design by committee is a dangerous prospect, because it can often mean
that we have to cobble together something that hasn't actually been analyzed and thought through. And it's really just the squeaky wheels, right, that get the oil, or the squeaky wheels who contribute to the design. So I do think there is a responsibility for
designers to help moderate that process and facilitate that. So, yes, I think we definitely should take what users say with a grain of salt; you know, there's always that saying that users will say one thing and do another.
And I'm always a huge proponent of observational research in addition to, you know, some of these other self-reported types of research, like surveys and such.
But yeah, I would say that's our job as designers: to, you know, help analyze some of this. So just to kind of summarize, or kind of hit one of the main points, it's that, you know, design is only going to go as far as the accuracy of the data that
goes into it. And as you said, to your point, people who are experiencing this in their day-to-day life are going to have a much more intimate and accurate understanding of that particular experience, even if we do try to empathize with our strongest
willpower. Like, we try really hard to empathize; we still might miss the mark a little bit.
And I don't think we should beat ourselves up for that. I think we should still, you know, strive to be as empathetic as possible, but yeah, it's really important to empower some of those folks to help them make decisions, and if that means educating them,
or having them educate us, making it a co-creative process, I think, is so essential. Yeah. And one thing that you said there that kind of stood out to me is thinking about, when we are analyzing not just our own strategies for
research but also collecting data from other people, those biases on both sides.
Do you have any thoughts about some of the biases that exist, both from the researcher side as you're planning the study, or the user side when analyzing the data?
Gosh, yes. I'm laughing because I'm also susceptible to this, right? Where I might start a study, and then I have... I don't want to say an agenda, but I might have, oh, I have this hypothesis, and I'm pretty strongly tied to this hypothesis, thinking,
oh yeah, this is definitely going to be the truth, but I'll test it anyway. And that's part of the issue: if we go into our studies with this, like, mission to prove something or validate something, as opposed to having a more open mind, it might actually
change our ability to, like, be objective about what findings come in. So I have a lot of strong feelings about it, and even then, even when I have strong feelings about it, it's not enough to combat.
There's a researcher bias that inevitably happens, but one way we can combat it, other than being aware of it, is having multiple researchers, and not just multiple researchers, but researchers from different backgrounds, who come from different places, have
different findings, different life experiences, and with that, different worldviews. And I think that's a really important aspect in ensuring that your research actually is objective, and that you are identifying as close to the real truth as possible.
So that's, yeah, that's my little soapbox pitch. But what do you think, Alita? Do you think that, you know, researcher bias can be combated, or is it something we're subject to dealing with forever?
Yes, honestly, this exact thought, I don't want to say has plagued me, but has been so salient to me throughout my entire career. So, as you know, but, you know, listeners might not: I was a philosophy undergrad, so being meta is, I feel like, part of my personality.
And so with that, yeah, absolutely, you know, being aware of these human biases that exist is one thing, but then it's like, okay, well, I'm aware of these biases, so does that mean I'm not susceptible anymore because I'm just so meta about my everyday
life decisions? And of course, you know, that's not the case. That's kind of where Daniel Kahneman has his System 1 and System 2 thinking: System 1 is your fast, automatic thoughts, and then System 2 is those more, you know, thoughtful ones, being conscious
of some of those actions. One researcher that I really love in this space is Adam Grant, and he's an organizational psychologist at the University of Pennsylvania's Wharton School of Business.
And he shared this idea of the "I'm not biased" bias.
This exact thought, right, of, you know, recognizing these flaws in other people's thinking and observing biases in their worlds can force you to think, oh, you know, I'm so aware of these biases in other people that I must be immune to them, and that's
not the case. And I think it's so important that we internalize this idea that the less biased you think you are, the less likely you are to catch yourself.
And, you know, one thing that Adam Grant says on this topic that I really like is, he says, you know, if knowledge is power, then knowing what you don't know is wisdom.
And that quote is so central to ideas that I have as a researcher: starting off every research project with, what do I want to know, and continuously checking in with myself throughout.
Okay, I now know information in this space; now what do I not know? And this same concept applies to the research and the work that we do: thinking about, you know, one person, one expert, if you will, in UX is great, but let's try and understand
what the community knows, and contribute to that wisdom, and identify our knowledge gaps.
For sure. I think that's an inspiring note to think about, and not to mention, when you have something like a pandemic that throws a wrench into everything.
Everything you think you know, not everything but a lot of what you think you know goes right out the window. Yeah, so definitely stay curious and, you know, see every change and every little update as an opportunity to learn more.
And another thing, too: I would love for our more senior or seasoned folks to think about growing this field, too. I think the more eyes and ears we have looking at, you know, user behavior, the more we learn as a whole. And I'm a huge fan of stewarding
this profession, so if you happen to know of a junior employee, or someone who isn't quite a junior employee yet, maybe even a student, giving those folks an opportunity to learn a little bit more and participate in some of that work, I think, is a huge, huge
benefit to helping us all keep track of, you know, our own biases and learning as much as we can.
This was great, Alita. It has been already a half hour; hard to believe that it is already time to wrap up. But if anyone wants to follow you and your work,
where could you point people to? Any channels, any handles?
I am actually very active on Clubhouse, so for people on Clubhouse, my username is Alita UX. You can also catch me on Twitter, I post there, as well as LinkedIn.
You can just search for me by my full name, Alita Joyce, and yes, I'm sharing lots of articles and research, quite a lot of stuff related to behavioral science and human behavior, in addition to just recent research findings.
Awesome. Well thank you for your time, it has been so much fun. I love getting real meta. And I'm so glad that I was able to do that with you so I hope you have a great rest of your day.
Thank you. Yes, thank you so much, Therese.
Thanks for listening to this episode of the NN/g UX Podcast. To learn more about some of the resources, reports, or courses cited in this episode, check out the episode notes for more details.
Also, we have a weekly email newsletter where you can get updates about our latest research and info on upcoming UX Certification events. To learn more, go to nngroup.com. That's nngroup.com.
Finally, if you're listening on Apple Podcasts and like this show and want to support our work, please leave a rating. And no matter what platform you choose, please hit subscribe.
Thanks so much for your support.
Until next time.
Keep it simple.