
NN/g UX Podcast

The Nielsen Norman Group (NN/g) UX Podcast is a podcast on user experience research, design, strategy, and professions, hosted by Senior User Experience Specialist Therese Fessenden. Join us every month as she interviews industry experts, covering common questions, hot takes on pressing UX topics, and tips for building truly great user experiences. For free UX resources, references, and information on UX Certification opportunities, go to: www.nngroup.com

This is the Nielsen Norman Group UX Podcast.
I'm Therese Fessenden.
The term UX has long been used synonymously with user interface or UI.
But these aren't exactly the same thing.
And over the past several years, many firms have realized that the design problems they're trying to solve tend to be a bit larger than tweaks in visual design could ever address.
With this expansion in scope comes an expansion in data points, and also a critical need to
manage and use these data points to create more meaningful experiences.
But how do you manage these massive data sets?
Well, one way to do that is artificial intelligence and machine learning.
Today you'll hear my conversation with Dr. Kenya Oduor, founder of the UX research and design agency Lean Geeks.
Kenya is a human-centered researcher, strategist, and solution designer.
Prior to founding Lean Geeks, she was the director of user experience at LexisNexis,
and also was a user experience engineer and project manager at IBM.
She holds a PhD in human factors from NC State University, and is also trained in experimental
psychology and industrial engineering.
In this episode, we discuss the increase in the scope of UX work, how coexistence with
AI and automation will change UX work, and finally, how teams can avoid bias and improve
strategic thinking when analyzing large sets of data.
With that, here's Kenya.
It seems like there's this shift in the field, like there's more intentionality around product
and service creation.
We used to focus more on interface elements, or how do you design the perfect widget or
the perfect button?
And now, many firms have started thinking about their products as services.
Could you speak to any interesting themes you've been seeing in the space of service
creation?
Yeah, yeah.
So, when I reflect back on my roots in human factors and industrial engineering, we were
being trained to think from a systems design perspective or a system thinking perspective.
And the difference, I think you described it really well, was that my application of what I was learning was more focused on that micro-interaction with an interface, where what I was learning in school was around the study of work.
And when you think about the study of work, you think about the people in that context, the training, the environment, climatic factors, the equipment.
So, it was a part of the foundation, at least for me in my training, to think about all
of those things.
But when I would go to work, it was always about just that visual interface or the mode
of input.
And so, what I've seen is things kind of come full circle, at least in my own experience
where we're starting to come back to that systems thinking or systems approach to design.
And with that, service design is a huge component or factor in that.
And so, when we think about one's experience with things, sometimes when you have someone
come back and tell you they had a really great experience, a really terrible experience,
it goes beyond just the interfaces.
It speaks to the way that you experience other aspects of that journey or that service.
And so, I think as an industry, we recognize that you have UX people that have those capabilities.
Why not leverage them?
And also, when you think about the overarching goals of your organization, you serve your
customer from awareness all the way through to support.
So, you've got to think about all of that as the service.
Yeah, definitely.
It does seem like a lot of organizations are growing beyond myopic focus or a really short-sighted
focus on the visual details.
Yes, the visual details are important and definitely part of that process, but we're now thinking of it as part of something bigger.
I like what you said.
It's like coexisting.
It's like this ecosystem of things that are happening in a user's life or in a customer's
life, including when they learn about a brand all the way through to when they're getting
support.
So, yeah, that is something that's really fascinating about the space for sure.
Now, I'm also thinking to this ecosystem.
We have, of course, a lot of digital products, but we also have some physical products.
But that also means we have a ton of data, more data than we've ever had before.
And I can imagine big data is only going to grow bigger, especially as we have Web 3.0
on the horizon.
So, what do you think teams need to be especially aware of or cognizant of when they're starting to utilize that data?
Because of course, it may not be humanly possible to deal with every single data point.
Maybe we've got to automate certain things, or maybe we've got to turn to machine learning or AI to solve problems.
Do you have any advice for teams on what they should be thinking about?
Yeah.
So, going way back to my dissertation and research, a lot of my interests and focus were on the spectrum of automation, from fully manual to fully automated.
And a lot of my interests at that point and currently are on the fact that we've got to
understand when we create these systems that wherever you fall within that spectrum of
artificial intelligence or automation, we've got to think about what does the human in
the loop have to understand?
What do they have to give and get from the system so that they can ultimately develop
trust?
That trust building and that trust relationship is just as important as it is between you and me when we transact.
So I think a big part of it is understanding that it's not just about managing the data, serving up the data, analyzing and presenting insights around the data, that big data.
It's also about what do I give that person to allow them to trust the data and to understand
where it's coming from and what goes into it.
One of the big areas of looking at the AI space that I'm passionate about is making
sure that we consider that whole garbage in, garbage out statement and that what goes into
your training data set has a lot to do with the utility of what comes out.
So if you have not factored in that data, understanding what makes it up and whether it covers the full set or gamut of what needs to be considered, you're going to have limitations baked into the system that you may not be aware of.
Those limitations become biases, but then ultimately they impact the person consuming
them.
Their decisions are not as robust or informed.
And so I think, again, this goes back to trust, but a lot of it means understanding foundationally
what are people trying to do with this data and is it a sufficient data set for them to
work with.
Second to that, I think it's also going to change the definition of work.
And so obviously we're already seeing where automation and AI are taking over some aspects
of what we have historically or traditionally called work, but we have to also think about
as individuals, what does that mean for my role?
How do I need to reevaluate the capabilities that I have that maybe I underestimate or
don't currently utilize but have?
And so we've gotten into coaching people around that, because you sometimes forget those inherent strengths or capabilities you have that cannot, today, be replaced by AI, automation, that kind of thing.
And sometimes it takes things as small as asking a friend, when you think of me, what
comes to mind?
What do I do differently than anyone else that you really love?
Sometimes questions like that can trigger you thinking differently about your own capability
and career and how you're going to coexist in a world where AI, big data, automation
are doing a lot of things that we did historically.
Yeah.
Yeah.
That's a lot to think about as well.
I didn't even think about the fact that work itself is going to look different because
we're going to automate certain things.
And maybe there's certain data analysis that might be very useful in this day and age,
but in several years, that data might be analyzed by a machine and we might not need to extrapolate
data in that same way.
So yeah, I actually want to dig in a little bit.
You mentioned that establishing trust with AI is something that is critical to its success.
So what do you think are the things that prevent people from trusting AI?
Because it does seem like there's this uneasy sentiment about it.
And I'm curious what you've found in some of your work.
Well, first off, I think it starts with what the data or the AI is being used for, and making sure that you set clear expectations for the consumer of that AI capability: this is what we expect this can be used for, and it doesn't go beyond that.
So it's kind of like you're in dating.
If we're dating and you let me know upfront, this is not going anywhere, this is not going
to lead to marriage, you set the expectation that this relationship can only go so far.
So set that same expectation from the developer's perspective, set that expectation in how you
present the information about the system.
I teach HCI to software engineering students in the computer science department.
And a lot of what we look at is analogous to hardware systems or physical systems and
how we look at software systems kind of in the same way.
So one of the examples we use is a bicycle versus a CPU or a desktop computer, the big box that we used to have under our desks.
One is a black box, whereas you can see all the pieces of a bike and how the chain goes around the sprocket, and when you pedal, you see everything move.
So the ability to kind of see what's happening and how you're producing action is very different than when it's black boxed, where I'm just typing something at the computer, something happens in that black box, and then something comes back to me.
So one of the responsibilities that I think is also required to help build that trust relationship is to remove some of that black box-ness from your system.
You don't necessarily have to show how every connection is being made, but demonstrate some amount of the inputs, what's being done with those inputs, and the outputs, the information and actions you take with a system, so that people can understand, can have a mental model of sorts of how that system works.
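[Editor's note: To make that idea concrete, here is a minimal sketch, not from the episode, of a system that surfaces its inputs alongside its output. The scoring model, feature names, weights, and threshold are all hypothetical; the point is only that the response carries enough of the "how" for a user to form a mental model.]

    # Hypothetical linear scoring model; weights and feature names are made up.
    WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}
    THRESHOLD = 50  # hypothetical approval cutoff

    def score_with_explanation(applicant: dict) -> dict:
        """Return a decision plus the per-input contributions behind it,
        so the interface can show users what went in and why."""
        contributions = {
            feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
        }
        total = sum(contributions.values())
        return {
            "decision": "approve" if total > THRESHOLD else "review",
            "score": round(total, 1),
            # Surfacing the inputs and their effects removes some black box-ness.
            "inputs_considered": {k: round(v, 1) for k, v in contributions.items()},
        }

    print(score_with_explanation({"income": 120, "debt": 40, "years_employed": 6}))
    # -> {'decision': 'review', 'score': 49.2,
    #     'inputs_considered': {'income': 60.0, 'debt': -12.0, 'years_employed': 1.2}}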
Yeah.
Yeah.
The visibility.
I love that you said it.
It's the lack of transparency that can often just kind of leave you with a lot of questions, and questions don't exactly inspire trust.
So absolutely, it's about establishing some expectations, and at least one of the usability heuristics pops into my head: visibility of system status.
Just what's happening, and what is this capable of?
Yeah.
That's right.
Yeah.
And I guess I also want to dig into this: are we going to get automated out of a job?
That's sort of what pops into my head when thinking about how work might change.
What do you think might be hard to automate, or where do we as human beings have a leg up?
Well, there are a lot of limitations to AI today, because AI is only as good as its training data set, and it's not well suited today to account for new inputs that were not a part of that training.
Take information that we've learned: when you think about your ability to perform certain actions, like cooking or riding a bike or some of those things you learned when you were younger, there are things you learned that you may not use for decades, but you still have that skill, and how to execute that procedure is encoded in your long-term memory, essentially.
If the AI is not trained to do that, and today we're not at that place where we've been able
to create AI that can actually formulate new ideas and thoughts that are not based on what
it's seen as inputs in the past, we still have that leg up of that context of experience.
We also have emotional intelligence.
Think about experiences at stores like Starbucks or other companies that really lean in on or double down on customer service.
Some of those things require an individual who can assess, as a customer walks up, what mood this person is in and how to respond to them, where AI, the way things are coded into it, won't necessarily be able to go off course and come up with something new based on what it's encountering.
So I think we have the ability to leverage a lot of those skills that we call soft, but that are really important, the soft skills we have as humans that AI can't replicate.
Yeah.
It's really fascinating thinking about how much that data set really does shape what an AI can do, but also what it can't do.
You mentioned earlier being cautious or just conscientious about the data that goes into that training set.
What are some of the ways you can do that?
What are the things that teams should maybe look for in order to make sure their training data set is really as well-rounded as it could be?
The first thing that comes to mind is the composition of your data science and your
machine learning and your AI team.
When I think about data around humans, for example, there might be individuals or characteristics that get missed if you don't have a lot of different thinkers on your team or people with different contexts.
It's not an intentional kind of thing, but it might just be that because we all think along similar lines or have similar contexts, we think about that data from the perspective of all of us in the room being the same or similar.
Opening your teams up to diversity not only helps in terms of the diversity movement overall, but it helps in that thinking when you have brainstorming and collaboration going on, where people are coming from different perspectives and contexts, whether it's around neurodiversity, physical differences, and that kind of thing.
You want people who come to the table with those differences, because that helps you foundationally consider the fact that things need to be kind of broad in their consideration.
I also think that when you think about how you position your AI, you've got to recognize
that in selling that concept, you have to level set your consumer with the fact that
our system works within these guardrails or parameters.
On the other end is the consumer.
When you walk up to a system where someone says it can do these wonderful things, like a driverless car, you want to be able to take it out of the context it was trained in, the context that training dataset covers, and see whether or not it can perform outside of those guardrails, because if it can't, that sets for you the expectation of what you can get from that system.
Yeah, so it does seem like, and I'm just thinking of driverless cars here, it's about having sufficient exposure to not just a broad dataset, but the right dataset, one that's representative of what's happening in reality.
You can't drive a car in a parking lot and expect that it'll do well on a highway.
Exactly.
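[Editor's note: That kind of representativeness check can start very small. As a sketch, assuming a training set stored as rows of records with a hypothetical age_band field and an arbitrary 15% minimum share, a team could flag under-represented groups before any model training happens.]

    from collections import Counter

    MIN_SHARE = 0.15  # hypothetical floor: flag any group below 15% of the data

    def audit_coverage(rows, column):
        """Flag subgroups in `column` that are under-represented in the set."""
        counts = Counter(row[column] for row in rows)
        total = sum(counts.values())
        return [
            f"{group}: {count / total:.0%} of training data, below the {MIN_SHARE:.0%} floor"
            for group, count in counts.items()
            if count / total < MIN_SHARE
        ]

    # Toy training set: ten rows, with only one "71+" driver-age record.
    training_rows = (
        [{"age_band": "18-30"}] * 3
        + [{"age_band": "31-50"}] * 4
        + [{"age_band": "51-70"}] * 2
        + [{"age_band": "71+"}] * 1
    )
    for warning in audit_coverage(training_rows, "age_band"):
        print(warning)  # -> 71+: 10% of training data, below the 15% floor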
Yeah, so related to that, thinking about data-driven decision-making, what mistakes do you see
a lot of teams making?
It doesn't have to be limited to AI.
It could be other types of data-driven decisions for designs.
What mistakes do you see teams making?
And then maybe what the high-performing teams, what are they not doing?
What are they getting right?
So in my experience, I've seen teams lean on data that's available or data that aligns with their perspective or opinions.
Those are the age-old problems of bias within data analysis or data-driven decisions: only listening to or consuming the stuff that you feel aligns with what you think.
That's the biggest problem in a lot of the current challenges we have in our society.
It's only listening to that voice that confirms what you believe.
What I also think is problematic is when people don't lean on any data at all, where it's a lot of assumptions.
And I always tell teams that we work with, when you start to say "I think" a lot in conversations, it's time to pause and go get some data, and make sure that that data collection, again, is not biased around "I want to find something to confirm what I'm thinking or feeling."
High-performing teams get it right when they have experts involved in that data collection
and analysis.
I've seen people in roles where they're responsible for defining how to gather the data, and also gathering and analyzing it, when they probably didn't have expertise in those areas.
And so there were some problematic aspects of the data, or biases baked into it, that the rest of the team wasn't aware of.
So it's having experts in terms of your partnerships within your organization, but also not only having the data, but knowing when to stop and pause and get the data.
A lot of times teams will keep going down a certain path, and they don't stop to say, hey, maybe we need to gather some data and validate this concept or this assumption that we have.
They keep going, and then they find out later on, oh, we've got to go back and rework, or throw away, or put something in the backlog, that kind of thing.
So the right timing, the right people, and also making sure that you have the right data objective.
Yeah.
Yeah.
There's a saying, well, not a saying, but one of my colleagues, Kara Pernice, regularly mentions her gripe with the word validate, because it often implies that, oh, we already know it's correct, we just have to make sure it's correct.
So that confirmation bias doesn't always get eliminated with the validation step.
But if we treat it like it's validate slash invalidate, then we can make sure that we're
actively disconfirming things that we expect to be true so that we can make sure we get
to objective truth.
Yeah, I think so.
I always lean on her whenever I'm wondering if I need to double-check my research plan: does this make sense, and does this have the appropriate means by which I can check my assumptions?
It can be hard to get right, but being cognizant of it does help.
So I guess my last question is about people building something that doesn't exist yet.
This is something I'm often asked: I don't have a design, so how can I do research?
Do you have advice for that?
Yeah.
I actually had a meeting yesterday with a prospect, a team of folks that have an idea,
and they started the process of creating their own wireframes.
But the interesting thing is, one of the questions that came up in the room was around, well,
how do we validate this if we don't actually have a working product or even a concept?
And in my experience, we've framed things from an opportunity standpoint.
This goes back to some of those design thinking opportunity statement concepts, where you frame it around who you are in that context, what you're trying to accomplish, and what we're going to do in terms of solving that or helping you do it.
And framing things in that way, providing something in the form of a storyboard, sets the stage for the person: what's happening, why it's happening, what may have occurred, and why you potentially need this solution or can utilize this solution.
So providing people with that context, and giving them an opportunity to share ideas on what they expect and what would work in that scenario versus not, gives you something more.
It kind of progresses your idea in terms of, are we going down the right path or do we
need to shift our understanding of the idea?
And so I think it starts with first having those conversations with people outside of
your team.
And just kind of going through from the abstract to the concrete, so taking that and then turning
it into what does a concept or a prototype look like?
Put that in front of folks and see, hey, we talked about this scenario, and you said that
this was an opportunity for us to solve around this, what does this look like in that context?
Does this feel like something that would make sense?
So getting some of that feedback, and again, this goes back to knowing when to get the
feedback is just as important as the feedback itself.
So as you use those conversations as a way to shape that and mold that idea, it's almost
like a co-design effort that allows you to get to a place where you're not just designing
in a vacuum and then kind of like the big reveal and people are like, I don't care.
They're not like, oh, wow, what is this?
So it kind of helps you validate or invalidate along the way what makes sense.
Yeah.
Yeah.
So it's two parts that I'm imagining.
So the first part being you really need to know that problem space really, really well.
And that means talking to your prospective customers, even if there isn't an existing
product.
How do they solve that problem right now?
Maybe they don't.
What does that mean for them?
What are the things that impact them as a result of not solving that problem?
And then the second part is you start to co-create this new idea based on the previous information,
but also maybe based on some additional check-ins with that group of users.
It doesn't even need to necessarily be the same exact group of people, but it could be
someone representative of that group and you can continually get feedback.
So I do think it's about consistently iterating and not waiting for that grand reveal before you get that final feedback, quickly just making small changes, but basing them off of some new data from that group of users.
Yeah.
And I have also found that the real key to that is making sure you understand the demographic variety or spread of the people that you're targeting.
Don't just find the people who are easiest to access.
I think this goes for any research, and we at Lean Geeks try really hard to do this, even though some groups are harder to gain access to than others.
You'll learn some really interesting nuggets that you didn't think of, because everyone's cultural context is different.
And cultural context means a lot of different things.
And I think that's where, with a lot of companies, especially today with social media, we're seeing a lot of fails in terms of being sensitive to and also understanding cultural context.
Yeah.
Yeah, absolutely.
It takes a lot of research and a lot of listening.
Now, I guess what also comes to mind, thinking about designs and the future and being mindful of who you're catering to, is that we might not always be lucky enough to have no product and be able to start from scratch.
I'm also thinking of the folks who have really gnarly legacy systems, where either they're building something to coexist with them or maybe they're starting anew.
What advice do you have for those who are less lucky?
It's funny because I like to solve problems.
I like to have really big, complex problems with different types of users and like unpacking
what are their motivations?
What are they doing?
How can we help make their ecosystem more effective or their existence in that ecosystem
more effective?
And so when I think about that, the challenge of working on something legacy can be just as interesting as creating something new, depending on how you look at it.
And you just described the scenario, how do you build something that coexists with that
legacy system?
We have lots of systems and products and tools out there that are legacy and will be that
way for a long time.
So it's figuring out some novel ways to serve up their value or to keep them going without throwing the baby out with the bathwater, because you can't just scrap it and start over.
You've got to figure out creative ways to serve up that value differently and to leverage
what's out there now that we didn't have 15, 20 years ago.
I think those types of challenges are so interesting.
Sometimes they're not as rewarding for people from a design perspective, and when I say design, I mean the visual design perspective.
It may not be as interesting in that regard, but there's so much more of interest to me from a systems perspective, a systems design perspective.
I had a coworker years ago that was a product manager on a legacy product and he was comparing
himself to another colleague who got to work on the new sexy stuff.
And so he's like, you know, oh, how do I sell myself at the next company or looking for
my next role?
How do I sell myself when I'm working on these old clunky systems?
And I'm like, there's a huge opportunity there to brand yourself as someone who's owned and
managed legacy system evolution, you know, and how to make end of life decisions, but
also where do we continue to leverage capability?
Like that is a whole skill, that is a whole domain and discipline that's necessary.
And so I don't know if that made him feel any better, but I was like, you know, there's a lot to be said about people who have the capability to think about some of those technical limitations and debt in the context of creating something new.
Yeah.
Yeah.
So in a way, maybe not thinking about it as an unlucky problem, but a lucky problem.
I guess the other way I've heard it described is that it can only get better because it can't get much worse, right?
There can be some excitement in solving some of these super complex problems, especially
if we can think of it as a problem solving opportunity and a way to really think a bit
bigger than some of the quote unquote easier problems.
So yeah, and we certainly won't be automating ourselves out of that job anytime soon.
Exactly, exactly.
That was Dr. Kenya Oduor.
You can learn more about her work at leangeeks.net or by following her social media channels,
which are linked in the show notes.
You'll also find in those show notes some links to NN/g articles and videos, but there are plenty more where those came from at our website.
By the way, there are some upcoming opportunities to learn either virtually or in person and
details on those events can also be found at our website.
That's www.nngroup.com; that's N-N-G-R-O-U-P dot com.
Finally, if you enjoy the content you hear on this show, please leave a rating and hit subscribe.
This show is hosted and produced by me, Therese Fessenden.
All sound editing and post-production is by Jonas Zellner.
Songs are by Tiny Music and J Carr.
That's it for today's show.
Thanks for listening.
Until next time, remember, keep it simple.