
145 / Patricia Reiners Answers 3 Key Questions at the Intersection of UX and AI

Hosted by Paul Gebel and Brian Loughner


About


Patricia Reiners

UX Strategist

Patricia Reiners, a distinguished UX innovator and voice in the field of user experience, hosts the esteemed FUTURE OF UX podcast. Her expertise lies at the crossroads of AI, spatial computing, and groundbreaking innovation in UX. Based in Zurich, Patricia offers global clients expert guidance in advanced UX methodologies and emerging technologies, focusing on future design industry trends.

In this episode of Product Momentum, Patricia Reiners chats with Paul Gebel and Brian Loughner, a Lead UX Designer at ITX. During the conversation, she tackles three critical topics that UX designers should consider when thinking about how best to interact with AI in their daily work, in their careers, and in their role as ethical humans.

About Patricia. In addition to hosting the Future of UX podcast, Patricia Reiners is a distinguished UX innovator and a prominent voice in the field of user experience. Based in Zurich, she works to develop advanced UX methodologies in emerging technologies like AI, focusing on future UX design industry trends.

How do human skills compare with AI capabilities?

AI can analyze content, generate ideas, and present options far faster than humans can, Patricia says. But it lacks the creative spark, personal judgment, and sense of empathy that only humans possess. Creativity draws on emotional intelligence and lived experiences that AI cannot replicate – even with the best training. “Those are uniquely human traits that are critical in tech, particularly in leadership roles where you’re working with people.”

AI tools also lack the ability to perform critical thinking and demonstrate sound judgment, which Patricia says is “a super-important skill – especially for designers.” Unlike humans, she continues, AI struggles with making complex decisions that involve ethical considerations and subjective judgments. Designers often draw on research to make informed decisions based on values, principles, and context.

As a UX designer, how can I prepare for my future with AI?

Remember that AI is a tool, Patricia advises. “So, designers should get their hands dirty with AI and learn to collaborate with these tools so that we can better understand how they can make our work easier. When we understand their limitations and leverage them to improve our work, we become better designers.”

Dive into AI, she continues. “It’s so new, and things are changing all the time. AI works best when it augments our human abilities,” Patricia advises. “So, look for ways to integrate AI into your workflows to enhance your productivity.” Patricia also recommends that fellow designers join a UX meetup [like Upstate UX Meetup] to learn from others about the latest trends and technologies, and to be willing to share their challenges and knowledge with others.

How can I make sure the AI tools I use are ethical, protect PII, and are free of bias?

As John Maeda explained during his podcast episode, “We often forget that accepting the bad with the good is a theme of every new technology story.” That is, we understand that AI – like every new technology – is no panacea for all that ails the world. But we cannot discard these tools simply because they are flawed.

With respect to ethical considerations in AI, Patricia says that designers should do their best to find and use AI models that are trained with unbiased data. In the interim, she adds, “advocate for ethical practices; develop strategies for using AI before building products that might have been trained using questionable data.” And perhaps most importantly, Patricia says, “designers need to speak up to ask the tough questions about the quality of data AI tools have been trained on and where personal data (used by AI systems) are stored.”


Special thanks to Patricia Reiners for sharing her expert insights in this podcast episode, and especially for delivering an amazing workshop at ITX’s 2024 Product + Design Conference. Unable to attend this year? Check out what you missed!

Paul Gebel [00:00:19] Hey everyone, and welcome back to Product Momentum, a community of product people and a marketplace of ideas where leaders and learners come together to shape our way ahead. My name is Paul Gebel, and together with my co-host Sean Flaherty and the rest of our amazing team, we record conversations with thought leaders in product, UX, security and beyond that will help you shape the lives of your users through software. Check us out on all your favorite listening platforms, but for those who prefer the video experience, you can find all our latest episodes on the Product Momentum YouTube channel.

Paul Gebel [00:00:50] Hey Brian, how are you doing today?

Brian Loughner [00:00:52] Good, Paul, how are you?

Paul Gebel [00:00:53] That was a fantastic conversation we just had with Patricia Reiners. I was really interested in her optimistic, yet realistic perspective on the opportunities that AI is bringing not only to design, but to products in general. And I think the thing that I’ve been excited about, the thing that’s been helping me specifically, is the way that it allows for deep thinking, that thought work that is always elusive because you’ve always got a user story to write, a refinement or an agile ceremony to get to. And I feel like there have been some real economies of scale in the way that we approach AI, just in the ability to focus on the premium-level work; as we offload some of the mundane tasks, we’re able to focus on the real strategic stuff. But I’m curious, as a designer, what did you take away from the conversation with Patricia?

Brian Loughner [00:01:44] Oh man, a lot. I mean, just a great, brilliant, thoughtful mind. My favorite parts were definitely the things that, you know, designers and people in product can do to kind of stay ahead of the curve, to be prepared in this uncertainty. So, some pretty practical steps of, these are things that you should do, and they make a lot of sense. Things you can start doing today.

Paul Gebel [00:02:08] Yeah. Exciting times. Well, it’s been a great conversation, so let’s get after it.

Paul Gebel [00:02:14] Well hey everyone and welcome to the show. Today we are really excited to be joined by Patricia Reiners, a distinguished UX innovator and a voice in the field of user experience. She hosts the esteemed Future of UX podcast. Her expertise lies at the crossroads of AI, spatial computing, and groundbreaking innovation in UX. Operating from Zurich, she works with clients around the world and provides guidance on advanced UX methodologies and the implementation of emerging technologies, focusing on the future of the design industry. She also hosts courses and leads a design community that I’m eager to dig into a little bit. Patricia, thanks so much for taking the time to join us today.

Patricia Reiners [00:02:49] Awesome. Thank you so much for having me. And thank you so much for the nice intro, Paul, I appreciate that.

Paul Gebel [00:02:54] The pleasure’s all ours. So, I want to jump right into the deep end a little bit. During our chat before we hit record here, we were talking about this intersection of AI and design and where we’re at in the industry. And there are a lot of reasons to be skeptical and a little bit hesitant to really embrace AI technology, as it looks like it may be disruptive in a lot of different ways, but there’s an optimistic side of the conversation, too. And that’s where I want to start. Could you open up by sharing what are the skills that designers can trust will be a worthwhile investment? What are the things that AI is probably not going to be good at? In other words, where can we find the skills that are going to remain uniquely human, if I could put it that way?

Patricia Reiners [00:03:42] Yeah. Absolutely. I would love to dive a little bit into that topic. AI: the elephant in the room. So, it’s super important for us as designers, as humans, to understand what really sets us apart, right? First of all, AI is incredibly powerful at processing data, doing repetitive tasks, and even generating creative outputs sometimes. But there are several areas where humans excel and remain irreplaceable. I think the first is, of course, creativity and innovation. Those are uniquely human traits. AI can help generate ideas, but it really lacks the true creative spark that humans have. For example, when you think about a painter who approaches a blank white canvas, this painter brings their emotions, their experiences, and everything this person has experienced in their whole life into that painting, right? While AI might analyze patterns and replicate styles, AI can’t really feel inspired or have a personal vision. And I think this is something that we really need to understand: the kind of creativity that comes from learning from past experiences is something truly human.

The second thing, and I think this is the most obvious one, and everyone already knows it, is emotional intelligence and empathy, right? AI can analyze and respond to emotions to some extent, but it doesn’t truly understand or feel them. And we’ve probably all seen the demos, or maybe even tried out the new ChatGPT 4o version. And I think this one is super fascinating. It laughs at your jokes, it is sarcastic, and you feel like, oh wow, this chatbot is actually getting me. There’s empathy, but it is just trained empathy; there are no real feelings behind it. Right? So, the ability to empathize, offer comfort, or provide this nuanced support is something that AI can’t really replicate. This is not possible. This is the human touch. And this is crucial also in tech, in leadership, working with people.

And number three, the third thing, is of course critical thinking and judgment. And this is a super important skill, especially for designers. Right, Brian? Like for all of us here, this is also where [unintelligible] really shine. AI can analyze data and present options, but it really struggles to make complex decisions that involve any kind of ethical considerations or subjective judgments. Think about a business setting, or choosing a certain design direction. It’s really difficult to balance profit with social and ethical responsibilities. So this is something where a human is so much better, while an AI would just basically analyze the data. Humans can really weigh the various factors, like long-term impact for example, and make informed decisions based on values and principles. And also judge AI’s work, right? Like understand what is better and why it is better, because they have the context. And AI is unfortunately not really good with context, or maybe not very good with context, but we humans are very good with context. So, I think those are the areas, and they will stay the same. Even when AI advances, it might learn certain ways of appearing to have empathy, but it will never have feelings or anything truly human. And those are the things we designers all need to learn and practice a little bit.

Brian Loughner [00:07:26] I agree wholeheartedly. I still kind of go back and forth on whether AI can be creative. I do agree with everything you said, but there’s something about the way my mind responds to speech and images as if they were coming out of another human being. So I still kind of have this duality of, is it creative? Is it not? But when you talk about the future in our field, what advice would you give to people about preparing for the future, especially with the uncertainty, and what kinds of opportunities it creates and what kinds of impacts do you see it having on our industry and every other industry? Because AI is no longer a data science or computer science term. It is a design term. The goalposts are moving.

Patricia Reiners [00:08:10] Yeah, Brian, I totally agree. I know what you mean. AI sometimes feels creative, and AI is great at brainstorming, for example. So, it’s not that it’s not creative, but how do you define creativity? I think this is also something that we as humans need to redefine a little bit. But back to your question. I think it is super important because this is what we all want to know: how can we prepare now? We already talked about the skills that are typically human, or that AI is not really good at, but how can you actually prepare starting today? And I usually say there are three things that I would recommend.

The first thing is: don’t only focus on your own human skills, but also try to collaborate or work with AI. So, get your hands on AI, get your hands dirty. Experiment with different AI tools and techniques. Try using AI-powered design tools, prototyping tools, ChatGPT, just to name a few, or analytics platforms. Those hands-on experiences are super important because they will teach you the limitations of these tools and also how you can actually use them to leverage the outcome, to become a better designer, to replace the repetitive tasks, you know, step by step. It’s not working 100% perfectly at the moment. But this is the thing: all the tools are getting better so, so, so fast. We have seen that with Midjourney, for example. I think this is a great example, and we are seeing it with image generation and with video generation tools, right? Like there are new tools. And you will also see the same, of course, with interface design or UI design, graphic design, all these kinds of things. Right?

So, my recommendation would be to get your hands on AI, try things out. And if you are not allowed to do that in your workspace or in your work environment, you can simply work on your own mini AI project or on a hobby, something like that where you really try things out. For example, use ChatGPT to create a recipe or choose a movie. Anything that really gets you into the field and helps you to try things out. The second thing, which I think is super important for all of us because we are in a very new situation: we need to stay curious and keep learning. And, you know, I’m sure we have all seen the content around AI on social media. And I think it’s pretty interesting, because not everyone is super hyped about AI. Some people are very, very, very cautious, and some even say we should block AI, we shouldn’t use it, we collectively shouldn’t use it as a design and tech community, which I think would be the worst thing to do. So, I don’t encourage that. We need to learn and dive deeper into these things to understand what the limitations are, and also to guide design projects in the right directions and make sure that no other departments are taking over the design work because they know what they’re talking about.

So, we need to learn, and we are all in the same boat at the moment. Everyone needs to learn how to use these tools because it’s completely new. So, I would recommend taking courses, attending workshops, conferences, anything that helps you to understand the latest trends and how they might impact your work. Because your work setup is very different from mine or Brian’s or Paul’s, right? It’s different for everyone. And I think this is super important. And the third thing is, of course, collaboration. AI works best when it really augments the human abilities. So, look for ways to integrate AI into your workflows to enhance your productivity. And it doesn’t really mean that it needs to replace your whole workflow. It can take over a tiny part of that workflow. And I think this is what a lot of people don’t get. It doesn’t need to replace everything. It’s a tiny part of a super big workflow. And you need to find the certain areas where AI really shines and takes over certain tasks, like repetitive tasks: summarizing meetings, summarizing user research interviews, transcribing them, rephrasing them, coming up with questions, all these things, even coming up with images, brainstorming. So, my advice would be: dive into AI, know the tools, try them, keep on learning, do courses, join a community where you really do these things together, and then also collaborate with AI. Try to see how you can use it on a daily basis, as a habit basically.

Brian Loughner [00:12:48] Yeah. That’s awesome. And Patricia, I’m glad you talked about some of the activities that we as people in UX and product do. I think back to my job five years ago, and the outcome is kind of the same, you know, create delightful experiences that support the business. But the tasks and the way that I’ve done them have changed completely over the last five years, and in the five years before that. So, as we’re seeing a lot of changes in products, how do you see AI playing a role in the way that we do our jobs?

Patricia Reiners [00:13:18] Yeah. Brian, I think that’s an awesome question. And you mentioned how much the industry has changed, and I couldn’t agree more. I remember when I started as a designer, we used Sketch back then; it was right after everyone was working with Photoshop to design web pages. After that, we had Sketch, and the design projects we created were saved in local files on our computers, or maybe on a cloud server from our agency. And when we wanted to present to the client, we prepared a PowerPoint presentation and sent it as a PDF. If you told that to a junior designer today, they would laugh at us. Because now we are all in Figma. Our clients are in Figma. We are writing comments. We are working together collaboratively. Collaboration has become so much more important. Handling these comments when someone asks for some changes, and then on the fly you make the changes to try things out. You work together and people are watching you all the time. Like, you collaborate, right? This is totally different than sitting in your own dark room preparing something and then, after two weeks, sending it out as a PDF.

And collaboration is such an important topic and such an important skill, not only with stakeholders and clients or in workshops or through collaborative working, but also with different departments, like data scientists for example. This is totally new. We used to work with developers, but with data scientists it’s different, right? Because they are creating the AI models. And this is not something that they should do in a silo. We need to communicate and also help them understand what the right data is and how it affects our interfaces. Talk with them about prototyping, about testing, about research. So, the whole collaboration with data scientists is super important. The second thing that I’m seeing as a big, big change, especially for designers or product people in general, is that the role is changing more and more into becoming a facilitator. Not only bringing groups together as a facilitator, but also facilitating AI tools, right? They are handling the more routine design tasks, and designers shift more towards facilitating the use of these tools, guiding the creative process. And this means designers will really spend more time ensuring that AI outputs align with the product vision and the user needs, and curating the content, rather than just creating content or the UI. The whole part of curation is super important, the same as facilitation; I think this goes in the same direction. And of course, the role of UX research will also evolve. Super important, right? Because now we have so much data that can be analyzed, and AI can really provide advanced analytics and insights. So, researchers also need to think about these things, and they will need to focus on analyzing these insights and understanding the nuances that AI might miss, double-checking all these things. It involves more strategic thinking and less basic data collection. And I think this is the overall trend that we are seeing: more strategy, being more proactive. You know, judging whether the output is good enough, seeing where certain things are missing. So being very, very proactive and strategic, I think, is super, super important. And this is also how the roles are evolving, because everyone is working with AI nowadays and will even more in the future.

Paul Gebel [00:17:10] I couldn’t agree more. I think some of the observations that I’ve seen on the product side of teams is that there’s more room for deep thought work, where it used to be captured in sort of user story writing and refinement and agile ceremonies. A lot of that has become abbreviated to some extent, just because it’s so much faster to fast-track those mundane tasks. There’s long-form writing where we can give some light guidance, and it gets us 90% of the way there, and we can help elaborate from there. The thing that does give me pause, and I want to shift gears a little bit and jump off from that thought, is with all of this data sharing and terms of service, there’s any one of a number of stories that you can point to about ethical violations, about the way that these models were trained.

So not finger-pointing at any one company or policy or news item specifically, but just in general, ethics is a bit of a new field in AI, where there are valid arguments for openness and transparency and maximum training, and there are valid arguments for artists and designers whose work is targeted for models to get trained on and who are then not compensated or attributed when that work is used as an influence for new creations from AI. That’s just one example; you could, I’m sure, think of a half dozen more that you’ve seen in the news lately. So as creatives, as product professionals, how do we reckon with this moving landscape of ethics? We can layer in GDPR in a minute if you want to go into a little bit of specifics, but just at a high level, how do we as professionals keep our moral compass straight from an ethical perspective, but also not get left behind with this technology that’s obviously here to stay?

Patricia Reiners [00:19:09] Yeah, I think this is such an important point that you brought up about ethics, and it’s also a bit new for us as designers to talk about these things. The problem is we need to come up with a strategy before we build any products and also before we train any AI models. So, we need to say, what is the data that we are training it with? And designers need to speak up and ask these questions: what is the data that we are training it on? Look at the data, see what data might be missing, whether the data is good enough, whether the data is biased. All these things need to be aligned on before we’re even starting an AI project. And what we are seeing, like in a few of the cases that are currently in the news, everyone can read them, and I assume they decided it’s totally fine, like, we don’t care, because it’s always easier to say we’ll throw everything we find into these AI models and wait until someone complains if there is any problem. And that way they improve the models much faster than the people who are doing it the right way.

What we are also seeing is, for example, Adobe Firefly. They are doing it so well; they’re really compensating all the artists whose work has been used for the training. I don’t know another company who does that. And I think this is the role model for how other companies should do it, and also how we as designers should advocate for these kinds of things, because we don’t really know what happens, right? We need to think about the long term. In the beginning, it might be great to just throw everything into the model, you know, it’s so fast, but you don’t know what happens in the end. And we are already seeing results that are not actually what we want. So, I think we need to be very, very cautious. The other important thing is, of course, general data privacy and security. For us, it’s so important to ensure that the data used by AI systems is stored securely and handled responsibly. For instance, all personal data must be protected against unauthorized access and breaches. Someone from the tech team should make sure that robust security measures are implemented and best practices for data protection are followed. Not super easy. And I don’t say that a designer is responsible for it, but I do say that a designer definitely should talk about these things: how the data is stored, should we do testing, if there are any breaches, how difficult is it actually to access the data? Because if something happens, the company or the provider gets into big, big problems, and we don’t really want that. So it’s something that, I think, you need to think about beforehand. Another important thing is, of course, biases and fairness.

So AI models need to be trained with data, and sometimes they are biased. Most of the time they are biased, because the data that we are training AI models with is usually from the past, where a lot of these biases existed, or I would say not quite biases, but we had a different understanding of certain roles, for example. In the 50s, for example, a lot of women were not working; they were at home taking care of the kids. That is not how it is at the moment, right? Women are working, we have equal rights, and this is also how we want our AI models to respond. So, this is the problem: when we train AI models on the past, they probably won’t represent the future. And usually you need to go through the data and delete these things, don’t train with them, because it’s so difficult to get them out in the end, almost impossible. And then another thing is transparency, right? For users, super important. What kind of data has been used to train the model? What data that I am currently inputting into the system is used for training? What is not used for training? It needs to be transparent. This is something in the typical user flow that designers need to think about and to talk with the design team about how it actually is. So yeah, a lot about user control, user consent, and asking these ethical questions would be a super important thing for designers in general, and also facilitating these kinds of discussions.

Brian Loughner [00:23:50] For sure. So, you mentioned a couple of those things. You know, when people in product are using AI, I see a lot of OpenAI ChatGPT wrappers. And I see a lot of concern from these companies that are using it, like, what is it outputting? This is my voice. This is my product. And what if it says something inappropriate? There are definitely ways you can kind of trick these chat models into saying things that may not be appropriate for them or their brand. So, this kind of black box, where it’s not as easy as deleting a cell in a table, is making things a little bit more complicated in that space, for sure.

Patricia Reiners [00:24:33] 100%. And this is new, right? This is very different from what we used to work with, especially thinking about the AI products that we are designing. Most designers have already gotten their feet wet and worked on their first AI products, and there are things you need to think about, in the best case before you integrate AI. And I think this is super important, especially nowadays when AI is such a hype and everyone wants an AI feature, just push AI into the product, you know, I don’t care how. So, I think you really need to be strategic, guide these discussions, and think about how we should actually do that. What is the best way that really helps the product and the user? But the user will still be the same. We are not designing for an AI yet. We are designing for the user. It’s just not the same, right? Let’s see what the future holds.

Paul Gebel [00:25:34] Well said. We just have time for another question or two, and I wanted to take a moment to break the fourth wall for a second. As of the time of this recording, we’re about a week out from welcoming you to town here in Rochester, New York, for our annual product and design conference. So, when you’re listening to this in the future, that will have already happened. But, Patricia, I just wanted to ask, if you’d be willing to share, where can people go to find out more about the courses and the communities that you’re leading? What would be a great place to point people to, in addition to the recordings and content we’ll share coming out of the conference? Where would people be able to find more about what you’re talking and thinking about?

Patricia Reiners [00:26:13] Sure. So, I’m very active on social media. You can find me on LinkedIn, Patricia Reiners; I’m sharing a lot of resources there, and also on my Instagram. I also have a free mini training. It’s a 30-minute AI mini training. It doesn’t cost you anything but your email address, so I can send it to you. There you get an intro on how to get started with AI as a designer; you learn certain strategies, you learn certain tools. It’s, I think, the perfect starting point for you. And as I mentioned, it doesn’t cost you any money. If you want to dive deeper, I am also running an AI bootcamp for designers. It’s a six-week, very intensive program where you go through all the steps of really leveraging AI. This is especially interesting for people who are more senior, or leaders in the tech industry, who really want to understand what is happening. We talk about not only the technical terms, but also the tools, how to apply them, and then think about the future and how to connect this to your own work environment. And for everyone else, I’m also running a design community called Design Visionaries. It starts on July 1st, so when this podcast episode comes out, it’s already open. This is for everyone who’s interested in continuously learning about these tools. There’s so much going on, and the community will be a place where you find content curated for you to remove all the buzz and the overwhelm, and a place where you can connect with designers and leaders, share questions, have discussions, and join weekly calls. We have a monthly masterclass on a specific topic; the first one is about prompting for designers, a super important skill. And yeah, you’re welcome to join us. And of course, always say hi on Instagram or LinkedIn; I’m so happy to connect and start a discussion. I love to meet everyone.

Paul Gebel [00:28:19] Well, thanks so much for taking the time today. It was a blast getting a peek inside your thought process and hearing your outlook on the future for a lot of reasons, to be optimistic, despite a lot of uncertainty. It’s great to have your voice of clarity in the mix.

Patricia Reiners [00:28:33] Thank you for saying that. Yeah. I think it’s so much better to see it optimistically and have courage, but also be a little bit cautious. There are limitations; we talked about the ethical problems, GDPR, data privacy. You need to be very cautious about these things. But the only way to deal with it is education, learning, trying things out, and, you know, having something you can talk about, not just copying things from the internet, but forming your own opinion about it.

Paul Gebel [00:29:03] Well, thanks again for taking the time, Patricia. It’s been a blast. Cheers.

Patricia Reiners [00:29:07] Cheers. Thank you, Paul. Thank you, Brian. See you. Bye.

Paul Gebel [00:29:14] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get. So please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.

 
