About
Nate Andorsky is an entrepreneur who uses behavioral science to build digital strategies and technology for today’s most innovative companies and nonprofits. He believes the key to unlocking the potential of technology lies within our understanding of the psychological factors that drive human decision-making. By combining scientific findings with outside-of-the-box thinking, he helps turn human understanding into business advantages.
As the CEO of Creative Science, he leads a team focused on this mission. He is a frequent international speaker, has been featured in Forbes, Inc. Magazine, and The Huffington Post, and his team’s work has earned accolades from Fast Company and TopNonProfits.com. Prior to Creative Science, he was a team member at the Startup America Partnership, a nonprofit led by Steve Case to help build entrepreneurial communities throughout the US. He geeks out about the intersection of human behavior and technology, and the ways in which it can improve human outcomes.
Recommended Reading
Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard H. Thaler and Cass R. Sunstein.
Thinking, Fast and Slow, by Daniel Kahneman.
The Elephant in the Brain: Hidden Motives in Everyday Life, by Kevin Simler and Robin Hanson.
Decoding the Why: How Behavioral Science is Driving the Next Generation of Product Design, by Nate Andorsky.
In this episode of the Product Momentum Podcast, Sean and Paul welcome Nate Andorsky, CEO of Creative Science and author of Decoding the Why. His many contributions to our space sit at the intersection of human behavior and technology, and the ways in which it can improve human outcomes.
Nate recommends taking a behavior-first approach to solving product design challenges. “Zero in on the behavior you’re trying to change and work backward from there,” he says. “Oftentimes when we build products, we get into this habit of thinking solution first.”
We collect all sorts of information about users from focus groups, surveys, and in-person interviews. Much of it lands in two big buckets: what people say and what people do. All that is great. But too often the say and the do don’t line up. So as product leaders we need to continue our discovery process to better understand the “Why?”
Tune in to the pod as Nate shares insights around his concept of “say data, do data, and why data.” The why data explains the subconscious factors that are actually driving user behavior, the types of things your users aren’t even aware of themselves.
Once you understand that, Nate Andorsky adds, you have a foundation and a decision-making framework to create amazing products that make a positive impact in the lives of others.
[02:28] Behavioral science vs. data science. Behavioral science looks at the factors that drive us to take action; data science looks at who’s likely to do what.
[03:06] The $64,000 Question. How do product builders get people to do that thing? That’s where behavioral science layers back in.
[03:47] How to institute change in a product ecosystem. Zero in on the behavior that you’re trying to change and then work backward from there.
[05:09] Say data. Do data. Why data. Decode the WHY to understand the subconscious factors that drive user behavior.
[06:36] The 15-year delay. Academic research precedes implementation by about 15 years.
[07:17] The need for sophisticated individuals. It takes a sophisticated individual to understand how to convert academic theory into product solutions.
[09:16] Hyperbolic discounting and present bias. How we think about our products doesn’t always align with how our users feel in the moment.
[13:39] The ethics of product design. Use your powers for good; that is, design product solutions in ways that line up with users’ initial intent.
[16:06] How do product managers discover the delta between say data and do data, and extrapolate the why?
[18:25] Top 2 behavioral economics heuristics. The identifiable victim/beneficiary effect (and the power of storytelling), plus the herd effect and social norm theory.
[20:24] Personalities and behaviors. Behavior might be driven not so much by one’s personality as by one’s environment.
[21:34] Digital experiences as motivators and organizers of behavior. Hopefully, behaviors we want to see in the world.
[22:35] The value of personas. They’re definitely informative. But many heuristics are neither industry specific nor individual specific; they’re human specific.
[25:22] Advice to generate new ideas. It comes with experience and getting your hands dirty.
[25:56] The biggest breakthroughs come with a new intervention or a new design that is pieced together from four or five different things that we’ve seen work.
[26:51] Add fuel, remove friction. Avoid swimming against the current. Share a path with your users that matches the narrative they want for themselves.
[27:59] Innovation. It’s the crossing of different disciplines, studies, and ideas. Innovation is when you start to break down the silos that separate these disciplines and understand how they all fit together.
Sean [00:00:18] Hi, welcome to the Product Momentum Podcast, a podcast about how to use technology to solve challenging technology problems for your organization.
Paul [00:00:28] Hey, Sean, how’s it going?
Sean [00:00:29] Good, Paul. I’m excited about this one.
Paul [00:00:32] My brain is tired.
Sean [00:00:34] This guy is smart.
Paul [00:00:36] So we just talked to Nate and I think the big takeaway is read a lot but get your hands dirty.
Sean [00:00:42] Love it. In the academic space, we tend to take a theory and experiment on it. But in the business world, we have to actually work in reverse. That’s the quote I got from him. Like we have to take a problem to solve and then use the science and work backward to figure out what to experiment with. It was a great podcast.
Paul [00:00:59] Get your pencils ready.
Sean [00:01:01] All right. Let’s get after it.
Paul [00:01:02] Let’s get after it.
Paul [00:01:05] Well, hello and welcome to the show. We are excited to be joined today by Nate Andorsky. He is an entrepreneur who uses behavioral science to build digital strategies and technology for today’s most innovative companies and nonprofits. He believes the key to unlocking the potential of technology lies within our understanding of the psychological factors that drive human decision making. By combining scientific findings with outside-of-the-box thinking, he helps turn human understanding into business advantages. Nate, thanks so much for taking the time to join us today. It’s been fun getting to know you and we’re really looking forward to diving into some of the thinking that you’ve shared.
Nate [00:01:40] As am I. Thanks for having me on the podcast today. I’m really excited for our conversation.
Paul [00:01:45] Yeah, absolutely. I want to jump in with a general question just for terms, and I guess, pencil and paper warning for our listeners. You’re probably going to want to take some notes here. So I want to just start with some terms, because I think they’re important, especially in the conversation that you’re having with the community. So behavioral science as opposed to data science: as product managers who are focused on scrum and empathy and trust and UX and all the things that go into being a good product manager, data science, behavioral science, they seem adjacent to me as someone who’s a non-specialist in the field, but a fan of both. What is the difference? What’s behavioral science and how should product managers think about it?
Nate [00:02:28] Yeah, definitely, and it’s an excellent question, and the way I like to think through this is, you know, behavioral science basically studies the social, emotional, and cognitive factors that drive us to take action. We’ll get deeper into this. But basically, there is a lot going on that affects our decision making that we’re not consciously aware of and what behavioral science does is it creates basically a framework and starts to lift up, what are the types of things that are going on in our environment that are affecting our decision making that we’re not consciously aware of? And behavioral science is the yin to the yang of data science. So data science basically tells you, you know, who is likely to do what? But then the question becomes, how do you get them to do that thing? And that’s where behavioral science layers into that.
Paul [00:03:07] I love that answer. I think that that really sums it up for me in a way that I haven’t heard before. That’s a really clean breakdown. So I want to jump into a hypothetical. You’ve posed a question about a fictional insurance company that’s trying to reduce fraudulent claims, and you’ve put that question to a few product managers and business managers. The answers that you typically get are, well, you need to improve the process, or perhaps you need to do some machine learning to understand, as you said, who’s more likely to do the thing. But you flip the question on its head. How do you think about solving problems when you’re trying to institute a change in the product ecosystem?
Nate [00:03:47] Yeah. So the way I like to think about this is you want to really, first and foremost, zero in on the behavior that you’re trying to change and then work backward from there. I think oftentimes when you’re building products, you can get into this habit of thinking solution first, right. And with the hypothetical insurance idea that I proposed, I was basically saying, “Listen, you’re trying to reduce fraudulent claims; how would you go about doing that?” Most individuals will say, “Well, you need a better system of checks and balances in place. You need to do more research when people file a claim,” et cetera. But if you look at this through a behavior-first approach, you’ll start to see that the real problem is a broken system of trust. If everyone were trustworthy and nobody filed these fraudulent claims, you wouldn’t need to put all these barriers in place to catch the bad actors. So you start to ask yourself, “Well, if trust is really what I want to fix, what does that begin to look like? How do I create a system that makes people more trustworthy?” And Lemonade, a homeowners’ and renters’ insurance company whose behavioral science work is led by Dan Ariely, employs a number of different ideas and concepts to build a stronger system of trust and, therefore, reduce fraudulent claims as a byproduct of that.
Sean [00:04:57] Interesting. So I want to talk about, based on Paul’s first question, this concept that you have in the book that you laid out around say data, do data, and why data. So can you talk about it in that context?
Nate [00:05:09] Sure. So most companies, when they’re building product, will typically collect what we refer to as two different buckets of data. The first one is say data, and that is anything your user is telling you. Usually, you’ll collect these in the forms of interviews and focus groups and surveys, anything your user is telling you verbally or in some sort of written form. The other bucket is what we refer to as do data, and do data is analytics. So as simple as Google Analytics all the way through sophisticated heat mapping and tracking tools. And what we’ll often find is that the say data and the do data don’t match up. Your users tell you one thing, “if you build this feature I’ll use it,” et cetera, and then you build it and they don’t use it the way that they said they would. What we look to do, and that’s where the title of the book comes from, is decode the why data – W-H-Y. And basically what the why data does is explain the subconscious factors that are actually driving user behavior, the types of things your users aren’t even aware of themselves. Because once you understand that, it provides a foundation and a framework to then create products and make product decisions.
Sean [00:06:15] Great, and that’s where the behavioral economics comes in.
Nate [00:06:17] Exactly.
Sean [00:06:17] And Dan Ariely’s work and Richard Thaler’s work and Daniel Kahneman’s work and all those, the greats in the space of behavioral economics. So how do you use that information and that science to analyze the why? This is what I’m really interested in getting at the root of.
Nate [00:06:36] So, I mean, it starts off with a pretty robust academic literature review. There is a lot in academia regarding what drives behaviors, but most of it stays in academia. The research precedes the implementation by about 15 years or so. So the first thing we do is we go into academia and begin to hypothesize: what do we think is driving behavior based on what we know about human behavior? We map those to what we refer to as behavioral drivers, which map back to behavioral solutions. And then we come up with different concept ideas and designs to address those.
Sean [00:07:08] I love what you said about the academic research, that there’s like a 15-year delay on it actually getting into the market. Why do you think that is?
Nate [00:07:17] There’s a lot of reasons. I think it starts with the way that individuals in academia are incentivized. Oftentimes, I mean, they’re trying to get tenure. They’re trying to do research. There’s not as much of an incentive to actually implement it. I also think it takes a sophisticated individual to be able to understand how to take an academic theory or piece of research and figure out the different ways that it can be implemented; that’s one big reason. Also, it’s interesting, when you work in the applied space, you’re working backward. Right. So in academia, you have a theory or an idea that you want to test. Right. But in the applied space, you have a problem you want to solve and then you have to work backward into the different theories, right. And it’s a completely different way of thinking about it. And then furthermore, as it relates to technology and digital, you have to have this interesting skill set: behavioral science knowledge, understanding of the application, but also the technology and product experience and background to understand how to bridge that gap.
Sean [00:08:11] Love that concept of having to work backward. That’s a really unique way of looking at it. It’s a nice nugget.
Paul [00:08:19] I wanna jump in here and ask a question about user research specifically, because we’re talking about say data and do data as sort of fundamentally opposed, in the sense that stated intentions and actual behaviors don’t always match. But one of the things that I look at is that every product kick-off involves a whole bunch of user research and usability testing and prototypes. So is it valuable to even ask those questions? Do we need to rethink the way that we’re conducting user research on the prototypes that we’re thinking about building?
Nate [00:08:50] Yeah, it’s definitely critical, but I think what we have seen is that that information that you collect through those exercises you want to really take with a grain of salt. They’re a great place to start, but I wouldn’t rely on them too much. Rather, I would use that as a foundation to begin and then use the why data to build. And then I like to use the do data as a means to either validate or invalidate your assumptions.
Paul [00:09:16] OK. So I think, just drawing that out a little bit more, I mean, there’s a ton to unpack in terms of the concept of hyperbolic discounting and present bias, right. What people say they want to do versus what they do, and even what they say they want to do a week from now versus what they say they want to do right now, is different. Take the analogy of offering somebody an orange or a chocolate bar. If you’re offering them to me right now, you know, the chocolate bar might be too tempting to think about my future goals. But if you’re offering them to me in a week, well, I might be more mindful of my health goals, and I’ll take the orange in a week but the chocolate bar now. So I think about the systems that we set up for the people who we’re trying to help. This is a long lead-up and I apologize for the approach, but what I’m thinking of is the journey map that often comes out of user research, right. It’s linear pain points and frustrations. And the thing that I think is often lost in the way that we think about our products is we don’t think about how users feel in the moment. We think about them in a vacuum, sort of as if time is linear, but time doesn’t feel linear.
Nate [00:10:19] Mm-hmm. Yeah. And I think that’s an excellent point. And this is one of the big challenges that we see when you think about any product you’re building where you’re asking a user to take an action today that has a potential long-term benefit, right. So you see this in the health space: exercise more, eat better. You see this in the education space: learn a skill today. You see this in the finance space: save more money today. And there’s this idea of present bias, where we heavily overweight our present situation versus the future. We prefer immediate rewards, but even furthermore, the way we perceive time means we heavily discount anything that happens in the future, right. And that’s where this idea of hyperbolic discounting comes into play: whether it’s a pain or a gain, if it’s in the future, we don’t feel it as much.
Nate [00:11:03] So one of the ideas is, you know, twenty dollars today feels very different than getting twenty dollars in a week or two weeks, right. And that’s where this comes into play. And I think that, inherently, and this is one of the things I talk about in the book too, gamification models or any reward paradigm are basically substitutions. The idea is that we want a reward from the action that we take, because if we don’t get the reward, we’re not going to take the action again; but since that reward is so far in the distant future, we have to substitute it in some way, right. And that’s where you see a lot of these sort of gamification models come into play, using some sort of rewards paradigm for taking an action. But I think one of the dangers of this, too, is that if you don’t really understand the psychology behind it and the behaviors you’re trying to drive, as I also talk about in my book, some of these can very well backfire.
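To put some rough numbers behind the intuition Nate describes, here is a minimal sketch of hyperbolic discounting, assuming the standard one-parameter form V = A / (1 + kD); the discount rate k and the dollar amounts are illustrative, not values from the episode.

```python
# Minimal sketch of hyperbolic discounting, assuming the standard
# one-parameter form V = A / (1 + k * D). The discount rate k and the
# dollar amounts are illustrative, not values from the episode.

def hyperbolic_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

for delay in (0, 7, 14):
    print(f"$20 in {delay:2d} days feels like about ${hyperbolic_value(20, delay):.2f} today")

# With k = 0.1, $20 a week from now already "feels" like roughly $11.76,
# which is why rewards promised far in the future motivate so weakly.
```

The steep drop between day 0 and day 7, followed by a much flatter decline, is the hyperbolic shape that makes “now versus later” feel so different from “a week versus two weeks.”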
Paul [00:11:56] So is that hook model valid? Is it something that we can actually use for the benefit of the people who we’re trying to help?
Nate [00:12:04] It is valid, but I think you do need to be careful with it. I think Nir is, he’s, you know, a brilliant guy who has laid out a really interesting framework for using it. And there is a lot to unpack there.
Sean [00:12:13] There’s a lot of context-dependence too, right?
Nate [00:12:17] Yeah, exactly. I think Sean is correct on that. I mean, one of the things the book talks about, you know, in terms of the hook model is like extrinsic versus intrinsic rewards, right, and how that all works and how you can actually crowd out something that is intrinsically motivated by offering an external reward. So there’s a lot to sort of unpack there. But that model definitely is one model that can be very effective.
Sean [00:12:41] Alright, so I got a question for you. We talked about this a little bit on the pre-call, but it was in 2003 that B.J. Fogg really published the first book bridging technology and persuasion psychology. It was his textbook out of Stanford, I think it was titled Persuasive Technology. And I do think that there are lots of companies out there in the world that understand this a lot better than we think they do, and they’re taking advantage of it with things they’ve come up with through experimentation, like infinite scroll and a lot of the stuff Nir talks about; now, for whatever reason, he’s kind of flipped his thinking on how we should be using this stuff. And there’s a big problem with this knowledge. Like you said, because the academics are 15 years ahead of most of the rest of the world, the ones that are using this stuff are finding some tremendous success. They’re using the science. And I don’t think that it’s always being used in the right ways. So the intent becomes more important.
Nate [00:13:39] I completely agree. I think that, you know, this is extremely powerful, right. And you’ve seen it through the way that some of these giant tech companies have leveraged it for not-so-good things. But I think that the people who are using it and are immersed in it have an ethical and moral responsibility to ensure that they’re doing it in a way that is, you know, bettering the end user. The idea is that they are using these tactics and these nudges to help their users do the things that they already want to do. I always talk about this idea of becoming better versions of themselves. But I agree with you, it can be used for evil. And I think part of it is our responsibility to ensure that it is used in the right way.
Sean [00:14:20] Have you thought it through or do you have any advice in terms of how to set up the bumpers for that?
Nate [00:14:26] Yeah, it’s a moving target and it’s a very good question. My baseline typically goes back to this idea of, do the types of things that you’re trying to get your users to do line up with their initial intent? Right. So if you have some sort of fintech app and you’re trying to get people to save more money, are the behavioral science nudges that you’re building into your product, are they helping to achieve that goal?
Paul [00:14:49] I think that’s a great barometer. The way that we approach every decision, though, might be from the standpoint of just saving time, right? Whether it’s a generic productivity app or a time-and-attendance enterprise billing app, if I’m saving somebody time, if I’m helping them be more effective, it might not be their stated goal, but I’m improving their life overall. And I think that there are implicit goals around which we can start to build trust with our user base, just because we have their best interests at heart. It goes back to that idea of trust to begin with. I think that’s a really profound statement, probably one that we don’t talk about as a community often enough, but I think that’s really a powerful concept. I want to come back to a more practical application of the say data, do data, why data, right. So how do we take the say data (the intentional vocal and written feedback) and the do data (the metrics and analytics) and extrapolate the why? This formula seems simple on its face, but it really takes a lot of work to find that delta. Like, what is the thing that they’re trying to do, and why aren’t they able to do it? So what are some of the tactics that a product manager can implement in their day-to-day to start to find these areas of improvement in people’s lives?
Nate [00:16:06] So split testing is, I would say, number one. In academia, randomized controlled trials are the way that you go about doing that, and split testing is more or less the same type of thing. But let me just sort of walk through a hypothetical here so I can put it in concrete terms. Let’s say you have an app that helps people save money every single month, right. And what you’re getting from your users is they’re telling you, “Listen, the reason that I’m not signing up for your app is that I don’t have enough money to save,” right. But you look into the do data and you find out something interesting: once people sign up for the app, they don’t cancel. So the big issue is getting conversions, right.
Nate [00:16:41] So then you would say to yourself, “OK, that’s interesting. What does the academic literature say about how people view money and how people view time?” And you would say, “OK, well, I know that, for example, the academic literature describes this idea of mental accounting, where we tend to bucket money differently depending on, quote-unquote, where it comes from. So let me set up a landing page or Facebook ad that says, you know, sign up for my app, use your tax refund money to sign up.” Right, because even if they’re not actually using their tax refund to save money, the mental accounting kicks in: when you get your tax refund, you don’t think of it as earned income, so you’re more likely to sort of spend it on whatever. You could set up a Facebook ad campaign to basically just test that concept and that theory. And if you see a higher click-through rate or more conversions, listen, does that mean it’s going to work? No, but it means, OK, hypothetically, you could be onto something. And then from that, you can begin to iterate deeper and deeper into your product. But it’s a really great way to do some preliminary testing to see if a behavioral science theory has some legs to it.
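As a rough sketch of how that preliminary test might be read, here is a two-proportion comparison of conversion rates between a control ad and the hypothetical “tax refund” framing; all counts are made up for illustration and are not data from the episode.

```python
# Rough sketch: compare conversion rates for a control ad (A) vs. the
# hypothetical "use your tax refund" framing (B) with a two-proportion
# z-test. All counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, one-sided p-value) for H1: B converts better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 1 - NormalDist().cdf(z)

z, p = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
# A small p-value doesn't prove the feature will work in-product; as Nate
# says, it just suggests the mental-accounting framing may have legs.
```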
Sean [00:17:45] Cool. So I know I picked on you a little bit talking about the intent and being able to use this stuff for evil, but I have seen that you’ve done a lot of stuff in the nonprofit space and some of the things you’ve published are phenomenal, like how to use behavioral economics to really move the needle for some of these nonprofits that are trying to do great things in the world. So thank you for that work.
Nate [00:18:02] Thank you.
Sean [00:18:02] And along those lines, there’s so many heuristics out there now that we know about. There’s so many behavioral economics tricks, so to speak, or things that you can use to kind of nudge, to use your word or Richard Thaler’s word from his book. What do you think are the top, if you were to pick the top three, behavioral economics heuristics that all product people should be aware of?
Nate [00:18:25] So the first one I would say, and this actually comes from the nonprofit space, is the idea of the identifiable victim/beneficiary effect. Basically, this says we have a tendency to offer greater aid to a specific individual than to a group of people. And that goes even deeper into this idea of stories and connecting individually with a person. That’s how we think about the world: we process information in stories. We’re actually not great with numbers and data. And if you think about it, you know, the best startup pitches you’ve ever heard revolve around stories, right? The best onboarding processes you’ve ever seen revolve around stories. Stories have a lot of power in them. That is definitely one I would look at. The other is the herd effect and social norm theory. We follow the crowd, right. And the ways that this can be deployed and leveraged are infinite. But when you’re trying to get people to do something, giving them insight into what others are doing, and what individuals like them are doing, can have a lot of power behind it. So those two, although it’s really, really hard to pick, are probably two of the ones that would be really interesting to explore if you’re looking to get your hands dirty.
Sean [00:19:34] That’s good. Cool.
Paul [00:19:36] Yeah, I think the story piece, as the son of a librarian, is near and dear to my heart. I think that the way that we tell stories and create the hero narratives of ourselves in our own minds really is key to how we identify the values and virtues that we want to see in the world. And I think that there’s something really powerful there in the way that you look at behaviors and personalities, right. So behavior doesn’t reflect personality. And one of the key elements of every persona I’ve ever generated is, are they a C on the DISC profile? Are they an INTJ on the Myers-Briggs? We look at personas as personality types, and how does that personality type usually behave? But maybe that needs to be turned on its head.
Nate [00:20:24] Yeah, and it reminds me of a story. Danny Kahneman was in an NPR interview and he was talking about this instance. He grew up in Paris and, you know, he’s Jewish. And this was when the Nazis were in power, and he was out past curfew. An SS officer came across him, and, you know, Danny was trembling with fear. The officer approached him, picked Danny up, hugged him close, put him down, then showed him a photo of his son, gave him money, and let Danny go on his way. And Danny makes this really interesting comment that here is this really, really evil person who, in certain circumstances, can be kind, right. And that just goes to your point that, like, oftentimes the behavior might not be driven by the personality, per se, but even more so by what’s going on in the environment. Right. And, you know, to your point and sort of my point about storytelling, that narrative and that lesson I just shared, because it was told through a story, how much more of an impact it has in really communicating my point, et cetera.
Paul [00:21:34] I think that that’s very germane to the times that we live in, right. So we’re looking at digital experiences as motivators and organizers of behaviors, of the way that people want to see change in the world. More and more, you can look at the most commonly downloaded apps in the app stores as a social barometer for where people are at. It’s a really interesting time to live in. As we’re looking at software, software is becoming the way that people express their values. You can see, in the number of times an app is downloaded, what people as a society are most concerned about. It’s a very, very interesting analysis, as I said, a barometer of where we’re at.
Nate [00:22:14] Yeah, that’s fascinating. I mean, I think you’re spot on there and it’s really interesting to think about it in that way.
Sean [00:22:21] All right. So in this context of personas, to keep pulling on Paul’s question there, do you find value in them in terms of deciding which heuristics to apply and which things to experiment with?
Nate [00:22:35] Yeah, and this is a great question because I still haven’t figured out the right answer, but I’ll walk you through some of my thinking. So I think that they’re definitely informative. You know, when behavioral science became popular, one line of thinking was that these heuristics are not industry-specific, they’re not individual-specific, they’re human-specific. We all sort of fall, I don’t want to say fall prey, but we all operate in the same way; loss aversion, for example, where we feel a loss about twice as strongly as an equivalent gain, is sort of independent of demographic information. One of the things that has come to light in the past couple of years is that that’s true for some of these cognitive biases that we have, but there are also ones that do show a stronger correlation with certain cultures and religions and demographics, et cetera. So I would say I’m still not 100 percent sure where that line is, but it’s definitely something that we continue to research on an ongoing basis, to see how much the personas impact the heuristics that we apply.
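To make the loss-aversion rule of thumb Nate mentions concrete, here is a toy value function in the spirit of prospect theory; the coefficient of 2 is just the rough “losses loom twice as large” heuristic from the conversation, not a fitted parameter.

```python
# Toy, prospect-theory-flavored value function illustrating loss aversion:
# losses are weighted roughly twice as heavily as equivalent gains.
LOSS_AVERSION = 2.0  # the rough "losses feel twice as bad" rule of thumb

def subjective_value(outcome_dollars: float) -> float:
    """Psychological value of gaining (positive) or losing (negative) dollars."""
    if outcome_dollars >= 0:
        return outcome_dollars
    return LOSS_AVERSION * outcome_dollars

print(subjective_value(100))   #  100.0 -> winning $100 feels like +100
print(subjective_value(-100))  # -200.0 -> losing $100 feels roughly twice as bad
```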
Sean [00:23:38] Interesting. So there’s not a lot of data on, like, does this type of person or personality respond more or less? I mean, we know for a fact that different people respond differently. Let’s use loss aversion as an example. There are people that fall into the trap of Las Vegas; the entire city was built around that single heuristic.
Nate [00:23:58] Sure.
Sean [00:23:59] There are certain people that don’t.
Nate [00:24:00] Right.
Sean [00:24:00] But how do you determine, like, which people will be more susceptible and which won’t, without doing, you know, experiments?
Nate [00:24:09] I mean, I think you would have to experiment to sort of get there. The other thing about Las Vegas, I mean, that’s interesting, too, right. That sort of gets into the whole idea of gambling machines and why those work, variable schedules of reinforcement, et cetera. And the other argument to be made there is that anybody could theoretically, under the right conditions, be conditioned to become a compulsive gambler, right. So there’s a bit of a question: is it that the environment has created pressures in such a way that they are now that way, and therefore any sort of, quote-unquote, user persona with the right pressures could be molded to that type of activity, or vice versa?
Sean [00:24:45] I find this really interesting.
Paul [00:24:47] Yeah, super interesting. You know, most of the folks listening to this podcast are product managers, senior product managers, directors of product management. I think that the way that we’re thinking about building out backlogs can start to feel rote when we get too into process, and I think the behavioral science aspect is a way to elevate our thinking. Can you share some thoughts about how you come up with new ideas? I know the way that we’re looking at the market and the heuristics are good strategies, but what are some ways in the day-to-day that a product manager can help their team generate new ideas and help take a mature platform to the next level?
Nate [00:25:22] So the biggest piece of advice is just experience. Right. Read as much as you can, get your hands dirty, find as many different applications of this as you possibly can. The second thing is reverse engineering. So oftentimes I’ll go and just look at different products that have been built, their user interfaces, and sometimes there is behavioral science built into them that the product managers didn’t even realize they were using, but they got there eventually through split testing and optimization. But I will tell you that our biggest breakthroughs come with a new intervention or a new design that is pieced together from four or five different things that we have seen work. And as a means for doing this as part of a process, you know, say you have a team of six, right? Everyone goes and does their own research and comes back with maybe four or five different ideas, and you come together and iterate on all of them, like a design thinking type of model. But, you know, it’s interesting, because I talk about this in the book too: how do you become really good at this? And I think it’s like anything else. It’s literally pattern recognition, right? Over time, with experience, you just build up that knowledge bank, and that’s what becomes really, really powerful. And read, but get your hands dirty. That’s the biggest piece of advice. And that just goes along with this idea of building products: you’re not going to really learn this until you start banging on it and banging on it and seeing what works and what doesn’t work.
Sean [00:26:43] Yeah, more evidence for the need to have a really good practice around multivariate testing because we don’t know.
Nate [00:26:49] Exactly. Exactly.
Paul [00:26:51] Yeah, the way that you phrased it before is just swimming with the flow instead of against it. So as you’re observing behaviors, the more aligned you are to the outcomes and the values, the easier, the more natural, this becomes. And I think it’s really having the mindfulness to be present in your users’ experiences so that you’re going with the current. You’re sharing a path that they can follow that matches the narrative they want to have for themselves, as opposed to slow thinking, in Daniel Kahneman’s framework; in slow thinking, you have to relearn every single step. The more natural it becomes, it becomes fast thinking, and you start to empathize with your users, and your users start to find themselves lost in the process because it’s so natural and it aligns so closely. I think that’s the ideal, right?
Nate [00:27:36] Yeah.
Sean [00:27:37] Habits, yeah.
Nate [00:27:37] Exactly. And Dan Ariely talks about this with products: add fuel, remove friction. Right. And I think that aligns really nicely with what you’re saying, that concept there.
Sean [00:27:50] Add fuel, remove friction. I like that. We always ask our guests to define the word innovation. How do you define innovation?
Nate [00:27:59] How do I define innovation? For me, I find innovation, and where I get excited, in the crossing of various different disciplines, studies, and ideas, right. So what I love about behavioral science and product design, and what’s really interesting about it (there’s a great podcast called It’s All a Bunch of B.S. by Nick Hobson, and he gets into this), is that if you start to dig into the behavioral science, you start to find that it’s layered on top of anthropology and sociology and all these different disciplines. And it’s not that the research doesn’t exist, and it’s not that people aren’t studying this. It’s just that everything is so siloed. And innovation is when you start to understand how all this fits together across all of these different disciplines. And that’s where the real power of this way of thinking comes together.
Sean [00:28:49] Good artists borrow, great artists steal.
Nate [00:28:52] Right. Right.
Sean [00:28:54] I’m not sure if the exact meaning of that quote applies, but…
Paul [00:28:59] In context, it makes sense.
Sean [00:29:00] It does.
Paul [00:29:02] I want to ask you one more zinger. We’ve talked about plenty of books so far, and I’m sure they’ll be in the show notes as links. But if there’s one that you could point to as a recommendation, from what you’re reading or what you’ve read in the past, what’s a book that you’d say everyone should have read to be a successful product manager?
Nate [00:29:19] So if you’re coming in with a good understanding or sort of a product tech background, I think Nudge is a great book to read. That will give you a really good overview of behavioral economics and how it can be applied. Thinking, Fast and Slow is another really good one. The Elephant in the Brain is really great too; that talks more about sort of the neuroscience piece of everything going on. Those three are probably not your typical product manager books, but they would be great books to explore.
Paul [00:29:47] That’s outstanding.
Sean [00:29:49] Well, thank you for giving us early access to your manuscript.
Nate [00:29:52] Sure.
Sean [00:29:52] And congratulations on getting your book published.
Nate [00:29:55] Thank you.
Sean [00:29:56] It’s a gift to our craft and I really appreciate you joining us on the pod today.
Nate [00:30:01] Yeah, this has been great. I’m sure that you can tell by the way that I talk about it, I get really excited. So any time anyone says, “Hey, you can get on a podcast and just talk about it for, you know, half-hour or 40 minutes,” I’m definitely excited to do it.
Paul [00:30:14] Well Nate, thanks so much for taking the time today. We really appreciate it and we know that our audience does too. So cheers.
Nate [00:30:19] Thank you.
Paul [00:30:23] Well, that’s it for today. In line with our goals of transparency in listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get, so please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world, just like you’re doing. Thanks, everyone. We’ll see you next episode.