Listen
Podcast: Play in new window | Download
About
Itamar Gilad is a coach, author, and speaker specializing in evidence-guided product management and product strategy. For over two decades, he held senior product management and engineering roles at Google, Microsoft, and a number of startups. At Google, Itamar worked at YouTube and led parts of Gmail.
He is the author of the book Evidence-Guided: Creating High-Impact Products in the Face of Uncertainty. He also publishes a popular product management newsletter and is the creator of a number of product management methodologies, including the GIST framework and the Confidence Meter.
Itamar is based in Barcelona, Spain.
Recommended Reading
What percentage of your software product launches have been successful? If you answered, “about 50%,” you’re ahead of the curve, says Itamar Gilad. Itamar is a product leadership coach and author who also held senior product management and engineering roles at Microsoft and Google, where he worked at YouTube and led parts of Gmail.
In today’s conversation, Itamar looks back on his early career that – he readily admits – includes ‘not that many big wins.’ As he explains, “The engineer in me kept whispering, ‘you don’t really know. You’re just faking it.’ And the results spoke for themselves.”
It was then, while still at Google, that Itamar realized the power of discovery and user research, "all these good things that we now take for granted." Once he embedded these techniques into his work, his perspective changed, and he started to consider a new product leadership playbook. What he came up with replaces the traditional plan-and-execute model "that may have worked in the 20th century" with an outcome- and evidence-based iterative approach.
“We used to believe that if we spend enough time creating the perfect top-level plan, and build a set of cascading plans, and then execute well on these plans, we will achieve greatness,” Itamar says. “It simply doesn’t work that way.” Even ideas from the most visionary leaders were informed by research and evidence, hypothesis and testing, he adds.
In his book, Evidence-Guided, Itamar presents an actionable model for bringing evidence-guided development into our organizations. Nested within Itamar's larger framework are the GIST model, which leads to the ICE model, which in turn leads to the Confidence Meter. In this conversation, Itamar explains the role each model plays in specific detail and provides easy access to them.
Be sure to catch our entire episode with Itamar Gilad and consider this famous quote from the late British statistician George E. P. Box: “All models are wrong, but some are useful.”
Paul Gebel [00:00:19] Hey everyone, and welcome back to Product Momentum, a community of product people and a marketplace of ideas where leaders and learners come together to shape our way ahead. My name is Paul Gebel, and together with my co-host Sean Flaherty and the rest of our amazing team, we record conversations with thought leaders in product, UX, security, and beyond that will help you shape the lives of your users through software. Check us out on all your favorite listening platforms. But for those who prefer the video experience, you can find all our latest episodes on the Product Momentum YouTube channel.
Hey everybody! I'm really excited to share the conversation we just had with Itamar Gilad. He's a really insightful product manager, coach, and career leader with a lot of stories to tell. One of the most compelling parts of today's conversation, I hope, jumps out to you: figuring out what kind of culture you're operating within. You might run an experiment, discover some ideas or ways to implement a solution, and get an answer that tells you, I'm in a place where openness, transparency, and hypothesis-driven, evidence-guided product management can thrive. Or you might get feedback suggesting it's time to brush up your resume and look for a place somewhere else. Either way, I think these models are going to be helpful for deciding between two good options, or for looking for evidence to shape a roadmap and build something really powerful. I think you're going to find some real value in the models Itamar has put together, and this is going to be a conversation worth listening to. Let's get after it.
Paul Gebel [00:01:53] Hey everyone, and welcome to the show. Today I'm delighted to be joined by Itamar Gilad. Itamar is a coach, author, and speaker who specializes in evidence-guided product management and product strategy. For over two decades now, he's held senior product management and engineering roles at Google, Microsoft, and a number of startups; at Google, Itamar worked at YouTube and led parts of Gmail. The work he's done in writing Evidence-Guided: Creating High-Impact Products in the Face of Uncertainty has really started to inform a new conversation about product management, and he's created a number of product management methodologies, including GIST and the Confidence Meter. Itamar, thanks so much for taking the time to join us today. I'm really excited to have you on the show.
Itamar Gilad [00:02:33] Thank you guys for inviting me and for this very generous introduction.
Paul Gebel [00:02:36] Absolutely. You know, someone who’s been through the trials and career and journey that you’ve had, from Google Plus to the tabs that we all now understand is just kind of using the Gmail experience, the promotions and social tab that you’ve been a part of, you know, so much of your work is in these really high visibility experimentation product laboratories of experience and research and evidence. And that’s a lot of the themes that you bring to all of the books that you’ve written, and especially in Evidence Guided. Just to get us started, can you talk about your background in a more technical domain and transition to product management and eventually coaching? How has this technical background informed your advocacy for this evidence-based approach? From an engineering to a product manager to a coach, what does it mean to you to be evidence-guided in product work?
Itamar Gilad [00:03:36] I've spanned a career of 20 years actually doing operative stuff, not just coaching or consulting or training. For the first five years I was a software engineer, and I bounced around between kernel-level stuff and what today we'd call full-stack engineering. I tried many different things, and then I found that I'm attracted to the product management side of things. So I switched roles and became a product manager for the next 15 years. And I really enjoyed it. And then I switched to the dark side and became this coach thing.
So I would say you never really stop being an engineer. An ingrained objection to BS, I think, is one of the key attributes of us engineers. And I found that there is a fair amount of... I won't call it BS. It's a really intentional, honest attempt to build the right product, but using opinions, using consensus, using sparse data. Throughout my career as a product manager I used that approach, but I felt a bit apprehensive about it. I felt like I was projecting this false confidence that my ideas are the best. But the engineer in me kept whispering, 'You don't really know. You're just faking it.' And I think the results spoke for themselves. Over that career, I didn't have that many big wins. I had a couple, but most of what I launched was kind of mundane. And I've asked this question of a lot of PMs and engineers out there: what percentage of your launches actually created impact? And we'll talk about what impact means. People rarely say more than 50%. So everyone actually knows that the bulk of what they're doing is not so good. That really bothered me. And as I describe in the book, it's at Google that I discovered discovery: user research, exploratory research, all these good things that we now take for granted. I started using them, and that really changed my perspective. And that was the thing I wanted to teach.
Paul Gebel [00:05:46] Yeah. You know, right up front, I have to make sure it's clear that in no way is your anti-BS engineering approach meant to take potshots or criticize. You're very clear that even though you've been a part of some projects that were mundane, or maybe even a failure or two along the way, you're not throwing stones at decisions that have been made where you've learned a lot, taking Google+ as the most obvious example.
A lot of time, thousands, maybe millions of hours, was spent over the years, and a lot of people spent time on that. And you talk a lot about the plan-and-execute mindset versus the outcome- and evidence-based iterative approach, where we're balancing this, looking for the data points, talking to people, and maybe even committing to an idea that we don't fully agree with or understand. The tabs in Gmail, the Promotions tab and Social tab, were sort of this hypothesis. And that humility is really clear throughout your writing: you're not being a skeptic just to poke holes in people's decisions or claim to be smarter. It's about infusing a more open, transparent version of leadership at the product level. Is there anything you'd like to add to that kind of mindset or approach in your writing and the things that you thought and wrote about?
Itamar Gilad [00:07:12] No, I think that's exactly right. I mean, it's easy to assume that senior leadership and stakeholders are mean people, that they're trying to control us and not empower us, but they're actually trying their best. They're really honest. For the bulk of my career, I never saw anyone really trying to sabotage the product or control product people just for the fun of it. They're just using a playbook. It may have worked, I don't know, mid-20th century, but with the levels of uncertainty and all the moving parts we have outside our company and inside our company, in technology and in our product, it's really impossible to use this playbook successfully today. And that's the plan-and-execute model, where we believe that if we just spend enough time creating the perfect top-level plan, and then create a set of cascading plans, and then execute very well on these plans, we will achieve greatness. And it simply doesn't work.
Paul Gebel [00:08:12] Yeah. And I think there are a lot of anecdotal, almost apocryphal stories of the visionary founder who has the perfect idea, and it's very compelling. We all want to be that founder. We all want to have that killer app idea. We all want to be the product manager who delivers the silver bullet that solves all the problems, cures all the diseases. And I think that aspect of the visionary founder is a little bit misguided, if I may say so. Even with the most visionary, taking Steve Jobs as the name that often comes up, the ideas seem to descend from on high, manifest fully formed, and get expressed. But even the founders who have those ideas, as important as they are, have to be informed by evidence and research, hypothesis and testing. No idea pops into a product manager's head fully envisioned and ready to hand to a team to go execute. Is that part of what it means to incorporate these evidence-based models into our decision-making?
Itamar Gilad [00:09:15] Absolutely. And I think the word leadership is very important here. You mentioned this earlier. We need to change the way we lead ourselves, and that's at every level, because even team leads are leads. They are leading a team; even individual contributors are leading an area. And the way we used to lead ourselves was basically: at my level, I'm going to create this plan; I'm going to tell you guys what you need to do; I may spell out in detail what result I expect; you'll go and figure out the details. Sometimes it works, but you're actually playing against the odds, because we now know from research that most ideas are simply not very good. And that includes the ideas that came from Steve Jobs and Mark Zuckerberg and all these brilliant, visionary people. They can and will tell people, 'I want you to build this. I think this is the most important thing.'
But what we don't see, and this is the story that's usually hidden, is that they build around themselves a system of feedback, a system that is very evidence-guided, to inform them whether or not they're on the right track. As Steve Jobs admitted, he was wrong often, but he didn't care as long as they ended up doing the right thing. And I think if you ask Zuckerberg, he's said similar things: 'People think that innovation is just coming up with a good idea and doing it, but actually it's trying out stuff.' So they really want to be informed. They really want to be told, we tested your idea and it didn't turn out so well, what do you think? And that's exactly what I'm trying to teach with this model of leadership. The results once you implement this are incredible, and it can work in practically every organization. It is a culture change, absolutely. It's not an easy transition, and a lot of organizations fail at it. But if you are able to do it, it's a completely different ballgame.
Paul Gebel [00:11:12] Yeah, I want to jump into some of the models in just a minute, but one asterisk I think you'd agree with is that no process can be copy-pasted from one organization into another without adaptation. Every organization has its own product flow, feedback loops, decision-making paradigms. And while you do offer a process in the book, and it's fairly global in its approach in that you could implement the pieces, it's going to have to be adapted to your organization. So even in the process itself, there is no silver bullet. You have to test and tweak and hem the ends a little to make sure it fits your organizational style. That said, you offer a few nested models: the GIST model, the ICE model, the Confidence Meter. I'd love to, maybe not dive deeply into how to use each, but as a system, can you introduce, for folks who haven't been exposed to your work or the models before, how these measurements fit together into a hypothesis-driven, evidence-guided approach to product management?
Itamar Gilad [00:12:24] Right. So when I left Google, I felt I had a message that was unique: let's do product discovery. But of course, Cagan and Eric Ries and the people of Design Thinking and many others had already communicated this message, and most people actually knew about it. What I found is that implementation always fails, or almost always fails. It's really hard; it goes against the grain of a lot of things we're used to doing. So I started thinking, as a product manager, so to speak: what kind of tool can I give my own clients so they can implement successfully? And I realized that what Google did successfully (and not everything is perfect at Google; no company is perfect) was, A, set goals very well: measurable goals that are about outcomes. B, we were able (and again, this was in pockets inside Google; I cannot generalize across the entire company) to prioritize, to pick the right ideas. But picking an idea didn't mean we were going to build it in full; these were the ideas we'd test first. And that's what the next layer of the GIST model (Goals, Ideas, Steps) is about: finding quick and easy ways to validate the key assumptions. And then, once those assumptions are validated, going further and validating with more expensive means, up to a point where you feel the idea is validated and you switch to delivery. So discovery and delivery are implemented at this level.
And then the Tasks layer is how you bring this into the reality of your team. Your team is probably using Agile, and there's a likelihood that somewhere along the way someone gave them the message that their job is delivering and they just need to burn through story points, and discovery is not in the picture at all. So how do you change this reality? This is the GIST model: Goals, Ideas, Steps, and Tasks. And I try to create tools, or offer off-the-shelf tools, for each one of those. Those are four areas of change that I think every organization needs to tackle. Beyond that, there is strategy, there is exploratory research; those are also very important things. Culture, too. That's not what GIST is about. GIST is, concretely, how to change these four things. And it's a leadership change. It's a development change. It's a lot of fine details.
Within this, among the evaluation and validation techniques I described (the two parts of discovery), I find that ICE is very useful. ICE is Impact, Confidence, and Ease. If you're not familiar with it: for each idea, you're supposed to guesstimate the impact. Impact is how likely this idea is to create a major effect on a target metric or a goal. Ten out of ten is the most impactful idea you can imagine; zero is no impact whatsoever. Ease is like the opposite of effort: how easy or hard it is going to be. If it's just two weeks to launch, super easy, I'll give it a ten. If it's really expensive, I'll give it a two, and so on. And for a lot of my career, those were the two parameters: let's pick the high-impact, relatively cheap ideas; let's find the balance between the two. But it turns out, and psychologists have proven this, that we're very bad at guesstimating these things, especially when we like an idea. We tend to overestimate the impact and underestimate the ease. It's called the planning fallacy, and we all fall for it. And the brilliant part of ICE, introduced by Sean Ellis, the growth hacking guru, is the confidence.
Confidence is another 0-to-10 number that says how sure we are about the impact estimate. That's all it says. And by forcing people to ask this question, you're introducing this element of risk: can something actually fail? What are the assumptions? But I found that people are very generous with their confidence levels, giving themselves sevens and eights, while that's actually not the case; they're using very low levels of evidence. So I created a tool called the Confidence Meter, which works a little bit like a thermometer. It starts at very low confidence, around zero, and that's the area of opinions: your own opinion, and, if you're the smartest person in the industry, the opinion of the industry. If your idea rides an industry theme, say generative machine learning, does that mean it's a good idea? That's thematic support. Then you can do reviews with your colleagues, which gives you a little more confidence, but your colleagues are rarely the target audience. You can do back-of-the-envelope calculations or a business model, which sometimes shows you that the idea is not worth doing.
So that's a slightly harder test, but up to this point you're at maybe 0.5 confidence out of ten. To move further, you need to start collecting data: data analysis, customer interviews, surveys, etc. This data might be anecdotal: just a couple of customers told you that this is a good idea. Anecdotal evidence can easily distract us. I've seen so many companies that like an idea because the leading competitors have it; that's another form of anecdotal evidence. Let's build it, the idea is validated? Absolutely not. You should test it further. The data could also be what I call market data, which comes in larger data sets, but that in itself brings you to a maximum of 2 or 3 out of ten. To go higher, to medium and high confidence, you need to test the idea. You need to validate it. And we have many, many forms of tests, by the way; not all of them are expensive or difficult. Many are just faking the idea. And I would say the biggest problem I see in adoption is that companies don't embrace that part. They try to choose ideas based on the data they have and the few customer interactions they've had. They don't want to test, but that's really what gives you the high confidence. So this is the nested set of models: GIST, leading to ICE, leading to the Confidence Meter.
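To make the mechanics concrete, here is a minimal sketch of ICE prioritization in Python. The idea names and scores are hypothetical, and the multiplication ICE = Impact x Confidence x Ease is the common formulation of Sean Ellis's model, not something spelled out in this episode; treat it as an illustration, not Itamar's exact tooling.

```python
# Illustrative ICE prioritization sketch. All names and numbers are made up.
# Each factor is guesstimated on a 0-10 scale; confidence should be grounded
# in evidence (the Confidence Meter idea): opinions score near zero, market
# data a few points, real experiments much more.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # 0-10: expected effect on the target metric
    confidence: float  # 0-10: how much evidence backs the impact estimate
    ease: float        # 0-10: inverse of effort (10 = very quick to launch)

    @property
    def ice(self) -> float:
        # Common ICE formulation: multiply the three factors.
        return self.impact * self.confidence * self.ease

ideas = [
    Idea("Manager's pet feature", impact=8, confidence=0.5, ease=4),  # opinion only
    Idea("Onboarding fix", impact=6, confidence=3.0, ease=7),         # market data
    Idea("Pricing-page test", impact=5, confidence=8.0, ease=9),      # A/B tested
]

for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: ICE = {idea.ice:.1f}")
```

Note how the confidence factor does the work: the well-tested idea outranks the higher-impact opinion because its impact estimate is backed by evidence rather than enthusiasm.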
Paul Gebel [00:18:28] Yeah, I'm glad you unpacked the Confidence Meter, because I wanted to spend a minute on that and tease it out a little more. I really like the way you describe it as turning it into a math problem. All that anecdotal evidence starts to build up the stories we tell each other, and the planning fallacy you mentioned earlier; I think it all goes to that bigger-picture story of what you mentioned at the beginning: why so many of the products we embark on building fail. A lot of it comes down to whether we stopped and took a day, or half a day, or just an hour, to ask: how confident are we? And not just how confident do we feel, but have we tested it? Do we have data to back it up? Ascribing some measurable number we can plug into a formula, so that we don't just feel confident, we know we're confident. That distinction is a really helpful model for product managers to understand, because so many times we take a list of features, build out requirements, and just get to work without really asking the question: why are we doing what we're doing? So I really appreciate the calibration aspect you bring to it. Changing gears just slightly, I want to talk about how to implement these models, because all of these ideas are great (I highly recommend the book for anyone who hasn't read it yet), but all of this change can be really scary if we bring it to an organization for the first time. Inertia is a thing, and it is very hard to overcome once an organization becomes set in its ways. How can we start to bring these ideas to our teams, especially if we're an Agile team member, product owner, or individual contributor, not necessarily a decision-maker for the overall processes of an organization?
How can somebody in that role bring pieces of these ideas back to their teams and use them today, even if they're just trying to influence without the authority to make a policy decision in their org?
Itamar Gilad [00:20:30] Yeah, that's the billion-dollar question. Honestly, the transformation, if you like, is a multitude of changes. It's not just one process that you implement within a couple of quarters and, there you are, you're in this modern product development mindset. It is a gradual transition, and I think you need to really pick your battles and decide where to start. So, in the GIST model, I explained there are four areas: setting goals, choosing ideas, testing those ideas. And by the way, for me, you need to reevaluate the ideas every time you test; it's not a one-time process, it's an ongoing process. A lot of people, by the way, are apprehensive about putting numbers on ideas. It feels a little bit made up, and in truth, you cannot really put precise numbers on these things. But it's a forcing function. It forces people to reckon and ask the question: what is the impact? Given the evidence, do we still think it's a seven? Maybe it's a four. So that's the beauty of it.
So: evaluating ideas, validating them, and then changing the way the team works. In every organization, they do a little bit of each, but some area is the most painful right now. Maybe you set way too many goals, and the goals are all about output, or the goals are very misaligned, sending the sales team in one direction, the product org in another, and marketing in yet another vector. So start there. In the book, I give techniques for how to create much better goals based on models. I suggest picking two top-level metrics, creating metric hierarchies below them, and then identifying the key metrics we really need to improve. And again, it goes back to leadership: how to reduce the number of objectives and key results to the bare minimum at every level. At the team level, I think you shouldn't have more than four key results, period. Even that is hard for most teams. How to continuously evaluate your OKRs and adjust towards achieving them. So that would be a good start, if you feel that's the key pain point for the organization. If you're spending way too much time in debate, or a lot of ideas are just handed down by people, try to introduce ICE and evidence. And for all of those, by the way, I suggest techniques for how not to confront the wave, how not to go against the establishment, but how to actually show that you are helping.
Offering helpful tips, and gradually showing them that there's an alternative. I'm happy to share one or two of those, if you like. If you're not testing at all, or you're testing very slowly, start with the Steps layer. That means starting to map out assumptions and taking cheap ways to validate those assumptions, and letting more ideas into the funnel. Instead of the three ideas that you must launch now, pick 15 and validate many of them very, very quickly. Then you're left with five, and then you go into the next five, and so on. And if your team is very disengaged, I offer yet another tool called the GIST board, which is a way to get the team to coalesce around the goals (which are customer goals and business goals), around ideas, and around steps, rather than just around Agile rituals.
You mentioned the Product Owner role, and we all know how many rituals there are; they can distract the team from actually working towards achieving the business goals. The team ends up working towards burning story points or delivering small increments of code, which is not really in the spirit of Agile whatsoever. So how do we change this mindset? How do we make them discoverers as well as delivery teams? So pick your battles. If you find resistance, in the book I list a lot of the types of resistance you might encounter, and I try to give you some ammunition. But one of the most common ones is that people will tell you, 'this is slow,' or 'this is going to make us lose business. We cannot afford this. We just need to run as fast as possible.'
Itamar Gilad [00:24:39] So one exercise you can do is go back to last year's roadmap, where you ran like hell, and show them how much of that roadmap actually impacted the business or the customer experience. How many of those ideas were actually good? Most companies don't even ask this question. They're just happy to have launched this obviously good idea, and then they move on. Most of these ideas take much longer to launch than we expect. Sometimes there are surprises and we have to drop things. So how good is the roadmap planning model right now in your organization? That's one way to shine a light on the problem, if the organization is not yet aware of it. And the conclusion is usually that at least 50% of our engineering resources are wasted. We're building the wrong things, we're confusing our customers, we're bloating our products, and we have to carry this luggage forever. So this is a very strong message that I think a leadership team should understand.
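The roadmap-retrospective exercise described above can be as simple as a back-of-the-envelope tally. The sketch below is illustrative; the roadmap entries and effort numbers are invented, and the only real inputs you would need are last year's shipped items and an honest per-item answer to "did it measurably move a goal metric?"

```python
# Hypothetical roadmap retrospective: tally how much engineering effort went
# to items that never moved a goal metric. All entries below are made up.
last_year = [
    # (idea, engineer-weeks spent, did it measurably move a goal metric?)
    ("Feature A", 12, True),
    ("Feature B", 20, False),
    ("Redesign C", 30, False),
    ("Integration D", 8, True),
    ("Feature E", 16, False),
]

total = sum(weeks for _, weeks, _ in last_year)
wasted = sum(weeks for _, weeks, hit in last_year if not hit)
print(f"{wasted}/{total} engineer-weeks ({wasted / total:.0%}) went to ideas with no impact")
```

In Itamar's telling, most organizations that run this kind of tally honestly land at 50% waste or more, which is the message leadership needs to see.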
Paul Gebel [00:25:35] Yeah, I want to stay on the topic of leadership for just a few more minutes. Having only a few minutes left, I want to make sure the message you just shared is really clear: a product owner's or product manager's influence at the team level, the delivery-team level, can be meaningful, even though it's slow, even though it might be small and incremental. Creating that culture of transparency and building trust through these models is, I think, much more powerful than meets the eye. It might look like just a dashboard, or just a canvas, just a business model, but when you have these conversations, that's where the real power is. Having the ability to treat a delivery team like a discovery team; oftentimes that's relegated to 'that's UX research's domain; we're just here to build features and ship code.' And I think your idea of bringing the leadership of a product to solve a human problem is a very influential, though understated, role that a product manager can play on a team. So, staying on this idea of leadership: I know the models are very accessible (you can download them all for free from your website), and the book goes into a lot of detail to unpack and apply them. But leadership as a mindset, I think, is really the catalyst behind all of this. None of this is valuable to anybody unless someone has the courage to speak up and say, there is a better way, or there is a different way to think about this, or, hey, maybe we can approach this from a slightly different angle, since the last one didn't work out so well; it might be worth a shot. How can we start to develop some of this courage in product management communities of practice, so that people have the tools in their toolbox to feel influential despite the title they hold?
What comes to mind when you think of that ammunition you referred to for addressing resistance, or just overcoming that inertia within our teams? What have you seen as a success story you can point to for someone in that position trying to make a positive change?
Itamar Gilad [00:27:51] First off, I would love to give you a message that even if you're a junior in a multi-tiered organization, you can transform everyone or get everyone to change. Unfortunately, that's rarely the case. You can maybe make changes within your scope, and with permission from your manager, you can try out some of these techniques. That might turn into a case study that gets shared across the company. And you might be surprised how much your leadership actually wants this. I mean, they're often very frustrated with the product organization because they feel it's slow, they feel it's not delivering the things they need to succeed in the business.
Almost universally, I hear the same complaint: they feel it's focused too much on engineering excellence and things like that. So they're really interested in ways to harness the product organization to serve the business better. And that should actually be the mission of the product organization, as well as serving the customers, of course. The problem is that a lot of organizations believe that command and control, and more of it, is the way to do this, and you actually need to teach them it's the opposite. They need to let go. They need to steer by showing you where they want to go, but they need to allow you to explore multiple ideas. So how do you build the courage to do this?
First off, until you ask, you'll never know. Let's take a practical example: it's OKR season, and your manager comes to you with a bunch of things to build, or an idea to build; let's take just one. 'Here's a great idea. I want you guys to run with it.' You could just take it and run with it, even though inside, you might suspect (and you're actually an expert in this area) that this is not the best idea. But you hold back, you don't speak, because you want to be promoted, or whatever it is, to stay within the culture. Instead, you could start interviewing your manager very gently and ask: what will change if this is successful? What metrics, what behaviors of the customers, of the business? And I'm sure they'll be able to explain. Then you can ask: do you mind if we add a note in the OKR that that's actually the target goal? We can leave the idea right there; that's still the goal. Don't attack the idea just yet. Then go ahead and collect some data (and I offer a lot of ideas for how to do that), come back within a few days, and say: you know what? The team and I found four other ideas that we think have even higher potential. Here's the analysis we've done, and your idea ranks fourth out of five. These other ones seem to be of higher value.
Or you might even run a secret experiment, or a survey or something, and come back with the data, the evidence that this idea is actually not in high demand at the moment. Then you might run into one of two responses. The manager might get really angry with you for “wasting time,” quote unquote, and just tell you to get back to it, in which case you’re really facing either a problematic person who is not being reasonable or a really bad culture. Or they might actually be pleasantly surprised that you took the initiative, and they might say, yeah, I’m still very interested in the idea I proposed to you, but let’s also continue with these other ones if you can. And it’s really important to understand we don’t have to commit at this point. We just need to take it to the next step. We just need to test it a little bit further. It’s a horse race: which of these ideas will win? Maybe we need more than one idea, because each one is contributing a little bit of impact. So that alone might change the relationship, or the interaction, between you and your manager. And then again, that might turn into a case study in the organization.
Beyond that, I really suggest you find a champion at the C-level, or a head of product, someone who really cares and is passionate about this and will speak to the leadership at that level. I think training helps a lot to introduce vocabulary and concepts. I think books help a lot, and I’m promoting not just mine; there are many good ones. And I think it comes down to being honest and using evidence, not just opinions. Don’t attack the senior people’s opinions. Come with evidence: based on my research, this might be a better idea. Someone has to do it. It requires determined and innovative people to drive the change. The change will not come spontaneously, so you might be very frustrated if you wait for it. But you can make steps in that direction.
Paul Gebel [00:32:34] I think that’s profound. I have one last question for you, and it’s the crystal ball question. The irony of asking the author of a book called Evidence-Guided to look into the proverbial crystal ball is not lost on me. But I am curious to know, because you’re so steeped in this conversation of evidence-guided and data-driven decision-making: as you’re seeing organizations trend one way or the other, are you optimistic or pessimistic? Are you seeing organizations getting more rigid, or more open to these kinds of conversations? The reason I’m asking is that up to the pandemic, and even in the rebound post-pandemic, I feel like a lot of companies, even those that we work with, were looking to just add features on features and throw ideas into the backlog, whether through the HiPPO, the highest paid person in the room, or other influential plan-and-execute type models. It seems from conversations I’ve had recently that there is a shift toward a more evidence-guided approach. I don’t know if that’s just the kinds of conversations I’ve been in, or if you’re seeing a more widespread trend. So again, asking you to look into your crystal ball, but being evidence-guided as you do it: what are you looking at as a product community, a global product community, going forward? Are we trending in a direction that resonates with your body of work, or do you think we still have a long way to go?
Itamar Gilad [00:34:07] Yeah, well, with the people I speak with there’s probably selection bias, because they usually come to me after they’ve read my articles or my book, or they hear one of my talks, and they know what I’m about and they’re interested in that. So I see tremendous interest. And one of the indicators is the massive community of coaches and trainers and advisors out there that teach exactly this: product discovery. How do we do it? How do we become more evidence-guided? And I’m just one out of many. So I think there’s huge demand and huge thirst for it.
Unfortunately, in the last 20 years I saw a trend in the other direction, which was: let’s systemize more. Let’s take Agile, which was this very grassroots movement, and turn it into this very convoluted, very structured, high-intensity thing. And there’s, you know, scaled Agile now and all these things. There’s this specialization within companies where the sales team and the marketing team and everyone has to follow their own goals, in different directions, and we kind of erected these walls between departments and between teams. So these things worry me. Again, on the positive side, what I see in the last ten, fifteen years is that our access to data is much greater now, with analytics tools and so on. And I think there’s a very positive development coming now with generative machine learning, because these systems will start collecting and processing the data for us. And the question “what evidence do we have, and how strong is it?” might be answered at a much lower cost.
Today there’s a lot of friction in running the experiments and collecting the data. So I think these machine learning tools can help us reduce the cost and tell us, when we come up with an idea, you know, based on my data, here’s the confidence level, here’s the impact. It’s like an estimate. I would take huge caution in buying into this without human judgment, and that’s also a theme in my book: be evidence-guided, but keep human judgment, keep the humans in the loop. It’s just about informing our opinions with data. This mixture is the really potent magic dust. It’s not about delegating the decisions to some spreadsheet or anything like that. So I think positive and negative: I see more traditionalism, more trying to go back to the old days, and on the other hand, more data and evidence available. So we might see a bipolar kind of market split. I think natural selection will favor the companies that are more truly agile, and truly are iterating toward creating higher value for their customers, and for themselves, much faster. So that’s my prediction.
Paul Gebel [00:37:01] Yeah. Well, Itamar, I know it’s a message that you’ve been trying to scream from the rooftops for a while, and I’m really grateful for the time to hear it and get it straight from you. I think your humility and the stories you’ve shared about the products you’ve been a part of come through, and so does the real access to having that conversation about striking the balance between data-driven decisions and human judgment. That’s a really optimistic tone, and one I want to leave us with. But before we go, I just want to ask: where would you point people to find more about you, more about what you’ve been writing and been up to? I’d love to give the folks some more access to what you’re working on these days and what’s inspiring you.
Itamar Gilad [00:37:54] Absolutely. Thank you for giving me the opportunity. If you go to my website, itamargilad.com, you’ll find a lot of the resources we mentioned, for free. There are ebooks, there are templates, and you can also find the book. It has its own website, evidenceguided.com, but you can also find it very prominently on my website. I just encourage you to explore some of these things and reach back to me if you have questions, or if you try them and find that they work or don’t work. I’m really into the evidence from the field, too: what people actually find is working. So that will help me too if you do that. That’s how you can find me.
Paul Gebel [00:38:39] Yeah. Love that. You’re dogfooding your own evidence-based process. Well, Itamar it’s been a treat for me to have this conversation with you. I’m sure this is going to be really educational for our audience listening. Thanks for taking the time. Cheers.
Itamar Gilad [00:38:52] Thank you.
Paul Gebel [00:38:55] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get, so please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.