In this eye-opening book, renowned economist Alex Edmans teaches us how to separate fact from fiction. Using colorful examples—from a wellness guru’s tragic but fabricated backstory to the blunders that led to the Deepwater Horizon disaster to the diet that ensnared millions yet hastened its founder’s death—Edmans highlights the biases that cause us to mistake statements for facts, facts for data, data for evidence, and evidence for proof. May Contain Lies is an essential read for anyone who wants to make better sense of the world and better decisions.

Alex Edmans is Professor of Finance at London Business School. His TED talk “What to Trust in a Post-Truth World” has been viewed two million times; he has also spoken at the World Economic Forum in Davos and in the UK Parliament. In 2013, he was awarded tenure at the Wharton School, and in 2021, he was named MBA Professor of the Year by Poets&Quants. Edmans writes regularly for the Wall Street Journal, the Financial Times, and Harvard Business Review. His first book, Grow the Pie, was a Financial Times Book of the Year. He is a Fellow of the Academy of Social Sciences.

A lot has been written about misinformation, but you take the conversation further, positioning it as a symptom of our biases. Why is it essential to view misinformation through this lens? 

It’s tempting to blame misinformation on those who produce it, or on governments for failing to regulate it. They should certainly take part of the blame, but you can’t just rely on regulation or producers’ goodwill. Producers have huge incentives to spread misinformation (they can go viral), and regulation can’t stop it because it can only check the facts. As I explain in the book, even if the facts are 100% accurate, the inferences you make from them can be misleading. Thus, we need to take the matter into our own hands, and the starting point is to recognise how our own biases contribute to our falling for misinformation.

What has changed in recent decades to make us more vulnerable to misinformation?

Producers have stronger incentives to create misinformation because of scalability. Anybody can become famous by developing a YouTube or TikTok channel or self-publishing a book. Many best-selling books have been written by people with little expertise. Simon Sinek is a former advertising salesman, not someone who has run a company or studied companies. David Allen is a former landscaper, vitamin distributor, glass blower, travel agent, petrol station manager, U-Haul dealer, moped salesman, and chef. Yet he wrote a best-selling book on time management, Getting Things Done.

Consumers are much more likely to fall for misinformation for the opposite reason: there’s now so much information out there that you can always cherry-pick a study that shows what you want, even if it’s flimsy, or a book that makes you feel good because you like what it says.

How will the widespread availability of consumer Artificial Intelligence products impact the fight against misinformation? There is a lot of concern about the negative consequences, but will AI also be a tool in the fight against misinformation?

I recognise that many people are speaking out about AI because it’s topical, often shooting from the hip and saying things that merely sound sensible. That isn’t responsible, and it illustrates an important point in the book: the fight against misinformation involves not spreading it. One of the most valuable things we can do is recognise our limitations as experts and not answer questions we are ill-equipped to answer.

Bill Gates, now positioning himself as the world expert on climate change, is a far greater problem than AI writing about climate change. People believe him because he’s Bill Gates, even though his expertise is software, not climate change.

I’m an expert in the scientific method and the use and misuse of data and evidence, but to know whether AI will be a tool in the fight against misinformation, you need to be an expert on AI who deeply understands what it can do and where its limitations lie, which I’m sadly not! So as a non-expert on AI, I must respectfully decline to comment. That is an example others, like Bill Gates, would do well to follow.

You start the book by talking about our biases and how they shape our information-seeking and interpretation. What are the “twin biases,” and how do they work in tandem to exacerbate our vulnerability to misinformation?

The first is Confirmation Bias. This involves (a) biased interpretation: naive acceptance of conclusions we like and blinkered skepticism toward ones we don’t; and (b) biased search: we live in echo chambers where we only read newspapers or follow people who say things we like. For example, in the Brexit referendum, many Remainers only followed other Remainers. They dismissed Brexiters as racist and uninformed. This had real consequences: they didn’t bother engaging with Brexiters to understand their concerns, and partly as a result, they lost the vote. Similarly, Hillary Clinton famously called Trump supporters “a basket of deplorables,” which swung many people against her.

The second is Black and White Thinking. We view issues as black-and-white, as always good or always bad. Thus, we’re susceptible to extreme statements, such as “immigrants are scroungers,” “carbs are bad for you,” and “breastmilk is always better than formula.”

What are some red flags that our biases may be getting in the way of searching for or interpreting information? 

Look at who you follow and which newspapers you subscribe to. Are they all of the same political slant as you? Think about how you feel when you read a study or a newspaper article. If you find yourself getting angry, it’s probably because you don’t like the findings, not because you have concerns about the rigor of the study’s methodology.

This is not just a social or psychological phenomenon. Can you explain how our brains are hard-wired for bias?

There are two fascinating studies that hook students up to an MRI scanner to see what happens in their brains when they see information they like or dislike.

One finds that our amygdala lights up when we hear evidence we don’t like. This is the same part of the brain that lights up when a tiger attacks us. We respond to a contradictory opinion as if it were a tiger attack.

The other looks at what happens when we dismiss evidence we don’t like. Dismissing it releases dopamine, the same pick-me-up chemical triggered when we run, enjoy a meal, or have sex. Confirmation bias just feels good.

What is the ladder of misinference, and how can it help combat misinformation?

There are tons of books on misinformation, and I’ve learned a lot from them. But some of them are laundry lists of all the ways you can fall for misinformation, and the reader can come away more confused than she started: there are so many ways she can be deceived, so how can she possibly guard against them all?

So I developed the ladder of misinference, a framework that sorts all the different ways we may fall for misinformation into four steps. This helps you combat misinformation because you know what to look out for: you might mistake a statement for fact, facts for data, data for evidence, or evidence for proof. Let’s go through each step in turn.

The first step is that a statement is not fact: it may not be accurate. Most people know about that step. We know to check the facts (e.g., Obama’s birth certificate), and some social media sites let you click a link to check a fact. But the punchline of the book is that checking the facts is not enough. Even if facts are 100% accurate, they may still be misleading, hence the importance of the other steps.

  • A fact is not data: it may not be representative if it’s selectively quoted. Someone could trumpet a smoker who lived to 100 but hide the thousands of others who died from their habit.
  • Data is not evidence: it may not be conclusive if it’s a correlation without causation. An influencer might peddle the claim that people who eat whole grains are less likely to have heart disease. However, people who eat whole grains may lead healthier lives in general, and this could be causing lower heart disease rather than whole grains being a superfood.
  • Evidence is not proof: it may not be universal if it’s in a different context. The Marshmallow Studies showed that kids who resist eating a marshmallow now to get two later do better in life. But they focused on Stanford University children. Kids of less wealthy backgrounds may be accustomed to eating food immediately rather than saving it for later since the food may have disappeared by tomorrow.

This isn’t just about understanding our biases and how information is produced. What are some common ways data is manipulated to look like evidence, and how can we avoid falling for them?

One is data mining. If you want to show that, for example, diversity improves company performance, you can run tons of different tests with different measures of performance (sales, profit margin, profitability, stock price growth, etc.) and tons of different measures of diversity (whether a board has at least one woman, at least two, at least three, at least one woman or ethnic minority, at least two, and so on), then cherry-pick the results that work and report only those.
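To make the mechanics concrete, here is a minimal sketch (my own illustration, not an example from the book) of why data mining is so effective: run enough tests on pure noise and a handful will clear the conventional significance threshold by chance, and reporting only those creates the appearance of a real effect. All names, sample sizes, and thresholds below are hypothetical.

```python
# Illustrative sketch: "significant" results emerge from pure noise
# when you run many tests and report only the ones that work.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_firms = 200

# Ten made-up "diversity" measures and ten made-up "performance" measures,
# all random noise, so the true relationship is zero by construction.
diversity = rng.normal(size=(n_firms, 10))
performance = rng.normal(size=(n_firms, 10))

cherry_picked = []
for i in range(diversity.shape[1]):
    for j in range(performance.shape[1]):
        r, p = stats.pearsonr(diversity[:, i], performance[:, j])
        if p < 0.05:  # the conventional significance cutoff
            cherry_picked.append((i, j, round(float(r), 2), round(float(p), 3)))

# Out of 100 noise-only tests, roughly five will look "significant" by chance.
print(f"{len(cherry_picked)} of 100 noise-only tests passed p < 0.05:")
for i, j, r, p in cherry_picked:
    print(f"  diversity measure {i} vs performance measure {j}: r={r}, p={p}")
```

The point of the sketch is that the number of reported results means little without knowing how many tests were run in total.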

Another is dressing up correlation as causation. Breastfed babies have better health outcomes. But this doesn’t mean that breastfeeding causes better health. Instead, mothers with a more supportive home environment are able to breastfeed, and this supportive home environment is what causes better health. 

You write, “Now more than ever, the person on the street plays an important role in combatting or amplifying misinformation.” What are your tips for social media users to prevent them from amplifying misinformation?

Pause before sharing. Sharing misinformation is like spreading a virus: the person you share it with can infect others. Check that the methodology is watertight rather than sharing something because you like the findings. If you don’t have the time or expertise to do this, don’t share it, just as you wouldn’t go to a party if you weren’t sure whether you were contagious.

Calling out people for spreading misinformation is challenging and much easier said than done. Can you give an example of when you had to do this?

London Business School, where I’m a professor, released a report claiming that diversity improves performance. And not just any report, but one commissioned by the UK regulator, so that it might shape law. As an ethnic minority, I was delighted to see the finding, and I could already see it being widely shared on social media and by companies.

But I needed to practice what I preach and not take it at face value, so I read the study. It ran 90 tests linking diversity to performance, and not one found a positive result. The authors claimed a result that simply wasn’t there. Yet so many people believed the finding that I needed to set the record straight, even though I wanted it to be true, so I wrote an op-ed in a leading UK newspaper. You might think it was brave to call out my own employer. But my intent was never to “call people out,” accuse them of spreading misinformation, or laugh at them for making basic mistakes; it was simply to pursue the truth. I didn’t single out any individuals, only the study, since the article focused on the evidence; it was nothing personal. The tone of the op-ed was constructive, and it ended by saying that even if diversity doesn’t improve performance, that doesn’t invalidate diversity initiatives; instead, you pursue them because it’s the right thing to do, not to make more money.

What is the big takeaway of May Contain Lies for business leaders? What are some effective strategies for incorporating critical thinking into business decisions?

One is to beware of case studies, which have become very influential in business education. Case studies are isolated anecdotes that may be the exception to the rule. Case-study writers have an incentive to find the most extreme case illustrating their principle (e.g., a CEO who set a minimum wage of $70,000 and whose company flourished), and the principle may not apply in more typical cases.

A second, quite different, strategy is to build smart-thinking organisations that actively seek and reward different views. In meetings, call on juniors to speak first so they don’t feel pressured to affirm what their seniors have already expressed. But even that’s not enough: if the agenda has been released beforehand, people may already be discussing it around the water cooler, and the juniors may have gotten wind of what the seniors think. So this should be supplemented with a “silent start,” where the agenda and pre-reading are released only at the start of the meeting and everyone is given half an hour to read them. That way, when the juniors are called on to speak first, what they say is truly their own view.

Can you elaborate on how DEI helps organizations “think smarter?”

In fact, DEI as commonly practiced does not. Most DEI initiatives focus on demographic diversity, but there is very little evidence that this improves company performance (the evidence that claims it does is flimsy). What does help organisations think smarter is recruiting for cognitive diversity (such as different educational and work backgrounds) and creating a psychologically safe culture where people are willing to speak up and are positively rewarded for sharing dissenting views.

Your book encourages us to be skeptical of the information we are exposed to. So, let’s turn this back on your work – why should we take May Contain Lies at face value? 

You should not! I am human and prone to biases, so I may have made mistakes. Thus, please read the book with the same critical eye I am trying to encourage.

I took steps to minimize the risk of mistakes. For example, of all the agents who offered to represent me, I chose the one who was most critical of my proposal, because I thought he had the greatest potential to improve it. After completing the first draft, I sent it to several people I trusted and asked them to be as critical as possible. I paid research associates to critique and fact-check it. I’ve had to scrutinise research in this way for 17 years as a finance professor and seven as a journal editor, so I tried to apply the same diligence to my own work. I even had to scrap a paper I’d included in Chapter 2 of the book because a critique highlighted problems with the underlying research. But I still may have made errors, so please be critical and let me know if something is wrong.

You conclude the book with a hopeful message: “Understanding the limitations of evidence as well as its power helps us live more freely.” What do you mean by this?

Evidence is often much weaker than commonly portrayed, yet people present it as black-and-white rules, such as “women should only breastfeed” or “you need to cut out all carbs if you want to lose weight.” When we recognise that evidence is rarely cast-iron proof, we don’t need to follow such rigid rules (and those rules are often misleading anyway).
