3 Ways Scientific Thinking Could Help Save the World


A physicist, a philosopher and a psychologist walk into a classroom.

Although it sounds like a premise for a joke, this was actually the origin of a unique collaboration between Nobel Prize–winning physicist Saul Perlmutter, philosopher John Campbell and psychologist Rob MacCoun. Spurred by what they saw as a perilously rising tide of irrationality, misinformation and sociopolitical polarization, they teamed up in 2011 to create a multidisciplinary course at the University of California, Berkeley, with the modest goal of teaching undergraduate students how to think—more specifically, how to think like a scientist. That is, they wished to show students how to use scientific tools and techniques for solving problems, making decisions and distinguishing reality from fantasy. The course proved popular, drawing enough interest to run for more than a decade (and counting) while sparking multiple spin-offs at other universities and institutions.

Now the three researchers are bringing their message to the masses with a new book, Third Millennium Thinking: Creating Sense in a World of Nonsense. And their timing is impeccable: Our world seems to have only become more uncertain and complex since their course began, with cognitive biases and information overload all too easily clouding debates over high-stakes issues such as climate change, global pandemics, and the development and regulation of artificial intelligence. But one need not be an academic expert or policymaker to find value in this book’s pages. From parsing the daily news to treating a medical condition, talking with opposite-minded relatives at Thanksgiving or even choosing how to vote in an election, Third Millennium Thinking offers lessons that anyone can use—individually and collectively—to make smarter, better decisions in everyday life.


Scientific American spoke with Perlmutter, Campbell and MacCoun about their work—and whether it’s wishful thinking to believe logic and evidence can save the world.

[An edited transcript of the interview follows.]

How did all of this begin, and what motivated each of you to take on such an ambitious project?

PERLMUTTER: In 2011 I was looking at our society making big decisions: “Should we raise the debt ceiling?”—things like that. And surprisingly enough, we were not doing it in a very sensible way. The conversations I was hearing about these political decisions weren’t like those I’d have over lunch with a bunch of scientists at the lab—not because of politics, but rather because of the style of how scientists tend to think about solving problems. And I thought, “Well, where did scientists learn this stuff? And is it possible for us to articulate what these concepts are and teach them in a way that people would apply them in their whole lives, not just in a lab? And can we empower them to think for themselves using the best available cognitive tools rather than teaching them to ‘just trust scientists’?”

So that was the starting point of it. But that’s not the whole story. If you put a bunch of physicists together in a faculty meeting, they don’t necessarily act much more rationally than any other faculty members, right? So it was clear we really needed expertise from other fields, too, such as John’s expertise in philosophy and Rob’s expertise in social psychology. We actually put a little sign up looking for people who’d want to help develop the course. It said something like, “Are you embarrassed watching our society make decisions? Come help invent our course; come help save the world.”

MacCOUN: When Saul approached me about the course, I was delighted to work with him. Even back in 2011 I was filled with angst about the inefficacy of policy debates; I had spent years working on two big hot-button issues: drug legalization and open military service for gay and lesbian individuals. I worked with policymakers and advocates on both sides, just trying to be an honest broker in these debates to help clarify the truth—you know, “What do we actually know, and what don’t we know?” And the quality of debate for both of those issues was so bad, with so much distortion of research findings. So when Saul mentioned the course to me, I just jumped at the chance to work on this.

CAMPBELL: It was obvious to me that this was philosophically very interesting. I mean, we’re talking about how science feeds into decision-making. And in decision-making, there are always questions of value, as well as questions of fact; questions about where you want to go, as well as questions about how we get there; and questions about what “the science” can answer. And it’s very interesting to ask, “Can we tease apart facts and values in decision-making? Does the science have anything to tell us about values?” Well, likely not. Scientists always shy away from telling us about values. So we need to know something about how broader affective concerns can be woven in with scientific results in decision-making.

Some of this is about how science is embedded in the life of a community. You take a village—you have the pub, you have the church, you know clearly what they are for and how they function in the whole community. But then the science, what is that? Is it just this kind of shimmering thing that produces telephones, TVs and stuff? How does it fit into the life of the community? How does it embed in our civilization? Classically, it’s been regarded as a “high church” kind of thing. The scientists are literally in an ivory tower and do as they please. And then occasionally, they produce these gadgets, and we’re not sure if we should like them or not. But we really need a healthier, more grounded conception of how science plays into our broader society.

I’m glad you brought up the distinction between facts and values. To me, that overlaps with the distinction between groups and individuals—“values” feel more personal and subjective and thus more directly applicable to a reader, in a way. And the book is ultimately about how individuals can empower themselves with so-called scientific thinking—presumably to live their best lives based on their personal values. But how does that accord with this other assertion you’ve just made, saying science likely doesn’t have anything to tell us about values in the first place?

PERLMUTTER: Well, I think what John was getting at is: even once we develop all these ways to think through facts, we don’t want to stop thinking through values, right? One point here is that we’ve actually made progress together thinking about values over centuries. And we have to keep talking to each other. But it’s still very helpful to separate the values and the facts because each requires a slightly different style of thinking, and you want people to be able to do both.

MacCOUN: That’s right. Scientists can’t tell us, and in fact shouldn’t tell us, what values to hold. Scientists get in trouble when they try that. We talk in the book about “pathologies” of science that sometimes happen and how those can be driven by values-based thinking. Where science excels regarding values is in clarifying where and how they conflict: in public policy analysis, it can lay out the trade-offs so that the stakeholders in a debate understand, empirically, how various outcomes advance certain values while impeding others. Usually what happens next is finding solutions that minimize those trade-offs and reduce the friction between conflicting values.

And let’s be clear: when we talk about values, we sometimes talk as if people are either one thing or another. You know, someone may ask, “Are you for or against ‘freedom’?” But in reality, everyone values freedom. It’s just a question of how much, of how we differ in our rankings of such things. And we’re all looking for some way to pursue more than one value at a time, and we need other people to help us get there.

PERLMUTTER: And let’s remember that we’re not even consistent within our own selves about our individual rankings of values, which tend to fluctuate a lot based on the situation.

I love how our discussion is now reflecting the style of the book: breezy and approachable but also unflinching in talking about complexity and uncertainty. And in it, you’re trying to give readers a “tool kit” for navigating such things. That’s great, yet it can be challenging for readers who might assume it’s, say, a science-infused self-help book offering them a few simple rules about how to improve their rational thinking. This makes me wonder: If you did have to somehow reduce the book’s message to something like a series of bullet points on a note card, what would that be? What are the most essential tools in the kit?

CAMPBELL: This may be a bit ironic, but I was reading somewhere recently that where AI programs such as ChatGPT really go wrong is in not giving sources. Most of these tools don’t tell you what evidence they’re using for their outputs. And you’d think, of course, we should always show what evidence we have for anything we’re going to say. But really, we can’t do that. Most of us can’t remember the evidence for half of what we know. What we can usually recall is how likely we thought some assertion was to be true, how probable we thought it was. And keeping track of this is a worthwhile habit of mind: if you’re going to act on any belief you might have, you need to know the strength with which you can hold that belief.

PERLMUTTER: We spend a fair amount of time on this in the book because it allows you to see that the world doesn’t come to us with certainty in almost anything. Even when we’re pretty sure of something, we’re only pretty sure, and there’s real utility in having a sense of the possibility for something contradicting what we think or expect. Many people do this naturally all the time, thinking about the odds for placing a bet on their favorite sports team or about the chance of a rain shower spoiling a picnic. Acknowledging uncertainty puts your ego in the right place. Your ego should, in the end, be attached to being pretty good at knowing how strong or weak your trust is in some fact rather than in being always right. Needing to always be right is a very problematic way to approach the world. In the book, we compare it to skiing down a mountain with all your weight rigid on both legs; if you don’t ever shift your stance to turn and slow down, you might go very fast, but you usually don’t get very far before toppling over! So instead you need to be able to maneuver and adjust to keep track of what it is that you really do know versus what you don’t. That’s how to actually get wherever you’re trying to go, and it’s also how to have useful conversations with other people who may not agree with you.

MacCOUN: And that sense of working together is important because these habits of mind we’re discussing aren’t just about your personal decision-making; they’re also about how science works in a democracy. You know, scientists end up having to work with people they disagree with all the time. And they cultivate certain communal ways of doing that—because it’s not enough to just be a “better” thinker; even people well-trained in these methods make mistakes. So you also need these habits at a communal level for other people to keep you honest. That means it’s okay, and necessary even, to interact with people who disagree with you—because that’s how you find out when you’re making mistakes. And it doesn’t necessarily mean you’ll change your mind. But it’ll improve your thinking about your own views.

So in summary (with a small worked example of belief-updating after the list):

  1. Try to rank your confidence in your beliefs.

  2. Try to update your beliefs based on new evidence and don’t fear being (temporarily) wrong.

  3. Try to productively engage with others who have different beliefs than you.
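
To make the first two habits concrete, here is a minimal worked sketch, our illustration rather than code from the book, of a Bayesian belief update. The rain probabilities and forecast reliabilities are hypothetical numbers chosen purely for illustration.

```python
# A minimal sketch (illustrative, not from the book) of habits 1 and 2:
# state how confident you are, then revise that confidence when new
# evidence arrives, via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).

def update_belief(prior: float, p_evidence_if_true: float,
                  p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after seeing the evidence."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Habit 1: I rank my confidence that it will rain at 70 percent.
# Habit 2: a forecast then says "no rain." Such a forecast appears
# 20 percent of the time when it actually rains and 70 percent of the
# time when it stays dry. Update rather than cling to the original 70:
belief = update_belief(prior=0.70,
                       p_evidence_if_true=0.20,   # P(forecast | rain)
                       p_evidence_if_false=0.70)  # P(forecast | dry)
print(f"Confidence in rain after the forecast: {belief:.0%}")  # 40%
```

Landing at 40 percent rather than 70 is habit 2 in action: temporarily “wrong,” adjusted without drama, and ready to move again when the next piece of evidence arrives.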

That’s a pretty good “top three” list, I think! But, pardon my cynicism, do you worry that some of this might come off as rather quaint? We mentioned at the outset how this project really began in 2011, not much more than a decade ago. Yet some would probably argue that social and technological changes across that time have now effectively placed us in a different situation, a different world. It seems—to me at least—on average much harder now than it was 10 years ago for people with divergent beliefs and values to have a pleasant, productive conversation. Are the challenges we face today really things that can be solved by everyone just getting together and talking?

CAMPBELL: I agree with you that this sort of cynicism is now widespread. Across the past few decades we seem to have forgotten how to have a conversation across a fundamental divide, so now we take for granted that it’s pointless to try to convert those holding different views. But the alternative is to run society by coercion. And just beating people down with violent subjugation is not a long-term tenable solution. If you’re going to coerce, you have to at least show your work. You have to engage with other people and explain why you think your policies are good.

MacCOUN: You can think of cynicism as this god-awful corrosive mix of skepticism and pessimism. At the other extreme, you have gullibility, which, combined with optimism, leads to wishful thinking. And that’s really not helpful either. In the book we talk about an insight Saul had, which is that scientists tend to combine skepticism with optimism—a combo I’d say is not generally cultivated in our society. Scientists are skeptical, not gullible, but they’re optimistic, not pessimistic: they tend to assume that problems have a solution. So scientists sitting around the table are more likely to be trying to figure out fixes for a problem rather than bemoaning how terrible it is.

PERLMUTTER: This is something we’ve grappled with, and there are a couple of elements, I think, that are important to transmit about it. One is that there are good reasons to be disappointed when you look at the leaders of our society. They’ve now gotten themselves into a structural fix in which they seem unable even to say publicly what they believe, let alone find real compromises on divisive issues. Meanwhile you can find lots of examples of “citizen assembly” events where a random selection of average people who completely disagree, supporting opposite ends of the political spectrum, sit down together and prove much more able to have a civil, thoughtful conversation than their sociopolitical leaders are. That makes me think most of the [people in the] country (but not all!) could have a very reasonable conversation with each other. So clearly there’s an opportunity we haven’t taken advantage of: finding structural ways to empower those conversations rather than leaving everything to the leaders trying to act for us. That’s something to be optimistic about. Another is that the daily news portrays the world as a very scary and negative place—but we know the daily news does not offer a representative take on the true state of the world, especially regarding the huge improvements in human well-being that have occurred over the past few decades.

So it feels to me that many people are living in “crisis” mode because they’re always consuming news that’s presenting us crises every moment and driving us apart with wedge issues. And I think there’s optimism to be found in looking for ways to talk together again. As John says, that’s the only game in town: to try to work with people until you learn something together, as opposed to just trying to win and then having half your population being unhappy.

CAMPBELL: We are maybe the most tribal species on the planet, but we are also perhaps the most amazingly flexible and cooperative species on the planet. And as Saul said, in these almost town-hall-style deliberative citizen assemblies you see this capacity for cooperation coming out, even among people who’d be bitterly divided and [belong to] opposite tribes otherwise—so there must be ways to amplify that and to escape being locked into these tribal schisms.

MacCOUN: And it’s important to remember that research on cooperation suggests you don’t need to have everybody cooperating to get the benefits. You do need a critical mass, but you’re never going to get everyone, so you shouldn’t waste your time trying to reach 100 percent. [Political scientist] Robert Axelrod and others studying the evolution of cooperation have shown that if cooperators can find each other, they can start to thrive and begin attracting other cooperators, and they can become more robust in the face of those who are uncooperative or trying to undermine cooperation. So somehow getting that critical mass is probably the best you can hope for.
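
MacCoun’s critical-mass point can be made quantitative with a stylized model in the spirit of Axelrod’s iterated prisoner’s dilemma tournaments. This is our back-of-envelope sketch using standard textbook payoffs, not Axelrod’s actual code or results:

```python
# Stylized critical-mass model (illustrative, not Axelrod's tournament code).
# Reciprocal cooperators play tit-for-tat; everyone else always defects.

T, R, P, S = 5, 3, 1, 0   # textbook prisoner's dilemma payoffs
N_ROUNDS = 10             # interactions per pairing

def avg_payoffs(p_tft: float) -> tuple[float, float]:
    """Average per-pairing payoffs for tit-for-tat and always-defect
    when a fraction p_tft of the population plays tit-for-tat."""
    tft_vs_tft = R * N_ROUNDS              # mutual cooperation throughout
    tft_vs_alld = S + P * (N_ROUNDS - 1)   # exploited once, then retaliates
    alld_vs_tft = T + P * (N_ROUNDS - 1)   # exploits once, then gets punished
    alld_vs_alld = P * N_ROUNDS            # mutual defection throughout
    tft = p_tft * tft_vs_tft + (1 - p_tft) * tft_vs_alld
    alld = p_tft * alld_vs_tft + (1 - p_tft) * alld_vs_alld
    return tft, alld

for p in (0.02, 0.06, 0.25, 0.50):
    tft, alld = avg_payoffs(p)
    leader = "cooperators" if tft > alld else "defectors"
    print(f"{p:.0%} cooperators: TFT={tft:.2f}, ALLD={alld:.2f} ({leader} ahead)")
```

Under these assumed payoffs the break-even share is only about 6 percent (1/(2n - 3) with n = 10 rounds): once that small cluster of cooperators can find one another, they out-earn the defectors, echoing the point that a modest critical mass, not unanimity, is what matters.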

I’m sure it hasn’t escaped anyone’s notice that as we discuss large-scale social cooperation, we’re also in an election year in the U.S., ostensibly the world’s most powerful democracy. And sure, part of the equation here is breaking down walls with basic acts of kindness and humility: love thy neighbor, find common ground, and so on. But what about voting? Does scientific decision-making give us some guidance on “best practices” there?

PERLMUTTER: Well, clearly we want this to be something that transcends election years. But in general, you should avoid making decisions—voting included—purely based on fear. This is not a time in the world where fear should be the dominant thing driving our individual or collective actions. Most of our fears divide us, yet most of our strength is found in working together to solve problems. So one basic thing is not to let yourself be flustered into voting for anyone or anything out of fear. But another is to look for leaders who use and reflect the scientific style of thinking, in which you’re open to being wrong, you’re bound by evidence, and you’re able to change your mind if it turns out that you were pursuing a bad plan. And that’s something that unfortunately we very rarely see.

CAMPBELL: At the moment we have an abundance of free speech—everyone can get on to some kind of social media and explain their views to the entire country. But we seem to have forgotten that the whole point of free speech was the testing of ideas. That was why it seemed like such a good thing: through free speech, new ideas can be generated and discussed and tested. But that idea of testing the ideas you freely express has just dropped out of the culture. We really need to tune back in to that in how we teach and talk about free speech and its value. It’s not just an end in itself, you know?

MacCOUN: And let’s be mindful of some lessons from history, too. For a lot of these issues that are so polarizing and divisive, it’s probably going to turn out that neither side was completely right, and there was some third possibility that didn’t occur to most, if any, of us. This happens in science all the time, with each victorious insight usually being provisional until the next, better theory or piece of evidence comes along. And in the same way, if we can’t move past arguing about our current conception of these problems, we’re trapping ourselves in this one little region of conceptual space when the solution might lie somewhere outside. This is one of very many cognitive traps we talk about in the book. Rather than staking out our hill to die on, we should be more open to uncertainty and experimentation: we test some policy solution to a problem, and if it doesn’t work, we’re ready to rapidly make adjustments and try something else.

Maybe we can practice what we preach here, this idea of performing evidence-based testing and course correction and escaping various sorts of cognitive traps. While you were working on this book, did you find and reflect on any irrational habits of mind you might have? And was there a case where you chose a hill to die on, and you were wrong, and you begrudgingly adjusted?

MacCOUN: Yeah, in the book we give examples of our own personal mistakes. One from my own research involves the replicability crisis and people engaging in confirmation bias. I had written a review paper summarizing evidence that seemed to show that decriminalizing drugs—that is, removing criminal penalties for them—did not lead to higher levels of use. After writing it, I had a new opportunity to test that hypothesis, looking at data from Italy, where in the 1970s they’d basically decriminalized personal possession of small quantities of all drugs. And then they recriminalized them in 1990. And then they redecriminalized in 1993. So it was like a perfect opportunity. And the data showed drug-related deaths actually went down when they reinstituted penalties and went back up again when the penalties were removed. And this was the complete opposite of what I had already staked my reputation on! And so, well, I had a personal bias, right? And that’s really the only reason I went and did more research, digging deeper into this Italian thing, because I didn’t like the findings. So across the same span of time I looked at Spain (a country that had decriminalized without recriminalizing) and at Germany (a country that never decriminalized during that time), and all three showed the same death pattern. This suggests that the suspicious pattern of deaths in fact had nothing to do with penalties. Now, I think that leads to the correct conclusion—my original conclusion, of course! But the point is: I’m embarrassed to admit I had fallen into the trap of confirmation bias—or, really, of its close cousin called disconfirmation bias, where you’re much tougher on evidence that seems to run counter to your beliefs. It’s a teachable moment, for sure.
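
The check MacCoun describes, comparing the country that changed its law with countries that didn’t, follows classic comparison-group logic. Here is a toy sketch of that logic with made-up numbers, not his actual Italian, Spanish or German data:

```python
# Toy difference-in-differences check with hypothetical numbers (not the
# actual study data): subtract the control country's change from the
# treated country's change. A result near zero suggests the law change
# is probably not what moved the deaths.

def did_estimate(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Treated country's change minus the control country's change."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical annual drug-related deaths around a recriminalization date,
# with a country that never changed its law serving as the control:
effect = did_estimate(treated_before=1000, treated_after=820,
                      control_before=900, control_after=730)
print(f"Estimated effect of the penalty change: {effect:+.0f} deaths")  # -10
```

A near-zero estimate means the treated country’s trend mirrors the control’s, pointing to some common cause other than the penalty change, which is exactly what the three-country comparison showed.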

CAMPBELL: It takes a lot of courage to admit these sorts of things and make the necessary transitions. One cognitive trap that affects many of us is what’s called the implicit bias blind spot, where you can be really subtle and perceptive in spotting other people’s biases but not your own. You can often see a bias of some sort in an instant in other people. But what happens when you look at yourself? The reaction is usually, “Nah, I don’t do that stuff!” You know, I must have been through hundreds and hundreds of student applications for admission or searches for faculty members, and I never spotted myself being biased at all, not once. “I just look at the applications straight,” right? But that can’t always be true because the person easiest to fool is yourself! Realizing that can be such a revelation.

PERLMUTTER: And this really informs one of the book’s key points: that we need to find better ways to work with people with whom we disagree—because one of the very best ways to get at your own biases is to find somebody who disagrees with you and is strongly motivated to prove you wrong. It’s hard, but you really do need the loyal opposition. Thinking back, for instance, to the big race for measuring the cosmological expansion of the universe that led to the discovery of dark energy, it was between my team and another team. Sometimes my colleagues and I would see members of the other team showing up to do their observations at the telescopes just as we were leaving from doing ours, and it was uncomfortable knowing both teams were chasing the same thing. On the other hand, that competition ensured we’d each try to figure out if the other team was making mistakes, and it greatly improved the confidence we collectively had in our results. But it’s not good enough just to have two opposing sides—you also need ways for them to engage with each other.

I realize I’ve inadvertently left probably the most basic question for last. What exactly is “third millennium thinking”?

PERLMUTTER: That’s okay; we actually leave explaining this to the book’s last chapter, too!

MacCOUN: Third millennium thinking is about recognizing a big shift that’s underway. We all have a sense of what the long millennia predating science must have been like, and we all know the tremendous advances that gradually came about as the modern scientific era emerged—from the practices of various ancient civilizations to the Renaissance and the Enlightenment, all those shifts in thinking that led to the amazing scientific revolution that has so profoundly changed our world here in what, until the end of the 20th century, was the second millennium. But there’s also been disenchantment with science, especially recently. And there’s validity to concerns that science was sometimes just a handmaiden of the powerful and that scientists sometimes wield more authority than they deserve to advance their own personal projects and politics. And sometimes science can become pathological; sometimes it can fail.

A big part of third millennium thinking is acknowledging science’s historic faults but also its capacity for self-correction, some of which we’re seeing today. We think this is leading us into a new era in which science is becoming less hierarchical. It’s becoming more interdisciplinary and team-based and, in some cases, more approachable, with everyday people able to be meaningfully involved—think of so-called citizen science projects. Science is also becoming more open, with researchers showing their work by making their data and methods more readily available so that others can independently check them. And we hope these sorts of changes are making scientists more humble: This attitude of “yeah, I’ve got the Ph.D., so you listen to me” doesn’t necessarily work anymore for big, divisive policy issues. You need a more deliberative consultation in which everyday people can be involved. Scientists do need to stay in their lane to some extent and not claim authority just based on their pedigree—the authority comes from the method used, not from the pedigree.

We see all of these as connected in their potential to advance a new way of doing science and of being scientists, and that’s what third millennium thinking is about.

CAMPBELL: With the COVID pandemic, I think we’ve all sadly become very familiar with the idea that the freedom of the individual citizen is somehow opposed to the authority of the scientist. You know, “the scientist is a person who will boss you around, diminish your freedom and inject you with vaccines laced with mind-controlling nanobots” or whatever. And it’s such a shame. It’s so debilitating when people use or see science like that. Or alternatively, you might say, “Well, I’m no scientist, and I can’t do the math, so I’ll just believe and do whatever they tell me.” And that really is relinquishing your freedom. Science should be an enabler of individual power, not a threat to your freedom. Third millennium thinking is about achieving that, allowing as many people as possible to be empowered—to empower themselves—by using scientific thinking.

PERLMUTTER: Exactly. We’re trying to help people see that this combination of trends we’re now seeing around the world is actually a very fertile opportunity for big, meaningful, positive change. And if we lean into this, it could set us in a very good position on the long-term path to a really great millennium. Even though there are all these other forces to worry about at the moment, by applying the tools, ideas and processes from the culture of science to other parts of our lives, we can have the wind at our back as we move toward a brighter, better future.
