Much of our policy debate relies on predictions, projections, and probabilities.
What will the results of the upcoming election be? How will this policy affect
economic growth? How big of a threat is climate change in the long term? What
do the epidemiological models say about handling the COVID pandemic? It’s
important to answer these questions as best as we can, but we should also
recognize that some uncertainty is inevitable. We can’t quantify our way
through difficult, ambiguous problems. At least, that’s the argument I explored
with a recent podcast guest, Lord Mervyn King.
Mervyn is a professor of both economics and law at New York University, and he is a former governor of the Bank of England. He is also the co-author, along with John Kay, of Radical Uncertainty: Decision-Making Beyond the Numbers.
What follows is a lightly edited transcript of our conversation, including brief portions that were cut from the original podcast. You can download the episode here, and don’t forget to subscribe to my podcast on Apple Podcasts or Stitcher. Tell your friends, leave a review.
Pethokoukis: I’m going to start by reading just a sentence or two from your book:
“We are emphasizing the vast range of possibilities that lie in between the world of unlikely events, which can nevertheless be described with the aid of probability distributions, and the world of the unimaginable. This is a world of uncertain futures and unpredictable consequences, about which there is necessary speculation and inevitable disagreement; disagreement which often will never be resolved, and it is that world which we mostly encounter.”
In fact, I would add that we have just encountered that world in a very tangible, important way: the election we just had, in which the polls all seemed to point to a very large victory for former Vice President Joe Biden, and then models, based on those polls, showed an 85-90 percent chance of a Biden victory. And indeed, that was the eventual outcome. However, it does not feel like those polls and models were correct. Would that qualify as the kind of radical uncertainty the book addresses?
King: Yes. The radical uncertainty, as you've said in the quote, lies between events that we can very happily use the laws of probability to describe — tossing a coin, events of that sort, events which are repeatable so we can compute the probabilities — and events which are totally unimaginable.
So when it came to the election result, there were only two possible outcomes in terms of who would be president, so this was certainly imaginable. Still, the idea that we could easily attach probabilities to those outcomes, I think, was an illusion. This was a one-off race; it was not an election contest that was going to be repeated. So the models that people constructed to convey the illusion of probabilities were really attempts to express their confidence in their prediction. But it wasn't the same as saying, "If this event were repeated so many times, then Biden would win 85 out of 100 elections, and Trump only 15." It's not like that, so probabilities in that context tend to confuse rather than help us think about the outcome.
I spent a lot of time looking at polls and models, and there's been an increase in the amount of election modeling that attempts to figure out what the polls get right and wrong. These models feature very prominently: A lot of news organizations spent a lot of time creating their own, and of course you have something like FiveThirtyEight, which is its own thing. Is that just a useless endeavor?
Well, it may not be totally useless if certain people want to bet large sums of money on it, but I think what you have to recognize is that it's the models that are driving the prediction, rather than the responses given by potential voters. Mapping the direct responses of individuals in the polls into a predicted result requires a great deal of modeling of human behavior: How strongly do people want to go out and vote? These things change from election to election.
So in the UK, we have the very clear example in 2016 of the Brexit referendum, when young people did not vote. Then when it came to 2017, the modelers informing the pollsters said, “Ah, we’ve now learned that young people don’t vote very much, so we’ll infer from the responses that we can give less weight to young people.” Well, that turned out to be wrong in 2017 — they did come out and vote.
So, to use the technical phrase, people's behavior is not stationary; it changes from one year to the next, or one period to the next. And unless you really understand that, I think there's no way we can pretend that we can predict it. The models are bound to make mistakes. What we need to do is recognize that there is a very wide margin of error around many of these polls and predictions. And in that case, they may not be terribly illuminating, because with such a wide margin of error they're not giving us the information we'd like to have. We simply will not know who wins until we get to election day itself.
Does it surprise you at all that betting markets seem to have had a much more skeptical view of the election? They seem to have been more accurate: Betting markets were looking at about a 60-percent chance of a Biden win — unlike the models, which had Biden at about a 90-percent chance. Does that surprise you? Should policymakers maybe be relying more on betting markets, or should the media at least be relying more on betting markets than on very sophisticated models when talking about elections?
I think we've learned that models — not just in this area, but economic models and COVID-19 epidemiological models, for that matter — are very bad at predicting the future. So the lesson is not to put too much money behind these kinds of predictions. You want to keep your options open. Where there's this type of uncertainty, the big lesson is to keep options open and wait until you know the outcome.
I think what we realize here is that there's no objective truth to these forecasts or probabilities. They are judgments made by people. Very often, the probabilities, as I said, reflect not some objective sense of probability but a degree of confidence on the part of the person making the prediction. And that is likely to be very different when you're just making a prediction, in contrast with a situation where people are actually putting sums of money down — quite large sums of money, potentially — on the outcome. That is always worth looking at, because people are backing their judgment by putting money on the table. But even then, I'm not sure that I would want to make decisions that had to be locked in on a date in advance of the election. I would say, "Look, let's keep our options open, wait until we've heard the result, and then decide what to do." There are not that many decisions that you're forced to take in advance of the election.
You mentioned Brexit. One of the feelings during that period was that the result was a rejection of experts and of false expertise. And people drew much the same conclusion from the financial crisis. Now we have another election in which the experts seem to have been wrong, and the conclusion people will again draw, perhaps even more strongly, is, "Experts, they're useless. They're unable to properly measure or predict anything of importance. We should just tune them out."
I assume that’s not quite your conclusion — that we should just tune out experts, that we should disregard their supposed expertise, whether it’s in election modeling, or economic forecasts, or economic advice — right?
Well, I certainly don't want to give up on the use of experts. After all, if you contract a disease, you don't just want to talk to your next-door neighbor; you want to talk to a qualified doctor. And if you get into an airplane, you'd hope that the pilot is a qualified pilot, not just someone who was given a discounted ticket. So you do need expertise when it comes to making decisions, and you want people to explain and understand what they're doing. But I think there's a very big difference between using experts to help us understand what is going on and using them to make predictions.
And I think a good example of that is what's happened in many countries in the West with COVID-19. Experts can tell us about the nature of viruses because they understand epidemics. They can tell us that epidemics start slowly, which is why they're difficult to track at first. Then they tend to accelerate, and then decline. And you may get second or third waves.
These insights are very important in thinking about how to tackle an epidemic. But what the epidemiological models are not good at doing is predicting the path of any given epidemic. And we've seen that very clearly since March. The forecasts that have been made about the number of cases and the number of deaths have been way off in many instances.
That's partly because the experts disagree about some of the key parameters, but in large part, it's because some of the crucial parameters that determine the spread of an epidemic are about human behavior. And these are not scientific constants that you can measure in experiments and then feed into the model to make a prediction. They are assumptions that the modelers are making, and those assumptions can very often be wrong, even the ones about the science. The mortality rate from COVID-19, for example — we simply didn't know that in March, and it's not at all clear that we know today what that number is. So there are many things about the nature of COVID-19, the phenomenon itself, that we don't understand. That is not to reject experts, but it is to warn us not to rely on experts to give us the answer.
So when governments today say, “We’re just doing what the science tells us to do,” that is very foolish, because the science doesn’t tell us what we must do. It informs the judgments that our political leaders have to make about difficult trade-offs between deaths that may result if we lock down the economy versus deaths from COVID-19. And we simply do not know enough to say, “If we take action A, then the number of deaths will be a certain number, whereas if we take a different policy response, we’ll have a different number of deaths.” We simply don’t know that.
That’s why the judgments are so difficult, and I suspect why too many politicians have hidden behind the scientists in saying, “We’re just doing what the science tells us to do,” when science can’t do that. I think what the experts need to do is be very honest about what they do know — which is a lot more than the rest of us — and what they do not know. And it’s what they do not know that makes it very difficult, if not impossible, to make good predictions.
Whether it’s epidemiologists or economists, I think people will view them as forecasting machines, which you say is not how we should view them. Should we then view them more as framers of choices? Is that a better way to view their role?
It’s a better way. I think when we’re confronted with any situation that’s radically uncertain in our sense, the right question to ask is, “What is going on here?”
What was going on in the election that we’ve just seen? Two very different visions, two styles of argument — so what was going on here? Did we really understand why people had voted for Trump? Did we really understand why people might decide to switch from voting for Trump last time to Biden this time? Those are the questions to ask. And there are no simple scientific facts that give you the answer to those questions. The same is true of COVID-19.
What experts can do is help us understand what may be going on so that we can think more clearly about the question. In COVID-19, it's the scientific insights into how epidemics behave, and it's the collection of data. Experts may also tell us what we need to know more about: If you want to know the virus's mortality rate, we may need to do large random testing of the population to find out how many people actually have it. Even if we can identify the number of people who have died from it, we need to know the number of people who've got it in the first place to know the mortality rate.
And I think it’s this role of helping us think through a problem that is important, not the pretense that the expert has the answer.
And yet, of course, people continue to view experts and economists, which is what I focus on a lot, as people who have the answer. I know the Biden campaign was very excited when one well-known economist said, “The Biden economic plan will generate seven million jobs over the next four years,” versus the Trump economic plan, which would generate a lesser number — something like four million jobs — over the next four years. And they touted that analysis as, “Aha, that proves our economic plan is better.”
Now, I would expect politicians to continue to do that sort of thing, but it seems you're saying that voters, at least, should be very skeptical of those kinds of claims.
They should be deeply skeptical of them, and I think they are skeptical of them. I’ll give you the example of the Brexit referendum in the UK, when the Remain side, led by the government, argued that every family would be 4,300 pounds a year worse off if we left the European Union.
Many of the people I spoke to didn’t want me to say how they should vote or give advice as an expert. They wanted to know where they could find the information. And they realized when the government made this claim that the government could not possibly know that. No one knew that. This was a very uncertain step that we were about to take, and no one could possibly predict exactly what the consequences would be. So, when one side made a claim with that degree of precision, people said, “Look. This is just propaganda. They’re not taking us, the voters, seriously. They’re not treating us seriously.” And that lost them a lot of support, I think, by making that kind of claim.
I think that politicians, in the long run, lose credibility by making claims and forecasts that are simply not borne out. And it is much better for a leader to be able to say to people, “Look, none of us know the answer to this question. I don’t know the answer, and I don’t think anyone else does. But let’s think our way through the problem. This is what, given our information today, seems the most sensible course to take, but as information comes in, we may have to change it.”
What you don't want to do is say, "This is the answer. I know it's absolutely right," and then three weeks later say, "Well actually, I've changed my mind, but this now really is the right answer," only to change again three weeks after that. And that is what has happened in many countries with COVID-19. A much more humble approach, I think, would actually benefit politicians, because people's belief in their credibility responds to honesty and openness, not to false predictions.
What is the role, then, of experience-based intuition and judgment? At some point, you've heard the analysis, you've heard the forecast, you've seen the cost-benefit analysis, and it's time to make a decision. How do you make that leap?
President Bush used to say that he had a great “gut” — that he knew how to “go with his gut.” Is there something to that at some point? Should you listen to that voice?
Well, there’s something to it. There’s something to that, but I think it’s actually important not just to rely on gut instinct. One of the experts’ roles is not to say, “This is the answer,” but to say to the political leader, “There are three or four numbers here that really matter.”
Suppose you’re thinking about the cost-benefit analysis of a new road, airport, or railway line. Typically, what happens is the experts go away, do a lot of calculations, and come back with one number. You don’t want that. You want the experts to say to you, “Look, there are three numbers that probably really matter here because this is what’s going to drive the outcome. You really need to have a decent handle on the cost of the project, and you need to have some handle on what it is you think this project is going to achieve.”
Keep it simple. Don’t complicate it, and don’t try to wrap up the answer in terms of something that only a computer can calculate. And so, in a way, it’s simplifying the problem.
Here’s where experience comes in: It gives you a strong feel for what things you should be suspicious of when people produce calculations and forecasts — and what things you think you can rely on. But you should always ask yourself the question, “What is going on here?”
In terms of making decisions, if you’ve got a group of people around you and you’ve got a proposed decision coming out, you really do want another team of people to work for you who will dissect that proposed decision and say, “Look, these are the potential weaknesses in the argument,” and actually challenge the decision before you take it. Challenge the narrative that’s going into the decision — that’s really important.
But the last thing you want to do is to outsource the decision to a group of so-called experts who’ve got a model, make assumptions about all kinds of things that you’re not aware of, turn the handle, get a number, and come back and say, “This is what you must do.” That is not a helpful way to make a decision.
It is a question of creating a narrative, where experience really does matter, but getting it challenged by other people. So just relying on one’s own gut instinct can be dangerous.
However, I can certainly imagine that, given the growing sophistication of artificial intelligence and machine learning programs, policymakers looking for certainty and cover for their decisions may actually end up relying more on models that are, in any circumstance, difficult to understand, even more so if you're a policymaker whose background is the law rather than computer science. So we may actually see these AI programs take a larger role in how we make decisions. And certainly, sometimes they do quite well: There have been some machine learning-based coronavirus forecasts, I'm told, that seem to have been very, very accurate. I suppose if you run enough models, you'll find one that's able to call it.
Are you concerned that, in fact, policymakers, rather than relying more on experience and multiple teams kind of challenging each other, will just say, “What does the AI say?” and go with that?
So there is a view that we should give as many decisions as possible to algorithms and AI because, unlike humans, they don’t make mistakes in terms of calculations or mathematical reasoning. And that’s fine if the only thing you’re doing is confronting a problem to which the answer can be obtained by mathematical reasoning, but that is not true of most of the decisions that our political leaders have to take.
There are some things for which algorithms are better than humans. For example, if you give scans of possible tumors to a machine, the machine can give you a more accurate answer as to whether a tumor is benign or malignant than even experienced doctors can, because the machine can be shown many, many more examples of tumors and learn from them. It builds on the idea that machines can play chess or Go better than humans because they can play many more games than humans ever can — they play them against each other.
Now that's fine, up to a point. But that only helps with certain kinds of decisions, where you can program in advance all the information needed to make an accurate decision. Most of the problems we face are one-off, unique decisions, and there is no way you can program a computer to resolve them.
One of the examples we give in the book is President Obama sitting in the Situation Room, deciding whether or not to send the Navy SEALs into the compound in Abbottabad to try to take Osama bin Laden, dead or alive. There were only two possible choices — send the SEALs in or not — and only two possible outcomes: Bin Laden was in the compound or he wasn't.
And the people advising him tried to give advice in terms of probabilities. But that just confused the situation, as President Obama said afterward. What he had to do was accept that we just didn't know whether it was bin Laden in the compound or not. He could ask why his advisors thought it might be bin Laden while other people thought it might not be, but he had to process that information himself. There is no way you could put that into a computer and come up with an answer, because there are not thousands of previous episodes in which people have looked for bin Laden in compounds that could supply the information a computer might be able to process. This was a one-off, unique decision, and it required judgment — it required someone's ability to probe and challenge the arguments given by the advisors as to why they thought it might be bin Laden. You ask the right questions, you challenge the narrative that it's bin Laden, and then you end up making a decision.
And I think there's no way around that for most important decisions that we confront. Some things we can give to computers, and we should do that whenever it is attractive to do so, because they can do calculations faster than we humans can.
But here's the rub: If thinking like a computer were so successful and so important, we would have evolved to think like computers, and we didn't. Humans evolved not to think like calculating machines but to be incredibly good at spotting links between things that don't seem to be connected. We see a connection, we can confront something we've never seen before, and we can struggle to find a way to adapt to it. We are better at coping with the unexpected than any other species on Earth, which is why we are the most successful species on Earth.
But we’re not like computers. We should use computers to help us. We are complementary to computers, but computers will never supersede humans in terms of decision-making.
As I was reading the book, I kept thinking about the issue of climate change. There seems to be a sense, at least among people who think this is an important issue, that there's a lot of risk involved.
I can think of two basic approaches. One is to say, “We’ve come up with some modeling, and things look bad, probably. There’s a lot of scenarios, but some of them are very bad. So here’s what we need to do right now. We need to take radical action right now to reduce carbon emissions. We need to put on carbon taxes. We need to shut down coal plants. We need to radically change our lives because the best modeling says this is very dangerous.”
Other people might say, “It indeed might be dangerous, but we really aren’t sure of the probabilities of all these outcomes. And suppose there are low-risk, low-cost interventions we can take right now. In that case, we should do them, but what we should mostly do is make sure that we continue to have a technologically advanced country and world with lots of resources so we can adapt to whatever the future holds with climate change. So if it turns out to be worse than we were thinking, we’ll have the technology and resources to quickly ramp up clean energy or try to geo-engineer the climate.”
So one side says we should do all this preparedness and act right now, based on the models. The other says we should make sure that we can act in the future if we have to. Which way is better?
Well, I think this is a difficult judgment to make, and I don't think it hinges on the value of the predictions.
We've learned two big things from COVID-19 that are relevant to climate change. One is that survival actually matters: You can have as efficient an economy as you like, but if a pandemic comes along that wipes us out, that's of no comfort. So we do need to worry about the resilience of the human race and its survival. And resilience means we should not just worry about maximizing production and minimizing costs, being as efficient as possible; we should also try to organize our economy in such a way that it's resilient to unexpected outcomes.
The second, of course, is that we know that survival is at risk not just from climate change but also from many other threats, including pandemics and bioterrorism. There are many ways in which the world could come to a rather unfortunate end, and climate change is not the only one.
But I think the key point I'd make is that what's been unfortunate about the debate on climate change is that those people who want to take drastic action now are pinning their argument on predictions that carry an enormous degree of uncertainty. And that is not sensible. The fact that we don't know how the future climate will evolve, and that there is certainly room for debate about its causes and its extent, is not an argument for doing nothing. Indeed, it may well be an argument for saying, "Since one of the possible outcomes is that the planet comes to a rather sticky end, maybe we should take serious action to prevent that possibility." That's a perfectly good argument.
But I think what we need to do is shift the argument away from these competing predictions about what will happen, when no one really knows what will happen, toward a decision about how much we are prepared to put our future at risk in different ways. And we need to take precautions. We don't want to do it in a way that's so expensive it destroys our standard of living, but there are difficult balancing acts here.
But the judgments about them do not, in my view, depend on which of the predictions you actually believe because none of the predictions can be held with any degree of certainty.
I guess my point isn’t so much the risk of destroying our standard of living. It’s the risk of losing our ability to respond to unforeseen — and unpredictable — events that we want to be able to respond to.
I suppose one way of thinking about pandemics is that they're very risky, and therefore we should try to decentralize our economy. High-density cities are dangerous; we should encourage people not to live in cities. That would be one approach. Of course, doing that would be bad for economic growth — we would lose all these agglomeration effects.
So I would say that rather than tell people we should not live in cities anymore, we should do what preparedness we can with masks, ventilators, and those kinds of things, while making sure we don't do anything to screw up our ability to continue to advance technologically and deal with future pandemics through therapeutics and vaccines. We want to keep pushing that technological frontier, not retreat in the other direction.
No, I agree with that. And I think, to go back to what you said at the beginning of the question, it is very important to keep our options open. If you’re confronted with radical uncertainty, you need to be able to keep options open.
And I think one of the lessons from COVID-19 is that a year ago, the international authorities judged that the US and the UK were the countries best prepared for a pandemic. But it turned out we weren't, and in part that's because we were very well prepared for a pandemic of a particular type (similar to flu). We didn't keep our options open and think about what we might need to do in the event of a pandemic of a different kind altogether. So the question is: If you don't know what new virus is going to come along, and the chances are that another pandemic will occur, what do we need to do?
Well, there are at least three things we need to worry about. One is international air travel, because that's how pandemics today pass so quickly from one country to another. What we do about that, I think, is unclear, but there's plenty of scope for international cooperation, so that if one country discovers it has a virus, it is willing to see travel between it and the rest of the world cease for the time being, until it has got on top of it. Otherwise, we end up banning all international travel, which is where we are today.
The second thing we've learned is that some kind of test-and-trace program is fundamentally important to controlling the spread of the epidemic until we have a vaccine.
And the third thing we've learned is that we need to be able to quickly expand the number of intensive care beds. We don't know exactly what treatments those patients will need, but we know that many people may need intensive care.
On that last one, I think we were fairly successful in expanding the number of intensive care beds; we brought in new capacity quite quickly. We were less successful at ensuring adequate staffing for those new beds, but by and large, we got that bit right. What we did not get right was test-and-trace; we weren't prepared for that. And I think we have to think very carefully about what we want to invest in, so that we'll be better prepared in the future and won't have to take the extreme step of saying to people, "Look, don't live in cities." That would be a very big step backwards for our society.
So I think you’re right: keeping options open, to me, is the key.
I think some people reading this book will think, "This is as persuasive a case as I've ever seen for a more minimalist, hands-off government. If there's all this uncertainty and it's so difficult to figure out what to do, why would policymakers want to do lots of things that require lots of decisions?" Do you think you've made a powerful case for libertarianism?
No, I don’t think we would say that. I don’t think we want to argue that our book justifies any particular system. I think what it does do is say, “Excessively detailed and complex regulation is likely to be seriously counterproductive.” We give a number of examples from the area of financial regulation. So I think that if the government’s going to be successful, keeping it simple is a very important principle.
There are issues that governments do need to intervene on. We've seen that you can't combat an epidemic without an effective government. If you worry about growing inequality in many countries in the West — especially in the US — then what to do about it is a government decision. The opioid epidemic, I think, required intervention by decision-makers to challenge what the drug companies were doing. The companies didn't expose it; it took individual researchers to do that. But coping with these problems is something that only an effective government can do.
And perhaps, and this is very topical looking at it from outside the United States, it would seem that the first challenge facing the next president is to find a way to bring America together and to reduce this extraordinary divisiveness, which was not present in the United States when I first arrived as a graduate student in 1971 and fell in love with America. It is present now, and it's deeply disturbing. And I think that's a political challenge which can't be solved just by leaving it to people. We've got to find a way for political leadership to restore what, in my generation, political leaders did: inspire people to come together and work for each other, as well as for their country.
So I don’t think I’m ready yet to give up on political leadership.
The book is about uncertainty, but it shouldn’t be our goal to squeeze out all of the uncertainty in life. It’s not necessarily a bad thing, is it? Or am I confusing risk and uncertainty?
We tried to draw a distinction between risk and uncertainty in the book. We talk about risk as a characteristic of individuals or organizations: They have a view as to where their lives are going, and risk is any downside outcome relative to that. A simple example is the father preparing for the wedding of his daughter. He expects things to go well, but what could go wrong? Well, the caterers may fail to turn up, the bridegroom may disappear at the last minute, or it could pour with rain, destroying people's enjoyment of the ceremony. Those are risks.
But we stress very much that uncertainty is to be embraced. Uncertainty is the source of almost everything good in life. When Frank Knight wrote about uncertainty a century ago, he saw it in terms of what entrepreneurs could do to create new products that people hadn't imagined before — things being created all the time that no one knew about and no one could bet on. These things were the driving force behind a market economy. They are the creations that boost our living standards over time. So from a purely economic point of view, that degree of uncertainty is fundamentally important.
But more than that, I'll give you the example of what I find quite often at student graduations, when students will tell me that they've enjoyed their undergraduate course, but now they're launching out into the world and are frightened because they think their future is very uncertain. And I say to them, "You should be very happy about that. If I could give you today a list of five jobs that you might be in 20 years from now — with the probabilities attached to each of those five — and the names of the five people who could be your life partners — and the probabilities that you'll end up living with each of them — you'd go away depressed, because you'd think there's nothing new and exciting in the world."
Serendipity is the most wonderful thing. You meet people you didn’t know; you couldn’t imagine people being as wonderful as the ones that you meet. You go to places that you hadn’t seen before and didn’t imagine. You read a book or listen to a piece of music that you hadn’t thought about before. These are the things that make life exciting. They create enjoyment. They are the spice of life.
Uncertainty is fundamentally important to our enjoyment of life, which is why it is, I think, that humans have evolved to be pretty good at coping with uncertainty. We can’t predict, but we don’t want to be able to predict everything that’s going to happen to us. Otherwise, we would be bored to tears. So we embrace uncertainty while at the same time taking careful steps to manage the risks that we face.
My guest today has been Mervyn King. Mervyn, thanks for coming on the podcast.
It’s been a pleasure.