Review: What We Owe the Future – William MacAskill (OneWorld)

Your great grandchildren are powerless in today’s society. As Oxford philosopher William MacAskill points out, they cannot vote or lobby, so politicians have little incentive to think about them. But the things we do now influence them, for better or worse: we make laws that govern them, build infrastructure for them and take out loans for them to pay back. So what happens when we consider future generations while we make decisions today?
This is the key question in What We Owe the Future. It argues for what MacAskill calls longtermism: “the idea that positively influencing the longterm future is a key moral priority of our time.” He describes it as an extension of movements like civil rights and women’s suffrage: as humanity marches on, we strive to consider a wider circle of people when making decisions about how to structure our societies.
MacAskill makes a compelling case that we should consider how to ensure a good future not only for our children’s children, but also for the children of their children. In short, MacAskill argues that “future people count, there could be a lot of them, and we can make their lives go better.”
It’s hard to feel for future people. We are bad enough at feeling for our future selves. As The Simpsons puts it: “That’s a problem for future Homer. Man, I don’t envy that guy.”
We all know we should protect our health for our own future. In a similar vein, MacAskill argues that we all “know” future people count.
Future people count, and MacAskill counts those people. The sheer number of future people might make their wellbeing a key moral priority. According to MacAskill and others, humanity’s future could be vast: many, many more people than the 8 billion alive today.
While it’s hard to feel the gravity of it, our actions may affect a dizzying number of people. Even if we last just 1 million years – about as long as the typical mammalian species – and even if the global population fell to 1 billion people, there would still be 9.1 trillion people yet to live.
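As a rough back-of-the-envelope check (my own, assuming – purely for illustration – an average lifespan of about 110 years, a figure not taken from the book), the arithmetic is roughly:

\[
10^{9}\ \text{people alive at any time} \times \frac{10^{6}\ \text{years}}{110\ \text{years per lifetime}} \approx 9.1 \times 10^{12}\ \text{future people}
\]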
We might struggle to care, because these numbers can be hard to feel. Our emotions don’t track well against large numbers. If I said a nuclear war would kill 500 million people, you might see that as a “huge problem”. If I instead said that the number is actually closer to 5 billion, it still feels like a “huge problem”. It does not emotionally feel 10 times worse. If we risk the trillions of people who could live in the future, that could be 1,000 times worse – but it doesn’t feel 1,000 times worse.
MacAskill does not argue we should give those people 1,000 times more concern than people alive today. Nor does he say we should give a person living a million years from now exactly the same moral weight as someone alive 10 or 100 years from now. Given how hard change can be, those fine distinctions make little difference to what we can feasibly achieve now.
Instead, he shows that if we care about future people at all – even those just 100 years hence – we should simply be doing more. Fortunately, there are concrete things humanity can do.
Another reason we struggle to be motivated by big problems is that they feel insurmountable. This is a particular concern with future generations. Does anything I do make a difference, or is it a drop in the bucket? How do we know what to do when the long-run effects are so uncertain?
Even present-day problems can feel hard to tackle – and at least with those, we can get fast, reliable feedback on progress. Even with that advantage, we struggle. For the second year in a row, we did not make progress toward the Sustainable Development Goals, like reducing war and poverty and increasing growth. Globally, 4.3% of children still die before the age of five. COVID-19 has killed an estimated 23 million people. Can we – and should we – justify focusing on future generations when we face these problems now?
MacAskill argues we can. Because the number of people is so large, he also argues we should. He identifies some areas where we could do things that protect the future while also helping people who are alive now. Many solutions are win-win.
For example, COVID-19 has shown that unforeseen events can have devastating effects. Yet many governments have done little to set up more robust systems that could prevent the next pandemic. MacAskill outlines ways in which future pandemics could be even worse.
Most worrying are the threats from engineered pathogens, which could be made more transmissible or more lethal than anything found in nature. He gives examples of militaries and terrorist groups that have tried to engineer pathogens in the past.
The risk of an engineered pandemic wiping us all out in the next 100 years is between 0.1% and 3%, according to estimates laid out in the book.
That might sound low, but MacAskill argues we would not step on a plane if we were told “it ‘only’ had a one-in-a-thousand chance of crashing and killing everyone on board”. Such risks threaten not only future generations, but everyone reading this – and everyone they know.
MacAskill outlines ways we might prevent engineered pandemics, such as researching better personal protective equipment, developing cheaper and faster diagnostics, building better infrastructure and improving the governance of synthetic biology. Doing so would help save the lives of people alive today, reduce the risk of technological stagnation and protect humanity’s future.
The same win-wins might apply to decarbonisation, safe development of artificial intelligence, reducing risks from nuclear war, and other threats to humanity.
Some “longtermist” issues, like climate change, are already firmly in the public consciousness. As a result, some readers may find MacAskill’s book “common sense”. Others may find the speculation about the far future pretty wild (though the same could be said of every possible view of the longterm future).
MacAskill strikes an accessible balance, anchoring his arguments in concrete examples while making modest extrapolations into the future. He helps us see how “common sense” principles can lead to novel or neglected conclusions.
For example, if we give future people any moral weight at all, then many common societal goals (like faster economic growth) become vastly less important than reducing the risk of extinction (through, say, nuclear non-proliferation). This perspective makes humanity look like an “imprudent teenager”: many years ahead, but more power than wisdom.
Our biases toward present, local problems are strong, so connecting emotionally with the ideas can be hard. But MacAskill makes a compelling case for longtermism through clear stories and good metaphors. He answers many questions I had about safeguarding the future. Will the future be good or bad? Would it really matter if humanity ended? And, importantly, is there anything I can actually do?
The short answer is yes, there is. Things you might already be doing, like minimising your carbon footprint, help – but MacAskill argues “other things you can do are radically more impactful”. For example, reducing your meat consumption helps address climate change, but donating to the world’s most effective climate charities might do far more good.
MacAskill points to a range of resources – many of which he helped create – that guide people in these areas. For those with flexibility in their careers, he co-founded 80,000 Hours, which helps people find impactful, satisfying work. For those trying to donate more impactfully, he co-founded Giving What We Can. And to spread these ideas more broadly, he helped start the effective altruism movement.
Longtermism is one of those good ideas. It helps us better place our present in humanity’s bigger story. It’s humbling and inspiring to see the role we can play in protecting the future. We can enjoy life now and safeguard the future for our great grandchildren. MacAskill clearly shows that we owe it to them.