
Effective altruism shaped Sam Bankman-Fried. Are its ethics at fault?


They’re “outraged.” They’re “humbled.” They’re “fucking appalled.”

That’s the reaction from prominent effective altruists after the downfall of Sam Bankman-Fried, the billionaire whose cryptocurrency exchange FTX imploded last week.

Effective altruism (EA) is a social movement that’s all about using reason and evidence to do the most good possible for the most people. Yet Bankman-Fried, one of its brightest stars and biggest funders, now seems to have done a lot of bad for a lot of people. He’s obliterated the savings of countless customers, and might have committed fraud in the process.

This is a crucible moment for EA. Members of the movement are rethinking their moral convictions in real time, asking themselves: Does this disaster mean there was something wrong with the intellectual underpinnings of EA? Should we have seen this coming?

It’s important to understand that Bankman-Fried is not a freak accident for EA, an outsider who made his billions and only then became enamored of the movement. He’s a homegrown EA billionaire. In many ways, EA is what made him “SBF,” as he’s now known within the movement and the media.

When Bankman-Fried was in college, he had a meal that changed the course of his life. His lunch companion was Will MacAskill, the Scottish moral philosopher who’s the closest thing EA has to a leader. Bankman-Fried told MacAskill that he was interested in devoting his career to animal welfare. But MacAskill convinced him he could make a greater impact by pursuing a high-earning career and then donating huge gobs of money: “earning to give,” as EA calls it. (MacAskill did not immediately reply to a request for comment.)

So the young acolyte pursued a career in finance and, later, crypto. To all appearances, he remained a committed effective altruist, pouring funding into neglected causes like pandemic prevention. (Disclosure: This August, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.) And then came the news of FTX’s spectacular implosion.

What, then, should the intellectual fallout be for EA? What ethical lessons should it learn from the actions of Bankman-Fried? Whether the scandal shakes the movement’s philosophical foundations or simply prompts a superficial makeover will depend on how effective altruists choose to answer six big questions. Let’s break them down.

1) Did EA lean too hard into utilitarianism?

The FTX scandal is still fresh, and we don’t yet know for sure why Bankman-Fried did what he did. Did he simply make a mistake and then double down on it, as rogue traders have before? Did he reason that the ends justify the means — and that the ends of his plan to give away his fortune would be so benevolent that the risk of wiping out customers’ savings was okay?

We may never know exactly what internal monologue passed through Bankman-Fried’s mind — in an interview earlier this week with the New York Times, he put the blame on other commitments that distracted from serious problems within his companies. But many within EA and outside it are seriously considering the possibility that a utilitarian-style “ends justify the means” mentality played some role.

Being an effective altruist does not necessarily mean you’re a consequentialist or a utilitarian, someone who thinks an action is morally right if it produces good consequences, and specifically if it maximizes the overall good. But EA has its roots in the work of Peter Singer, probably the most influential utilitarian philosopher alive, and the movement is heavily influenced by that philosophy. Most effective altruists weight it heavily even if they also give some weight to other ways of thinking about morality. Bankman-Fried has described himself as a “fairly pure Benthamite utilitarian,” meaning he thinks we should do the most good possible for the greatest number of people.

Within EA, some have worried aloud that the premium put on utilitarianism’s “do the most good possible” would lead members to apply that philosophy in naive and harmful ways. Just a couple of months ago, another EA leader, Holden Karnofsky, published a blog post titled “EA is about maximization, and maximization is perilous.” Here’s the crux of it:

If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X — a terrible idea unless you’re really sure you have the right X.

EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.

The upshot, Karnofsky wrote, is that “the core ideas of EA present constant temptations to create problems” and that EA doesn’t offer enough guidance on how to avoid these problems. “As EA grows,” he warned, “this could be a fragile situation.”

From today’s vantage, that warning seems eerily prescient.

Some — like the billionaire Dustin Moskovitz, one of EA’s biggest donors — had previously brushed this type of worry aside, only to revisit their conclusions in the wake of FTX’s implosion. “I thought this was too cynical,” Moskovitz wrote, “but feel incredibly humbled by this event.”

All this now has Moskovitz thinking that maybe EA put too great a premium on utilitarianism, without articulating its limits loud and clear early on. He’s open to the idea that maybe EA’s followers should move more toward deontology (which defines good actions as ones that fulfill a duty) or virtue ethics (which defines good actions as ones that embody virtuous character traits, like honesty or courage).

2) Did EA do enough to clarify that the ends don’t justify the means?

Like Moskovitz, MacAskill is also soberly reassessing EA in light of the Bankman-Fried scandal. But rather than examining whether EA’s core ideas are themselves problematic, he seems more interested in arguing that EA’s core ideas are good but that they’ve been misused by a bad actor.

In a Twitter thread, MacAskill argues that EA has long emphasized “the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose ‘ends justify the means’ reasoning.”

MacAskill links to passages from his bestselling book What We Owe the Future and to writings by two other EA leaders, Karnofsky and the Oxford philosopher Toby Ord. These passages do emphasize integrity, but unfortunately, they were published between 2020 and 2022 — about a decade too late to be useful in molding Bankman-Fried and the other young effective altruists who came up alongside him.

More importantly, “respect commonsense morality” is not the message you’ll usually hear in EA circles. If anything, EA culture prides itself on offering a counter to commonsense morality: the idea that truly moral action should be about maximizing “expected value.” To calculate a decision’s expected value, you multiply the value of each possible outcome by the probability of it occurring, then add up the results. You’re supposed to pick the decision that has the highest expected value — to “shut up and multiply,” as some effective altruists like to say.
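To make the mechanics concrete, here’s a minimal sketch of that calculation in Python. The options and dollar figures are invented purely for illustration, not anything Bankman-Fried actually computed:

```python
# Naive expected-value maximization, with made-up illustrative numbers.
# Each option maps to a list of (probability, value) pairs over its outcomes.

def expected_value(outcomes):
    """Sum of probability * value across all possible outcomes."""
    return sum(p * v for p, v in outcomes)

options = {
    # A safe donation: $1 million given away, with certainty.
    "safe": [(1.0, 1_000_000)],
    # A long shot: 1% chance of giving away $10 billion, 99% chance of nothing.
    "long_shot": [(0.01, 10_000_000_000), (0.99, 0)],
}

# "Shut up and multiply": pick whichever option has the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))

for name, outcomes in options.items():
    print(f"{name}: expected value = ${expected_value(outcomes):,.0f}")
print(f"Naive maximizer picks: {best}")
# The long shot wins ($100 million vs. $1 million in expectation) even though
# it almost always delivers nothing.
```

Note what the sketch leaves out: risk aversion, diminishing returns, and the moral weight of the downside scenario. That omission is exactly what critics of naive maximization are pointing at.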

In his book, MacAskill writes that “we should accept that the ends do not always justify the means; we should try to make the world better, but we should respect moral side-constraints, such as against harming others.” But this message is too easily missed in the book, which spends much, much more time advocating for expected value calculations.

Putting aside the question of whether Bankman-Fried was means-justifying when he did what he did at FTX — whether he tried to “shut up and multiply,” and concluded that the potential benefit of billions was worth the risk of it all falling apart — we should also ask whether his entire way of choosing a career came down to means-justifying. The advice that MacAskill gave to young Bankman-Fried a decade ago was the approach favored by EA in its early days: “earn to give,” that is, get a high-paying job specifically so you can later donate lots of money.

On a basic level, the whole earn-to-give model is arguably susceptible to an “ends justify the means” mentality. It’s easy to see how this could translate to: Go work in crypto, which is bad for the planet, because with all that crypto money you can do so much good. The math may check out if you’re running a naive expected value calculation. But this does not sound like commonsense morality.

3) Did EA rely too much on a billionaire when it should have been decentralizing funding?

EA is a small, young movement that’s highly driven by personal networks. In that context, the downsides of billionaire philanthropy are bound to be magnified: One person — especially one person who offers to pour millions of dollars into the movement and fast — can gain outsized power to shape the agenda. Those close to him might enjoy a windfall that helps them build up their vision for EA, while those not in his inner circle might be left out. To guard against this, extra safeguards need to be put in place. Unfortunately, that was not done with Bankman-Fried.

Reporting on his philanthropic foundation, the New York Times notes: “The FTX Foundation itself had little to no oversight beyond Mr. Bankman-Fried’s close coterie of collaborators.”

Through the foundation’s Future Fund, Bankman-Fried was doling out millions to people with ideas about how to improve the future, and MacAskill was helping decide where that funding went. The largest grant the fund gave out was to a group called Longview, which lists MacAskill as one of its advisers, along with the chief executive of the FTX Foundation, Nick Beckstead. The second-largest grant went to the Center for Effective Altruism, where MacAskill was a founder and both he and Beckstead are on the board of trustees.

Some of EA’s critics had warned that the movement’s funding needs to be more decentralized. Otherwise, groupthink could take hold; EA could become a closed validation loop; and critics of orthodox EA views might not speak up for fear that they’d offend EA’s thought leaders, who could then withhold research funding or job opportunities. The Oxford scholar Carla Cremer argued in 2021 that, to address this, the movement should allow for bottom-up control over how funding is distributed, make funding decisions transparent, and actively fund critical work.

To its credit, the movement did try to decentralize funding somewhat: In February, the Future Fund launched a regranting program. It gave vetted individuals a budget that they could then regrant to people whose projects seemed promising.

And there is a charitable way to interpret the lack of independent decision-making and oversight among EA heavyweights: that they trusted each other because they had longstanding personal relationships, and when you’re in a young movement that’s still really small, the fastest and easiest way to tap the knowledge you need to make grants is to turn to the people you already trust. (The Future Fund team, which included MacAskill and Beckstead, resigned in a public letter last week, writing that “to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms.”)

Notice, though, that this goes back to the peril of maximization. If you want to “do the most good possible” as an EA grantmaker, you may be tempted to opt for an approach that saves you time and effort — but at the expense of robust institutional safeguards.

4) Did EA ignore calls to democratize its power structure?

Intellectual insularity is bad for any movement, but it’s especially egregious for one that aims to represent the interests of all humans now and for all eternity. And that is arguably what EA aims to do now that it’s increasingly embracing longtermism — the idea that we should prioritize positively influencing the future of humanity hundreds, thousands, or even millions of years from now.

This is why critics argue that the field needs to cultivate greater intellectual diversity and democratize how its ideas get evaluated, rather than relying on an overcentralized power structure that privileges a few elite voices.

“Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous,” Cremer argued in a 2021 paper co-written with Cambridge scholar Luke Kemp.

These scholars have advocated for effective altruists to use more deliberative styles of decision-making. For inspiration, they could turn to citizens’ assemblies, where a group of randomly selected citizens is presented with facts, then debates the best course of action and arrives at a decision together. We’ve already seen such assemblies in the context of climate policy and abortion policy; EA could be similarly democratic.

Beyond this, Cremer came up with a list of structural reforms she believes could improve governance in EA, which she presented to MacAskill earlier this year and which she tweeted out publicly over the weekend. Here’s a sample:

Set up whistleblower protection schemes for members of EA organizations

Within the next 5 years, each EA institution should reduce its reliance on EA funding sources by 50% (this ensures you need to convince non-members that your work is of sufficient quality and relevance)

Invite external speakers/academics who disagree with EA to give central talks and host debates between external speakers and leaders

No perceptible changes were made as a result of these recommendations. Instead, a culture of hero worship persisted within EA. Much of the veneration was directed at MacAskill, but Bankman-Fried was also valorized as a quirky do-gooder wunderkind, in part because of his association with MacAskill.

5) Does this mean longtermism is doomed?

One lesson it may not make sense to learn is that longtermism, the specific EA philosophy championed by Bankman-Fried, must be thrown out because of the FTX debacle. If we’re going to jettison a philosophy, we should jettison it on its merits, not just because of an association with a crypto calamity. Some ideas that are common within weaker versions of longtermism — like the idea that we should care more about, and take action to prevent, neglected existential risks like pandemics or bioweapons — are still worthy of attention.

However.

There’s a case to be made that longtermism, at least in its stronger versions, is so problematic on its merits that it should be jettisoned. Longtermism runs on a series of ideas that link together like train tracks. And when the tracks are laid down in a direction that leads to dark places, that increases the risk that some travelers will head, well, all the way to the dark places.

To the extent that longtermism (and EA more broadly) runs on perilous ideas like maximizing expected value, effective altruists should question the ideas themselves. While no belief system can make itself 100 percent impervious to harmful readings, some belief systems practically beg for harmful readings, and as Karnofsky argued, EA in its current formulation may be one of them. So, at the very least, EAs should be thinking hard about how to foreground some limits on utilitarian-style reasoning. Right now, the limits — like occasional reminders to “respect moral side-constraints, such as against harming others” — are too subtle by half, as MacAskill previously acknowledged to me.

And another thing. One of the bedrock premises of longtermism is that it’s possible to make the future go better, in part by predicting which actions are likely to have good downstream consequences. Longtermists and EAs generally are very into making predictions, and they try hard to improve their forecasting abilities by practicing, for example, in prediction markets. Yet there was an utter failure to predict the Bankman-Fried scandal.
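For context on what that practice looks like: a standard yardstick in forecasting circles is the Brier score, the mean squared error between stated probabilities and what actually happened, where lower is better. Here’s a minimal sketch in Python, with invented forecasts purely for illustration:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; always guessing 50 percent scores exactly 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a forecaster who assigned near-zero probability to an
# event (say, their own funder collapsing) that then actually occurred.
forecasts = [0.9, 0.8, 0.05]   # stated probabilities for three events
outcomes  = [1,   1,   1]      # all three events happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.318
# The near-zero forecast on an event that occurred dominates the score,
# which is how calibration exercises punish overconfidence.
```

The point is not the arithmetic but the asymmetry it reveals: a community that grades itself this carefully on distant geopolitical questions still missed the risk closest to home.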

As the economist and author Tyler Cowen put it:

Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.

The skepticism is warranted.

Any honest reckoning over effective altruism now will need to recognize that the movement has been overconfident. The right antidote to that is not more math or more fancy philosophy. It’s deeper intellectual humility.

6) Is effective altruism trying to be a philanthropic institution, a philosophy, a cultural identity, or a political movement?

If you’re a small, ragtag team of nerds just trying to find the best charities to donate to, how you organize yourselves doesn’t matter much. It matters a lot more when you’re a billion-dollar force trying to reshape national and international politics.

A decade ago, EA was the former. Now it’s the latter. And a lot of its issues stem from failing to adjust quickly enough to that reality. In other words, there is still a genre confusion about what EA is, and that has hampered the movement from updating itself in crucial ways.

Some effective altruists might recoil at the democratizing proposal above, because it seems to be applying a standard to EA that has been applied to no other philanthropic institution or set of charities in history. And maybe if we were just talking about EA as it existed in, say, 2013, it wouldn’t make sense to insist on such a standard.

But it starts to look like a much fairer reform when you realize that EA is way more than a set of charities. Over the past couple of years, EA has sought to influence who ends up in Congress and the presidency. Bankman-Fried spent millions of dollars bankrolling the 2022 congressional campaign of Carrick Flynn, an Oregon candidate who wanted to bring EA priorities to Congress. And Bankman-Fried was one of the biggest backers of Joe Biden’s presidential campaign.

If EA wants to continue as a political movement, then it absolutely needs features like transparency, oversight, good governance and basic ethical safeguards. It should have the integrity to clearly declare its ambition — political power — and the receipts to prove that it is worthy of that power.


