The crypto industry has been thrown into crisis by the collapse of FTX, the second-largest cryptocurrency exchange in the world, and its sister hedge fund, Alameda Research. The fallout is raising questions about regulation, concern for those who may have lost money, and, of course, more than a little schadenfreude. But it has also done major damage to Silicon Valley’s favorite philanthropic philosophy, effective altruism, which has longstanding financial, social, and ideological ties to FTX founder Sam Bankman-Fried.
The fates of effective altruism and FTX first became entwined in 2012 over a lunch conversation between then–rising star philosopher William MacAskill and then–MIT undergrad Sam Bankman-Fried. Ten years later, it seems that Bankman-Fried may have committed fraud on a jaw-dropping scale. He has thrown FTX onto the pyre of bankruptcy and even dismissed his previous promises to avoid unethical behavior as just some “dumb shit” he said. And the effective altruism community is left wondering: What if its onetime benefactor and mascot has destroyed the movement entirely?
But EA may be worth saving, if its leaders and community can take a hard look in the mirror. Effective altruism has injected new money and methods into the philanthropic sector, which has long struggled with accountability and measurement and has lacked enthusiastic participation from younger generations of donors. This “crucible moment,” as Facebook and Asana co-founder and EA leader Dustin Moskovitz called it, could actually be an opportunity for EA to untie the knot binding it to the crypto industry and address the issues that have been roiling beneath the surface of the movement for years. The philosophy may survive, and even evolve into a more inclusive and impactful force for good, if it embraces a lower profile and a narrower, more clearly defined scope.
During that lunch a decade ago, MacAskill pitched physics major Bankman-Fried on effective altruism, a new philosophy that aimed to use “evidence and reason” to do the most good in the world. The concept stemmed from applied ethics and utilitarianism, and was supported by tech entrepreneurs like Moskovitz. EA was focused on getting the biggest bang for a philanthropic buck, searching out underfunded areas where the smallest funding increments could save the most lives. The classic example of an EA effort was distributing mosquito nets in sub-Saharan Africa: a scalable, roughly $5 intervention with scientific evidence supporting its effectiveness in preventing malaria, one of the leading causes of death in the region.
But—in what may have been one of the early seeds of the community’s drift toward a narrow, elitist worldview—EA, then and now, also urged its community to choose careers for maximum impact on the world, either by being a brilliant individual who initiates an industry-defining sea change or by donating earned wealth to EA causes. MacAskill told Bankman-Fried in 2012 that he was well positioned to consider “earning to give”: an EA strategy for accumulating as much money as possible in order to give it away. Just a year before that conversation, MacAskill had helped give effective altruism its name and a home in the Centre for Effective Altruism. In 2015, he’d publish Doing Good Better, the book that introduced EA to a much wider audience. Moskovitz—fresh off Facebook’s IPO—and his wife, Cari Tuna, would go on to form a new EA-driven funding organization called Open Philanthropy, which provides grants supporting work in EA priority areas like animal welfare and global health, as well as (of course) the growth of the EA community.
Whether Bankman-Fried, who grew up in the Bay Area debating utilitarian concepts at the dinner table with his law-professor parents, really wanted to make money purely to pursue good is highly questionable now. But he claimed to have taken MacAskill’s advice, pursuing a career in finance after graduation and then building a suite of finance and crypto companies with the professed intention of donating his wealth to EA causes. By 30, Bankman-Fried had become one of the world’s youngest billionaires, the richest person in crypto, and the second-largest donor to the Democratic Party in the 2022 election cycle. Earlier this year, he created the FTX Future Fund and committed to spending $1 billion on “big bet” causes like A.I. safety and pandemic prevention, in line with EA’s new and internally controversial shift toward longtermism—a belief that saving future generations should be philanthropy’s highest priority and biggest opportunity for impact. The poster boy for EA, he was the subject of glowing media profiles and donated more than $190 million to EA causes through the FTX Foundation and FTX Future Fund. As MacAskill toured with his new book on longtermism, he publicly credited Bankman-Fried with expanding EA’s reach. MacAskill even vouched for Bankman-Fried with Elon Musk as a potential investor in Twitter, telling Musk that Bankman-Fried was “very dedicated to making the long-term future of humanity go well.”
In 2021, EA-aligned organizations directed $600 million in funding toward EA causes. But criticism of EA has been rising from both outside and inside the community. Concerns grew about a lack of diversity in leadership and funding, the stifling power dynamics within EA institutions, the push to find the “best” solutions as defined by a small group of already wealthy people, and the movement’s refocusing on longtermism’s speculative and impossible-to-measure big bets. Critics included philosophers, reporters (including me), EA-affiliated researchers, and even EA leaders like Holden Karnofsky, the co-founder and co-CEO of Open Philanthropy. Earlier this fall, in a post on the lively Centre for Effective Altruism forum, Karnofsky described EA’s core approach, identifying a single “best” idea and maximizing it to its extreme conclusion, as “perilous.” Karnofsky suggested that EA’s laser focus requires absolute certainty—hard to come by, even with data—and lacks the ethical guardrails necessary to avoid harm along the way. He wrote in September that disaster has so far been avoided “due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls. As EA grows, this could be a fragile situation.” The FTX collapse, driven by Bankman-Fried’s pursuit of growth at any cost, may have been the lit match in an already parched forest: a spectacular and somewhat predictable emergency, given the circumstances.
Karnofsky’s observation about EA members’ judgment is telling about the movement’s culture, which has put a lot of trust in the “good intentions” of individuals and dismissed valid criticism as bad-faith takes. This culture reinforced an uncritical trust that billionaires who accumulated wealth in order to give it away were being ethical in their pursuit of economic success, simply because they said they were. Regardless of whether Bankman-Fried is a sociopath or a good man who got caught up in a bad spiral—though his recent exchanges with Vox cast doubt on the latter—it is clear now that he acted unethically, probably illegally; lost billions of dollars; and damaged many people’s lives. Intentions are irrelevant to the suffering and chaos he sowed. As Karnofsky suggests—and as others have before him, including existential-risk researchers Luke Kemp and Carla Zoe Cremer—EA would benefit from a move toward more explicit community rules, more rigor in process, and more scrutiny of leadership, rather than a network-based reliance on good-faith actors. The blazing case study of FTX may finally be what it takes to light a fire under EA leaders about establishing a code of conduct for the community.
There’s reason to believe the EA community and leadership are savvy and flexible enough to adapt toward a different future for the movement. Even before the crisis, EA organizations were making new efforts to incorporate divergent perspectives (though whether these initiatives stemmed from cynical or genuine motives is admittedly unclear). The Centre for Effective Altruism and FTX Future Fund both launched contests in the past year encouraging compelling criticisms of EA ideas. EA leaders regularly engage with critical posts on the EA forum, and EA organizations publicly post postmortems on past mistakes like communications mishaps and overly rapid expansions of project scope. So it’s no surprise that since FTX’s collapse, many EA leaders have rushed to take public stands against Bankman-Fried’s actions. The full staff of the FTX Future Fund stepped down, citing “fundamental questions about the legitimacy and integrity” of their work. Open Philanthropy’s Dustin Moskovitz wrote on Twitter that he is humbled, and acknowledged that EA needs a clear story that rejects ends-justify-the-means reasoning. Even MacAskill spoke out, distancing himself from his longtime friend by condemning Bankman-Fried’s behavior and calling out his betrayal. Since those statements, the Centre for Effective Altruism’s head of communications has said that she has advised EA leaders not to speak publicly until the organization has gathered enough information to craft a communication strategy. The next coordinated messaging from EA will likely indicate the new direction leaders plan to take the community.
Financially, it may be challenging to rebuild the funds for the most ambitious longtermist efforts, many of which were backed by the FTX Foundation, the FTX Future Fund, and individuals affiliated with FTX. And with tech stocks plummeting and the FTX contagion spreading to the rest of the crypto space, individual sources of broader EA funding are likely affected as well. Open Philanthropy is offering funds specifically to grantees who were backed by FTX, but it says its “cost-effectiveness bar” will be higher and that it is “pausing most new longtermist funding commitments,” which generally have a murky relationship to measurable impact. Open Philanthropy’s in-house counsel wrote on the EA forum that this is the beginning of a “multi-year legal process” and that some grantees may be asked to pay back money from FTX sources. Boom-time decision-making backed by a seemingly unending pool of crypto cash won’t work anymore. The impact of most longtermist efforts, like A.I. “safety” or identifying the next set of world-changing geniuses, cannot be measured using the classic EA rubric—or almost any other accounting, except money spent. A recession-era effective altruism might do well to return to a more narrowly scoped approach, funding only efforts with measurable impact and efficient balance sheets.
But it’s easy for EA leaders to be self-critical now without making (yes) long-term change. Without a significant realignment of values and strategy, any public rebranding of effective altruism will be only a fresh coat of paint slapped over extensive fire damage. Unless EA leaders take earnest steps toward diversifying funding, diversifying the community, increasing accountability, and reconsidering the movement’s new focus on longtermist causes over urgent crises, EA and its sincere commitment to do good in the world will be nothing more than a crypto token: It may sound nice, but it won’t carry much real value.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.