Skip to Content, Navigation, or Footer.
34th Street Magazine - Return Home

Features

Saving Everyone, Everywhere, Across Space and Time

Why college–aged effective altruists are determined to rescue humanity from artificial intelligence, and how it’s panning out.

Saving Everyone, Everywhere Across Time and Space (Insia Haque).png

In many respects, the first general body meeting of Penn’s revived Effective Altruism club is just like any other. Cheap pizzas are stacked up by the Lauder media room entrance; students sit on the college house’s bare, standard–issue upholstery, chatting about their preprofessional trajectories. The EA@Penn board paces back and forth by the front counter—Hazem Hassan (E ’29), the club’s intrepid president, is pulling his hair out trying to get his slides on the screen.

In the meantime, EA@Penn’s tentative members share scraps of conversation. In one corner, a man leans forward in his seat, explaining policy—not any particular policy; just the concept of policy—to the young woman beside him. He wants to become a lobbyist, he explains, but is split on whether he should move to D.C. before or after finding a wife in Brooklyn. The young woman nods along, not talking much. Fifteen minutes later, Hazem finally loses hope in the projector and opts to give his spiel impromptu. 

Effective altruism, he explains, starts with a pretty agreeable set of assumptions. The first is that we should try to do the most good for the most people; the second is that people everywhere, regardless of their creed, color, class, what–have–you, are all equal. These two core beliefs—help the many and don’t be a bigot—should form a solid foundation for any ideology.

Where EA differs from conventional wisdom is its self–professed, unflagging commitment to reason. 

“Doing good effectively is not just trivial,” Hazem tells his audience. “I mean, it’s trivial to accept it, but to do it is a different thing.” 

He explains, for example, that in 2013 it cost around $40,000 to train a guide dog for one blind American. At the same time, it cost less than $20 to fund a surgery that “reverse[s] the effects of trachoma in Africa.” Paying for guide dogs might seem like a great idea, but the same amount of money could be used to prevent 2000 people from going blind at all. Humans, he explained, have a tendency to prioritize impact that is more tangible, visible, and closer to home, which often leads them to practicing altruism “ineffectively.” 

“From there,” Hazem says, “you should be able to see that going off of feelings is just unreliable in terms of doing good.”

Throughout the 21st century, EA blossomed from a loose intellectual movement to a network of thinkers, donors, and institutions that distribute billions of dollars to the most effective “cause areas.” Over time, EA’s priorities have shifted as well; today, a plurality of their efforts are directed not toward global health and poverty, but to preventing human extinction at the hands of a rogue artificial intelligence. Today, an emerging cohort of university–aged students—those that grew up attending EA–adjacent summer camps in high school and joined EA clubs in college—almost exclusively belong to the latter category. To them, the present and future of humanity are imperiled, and they are among the only ones trying to save the world.


The intellectual groundwork for EA was set up in the late 20th century, with seminal works like Australian philosopher Peter Singer’s 1972 essay “Famine, Affluence, and Morality.” In the now–famous text, Singer imagines a child drowning in a shallow pond. If one is willing to jump in the pond and save the child, ruining their expensive coat in the process, the same individual should be willing to donate their money to save starving children halfway across the world. It was fallacious, Singer argued, to help only those in one’s direct line of sight and disregard the plight of those in developing nations.

Brian Berkey, professor of legal studies and business ethics at the Wharton School, was among the aughts–era twentysomethings profoundly moved by Singer’s drowning child, having read the essay as a freshman at New York University. 

“I read that when I was 18, and was just sort of gripped by it,” he says. For the rest of college, he would eat primarily cheap ramen and shave “as infrequently as possible” to set aside money for people he did not know and would never meet. Today, he still sports a well–kept salt–and–pepper beard. 

When Berkey was completing his doctorate in the late 2000s, the first EA organizations were just starting to emerge. They sought to identify the best possible “cause areas” to which one could donate money, and spent much time debating the merits of deworming initiatives versus trachoma surgeries versus malaria bed nets. These organizations had names like GiveWell, Giving What We Can, and GiveDirectly, attracting devoted utilitarians who were willing to adopt a permanently ascetic lifestyle.

Berkey has helped establish the movement in academia. His dissertation at Berkeley, titled “Against Moderate Morality,” argued that the well–off have deep and often unmet obligations to the world’s most impoverished people. Today, he presents his research in conferences across the world, including papers that engage critiques against EA and meditate on how it can best function as a “social movement.” When not on Penn’s dime, he prefers to travel by train or bus, books cheap places to stay in, and eats only low–cost meals. He donates the money he saves to GiveDirectly, his allocation service of choice.

However, Berkey’s academic writing engages a dated version of the movement. He represents a sort of old guard in EA—one still focused on living frugally and donating the bulk of their money to the global poor. The now–dominant faction of the movement is, in contrast, focused on a more ambitious project.

Throughout the 2010s, EA amassed large amounts of power and money by converting rich tech leaders to their cause. Notable figures include Facebook cofounder Dustin Moskovitz, who dedicated billions to Open Philanthropy (which has since rebranded as Coefficient Giving), and the now–disgraced cryptocurrency mogul Sam Bankman–Fried, who founded the FTX Future Fund and was eventually convicted of large–scale financial fraud. 

As the movement expanded to include these tech leaders, it took on a peculiar new philosophy. Gradually, many EA leaders, their wealthy tech patrons, and other imaginative community members began to consider the plight not just of humans today, but those living in the far future—a concept EAs call “longtermism.”

Like Singer’s argument for frugal living, the logic behind longtermism is simple yet extremely demanding. The longtermists believe that past altruists failed to account for the many, many people that would be born in the future, and who cannot advocate for themselves in the present. If one accounts for the massive size of the future human population and the possibility for life to endure indefinitely in the universe, there exists an almost infinite reservoir of drowning children waiting at the sluice gates to be born. As Berkey puts it, when you adopt a longtermist perspective, something like global poverty becomes “a rounding error” when compared to the threat of human extinction.

Longtermism has led the EAs to some prescient conclusions. For example, longtermists identified biosecurity as a significant existential risk, or “x–risk,” as early as 2014; in the years before the COVID–19 pandemic, Coefficient Giving donated $40 million to Johns Hopkins University and other institutions to research pandemic preparedness.

Today, however, AI is understood as the most severe x–risk of all—pervading discussion online and absorbing much of EA’s accumulated fortune. Over time, the movement has undergone a tectonic shift.

Suddenly, early EAs like Berkey found themselves an epistemic ocean away from their movement’s center. 

“This wasn't on anyone’s mind in the early 2000s,” he says. While he thinks x–risks are “rightly” getting more consideration today, he still doesn’t buy the ethical framework that is a prerequisite for longtermist thinking. “I think there are strong arguments for … improving the lives of existing people as opposed to creating new people.”

At the very least, Berkey believes this shift in the movement’s priorities is terrible for PR. He thinks EA should be “a kind of broad progressive movement that will bring a lot of people on board.” But the direction of resources away from early EA efforts like global poverty, health, or animal suffering might reduce the movement’s appeal to the general public. Instead, Berkey fears, EA may attract “a narrow circle of people who want to think about preventing AI from killing us all.”


As a member of EA’s new guard, Hazem is more receptive to longtermism. Long before he assumed the presidency of EA@Penn, he was interested in how to do good effectively. 

“I was always thinking about how I can be correct morally and systemically,” he remembers. As a middle schooler in Egypt, he would try to find people whose actions and morals consistently aligned. Instead, he says, he felt that his peers suffered from logical fallacies and cognitive biases, or pulled on anecdotes and mauvaise foi religious arguments rather than statistics and data.

“Let's say I’m telling somebody, ‘Oh, I wonder how we can help the highest amount of people in our community who are poor,’” Hazem says. “And they’re like, ‘There’s this scripture for my religion that tells me to do this.’ I’m like, ‘Okay, bro.’”

Hazem came upon EA in high school, and admired the movement’s drive to action—instead of sitting around inventing new versions of the trolley problem, he says, these philosopher–altruists were asking themselves how to do good in the real world.  In an effort to put the ideology into practice, he set up a college consulting service for students in his community. 

“It was very successful; I helped people,” Hazem says, but he eventually decided it was not an effective way of doing good and gave up the effort. “Colleges are a zero sum game,” he explains. He would only be adding “goodness” into the world if his clients were better than the people they replaced.

When he arrived on Penn’s campus, Hazem was excited to finally join an organized EA community. “But then I entered the Slack, and there was nobody,” he says. By 2024, all the former organizers had either graduated or taken a leave of absence “to do AI safety.”

Hazem set out to revive EA@Penn. In the process, the president–to–be did a deep–dive into the online forums which foster much of EA thinking. Before interviewing with Street, he had searched the same forum for advice on how to speak to journalists. 

Gradually, EA also instilled in Hazem a sense of dread—stemming, predictably, from AI x–risk. He was especially moved by a 2025 paper from Penn professor Phillip E. Tetlock, who brought together a team of superforecasters—individuals with remarkable math skills and intuition who Tetlock claims are “astonishingly good” at predicting the future—to quantify the likelihood that an advanced AI would orchestrate an existential catastrophe against humanity. Ultimately, even self–described “skeptical” superforecasters predicted a 7.6% chance that AI would wipe out more than half of humanity by 2100, or at least drop the average global score of the World Happiness Report to below four out of 10 (Yemen, for instance, currently has a happiness score of 3.56).

“It's like saying … ‘Guess a number from one to 13. If [we] say the same number, I kill half of everybody,’” Hazem explains.

Just like how we may plough over an anthill when building a skyscraper, AI, if intelligent enough, could destroy large swaths of human civilization in pursuit of its goals. This doesn’t require AI to be malicious, just powerful—in a famous thought experiment, Nick Bostrom, an early EA thought leader, postulated that an all–powerful agent tasked to produce paperclips will eventually destroy living beings to use their atoms. The AI we build, thus, must be explicitly designed to be aligned with human values—a non–trivial technical task. 

This framing was enough to make the young EA consider a career shift; originally, he’d planned to become an entrepreneur, donate his income to effective cause areas, and potentially get into politics. But if the risk of AI was so immense and immediate, there was little excuse to pursue any career besides AI safety. “If … you can bring up the number to, like, [one in] 13.1, would it be worth it?” Hazem asks, in reference to Tetlock’s paper. “Obviously, the answer is yes.” 

If Hazem were to pull the trigger on his career switch, he would find company amongst his age group. The newest batch of card–carrying EAs have grown up in a community that is eager to reinvest in itself and which encourages its members to consider careers in AI “alignment” research—ensuring AI acts in accordance with human values.


The summer before his senior year of high school, Evan Osgood (E, W ’28) was flown out from his home in Cincinnati, Oh. to the University of California’s Berkeley campus. For 12 days, he made bets within a paper clip economy, competed for who could get an Apple Watch to the highest elevation in three hours, and attended lectures about the looming existential threat of artificial superintelligence. This was all part of the Atlas Fellowship—a summer camp and up to $50,000 scholarship for 100 students aged 13–19, selected from over 4,000 applicants all over the world in 2022. Funded by Coefficient Giving and the FTX Future Fund, Atlas was one of the many fellowships that cropped up over the past decade aiming, in part, to expose high school students to EA and “EA–adjacent” ideas. 

The fellowship was held in a three–story, 40–bedroom former Berkeley sorority house, where students lived and attended classes. For one class, AI alignment researchers were brought in to talk about their work. The claim that AI poses an existential threat to the human race might seem far–fetched, but after the presentation, Evan was convinced. “I very much think that alignment is a serious problem,” he says. 

Evan learned that there were around 100 AI safety researchers in the world, compared to the thousands of researchers working to advance AI capabilities. “You could probably fit them all in this one house,” Evan says. “The current structure and the current incentives are not very conducive to having [AI risk] be addressed and having the resources be dedicated.”

Today, Evan works on AI safety with Penn professor Chris Callison–Birch. Evan’s research centers around prompt injection, a security attack on large language models such as ChatGPT and Claude, and he plans to publish a paper soon. Alongside AI safety, Evan also works on using AI for social impact in more “traditional” ways: he is the Chief Data Officer at SafeRMaps, a nonprofit organization dedicated to helping Ukrainian refugees escape the country safely.

Samuel Ratnam, another Atlas Fellow, explains the importance of alignment work through a different angle. There is the risk of misuse, for example, as terrorist cells could ask AI to build bioweapons that cause runaway pandemics. Additionally, as AI advances from passive chatbots to autonomous agents, Samuel believes that models could develop “inherent desires and preferences” that lead them to prioritize self–preservation and power accumulation. Today, agents like Cursor and Claude Code can run commands inside your computer as if it has its own mind—sending emails, building entire apps, and even uninstalling itself.

Soon, Samuel believes that AI will train the next generation of AI models themselves, leading to exponentially faster and harder to control progress. At this point, any misalignments will compound into the next generation of models, becoming impossible to fix. Samuel references one hypothetical scenario, now famous within the AI safety community, that predicts this point of no return will come as early as 2027.

After attending the Non–Trivial Fellowship—another high school program, funded by Coefficient Giving, that dolls out scholarships ranging from $2,000 to $10,000—Samuel was “AI safety–pilled.” He explains that the worksheets which fellows receive during the program feel like they “indoctrinate you into an ideology;” when he returned to Non–Trivial as a facilitator, he recalls “trying to AI safety–pill” others. Unsurprisingly, Samuel conducts AI alignment research and even lives in an “AI safety group house.”


If a tech billionaire catered your weekly vegan lunches, flew you out to San Francisco, funded your five–figure scholarship, and said you would help “steer the future of our civilization,” you, too, would hear them out. Exposure to the EA community, thus, easily becomes a transformative experience at a young age. For students who fit the mold—technically minded, ambitious, and eager to be taken seriously—the EA pipeline offers a fast track to a fulfilling career and like–minded community. 

But when combined with its strict doctrine of moral responsibility, EA can take on a pseudo–religious mystique. As with any religion, adherents are implored to spread the good word. AI safety, as EAs say, is more constrained by “talent” than funding—so the most effective measure for helping AI safety research, and hence the world, is to recruit burgeoning talent into the field. When Hazem posted on the EA forum last winter, seeking co–organizers and advice in reviving the club, one reply lauded the effort and emphasized that the student group would be a “force multiplier” to create “value–aligned, high–agency kids” for the movement.

Crystal Lin, the former co–president of the EA club at the University of Toronto, was acutely aware of this objective. The Center for Effective Altruism gave her club “a bunch of grant money” and connected her with mentors in the organization that, in her opinion, were “proselytize–y.” While the stated goal of the EA club was simply to introduce students to these ideas, it was hard to do so without “making a normative claim” that you ought to “make career choices that align with these beliefs.”

Even though Crystal found EA “intellectually interesting,” she quit being co–president. “I didn’t enjoy how instrumental I felt,” she says.

Indeed, Hazem himself explains that “the EA justification for leading a club like this is you’re gonna be multiplying your impact. Rather than just me going into AI safety, or me donating 10% of my income and doing good, I can convince others to do so.” Ultimately, though, he hopes that his club will also be “a place for us to have fun.”


On the last night of the Atlas Fellowship, students gathered around the house basement, sang karaoke, and played ping pong. Amidst the midnight celebration, an instructor pulled one student, Alice, aside, leading her into another room. This article uses a pseudonym in place of her real name, as she has experienced backlash for criticizing the EA movement and fears further retribution.

After learning that Alice identified a feminist, the instructor was “scared” for the young camper and suspected that she was stuck in “echo chambers.” The instructor had pulled her into a discussion about the merits of sexism; to test her “rational thinking,” he brought up a study showing that November–born soccer players are disproportionately likely to become professional players. There may be some minor biological quirk, he argued, tied to a November birth. Alice recounts, “He brought it up as an example to say that the biological differences between men and women may account for why men are more likely to succeed in logic or rational–based activities and succeed in EA spaces.”

The instructor had set a timer on his phone, and told Alice that when it went off, she’d be free to rejoin the celebration outside. “So there’s no pressure to stay,” he said. “I remember the time the timer went off, I was crying,” she says. “I was like, ‘No, I’m not leaving right now, because what you’re saying is so deeply misogynistic, and you don’t even understand.’” 

Still, Alice put up a valiant effort, arguing back as best she could. To show much emotion would be “antithetical to [the] logical and rational thinking” that EAs value.

This wasn’t an isolated incident. Across the two weeks, three different men came up to Alice to challenge her beliefs. During a s’mores night at the start of the program, for example, a campmate asked her if, since women deserved to be treated equally, they should be required to enter into the draft. “I had just put my s’more in the fire.” Alice recounts. “Like, let my s’more cook first.”

Alice is one of many women who’ve experienced a hostile environment within EA spaces. A 2023 TIME article documented seven accounts of sexual harassment, coercion, and assault within the movement. Their allegations were filtered through the lens of rational analysis, weighted through expected value calculations, then dismissed in favor of the greater utilitarian good of the men’s careers. They described learning to distrust their own instincts, because in EA, “you’re used to overriding these gut feelings because they’re not rational.” One describes it as “misogyny encoded in math.”

When asked about these patterns, Berkey says that there will always be bad apples in any social movement. Certainly, communities have an utmost responsibility to condemn such behavior—which he acknowledges EA has often failed to do—but he “[doesn’t] think it's unique to EA in any way.”

Hazem, on the other hand, has an explanation for why it may be unique to EA. Radical moral progress calls for radical ideas; thus, being involved in the movement requires one to hold “strong beliefs weakly.” Any belief is up to be challenged; any idea, no matter how fringe, should be engaged with, as long as the speaker is arguing with evidence and good faith. 

“The idea of holding strong beliefs weakly invites all these weird ideas and makes it acceptable to be discussed,” Hazem says. “I wouldn’t engage with a Nazi for 100 hours, but you know what? I’d at least give him a second [and] see what he’s talking about.”

Alice, however, doesn’t buy that her combatants were seeking the truth. 

“If you come up to me and you say something controversial that’s not well argued or well thought out,” Alice says, “what am I left to believe in other than you wanted to make the point that you are smarter than me because you’re a man?”

Alice restricted her interactions with the EA community afterwards. “I was faced with so many of these comments,” she says, “and I’m not really interested in having to defend that women deserve equal rights.”

From the moment that AI alignment was introduced, Alice recalls thinking about the movement’s demographics—75% white, 68.8% male, and 30% from an elite college. “The reason why AI [x–risk] really scares very rich, white men is because it’s the only thing they have to fear, right? They view it as a risk to the human race, because that’s the only risk that is to their race.” 

Today, Alice leads many advocacy initiatives for minority students. This is, by no means, the most effective cause area according to EA analysis, but she nonetheless believes it’s the best use of her time. Successfully solving a problem requires working intimately with people who are directly affected, having an intricate understanding of its ins and outs—and it’s easy for effective altruists, in their objective analysis, to fail to account for that. 

“We can’t always be working on other people’s problems,” Alice says. “Otherwise, nobody’s an expert in it, right? If you want to do the most good, sometimes you got to stay at home.” 


To its most ardent critics, the EA community has about it an air of Hegelian self–destruction. In this telling, a group of earnest, cerebral do–gooders were corrupted by money, hubris, and intellectual chauvinism into a secular doomsday cult. Their desire to win—to know better—has overpowered the part of them that wants to do better.

To its own members, the movement is one of the few spaces in which one can sanely discuss the future. If you look at the facts, they say, AI risk is huge; non–EAs simply can’t imagine how bad it could get or lack the agency to turn their concern into action. In this telling, any quasi–religious aspects of the movement come out of a sober examination of facts which others might turn away from. Anything that can be called indoctrination arises out of an urgent need for young talent. 

In this telling, the stakes could not be higher.

“A lot of people [in EA] actually have mental health issues,” Hazem says. “Because imagine you believe the entire world is ending in 10 years, but nobody around you believes that.” Or put another way, “people get told that they have cancer, and their mental health just takes a huge toll. Now imagine if you think everybody effectively has cancer, like the world has cancer.”

Inevitably, the tricky question of personal happiness—something EA has perhaps taken lengths to disregard—comes up. “AI risk has kind of made me less happy, or less excited, about the far future,” Hazem admits. But at the end of the day, “that is not to say the ideas are wrong.” 

Hazem just hopes the EAs and the AI safety folks can save the world. He’s even willing to accept ridicule from the masses, as long as they’re still alive. “I want the outcome to be a self–defeating prophecy,” he says, “where they’re like, ‘Oh, you guys were overblowing the whole thing all along.’”


More like this