Takes on Forbes article: “Fear Of Super Intelligent AI Is Driving Harvard And MIT Students To Drop Out”
Providing context where needed.
Last week, Forbes put out an article titled “Fear Of Super Intelligent AI Is Driving Harvard And MIT Students To Drop Out”. The article features students from Harvard’s and MIT’s AI safety groups who left school to work on AI safety, along with a few AI startups founded by other students who dropped out. As the co-director of Georgia Tech’s AI safety group, I was interviewed by the author but asked not to be quoted over concerns of misrepresentation.
In all fairness, this wasn’t a bad article; most of the statements made are factual and it seemed to be written in good faith. Victoria (the author) was understanding and respected the concerns I brought up during our interview. However, I was surprised by how strongly I felt about the writing - something along the lines of “they used this very important issue I work on for a catchy title but didn’t bother to try to understand and explore it”. So, this may be a bit of a rant, but I’m going to deconstruct some of this article’s content and provide additional context as a student who works on AI safety. All views are my own.
My criticisms of this article fall into three broad categories:
Problematic connotations
The title reads “Fear Of Super Intelligent AI Is Driving Harvard And MIT Students To Drop Out”. In my personal experience, a better description of what motivates us is something like ‘an urge to take action to help solve what we think is one of the most important, urgent, and intellectually interesting problems of our time’. A better word here is probably ‘concern’ - the distinction being that ‘fear’ as an emotion is paralyzing and hints at doomerism & irrationality. I don’t think ‘fear’ captures why students like myself and those interviewed work on AI safety. I’ve met Alice, Adam, and Nikola at various conferences and have chatted with them about their work. They are some of the most well-intentioned, agentic, and intelligent people I’ve ever met, and I certainly didn’t get a sense of ‘fear’ from them. It’s very hard to remain an effective agent in the current environment if you truly ‘fear’ AGI, because our economic system is pouring trillions of dollars into efforts to build something like it.
Though left unsaid, this article implies that students dropping out of college to work on AI safety is irrational and perhaps problematic. It does this primarily by pitting the featured students against authoritative figures like Gary Marcus, Yann LeCun, and Andrew Ng (though props to them for citing AI 2027!). I personally disagree with these experts, and I don’t think anyone can make high-confidence claims about whether AI poses an existential risk, or even about AI timelines. For what it’s worth, I previously wrote about why I think AI poses various catastrophic risks and why I work in AI safety in a post here, and I also recommend Situational Awareness by Leopold Aschenbrenner.
Misrepresenting the state of the world due to a lack of context
“Efforts to build AI with safeguards to prevent this from happening have exploded in the last few years, both from billionaire-funded nonprofits like the Center for AI Safety and companies like Anthropic.” This statement, while factual, has problematic connotations and misrepresents the state of the world. There is no mention of the effort and investment being poured into building superintelligence-esque systems without much regard for safety, which is easily 2-3 orders of magnitude larger. “Billionaire-funded nonprofits” links to a POLITICO article titled “AI doomsayers funded by billionaires ramp up lobbying” - the attempted narrative is fairly self-evident.
“Now, the field of AI safety and its promise to prevent the worst effects of AI is motivating young people to drop out of school.” To the best of my knowledge, this statement applies to O(10) (“on the order of 10”) people in the United States but implies a much larger scale and hints that this is a wider problem.
Carelessness
The article summary opens with “AGI — a theoretical AI that can do many of the same tasks as humans can — could come within a decade.”... *sighs*. AI being “able to do many of the same tasks humans can” has been true since, like, GPT-4o. Is that AGI? Probably not by any serious definition - at least not the type AI safety people are super concerned about. This might be one of the worst, most careless definitions I’ve seen.
Aside from object-level issues, one thing I noted is that mainstream media outlets really do cite and link to each other a lot. This Forbes article alone cited pieces from Politico, Times, and Fortune. My concern is that there seems to be some non-negligible degree of exaggeration and cherry-picking that these outlets deem acceptable in exchange for flashier headlines and more provocative statements. So… wouldn’t citing each other propagate and escalate that exaggeration and produce gross distortions of reality? For this article in particular, the Politico citation certainly felt this way.
Overall, the contents feel like a collection of true statements that, when put together, seem to say something about the world but fall apart under scrutiny. On topics like AGI and x-risk, where experts often disagree with each other, I wish mainstream media would at least try to examine the underlying arguments instead of simply writing “A said x, B said y” where A and B are authorities or insiders of the field. This probably happens to every ‘fringe issue’ that gets covered by mainstream media, though; it seems really hard to attract attention AND capture nuance in a piece of media that falls within the average person’s attention span.
That’s about all I have to say! Optimization pressures dictate that enterprise & mainstream media aimed at the general public must, when needed, sacrifice nuance and substance for whatever captures people’s attention. On the topic of AI safety, doomerism and fearmongering seem to be attractive narratives. AI is shaping up to be the most transformative technology of our lifetime, so please please please examine the evidence out there and make your own judgement - it’s worth looking into! Here are a few of my favorite series on the topic of AI:
Don’t Worry About the Vase by Zvi - generally good takes and breakdowns of technical and non-technical news.
Interconnects by Nathan Lambert - updates and takes on the latest technical research.
Gradient Updates by Epoch AI - shorter-form commentary on important issues around AI.
AI Frontiers - coverage of AI governance and the impacts of AI.
Massive thank you (in no particular order) to Yilin, Will, Parv, Atharva, and all my friends at XLab for feedback and chats :)


This puts most of my sentiments about the Forbes article about me into words very well. I might write up my own response at some point, but in the meantime I'll try to signal-boost this. I also want to address the Futurism article (linked below), since it takes the Forbes article further and makes additional, more flagrant, unjustified, and false assumptions about me and my choices.
https://futurism.com/mit-student-drops-out-ai-extinction
Very illuminating!