Are we cooked? | Reflections from the 2026 MAIA-AISST Workshop.

Intro

I recently attended a workshop hosted by the MIT AI Alignment (MAIA) group and the AI Safety Student Team (AISST, Harvard's group). These groups are among the largest in a network of college AI safety initiatives, which receive funding mostly from Effective Altruism (EA)-affiliated funders like Open Philanthropy.

These groups are pivotal to AI safety: they accelerate talent development and attract people to AI safety research, a field facing a talent shortage. The MAIA/AISST workshop is one of a handful of workshops that take people who’ve shown interest in AI safety (through reading groups at their college clubs) and sharpen their focus by surrounding them with other like-minded people. Usually these workshops are held somewhere remote-ish, which gives everyone the space to really think about the field. There’s a mix of networking, talks from researchers and professionals, exchanging ideas, and relaxing. (Also, with all the funding in the field, attendance is usually fully covered.)

This trip was very eye-opening for me, as it was the first time I spoke with people who are really “a part” of the AI safety community. We don’t have much discussion about the short-term threats of AI at CASI, probably for good reason, since the CMU community is much less inclined toward that kind of talk. So the workshop was a great chance to see, in person, the kind of AI safety discussion I had only encountered in LessWrong posts and on Twitter. I’ve followed a different path than most attendees and certainly hold different views.

The purpose of this post is to show how my views progressed throughout the workshop. It summarizes most of my notes: my views before the workshop, recaps of days 1, 2, and 3, and my closing reflections.

  • If you want to skip the workshop recap and just read my views, read the “Prior Views” and “Final Thoughts” sections.
  • There are a lot of incomplete ideas in this post. I plan to write more about some of them, so (please) take everything with a grain of salt.

Prior Views

So I expected to hear many new perspectives that I didn’t agree with or had never heard before, and I thought it would be cool to write down my prior views on the flight there.

In retrospect, I think a lot of these views changed over the weekend. I also want to say that I didn’t hold them in fully “extreme” form; they were more like semi-beliefs, “semi” in that a part of me did believe them.

Here’s what I wrote about my prior views:

  • The AI arms race (US vs. China) has been blown out of proportion, and companies mainly use it to escape regulation.
  • Human-AI is a super serious issue that isn’t talked about enough by the safety community.
  • The world won’t end in 5 years; don’t drop out, and build solid foundations.
    • (note: there’s gonna be lots of discussion about this view)
  • There is a certain amount of complacency in industry roles (and, I feel, in EA) that leads people to avoid addressing real societal issues.
  • I’m very anti-company in general, and I question how much impact I would actually have in the long run at one of these companies.
  • What is possible? What is knowledge?
  • Questioning whether aligning AI is really easier than regulating and shutting it down.

Workshop Recap

Day 1 Thoughts (Friday 3/28/2026)

Quick Recap: Day one started with leaving CMU very early to get to Boston. I had a couple of hours free, so I got to take the subway (I was hella geeked for this) and visit Harvard, which was fun. Once we got to the venue, we had some time to eat and settle in. Then we had an interactive talk from Anthropic’s Ross Nordby, in which he had the audience list their probabilities of extreme catastrophe (a billion+ dead) from AI under various scenarios. Each slide had its own theme (e.g., the probability of an extreme catastrophe under different types of AI regulation). This talk stood out the most of all of Friday’s sessions; it’s a fun way to get an audience to actually think through these questions.

Thoughts that Night:

  • I tried my best to talk to a wide range of people, and the workshop had a good balance of backgrounds: a lot of undergrads from schools with AIS (AI Safety) clubs, industry professionals from all over (frontier labs, smaller labs, government, non-profits, etc.), graduate students, and industry people transitioning into AI safety.
  • Self-Reflection Thoughts:
    • While talking to some people, I noticed how opinionated I can be. This can really limit my ability to develop my opinions, so that’s something I need to work on/remind myself of (I’m also naturally stubborn).
    • I realized how much being at CMU leaves me in a bubble, as being anywhere does. I definitely need to pay more attention to what’s happening outside/online, since there’s so much cool work going on.
  • There was a lot of talk about ASI, and I’m starting to see some of the reasons for working at frontier labs.
  • Highkey, maybe working at OpenAI isn’t inherently wrong, as long as you have good intentions.
  • (Lots of thoughts/brainstorming about project)

Day 2 Thoughts (Saturday 3/29/2026)

Okay, Day 2 was packed. We had a bunch of interesting talks from across industry, non-profits, field-building, and (only one from) academia, plus small group discussions and one-on-ones. I got to talk to David Bau a lot. His talk was really interesting, and he’s one of the only professors running an interpretability lab in academia. He gave me lots of advice about treating your PhD as a one-shot opportunity: you should only start it once you have something you NEED to teach others about (like an itch). I also got to talk to Eric Gan from Redwood Research, who taught me the importance of being critical of companies, but not of the people at those companies. This view solidified further for me after hearing it from people at companies who clearly see AI as an existential threat; they view going into frontier labs as the best place to have an impact.

Due to time, I’ll just quickly shout out the talks from Sydney Von Arx from METR (I love METR’s work, so it was a great talk), Tzu Kit Chan (Atlas Computing, good advice from a field builder), and Aryan Bhatt from Redwood. So much more went on today that I’m not going to recap it all. I spent almost every minute from 10:00 am to 5:00 am thinking and talking to others, and my social battery is drained. (Editor’s note: I ended up getting sick after the workshop, and this is most likely why.)

Thoughts that night:

  • If I were to go into academia (which is what I currently want to do), I would want to gain experience in industry or non-profit labs first, and possibly wait before starting a PhD. At the very least, I should give this idea more thought. I really took to heart what David Bau said about knowing when to start your PhD (having the itch) and taking it seriously (you only do it once).
  • There was a lot of emphasis on ASI arriving within the next 2-6 years, and on how time is of the essence. I want to go into academia for lifestyle reasons (I’ll get into that later), but will I have enough impact there?
  • A big idea from many talks: AI may automate all of our research over the next 2-3 years and eventually replace humans in knowledge discovery entirely. This is why time is of the essence: we need to figure out now how to align and control these agents. A lot of the research presented was forward-thinking in this sense; i.e., a model may not be scheming against us now, but how could we detect it if it were?

Day 3 Thoughts (Sunday 3/30/2026)

Day 3 was more of the same, in a good way. There was an interesting talk from Eric Gan of Redwood, and I got to meet many people from AIS clubs at other schools and form connections (I also caught a cold today). I’m going to leave most of my thoughts for the “Final Thoughts” section. Today I tended to butt heads with people over how I saw the impact of AI, which I’ll discuss below; without this workshop, I might not have examined my views as much. Specifically, on the bus ride back, I (semi) argued with some people about going into academia, the likelihood of short timelines, and understanding your inner motivations (i.e., why you believe certain timelines over others).

After I got off the bus (and after nearly 48 hours of talking about nothing but AI safety), I was left with just my thoughts and the MIT campus. I ended up literally just sitting and thinking for a while. I tried to go to the library to do some work, but I just kept thinking:

Note: **For the next couple of days, I had this sort of “alone” feeling, and I found out that others who go to these workshops feel the same way. It’s not so much depressing as reflective.**

The final plane ride back to Pittsburgh was filled with many thoughts and fears, which I lay out in the next section.

Final Thoughts on the Plane Ride Back to Pittsburgh

It’s currently 10 pm, my flight just got delayed for the 3rd time, and I have IDL tomorrow at 8 am :( (Editor’s note: writing this spanned waiting for the plane and the actual flight itself.)

I have three main things I’ve learned, or come to understand better, from this workshop:

  • There are real reasons to think that AI might automate all research very soon, leading to ASI. If so, there is a strong incentive to focus fully on aligning AI and detecting failure modes.
  • People in the community really do hold these views, and to them, the best place to work on the problem is in labs outside academia.
  • AI safety discourse is very localized and hidden from the rest of the world. This is a core issue.

Some closing thoughts as I wrap up (I’m on the flight now, and I kinda wanna get some rest):

  • I still think we need to be advocating for full regulation. The idea I kept bringing up was: “if I really thought AI was going to end the world in 3-4 years, I would probably (some action that isn’t research….),” as in, I would probably be actively speaking out and trying to convince the public.
  • I used to think that people working in research who hold the view that ASI is coming in the next couple of years wouldn’t agree with my statements above, yet I learned that many do.
    • I guess some believe that regulation just isn’t realistic (at least that’s the impression I got).
  • Thoughts related to ASI outcomes:
    • What I do believe is that IF ASI were to arrive in the next couple of years (which isn’t unrealistic), we are very cooked, and that is a big motivation for working on these outcomes. However, I wonder how tractable combating this really is, and whether we’d be better off screaming for full regulation (obviously an extreme swing, but you get the point).
    • I’ve always wanted to go into academia, and I never thought I would get much pushback for that. However, a fair criticism people raise is how much impact I would actually have there. It’s still something I’m thinking about.
    • I’m still uncertain about near-term ASI outcomes, and I’d like to know whether focusing on them too much hides other near-extinction possibilities and makes them worse (something I want to write more about).
    • I still think there isn’t enough perspective on other outcomes; people focus on the extreme extinction-risk ones. That makes sense, but I believe that non-ASI yet powerful, deeply integrated AI (if trends continue) will also lead to near-extinction-level harm, just over the long term.
  • Self-reflection thoughts about my motivations:
    • A big part of my motivation for going to grad school is that I still believe interpretability is possible, but maybe I’m just chasing the life I think would be most satisfying for me.
    • Right now, I’m curious about forecasting the future and about what motivates my beliefs/decisions. I think I’m still skeptical of AI taking over research, but as someone pointed out while we were talking, I used phrases like “ASI can’t replace true knowledge,” which might be me coping and holding onto some humanist notion of knowledge :sob:
  • Lots of yap, lots more to think about. I also really don’t think this is performative aura-farming. In every scenario with AI, there are huge implications; even if we don’t achieve ASI, there are many near-extinction threats that will lead to large-scale disruptions in society.
  • I plan to keep thinking about this, and I think the workshop was a great chance for me to hear many new perspectives.
  • As of now, I still believe that everything I’m working on is meant to combat bad futures with AI; however, I might have to shift, since this technology is growing way too fast (and I already held those views).
  • I’m sticking with the idea of going to grad school and building strong foundations in CS and math, but I also want to do a lot outside of research to have an effect right now.
  • If ASI doesn’t fully materialize, AI will still be deeply embedded in society. In that case, we need to understand it, and doing that will take truly novel ideas that draw from many disciplines.
  • Next steps for me

Final Final Thoughts, after editing this a bit and thinking

The purpose of this post was for me to try to get out all the conversations and ideas that happened over a 48-hour span. A lot of rough ideas in here, a lot of speculation, a lot of “I think,” a lot of yap. But yeah, if you ended up reading it, I hope you enjoyed it. There are some ideas here that I want to explore further in formal projects. I still have a cold, btw…



