AI ALIGNMENT FORUM

Rohan Subramani

I aim to promote welfare and reduce suffering as much as possible. This has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Devin that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.

Previously, I completed an undergraduate degree in CS and Math at Columbia, where I helped run Columbia Effective Altruism and the Columbia AI Alignment Club (CAIAC).

Posts

Sorted by New

- Hidden Reasoning in LLMs: A Taxonomy (21 karma, 19d)
- Extract-and-Evaluate Monitoring Can Significantly Enhance CoT Monitor Performance (Research Note) (19 karma, 1mo)
- RohanS's Shortform (0 karma, 8mo)

Wikitag Contributions

No wikitag contributions to display.

Comments

No comments found.