AI ALIGNMENT FORUM

Rohan Subramani

I aim to promote welfare and reduce suffering as much as possible. This has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Operator that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.

Previously, I completed an undergrad in CS and Math at Columbia, where I helped run Columbia Effective Altruism and Columbia AI Alignment Club (CAIAC).

Posts

RohanS's Shortform (0 karma, 9mo, 0 comments)
Hidden Reasoning in LLMs: A Taxonomy (22 karma, 2mo, 0 comments)
Extract-and-Evaluate Monitoring Can Significantly Enhance CoT Monitor Performance (Research Note) (19 karma, 2mo, 0 comments)