I am a senior CS/Math student at UMD, College Park, and I "specialized" my major in machine learning. I have taken courses on deep learning, NLP, and reinforcement learning; however, I still feel like a novice in all of these areas, my proudest achievement being a half-ripped-off sudoku CNN I trained that only achieved around 90% accuracy.

All that being said, I want to contribute to the ever-progressing world of AI and ML, and I want to know where I should even begin. I would be entirely fine with being sent on a super random and likely incorrect path, even if it only serves to rule out a method that does not work.

I have considered starting my own projects, such as implementing Transformer networks or RL algorithms, but it all feels a little daunting and arguably useless. Who am I to work alone and come up with something truly useful in such heavily researched fields?

What should I work on? Where can someone like me, who is not currently pursuing further education or research, get involved and help the ML/AI community?

P.S. Shoutout to Lex Fridman and Eliezer Yudkowsky; without their recent podcast together, I do not think I would have found this platform.


2 Answers

aogara · Apr 06, 2023

Very glad you're interested! Here are some good places to start:

  1. Learn more about the alignment problem with the readings here: https://www.agisafetyfundamentals.com/ai-alignment-curriculum
  2. Learn more about technical agendas relevant to safety here: https://course.mlsafety.org
  3. High-level guide on working in the field: https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment
  4. Guide to developing research engineering skills: https://forum.effectivealtruism.org/posts/S7dhJR5TDwPb5jypG/levelling-up-in-ai-safety-research-engineering
  5. More challenging projects in research engineering: https://github.com/jacobhilton/deep_learning_curriculum 

I expect this will feel overwhelming; if so, that's normal enough that I thought I'd mention it. Feel very free to comment or to ask more questions like this one. (If you're not overwhelmed, cool, that's normal too.)

Super random and likely incorrect paths, here we go! So, first off: I didn't do traditional school, and the more I learn about how school affects people's thinking, the more I think it heavily powers up some core components of research skill while beating some key forms of creativity out. The most effective researchers are the ones who manage to retain that curiosity and arrogant creativity while also developing the skill of testing their ideas quickly. In general, get arrogant about your ability to understand, and be willing to make lots of bad bets. Read a lot, shallowly, and level up your skill at making use of the published literature, keeping in mind that most papers suck: the key task when reading new papers is to develop your taste for which ones are actually worth building intuition on, especially since most papers will turn out to have been useless echoes of some underlying real thing that gets discovered a month later (or six months earlier). Or try writing up what problems you think exist.

With that in mind, I'd check out the open problems tag and see if you can figure out what seems tractable, impactful, and potentially sufficient to reduce a key risk. I'd also suggest looking into what the structures of the key risks are. GPT-3 and GPT-4 are a great start for these things and I think people underuse both, but of course they only go so far. There's also https://stampy.ai/, which is specifically designed to be an intro Q&A overview and is pretty dang solid. I'm excited in particular about work on defining agency in terms of causality, i.e. the sort of thing the https://causalincentives.com group is working on, and they're a great example of why I recommend reading papers before you're ready to understand them: those papers are hefty, but attempting to digest them anyway and failing will seed your intuitions about what other things might be worth your time. I also have a number of intro-for-hunchbuilding-purposes posts I recommend skimming, e.g. a bunch of videos and a dump of an old PDF on how to skim papers (see also "Literature Review For Academic Outsiders: What, How, and Why").

I'd recommend taking notes about these things; if you don't already have a note-taking tool, both Obsidian and Logseq are cool and integrate nicely with other things. I use Semantic Scholar to discover papers that are relevant to my research interests, as well as arxivexplorer to look for specific topics; there's a lot of stuff out there, after all, and as you say, it's easy to end up doing something someone else has already done. But don't underestimate how valuable it is to replicate work: doing something someone else has done is also known as a school project, and you can make any paper into a project if it feels appropriate for your skills. You don't necessarily need to come up with new things, but you do need to build intuition fast, towards the goal of understanding which research trajectories have the potential to impact your target outcomes.
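If you want to script that kind of paper discovery, here's a minimal Python sketch against what I believe is the Semantic Scholar Graph API's paper-search endpoint; treat the endpoint, parameters, and field names as assumptions to double-check against their docs rather than a definitive recipe:

```python
# Minimal sketch: search Semantic Scholar for papers on a topic.
# Assumes the public Graph API endpoint /graph/v1/paper/search; verify against the docs.
import requests

def search_papers(query: str, limit: int = 10) -> list[dict]:
    """Return basic metadata for papers matching `query`."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit, "fields": "title,year,url"},
        timeout=30,
    )
    resp.raise_for_status()
    # The response body contains a "data" list of paper records.
    return resp.json().get("data", [])

if __name__ == "__main__":
    for paper in search_papers("agency causal incentives"):
        print(f"{paper.get('year')}  {paper.get('title')}\n    {paper.get('url')}")
```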

Hope this half-baked reply helps! Let me know if there's more I can do.

Oh, see also https://www.lesswrong.com/tag/ai-alignment-intro-materials

Thank you so much for the information!

1 comment

This place is a good choice to ask if you want to work on a very specific subset of AI safety/alignment problems. If you want to work on AI capabilities, I don't think this is the place to ask; the consensus among people here is that "the best way right now to work on AI capabilities is to don't".