Some other noteworthy groups in academia led by people who are somewhat connected to this community:
- Jacob Steinhardt (Berkeley)
- Dylan Hadfield-Menell (MIT)
- Sam Bowman (NYU)
- Roger Grosse (UofT)
Some other noteworthy groups in academia led by people who are perhaps less connected to this community:
- Aleksander Madry (MIT)
- Percy Liang (Stanford)
- Scott Niekum (UMass Amherst)
Can you provide some links to these groups?
Each needs an environment to do so, but the key observation seems to be that a structure is reliably reproduced across intermediate forms (mitosis, babies, language, society), and that these structures build on top of each other. It seems plausible that there is a class of formal representations that describe
You don't talk about human analogs of grokking, and that makes sense for a technical paper like this. Nonetheless, grokking also seems to happen in humans, and everybody has had "Aha!" moments before. Can you maybe comment a bit on the relation to human learning? It seems clear that human grokking is not a process that purely depends on the number of training samples seen but also on the availability of hypotheses. People grok faster if you provide them with symbolic descriptions of what goes on. What are your thoughts on the representation and transfer of the resulting structure, e.g., via language/token streams?
I mean scoring thoughts in the sense of [Intro to brain-like-AGI safety] 3. Two subsystems: Learning & Steering with what Steven calls "Thought Assessors". Thoughts totally get scored in that sense.
About the problems you mention:
the apparent phenomenon of credit assignment improving over a lifetime. When you're older and wiser, you're better at noticing which of your past actions were bad and learning from your mistakes.
I don't see why this is a problem. More data will lead to better models over time. You get exposed to more situations, and with more data, the noise slowly averages out - not necessarily because you can clearly attribute effects to their causes, but because you eventually land in situations where the effect is clearer. It usually takes special conditions to get people out of their local optimum.
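The "noise averages out with more data" claim can be made concrete with a minimal sketch. Assume a toy setup (all names and numbers here are illustrative, not from the original discussion): two actions with close true values observed through heavy noise. With few observations you often misjudge which action is better; with many, the sample means separate and attribution becomes reliable.

```python
import random

random.seed(0)

# Two actions with close true values; every observation is very noisy.
TRUE_VALUE = {"a": 0.55, "b": 0.45}
NOISE_SD = 1.0

def estimate(n):
    """Average n noisy observations per action and pick the apparently better one."""
    means = {}
    for action, true in TRUE_VALUE.items():
        samples = [true + random.gauss(0, NOISE_SD) for _ in range(n)]
        means[action] = sum(samples) / n
    return max(means, key=means.get)

def accuracy(n, trials=2000):
    """Fraction of trials in which the noise averaged out enough to pick 'a'."""
    return sum(estimate(n) == "a" for _ in range(trials)) / trials

for n in (5, 50, 500):
    # Accuracy rises toward 1.0 as n grows - no explicit credit
    # assignment needed, just more exposure.
    print(n, round(accuracy(n), 2))
```

The point of the sketch is only that better attribution can fall out of sheer sample size, without any improvement in the attribution mechanism itself.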
without any anti-reinforcement event occurring
And if it looks like this insight comes in hindsight, by carefully reflecting on the situation - that's still not without reinforcement. Your thoughts are scored against whatever it is that the brainstem is evaluating. And, as above, sooner or later you stumble into thoughts where the pattern is more clearly attributable, and then the weights change.
The main difference between LDAIXI and a human in terms of ontology seems to be that the things the human values are ultimately grounded in senses and a reward tied to them. For example, we value sweet things because we have a detector for sweetness and a reward tied to that. When our understanding of what sugar is changes, the detector doesn't, and thus the ontology change works out fine. But I don't see a reason you couldn't set up LDAIXI the same way: just specify the reward in terms of a diamond detector - or multiple ones. In the end, AIXI already uses detectors - how else would it get input?
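A minimal sketch of what "reward grounded in a detector, not in the ontology" could mean. Everything here is hypothetical and illustrative (the detector, the threshold, the observation format are my inventions, not part of AIXI or LDAIXI): the reward is defined over raw sense data, so revising the world model's concept of "diamond" leaves the reward signal untouched.

```python
def diamond_detector(observation: dict) -> float:
    # Stands in for a physical sensor: it only looks at raw sense data,
    # e.g. a refraction-index reading (threshold chosen for illustration).
    return 1.0 if observation.get("refraction_index", 0.0) > 2.4 else 0.0

def reward(observation: dict) -> float:
    # Reward is defined via the detector over raw observations.
    # The agent's ontology ("what a diamond really is") can change freely;
    # this function, like the sweetness detector, does not.
    return diamond_detector(observation)

print(reward({"refraction_index": 2.42}))  # diamond-like reading
print(reward({"refraction_index": 1.33}))  # water-like reading
```

The design choice mirrors the sweetness example: the grounding lives in the sensor, so ontology shifts in the learned model don't re-target the reward.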
Thank you for mentioning us. In fact, the list of candidate instincts got longer. It isn't in a presentable form yet, but please message me if you want to talk about it.
The list is still mostly theoretical, and I want to show that it is not just speculation by operationalizing it. jpyykko is already working on something more on the symbolic level.
Rohin Shah recommended that I find people to work with me on alignment, and I teamed up with two LWers. We just started work on a project to simulate instinct-cued learning in a toy world. I think this project fits research point 18.104.22.168, and I now wonder how to apply for funding - we would probably need it if we want to run simulations with somewhat larger NNs.