Christian Kleineidam


Intelligence or Evolution?

Error-correcting codes can help a superintelligence avoid unintended self-modification, but they don't necessarily keep its goals stable as its reasoning abilities change.
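To make the distinction concrete, here is a toy sketch (my illustration, not from the comment): a simple 3x repetition code can correct a single flipped bit in a stored goal representation, yet it says nothing about whether the goal's meaning stays stable once the system reasons differently about it.

```python
def encode(bits):
    """Repeat each bit three times (a minimal error-correcting code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority-vote each triple, correcting any single bit-flip."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

goal = [1, 0, 1, 1]          # hypothetical goal representation
stored = encode(goal)
stored[4] ^= 1               # a single corrupted bit
assert decode(stored) == goal  # the bit pattern survives intact
```

The code protects the *bit pattern*; it offers no guarantee about what that pattern means to a system whose reasoning has changed.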

BASALT: A Benchmark for Learning from Human Feedback

The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won't be able to do so. That's what we've tried to do here.

Specifying a reward function for a specific task and specifying one for a more general AGI seem to me like two very different problems.

Additionally, developing a safe system and developing an unsafe system are very different problems. Even if your reward function works 99.9% of the time, it can be exploited in the cases where it fails.
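A toy sketch of this point (my illustration, with made-up numbers): a proxy reward that agrees with the true reward on 99.9% of states still fails badly under optimization pressure, because the optimizer searches out exactly the mismatched 0.1%.

```python
import random

random.seed(0)
STATES = range(100_000)
# Hypothetical 0.1% of states where the proxy reward is wrong.
exploit = set(random.sample(STATES, 100))

def true_reward(s):
    return 0.0 if s in exploit else s / 100_000

def proxy_reward(s):
    # Agrees with true_reward on 99.9% of states,
    # but wrongly assigns a huge reward on the failure states.
    return 10.0 if s in exploit else s / 100_000

best = max(STATES, key=proxy_reward)
assert best in exploit          # the optimizer lands in the failure region
assert true_reward(best) == 0.0  # high proxy reward, zero true reward
```

Random states almost never hit the failure region; an optimizer hits it almost surely, which is why "works 99.9% of the time" is not a safety argument.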

Developmental Stages of GPTs

Maybe put out some sort of prize for the best ideas for plans?

Are we in an AI overhang?

That's the terrifying thing about NNs and what I dub the "neural net overhang": the cost to create a powerful NN is millions of times greater than the cost to run that NN.

I'm not sure why that's terrifying. It seems reassuring to me because it means that there's no way for the NN to suddenly go FOOM because it can't just quickly retrain.

Goodhart Taxonomy

I upvoted the post for the general theory. On the other hand, I think the examples could be clearer, and it would be good to find examples that are more commonly encountered.