simeon_c

@SaferAI


Are there existing models for which we're pretty sure we know all their latent knowledge? For instance, small language models or something like that.

Thanks for the answer! The post you mentioned is indeed quite similar!

Technically, the strategies I suggested in my last two paragraphs (leveraging the fact that we're able to verify solutions to problems we can't solve, and giving partial information to an algorithm while using more information to verify) should let us go far beyond human intelligence / human knowledge using a lot of different narrowly accurate algorithms.

And thus, if the predictor has seen many extremely (narrowly) smart algorithms, it would be much more likely to know what it is like to be much smarter than a human on a variety of tasks. It probably still requires some optimism about generalization, so technically the counterexample could arise in the gap between the capability of the predictor and the capability of the reporter. I feel like one question is: do we expect some narrow algorithms to be much better on very precise tasks than general-purpose algorithms (such as the predictor)? If so, then the generalization the reporter would have to do from training data (humans + narrowly accurate algorithms' capabilities) to inference data (the predictor's capabilities) could be small. We could even include data on the predictor's capability in the training dataset using the second approach I mentioned (i.e., giving partial information to the predictor (e.g., one camera in SmartVault) and using more information (i.e., more cameras) for the humans who verify its predictions). We could provide training examples showing the AI how the human fails much more often than the predictor on the exact same sample of examples. That way, we could greatly reduce the generalization gap that is required.
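To make that concrete, here's a toy sketch (all names and numbers hypothetical, not the report's actual setup) of the kind of training evidence I mean: on the exact same events, human verifiers who only check answers afterwards with extra cameras fail more often than the predictor does.

```python
import random

# Toy sketch (hypothetical setup): compare human and predictor error
# rates on the *same* events, with ground truth established by giving
# the human verifiers more cameras than the predictor sees.

random.seed(0)

def ground_truth(event):
    # What the extra cameras let human verifiers establish after the fact.
    return event["diamond_present"]

def human_answer(event):
    # Humans see only one camera: they guess wrong when the view is occluded.
    if event["occluded"]:
        return True  # they assume the diamond is still there
    return event["diamond_present"]

def predictor_answer(event):
    # The (narrowly superhuman) predictor infers correctly even when occluded.
    return event["diamond_present"]

events = [{"diamond_present": random.random() < 0.5,
           "occluded": random.random() < 0.4}
          for _ in range(1000)]

human_errors = sum(human_answer(e) != ground_truth(e) for e in events)
predictor_errors = sum(predictor_answer(e) != ground_truth(e) for e in events)
# On this shared sample, human_errors > predictor_errors.
```

Training rows built this way show the reporter directly that human failures come from limited observation, not from the truth being different.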

The advantage of this approach is that the bulk of the additional training cost the reporter requires comes from generating the dataset, which is a fixed cost that no user has to pay again. So that could slightly reduce the competitiveness issues compared with approaches that modify the training procedure.

Despite all that, thanks to your comment and the report, I do see why the approach I mention might have some intrinsic limitations in its ability to elicit latent knowledge. The problem is that even if the model roughly understands that it has an incentive to use most of what it knows when we ask it to simulate the prediction of someone with its own characteristics (or a 1400 IQ), ELK demands a global maximum (we want it to use ALL its knowledge), so there is always some uncertainty about whether it really understood that point for extreme intelligence / extreme examples, or whether it just tries to fit the training data as closely as possible and thus still withholds something it knows.

You said that naive questions were tolerated, so here's a scenario for which I can't figure out why it wouldn't work.

It seems to me that when an AI fails to predict the truth (because it predicts as humans would), it's because the AI has built an internal model of how humans understand things and predicts based on that understanding. So if we assume that an AI is able to build such an internal model, why wouldn't we train an AI to predict what a (benevolent) human would say given a certain amount of information and a certain capacity to process information? Doing so, it could develop an understanding of why humans predict badly, and then understand that given a huge amount of information and a huge capacity to process information, the true answer is the right one.

A concrete training procedure could exploit the fact that even among humans, there's a lot of variance in:

  • what they know (i.e., the amount of information they have)
  • their capacity to process information (for instance, in the case of the vault, the capacity to infer what happened based on partial information / partial images, with a bounded capacity to process images (no more than x images per second))

So we could use the variance in humans' capacity to understand what happened to help the AI learn that benevolent humans predict badly whether the diamond has been stolen only because they lack information or the capacity to process it. There are a lot of fields and questions where the majority of humans are wrong and only a small subset is right, because there is a big gap either in the information they have or in their ability to process information.
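As a toy illustration (entirely hypothetical field names and numbers), each training row could pair an answerer's capability features with their answer and the separately verified truth, so the model can see that errors track capacity rather than the truth itself:

```python
# Hypothetical sketch: each row pairs the answerer's capability features
# with their answer and the (separately verified) truth, so a model can
# learn that wrong answers track low information / processing capacity.

training_rows = [
    {"frames_seen": 2,   "fps_capacity": 1,  "answer": "safe",   "truth": "stolen"},
    {"frames_seen": 30,  "fps_capacity": 5,  "answer": "stolen", "truth": "stolen"},
    {"frames_seen": 300, "fps_capacity": 30, "answer": "stolen", "truth": "stolen"},
]

def accuracy_by_capacity(rows):
    """Map processing capacity -> whether that answerer was correct."""
    return {r["fps_capacity"]: r["answer"] == r["truth"] for r in rows}

trend = accuracy_by_capacity(training_rows)
# Only the lowest-capacity answerer gets it wrong.
```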

Once we had trained that AI on humans, we would do inference while specifying the true capabilities of the AI, to ensure that it tells us the truth. The remaining question is whether such an out-of-sample prediction would work. My guess is that if we also included examples with the human Bayes net, to add more variance to the training dataset, it would probably reduce the chances that it fails.

Finally, the concrete problem of specifying the information humans have access to is not trivial, but I think it's feasible.

I don’t understand why it wouldn’t work, so I’d be happy to have an answer to better understand the problem!





EDIT: Here's an update after a discussion about my proposal. After having read Learning the Prior (https://ai-alignment.com/learning-the-prior-48f61b445c04), I think the key novelty of my proposal, if there is any, is to give the model, as input, information about the reasoning capacity of the person / entity that predicts an outcome. Here are a few relevant features we could give it:

  • In the case of the vault, an estimate of the number of images a human is able to see and understand per second
  • IQ (or equivalent) when the task involves reasoning
  • Accuracy on a few benchmark datasets of the AI assistant that's helping the human to think (the human's Bayes net)
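One hypothetical way to encode those features (names and numbers are mine, purely for illustration) is as a small profile passed to the model as extra input alongside each episode:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the capability features listed above,
# attached as extra model input alongside each prediction episode.

@dataclass
class CapabilityProfile:
    images_per_second: float  # frames the entity can see and understand
    iq_equivalent: float      # rough proxy for reasoning ability
    benchmark_accuracy: dict = field(default_factory=dict)  # benchmark -> helper-AI accuracy

human = CapabilityProfile(images_per_second=3.0, iq_equivalent=100.0,
                          benchmark_accuracy={"vault_qa": 0.6})
predictor = CapabilityProfile(images_per_second=10_000.0, iq_equivalent=1400.0,
                              benchmark_accuracy={"vault_qa": 0.99})
```

At inference time we would fill in the predictor's own (much larger) values, which is exactly the out-of-distribution step discussed below.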

 

That said, I feel like the main problem is whether such a model would do well out-of-distribution (i.e., on problems no human is able to solve). Using the approach I suggested, we should be able to exploit the great variation in capacities among humans and algorithms to increase the chances that our algorithm does well when it's much more capable than us.

In particular, I thought of two approaches that could enable us to go far beyond human capacity. We could:

  • leverage the fact that we are able to verify solutions to some problems that we can't solve. There might be algorithms that are very good at solving problems no human can solve, but whose results we can still verify (and thus label). For some mathematical problems, for example, an algorithm might produce reasoning no human could follow, yet we could still verify its solution. If we gave the predictor's Bayes net the characteristics of an algorithm that is much better than us at solving some problems, together with the labels of its answers, that would be a way to leverage narrowly superhuman algorithms to generate more variance in the training dataset and decrease the chances of failure at inference. I feel like this approach is very interesting because it would expose the predictor's Bayes net to algorithms that are both superhuman and have a lot of information (more than any human), which is the situation in which the predictor would have to do inference.
  • use more information than a more (narrowly) clever entity than us, to verify its predictions. In the case of the vault, for instance, there could be situations where no human would be able to predict whether the diamond was stolen given partial information, but we could use better cameras ourselves to accurately verify the predictions of a good algorithm. That way, we would also be able to communicate to the predictor's Bayes net what it is like to be incredibly smart (more than any human) and to make good predictions in that regime. I feel like hiding information from extremely accurate narrow algorithms in this way could let us still verify their predictions while exposing the predictor's Bayes net to very smart entities in multiple domains. The problem with this approach, though, is that it doesn't give the predictor examples of both "maximal information" and "maximal intelligence", because it relies on always hiding part of the information from the algorithm so that we can still verify its claims.
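The first approach can be sketched with a toy verifier. Here, factorization stands in for any problem that is hard to solve but cheap to check (the solver and function names are hypothetical):

```python
# Toy sketch: factorization as a stand-in for problems that are hard to
# solve but cheap to verify. A possibly-superhuman solver's answers can
# still be labeled correct/incorrect by a cheap checker.

def verify_factorization(n, factors):
    """Cheap check of an answer we might never have found ourselves."""
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

def toy_solver(n):
    # Stand-in for a narrowly superhuman solver (here: trial division).
    for d in range(2, n):
        if n % d == 0:
            return [d, n // d]
    return [n]

def label_solver_outputs(problems, solver):
    """Build verified labels from the solver's outputs, as training data."""
    return [(n, solver(n), verify_factorization(n, solver(n)))
            for n in problems]

labeled = label_solver_outputs([15, 21, 91], toy_solver)
```

The verified triples can then go into the training set tagged with the solver's capability profile, exactly as described above.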

 

Despite that, I don't know whether, asymptotically, I'd expect the algorithm to still be truthful. But this could greatly broaden the distribution on which it's truthful.