A Ray

Comments

A Ray · 11

I think at this point these feel like empirical questions, which I think would be much more clearly answered by demonstrations or experiments.

Trying to encode an additional penalty on changing non-semantic information is an interesting idea.

However, I think you're missing that you don't have the ability to directly compare to a reference LM in cases where you're training to improve on some performance benchmark.  During training, the model will change its predictions on everything to some degree (both semantic and non-semantic content).

So your proposed detection will always show some amount of disagreement between the prior and the trained model on weird grammatical patterns as well as conceptual tokens.  The question is: "Is the difference merely due to the changes that improve performance, or is it also transmitting hidden information?"
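
To make that concrete, here is a minimal sketch (my own illustration, not anything from the original discussion) of the comparison in question: per-token KL divergence between a reference LM and a fine-tuned LM, computed from their logits. The point is that after fine-tuning this quantity is nonzero essentially everywhere, so high-KL positions alone don't tell you whether a shift is task improvement or hidden information.

```python
import torch
import torch.nn.functional as F

def per_token_kl(tuned_logits: torch.Tensor, ref_logits: torch.Tensor) -> torch.Tensor:
    """KL(tuned || ref) at each sequence position.

    Both tensors have shape (seq_len, vocab_size); returns shape (seq_len,).
    """
    tuned_logp = F.log_softmax(tuned_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    # KL(p || q) = sum_x p(x) * (log p(x) - log q(x))
    return (tuned_logp.exp() * (tuned_logp - ref_logp)).sum(dim=-1)

# Toy stand-ins for real model outputs (hypothetical): after fine-tuning,
# every position shifts a little, so every position shows some divergence.
seq_len, vocab = 8, 100
ref = torch.randn(seq_len, vocab)
tuned = ref + 0.1 * torch.randn(seq_len, vocab)
print(per_token_kl(tuned, ref))  # small but nonzero KL at every position
```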

A Ray · 21

If what you’re saying is “any change to the distribution will change KL” — I think that’s just correct.

This also applies to changes during training where the model is learning to perform better on the objective task.

So we are expecting some amount of KL divergence already.

My claims are:

  1. The cheapest places to hide information (in terms of KL cost) are places where the model already has high entropy (i.e. it is uncertain between many possible outputs); a toy numeric example follows this list

  2. Optimization pressure will try to push this extra information into the cheapest places to hide

  3. The increase in KL won't be clearly distinguishable from the increase caused by improved performance on the task
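
As a toy illustration of claim 1 (my own made-up numbers, not from any experiment): shifting the same amount of probability mass onto a chosen token costs much less KL when the prior at that position is high-entropy than when it is confident.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) in nats for two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def encode_bit(prior, token, boost=0.2):
    """Shift `boost` probability mass onto `token` (a stand-in for
    steganographically biasing one position toward a chosen output)."""
    q = np.asarray(prior, dtype=float) * (1 - boost)
    q[token] += boost
    return q

high_entropy = np.full(10, 0.1)                # uncertain: 10 near-equal options
low_entropy = np.array([0.91] + [0.01] * 9)    # confident in one option

print("KL cost at high-entropy position:", kl(encode_bit(high_entropy, 3), high_entropy))
print("KL cost at low-entropy position: ", kl(encode_bit(low_entropy, 3), low_entropy))
# ~0.13 nats vs ~0.45 nats: the same bias is cheaper where the prior is uncertain.
```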

A Ray · 20

I think I understand what you're saying, but I want to double check and try laying it out explicitly.

  • The "null space" is actually the null space of the prior model, which is human-like, but not exactly lined up to human perception of language
  • Because of this, the information is "hidden" w.r.t. human perceptions, it's "hidden" w.r.t. the language model prior

I think I agree with both of these points, but here's my thinking for why I still expect to see this phenomenon (and why the article above was simplified to just be "human").

  • Empirically, drawing many samples results in lots of outputs that "contain similar high-level content according to a human (me)"
    • It might be worth studying this explicitly, e.g. setting up a bunch of chain-of-thought-style reasoning scenarios, sampling a lot of generated reasoning, and then asking humans to rate how similar the generations are
    • My prediction is that people will find them largely similar
    • We could also use this same method to study the human-measured relevance of other non-trivial sampling methods, like beam search, etc.
  • "Hiding" w.r.t. language model prior is sufficient, in the limit of good language models
    • One of the things I think my reasoning relies on is that we will continue to produce better language models, which means the prior over time more and more closely resembles human priors
    • This suggests that one way to avoid this phenomena is to only use smaller / less capable models -- or at least they will do a worse job at hiding information.  I currently expect that many research labs will continue to do chain of thought reasoning with state of the art models, though.
  • There's a lot of room in the model distribution
    • Chain of thought reasoning is not currently, nor do I expect it will ever be, overconstrained.
    • Instead, it can be seen as a mechanism for dramatically increasing the space of possible outputs, both those seen during training, and those encountered at test time
  • Human preference effects
    • Research Campaign Effects: Human researchers will want human-like outputs, and thus can be expected to tweak/retrain/etc. in order to get human-like reasoning in the chain of thought.  This introduces an optimization pressure, separate from the KL penalty, towards reasoning being human-like
    • Human preference models: I think an easy-to-predict future direction here is training reward models to match subjective human preferences and then optimizing with them or sampling with them (e.g. best of N; a minimal best-of-N sketch follows this list) -- these preference models could also introduce additional optimization pressure towards human-like reasoning
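
Here is the best-of-N sketch referenced above. The sampling and preference functions are hypothetical toy stand-ins, not any real API; the point is just the selection pressure a preference model adds on top of the base sampling distribution.

```python
import random

def sample_reasoning(prompt: str) -> str:
    # Hypothetical stand-in for sampling one chain-of-thought from a language model.
    return f"{prompt} -> reasoning variant #{random.randint(0, 9999)}"

def preference_score(text: str) -> float:
    # Hypothetical stand-in for a reward model trained on human preference judgments.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Sample n candidate reasoning traces and keep the one the preference
    model ranks highest; this adds optimization pressure toward whatever
    the preference model rewards (e.g. human-like reasoning)."""
    candidates = [sample_reasoning(prompt) for _ in range(n)]
    return max(candidates, key=preference_score)

print(best_of_n("Why does ice float on water?"))
```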

I think the most important point in all of this is that I expect the model prior to be sufficiently close to the human prior -- and I'd be interested in learning more about evidence or research to the contrary.

A Ray · 30

I think this is worth considering even in cases of things like recursive prompting for a couple of reasons that I haven't organized:

  • In general, additional compute time lets you improve your answer, so by default I'd expect answers to any significant question to take multiple steps
    • I agree that for trivial questions it might not do any of this.  My expectation is that the impacts of AI deployments will mostly come from non-trivial questions or tasks
  • So far current research hasn't done a great deal of regularizing or penalizing models for using a lot of thinking
    • Mostly they've optimized for "maximize the chance of correct answer"
    • This could change in the future, especially in cases where the model is expected to be deployed / otherwise sensitive to runtime costs
  • Steganography is rewarded over openness if the open version of a given piece of information the model wants to transmit is somehow far from the prior
    • Caveat: this point is more speculative than the document above
    • One of my points is that due to null space + optimization pressure, it's very easy and convenient for gradient descent to put information in these hidden dimensions
    • I think one of the things this can do well is handle concepts that don't yet exist -- and so reasoning moves like "invent a new term" or "give an illustrative example" or "invent a thought experiment" are lower likelihood under the prior
    • In these cases, it can be more likely for steganography to transmit certain information, if it is less expensive in terms of KL penalties
    • I do agree that for straightforward reasoning on straightforward evidence, where the reasoning is human-like and close to the prior, we can expect it to mostly reason out loud in a way that is "open"
  • Myopia -- though less important than it was earlier in RFLO research -- seemed like a good alignment criterion to aim for, and still does today somewhat
    • The original starting point for this direction of thinking was about Myopia / Non-Myopia in chain of thought reasoning systems
    • The sorts of guarantees I was hoping to get for chain of thought systems included myopia guarantees
    • Steganography came up as an example of how it could break myopia (other more complicated examples include side effects and situational awareness)
    • The concepts in this document came up by asking "how might steganography arise naturally given current research techniques and directions"

A Ray · 30

Putting what I think is the most important part of my reply first: I think research into mitigations is premature, and that demonstrating/measuring the phenomenon should take priority instead.

However, given that, I think I agree that these are all possible mitigations to the phenomenon, in particular (rephrasing your points):

  • Reducing the channel capacity, and thus the amount of information that can be transmitted
    • Constraining the outputs, and thus reducing the degrees of freedom of the null space
    • Adding noise (via dropout or otherwise) also reduces the channel capacity, and probably has some desirable side effects as well (like increasing the diversity or creativity of reasoning); a toy capacity calculation follows this list
  • Introducing "barriers" past which optimization for hidden information becomes difficult (in your example, you use different models; I also think "stop gradients" in end-to-end training has similar effects, if weaker)
  • If you can detect/localize this phenomenon, you might be able to use interpretability-like techniques to limit or constrain it
    • I'm less certain about this, but it's possible that future interpretability research will unlock this ability.
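
To illustrate the noise point quantitatively, here is a toy model (my own simplification, with assumed parameters): treat each reasoning token as a symbol the encoder can choose freely from k options, and let the noise independently replace each token with a uniformly random symbol with some probability. The capacity of that symmetric channel drops quickly as the noise level rises.

```python
import numpy as np

def noisy_channel_capacity(k: int, resample_prob: float) -> float:
    """Capacity (bits per token) of a k-symbol channel where each token is
    independently replaced by a uniform random symbol with probability
    `resample_prob`. For this symmetric channel, C = log2(k) - H(Y|X)."""
    if resample_prob == 0:
        return float(np.log2(k))
    p_same = (1 - resample_prob) + resample_prob / k
    p_other = resample_prob / k
    h_y_given_x = -(p_same * np.log2(p_same) + (k - 1) * p_other * np.log2(p_other))
    return float(np.log2(k) - h_y_given_x)

# Capacity falls as more tokens get resampled (k = 1000 distinguishable tokens).
for noise in [0.0, 0.1, 0.3, 0.5]:
    print(f"noise={noise:.1f}: {noisy_channel_capacity(1000, noise):5.2f} bits/token")
```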

Also, I think an important point here is that we can expect to see this phenomenon with fairly simple techniques -- merely including sampled/generated data in the training set is sufficient, as opposed to requiring a complicated reinforcement learning algorithm like MuZero.
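
As a sketch of how simple that pathway can be (the functions below are hypothetical toy stand-ins, not any particular lab's setup): sample reasoning, keep the samples whose final answers grade as correct, and fine-tune on them. Any regularity in the kept reasoning, including a covert encoding that happens to correlate with correctness, gets reinforced by ordinary supervised training.

```python
import random

# Hypothetical stand-ins: replace with real model sampling and task grading.
def sample_reasoning_and_answer(prompt: str) -> tuple[str, str]:
    reasoning = f"step-by-step thoughts about {prompt!r} #{random.randint(0, 999)}"
    answer = random.choice(["A", "B"])
    return reasoning, answer

def is_correct(prompt: str, answer: str) -> bool:
    return answer == "A"  # toy grader

def build_self_training_set(prompts: list[str], samples_per_prompt: int = 8) -> list[dict]:
    """Collect (prompt, reasoning, answer) triples whose answer graded correct,
    for reuse as ordinary fine-tuning data."""
    dataset = []
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            reasoning, answer = sample_reasoning_and_answer(prompt)
            if is_correct(prompt, answer):
                dataset.append({"prompt": prompt, "reasoning": reasoning, "answer": answer})
    return dataset

print(len(build_self_training_set(["Is 7 prime?", "Is 91 prime?"])))
```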

Did you publish your proposal?  I'd be interested in reading it.

A Ray · 50

Agree that founders are a bit of an exception.  Actually, that bit is in the longer version of this that I give when I talk about it in person.

Basically: "The only people who at the very top of large tech companies are either founders or those who were able to climb to the tops of moral mazes".

So my strategic corollary to this is that it's probably weakly better for AI alignment for founders to be in charge of companies longer, and to get replaced less often.

In the case of Facebook, even in the face of their whole history of actions, I think on the margin I'd prefer the founder, rather than the median replacement, to be leading the company.

(Edit: I don't think founders remaining at the head of a company is evidence that the company isn't a moral maze.  Also, I'm not certain I agree that Facebook's pivot couldn't have been done by a moral maze.)

A Ray · 70

I think there should be a norm of adding the BIG-bench canary string to any document describing AI evaluations in detail, where you wouldn't want that document to end up inside the AI's training data.

Maybe in the future we'll have a better tag for "don't train on me", but for now the BIG-bench canary string is the best we have.
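
For the training-side half of that defense, a corpus filter can drop any document containing the canary. A minimal sketch (the marker below is a placeholder; use the full official canary string, including its GUID, from the BIG-bench repository):

```python
from pathlib import Path

# Placeholder: substitute the full official BIG-bench canary string (with GUID).
CANARY_MARKER = "<paste the BIG-bench canary string here>"

def filter_training_files(corpus_dir: str) -> list[Path]:
    """Return only files that do NOT contain the canary marker, so flagged
    evaluation documents never enter the training corpus."""
    kept = []
    for path in Path(corpus_dir).rglob("*.txt"):
        if CANARY_MARKER not in path.read_text(errors="ignore"):
            kept.append(path)
    return kept
```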

This is in addition to things like "maybe don't post it to the public internet" or "maybe don't link to it from public posts" or other ways of ensuring it doesn't end up in training corpora.

I think this is a situation for defense-in-depth.

A Ray · 80

AGI will probably be deployed by a Moral Maze

Moral Mazes is my favorite management book ever, because instead of "how to be a good manager" it's about "empirical observations of large-scale organizational dynamics involving management".

I wish someone would write an updated version -- a lot has changed (though a lot has stayed the same) since the research for the book was done in the early 1980s.

My take (and the author's take) is that any company of nontrivial size begins to take on the characteristics of a moral maze.  It seems to be a pretty good null hypothesis -- any company saying "we aren't/won't become a moral maze" has a pretty huge evidential burden to cross.

I keep this point in mind when thinking about strategy for when it comes time to make deployment decisions about AGI, and to actually deploy AGI.  These decisions are going to be made within the context of a moral maze.

To me, this means that some strategies ("everyone in the company has a thorough and complete understanding of AGI risks") will almost certainly fail.  I think only strategies that work well inside of moral mazes will work at all.

To sum up my takes here:

  • basically every company eventually becomes a moral maze
  • AGI deployment decisions will be made in the context of a moral maze
  • understanding moral maze dynamics is important to AGI deployment strategy

A Ray · 70

Thanks so much for making this!

I'm hopeful this sort of dataset will grow over time as new sources come about.

In particular, I'd nominate adding MLSN (https://www.alignmentforum.org/posts/R39tGLeETfCZJ4FoE/mlsn-4-many-new-interpretability-papers-virtual-logit) to the list of newsletters in the future.

A Ray · 10

This seems like an overly alarmist take on what is a pretty old trend of research.  Six years ago there were a number of universities working on similar models for the VizDoom competition (IIRC the competitions were won by Intel and Facebook).  It seems good to track this kind of research, but IMO the conclusions here are not supported at all by the evidence presented.
