Interested in math, Game Theory, etc.


I think we get enough things referencing quantum mechanics that we should probably explain why that doesn't work (if it doesn't) rather than just downvoting and moving on.

We realized that if we consider an empty board an optimizing system then any finite pattern is an optimizing system (because it's similarly robust to adding non-viable collections of live cells)

Ah. I interpreted the statement about the empty board as being one of:

A small random perturbation will probably be non-viable and collapse back to the empty board. (Whereas patterns that are viable don't (necessarily) have this property.)

I then asked whether the bottle cap example had the same robustness.

An empty board is also an example of an optimizing system that is robust to adding non-viable collections of live cells (e.g., fewer than 3 live cells next to each other). 

And the 'bottle cap' example is not (robust to adding cells, or cells colliding* with it)? But if it were, then it would be an 'optimizing system'?

*spreading out, and interacting with it
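For concreteness, the robustness claim can be checked with a minimal Life step function (my own sketch, not from the original post): a "non-viable collection" of fewer than 3 mutually adjacent live cells collapses back to the empty board in one generation, while a still life is unaffected.

```python
from itertools import product

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    # A cell is live next generation with exactly 3 neighbors (birth or
    # survival), or 2 neighbors if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A non-viable perturbation (only 2 live cells) collapses to the empty board:
print(step({(0, 0), (0, 1)}))  # set()
# A still life (the 2x2 block) persists unchanged:
print(step({(0, 0), (0, 1), (1, 0), (1, 1)}))
```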

(Weird meta-note: Are you aware of something unusual about how this comment is posted? I saw a notification for it, but I didn't see it in the comments section for the post itself until initially submitting this reply. I'm newish to posting on Lightcone forums...)

Ah. When you say lightcone forums, what site are you on? What does the URL look like?

For this point, I'm not sure how it fits into the argument. Could you say more?

It's probably a tangent. The idea was:

1) Criticism is great.

2) Explaining how that could be improved is marginally better. (I then explained for that case* how citing 'old evidence' or 'old stuff' could still apply to new stuff. It was kind of a niche application of evidence though. If someone had a good reason for using the old evidence, elaborating on that reason might help.)

*In abstract terms - I didn't have any examples in mind.

The paper makes a slightly odd multi-step argument to try to connect to active debates in the field:

This comment is some quick feedback on those:

Weirdly, this even happens in papers that themselves show positive results involving NNs.


citations to failures in old systems that we've since improved upon significantly.

Might not be a main point, but this could be padded out with an explanation of how something like that could be marginally better. Like adding:

"As opposed to explaining how that is relevant today, like:

[Old technique] had [problem]. As [that area] has matured [problem has been fixed in this way]. However [slower deployment]/[more humans in the loop]/[other fix] would have reduced [problems]. Using [these fixes]/not making them critical systems which is risky because _ can help ensure [this new area] which [has the same problem] and probably will for [time] until it matures, does not have the same problems [old area] did [for length of time]."


But is that actually the right way to minimize the risk of harms? We should expect [that]

  • Is there any empirical base which could be used to estimate this or provide information on improving things? Anything similar?

We should expect the impacts of these technologies to grow dramatically as they get better

  • What if the impact grows dramatically as...they get deployed widely? Even if it's a bad idea, it's widely done because it's popular/cool/a fad/etc.?
  • What approach would work best then?
It's remarkable that googling "thermodynamics of the game of life" turns up zero results. 

It's not obvious that thermodynamics generalizes to the game of life, or what the equivalents of energy or order would be: at first glance it has perpetual motion machines ("gliders").
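The glider claim is easy to verify with a few lines of code (a sketch I wrote for this comment): the glider is never destroyed, it just translates one cell diagonally every 4 generations.

```python
from itertools import product

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard glider, in its ".X. / ..X / XXX" orientation:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After 4 generations it is the same pattern, shifted by (+1, +1):
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```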

This was a good post. I'd bookmark it, but unfortunately that functionality doesn't exist yet.* (Though if you have any open source bookmark plugins to recommend, that'd be helpful.) I'm mostly responding to say this though:

Designing Recommender Systems to Depolarize

While it wasn't otherwise mentioned in the abstract of the paper (above), this was stated once:

This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict.

I thought this was worth calling out, although I am still in the process of reading the paper (10 pages of content, plus 4 pages of references).

And some other commentary while I'm here:

It's common for people to be worried about recommender systems being addictive

I imagine the recommender system is only as good as what it has to work with, content-wise - and that's before getting into 'what does the recommender system have to go off of' and 'what does it do with what it has'.

Whenever I talk to someone who seems to have actually studied the topic in depth, it seems they think that there are problems with recommender systems, but they are different from what people usually imagine.

This part wasn't elaborated on. To put it a different way:

It's common for people to be worried about recommender systems being addictive or promoting filter bubbles etc, but as far as I can tell, they don't have very good arguments for these worries.

Do the people 'who know what's going on' (presumably) have better arguments? Do you?

*I also have a suspicion it's not being used. I.e., past a certain number of bookmarks (like 10), it's not actually feasible to use the LW interface to access them.

How do you try to discourage all "deliberate mistakes"? 

1. Make something that has a goal. Does AlphaGo make deliberate mistakes at Go? Or does it try to win, and always make the best move* (with possibly the limitation that it might not be as good at playing from positions it wouldn't play itself into)?

*This may be different from 'maximize score, or wins long term'. If you try to avoid teaching your opponent how to play better while seeking out wins, there can be a 'metagame' approach - though this might require games to have the right structure, especially training that creates a tournament focus rather than a per-game focus. And I would guess AlphaGo is game-focused rather than tournament-focused.

Why do you suppose it's not an agent? Isn't that essentially the question of inner optimizers? IE, does it get its own goals? Is it just trying to predict?

A fair point. Dealing with this at the level of 'does it have goals' is a question worth asking. I think that it, like AlphaGo, isn't engaging in particularly deliberate action, because I don't think it is set up to do that, or to learn to do that.

You think of the spelling errors as deception. Another way of characterizing it might be 'trying to speak the lingo'. For example, we might imagine an agent that, if it chatted with you for a while and you didn't use words like 'ain't' a lot, might shift to not using words like that around you. (Is an agent that "knows its audience" deceptive? Maybe yes, maybe no.)

You think that there is a correct way to spell words. GPT might be more agnostic. For example (it's weird to not put this in terms of prediction), if another version of GPT (GPT-Speller) somehow 'ignored context', or 'factored it better', then we might imagine Speller would spell words correctly with some probability. You and I understand that 'words are spelled (mostly) one way'. But Speller might come up with words as probability distributions over strings - spelling things right most of the time (if the dataset has them spelled that way most of the time), but always getting them wrong sometimes because it:

  • Thinks that's how words are. (Probability blobs. Most of the time "should" should be spelled "should", but 1% or less it should be spelled "shoud".)
  • Is very, but not completely certain it's got things right. Even with the idea that there is one right way, there might be uncertainty about what that way is. (I think an intentional agent like us, as people, at some point might ask 'how is this word spelled', or pay attention to scores it gets, and try to adjust appropriately.**)

**Maybe some new (or existing) methods might be required to fix this? The issue of 'imperfect feedback' sounds like something that's (probably) been an issue before - and not just in conjunction with the words 'Goodhart'.
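The 'probability blobs' picture above can be made concrete with a toy sketch (everything here is hypothetical, including the corpus frequencies): a "GPT-Speller" that has learned spellings as a distribution over strings will match its noisy corpus, spelling the word right most of the time but wrong some fixed fraction of the time.

```python
import random

# Hypothetical corpus frequencies: "should" spelled correctly 99% of the time.
CORPUS_COUNTS = {"should": 99, "shoud": 1}

def sample_spelling(rng):
    """Sample a spelling in proportion to its (noisy) corpus frequency."""
    spellings = list(CORPUS_COUNTS)
    weights = list(CORPUS_COUNTS.values())
    return rng.choices(spellings, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_spelling(rng) for _ in range(10_000)]
# The misspelling rate converges to the corpus rate (~1%), by construction:
print(samples.count("shoud") / len(samples))
```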

I also lean towards 'this thing was created, given something like a goal, and it's going to keep pursuing that goal-like thing'. If it 'spells things wrong to fit in', that's because it was trained as a predictor, not a writer. If we want something that writes, then yes, figuring out how to train that might be hard. If you want something out of GPT that differs from the objective 'predict', then maybe GPT needs to be modified, if prompting it correctly doesn't work.

Given the way it 'can respond to prompts', characterizing it as 'deceptive' might make sense under some circumstances*, but if you're going to look at it that way, training something to do 'prediction' (of original text) and then having it 'write' is systematically going to result in 'deception', because it has been trained to be a chameleon. To blend in. To say whatever the author of the string it is currently being tested against would say. Its abilities are shocking, and it's easy to see them in an 'action framework'. However, if it developed a model of the world, and it were possible to factor that out from the goal, then pulling the model out and getting 'the truth' would be possible. But the two might not be separable. If trained on, say, 'a flat earther dataset', will it say "the earth is round"? Can it actually achieve insight?

If you want a good writer, train a good writer. I'm guessing 'garbage in, garbage out' is as much a rule for AI as for straight-up programming.*** If we give something the wrong rewards, the system will be gamed (absent a system (successfully) designed and deployed to prevent that).

*i.e., it might have a mind, but it also might not. Rather it might just be that

***More because the AI has to 'figure out' what it is that you want, from scratch.

If GPT, when asked 'is this spelled correctly: [string]', tells us truthfully, then as far as deception goes, that's probably not an issue. Arguably it's 'deceiving' everyone all the time that it is a human (assuming most text in its corpus is written by humans, and most prompts match that), or trying to. If it thinks it's supposed to play the part of someone who is bad at spelling, it might be hard to read.

(I haven't heard of it making any new scientific discoveries*. Though if it hasn't read a lot of papers, it could be trained...)

*This would be surprising, and might change the way I look at it - if a predictor can do that, what else can it do, and is the distinction between an agent and a predictor a meaningful one? Maybe not. Though pre-registration might be key here. If most of the time it just produces awful or mediocre papers, then maybe it's just a 'monkey at a typewriter'.

The most useful definition of "mesa-optimizer" doesn't require them to perform explicit search, contrary to the current standard.

And presumably the extent to which search takes place isn't important, nor a measure of risk or of optimization. (In other words, it's not a part of the definition, and it shouldn't be a part of the definition.)

Some of the reasons we expect mesa-search also apply to mesa-control more broadly.

Do you mean 'reasons we expect mesa-search might be a problem'?

Highly knowledge-based strategies, such as calculus, which find solutions "directly" with no iteration -- but which still involve meaningful computation.

This explains 'search might not be the only problem' rather well (even if it isn't the only alternative).
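As a toy contrast (my own example, not from the post): a knowledge-based strategy finds the argmin of a quadratic directly via calculus, with zero iterations, while a search strategy gets to the same answer by gradient descent. Both involve meaningful computation; only one searches.

```python
# Minimize f(x) = a*x^2 + b*x + c (a > 0).

def argmin_direct(a, b):
    # Knowledge-based: f'(x) = 2*a*x + b = 0  =>  x = -b / (2*a).
    return -b / (2 * a)

def argmin_search(a, b, x=0.0, lr=0.1, steps=1000):
    # Search-based: iteratively step against the gradient.
    for _ in range(steps):
        x -= lr * (2 * a * x + b)
    return x

print(argmin_direct(1.0, -4.0))   # 2.0, no iteration
print(argmin_search(1.0, -4.0))   # converges to ~2.0 after many steps
```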

Dumb lookup tables.

Hm. Based on earlier:

Mesa-controller refers to any effective strategies, including mesa-searchers but also "dumber" strategies which nonetheless effectively steer toward a misaligned objective. For example, thermostat-like strategies, or strategies which have simply memorized a number of effective interventions.

It sounds like there's also a risk of smart lookup tables. That might not be the right terminology, but 'lookup tables which contain really effective things', even if the tables themselves just execute and don't change, seem worth pointing out somehow.
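A minimal sketch of what such a 'dumb' controller might look like (a hypothetical thermostat-style lookup table, echoing the quoted example): a fixed table of memorized interventions, with no search and no learning, that still reliably steers the system toward a target state.

```python
# Memorized interventions: state label -> heating delta.
ACTIONS = {"too_cold": 1.0, "too_hot": -1.0, "ok": 0.0}

def classify(temp, target=20.0, band=0.5):
    """Map the observed temperature to one of the table's keys."""
    if temp < target - band:
        return "too_cold"
    if temp > target + band:
        return "too_hot"
    return "ok"

def run(temp, steps=100):
    """Apply the memorized intervention at each step; no planning involved."""
    for _ in range(steps):
        temp += ACTIONS[classify(temp)]
    return temp

print(run(0.0))   # settles at 20.0, inside the target band
```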

I think mesa-control is thought of as a less concerning problem than mesa-search, primarily because: how would you even get severely misaligned mesa-controllers? For example, why would a neural network memorize highly effective strategies for pursuing an objective which it hasn't been trained on?
  • AgentOne learns to predict AgentTwo so they don't run into each other as they navigate their environment and try to pursue their own goals or strategies (jointly or separately).
  • Something which isn't a neural network might?
  • If people don't want to worry about catastrophic forgetting, they might just freeze the network. (Training phase, thermostat phase.)
  • Someone copies a trained network, instead of training from scratch - accidentally.
  • Malware

The point of inner alignment is to protect against those bad consequences. If mesa-controllers which don't search are truly less concerning, this just means it's an easier case to guard against. That's not an argument against including them in the definition of the inner alignment problem.

A controller, mesa- or otherwise, may be a tool another agent creates or employs to obtain their objectives. (For instance, someone might create malware that hacks your thermostat to build a bigger botnet - yay, Internet of Things!) It might be better to think of the 'intelligence/power/effectiveness of an object for reaching a goal' (even for a rock) as a function of the system, rather than of the parts.

If you used your chess experience to create a lookup table that could beat me at chess, its 'intelligence' would be an expression of your intelligence/optimization.

For non-search strategies, it's even more important that the goal actually simplify the problem as opposed to merely reiterate it; so there's even more reason to think that mesa-controllers of this type wouldn't be aligned with the outer goal.

How does a goal simplify a problem?

My model is that GPT-3 almost certainly is "hiding its intelligence" at least in small ways. For example, if its prompt introduces spelling mistakes, GPT-3 will 'intentionally' continue with more spelling mistakes in what it generates.

Yeah, because its goal is prediction. Within prediction there isn't a right way to write a sentence. It's not a spelling mistake, it's a spelling prediction. (If you want it to not do that, then train it on predicting the sentence, spelled correctly. Reward correct spelling, with a task of 'seeing through the noise'. You could try going further and reinforce a particular style, or 'this word is better than that word'.)

Train a model to predict upvotes on Quora, StackExchange, and similar question-answering websites. This serves as a function recognizing "intelligent and helpful responses".

Uh, that's not what I'd expect it to do. If you're worried about deception now, why don't you think that'd make it worse? (If nothing else, are you trying to create GPT-Flattery?)

If this procedure works exceedingly well, causing GPT to "wake up" and be a human-level conversation partner or greater, we should be very worried indeed. (Since we wouldn't then know the alignment of the resulting system, and could be virtually sure that it was an inner optimizer of significant power.)

It's not an agent. It's a predictor. (It doesn't want to make paperclips.)

I think you're anthropomorphizing it.

If you would be interested in participating conditional on us offering pay or prizes, that's also useful to know.

Do you want this feedback at the same address?
