Luke H Miles

There is a lot of room between "ignore people; do drastic thing" and "only do things where the exact details have been fully approved". In other words, the Overton window has pretty wide error bars.

I would be pleased if someone sent me a computer virus that was actually a security fix. I would be pretty upset if someone fried all my gadgets. If someone secretly watched my traffic for evil AI fingerprints I would be mildly annoyed but I guess glad?

Even Google has been threatening maintainers of unpatched software: patch it, or else they'll release the exploit, IIRC.

So some of the question of "to pivotally act or not to pivotally act" is resolved by acknowledging that extent matters and that you can be polite in some cases.

This is the post I would have written if I had had more time, known more, thought faster, etc.

One note about your final section: I expect the tool -> sovereign migration to be pretty easy and go pretty well. It is also kind of multistep, not binary.

E.g., current browser automation tools (which bring browsers one step up the agency ladder, to scriptable processes) work very well, probably better than a from-scratch web scripting tool would.
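For concreteness, this is roughly what that step up looks like with Playwright (a real library; the specific page and actions here are just an illustration):

```python
# Minimal sketch of a browser as a scriptable process, using Playwright.
# Install with `pip install playwright` and `playwright install chromium`.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # drive a real browser programmatically
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())                          # the script, not a human, decides the next action
    browser.close()
```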

Fake example: predict proteins, then predict interactions, then predict cancer-preventiveness, THEN, if everything has gone well so far, solve for the protein that prevents the cancer. You might not need more steps, but you could also incrementally solve for the chemical synthesis process, eliminate undesired byproducts, etc.

More generally, it might be easy to flip simulators/predictors over into optimizers when the system is trustworthy and the situation demands it.
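A sketch of that shape, where every function name, object, and threshold is a placeholder rather than a real tool:

```python
# Hypothetical sketch of the staged pipeline above. All names are placeholders;
# the point is only that the optimizer runs last, and only after each purely
# predictive stage has passed a sanity check.
def design_preventive_protein(target, predictors, optimizer, min_confidence=0.9):
    structures = predictors.predict_structures(target)               # stage 1: pure prediction
    if structures.confidence < min_confidence:
        return None                                                  # stop: stay in predictor land

    interactions = predictors.predict_interactions(structures)       # stage 2: still prediction
    if interactions.confidence < min_confidence:
        return None

    effect = predictors.predict_cancer_preventiveness(interactions)  # stage 3: still prediction
    if effect.confidence < min_confidence:
        return None

    # Only now flip from "predict" to "optimize": search for a protein whose
    # *predicted* effect is the one we want, using the same predictive models.
    candidate = optimizer.solve_for_protein(target, objective=effect.objective)

    # Further stages (synthesis route, byproduct elimination, ...) could be
    # gated the same way.
    return candidate
```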

If your system is low-impact or whatever, then you can have it melt GPUs without it going off and running the world forever.

Assumptions:

  1. An operator making high-level judgement calls is a pretty good solution for the foreseeable future.
  2. The self-referential decision theory stuff won't come into play in an important way for a good while (at least, the problems won't demand anything super rigorous).

So +1 for staying in happy tool land and punting on scary agent land

Someone make a PR for a builder/breaker feature on LessWrong.

Thanks; I especially like "vague/incorrect labels" as a way to refer to that mismatch. Well-posed question by Garrabrant; I might touch on that in my next post.

Good point about validators failing silently and being more strongly vetted.

using a 'cleanroom' approach and investing heavily in formal verification (and testing) can get you an orders-of-magnitude lower error rate than ordinary software... at orders-of-magnitude greater cost

Abstractly, it seems to me that once the tooling and process are figured out for one task in a narrow domain, you could reuse them on other tasks in the same domain at relatively low cost. But the history of repeated, similar vulnerabilities over long time spans in narrow domains (e.g., GNU coreutils) is perhaps some evidence against that.

At more ordinarily-reasonable levels of rigor, I'd emphasize understandable interfaces rather than primitives, and agree that ensuring people can think about composition helps albeit in weaker ways than I think you're suggesting.

I agree with the first half, and would add that restricting the kinds of interface has a large additional benefit on top of the interfaces themselves. If you are dealing with classes and functions and singleton modules and macros, then you're far more prone to error compared to just using any single one of those things, even if they are all simple.
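To gesture at what I mean by kinds of interface, a toy contrast (my own made-up example, not from the thread):

```python
# The same tiny feature exposed through several interface kinds at once vs.
# through plain functions only. Each piece is simple, but the mixed version
# gives a reviewer more interaction patterns (instance state, hidden module
# state, decoration) to reason about together.

# Mixed kinds: a class, module-level singleton state, and a decorator.
class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1

_global_counter = Counter()        # hidden module state

def counted(fn):                   # decorator that touches the hidden state
    def wrapper(*args, **kwargs):
        _global_counter.bump()
        return fn(*args, **kwargs)
    return wrapper

# Single kind: plain functions over explicit state.
def new_counter():
    return {"n": 0}

def bump(counter):
    counter["n"] += 1
```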

If system validation has exponential cost with respect to confidence and system size, then I think the simplicity of the primitives is, perhaps, the base of the exponent, and the base for simple primitives is a lot smaller than the base for complex ones.

My main point is that this is the difference between "annoying to validate" and "we will never ever be able to validate".
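A rough way to write the intuition, with $b$ and $n$ as my own stand-in symbols (validation cost roughly exponential in system size $n$, with the base $b$ set by how simple the primitives are):

\[
  \text{cost}(n) \approx b^{\,n}, \qquad b_{\text{simple}} \ll b_{\text{complex}}
\]

For example, at $n = 40$ components, a base of 1.1 gives about a 45x cost while a base of 2 gives about $1.1 \times 10^{12}$x, which is roughly the gap between "annoying" and "never".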

Thoughts on when models will or won't exploit edge cases? For example, if you made an electronic circuit using evolutionary algorithms in a high-fidelity simulation, I would expect it to take advantage of V = IR being wrong in edge cases.

In other words, how much of the work do you expect to be in inducing models to play nice with abstraction?

ETA: abstractions are sometimes wrong in stable (or stabilizable) states, so you can't always lean on chaos washing the error out
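To make the circuit scenario concrete, here is a toy version I made up (the fuse, the numbers, and the "simulator" are all stand-ins): a trivial evolutionary search scored by an idealized Ohm's-law model walks straight into the regime where that model stops holding.

```python
# Toy sketch: an evolutionary search optimizes against an idealized V = I*R
# model and ends up in the regime where the model is wrong (past a fuse's limit).
import random

RESISTANCE = 10.0   # ohms
FUSE_LIMIT = 1.0    # amps; the idealized model doesn't know about this
V_MAX = 50.0        # volts; the search space is [0, V_MAX]

def idealized_power(voltage):
    # Idealized abstraction: Ohm's law holds everywhere, so P = V^2 / R.
    return voltage ** 2 / RESISTANCE

def realistic_power(voltage):
    # Stand-in for reality: above the fuse limit the circuit opens and delivers nothing.
    current = voltage / RESISTANCE
    return 0.0 if current > FUSE_LIMIT else voltage * current

def evolve(generations=100, pop_size=20):
    population = [random.uniform(0.0, V_MAX) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=idealized_power, reverse=True)   # select on the *model*, not reality
        parents = population[: pop_size // 2]
        children = [min(V_MAX, max(0.0, v + random.gauss(0, 1.0))) for v in parents]
        population = parents + children
    return max(population, key=idealized_power)

best_v = evolve()
print(f"chosen design: {best_v:.1f} V")
print(f"model says {idealized_power(best_v):.1f} W, reality delivers {realistic_power(best_v):.1f} W")
# Typically converges near V_MAX, where the model predicts ~250 W and the real circuit delivers 0 W.
```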

What counts as a solution? You could of course set this up completely manually, or you could train another net to tweak the first. The gold standard would be to have a basically normal net do this on a basically normal task...

This is true in every field and is apparently very difficult to systematize. It is perhaps a highly unstable social state to have people changing directions or thinking/speaking super honestly often.

How could one succeed where so few have?

It seems I was missing the right keywords in my search for demos of this, because when I google "ai research assistant" there is quite a lot of work.

The difference between evolution and gradient descent is sexual selection and predator/prey/parasite relations.

Having agents running around inside it everywhere completely changes the process.

Likewise for comparing any kind of flat optimization or search to evolution. I think sexual selection and predator-prey dynamics made natural selection dramatically more efficient.

So I think it's a pretty fair objection to say you don't take evolution as adequate evidence to expect this flat, dead, temporary number cruncher to blow up into exponential intelligence.

I think there are other reasons to expect that, though.

I haven't read these 500 pages of dialogues, so somebody has probably made this point already.
