Luke H Miles

Comments

Someone make a PR for a builder/breaker feature on LessWrong

Thanks. I especially like "vague/incorrect labels" as a way to refer to that mismatch. Well-posed question by Garrabrant; I might touch on that in my next post.

Good point about validators failing silently and being more strongly vetted.

using a 'cleanroom' approach and investing heavily in formal verification (and testing) can get you an orders-of-magnitude lower error rate than ordinary software... at orders-of-magnitude greater cost

Abstractly, it seems to me that once the tooling and process are figured out for one task in a narrow domain, you could reuse that stuff on other tasks in the same domain at relatively low cost. But the history of repeated similar vulnerabilities over long time ranges in narrow domains (e.g. GNU coreutils) is perhaps some evidence against that.

At more ordinarily-reasonable levels of rigor, I'd emphasize understandable interfaces rather than primitives, and agree that ensuring people can think about composition helps albeit in weaker ways than I think you're suggesting.

I agree with the first half and would add that restricting the kinds of interfaces has a large additional benefit on top of the interfaces themselves. If you are dealing with classes and functions and singleton-module things and macros all at once, then you're far more prone to error than if you just use any single one of those things, even if they are all simple.

If system validation has exponential cost with respect to confidence and system size, then I think the simplicity of the primitives is, roughly, the base of the exponent. And a small base raised to the n is a lot smaller than a large base raised to the n.

My main point is that this is the difference between "annoying to validate" and "we will never ever be able to validate".
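A toy illustration of that gap (the bases and sizes here are invented, just to show the shape of the claim):

# Toy illustration, not real data: suppose validation cost ~ base ** size,
# where the base reflects how complicated the primitives are.
simple_base, complex_base = 1.5, 5.0

for size in (10, 30, 60):
    print(f"size={size:3d}  simple primitives: {simple_base ** size:.3g}  "
          f"complex primitives: {complex_base ** size:.3g}")

# Both blow up eventually, but one stays merely annoying at sizes where
# the other is already hopeless.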

Thoughts on when models will or won't exploit edge cases? For example, if you made an electronic circuit using evolutionary algorithms in a high-fidelity simulation, I would expect it to take advantage of V = IR being wrong in edge cases.

In other words, how much of the work do you expect to be in inducing models to play nice with abstraction?

ETA: abstractions are sometimes wrong in stable (or stabilizable) states, so you can't always lean on chaos washing the errors out.
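To gesture at what I mean, a toy sketch (the "simulator" and its broken edge case are entirely made up): even a dumb random search finds and camps on the spot where the abstraction fails.

import random

def simulated_current(voltage, resistance):
    # Toy "simulator": Ohm's law, except it (wrongly) reports a huge current
    # once resistance drops below a threshold (an invented simulator artifact).
    if resistance < 1e-3:
        return 1e6
    return voltage / resistance

def fitness(params):
    voltage, resistance = params
    return simulated_current(voltage, resistance)  # the search wants max current

best = (1.0, 1.0)
for _ in range(10_000):
    v, r = best
    candidate = (v + random.gauss(0, 0.1), max(r + random.gauss(0, 0.1), 1e-6))
    if fitness(candidate) > fitness(best):
        best = candidate

print(best, fitness(best))  # resistance ends up below the threshold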

What counts as a solution? You could of course set this up completely manually. Or you could train another net to tweak the first. The gold standard would be to have a basically normal net do this on a basically normal task...

This is true in every field and is apparently very difficult to systematize. Perhaps it is a highly unstable social state to have people changing directions, or thinking and speaking super honestly, very often.

How could one succeed where so few have?

It seems I was missing the right keywords in my search for demos of this, because when I google "AI research assistant" there is quite a lot of work.

The difference between evolution and gradient descent is sexual selection and predator/prey/parasite relations.

Having agents running around inside everything completely changes the process.

Likewise for comparing any kind of flat optimization or search to evolution. I think sexual selection and predator-prey dynamics made natural selection dramatically more efficient.

So I think it's a pretty fair objection not to take evolution as adequate evidence that this flat, dead, temporary number cruncher will blow up into exponential intelligence.

I think there are other reasons to expect that though.

I haven't read these 500 pages of dialogues so somebody probably made this point already.

That misses element 4, right?

>>> from itertools import product
>>> B = [[{0, 1}, {2, 3, 4}], [{0, 2, 3}, {1, 3}]]
>>> list(product(*B))
[({0, 1}, {0, 2, 3}),
({0, 1}, {1, 3}),
({2, 3, 4}, {0, 2, 3}),
({2, 3, 4}, {1, 3})]
>>> [set.intersection(*tup) for tup in product(*B)]
[{0}, {1}, {2, 3}, {3}]
>>> set.union(*[set.intersection(*tup) for tup in product(*B)])
{0, 1, 2, 3}

Definition paraphrasing attempt / question:

Can we say "a factorization B of a set S is a set of nontrivial partitions of S such that $\bigcup_{(c_1, \ldots, c_n) \in \prod_{b \in B} b} \bigcap_i c_i = S$" (cardinality not taken)? I.e., union(intersect(t in tuple) for tuple in cartesian_product(b in B)) = S. I.e., can we drop the intermediate requirement that each intersection has a unique single element, and only require that the union of the intersections is equal to S?
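For concreteness, the proposed condition as code (the function name is mine), checked against the B from the session above:

from itertools import product

def weakened_condition(B, S):
    # Proposed condition: the union, over tuples in the cartesian product of B,
    # of each tuple's intersection equals S. (This deliberately drops the
    # requirement that each intersection be a single element.)
    return set.union(*(set.intersection(*tup) for tup in product(*B))) == S

B = [[{0, 1}, {2, 3, 4}], [{0, 2, 3}, {1, 3}]]
print(weakened_condition(B, {0, 1, 2, 3}))     # True, matching the union above
print(weakened_condition(B, {0, 1, 2, 3, 4}))  # False: 4 never survives an intersection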
