Adele Lopez

Biology-Inspired AGI Timelines: The Trick That Never Works

You're missing the point!

Your arguments mostly show that brains are optimized for energy efficiency, but the important quantity in question is computational efficiency! You even admit that neurons are "optimizing hard for energy efficiency at the expense of speed", but you don't seem to have noticed that this fact makes almost everything else you said completely irrelevant!

Biology-Inspired AGI Timelines: The Trick That Never Works

Going to try answering this one:

Humbali: I feel surprised that I should have to explain this to somebody who supposedly knows probability theory. If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you're concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does. Your probability distribution has lower entropy. We can literally just calculate out that part, if you don't believe me. So to the extent that you're wrong, it should shift your probability distributions in the direction of maximum entropy.

[Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?]

The uncertainty must already be "priced into" your probability distribution. So your distribution, and hence your median, shouldn't shift at all, unless of course you actually observe new relevant evidence.
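The entropy claim and its limits can be checked numerically. The distributions below are made-up illustrations (not anyone's actual forecast), chosen symmetric around 2040 so the effect on the median is easy to see:

```python
import numpy as np

# Made-up illustration: probability distributions over AGI-arrival years,
# on a year grid symmetric around 2040 (2025..2055 inclusive).
years = np.arange(2025, 2056)

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def median_year(p):
    """First year at which cumulative probability reaches 0.5."""
    return int(years[np.searchsorted(np.cumsum(p), 0.5)])

def truncated_gaussian(center, width):
    w = np.exp(-0.5 * ((years - center) / width) ** 2)
    return w / w.sum()

concentrated = truncated_gaussian(2040, 5.0)
spread = truncated_gaussian(2040, 10.0)
flat = np.ones_like(years, dtype=float) / len(years)

# Humbali's premise is true as far as it goes: concentrating probability
# into fewer years does lower the entropy.
assert entropy(concentrated) < entropy(spread) < entropy(flat)

# But raising entropy does not force the median later: spreading mass
# symmetrically around the same center raises entropy while leaving the
# median exactly where it was.
assert median_year(concentrated) == median_year(spread) == median_year(flat) == 2040
```

So "move toward maximum entropy" and "move your median later" are independent operations; the first does not imply the second.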

Visible Thoughts Project and Bounty Announcement

This looks like it could be an existing collection of works annotated in a similar way:

Christiano, Cotra, and Yudkowsky on AI progress

That seems a bit uncharitable to me. I doubt he rejects those heuristics wholesale. I'd guess that he thinks that e.g. recursive self improvement is one of those things where these heuristics don't apply, and that this is foreseeable because of e.g. the nature of recursion. I'd love to hear more about what sort of knowledge about "operating these heuristics" you think he's missing!

Anyway, it seems like he expects things to seem more-or-less gradual up until FOOM, so I think my original point still applies: I think his model would not be "shaken out" of his fast-takeoff view due to successful future predictions (until it's too late).

Christiano, Cotra, and Yudkowsky on AI progress

It seems like Eliezer is mostly just more uncertain about the near future than you are, so it doesn't seem like you'll be able to find (ii) by looking at predictions for the near future.

Matthew Barnett's Shortform

I lean toward the foom side, and I think I agree with the first statement. The intuition for me is that it's kind of like p-hacking (there are very many possible graphs, and some percentage of those will look gradual), or like using a log-log plot (which makes everything look like a nice straight line, but which actually yields very broad predictions once uncertainty is properly accounted for). I'm not sure yet whether I agree with the addendum, and I'm not sure how much of a crux this is for me.

Yudkowsky and Christiano discuss "Takeoff Speeds"

Spending money on R&D is essentially the expenditure of resources in order to explore and optimize over a promising design space, right? That seems like a good description of what natural selection did in the case of hominids. I imagine this still sounds silly to you, but I'm not sure why. My guess is that you think natural selection isn't relevantly similar because it didn't deliberately plan to allocate resources as part of a long bet that it would pay off big.

Ngo and Yudkowsky on AI capability gains

There's more than just differential topology going on, but it's the thing that unifies it all. You can think of differential topology as being about spaces you can divide into cells, together with the boundaries of those cells. Conservation laws are naturally expressed here as constraints that the net flow across a boundary must be zero. This makes conserved quantities into resources, whose use is convergently minimized. Minimal structures under such constraints are thus led to form the same network-like shapes, obeying the same sorts of laws. (See chapter 3 of Grady's Discrete Calculus for details of how this works in the electric circuit case.)
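As a toy illustration of the "net flow across the boundary must be zero" framing (a sketch of my own, not an example from Grady's book; the circuit and numbers are invented): in the discrete setting, Kirchhoff's current law is exactly this constraint, written with a node-edge incidence matrix.

```python
import numpy as np

# Toy discrete conservation law: "cells" are circuit nodes, edges carry
# current, and conservation of charge at every node is A @ current == 0,
# where A is the node-edge incidence matrix.

# A 3-node loop with directed edges 0->1, 1->2, 2->0.
A = np.array([
    [-1,  0,  1],   # node 0: edge 0 leaves, edge 2 arrives
    [ 1, -1,  0],   # node 1: edge 0 arrives, edge 1 leaves
    [ 0,  1, -1],   # node 2: edge 1 arrives, edge 2 leaves
])

# A current that simply circulates around the loop conserves charge
# at every node: zero net flow across each node's boundary.
circulating = np.array([2.0, 2.0, 2.0])
assert np.allclose(A @ circulating, 0)

# A flow that dumps charge at node 1 violates the conservation constraint.
violating = np.array([2.0, 1.0, 2.0])
assert not np.allclose(A @ violating, 0)
```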

Ngo and Yudkowsky on AI capability gains

I think "deep fundamental theory" is deeper than just "powerful abstraction that is useful in a lot of domains".

Part of what makes a Deep Fundamental Theory deeper is that it is inevitably relevant for anything existing in a certain way. For example, Ramón y Cajal (discoverer of the neuronal structure of brains) wrote:

Before the correction of the law of polarization, we have thought in vain about the usefulness of the referred facts. Thus, the early emergence of the axon, or the displacement of the soma, appeared to us as unfavorable arrangements acting against the conduction velocity, or the convenient separation of cellulipetal and cellulifugal impulses in each neuron. But as soon as we ruled out the requirement of the passage of the nerve impulse through the soma, everything became clear; because we realized that the referred displacements were morphologic adaptations ruled by the laws of economy of time, space and matter. These laws of economy must be considered as the teleological causes that preceded the variations in the position of the soma and the emergence of the axon. They are so general and evident that, if carefully considered, they impose themselves with great force on the intellect, and once becoming accepted, they are firm bases for the theory of axipetal polarization.

At first, I was surprised to see that the structure of physical space gave rise to fundamental principles in neuroscience too! But then I realized I shouldn't have been: neurons exist in physical spacetime. It's not a coincidence that neurons look like lightning: they're satisfying similar constraints in the same spatial universe. And once this is observed, it's easy to guess that what Ramón y Cajal might call "economy of metabolic energy" is also a fundamental principle of neuroscience, which of course is attested by modern neuroscientists. That's when I understood that spatial structure is a Deep Fundamental Theory.

And it doesn't stop there. The same thing explains the structure of our roadways, blood vessels, telecomm networks, and even why the first order differential equations for electric currents, masses on springs, and water in pipes are the same.

(The exact deep structure of physical space which explains all of these is differential topology, which I think is what Vaniver was gesturing towards with "geometry except for the parallel postulate".)
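The point about the shared first-order equation can be made concrete. To keep the example first-order I substitute viscous drag for the spring case (a mass on a spring proper gives a second-order equation); the parameter values below are arbitrary, chosen purely for illustration:

```python
import numpy as np

# Three systems obeying the same equation dx/dt = -x/tau, with only the
# names of x and tau changing between domains.

def decay(x0, tau, t):
    """Exact solution of dx/dt = -x/tau."""
    return x0 * np.exp(-t / tau)

t = np.linspace(0.0, 1.0, 50)

voltage = decay(x0=5.0, tau=0.2, t=t)      # x = capacitor voltage, tau = R*C
velocity = decay(x0=5.0, tau=0.2, t=t)     # x = velocity,          tau = m/b
water_head = decay(x0=5.0, tau=0.2, t=t)   # x = water height,      tau = A*R_hyd

# With matching parameters the trajectories are identical: the physics
# differs, the mathematics does not.
assert np.allclose(voltage, velocity)
assert np.allclose(velocity, water_head)
```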

Ngo and Yudkowsky on AI capability gains

"you can't make an engine more efficient than a Carnot engine."

That's not what it predicts. It predicts you can't make a heat engine more efficient than a Carnot engine.
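For concreteness, here's a minimal sketch of the bound (the temperatures are arbitrary examples). The key scope restriction is that it constrains heat engines only; a device that isn't converting heat flow between reservoirs into work, like a fuel cell or battery, is simply outside its domain.

```python
def carnot_limit(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum efficiency of any heat engine operating between a hot and a
    cold reservoir: 1 - T_cold/T_hot, temperatures in kelvin."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need t_hot_k > t_cold_k > 0")
    return 1 - t_cold_k / t_hot_k

# Example: a heat engine running between 800 K and 300 K can convert
# at most 62.5% of the heat it draws into work.
assert abs(carnot_limit(800.0, 300.0) - 0.625) < 1e-12
assert carnot_limit(600.0, 300.0) == 0.5
```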
