romeostevensit

Comments

Optimization Concepts in the Game of Life

Defining a distance function between two patterns might yield some interesting stuff and allow porting in some existing math from information theory. There is also the dynamic case (converging and diverging distances between different patterns over time). Seems like it could play into robustness, e.g. the sensitivity of patterns to flipping from a convergent to a divergent state.
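A minimal sketch of what that could look like, assuming patterns are represented as sets of live-cell coordinates; the Hamming-style metric and the perturbed-blinker example are my own illustrative choices, not anything from the post:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step on a set of (x, y) live-cell coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

def pattern_distance(a, b):
    """Hamming-style distance: number of cells on which the two patterns differ."""
    return len(a ^ b)

def distance_trajectory(a, b, steps):
    """Track whether two patterns converge or diverge under the dynamics."""
    distances = []
    for _ in range(steps):
        distances.append(pattern_distance(a, b))
        a, b = life_step(a), life_step(b)
    return distances

# Example: a blinker vs. the same blinker with one extra cell flipped on.
blinker = {(0, 1), (1, 1), (2, 1)}
perturbed = blinker | {(0, 0)}
print(distance_trajectory(blinker, perturbed, 8))
```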

Analogies and General Priors on Intelligence

I understand; I thought it was worth commenting on anyway.

Analogies and General Priors on Intelligence

the small size of the human genome suggests that brain design is simple

It bounds it, yes, but the bound can be quite high, because much of the compression can be offloaded to the environment (e.g. the genome specifies a learning procedure whose output depends heavily on environmental data, like a small decompressor paired with a large external corpus).

Draft report on AI timelines

Is a sensitivity analysis of the model separated out anywhere? I might just be missing it.

AI Safety Research Project Ideas

Detecting preferences in agents: how many assumptions need to be made?

I'm interpreting this as asking how to detect the dimensionality of the natural embedding of preferences?

Agency in Conway’s Game of Life

Related to the sensitivity of instrumental convergence, i.e. the question of whether we live in a universe of strong or weak instrumental convergence. In a strong-instrumental-convergence universe, most possible optimizers wind up in a relatively small space of configurations regardless of starting conditions, while in a weak one they may diverge arbitrarily in design space. This can be thought of as one way of crisping up concepts around orthogonality: e.g. in some universes orthogonality would be locally true but globally false, or vice versa, or true (or false) at both scales.
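As a toy illustration of the strong/weak distinction (my own sketch, not from the post): hill-climb from many random starting points on a single-basin landscape versus a many-basin one, and compare how spread out the endpoints are.

```python
# Toy illustration only, not a claim about real optimizers. The single-basin
# landscape stands in for "strong convergence" (endpoints cluster regardless
# of start); the many-basin one stands in for "weak convergence".
import random

def hill_climb(f, x, steps=2000, step_size=0.01):
    """Greedy local search: move left or right whenever it increases f."""
    for _ in range(steps):
        for candidate in (x - step_size, x + step_size):
            if f(candidate) > f(x):
                x = candidate
    return x

def spread(xs):
    """Standard deviation, as a crude measure of endpoint diversity."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

single_basin = lambda x: -(x ** 2)               # one global attractor at 0
multi_basin = lambda x: -((x % 2.0) - 1.0) ** 2  # an attractor every 2 units

starts = [random.uniform(-10, 10) for _ in range(200)]
for name, f in [("strong (single basin)", single_basin),
                ("weak (many basins)", multi_basin)]:
    ends = [hill_climb(f, x) for x in starts]
    print(f"{name}: start spread {spread(starts):.2f}, end spread {spread(ends):.2f}")
```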

[AN #148]: Analyzing generalization across more axes than just accuracy or loss
  1. First-person vs. third-person: In a first-person perspective, the agent is central. In a third-person perspective, we take a “birds-eye” view of the world, of which the agent is just one part.
  2. Static vs. dynamic: In a dynamic perspective, the notion of time is explicitly present in the formalism. In a static perspective, we instead have beliefs directly about entire world-histories.

I think these are two instances of a general heuristic: treat what have traditionally been seen as philosophical positions (here, cognitive vs. behavioral views, and the A- and B-theories of time) instead as representations one can run various kinds of checks on, achieving more sample complexity reduction than any single representation would.

Coherence arguments imply a force for goal-directed behavior

This seems consistent with coherence being not a constraint but one dimension of optimization pressure among several/many? Environments that money-pump more reliably will exert stronger coherence pressure, but the creature might also just install a cheap hack for avoiding that particular pump (if it's narrow), which then loosens the coherence pressure (full coherence sounds expensive, so workarounds are good deals).
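To make the money-pump side concrete, here's a toy sketch under my own assumptions (cyclic preferences A > B > C > A, a fixed fee per accepted trade, and a "cheap hack" that just refuses trades back into a previously held state):

```python
# Toy money-pump: the agent accepts any trade to something it prefers, paying
# a small fee each time. The "hack" is refusing trades into a state it has
# already held, which stops the bleeding without making the underlying
# preferences coherent.
FEE = 1.0
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # left item preferred to right

def run_pump(cycles, use_hack=False):
    """Offer trades around the preference cycle; return total money lost."""
    holding, lost, seen = "A", 0.0, {"A"}
    offers = ["C", "B", "A"] * cycles  # each offer is preferred to the current holding
    for offer in offers:
        if (offer, holding) not in PREFERS:
            continue  # agent only accepts trades to something it prefers
        if use_hack and offer in seen:
            continue  # cheap hack: never trade back into a state already held
        holding = offer
        lost += FEE
        seen.add(offer)
    return lost

print("no hack:  ", run_pump(10))        # pays the fee on every trade, indefinitely
print("with hack:", run_pump(10, True))  # pays only a couple of fees, then stops
```

The hack halts the loss without making the preferences coherent, which is the sense in which a narrow workaround can be a better deal than full coherence.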
