romeostevensit

Comments

Draft report on AI timelines

Is a sensitivity analysis of the model separated out anywhere? I might just be missing it.

AI Safety Research Project Ideas

Detecting preferences in agents: how many assumptions need to be made?

I'm interpreting this as asking how to detect the dimensionality of the natural embedding of preferences?
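If that reading is right, here's a minimal sketch of one version of the check (the data is synthetic and the 10-d feature encoding is a stand-in assumption, not anything from the post): estimate the intrinsic dimensionality of observed preference judgments by counting how many principal components are needed to explain most of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical setup: 200 observed choices, each summarized as a 10-d feature
# vector, but secretly generated from only 3 latent preference dimensions.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 10))
observations = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# PCA via SVD: count components needed to explain 95% of the variance.
centered = observations - observations.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
print("estimated dimensionality:", int(np.argmax(explained >= 0.95)) + 1)
```

This recovers 3 here, but only because the embedding is linear; the interesting version of the question is how many assumptions like linearity you have to buy before the detection works.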

Agency in Conway’s Game of Life

Related to the sensitivity of instrumental convergence, i.e. the question of whether we live in a universe of strong or weak instrumental convergence. In a strong instrumental convergence universe, most possible optimizers wind up in a relatively small space of configurations regardless of starting conditions, while in a weak one they may diverge arbitrarily in design space. This can be thought of as one way of crisping up concepts around orthogonality: e.g. in some universes orthogonality would be locally true but globally false, or vice versa, or true (or false) at both scales.

[AN #148]: Analyzing generalization across more axes than just accuracy or loss
  1. First-person vs. third-person: In a first-person perspective, the agent is central. In a third-person perspective, we take a “bird's-eye” view of the world, of which the agent is just one part.
  2. Static vs. dynamic: In a dynamic perspective, the notion of time is explicitly present in the formalism. In a static perspective, we instead have beliefs directly about entire world-histories.

I think these are two instances of a general heuristic: treat what have traditionally been seen as philosophical positions (here, cognitive vs. behavioral views, and the A- and B-theories of time) as representations one can run various kinds of checks on, achieving more sample complexity reduction than any single representation allows.

Coherence arguments imply a force for goal-directed behavior

This seems consistent with coherence being not a constraint but one dimension of optimization pressure among several? Environments that money pump more reliably will exert stronger coherence pressure, but the creature might also just install a cheap hack for avoiding that particular pump (if it's narrow), which then loosens the coherence pressure (coherence sounds expensive, so workarounds are good deals).
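A toy version of that dynamic (the items, fees, and the "cheap hack" rule are all hypothetical, just to make the point concrete): an agent with cyclic preferences gets pumped, while a narrow patch blocks this particular pump without making the agent globally coherent.

```python
# Toy money pump: an agent with cyclic preferences (A > B, B > C, C > A)
# pays a small fee for each "upgrade" and ends up back where it started.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x preferred to y
NEXT_OFFER = {"B": "A", "A": "C", "C": "B"}     # what to offer against each holding

def naive_accept(held, offered, history):
    """Accept any trade up to a preferred item, ignoring past trades."""
    return (offered, held) in PREFERS

def patched_accept(held, offered, history):
    """Cheap hack: refuse any trade that revisits a previously held item.
    Blocks this particular pump without making preferences coherent."""
    return (offered, held) in PREFERS and offered not in history

def run_pump(accept, start="B", fee=1, rounds=6):
    held, spent, history = start, 0, [start]
    for _ in range(rounds):
        offered = NEXT_OFFER[held]
        if not accept(held, offered, history):
            break
        held, spent = offered, spent + fee
        history.append(held)
    return held, spent

print(run_pump(naive_accept))    # ('B', 6): six fees paid, back where it began
print(run_pump(patched_accept))  # ('C', 2): the pump stalls after two trades
```

The patched agent is still incoherent (the preference cycle is untouched), but this environment can no longer extract money from it, so the local pressure toward coherence drops.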

Behavioral Sufficient Statistics for Goal-Directedness

I noticed myself being dismissive of this approach despite its potential relevance to the way I've been thinking about things. Investigating that, I find that I've mostly been writing off anything that pattern matches to the 'cognitive architectures' family of approaches. The reason is that most such approaches want to reify modules and structure, and my current guess is that the brain doesn't have a canonical structure (at least not at the level of abstraction that cognitive architectures focus on). That is to say, the modules are fluid and their connections to each other are contingent.

Utility Maximization = Description Length Minimization

Hypothesis: in a predictive coding model, the bottom up processing is doing lossless compression and the top down processing is doing lossy compression. I feel excited about viewing more cognitive architecture problems through a lens of separating these steps.
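A minimal sketch of what I mean (the previous-value predictor here is my own stand-in for any top-down generative model): the top-down channel sends a lossy prediction, the bottom-up channel sends the exact residual, and prediction plus residual reconstructs the input losslessly.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=32))  # a smooth-ish sensory stream

# Top-down: a lossy generative prediction (here, just "next = previous value").
prediction = np.concatenate([[0.0], signal[:-1]])

# Bottom-up: transmit the residual exactly; prediction + residual is lossless.
residual = signal - prediction
reconstruction = prediction + residual
assert np.allclose(reconstruction, signal)  # lossless round trip

# Dropping the residual leaves only the lossy top-down model of the input.
lossy_error = np.abs(signal - prediction).mean()
print(f"mean error with residual: 0.0, without: {lossy_error:.3f}")
```

The separation is what makes the trade-off legible: you can throw away (or coarsely quantize) the residual wherever bandwidth matters more than fidelity.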

What are the best precedents for industries failing to invest in valuable AI research?

There's a fairly straightforward optimization process in product development that I don't often see discussed in the abstract. It goes something like this:

It seems like bigger firms should be able to produce higher quality goods: they can afford longer product development cycles, can hire a broader variety of specialized labor, etc. In practice, it's smaller firms that compete on quality. Why is this?

One of the reasons is that the pressure to cut corners increases enormously at scale, along more than one dimension. As a product scales, eking out smaller efficiency gains is still worth enough money that a particular gain can have an entire employee, or team, devoted to it. The incentive is to cut costs in all the ways that are illegible to the consumer. But the average consumer is also changing as a product scales up in popularity. Early adopters and people with more specialized needs are more sensitive to quality. As the product scales to less sensitive buyers, the firm can cut corners that would have cost sales earlier in the product cycle but that now produce no visible effect, since revenues and profits keep going up. So this process continues up the curve as the product serves an ever larger and less sensitive market. Fewer things move the needle, and now the firm is milking its cash cow, which brings in a different sort of optimizer (bean counters) that continues the process.

Now, some firms, rather than allow their lunch to get eaten, do engage in market segmentation to capture more value. The most obvious case is a sub-brand that is a luxury line, as with basically all car makers. The luxury line will capture some of the advantages of scale from the more commoditized product lines, but do things like manufacture key components in, say, Germany instead of China. But with the same management running the whole show, it's hard for a large firm to insulate the segmented market from exactly the forces already described.

All of this is to answer the abstract question of why large firms don't generate the sort of culture that can do innovation, even when they seemingly throw a lot of money and time at it. The incentives flow down from the top, and the 'top' of a firm is answerable to the wrong set of metrics/incentives. This is 100% true of most of academia as well as private R&D.

So to answer the original question, I see micro examples of failing to invest in the right things everywhere. Large firms could be hotbeds of experimentation in large-scale project coordination, but in practice individuals within an org are forced to conform to internal APIs to maintain legibility to management, which helps explain why something like Slack didn't emerge as an internal tool at any big company.