[ Question ]

Problems Involving Abstraction?

by johnswentworth · 1 min read · 20th Oct 2020 · 8 comments


Tags: Abstraction, Open Problems, Rationality

I'm working on a post of examples for how to formulate problems involving abstraction (using the abstraction formulation here). This isn't going to solve problems, just show how to set them up mathematically.

To that end, I'd like to hear particular problems people are interested in which intuitively seem to involve abstraction. Examples of the sort of thing I have in mind: 

  • Humans generally seem to care about abstract objects, not individual atoms, so it seems like abstraction should be relevant to impact measures. How would we formalize that?
  • Humans can figure out what a new word means with ridiculously few examples, suggesting that we already have some "latent space" with a simple representation of the-class-of-things-corresponding-to-the-new-word. That sounds like it has something to do with abstraction. What's going on there?
  • The sort of "maps" we use in the real world (street maps, for instance) are lossy, abstract representations of the territory (i.e. streets). How can we usefully formulate map-territory correspondence for such abstract maps? Is it possible for a system to use its abstract map to recognize flaws in its own abstract-map-making process?
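As a toy illustration of the lossy-but-useful maps in the last bullet (my own sketch, not from the abstraction formulation linked above): a street map throws away nearly all the low-level detail of the territory, yet still answers a class of queries (here, reachability) exactly as the territory would.

```python
# Hypothetical example: the "territory" is a street network with rich
# per-street detail; the "map" keeps only connectivity, discarding the rest.
territory = {
    # street -> (connected streets, low-level detail the map will discard)
    "Main St": ({"Oak Ave", "Elm St"}, {"potholes": 3, "width_m": 12.5}),
    "Oak Ave": ({"Main St"},           {"potholes": 0, "width_m": 8.0}),
    "Elm St":  ({"Main St", "Pine Rd"}, {"potholes": 7, "width_m": 6.2}),
    "Pine Rd": ({"Elm St"},            {"potholes": 1, "width_m": 9.1}),
}

# The abstract map: a lossy function of the territory.
street_map = {s: neighbors for s, (neighbors, _) in territory.items()}

def reachable(graph, start, goal):
    """Breadth-first reachability query, answerable on map or territory."""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

# The map answers the query exactly as the full territory would,
# despite having discarded all the per-street detail.
territory_graph = {s: nbrs for s, (nbrs, _) in territory.items()}
assert reachable(street_map, "Oak Ave", "Pine Rd") == \
       reachable(territory_graph, "Oak Ave", "Pine Rd")
```

The interesting (and open) part of the bullet is the converse direction: using such a map to notice when the map-making process itself is discarding the wrong information.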

There is a high chance that your request (or at least something very similar to it) will be incorporated into the post. So, what examples would people like to see?


3 Answers

Entropy and temperature inherently require the abstraction of macrostates from microstates. I recommend reading http://www.av8n.com/physics/thermo/entropy.html if you haven't seen it before (or just want an unconfused explanation).
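A minimal version of the macrostate/microstate abstraction can be computed directly (this is my own toy example, not taken from the linked article): coarse-grain all configurations of four coins by their total number of heads, and take the Boltzmann-style entropy S = ln W of each macrostate, with W the number of microstates mapped onto it.

```python
import math
from collections import Counter
from itertools import product

# Microstates: all 2^4 = 16 four-coin configurations (0 = tails, 1 = heads).
microstates = list(product([0, 1], repeat=4))

# Macrostate: total number of heads. Counting microstates per macrostate
# is exactly the coarse-graining step — it throws away which coins are heads.
macro = Counter(sum(m) for m in microstates)

# Boltzmann-style entropy S = k ln W, taking k = 1.
entropy = {heads: math.log(count) for heads, count in macro.items()}

print(macro[2])               # 6 microstates have exactly 2 heads
print(round(entropy[2], 3))   # ln 6 ≈ 1.792
```

The macrostate label (total heads) is a lossy abstraction of the microstate, and the entropy measures exactly how much information that abstraction throws away.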

johnswentworth (1mo): At some point I need to write a post on purely Bayesian statistical mechanics, in a general enough form that it's not tied to the specifics of physics. I can probably write a not-too-long explanation of how abstraction works in this context. I'll see what I can do.

One we already talked about together is the problem of defining the locality of goals. From an abstraction point of view, local goals (goals about inputs) and non-local goals (goals about properties of the world) are both abstractions: they throw away information. But with completely different results!
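To make the contrast concrete, here is a toy sketch (my own construction, with hypothetical names) of the point above: both goal types are lossy functions of the full world state, but they keep different information, so they can disagree.

```python
# Full world state; both goals below discard most of it.
full_state = {
    "sensor_reading": 42,     # the agent's inputs
    "actual_quantity": 17,    # the underlying property of the world
    "other_detail": "...",    # discarded by both goals
}

def local_goal(state):
    """Local goal: cares only about the agent's inputs."""
    return state["sensor_reading"] >= 40

def nonlocal_goal(state):
    """Non-local goal: cares only about a property of the world."""
    return state["actual_quantity"] >= 40

print(local_goal(full_state), nonlocal_goal(full_state))  # True False
```

Both functions abstract away almost everything, yet they evaluate the same state completely differently — which is exactly the wireheading-shaped gap between "the sensor says X" and "X is true".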

johnswentworth (1mo): This plays well with impact measures, too. I can definitely include it.

When do we learn abstractions bottom-up (like identifying regularities in sense data) versus top-down (like using a controlled approximation to a theory that you can prove will converge to the right answer)? What are the similarities between what you get out at the end?

johnswentworth (1mo): Abstraction learning in general is an area where I'm not yet fully satisfied with my own understanding, but I'll see if I can set up anything interesting around this.
2 comments

Not quite sure how specifically this connects, but I think you would appreciate seeing it.

As a good example of the kind of gains we can get from abstraction, see this exposition of the HashLife algorithm, used to (perfectly) simulate Conway's Game of Life at insane scales.

Earlier I mentioned I would run some nontrivial patterns for trillions of generations. Even just counting to a trillion takes a fair amount of time for a modern CPU; yet HashLife can run the breeder to one trillion generations, and print its resulting population of 1,302,083,334,180,208,337,404 in less than a second.
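The core trick can be shown in miniature (this is just the memoization idea, not the full quadtree/superspeed machinery of HashLife): identical regions of the board are stepped once, and the cached result is reused every time that pattern recurs.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def step(cells):
    """One Game of Life step on a frozenset of live (x, y) cells.
    lru_cache means any repeated pattern is computed only once."""
    live = set(cells)
    def neighbors(x, y):
        return {(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - {(x, y)}
    # Only cells adjacent to a live cell (or live themselves) can be live next.
    candidates = set().union(*(neighbors(*c) for c in live)) | live
    nxt = set()
    for c in candidates:
        n = len(neighbors(*c) & live)
        if n == 3 or (n == 2 and c in live):
            nxt.add(c)
    return frozenset(nxt)

blinker = frozenset({(0, 1), (1, 1), (2, 1)})
assert step(step(blinker)) == blinker  # a blinker has period 2
```

HashLife does this hierarchically on a quadtree of canonicalized blocks, which is what lets the same cached sub-result cover astronomically many cells and generations at once — a pure win from abstracting "this exact pattern" into a single reusable node.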

Ooh, good one. If I remember the trick to the algorithm correctly, it can indeed be cast as abstraction.