Jonathan Bostock

I love the depth you're going into with this sequence, and I'm very keen to read more. I wonder if the word "knowledge" is not ideal. It seems like the examples you've given, while all clearly "knowledge", could correspond to quite different things. Possibly the human-understandable concept of "knowledge" is tied up with lots of agent-y, optimizer-y things which make it more difficult to describe in a human-comfortable way at the level of physics (or maybe it's totally possible and you're going to prove me dead wrong in the next few posts!)

My other thought is that knowledge is stable to small perturbations (equivalently: small amounts of uncertainty) in the initial knowledge-accumulating region: a rock on the moon moved a couple of atoms to the left would not end up with the same mutual information with the history of humanity, but a ship moved a couple of atoms to the left would produce the same map of the coastline.
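
To gesture at what I mean by that stability test, here's a toy sketch (entirely my own construction; the `ship`/`rock` update rules and all the names are made up stand-ins for real physics). Both systems' final states start out correlated with a feature of the territory, but only the ship's correlation survives randomizing its initial internal state:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# The "territory": random binary coastlines; the feature worth mapping
# is (say) their parity, which is exactly uniform over {0, 1}.
territories = rng.integers(0, 2, size=(5000, 8))
coast_feature = territories.sum(axis=1) % 2

# Perturbation of each system's initial internal state.
perturbation = rng.integers(0, 2, size=5000)

# Ship: records the territory feature regardless of its initial state.
ship_final = coast_feature.copy()

# Rock: its final state depends on the territory *and* its exact initial
# configuration, so perturbing the latter scrambles the correlation.
rock_final = (coast_feature + perturbation) % 2

print("ship I(final; territory):", round(mutual_information(ship_final, coast_feature), 3))  # ~1 bit
print("rock I(final; territory):", round(mutual_information(rock_final, coast_feature), 3))  # ~0 bits
```

Obviously the real test would run over physical dynamics rather than hand-picked update rules, but this pins down the sense of "stability" I have in mind: the mutual information itself has to be robust to noising the accumulator's initial configuration.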

This brings to mind the idea of abstractions as things which are not "wiped out" by noise or uncertainty between a system and an observer. Lots of the examples of knowledge I can think of seem to be representations of abstractions, but there are some counterexamples (setting quantum effects aside, it's possible to have knowledge about the position of a single atom at a particular time, which is about as non-abstract as information gets).
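
A quick illustration of the "not wiped out by noise" intuition (again a toy of my own, not anything from the abstraction posts): if an observer only receives a noisy aggregate of a system's microscopic variables, the macroscopic summary stays recoverable while any individual detail does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_micro = 20000, 100

# Each system: one macroscopic variable m plus many microscopic details.
m = rng.normal(size=n_trials)
details = rng.normal(size=(n_trials, n_micro))
micro = m[:, None] + details  # micro_i = m + detail_i

# The observer only receives a noisy aggregate of the micro variables.
observed = micro.mean(axis=1) + rng.normal(size=n_trials)

# The abstraction (m) survives the channel; any single detail does not.
print("corr(observed, m):       ", round(np.corrcoef(observed, m)[0, 1], 3))        # ~0.7
print("corr(observed, detail_0):", round(np.corrcoef(observed, details[:, 0])[0, 1], 3))  # ~0
```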

Other systems which are stable to small perturbations of the starting configuration are optimizers. I have written about optimizers previously from an information-theoretic point of view (though before realizing I only have a slippery grasp on the concept of knowledge). Is a knowledge-accumulating algorithm simply a special class of optimization algorithm? Backpropagation definitely seems to be both, so there's probably significant overlap, but maybe there are some counterexamples I haven't thought of yet.
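
As a gesture at why backpropagation seems to be both, here's a minimal sketch (my own toy, with a made-up linear "world" and hypothetical names throughout): the same gradient-descent loop is simultaneously an optimizer, in that it drives the loss down, and a knowledge accumulator, in that the weights end up mirroring the hidden parameters that generated the data.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Territory": data generated by hidden parameters the learner never sees.
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ true_w + 0.1 * rng.normal(size=200)

# Gradient descent on squared error (backprop on a linear model).
w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * (2 / len(y)) * X.T @ (X @ w - y)

# As an optimizer: the loss has been driven down.
print("training loss:", round(float(np.mean((X @ w - y) ** 2)), 4))

# As a knowledge accumulator: the weights now mirror the hidden
# parameters, so they predict fresh data from the same source.
X_new = rng.normal(size=(50, 3))
print("learned w:", w.round(2), "vs true w:", true_w)
print("held-out loss:", round(float(np.mean((X_new @ w - X_new @ true_w) ** 2)), 4))
```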