[Epistemic status: Describes a failed research approach I had a while ago, and my only purpose here is to warn people off from that way of thinking. Every now and then I see someone working on an AIS subproblem say "if only we had a distance function for things in domain X", and my intuition is that they are probably doing a wrong-way reduction. But I only mean this as a soft guideline, and I'm only somewhat confident in my current thinking on this.]

~~~

Terminology: We use the terms distance or distance function to denote any function d that intuitively tells us how “dissimilar” any two members of a set X are (regardless of whether d is a metric).

Counterfactual Worlds

Consider the counterfactual "If Lincoln were not assassinated, he would not have been impeached". If we would like to say this has a truth value, we need to imagine what such a counterfactual world would have looked like: did Lincoln (somehow) survive his wounds, did John Wilkes Booth (somehow) miss, was the plot (somehow) discovered the day before, etc.? Somehow, we must pick out the world that is in some sense "closest" to our actual world, yet it seems very difficult to compare any two such worlds in a principled way.

To formalize Functional Decision Theory (FDT), we likely need a better understanding of counterfactuals. Yet even in restricted mathematical contexts, we don't have a satisfactory understanding of why "If 0 = 1..." simply returns incoherence, while "If the Modularity Theorem were false..." seemingly conjures up a possible world that we feel we can reason about.

(Also, in terms of corrigibility, we are often interested in formalizing the notion of "low-impact" agents, and a naive idea one often has is to define a distance metric on counterfactual world-states, as on p. 5 of Concrete Problems in AI Safety.)

Algorithmic Similarity

In the FDT framework, we do not view ourselves as a solitary agent, but as a function (or algorithm) that can be copied, modified, and read, and we wish to maximize the utility achieved by our algorithm. Minor details of our implementation that don't affect our behavior (such as whether we are written in Java or Python) should not be decision-relevant, and if some algorithm does the same thing as us "most" of the time, then we would probably (e.g.) want to cooperate with it in a Prisoner's Dilemma. Defining what it means for two algorithms to be similar remains an outstanding open problem.

At MSFP 2018, a small group (4-5) of us tried tackling this for a couple of hours. We had a few ideas that "felt" promising, but gradually realized that none of them made any sense, until ultimately we gave up with the feeling that we hadn't made any intellectual advances. I only say this to give outside-view evidence of intractability; it's difficult for me to concisely communicate why it's hard (I could say "try it yourself for an hour and you'll see", but part of my point is that the hour is better spent elsewhere). For those who insist on inside-view evidence, here's an outline of one of the ideas we had and why it turned out to be unworkable:

We attempted to partition algorithm-space into equivalence classes that represent "conceptual similarity", which should be no harder than defining a distance function on the space. By the Curry-Howard correspondence, we can rephrase this as asking when two proofs are similar (this felt easier for us to think about, but that's entirely subjective). Suppose we have some proof A of size n, and we want to find proofs that "don't use any fundamentally different ideas". The obvious approach is to think about which proofs we can reach with minor edits. If we make some edit of size ϵ⋅n for some small ϵ and the result is still a valid proof, the result should be more or less the same proof. If we take the closure under minor validity-preserving edits, it would seem superficially plausible that this yields a class of similar proofs. However, suppose we discover a one-line proof B that's totally different from A: then we can append it to A as a minor edit, and then gradually delete A with minor edits, until we have a drastically different proof (among other complications).
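To make the failure mode concrete, here is a toy sketch. Everything in it is hypothetical: "proofs" are just strings, "validity" is a stand-in predicate (a string is valid iff it contains a complete known proof), and the edit budget ϵ⋅n is enforced per step.

```python
# Toy illustration of why the "closure under minor edits" idea degenerates.
# Hypothetical setup: "proofs" are strings, and a string counts as "valid"
# iff it contains at least one complete known proof as a substring.

EPS = 0.1  # a minor edit may change at most EPS * len(current proof) characters

def is_valid(s, known_proofs):
    return any(p in s for p in known_proofs)

def minor_edit_chain(a, b, eps=EPS):
    """Walk from proof `a` to a totally different proof `b`, one minor
    validity-preserving edit at a time. Returns the chain of intermediates."""
    chain = [a]
    cur = a
    appended = ""
    # Step 1: append the short proof `b` to `a` in minor-sized chunks.
    while appended != b:
        budget = max(1, int(eps * len(cur)))
        appended += b[len(appended):len(appended) + budget]
        cur = a + appended
        chain.append(cur)
    # Step 2: delete `a` from the front, again in minor-sized chunks.
    while cur != b:
        budget = max(1, int(eps * len(cur)))
        cur = cur[min(budget, len(cur) - len(b)):]
        chain.append(cur)
    return chain

long_proof = "L" * 200  # stand-in for a large proof A of size n
one_liner = "B" * 5     # stand-in for a totally different one-line proof B
chain = minor_edit_chain(long_proof, one_liner)

# Every intermediate stays "valid", yet we end up at an unrelated proof.
assert all(is_valid(s, [long_proof, one_liner]) for s in chain)
assert chain[0] == long_proof and chain[-1] == one_liner
```

Each step is a "minor" edit and preserves validity, yet the chain connects two proofs sharing no ideas at all, so the closure fails to capture similarity.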

Adversarial Examples

Given some data point x correctly classified by an ML model, a new point x′ := x + ϵ is an adversarial example if it is misclassified despite differing from x only by a tiny perturbation ϵ (e.g. relatively small RGB changes to a few pixels). For every state-of-the-art image classifier tested, if you give me:

  • Any image classified correctly by that model
  • Any target class you would like to have the model misclassify the image as

Then one can usually find some small perturbation of that image such that the model believes the perturbed image is in the target class with high probability.

In the classic example we can have GoogLeNet classify a panda as a gibbon with 99% confidence. Moreover, these adversarial examples have been found to generalize very well across different models, even ones with very different architectures. Last year, a paper came out taking this further, by obtaining adversarial examples with the best cross-generalization, and giving these to humans who had only a few seconds to classify the image. Interestingly, the humans were "fooled" in the sense that their snap judgments--those formed by their pure visual system--differed from how they classified the images when given more time for reflection. In terms of robustness to these examples, it seems, our perceptual system by itself is not qualitatively better than today's classifiers, but our lens can see its own flaws.

The paper was popularized in various places under a bolder headline, namely that there now existed full-blown adversarial examples for humans (reflection or not). This was showcased with a picture from a different part of the paper showing an image of a (somewhat dog-like) cat being given a tiny amount of noise, and subsequently looking like a dog to a human with any amount of visual processing and top-down feedback. This sparked controversy, with many pointing out that a small change (in RGB values) to some visual concept does not necessarily correspond to a small change in concept-space. The paper itself punted on this:

it is philosophically difficult to define the real object class for an image that is not a picture of a real object. In this work, we assume that an adversarial image is misclassified if the output label differs from the human-provided label of the clean image that was used as the starting point for the adversarial image. We make small adversarial perturbations and we assume that these small perturbations are insufficient to change the true class.

And in response to comments, co-author Ian Goodfellow acknowledged on Twitter:

While everyone else was scrambling to finish running experiments for ICML, my co-authors and I were having intense debates about philosophy and semantics and how to write the paper. Some of our open office colleagues were entertained by how surreal this sounded.

Making models robust against adversarial examples remains an outstanding and difficult topic with a considerable paper trail. The problem of merely verifying that a given model has no local adversarial examples (e.g. within a few RGB values of a given data point) has been the subject of some interesting formal verification work in the past couple years. But to even do this verification work, one needs a formal specification of what an adversarial example is, which in turn requires a formal specification of what a "small change" between (e.g.) images is, that somehow captures something about conceptual distance. It seems to me that even this smaller problem will be hard to solve in a philosophically satisfying way because of the inherent subjectivity/fuzziness in defining "distance in concept-space" or anything that even comes close.

Distance Functions are Hard: The Evidence

What we are asking for, in all these instances, is some distance function precise enough to be mathematizable in some form, but robust enough to include many very fuzzy desiderata we have in mind. It seems natural to ask what distance functions of this form have been successfully developed before. The Encyclopedia of Distances comes out to over 700 pages, split roughly in half between those distances used in pure math (especially, as one would expect, topology, geometry, and functional analysis), and those used in applied math, computing disciplines, and the natural sciences.

Of the distance functions listed in the latter half, most were simply "the obvious thing one would do" given the preexisting mathematical structure around the topic in question (e.g. Levenshtein distance on strings). Others were less obvious, but usually because they used nontrivial mathematical machinery to answer specific mathematical questions, not to shed light on the fuzzy philosophical questions one might have about the topic.
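Levenshtein distance illustrates the "obvious thing" pattern well: given that strings already come with insert/delete/substitute operations, the distance is just the minimum number of such operations, computable by a short dynamic-programming recurrence:

```python
# Levenshtein distance: the minimum number of single-character insertions,
# deletions, and substitutions needed to turn one string into another.
# A standard two-row dynamic-programming implementation.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))  # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty string
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                # delete ca
                cur[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```

The definition falls straight out of the preexisting structure on strings; it encodes no philosophical insight about what makes two strings "conceptually" similar.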

In the social-science section, where most topics had no preexisting mathematical formalism in the first place, virtually none of the distances did much to remedy this fuzziness by themselves. Though I do not claim to have spent that much time flipping through this tome, never did I see a distance notion that struck me as a profound non-mathematical insight, or that even gestured at an "art of coming up with distance functions".

Conclusions

I conclude, with medium confidence, that each of the questions posed in the first three sections will be particularly hard to answer in a satisfying way, and that if they are answered, it probably won't be by thinking about distance functions directly.

As a general heuristic, I feel like if you've reduced a philosophical problem to "defining the appropriate distance function", then it's worth pausing to consider if you've made a wrong-way reduction. Chances are, the distance function you want is inherently value-laden, and so the problem of defining it inherits the difficulty of the value alignment problem itself.

I also think this heuristic is especially salient if you're trying to capture something like "conceptual similarity/distance": if you could do this, then you'd have an objective map/taxonomy of (a large fraction of) concept-space.

9 comments

Learning a distance function between pictures of human faces has been used successfully to train deep learning based face recognition systems.

My takeaway from your examples is not that "distance functions are hard" so much as "hardcoding is brittle". The general approach of "define a distance function and train a model based on it" has been pretty successful in machine learning.

At the same time, the importance of having a good distance/divergence, the lack of appropriate ones, and the difficulty of learning them are widely acknowledged challenges in machine learning.

A distance function is fairly similar to a representation in my mind, and high-quality representation learning is considered a bit of a holy grail open problem.

Machine learning relies on formulating *some* sort of objective, which can be viewed as analogous to the choice of a good distance function, so I think the central point of the post (as I understood it from a quick glance) is correct: "specifying a good distance measure is not that much easier than specifying a good objective".

It's also an open question how much learning, (relatively) generic priors, and big data can actually solve the issue of weak learning signals and weak priors for us. A lot of people are betting pretty hard on that; I think it's plausible, but not very likely. I think it's more like a recipe for unaligned AI, and we need to get more bits of information about what we actually want into AI systems somehow. Highly interactive training protocols seem super valuable for that, but the ML community has a strong preference against such work because it is a massive pain compared to the non-interactive UL/SL/RL settings that are popular.




Why are highly interactive training protocols a massive pain?

Do you have any thoughts on self-supervised learning? That's my current guess for how we'll get AGI, and it's a framework that makes the alignment problem seem relatively straightforward to me.

They're a pain because they involve a lot of human labor, slow down the experiment loop, make reproducing results harder, etc.

RE self-supervised learning: I don't see why we needed the rebranding (of unsupervised learning). I don't see why it would make alignment straightforward (ETA: except to the extent that you aren't necessarily, deliberately building something agenty). The boundary between SSL and other ML is fuzzy; I don't think we'll get to AGI using just SSL and nothing like RL. SSL doesn't solve the exploration problem; if you start caring about exploration, I think you end up doing things that look more like RL.

I also tend to agree (e.g. with that gwern article) that AGI designs that aren't agenty are going to be at a significant competitive disadvantage, so probably aren't a satisfying solution to alignment, but could be a stop-gap.

They're a pain because they involve a lot of human labor, slow down the experiment loop, make reproducing results harder, etc.

I see. How about doing active learning of computable functions? That solves all 3 problems.

Instead of standard benchmarks, you could offer an API which provides an oracle for some secret functions to be learned. You could run a competition every X months and give each competition entrant a budget of Y API calls over the course of the competition.
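A minimal sketch of what such a budgeted-oracle API might look like (class, parameter names, and the example function are all hypothetical):

```python
# Hypothetical sketch of the budgeted-oracle competition setup suggested
# above: entrants learn a secret computable function only through an API
# that enforces a per-entrant call budget.

class BudgetedOracle:
    def __init__(self, secret_fn, budget):
        self._fn = secret_fn
        self._remaining = budget

    def query(self, x):
        if self._remaining <= 0:
            raise RuntimeError("API call budget exhausted")
        self._remaining -= 1
        return self._fn(x)

# The secret function can be anything computable; no human labor is needed,
# queries are fast, and results are reproducible by publishing the
# function's code after the competition ends.
oracle = BudgetedOracle(secret_fn=lambda x: x * x + 1, budget=2)
assert oracle.query(2) == 5
assert oracle.query(3) == 10
```

The budget forces entrants to query actively (choosing informative inputs) rather than relying on a fixed labeled dataset.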

RE self-supervised learning: I don't see why we needed the rebranding (of unsupervised learning).

Well I don't see why neural networks needed to be rebranded as "deep learning" either :-)

When I talk about "self-supervised learning", I refer to chopping up your training set into automatically created supervised learning problems (predictive processing), which feels different from clustering/dimensionality reduction. It seems like a promising approach regardless of what you call it.

I don't see why it would make alignment straightforward (ETA: except to the extent that you aren't necessarily, deliberately building something agenty).

In order to make accurate predictions about reality, you need to understand humans, because humans exist in reality. So at the very least, a superintelligent self-supervised learning system trained on loads of human data would have a lot of conceptual building blocks (developed in order to make predictions about its training data) which could be tweaked and combined to make predictions about human values (analogous to fine-tuning in the context of transfer learning). But I suspect fine-tuning might not even be necessary. Just ask it what Gandhi would do or something like that.

Re: gwern's article, RL does not seem to me like a good fit for most of the problems he describes. I agree active learning/interactive training protocols are powerful, but that's not the same as RL.

Autonomy is also nice (and also not the same as RL). I think the solution for autonomy is (1) solve calibration/distributional shift, so the system knows when it's safe to act autonomously (2) have the system adjust its own level of autonomy/need for clarification dynamically depending on the apparent urgency of its circumstances. I have notes for a post about (2), let me know if you think I should prioritize writing it.

I see. How about doing active learning of computable functions? That solves all 3 problems

^ I don't see how?

I should elaborate... it sounds like you're thinking of active learning (where the AI can choose to make queries for information, e.g. labels), but I'm talking about *inter*active training, where a human supervisor is *also* actively monitoring the AI system, making queries of it, and intelligently selecting feedback for the AI. This might be simulated as well, using multiple AIs, and there might be a lot of room for good work there... but I think if we want to solve alignment, we want a deep and satisfying understanding of AI systems, which seems hard to come by without rich feedback loops between humans and AIs. Basically, by interactive training, I have in mind something where training AIs looks more like teaching other humans.

So at the very least, a superintelligent self-supervised learning system trained on loads of human data would have a lot of conceptual building blocks (developed in order to make predictions about its training data) which could be tweaked and combined to make predictions about human values (analogous to fine-tuning in the context of transfer learning).

I think it's a very open question how well we can expect advanced AI systems to understand or mirror human concepts by default. Adversarial examples suggest we should be worried that apparently similar concepts will actually be wildly different in non-obvious ways. I'm cautiously optimistic, since this could make things a lot easier. It's also unclear ATM how precisely AI concepts need to track human concepts in order for things to work out OK. The "basin of attraction" line of thought suggests that they don't need to be that great, because they can self-correct or learn to defer to humans appropriately. My problem with that argument is that it seems like we will have so many chances to fuck up that we would need 1) AI systems to be extremely reliable, or 2) for catastrophic mistakes to be rare, and minor mistakes to be transient or detectable. (2) seems plausible to me in many applications, but probably not all of the applications where people will want to use SOTA AI.

Re: gwern's article, RL does not seem to me like a good fit for most of the problems he describes. I agree active learning/interactive training protocols are powerful, but that's not the same as RL.

Yes ofc they are different.

I think the significant features of RL here are: 1) having the goal of understanding the world and how to influence it, and 2) doing (possibly implicit) planning. RL can also be pointed at narrow domains, but for a lot of problems, I think having general knowledge will be very valuable, and hard to replicate with a network of narrow systems.

I think the solution for autonomy is (1) solve calibration/distributional shift, so the system knows when it's safe to act autonomously (2) have the system adjust its own level of autonomy/need for clarification dynamically depending on the apparent urgency of its circumstances.

That seems great, but also likely to be very difficult, especially if we demand high reliability and performance.

^ I don't see how?

No human labor: Just compute the function. Fast experiment loop: Computers are faster than humans. Reproducible: Share the code for your function with others.

I'm talking about interactive training

I think for a sufficiently advanced AI system, assuming it's well put together, active learning can beat this sort of interactive training--the AI will be better at the task of identifying & fixing potential weaknesses in its models than humans.

Adversarial examples suggest we should be worried that apparently similar concepts will actually be wildly different in non-obvious ways.

I think the problem with adversarial examples is that deep neural nets don't have the right inductive biases. I expect meta-learning approaches which identify & acquire new inductive biases (in order to determine "how to think" about a particular domain) will solve this problem and will also be necessary for AGI anyway.

BTW, different human brains appear to learn different representations (previous discussion), and yet we are capable of delegating tasks to each other.

I'm cautiously optimistic, since this could make things a lot easier.

Huh?

My problem with that argument is that it seems like we will have so many chances to fuck up that we would need 1) AI systems to be extremely reliable, or 2) for catastrophic mistakes to be rare, and minor mistakes to be transient or detectable. (2) seems plausible to me in many applications, but probably not all of the applications where people will want to use SOTA AI.

Maybe. But my intuition is that if you can create a superintelligent system, you can make one which is "superhumanly reliable" even in domains which are novel to it. I think the core problems for reliable AI are very similar to the core problems for AI in general. An example is the fact that solving adversarial examples and improving classification accuracy seem intimately related.

I think the significant features of RL here are: 1) having the goal of understanding the world and how to influence it, and 2) doing (possibly implicit) planning.

In what sense does RL try to understand the world? It seems very much not focused on that. You essentially have to hand it a reasonably accurate simulation of the world (i.e. a world that is already fully understood, in the sense that we have a great model for it) for it to do anything interesting.

If the planning is only "implicit", RL sounds like overkill and probably not a great fit. RL seems relatively good at long sequences of actions for a stateful system we have a great model of. If most of the value can be obtained by planning 1 step in advance, RL seems like a solution to a problem you don't have. It is likely to make your system less safe, since planning many steps in advance could let it plot some kind of treacherous turn. But I also don't think you will gain much through using it. So luckily, I don't think there is a big capabilities vs safety tradeoff here.

I think having general knowledge will be very valuable, and hard to replicate with a network of narrow systems.

Agreed. But general knowledge is also not RL, and is handled much more naturally in other frameworks such as transfer learning, IMO.

So basically I think daemons/inner optimizers/whatever you want to call them are going to be the main safety problem.

Yes, perhaps I should've been more clear. Learning certain distance functions is a practical solution to some things, so maybe the phrase "distance functions are hard" is too simplistic. What I meant to say is more like

Fully-specified distance functions are hard, over and above the difficulty of formally specifying most things, and it's often hard to notice this difficulty

This is mostly applicable to Agent Foundations-like research, where we are trying to give a formal model of (some aspect of) how agents work. Sometimes, we can reduce our problem to defining the appropriate distance function, and it can feel like we've made some progress, but we haven't actually gotten anywhere (the first two examples in the post are like this).

The 3rd example, where we are trying to formally verify an ML model against adversarial examples, is a bit different now that I think of it. Here we apparently need a transparent, formally-specified distance function if we have any hope of absolutely proving the absence of adversarial examples. And in formal verification, the specification problem often is just philosophically hard like this. So I suppose this example is less insightful, except insofar as it lends extra intuitions for the other class of examples.

Here we apparently need a transparent, formally-specified distance function if we have any hope of absolutely proving the absence of adversarial examples.

Well, a classifier that is 100% accurate would also do the job ;) (I'm not sure a 100% accurate classifier is feasible per se, but a classifier which can be made arbitrarily accurate given enough data/compute/life-long learning experience seems potentially feasible.)

Also, small perturbations aren't necessarily the only way to construct adversarial examples. Suppose I want to attack a model M1, which I have access to, and I also have a more accurate model M2. Then I could execute an automated search for cases where M1 and M2 disagree. (Maybe I use gradient descent on the input space, maximizing an objective function corresponding to the level of disagreement between M1 and M2.) Then I hire people on Mechanical Turk to look through the disagreements and flag the ones where M1 is wrong. (Since M2 is more accurate, M1 will "usually" be wrong.)
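A toy version of this disagreement search, with two made-up linear models standing in for M1 and M2, and random search standing in for gradient descent:

```python
import numpy as np

# Toy sketch of the disagreement-search attack described above. M1 and M2
# are hypothetical linear classifiers that agree on most inputs but not
# all; we randomly search input space for points where they disagree.

rng = np.random.default_rng(0)

w1 = np.array([1.0, 0.1])   # stand-in for the model under attack, M1
w2 = np.array([1.0, -0.1])  # stand-in for the more accurate model, M2

def m1(x):
    return int(w1 @ x > 0)

def m2(x):
    return int(w2 @ x > 0)

disagreements = [
    x for x in rng.uniform(-1, 1, size=(10_000, 2))
    if m1(x) != m2(x)
]

# The two models disagree exactly where |x0| < 0.1 * |x1| (a thin wedge
# covering ~5% of the square), so random search finds many candidates to
# send to human labelers.
assert len(disagreements) > 0
```

In practice one would replace the random search with gradient descent on a disagreement objective, but the overall pipeline (search for disagreements, then have humans adjudicate) is the same.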

This is actually one way to look at what's going on with traditional small-perturbation adversarial examples. M1 is a deep learning model and M2 is a 1-nearest-neighbor model--not very good in general, but quite accurate in the immediate region of data points with known labels. The problem is that deep learning models don't have a very strong inductive bias towards mapping nearby inputs to nearby outputs (sometimes called "Lipschitzness"). L2 regularization actually makes deep learning models more Lipschitz because smaller coefficients = smaller singular values for the weight matrices = less capacity to stretch nearby inputs away from each other in output space. I think maybe that's part of why L2 regularization works.
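The stretching claim can be checked directly: the worst-case stretch factor of a linear map x ↦ Wx is the largest singular value of W (its spectral norm), and shrinking the weights shrinks it proportionally. A quick numpy check:

```python
import numpy as np

# The worst-case stretch factor ||W @ x|| / ||x|| of a linear layer equals
# the largest singular value (spectral norm) of W, so smaller weights mean
# less capacity to stretch nearby inputs apart.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # a made-up weight matrix

def spectral_norm(M):
    return np.linalg.svd(M, compute_uv=False)[0]  # largest singular value

# Scaling the weights by c scales the worst-case stretch by c...
assert np.isclose(spectral_norm(0.5 * W), 0.5 * spectral_norm(W))

# ...and no input is stretched by more than the spectral norm.
x = rng.normal(size=4)
assert np.linalg.norm(W @ x) <= spectral_norm(W) * np.linalg.norm(x) + 1e-9
```

(For a full network the Lipschitz constant is bounded by the product of per-layer spectral norms, which is why shrinking every layer's weights tightens the bound.)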

Hoping to expand the previous two paragraphs into a paper with Matthew Barnett before too long--if anyone wants to help us get it published, please send me a PM (neither of us has ever published a paper before).