C.S. Lewis wrote a short book attacking moral subjectivism in education; it's available online here as a pdf, here as raw text, and here as a series of videos [1 2 3], and I think it's probably worth reading in full (at 50 pages, or ~100 minutes of video at 1x speed). This post is mostly me rambling about what I saw as the central point, especially as it connects to individual development and community health, by quoting sections and then reacting to them.

The book begins with a reaction to a grammar textbook (written in 1939), euphemistically called The Green Book, whose lessons are also philosophical; Lewis doesn't object to the bait-and-switch (outside of one paragraph) so much as to the content and quality of the philosophy. (One suspects Lewis wouldn't object to the Copybook Headings referenced by Kipling, even though that mixes writing lessons and philosophy.)

Until quite modern times all teachers and even all men believed the universe to be such that certain emotional reactions on our part could be either congruous or incongruous to it--believed, in fact, that objects did not merely receive, but could merit, our approval or disapproval, our reverence or our contempt.

First, let's get the obvious objections out of the way: the claim of universality is probably false. Even supposing it were true, the underlying change would seem worth investigating. Naive belief that one's map is objective reality disintegrates on contact with different maps and after noticing surprising divergences between one's predictions and observations; one can imagine this happening in the moral realm as well as the physical one. But presumably we should just ignore this as standard "the contemporary world is fallen and bad" framing instead of an actual historical claim.

The more interesting claim here is whether there can or should be a question of merit, distinct from a question of flavor or fact. A taxonomy I've liked a lot (that I was mostly introduced to by Sapiens) is the split between objective (determined by reality), subjective (determined by the person in question), and intersubjective (determined by some group process); the rules of a game are not just 'my personal whims,' but they are also not 'scientific' in the sense that any outside observer could derive them independently. Without access to human civilization, aliens would figure out the same physics, and they might play something like chess, but they likely won't play chess. Nevertheless, concepts like chess are an important component of your epistemology, and there is such a thing as a 'legal move' or 'illegal move.'
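As a toy illustration of 'intersubjective but still checkable' (the rook rule below is a real chess rule, but the encoding is entirely my own sketch, not anything from the post):

```python
# Toy sketch: a rule of chess encoded as data. Whether a move is 'legal' is
# not a personal whim, and not discoverable from physics alone; it's fixed
# by the intersubjective rulebook. (Encoding is illustrative; the rook rule
# is real, but captures, blocking pieces, castling, etc. are omitted.)

def rook_move_legal(src, dst):
    """A rook may move any distance along a rank or a file."""
    (r1, c1), (r2, c2) = src, dst
    return (r1 == r2) != (c1 == c2)  # exactly one coordinate changes

print(rook_move_legal((0, 0), (0, 5)))  # True: along a rank
print(rook_move_legal((0, 0), (3, 5)))  # False: illegal for a rook
```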

But what is common to [religious traditions] is something we cannot neglect. It is the doctrine of objective value, the belief that certain attitudes are really true, and others really false, to the kind of thing the universe is and the kind of things we are. Those who know the Tao can hold that to call children delightful or old men venerable is not simply to record a psychological fact about our own parental or filial emotions at the moment, but to recognize a quality which demands a certain response from us whether we make it or not.

Lewis is trying to go a step further; in my framing, there's a thing about the 'game that is society' that involves 'playing with reality' in a way that makes it something a little more objective than the 'intersubjective.' It's not just that everyone jointly decided that old people are venerable and thus the fashion is to venerate them; it's that somehow venerating old people is congruous with the Tao and not venerating them isn't, and so getting that question wrong is worse on some dimension than just playing chess by the wrong rules. Play chess by the wrong rules and people will throw you out of the chess club; play society by the wrong rules and your society collapses or misery abounds. Lewis uses 'the Tao' to refer to both 'the underlying territory as distinct from the map' and 'the sort of human behavior congruous with the territory', in a way that seems connected to this sense of 'the universe as participant in the game that is society.'

Note that he says "true to the kind of thing the universe is and the kind of things we are", as opposed to simply "true." This seems consistent with 'morality as the product of game theory', and a sort of subjectivism that allows for different environments to have different moralities, or different professions to have different ethics; the Tao of the soldier may be distinct from the Tao of the doctor, and the Tao of the Inuit different from the Tao of the Swahili. It reminds me of the claim that Probability is Subjectively Objective; if one is a soldier, the 'right way to be' is different than if one is a doctor, but there is still a meaningful sense in which there is only 'one right way to be' that is not destroyed by that variation. [Imagine a function from 'broad situation' to 'proper behavior'; this function can vary as you change the input while still being a deterministic function.]
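To make that bracketed point concrete, here's a minimal sketch (the roles and rule contents are my own illustrative inventions, not anything from Lewis or the post): one deterministic function whose outputs differ across inputs, while the function itself stays fixed.

```python
# A toy illustration (hypothetical names/rules): one fixed, deterministic
# mapping from situation to proper behavior. The outputs differ across
# inputs, but the function itself doesn't change.

def proper_behavior(role: str) -> str:
    rules = {
        "soldier": "courage under fire",
        "doctor": "first, do no harm",
    }
    return rules.get(role, "act congruously with the local Tao")

# Different roles get different answers...
assert proper_behavior("soldier") != proper_behavior("doctor")
# ...but the same role always gets the same answer: one function, one Tao.
assert proper_behavior("soldier") == proper_behavior("soldier")
```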

If they embark on this course the difference between the old and the new education will be an important one. Where the old initiated, the new merely ‘conditions’. The old dealt with its pupils as grown birds deal with young birds when they teach them to fly; the new deals with them more as the poultry-keeper deals with young birds--making them thus or thus for purposes of which the birds know nothing. In a word, the old was a kind of propagation--men transmitting manhood to men; the new is merely propaganda.

The contrast between 'initiation' and 'conditioning' stuck out to me. One way you could get such a split is a separation between Educators and Students where most students will not become educators, whereas most boy-children become men. When I try to figure out what the difference between religions and cults is, especially when it comes to things like the rationality community, I keep thinking about this sense of "explorers trying to create more explorers", and how it differs from "carnies trying to use marks", and somehow it seems connected to feedback loops. The man trying to make the next generation into men relates to the next generation differently from how the carnie trying to extract money from marks relates to those marks. Not only does the former involve identification with the audience (where the latter recoils from that), the former is trying to get the audience to understand the whole process (so that they too, in their time, can perform it), whereas the latter is trying to get the audience to misunderstand the whole process (so that they will come back and be fleeced again).

To the extent that the High Modernist or Reformer or Rationalist sees the outside as a thing to be optimized, as opposed to part of a system that needs to support further optimization, it seems like there's some deep short-sightedness and disconnection from the Tao. To the extent that some profession sees the outside world as something to be profited from, as opposed to a body in which they are an organ, we should expect the society to be sick in some way.

Let us suppose for a moment that the harder virtues could really be theoretically justified with no appeal to objective value. It still remains true that no justification of virtue will enable a man to be virtuous. Without the aid of trained emotions the intellect is powerless against the animal organism. … The head rules the belly through the chest--the seat, as Alanus tells us, of Magnanimity, of emotions organized by trained habit into stable sentiments. The Chest-Magnanimity-Sentiment--these are the indispensable liaison officers between cerebral man and visceral man. It may even be said that it is by this middle element that man is man; for by his intellect he is a mere spirit and by his appetite mere animal.
The operation of The Green Book and its kind is to produce what may be called Men without Chests. It is an outrage that they should be commonly spoken of as Intellectuals. This gives them the chance to say that he who attacks them attacks Intelligence.

This reminded me of Bayesians vs. Barbarians, with a new dimension added; it is not that the Barbarians gain from having less in their heads, it is that the Bayesians lost because they forgot to develop their chests. When I was younger, I read through The Fountainhead and Atlas Shrugged and was confused by the educational strategy: here were these staunchly moral characters, as evidenced by their disgust at taking immoral actions that would benefit them, but the source of their morality seemed unspoken and unjustified. This felt like a serious contrast to what I observed at my local church, where people put in serious amounts of effort to become slightly more aligned with their reasoned values. It looked like all of that was assumed to be unnecessary; one simply had to paint the picture of correctness and it would be followed by the righteous without any exercise or training.

Another Eliezer reference is Feeling Rational, which points at the congruity property of emotions, but only with regard to factual truth; if you're afraid that an iron is hot when it's actually cold, you're making a mistake, and if you're calm because you think an iron is cold when it's actually hot, you're making a mistake. But that seems to miss the intersubjective angle; in some contexts, reacting to criticism with defensiveness is inappropriate and reacting to criticism with curiosity is appropriate, and some large part of 'training human rationality' is inculcating the right emotional responses in oneself. A dojo isn't just about transfer of technique, but also about transfer of attitude.


This

The Chest-Magnanimity-Sentiment--these are the indispensable liaison officers between cerebral man and visceral man. It may even be said that it is by this middle element that man is man; for by his intellect he is a mere spirit and by his appetite mere animal.

and

This reminded me of Bayesians vs. Barbarians, with a new dimension added; it is not that the Barbarians gain from having less in their head, it is that the Bayesians lost because they forgot to develop their chests.

Feels like it's getting at something real, but I'd be interested in checking how this grounds out in something physiologically real. What are the gears inside "develop your chest"?

I don't have a solid sense of this yet, in large part because of how much of it is experiential.

I think I would count the 5-second level as gesturing in this direction; I also note the claim that HPMOR lets people 'experience' content from the Sequences instead of just reading it. Some friends who did (old-style) boxing described it as calibrating their emotional reactions to danger and conflict in a way that seems related.

I've been experimenting with conceptualizing some of my long-standing dilemmas as questions of the form "does this desire have merit?" as opposed to something closer to "should I do A or B?", but it's too soon to see if that's the right approach.

Strong upvote for: 1) reading C.S. Lewis in the first place (since I think he is largely outside of the rationalist canon), 2) steel-manning his opinions, 3) connecting his opinions to the rationalist diaspora, 4) understanding Lewis' point at a pretty deep level.

To draw from this post, Jordan Peterson, and a few other things I've read, I think their message is something like:

"We, as a society, are losing something very valuable. There is the Way (Tao) of living that used to be passed from generation to generation. This Way is in part reflected in our religion, traditions, and virtues. Over time there was an erosion, especially on the religious side. This led to the society that abandoned religion, traditions, and virtues. We should try to get back to the Way."

I mostly agree. I think the best route is to find a new way "back", rather than try to undo the steps that led us here. Trying to teach religion, tradition, or virtues directly is largely missing the Way. (Similarly to how teaching only the first 11 virtues of rationality is missing the last and most important one.) At this point we have come so far as a society that we should be able to find new, more direct, and more epistemically honest ways of teaching Tao.

Lewis is trying to go a step further; in my framing, there's a thing about the 'game that is society' that involves 'playing with reality' in a way that makes it something a little more objective than the 'intersubjective.' It's not just that everyone jointly decided that old people are venerable and thus the fashion is to venerate them; it's that somehow venerating old people is congruous with the Tao and not venerating them isn't, and so getting that question wrong is worse on some dimension than just playing chess by the wrong rules.

This reminds me of the much less philosophical work The 7 Habits of Highly Effective People, in which Stephen Covey posits principles that govern reality and humans; if you work in harmony with them, good things happen. Like (I imagine) Lewis, he posits that these principles ultimately derive from God.

One challenge to this view is the idea of multiple ontologies of meaning. That is, if we define Objective morality as suggesting actions that are both doable and desirable in an ontology, and that lead to meaning and rightness in that ontology, I think it's the case that there exist multiple consistent ontologies that fit this description, but which are incompatible with each other.

To use your game-theory-as-morality example, you can imagine three ontologies based on EDT, CDT, and UDT, all of which match the criteria above but which are incompatible with each other.

And then, if the thing you actually care about is the meaning and rightness, you might imagine a stance which switches between these three ontologies based on their ability to make meaning of the current situation.

I was trying to point at this problem in this not very well received short form post: https://www.lesswrong.com/posts/i2XikYzeL39HoSSTr/matt-goldenberg-s-short-form-feed#whte7aXqecmma7PMv

I'm not sure CDT vs UDT or what-not quite match what I think is important here.

I might think in terms of causal decision theory, or updateless, or functional. But... I think for most people these are more like descriptors of their True Ontology than the actual driver. If I'm currently doing causal decision theory and noticing that other people are making more money from Omega than me, I can stop and think "hmm, this seems sorta stupid." And maybe I successfully revise my decision theory. (Or maybe I can't – sometimes Omega is just systematically rewarding people with bad decision algorithms and there's no way to predictably do the right thing. But I should probably at least keep going 'hmmm', and being sort of frustrated, until I'm confident that I've thought through it sufficiently.)
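(To make the Omega example concrete: a toy Newcomb's-problem expected-value calculation, with the standard illustrative stakes of $1M/$1k and an assumed 99% predictor accuracy; the numbers aren't from anywhere in this thread.)

```python
# Toy Newcomb's problem (illustrative numbers): Omega fills box B with $1M
# iff it predicted you'd take only box B; box A always holds $1k.

ACCURACY = 0.99  # assumed predictor accuracy

def expected_payoff(one_box: bool) -> float:
    # Probability Omega predicted one-boxing, given your actual choice.
    p_big = ACCURACY if one_box else 1 - ACCURACY
    return 1_000_000 * p_big + (0 if one_box else 1_000)

print(expected_payoff(True))   # ~990,000: one-boxers leave richer
print(expected_payoff(False))  # ~11,000: the "hmm, this seems sorta stupid" moment
```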

But if I switch from CDT to UDT I don't think my true ontology changed, just my understanding of it.

Part of the point of morality-as-game-theory is that it works independent of people's values, decision making, and goal structures. For arbitrary values and strategies that bump into each other, some processes and coordination mechanisms will tend to work robustly, and some won't. Some will be able to play positive sum games with each other that bend the arc of the universe towards justice, and some won't.

This will be situation dependent (where the situation includes how many players there are, how powerful they are, and what strategies and values they're pursuing). I'm not sure that this is ontology dependent.

Or, rather: maybe a given "local morality" is a combination of ontology, preferred strategy, and values. Two players with two different local moralities might best achieve their goals via different tactics. But Objective Morality has something to say about what viable strategies are actually available to them.

I might think in terms of causal decision theory, or updateless, or functional. But... I think for most people these are more like descriptors of their True Ontology than the actual driver. If I'm currently doing causal decision theory and noticing that other people are making more money from Omega than me, I can stop and think "hmm, this seems sorta stupid."

I don't think Omega actually exists, though? Like, instead what I'd imagine to see is some people cooperating in the prisoner's dilemma, and some people defecting, and each group going "yes, this is the best outcome given my ontology."

But, the people who are defecting eventually notice that they are getting outcompeted by people who cooperated.

There are some consistent worldviews where the thing to do is "stick to your principles" even if that strategy will eventually eradicate itself (e.g., believing it's always good to defect on people who want the wrong things, or things like the Shakers deciding not to have children). Nonetheless, there is a fact of the matter about which sorts of principles can survive iterated game theory and which can't.

I claim most people who defect unreflectively because of CDT-type reasons are just making a mistake (relative to their own goal structure), rather than actually getting a good outcome given their ontology.
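(A toy check of the "survive iterated game theory" claim: a round-robin iterated prisoner's dilemma sketch, using the standard textbook payoff matrix and a made-up four-agent population.)

```python
# A toy round-robin iterated prisoner's dilemma (standard payoff matrix,
# made-up population): conditional cooperators outcompete unconditional
# defectors once they meet their own kind often enough.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b, rounds=10):
    """Returns total payoffs for strategies a and b over repeated play."""
    hist_a, hist_b = [], []          # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)  # each sees the opponent's history
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

always_defect = lambda opp: "D"
tit_for_tat = lambda opp: opp[-1] if opp else "C"

population = [tit_for_tat, tit_for_tat, always_defect, always_defect]
totals = [0] * len(population)
for i in range(len(population)):
    for j in range(i + 1, len(population)):
        si, sj = play(population[i], population[j])
        totals[i] += si
        totals[j] += sj

print(totals)  # [48, 48, 38, 38]: tit-for-tat survives; pure defection lags
```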

Alternatively, the UDT-style reasoning keeps getting defected on because it does a bad job of predicting similar agents. This is another part of the original point that has trouble when mixing with complicated reality: the principles that work depend not only on your OWN ontology, but also on the ontology of your community. There are stable states that work when most people are under a specific ontology, and ones that work no matter how many people are operating under which ontology, and ones that work in many circumstances but where you can certainly construct tournaments with specific types of behaviors where they fail spectacularly.

UDT is one of those that works really well when many other people are also operating under UDT, AND they actually have similar source code they can use to predict each other. However, there are many societies/times when that's not true.

There are stable states that work when most people are under a specific ontology, and ones that work no matter how many people are operating under which ontology,

But part of my point is that if your stable state only works if everyone is in a particular ontology, this only matters if your stable state includes a mechanism to maintain, or achieve, everyone having that particular ontology (either by being very persuasive, or obtaining power, or some such).

There exist moral ontologies that I'd describe as self-defeating, because they didn't have any way of contending with a broader universe.

There exist moral ontologies that I’d describe as self-defeating, because they didn’t have any way of contending with a broader universe.

Agreed 100%. But I think the reverse statement, "There exist ontologies that are both human compatible and can contend with all existing/possible configurations of the universe," is also false.

The central idea behind being a robust agent, I think, is how close we can get to this, and I think it's actually a really interesting and fruitful research direction, and an interesting ontology all on its own. However, I tend to be skeptical of its usefulness on actual human hardware, at least if "elegance" or "simplicity" is considered a desirable property of the resulting meta-ontology.

ETA: I expect the resultant meta-ontology for humans to look much more like "based on a bunch of hard to pin down heuristics, this is the set of overlapping ontologies that I'm using for this specific scenario"

I have a few different answers for this:

  • There is some fact-of-the-matter about "which ontologies are possible to run on real physics [in this universe] or in hypothetical physics [somewhere off in mathematical Tegmark IV land]."
    • Sticking to 'real physics as we understand it', for now, I think it is possible to grade ontologies on how well they perform in the domains that they care about. (where some ontologies get good scores by not caring about as much, and others get good scores by being robust)
  • There is some fact of the matter about what the actual laws of physics and game theory are, even if no one can compute them.
  • Meta-ontologies are still ontologies. I think ontologies that are flexible will (longterm) outcompete ontologies that are not.
    • There are multiple ways to be flexible, which include:
      • "I have lots of tools available with some hard to pin down heuristics for which tools to use"
      • "I want to understand the laws of the universe as deeply as possible, and since I have bounded compute, I want to cache those laws into heuristics that are as simple as possible while cleaving as accurately as possible to the true underlying law, with varying tools specifically to tell me when to zoom into the map."
    • I expect that in the next 10-100 years, the first frame will outcompete the second in terms of "number of people using that frame to be reasonably successful." But in the long run and deep future, I expect the second frame to outcompete the first. I *might* expect this whether or not we switch from human hardware to silicon uploads. But I definitely expect it once uploads exist.