EDIT: Thank you to the commenters on this essay, as well as the commenters who responded to my comments on Nate's original essay, for explaining the concepts further to me. I no longer endorse the title or point of this essay as stated. My understanding is now that I was using a colloquial definition of optimizer combined with an internal sense of optimizing instead of the strict sense. I'm leaving this essay up as I think it's good practice to not delete history. I'll update my epistemic humility a bit now as well, I think. Thank you again for engaging me on this. Also, for future reference, the essay Thou art Godshatter helped me further understand the "humans are not fitness maximizers" argument.

After writing this essay I'm left entirely confused about what an optimizer/maximizer is. My understanding was that an optimizer/maximizer is an agent's best attempt at picking the best option and that an agent can be closer or farther away from optimizing over the desired target. Thus a mediocre optimizer is using a number of proxies or failing to implement a recognized best solution. This model might conflict with what Nate Soares intended. Nevertheless, here is my rebuttal under possibly diverging key definitions.

In Humans aren't fitness maximizers Nate Soares explains that humans aren't fi... well, ok. A+ for title clarity. Anyway, specifically he makes the following claim about Inclusive Genetic Fitness (IGF):

Humans are not literally optimizing for IGF, and regularly trade other values off against IGF.

This is untrue. 

Some humans definitely are optimizing for IGF. These humans go through every major life decision thinking about how they can achieve the highest quality and quantity of offspring. They are currently a minority cause your interest in having PiV sex and your inability to avoid accidents during your favorite hobby used to yield enough IGF to have those genetic strains continue in abundance. Now we can keep the birds and the bees in pharmacologically sealed prisons and tuck the key behind a glass pane that reads "Only break in case of Family Planning".

But people do break the glass. And some people break it a lot. Some people make their lifestyle decisions with their future children in mind. Some people select romantic partners purely on parental and (presumed) genetic health. Some people will eschew entire careers cause they would never be compatible with family building. They aren't the majority. Yet. But they soon will be cause sex drive, status, and cultural memes are hardly producing kids anymore. 

We don't have to feel like optimizers to be optimizers

Nate proposes the following argument to support his claim that humans are not IGF maximizers:

We profess to enjoy many other things, such as art and fine foods.

Suppose someone came to you and said: "I see that you've got a whole complex sensorium centered around visual stimuli. That sure is an inefficient way to optimize for fitness! Please sit still while I remove your enjoyment of beautiful scenery and moving art pieces, and replace it with a module that does all the same work your enjoyment was originally intended to do (such as causing you to settle down in safe locations with abundant food), but using mechanical reasoning that can see farther than your evolved heuristics." Would you sit still? I sure wouldn't.

No, I wouldn't. But that's either due to framing or bad trade-offs. Consider these two variants of the experiment.

Variant 1 - Imagine that instead of invasive brain surgery, you are offered a selection of pills that have specific effects:

  1. A pill that will make you crave and deeply enjoy the exact foods your body needs to be in peak health. Foods that decrease your overall health will start tasting horrible.
  2. A pill that makes you instantly recognize and deeply enjoy environments that give substantial health benefits.
  3. A pill that allows you to suppress your aversion to an unhealthy living or working environment if you truly believe the overall trade-off is best for your ability to find a suitable partner.

I'd argue most people would elect to take at least one of these pills, if not all three. The difference is that they don't invoke feelings around invasive brain surgery and they point to specific optimizations that you can easily understand and thus endorse.

Variant 2 - Now on rereading the quote, I started to wonder if Nate meant that we would never opt in to knowledge of optimal fitness-predicting-aesthetics while losing our motivational force toward a much messier and inaccurate signal that doubles as a great emotional regulator. If that was the intended argument, then I'd say that most of us get more IGF value out of an easily accessible mood regulator than from increased fitness assessments. But, let's lean in to this variant further for the sake of argument. 

Using pills.

  1. A pill that removes your enjoyment of frivolous sensory experiences but allows you to detect if a prospective mate has any underlying health issues that could affect your offspring. This would include ones currently undetectable to science, cause otherwise you can just do that test instead of losing a great mood regulator.
  2. A pill that removes your enjoyment of frivolous sensory experiences but allows you to detect the exact detrimental health effects of any environment. It would have to include some sort of contaminant x-ray vision to beat the performance of normal common sense inspections.

Here we are trading off a mood regulator against some gain in fitness, but a mood regulator also increases fitness. So it really depends on the exact gains you expect. If (I) you anticipate that you have a lot of degrees of freedom in selecting mates and environments, (II) you are liable to make fairly suboptimal choices, and (III) you have sufficient mood regulation options at hand ... well, then this is a golden ticket. Sure, these pills might be less popular than rewiring your reward system so Brussels sprouts taste like chocolate. But that's cause you are actually losing something powerful here. Mood regulation options can help you better deal with adversity and stressors. It makes sense that the price is higher to give all that up.

Luckily, Nate offers his own variant of the experiment as well:

And if you're like "maybe mates would be less likely to sleep with me if I didn't enjoy fine art", suppose that we tune your desirability-to-mates upwards exactly as much as needed to cancel out this second-order effect. Would you give up your enjoyment of visual stimuli then, like an actual IGF optimizer would?

I do think a substantial portion of people who struggle to find suitable mates would take this trade. Secondly, again, this example suffers from the same issues as above. It's too vague and scary, and it relies on us imagining we would trust any authority with invasive brain surgery whose exact effects we don't understand. Try out this variant:

Imagine a genie comes to you and offers you a deal: They have found the perfect mate for you in every way, but this mate has a horrible and exotic condition. They will die in a frothing and convulsing heap if ever exposed to fine art. ("If we optimize hard enough on culture barbarians, this is where you end up," she mumbles to herself.)

Would you want to meet them?

I'd claim the vast majority of people (who are not committed to a mate yet) would take this deal. We don't actually care about fine art all that much. The example Nate uses offers a vague "equal" exchange, but I think this triggers a very healthy hell-no response as we humans in the real world would rightly expect Mr. Unlikely Neurosurgery to completely shaft us. I think it's much harder to fight these responses in ourselves, or even recognize them at all, than to compare how you feel about these variant scenarios that avoid such subconscious wiring altogether.

Nate continues:

And when you search in yourself for protests, are you actually weighing the proposal based on how many more offspring and kin's-offspring you'll have in the next generation? Or do you have some other sort of attachment to your enjoyment of visual stimuli, some unease about giving it up, that you're trying to defend?

[nods vigorously] - yes, emotional regulation!

Now, there's a reasonable counterargument to this point, which is that there's no psychologically-small tweak to human psychology that dramatically increases that human's IGF. (We'd expect evolution to have gathered that low-hanging fruit.) But there's still a very basic and naive sense in which living as a human is not what it feels like to live as a genetic fitness optimizer.

Agreed. I highly doubt many people would identify directly with being IGF optimizers, regardless of whether they are or not. The internal experience of being one or not does not add to the optimization. At least not so far in our evolution. This doesn't matter for the end conclusion though: Some of us are IGF optimizers. And if we give it a few more generations, most of us will be.

 

Like: it's pretty likely that you care about having kids! And that you care about your kids very much! But, do you really fundamentally care that your kids have genomes? If they were going to transition to silicon, would you protest that that destroys almost all the value at stake?

Nope. Cause I care about the continuity of the intergenerational orga(ni)sm that I am, which relies on the information in said genome. Transitioning to silicon would be fine. Breaking the continuity would not.

 

Or, an even sharper proposal: how would you like to be killed right now, and in exchange you'll be replaced by an entity that uses the same atoms to optimize as hard as those atoms can optimize, for the inclusive genetic fitness of your particular genes. Does this sound like practically the best offer that anyone could ever make you? Or does it sound abhorrent?

I wouldn't, no. I'm a bit of a mediocre IGF optimizer in that regard (This is the foreshadowed moment when my definitional confusion dawned on me). But let's say everyone who takes that deal gets 100 offspring. Or even 10. Within no time, humanity would be populated by the offspring of the very few hardcore IGF optimizers we have walking around now. Bless them, and their little death.

Now I don't think I actually disagree with Nate on much else. His less strong claims I'm actually on board with. I'd just say humans are definitely optimizing on IGF. We're just kind of bad at it. Give us a constant environment and a few more generations. 

 

22 comments

AFAICT you haven't argued that anyone is anywhere close to being an IGF maxer, or even anywhere close to being an IGF maxer within reasonable constraints of human capabilities. 

 

When you say stuff like "Give us a constant environment and a few more generations.", you're affirming that human evolution is approximately an IGF maxer (well, at least, it selects among readily presented options strictly according to that criterion), not that any humans are IGF bounded-maxers.

I wasn't sure how I hadn't argued that, but between all the different comments, I've now pieced it together. I appreciate everyone engaging me on this, and I've updated the essay to "deprecated" with an explanation at the top that I no longer endorse these views.

Applause for putting your thoughts out there, and applause for updating. Also maybe worth saying: It's maybe worth "steelmanning" your past self; maybe the intuitions you expressed in the post are still saying something relevant that wasn't integrated into the picture, even if it wasn't exactly "actually some humans are literally IGF maximizers".  Like, you said something true about X, and you thought that IGF meant X, but now you don't think IGF means X, but you still maybe said something worthwhile about X. 

I really appreciate that thought! I think there were a few things going on:

  • Definitions and Degrees: I think in common speech and intuitions it is the case that failing to pick the optimal option doesn't mean something is not an optimizer. I think this goes back to the definition confusion, where 'optimizer' in CS or math literally picks the best option to maximize X no matter the other concerns. While in daily life, if one says they optimize on X then trading off against lower concerns at some value greater than zero is still considered optimizing. E.g. someone might optimize their life for getting the highest grades in school by spending every waking moment studying or doing self-care but they also spend one evening a week with a romantic partner. I think in regular parlance and intuitions, this person is said to be an optimizer cause the concept is weighed in degrees (you are optimizing more on X) instead of absolutes (you are disregarding everything else except X). (See the sketch after this list.)
  • unrepresented internal experience: I do actually experience something related to conscious IGF optimization drive. All the responses and texts I've read so far are from people who say that they don't, which made me assume the missing piece was people's awareness of people like myself. I'm not a perfect optimizer (see above definitional considerations) but there are a lot of experiences and motivations that seemed to not be covered in the original essay or comments. E.g. I experience a strong sense of identity shift where, since I have children, I experience myself as a sort of intergenerational organism. My survival- and flourishing-related needs internally feel secondary to those of the aggregate of the bloodline I'm part of. This shift happened to me during my first pregnancy and is quite a disorienting experience. It seems to point so strongly at IGF optimization that claiming we don't do that seemed patently wrong. From examples I can now see that it's still a matter of degrees and I still wouldn't take every possible action to maximize the number of copies of my genes in the next generation.
  • where we are now versus where we might end up: people did agree we might end up being IGF maximizers eventually. I didn't see this point made in the original article and I thought the concern was that training can never work to create inner alignment. Apparently that wasn't the point haha.
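To make the distinction concrete, here is a minimal sketch of the two senses I had in mind; the options and weights are made up purely for illustration:

```python
# Minimal sketch of the two senses of "optimizer"; the options and numbers are invented.
options = [
    {"name": "study all week",         "grades": 10, "other_values": 0},
    {"name": "study, one evening off", "grades": 9,  "other_values": 5},
    {"name": "coast and socialize",    "grades": 4,  "other_values": 9},
]

# Strict (CS/math) sense: pick whatever maximizes X, disregarding every other concern.
strict_choice = max(options, key=lambda o: o["grades"])

# Colloquial sense: weight X heavily, but still trade off against lower concerns.
def weighted_score(option, grade_weight=0.8):
    return grade_weight * option["grades"] + (1 - grade_weight) * option["other_values"]

colloquial_choice = max(options, key=weighted_score)

print(strict_choice["name"])      # -> "study all week"
print(colloquial_choice["name"])  # -> "study, one evening off"
```

In everyday speech the second agent would still be described as "optimizing for grades", even though a strict optimizer would never give up the top-scoring option.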

Does that make sense? Curious to hear your thoughts.

I think this goes back to the definition confusion, where 'optimizer' in CS or math literally picks the best option to maximize X no matter the other concerns.

I wouldn't say "picks the best option" is the most interesting thing in the conceptual cluster around "actual optimizer". A more interesting thing is "runs an ongoing, open-ended, creative, recursive, combinatorial search for further ways to greatly increase X". 

 

E.g. I experience a strong sense of identity shift where, since I have children, I experience myself as a sort of intergenerational organism
...
This shift happened to me during my first pregnancy and is quite a disorienting experience. It seems to point so strongly at IGF optimization that claiming we don't do that seemed patently wrong.

I mean certainly this is pointing at something deep and important. But the shift here I would say couldn't be coming from agentic IGF maximization, because agentic IGF maximization would have already, before your pregnancy, cared in the same qualitative way, with the same orientation to the intergenerational organism, though about 1/8th as much, about your cousins, and 1/16th as much about the children of your cousins. Like, of course you care about those people, maybe in a similar way as you care about your children, and maybe connected to IGF in some way; but something got turned on, which looks a lot like a genetically programmed mother-child caring, which wouldn't be an additional event if you'd been an IGF maxer. (One could say, you care about your children mostly intrinsically, not mostly because of an IGF calculation. Yes this intrinsic care is in some sense put there by evolution for IGF reasons, but that doesn't make them your reasons.)
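(For reference, the 1/8 and 1/16 here are the standard coefficients of relatedness, obtained by counting parent-child links on each path through a shared ancestor:)

```latex
% Coefficient of relatedness: sum over shared ancestors of (1/2)^L,
% where L is the number of parent-child links on the path through that ancestor.
r = \sum_{\text{shared ancestors}} \left(\tfrac{1}{2}\right)^{L}
% Full first cousins share two grandparents, with L = 4 links per path:
r_{\text{first cousin}} = 2 \cdot \left(\tfrac{1}{2}\right)^{4} = \tfrac{1}{8}
% A first cousin's child adds one more link (L = 5):
r_{\text{cousin's child}} = 2 \cdot \left(\tfrac{1}{2}\right)^{5} = \tfrac{1}{16}
```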

where we are now versus where we might end up: people did agree we might end up being IGF maximizers eventually. I didn't see this point made in the original article and I thought the concern was that training can never work to create inner alignment. Apparently that wasn't the point haha.

Hm. I don't agree that this is very plausible; what I agreed with was that human evolution is closer to an IGF maxer, or at least some sort of myopic (https://www.lesswrong.com/tag/myopia) IGF maxer, in the sense that it only "takes actions" according to the criterion of IGF.

It's a little plausible. I think it would have to look like a partial Baldwinization (https://en.wikipedia.org/wiki/Baldwin_effect) of pointers to the non-genetic memeplex of explicit IGF maximization; I don't think evolution would be able to assemble brainware that reliably in relative isolation does IGF, because that's an abstract calculative idea whose full abstractly calculated implications are weird and not pointed to by soft, accessible-to-evolution stuff (Chomskyists notwithstanding); like how evolution can't program the algorithm to take the square of a number, and instead would program something like "be interested in playing around with moving and stacking physical objects" so that you learn on your own to have a sense of how many rocks you need to cover the floor of your hut. Like, you'd literally breed people to be into Mormonism specifically, or something like that (I mean, breed them to imprint heavily on some cues that are reliably associated with Mormonism, like how humans are already programmed to imprint heavily on what other human-faced-and-bodied things in the world are doing). Or maybe the Amish would do better if they have better "walkabout" protocols; over time they get high fertility and also high retention into the memeplex that gives high fertility.
 

I wouldn't say "picks the best option" is the most interesting thing in the conceptual cluster around "actual optimizer". A more interesting thing is "runs an ongoing, open-ended, creative, recursive, combinatorial search for further ways to greatly increase X". 


Like, "actual optimizer" does mean "picks the best option". But "actual bounded optimizer" https://en.wikipedia.org/wiki/Bounded_rationality can't mean that exactly, while still being interesting and more relevant to humans, while very much (goes the claim) not looking like how humans act. Humans might take a visible opportunity to have another child, and would take visible opportunities to prevent a rock from hitting their child, but they mostly don't sit around thinking of creative new ways to increase IGF. They do some versions of this, such as sitting around worrying about things that might harm their children. One could argue that this is because the computational costs of increasing IGF in weird ways are too high. But this isn't actually plausible (cf. sperm bank example). What's plausible is that that was the case in the ancestral environment; so the ancestral environment didn't (even if it could have) select for people who sat around trying to think of wild ways to increase IGF. 

A pill that will make you crave and deeply enjoy the exact foods your body needs to be in peak health. Foods that decrease your overall health will start tasting horrible.

I would take this pill, but for a different reason.

I am already trying, mostly unsuccessfully, to lose some weight. Taking the pill would mean losing enjoyment of the things that I am already trying to give up anyway; it would eliminate the need to spend precious willpower on this (I am generally very low on willpower); improving my health would make my everyday life more pleasant in various ways, and probably allow me to live longer on average, which is a multiplier on all the pleasures. I would still regret the lost pleasure of eating unhealthy foods, but the tradeoff seems obviously worth it.

The idea that it would allow me to have more kids was not even on my mind when I considered this all. Yes, it is true that longer life allows one to have more kids. But that is not the main reason why I would choose a longer and better life.

*

An actual fitness maximizer would be quite different from someone who merely chooses healthy partners and gives up a career in order to have a bigger family. Probably the best strategy for a man would be to work hard to fulfill or fake all criteria for sperm donation, and donate as often as possible. He would use a dozen different fake identities, for example each pretending a different ancestry, to further increase the number of women who would choose his sperm. He would try to break into the labs to replace other men's sperm with his own, too. Most importantly, he would have started working hard toward this goal on the first day he learned that such a possibility exists.

My claim was purely that some people do actually optimize on this. It's just fairly hard, and their success also relies on how their ability to game the system compares to how strong the system is. There was that fertility doctor who just used his own sperm all the time, for instance.

I'm not sure which one you mean because there's a few examples of that, but he still has not maximized even for quite generous interpretations of 'maximize': none of those doctors so much as lobbied their fellow doctors to use him as their exclusive sperm donor, for example, nor offered to bribe them; none of the doctors I've read about appear to have spent any money at all attempting to get more offspring, much less to the extent of making any dent in their high doctor-SES standard of living (certainly no one went, 'oh, so that is what he had secretly devoted his life to maximizing, we were wondering'), much less paid for a dozen surrogacies with the few million net assets they'd accumulate over a lifetime. You can't excuse this as a typical human incompetence because it requires only money to cut a check, which they had.

On further reflection, I changed my mind (see title and edit at top of article). Your comment was one of the items that helped me understand the concepts better, so just wanted to add a small thank you note. Thank you!

The surrogacy example originally struck me as very unrealistic cause I presumed it was mostly illegal (it is in Europe but apparently not in some states of the US) and heavily frowned upon here for ethical reasons (but possibly not in the US?). So my original reasoning was that you'd get in far more trouble for applying for many surrogates than for swapping out sperm at the sperm bank.

I guess if this is not the case then it might have been a fetish for those doctors? I'm now slightly confused about what internal experience put them up to it, if they'd eschew surrogates while those are legal and socially acceptable in parts of the US.

The other options just seem like relatively risky endeavors that are liable to blow up their successful sperm-swapping projects.

Yes, the story of the doctor was the inspiration for my comment. Compared to him, other "maximizers" clearly did not do enough. And as Gwern wrote, even the doctor could have done much better.

(Also, I have no evidence here, but I wonder how much of what the doctor did was a strategy, and how much was just exploiting a random opportunity. Did he become a fertility doctor on purpose to do this, or did he just choose a random high-status job, and then noticed an opportunity? I suppose we will never know.)

Thank you. Between all the helpful comments, I've updated my point of view and updated this essay to deprecated with an explanation + acknowledgement at the top.

Woop, take credit for changing your mind!

Thanks!

On that note, I was wondering if there was any way I could tag the people that engaged me on this (cause it's spread between 2 articles) just so I can say thanks? Seems like the right thing to do to high-five everyone after a lost duel or something? Dunno, there is some sentiment there where a lightweight acknowledgement/update would be a useful thing to deliver in this case, I feel, to signal that people's comments actually had an effect. DM'ing everyone or replying to each comment again would give everyone a notification but generate a lot of clutter and overhead, so that's why tagging seemed like a good route.

No especially good suggestion from me. Obvious options:

  • You could make a comment that links to the most helpful comments.
  • You could make one PM convo that includes everyone (you can add multiple people to a PM convo) and link them to the comment.

Agree that tagging/mentions would be nice here.

good to know, thank you!

In return, your new disclaimer at the beginning of the article made me notice something I was confused about -- whether we should apply the label "X maximizer" only to someone who actually achieves the highest possible value of X, or also to someone who tries but maybe fails. In other words, are we only talking about internal motivation, or describing the actual outcome and expecting perfection?

To use an analogy, imagine a chess-playing algorithm. Is it correct to call it a "chess victory maximizer"? On one hand, the algorithm does not care about anything other than winning at chess. On the other hand, if a better algorithm comes later and defeats the former one, will we say that the former one is not an actual chess victory maximizer, because it did some (in hindsight) non-victory-maximizing moves, which is how it lost the game?

When talking about humans, imagine that a random sci-fi mutation turns someone into a literal fitness maximizer, but at the same time, that human's IQ remains only 100. So the human would literally stop caring about anything other than reproduction, but maybe would not be smart enough to notice the most efficient strategy, and would use a less efficient one. Would it still be okay to call such a human a fitness maximizer? Is it about "trying, within your limits", or is it "doing the theoretically best thing"?

I suppose, if I talked to such a guy and told him, e.g., "hey, do you realize that donating at a sperm clinic would result in way more babies than just hooking up with someone every night and having unprotected sex?", and the guy would immediately react with "oh shit, no more sex anymore, I need to save all my sperm for donation", then I would see no objection to calling him a maximizer. His cognitive skills are weak, but his motivation is flawless.

(But I still stand by my original point, that humans are not even like this. The guys who supposedly maximize the number of their children would actually not be willing to give up sex forever, if it resulted in more babies. Which means they care about some combination of pleasure and babies.)

Just a single data point, but I would decline both the "brain surgery" and the "pills" (which are functionally identical). I would also turn down the "genie". "Brain surgery", "pills", and "genie" in these stories mean only "somehow".

Makes sense. I'm starting to suspect I overestimated the number of people who would take these deals, but I think there still would be more for the above than for the original thought experiments.