Nuno Sempere

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers ÖU. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered and more pushy in terms of curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value. And I haven't left the forum entirely: I remain subscribed to its RSS, and generally tend to at least skim all interesting posts.


I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms, which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship. Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.


Comments

Copying my response from the EA forum:

(if this post is right)

“The post does actually seem wrong though.”

Glad that I added the caveat.

“Also, the title of "there are no coherence arguments" is just straightforwardly wrong. The theorems cited are of course real theorems, they are relevant to agents acting with a certain kind of coherence, and I don't really understand the semantic argument that is happening where it's trying to say that the cited theorems aren't talking about "coherence", when like, they clearly are.”

Well, part of the semantic nuance is that we don't care as much about the coherence theorems that do exist if they fail to apply to current and future machines.

“IMO completeness seems quite reasonable to me and the argument here seems very weak (and I would urge the author to create an actual concrete situation that doesn't seem very dumb in which a highly intelligent, powerful and economically useful system has non-complete preferences).”

Here are some scenarios:

  • Our highly intelligent system notices that having complete preferences over all trades would be too computationally expensive, and is thus willing to accept some, even a large, degree of incompleteness. 
  • The highly intelligent system learns to mimic the values of humans, who themselves have non-complete preferences, which the agent then inherits.
  • You train a powerful system to do some stuff, but also to detect when it is out of distribution and in that case do nothing. Assuming you can do that, its preferences are incomplete, since when offered tradeoffs out of distribution it always takes the default option. 

“The whole section at the end feels very confused to me. The author asserts that there is "an error" where people assert that "there are coherence theorems", but man, that just seems like such a weird thing to argue for. Of course there are theorems that are relevant to the question of agent coherence, all of these seem really quite relevant. They might not prove the things in-practice, as many theorems tend to do.”

Mmh, then it would be good to differentiate between:

  • There are coherence theorems that talk about some agents with some properties
  • There are coherence theorems that prove that AI systems as they will soon exist will be optimizing utility functions

You could also say a third thing, which would be: there are coherence theorems that strongly hint that AI systems as they will soon exist will be optimizing utility functions. They don't prove it, but they make it highly probable because of such and such. In which case having more detail on the such and such would deflate most of the arguments in this post, for me.

For instance:

“‘Coherence arguments’ mean that if you don’t maximize ‘expected utility’ (EU)—that is, if you don’t make every choice in accordance with what gets the highest average score, given consistent preferability scores that you assign to all outcomes—then you will make strictly worse choices by your own lights than if you followed some alternate EU-maximizing strategy (at least in some situations, though they may not arise). For instance, you’ll be vulnerable to ‘money-pumping’—being predictably parted from your money for nothing.”

This is just false, because it is not taking into account the cost of doing expected value maximization: giving consistent preferability scores is just very expensive and hard to do reliably. Like, when I poll people for their preferability scores, they give inconsistent estimates. They could instead be doing some expected utility maximization, but the evaluation steps are so expensive that I basically only bother with more hardcore approximations of expected value for large projects and organizations, not for individuals. And even then, I'm still taking shortcuts and using monkey-patches, not doing pure expected value maximization.
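For concreteness, here is a toy sketch (my own illustration, not from the post) of the money-pump dynamic the quote appeals to: an agent with cyclic preferences pays a small fee for each "upgrade" and ends up back where it started, only poorer.

```python
# Toy money pump against cyclic preferences (illustrative only).
# The agent strictly prefers A to B, B to C, and C to A, so a trader
# can charge a small fee for each "upgrade" and cycle it forever.

upgrade = {"B": "A", "C": "B", "A": "C"}  # what the agent will pay to swap into
fee = 1

holding, wealth = "A", 100
for _ in range(30):  # thirty consecutive trades
    holding = upgrade[holding]  # swap to the strictly preferred item...
    wealth -= fee               # ...paying the fee each time

# After 30 trades (ten full cycles) the agent holds the same item it
# started with, but has lost 30 units of wealth.
print(holding, wealth)  # A 70
```

The point of the rebuttal above is that this exploit only bites if consistent scoring is free; once evaluating trades itself costs something, refusing to rank everything can be the cheaper policy.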

“This post gets somewhat technical and mathematical, but the point can be summarised as:

  • You are vulnerable to money pumps only to the extent to which you deviate from the von Neumann-Morgenstern axioms of expected utility.

In other words, using alternate decision theories is bad for your wealth.”

The "in other words" doesn't follow, since EV maximization can be more expensive than the shortcuts.

Then there are other parts that give the strong impression that this expected value maximization will be binding in practice:

“Rephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness, is possessing of coherence.”

 

“The overall message here is that there is a set of qualitative behaviors and as long as you do not engage in these qualitatively destructive behaviors, you will be behaving as if you have a utility function.”

 

“The view that utility maximizers are inevitable is supported by a number of coherence theories developed early on in game theory which show that any agent without a consistent utility function is exploitable in some sense.”

 

Here are some words I wrote that don't quite sit right but which I thought I'd still share: Like, part of the MIRI beat as I understand it is to hold that there is some shining guiding light, some deep nature of intelligence that models will instantiate, making them highly dangerous. But it's not clear to me whether you will in fact get models that instantiate that shining light. Like, you could imagine an alternative view of intelligence where it's just useful monkey patches all the way down, and as we train more powerful models, they get more of the monkey patches, but without the fundamentals. The view in between would be that there are some monkey patches and some deep generalizations, but then I want to know whether the coherence arguments will bind to those kinds of agents.

No need to respond/deeply engage, but I'd appreciate if you let me know if the above comments were too nitpicky.

At some point, I looked at the base rate for discontinuities in what I thought was a random enough sample of 50 technologies. You can get the actual csv here. The base rate for big discontinuities I get is just much higher than the 5% that keeps being mentioned throughout the post.
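To make the comparison concrete, here is my own tally of the size column of the table below (counting rows with no size listed as "none", and ignoring the Yes/Maybe distinction):

```python
# Hand-count of the 50-technology sample by the "size of the (plausible)
# discontinuity" column in the table below.
counts = {"Big": 12, "Medium": 9, "Small": 11, "none listed": 18}

total = sum(counts.values())
assert total == 50  # sanity check: one entry per technology

big_rate = counts["Big"] / total
print(f"Big discontinuities: {big_rate:.0%}")  # 24%, vs. the 5% in the post
```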

Here are some of the discontinuities that I think can contribute more to this discussion: 
- One story on the printing press was that there was a hardware overhang from the Chinese having invented printing, but applying it to their much more difficult-to-print script. When applying similar methods to the Latin alphabet, printing suddenly became much more efficient. [note: probably wrong, see comment below]
- Examples of cheap physics hacks: The Bessemer process, activated sludge, de Laval nozzles, the Bayer + Hall–Héroult processes.

To overcome small inconveniences, I'm copying the whole csv from that post here:

| Technology | Is there plausibly a discontinuity? | Size of the (plausible) discontinuity |
|---|---|---|
| History of aviation | Yes. With the Wright brothers, who were more analytical and capable than any before them. | Big |
| History of ceramics | Probably not. | |
| History of cryptography | Yes. Plausibly with the invention of the one-time pad. | Medium |
| History of cycling | Yes. With its invention. The dandy horse (immediate antecessor to the bicycle) was invented in a period where there were few horses, but it could in principle have been invented much earlier, and it enabled humans to go much faster. | Small |
| History of film | Probably not. | |
| History of furniture | Maybe. Maybe with the invention of the chair. Maybe with the Industrial Revolution. Maybe in recent history with the invention of more and more comfy models of chairs (e.g., bean bags). | Small |
| History of glass | Yes. In cheapness and speed with the industrial revolution. | Medium |
| Nuclear history | Yes. Both with the explosion of the first nuclear weapon, and with the explosion of the (more powerful) hydrogen bomb. | Big |
| History of the petroleum industry | Yes. Petroleum had been used since ancient times, but it took off starting in ~1850. | Big |
| History of photography | Probably not. | |
| History of printing | Yes. With Gutenberg. Hardware overhang from having used printing for a more difficult problem: Chinese characters vs the Latin alphabet. | Big |
| History of rail transport | Yes. With the introduction of iron, then (Bessemer process) steel over wood, and the introduction of steam engines over horses. Great expansion during the Industrial Revolution. | Medium |
| History of robotics | Maybe. But the 18th–21st centuries saw more progress than the rest combined. | Small |
| History of spaceflight | Yes. With the beginning of the space race. | Big |
| History of water supply and sanitation | Yes. With the Industrial revolution and the push starting in the, say, 1850s to get sanitation in order (https://en.wikipedia.org/wiki/Great_Stink; https://en.wikipedia.org/wiki/Activated_sludge); the discovery/invention of activated sludge might also be another discontinuity. But I’d say it’s mostly the “let us, as a civilization, get our house in order” impulse that led to these inventions. | Medium |
| History of rockets | Yes. With Hale rockets, whose spinning made them more accurate. Then with de Laval nozzles (hypersonic rockets; went from 2% to 64% efficiency). Then plausibly with Germany’s V2 rocket (the German missile program cost levels comparable to the Manhattan project). | Big |
| History of artificial life | Probably not. | |
| History of calendars | Probably not. Maybe with the Khayyam calendar reform in 1079 in the Persian calendar, but it seems too precise to be true. “Because months were computed based on precise times of solar transit between zodiacal regions, seasonal drift never exceeded one day, and also there was no need for a leap year in the Jalali calendar. [...] However, the original Jalali calendar based on observations (or predictions) of solar transit would not have needed either leap years or seasonal adjustments.” | |
| History of candle making | Yes. With industrialization: “The manufacture of candles became an industrialized mass market in the mid 19th century. In 1834, Joseph Morgan, a pewterer from Manchester, England, patented a machine that revolutionized candle making. It allowed for continuous production of molded candles by using a cylinder with a moveable piston to eject candles as they solidified. This more efficient mechanized production produced about 1,500 candles per hour, (according to his patent ". . with three men and five boys [the machine] will manufacture two tons of candle in twelve hours"). This allowed candles to become an easily affordable commodity for the masses.” | Small |
| History of chromatography | Probably not. Any of the new types could have been one, though. | |
| Chronology of bladed weapons | Probably not. Though the Spanish tercios were probably discontinuous as an organization method around it. | |
| History of condoms | Probably not. | |
| History of the diesel car | Yes. In terms of efficiency: the diesel engine is much more efficient than the gasoline engine. | Medium |
| History of hearing aids | Probably not. | |
| History of aluminium | Yes. With the Bayer + Hall–Héroult processes in terms of cheapness. | Big |
| History of automation | Maybe. If so, with controllers in the 1900s, or with the switch to digital in the 1960s. Kiva Systems, used by Amazon, also seems to be substantially better than the competition: https://en.wikipedia.org/wiki/Amazon_Robotics | Medium |
| History of radar | Yes. Development was extremely fast during the war. | Big |
| History of radio | Yes. The first maybe-discontinuity was with Marconi realizing the potential of electromagnetic waves for communication, and his superior commercialization. The second discontinuity was a discontinuity in price as vacuum tubes were replaced with transistors, making radios much more affordable. | Big |
| History of sound recording | Maybe. There were different eras, and any of them could have had a discontinuity. For example, magnetic tape recordings were much better than previous technologies. | Small |
| History of submarines | Yes. Drebbel's submarine "seemed beyond conventional expectations of what science was thought to have been capable of at the time." It also seems likely that development was sped up during major conflicts (American Civil War, WW1, WW2, Cold War). | Small |
| History of television | Maybe. Work on television was banned during WW2 and picked up faster afterwards. Perhaps with the super-Emitron in the 1930s (“The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes and, in some cases, this ratio was considerably greater”). | Medium |
| History of the automobile | Yes. In speed of production with Ford. Afterwards maybe with the Japanese (i.e., Toyota). | Big |
| History of the battery | Maybe. There have been many types of batteries throughout history, each with different tradeoffs. For example, higher voltage and more consistent current at the expense of greater fragility, like the Poggendorff cell. Or the Grove cell, which offered higher current and voltage, at the expense of being more expensive and giving off poisonous nitric oxide fumes. Or the lithium-ion cell, which seems to just have been better, got its inventor a Nobel Prize, and shows a pretty big jump in terms of, say, voltage. | Small |
| History of the telephone | Probably not. If so, maybe with the invention of the automatic switchboard. | |
| History of the transistor | Maybe. Probably with the invention of the MOSFET; the first transistor which could be used to create integrated circuits, and which started Moore’s law. | Big |
| History of the internal combustion engine | Probably not. If so, jet engines. | |
| History of manufactured fuel gases | Probably not. | |
| History of perpetual motion machines | No. | |
| History of the motorcycle | Probably not. If there is, perhaps in price for the first Vespa in 1946. | |
| History of multitrack recording | Maybe. It is possible that Les Paul’s experimenting was sufficiently radical to be a discontinuity. | Small |
| History of nanotechnology | Probably not. | |
| Oscilloscope history | Probably not. However, there were many advances in the last century, and any of them could have been one. | |
| History of paper | Maybe. Maybe with Cai Lun at the beginning. Probably with the industrial revolution and the introduction of wood pulp with respect to cheapness. | Small |
| History of polymerase chain reaction | Yes. Polymerase chain reaction *is* the discontinuity; a revolutionary new technology. It enabled many other new technologies, like DNA evidence in trials, HIV tests, analysis of ancient DNA, etc. | Big |
| History of the portable gas stove | Probably not. | |
| History of the roller coaster | Probably not. | |
| History of the steam engine | Maybe. The Newcomen engine put together various disparate already existing elements to create something new. Watt’s various improvements also seem dramatic. Unclear about the others. | Medium |
| History of the telescope | Maybe. If so, maybe after the serendipitous invention/discovery of radio telescopy. | Small |
| History of timekeeping devices | Maybe. Plausibly with the industrial revolution in terms of cheapness, then with quartz clocks, then with atomic clocks in terms of precision. | Medium |
| History of wind power | Maybe. If there is, maybe https://en.wikipedia.org/wiki/History_of_wind_power#Danish_development (Tvindkraft). | Small |