This is a special post for quick takes by Nathan Young. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


I am trying to learn some information theory.

It feels like the bits of information between 50% and 25%, and between 50% and 75%, should be the same.

But for probability p, the information is -log2(p).

But then the information of .5 -> .25 is 1 bit, but from .5 to .75 it's .41 bits. What am I getting wrong?

I would appreciate blogs and youtube videos.

I might have misunderstood you, but I wonder if you're mixing up calculating the self-information or surprisal of an outcome with the information gain on updating your beliefs from one distribution to another.

An outcome which has probability 50% contains $-\log_2(0.5) = 1$ bit of self-information, and an outcome which has probability 75% contains $-\log_2(0.75) \approx 0.415$ bits, which seems to be what you've calculated.

But since you're talking about the bits of information between two probabilities, I think the situation you have in mind is that I've started with 50% credence in some proposition A, and ended up with 25% (or 75%). To calculate the information gained here, we need to find the entropy of our initial belief distribution, and subtract the entropy of our final beliefs. The entropy of our beliefs about A is $H(p) = -p \log_2(p) - (1-p) \log_2(1-p)$.

So for 50% -> 25% it's $H(0.5) - H(0.25) = 1 - 0.811 \approx 0.19$ bits.

And for 50% -> 75% it's $H(0.5) - H(0.75) = 1 - 0.811 \approx 0.19$ bits.

So your intuition is correct: these give the same answer.
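If you want to check this numerically, here is a minimal Python sketch contrasting the two quantities (the function names are my own, not from any standard library): the self-information of a single outcome, which differs between the two cases, and the entropy difference between the initial and final belief distributions, which is the same in both.

```python
import math

def self_information(p: float) -> float:
    """Surprisal of a single outcome with probability p, in bits."""
    return -math.log2(p)

def binary_entropy(p: float) -> float:
    """Entropy of a binary belief distribution (p, 1 - p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Self-information of single outcomes: these differ.
print(self_information(0.25))  # 2.0 bits
print(self_information(0.75))  # ~0.415 bits

# Information gained by updating from 50% credence: these match.
print(binary_entropy(0.5) - binary_entropy(0.25))  # ~0.189 bits
print(binary_entropy(0.5) - binary_entropy(0.75))  # ~0.189 bits (symmetric)
```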

If you or a partner have ever been pregnant and done research on what is helpful and harmful, feel free to link it here and I will add it to the LessWrong pregnancy wiki page. 

https://www.lesswrong.com/tag/pregnancy 

Epistemic status: written quickly, probably errors

Some thoughts on Manifund

  • To me it seems like it will be the GiveDirectly of regranting (perhaps along with NonLinear) rather than the GiveWell.
  • It will be capable of scaling rapidly (especially if regrantors who are dishing out a lot can be paid for their time). But it's not clear to me that scaling is the bottleneck for granting orgs.
  • There are benefits to centralised/closed systems. Just as GiveWell makes choices for people and so delivers 10x returns, I expect that Manifund will do worse, on average, than OpenPhil, which has centralised systems and centralised theories of impact.
  • Not everyone wants their grant to be public. If you have a sensitive idea (easy to imagine in AI), you may not want to publicly announce that you're trying to get funding.
  • As with GiveDirectly, there is a real benefit of ~dignity/~agency. I guess this is mostly vibes, but vibes matter. I can imagine crypto donors in particular finding a transparent system with individual portfolios much more attractive than OpenPhil. I can imagine that making a big difference on net.
  • It's notable that the donors aren't public. I'm not being snide; I just mean it's interesting to me given the transparency of everything else.
  • I love mechanism design. I love prizes, I love prediction markets. So I want this to work, but the base rate for clever mechanisms outcompeting bureaucratic ones seems low. Perhaps, though, this finds a way to deliver and then outcompetes at scale (which seems to be my theory for how GiveDirectly might end up outcompeting GiveWell).

Am I wrong? 

No Petrov Day? I am sad.

The best way to honor Stanislav Petrov is to arrange systems not to need Stanislav Petrov.

There is an ongoing Petrov Day poll. I don't know if everyone on LW is being polled.

Why you should be writing on the LessWrong wiki.

There is way too much to read here, but if we all took pieces and summarised them in their respective tags, then we'd have a much denser resource that would be easier to understand.

There are currently no active editors, no way of directing sufficient-for-this-purpose traffic to new edits, and, on the UI side, no way to undo an edit (an essential wiki feature). So when you write a large wiki article, it's left as you wrote it, and it's not going to be improved. For posts, tag-related review happens through voting on posts and their relevance, and even that is barely sufficient to make good, relevant posts visible under their tags. But at least there is some sort of signal.

I think your article on Futarchy illustrates this point. So a reasonable policy right now is to keep all tags short. But without established norms that live in the minds of active editors, it's not going to be enforced, especially against large edits that are written well.

Thanks for replying.

Would you revert my Futarchy edits if you could?

I think reversion is kind of overpowered. I'd prefer reverting chunks.

I don't see the logic that says we should keep tags short. That just seems less useful.

I don't see the logic that says we should keep tags short.

The argument is that with the current level of editor engagement, only short tags have any chance of actually getting reviewed and meaningfully changed if that's called for. It's not about the result of a particular change to the wiki, but about the place where the trajectory of similar changes plausibly takes it in the long run.

I think reversion is kind of overpowered.

A good thing about the reversion feature is that reversion can itself be reverted, so it's not as final as when it's inconvenient to revert the reversions. This makes edit wars more efficient and more likely to converge on a consensus framing rather than end with one side giving up in exhaustion.

Would you revert my Futarchy edits if you could?

The point is that absence of the feature makes engagement with the wiki less promising, as it becomes inconvenient and hence infeasible in practice to protect it in detail, and so less appealing to invest effort in it. I mentioned that as a hypothesis for explaining currently near-absent editor engagement, not as something relevant to reverting your edits.

Reverting your edits would follow from a norm that says such edits are inappropriate. I think this norm would be good, but it's also clearly not present, since there are no active editors to channel it. My opinion here only matters as much as the arguments around it convince you or other potential wiki editors; the fact that I hold this opinion shouldn't in itself have any weight. (So to be clear, currently I wouldn't revert the edits if I could. I would revert them only if there were active editors and they overall endorsed the norm of reverting such edits.)

I did a quick community poll - Community norms poll (2 mins) 

I think it went pretty well. What do you think next steps could/should be? 

Here are some points with a lot of agreement.

Things I would do dialogues about:

(Note: I may change my mind during these discussions, but if I do so I will say I have.)

  • Prediction is the right frame for most things
  • Focus on world states not individual predictions
  • Betting on wars is underrated
  • The UK House of Lords is okay actually
  • Immigration should be higher but in a way that doesn't annoy everyone and cause backlash

I appreciate reading women talk about what is good sex for them. But it's a pretty thin genre, especially writing with any kind of research behind it.

So I'd recommend this (though it is paywalled): 

https://aella.substack.com/p/how-to-be-good-at-sex-starve-her?utm_source=profile&utm_medium=reader2

Also I subscribed to this for a while and it was useful:

https://start.omgyes.com/join

You don't want to warn us that it is behind a paywall?

I didn't think it was relevant, but happy to add it.

I suggest that rats should use https://manifold.love/ as the Schelling dating app. It has long profiles and you can bet on other people getting on.

What more could you want!

I am somewhat biased because I've bet that it will be a moderate success.

Relative Value Widget

It gives you sets of donations and you have to choose which you prefer. If you want, you can add more at the bottom.

https://allourideas.org/manifund-relative-value 

Other things I would like to be able to express anonymously on individual comments:

  • This is poorly framed - Sometimes I want to neither agree nor disagree. I think the comment is orthogonal to reality, and both agreement and disagreement would push away from the truth.
  • I don't know - If a comment is getting a lot of agreement/disagreement, it would also be interesting to see whether there is a lot of uncertainty.

It's a shame the wiki doesn't support the draft google-docs-like editor. I wish I could make in-line comments while writing.