This is part 1 of a series of posts on principal-agent problems that I initially planned to write as one massive post last summer. As that task quickly became overwhelming, I decided to break it into smaller posts so that I can still cover each of the cases and mechanisms I intended to.

Overall, I think the trade-off between the alignment of agents and the competence of agents can explain a lot of problems to which people often think there are simple answers. The less capable an agent is (whether the agent is a person, a bureaucracy, or an algorithm), the easier it is for a principal to assess the agent and ensure the agent is working toward the principal’s goals. As agents become more competent, they become both more capable of actually accomplishing the principal’s goals and more capable of merely appearing to accomplish them while pursuing their own. In debates over policy changes, I often find one-sided arguments that neglect this trade-off, and in general I think efforts to improve policies or the bureaucratic structures of companies, non-profits, and governments should be informed by it.


Part 1:

Virtue signaling and moralistic anger have both been useful forces for holding people accountable, and they are powerful mechanisms of cultural evolution: they spread some norms more successfully than others, and they have led many societies to hold similar norms.

However, the larger a group becomes, the less its members know, on average, about any other individual member’s behavior or its consequences, which makes it harder to evaluate complex actions. This in turn gives an advantage to clearer forms of signaling that are more inefficient and costly than those that would be sustainable in smaller groups.

Examples:

  • While it would be efficient for a politician to accept money from competing special interest groups and keep their behavior consistent with their constituents’ interests regardless, it is simpler for politicians to convince allies they aren’t corrupt by not accepting money from political opponents at all.
  • With more complex tax codes, governments can implement more economically efficient Pigouvian taxes, which increase economic growth by concentrating more and more of the tax burden on actions that produce negative externalities for others. However, the power to assess and tax negative externalities gives those who influence the tax code the opportunity to shape it to their own advantage at the expense of others.
  • While a police officer could accept bribes and enforce the law anyway, unless you have a lot of information on the officer, you wouldn’t be convinced that there weren’t cases where the officer was looking the other way. Likewise, people probably don’t trust the objectivity of police departments with the power of civil asset forfeiture, even if such power can be used to reduce the tax burden of funding the police and to create stronger deterrent effects on crime.
  • More pacifistic states are more credible in not holding hostile expansionist intentions than defensive states, which are in turn more credible than states that take offensive or pre-emptive action.
  • It is simpler for someone to be vegan than to explain a series of edge cases about animal welfare/consciousness, or a strategy of eating meat only when doing so doesn’t increase demand for meat. It would also look pretty suspicious for vegans to try selling wasted meat, even though doing so would undercut meat producers.
  • It may be more efficient for you to work from home or to shift your hours on fairly autonomous work to match the times when you are most productive, but employers generally require explanations and seek to avoid giving their employees room to slack off.
  • Nepotistic hiring enables employers to ensure the alignment of employees via additional information on prospective employees and leverage over their social capital. More meritocratic hiring is better for society; however, the more candidates there are to assess, the harder it becomes to investigate the merit of each. Accordingly, the larger a competing population of candidates is, the more their education will focus on winning signaling competitions, and the less it will focus on gaining skills that are difficult to demonstrate quickly.

In summary, many actions are more directly efficient and selfishly beneficial for those who take them, but because they are not credible signals of good intent (they are the same excuses the corrupt would use), they are not sustainable in larger societies. Small groups where people know each other well, on the other hand, can sustain weirder norms without corruption because of their increased ability to vet each other. This may also explain why smaller groups in history often sustained norms of exploiting defectors or outsiders that wouldn’t be sustainable in larger societies, where you can’t tell whether someone is robbing a thief or an innocent person. Reducing attempts at exploitation between small competing groups of insiders is likewise probably a good thing for scaling up societies.

In general, these signaling costs come from scenarios where people’s interests may not align, and the costs are paid to demonstrate alignment. Without efficient mechanisms for assessing and vetting each other, groups lose trust as they scale, and more costly signaling becomes required to sustain cooperation.
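To make that scaling intuition concrete, here is a minimal toy model of my own (not something derived from data): each member can personally vet only a fixed number of others, so as the group grows, a shrinking fraction of interactions can rely on first-hand knowledge and the rest fall back on a costly signal. The parameters K and C below are assumed, arbitrary illustrative values.

```python
# Toy model: each member can personally vet at most K others, so in a group of
# size N only roughly K/(N-1) of interactions can rely on first-hand knowledge.
# The rest fall back on a clear-but-wasteful signal costing C, so the expected
# signaling cost per interaction rises toward C as the group grows.

K = 50    # how many other members one person can directly vet (assumed)
C = 1.0   # cost of the costly signal, in arbitrary units (assumed)

def expected_signal_cost(n: int) -> float:
    """Expected signaling cost per interaction in a group of size n."""
    p_vetted = min(1.0, K / max(n - 1, 1))  # chance your counterpart can vet you directly
    return (1.0 - p_vetted) * C             # otherwise the costly signal gets paid

for n in (10, 50, 200, 1_000, 10_000):
    print(f"group size {n:>6}: expected signaling cost {expected_signal_cost(n):.2f}")
```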

Comments:

Wish I had this post to reference when I wrote "AGI will drastically increase economies of scale".

Yeah, when you can copy the same value function across all the agents in a bureaucracy, you don't have to pay signaling costs to scale up. Alignment problems become more about access to information than about misaligned goals.

I find it interesting to compare this post to Robin Hanson's "Who Likes Simple Rules?". In your post, when people's interests don't align, they have to switch to a simple/clear mechanism to demonstrate alignment. In Robin Hanson's post, people's interests "secretly align", and it is the simple/clear mechanism that isn't aligned, so people switch to subtle/complicated mechanisms to preserve alignment. Overall I feel pretty confused about when I should expect norms/rules to remain complicated or become simpler as groups scale.

I am a little confused about the large group sizes for some of your examples. For example, the vegan one doesn't seem to depend on a large group size: even among one's close friends or family, one might not want to bother explaining all the edge cases for when one will eat meat.

I think those two cases are pretty compatible. The simple rules seem to get formed due to the pressures created by large groups, but there are still smaller sub-groups within large groups that can benefit from getting around the inefficiency caused by the rules, so they coordinate to bend the rules.

Hanson also has an interesting post on group size and conformity: http://www.overcomingbias.com/2010/10/towns-norm-best.html

In the vegan case, it is easier to explain things to a small number of people than to a large number of people, even though it may still not be worth your time with small numbers of people. It's easier to hash out an argument with one family member than to do something your entire family will impulsively think is hypocritical during Thanksgiving.