This review originally appeared on the blog Astral Codex Ten as part of a contest. You can now read it here, with working footnotes. There's even an audio version as part of the ACX podcast; the footnotes are all read out at the end.
Shasta County, northern California, is a rural area home to many cattle ranchers.1 It has an unusual legal feature: its rangeland can be designated as either open or closed. (Most places in the country pick one or the other.) The county board of supervisors has the power to close range, but not to open it. When a range closure petition is circulated, the cattlemen have strong opinions about it. They like their range open.
If you ask why, they'll tell you it's because of what happens if a motorist hits one of their herd. In open range, the driver should have been more careful; "the motorist buys the cow". In closed range, the rancher should have been sure to fence his animals in; he compensates the motorist.
They are simply wrong about this. Range designation has no legal effect on what happens when a motorist hits a cow. (Or, maybe not quite no effect. There's some, mostly theoretical, reason to think it might make a small difference. But certainly the ranchers exaggerate it.) When these cases go to court, ranchers either settle or lose, and complain that lawyers don't understand the law.
Even if they were right about the law, they have insurance for such matters. They'll tell you that their insurance premiums will rise if the range closes, but insurers don't adjust their rates at that level of granularity. One major insurer doesn't even adjust its rates between Shasta County and other counties in California. They might plausibly want to increase their coverage amount, but the cost of that is on the order of $10/year.
No, the actual effect range designation has is on what happens when a rancher's cow accidentally trespasses on someone else's land. In closed range, the owner is responsible for fencing in his cattle. If they trespass on someone else's land, he's strictly liable for any damage they cause. In open range, the landowner is responsible for fencing the cattle out; the cattle owner is only liable for damages if the land was entirely fenced or if he took them there deliberately. (Law enforcement also has more power to impound cattle in closed range, but most years they don't do that even once.)
The cattlemen mostly don't understand this detail of the law. They have a vague grasp of it, but it's even more simplified than the version I've just given. And they don't act upon it. Regardless of range designation, they follow an informal code of neighborliness. According to them, it's unneighborly to deliberately allow your cattle to trespass; but it's also unneighborly to make a fuss when it does happen. The usual response is to call the owner (whom you identify by brand) and let him know. He'll thank you, apologize, and drive down to collect it. You don't ask for compensation.
Or, sometimes it would be inconvenient for him to collect it. If his cow has joined your herd, it's simpler for it just to stay there until you round them up. In that case, you'll be feeding someone else's cow, possibly for months. The expense of that is perhaps $100, a notable amount, but you still don't ask for compensation.
Sometimes a rancher will fail to live up to this standard of neighborliness. He'll be careless about fencing in his cattle, or slow to pick them up. Usually the victims will gossip about him, and that's enough to provoke an apology. If not, they get tougher. They may drive a cow to somewhere it would be inconvenient to collect - this is questionably legal. They might threaten to injure or kill the animal. They might actually injure or kill it - this is certainly illegal, but they won't get in trouble for it.
They almost never ask for money, and lawyers only get involved in the most exceptional circumstances (the author found two instances of that happening). When someone does need to pay a debt, he does so in kind: "Should your goat happen to eat your neighbor's tomatoes, the neighborly thing for you to do would be to help replant the tomatoes; a transfer of money would be too cold and too impersonal."2 Ranchers do keep rough mental account of debits and credits, but they allow these to be settled long term and over multiple fronts. A debt of "he refused to help with our mutual fence" might be paid with "but he did look after my place while I was on holiday".
(This is how ranchers deal with each other. Ranchette3 owners will also sometimes complain to public officials, who in turn talk to the cattle owner. They'll sometimes file damage claims against the rancher's insurance. It's ranchette owners who are responsible for range closure petitions.)
Range designation also doesn't affect the legal rules around building and maintaining fences. But it does change the meaning of the fences themselves, so maybe it would change how cattlemen handle fencing? But again, no. Legally, in some situations neighbors are required to share fence maintenance duties, and sometimes someone can build a fence and later force his neighbor to pay some of the cost. The cattlemen don't generally know this, and would ignore it if they did. They maintain fences unilaterally; if one of them doesn't do any work for years, the other will complain at them. If they want to build or upgrade a fence, they'll talk to their neighbor in advance, and usually figure out between them a rough way to split the material costs and labor in proportion to how many cattle each has near the fence. (Crop farmers aren't asked to pay to keep the ranchers' animals out.) Occasionally they can't reach an agreement, but this doesn't cause much animosity. This is despite the fact that fences cost thousands of dollars per mile to build, and half a person-day per mile per year to maintain.
So this is a puzzle. Range designation is legally relevant with regard to cattle trespass, but it doesn't change how ranchers act in that regard. Range designation is not legally relevant to motor accidents, and ranchers have no reason to think it is; but that's why they ostensibly care about it.
(And it's not just words. Many of them act on their beliefs. We can roughly divide cattlemen into "traditionalists who don't irrigate and can't afford fences" and "modernists who irrigate and already use fences" - by improving pasture, irrigation vastly decreases the amount of land needed. After a closure, traditionalists drop their grazing leases in the area. Modernists, like traditionalists, oppose closures, but they don't react to them if they pass.)
What's up with this? Why do the cattlemen continue to be so wrong in the face of, you know, everything?
Order Without Law: How Neighbors Settle Disputes is a study of, well, its subtitle. The author, Robert Ellickson, is a professor and legal scholar. He comes across as a low-key anarchist, and I've seen him quoted at length on some anarchist websites, and I wouldn't be surprised to learn that he's just a full-blown anarchist. He doesn't identify as one explicitly, at least not here, and he does respect what states bring to the table. He just wishes people would remember that they're not the only game in town. Part of the thesis of the book could be summed up (in my words, not his) as: we credit the government with creating public order, but if you look, it turns out that people create plenty of public order that has basically nothing to do with the legal system. Sometimes there is no relevant law, sometimes the order predates the law, and sometimes the order ignores the law. More on this later.
Part one is an in-depth exploration of Shasta County that I found fascinating, and that I've only given in very brief summary. He goes into much more detail about basically everything.4
One oversight is that it's not clear to me how large the population Ellickson studied is. Given that it's a case study for questions of groups maintaining order, I think the size of the group matters a lot. For example, according to Wikipedia on Dunbar's number: "Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group. It has been proposed to lie between 100 and 250, with a commonly used value of 150."
Does Shasta County support that? I think not, but it's hard to say. Ellickson admits he doesn't know the population size of the area he studied. (It's a small part of a census division whose population was 6,784 in 1980, so that's an upper bound.) But I feel like he could have been a lot more helpful. Roughly how many ranchers are there, how many ranchette owners, and how many farmers? (I think most of the relevant people are in one of those groups. I'm not sure to what extent we should count families as units. I'm not sure how many people in the area are in none of those groups.) Overall I'd guess we're looking at perhaps 300-1000 people over perhaps 100-300 families, but I'm not confident.
(I tracked down the minutes of the Shasta County Cattlemen's Association, and they had 128 members in June 2011. I think "most ranchers are in the Association but ranchette owners and farmers generally aren't" is probably a decent guess. But that's over twenty years later, so who knows what changed in that time.)
Near the end of part one, Ellickson poses the "what's up with this?" question. Why are the cattlemen so wrong about what range designation means?
His answer is that it's about symbolism. Cattlemen like to think of themselves as being highly regarded in society. But as Shasta County urbanizes, that position is threatened. A closure petition is symbolic of that threat. Open range gives cattlemen more formal rights, even if they don't take advantage of them. It marks them as an important group of people, given deference by the law. So if the range closes, that's an indication to the whole county that cattlemen aren't a priority.
They care about this sort of symbolism - partly because symbols have instrumental value, but also just because people care about symbols inherently. But you can't admit that you care about symbols, because that shows insecurity. So you have to make the battle about something instrumental, and they develop beliefs which allow them to do so. They're fairly successful, too - there haven't been any closures since 1973. (Though I note that Ellickson documents only one attempted closure in that time. It was triggered by a specific rogue cattleman who left the area soon after. It sounds like there may have been other petitions that Ellickson doesn't discuss, but I have no idea how many, what triggered them, or how much support they got. So maybe it's not so much that the cattlemen are successful as that no one else really cares.)
As for how they remain wrong - it simply isn't costing them enough. It costs them some amount, to be sure. It cost one couple $100,000 when a motorist hit three cattle in open range. They didn't have enough liability insurance, and if they'd understood the law, they might have done. But the question is whether ignorant cattlemen will be driven out of work, or even just outcompeted by knowledgeable ones. This mistake isn't nearly powerful enough for that. Nor does anyone else have much incentive to educate them about what range designation actually means. So they remain uneducated on the subject.
This all seems plausible enough, though admittedly I'm fairly predisposed to the idea already. For someone who wasn't, I feel like it probably wouldn't be very convincing, and it could stand to have more depth. (Though it's not the focus of the work, so I hope they'd forgive that.) I'd be curious to know more about the couple who didn't have enough insurance - did they increase their insurance afterwards, and do they still think the motorist buys the cow? Did that case encourage anyone else to get more insurance? It seems like the sort of event that could have triggered a wide-scale shift in beliefs.
(Is this just standard stuff covered in works like the Sequences (which I've read, long ago) and Elephant in the Brain (which I haven't)? I'm not sure. I think it's analyzing on a different level than the Fake Beliefs sequence - that seems like more "here's what's going on in the brain of an individual" and this is more "here's what's going on in a society". Also, remember that it long predates those works.)
A counterpoint might be… these cases aren't all that common, and don't usually go to court, and when they do they're usually settled (on the advice of lawyers) instead of ruled. And "lawyers don't understand this specific part of the law" isn't all that implausible. So although the evidence Ellickson presents is overwhelming that the cattlemen are wrong, I'm not sure I can fault the cattlemen too hard for not changing their minds.
Part one was mostly a case study, with some theorizing. It kind of felt like it was building towards the "what's up with this?" question for part two, but instead it gave a brief answer at the end. Part two is a different style and focus: about evenly split between theorizing and several smaller case studies. We're explicitly told this is what's going to happen, but still, it's a little jarring.
Ellickson spends some time criticizing previous theories and theorists of social control, which he divides broadly into two camps.
His own background is in the law-and-economics camp5, which studies the law and its effects in terms of economic theory. Among other things, this camp notably produced the Coase theorem.6 But law-and-economics theorists tend to put too much emphasis on the state. Hobbes' Leviathan is a classic example:
Hobbes apparently saw no possibility that some nonlegal system of social control - such as the decentralized enforcement of norms - might bring about at least a modicum of order even under conditions of anarchy. (The term anarchy is used here in its root sense of a lack of government, rather than in its colloquial sense of a state of disorder. Only a legal centralist would equate the two.)
But Coase fell into this trap too:
Throughout his scholarly career, Coase has emphasized the capacity of individuals to work out mutually advantageous arrangements without the aid of a central coordinator. Yet in his famous article "The Problem of Social Cost," Coase fell into a line of analysis that was wholly in the Hobbesian tradition. In analyzing the effect that changes in law might have on human interactions, Coase implicitly assumed that governments have a monopoly on rulemaking functions. … Even in the parts of his article where he took transaction costs into account, Coase failed to note that in some contexts initial rights might arise from norms generated through decentralized social processes, rather than from law.
As have others:
Max Weber and Roscoe Pound both seemingly endorsed the dubious propositions that the state has, and should have, a monopoly on the use of violent force. In fact, as both those scholars recognized elsewhere in their writings, operative rules in human societies often authorize forceful private responses to provocative conduct.
(See what I mean about coming across as a low-key anarchist?)
There's plenty of evidence refuting the extreme version of this camp. We can see that social norms often override law in people's actions. (The Norwegian Housemaid Law of 1948 imposed labor standards that were violated by the employers in almost 90% of households studied, but no lawsuits were brought under it for two years.) People often apply nonlegal sanctions, like gossip and violence. ("Donald Black, who has gathered cross-cultural evidence on violent self-help, has asserted that much of what is ordinarily classified as crime is in fact retaliatory action aimed at achieving social control.") Even specialists often don't know the law in detail as it applies to their speciality. (The "great majority" of California therapists thought the Tarasoff decision imposed stronger duties than it actually did.) And people just don't hire attorneys very often. We saw examples of all of these in Shasta County as well; part one can be seen as a challenge to the law-and-economics camp.
The other camp is law-and-society, emphasizing that the law exists as just one part in the broader scheme of things. These scholars tend to have a more realistic view of how the legal system interacts with other forms of control, but they've been reluctant to develop theory. They often just take norms as given, rather than trying to explain them. The theories they have developed are all flawed, although Ellickson thinks functionalism is on the right track. (This is the idea that norms develop which help a group to survive and prosper.) Ellickson explicitly describes part two as a "gauntlet" thrown towards law-and-society.
(Also, some law-and-society scholars go too far in the other direction, thinking that the legal system is ineffectual. They're just as mistaken. See7: Muslim Central Asia after the Russian Revolution; US civil rights laws in the 50s and 60s; range closure in Shasta County; "that the allocation of legal property rights in the intertidal zone affects labor productivity in the oyster industry, that the structure of workers' compensation systems influences the frequency of workplace fatalities, and that the content of medical malpractice law affects how claims are settled." [Footnotes removed.])
Ellickson has his own theory of norms, which he formed after studying Shasta County. The main thrust of part two is to elaborate and defend it:
Members of a close-knit group develop and maintain norms whose content serves to maximize the aggregate welfare that members obtain in their workaday affairs with one another. … Stated more simply, the hypothesis predicts that members of tight social groups will informally encourage each other to engage in cooperative behavior. [Emphasis original; footnotes removed.]
(He doesn't name this theory, calling it simply "the hypothesis". I admire that restraint, but I kind of wish I had a name to refer to it by.)
Ellickson makes sure to clarify and caveat the hypothesis here, so that we don't interpret it more strongly than he intends. But before looking at his clarifications, I'm going to jump ahead a little, and look at an example he uses of the hypothesis in action.
Consider the Shasta County norm that a livestock owner is strictly responsible for cattle trespass damages. The hypothesis is that this norm is welfare-maximizing. To test that, we have to compare it to alternatives. One alternative would be non-strict liability. Another would be that trespass damages are borne by the victim.
Compared to a negligence standard, strict liability requires less investigation but triggers more sanctions. (Apparently there's a "premise that strict-liability rules and negligence rules are equally effective at inducing cost-justified levels of care", but Ellickson doesn't really explain this.) In Shasta County, the sanctions have basically no transaction costs, since they're just neighbors adjusting mental accounts. So strict liability it is.
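The premise Ellickson leaves unexplained - that strict liability and negligence induce the same level of care - can be illustrated with a toy model (all numbers invented, not from the book): a rancher chooses a care level, more care costs more but lowers expected trespass damage, and we check what a self-interested rancher picks under each rule.

```python
# Toy model with invented numbers: a rancher picks a care level;
# more care costs more but lowers expected trespass damage.
CARE_COST = {0: 0, 1: 40, 2: 90}          # cost of each care level
EXPECTED_DAMAGE = {0: 100, 1: 30, 2: 20}  # expected damage at each level

def social_cost(care):
    return CARE_COST[care] + EXPECTED_DAMAGE[care]

# Socially optimal ("due") care minimizes total cost.
due_care = min(CARE_COST, key=social_cost)

# Strict liability: the rancher pays the damage regardless, so he bears
# the full social cost and minimizes it himself.
strict_choice = min(CARE_COST, key=social_cost)

# Negligence: the rancher pays damages only if he fell below due care,
# so his private cost includes damage only when he is careless.
def negligence_private_cost(care):
    return CARE_COST[care] + (EXPECTED_DAMAGE[care] if care < due_care else 0)

negligence_choice = min(CARE_COST, key=negligence_private_cost)

print(due_care, strict_choice, negligence_choice)  # 1 1 1
```

Under either rule the rancher ends up at the efficient care level; the rules differ in the machinery around that outcome (negligence requires investigating whether due care was met, strict liability triggers a sanction for every trespass), which is presumably why the comparison turns on transaction costs rather than on incentives.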
To be welfare maximizing, costs should be borne by whoever can avoid them most cheaply. In this case that's the ranchers; I'm not sure I fully buy Ellickson's argument, but I think the conclusion is probably true.8
So Ellickson argues that the Shasta County trespass norms support the hypothesis.9 He also makes a prediction here that things were different in the mid-nineteenth century. "During the early history of the state of California, irrigated pastures and ranchettes were rare, at-large cattle numerous, and motorized vehicles unknown. In addition, a century ago most rural residents were accustomed to handling livestock. Especially prior to the invention of barbed wire in 1874, the fencing of rangelands was rarely cost-justified. In those days an isolated grower of field crops in Shasta County, as one of the few persons at risk from at-large cattle, would have been prima facie the cheaper avoider of livestock damage to crops." And so the farmer would have been responsible for fencing animals out, and borne the costs if he failed to.
Before we go further, let's look at Ellickson's clarifications. It's important to know what the hypothesis doesn't say.
Ellickson emphasizes that it's descriptive, not normative; it's not a recommendation that norms should be used in preference to other forms of social control. Not all groups are close-knit; welfare isn't the only thing people might want to optimize for; and norms of cooperation within a group often come at the expense of outsiders.
He also emphasizes that a loose reading would give a much stronger version of the hypothesis than he intends. The terms "close-knit", "welfare" and "workaday affairs" are all significant here, and Ellickson explains their meanings in some depth. In order of how much I want to push back against them:
A "close-knit" group is one where "informal power is broadly distributed among group members and the information pertinent to informal control circulates easily among them." This is admittedly vague, but unavoidably so. Rural Shasta County residents are close-knit, and residents of a small remote island are even closer-knit. Patrons of a singles bar at O'Hare Airport are not. Distributed power allows group members to protect themselves and their property, and to personally enforce sanctions against those who wrong them. Information gives people reputations; it allows for punishing people who commit small wrongs against many group members, and for rewarding people who perform those punishments.
Notably, a close-knit group need not be small or exclusive. Someone can be a member of several completely nonoverlapping close-knit groups at once (coworkers, neighborhood, church). And although a small population tends to increase close-knittedness through "quality of gossip, reciprocal power, and ease of enforcement", the size itself has no effect. This is where I think it would be really nice to know how large the relevant population in Shasta County is - as the major case study of the book, it could lend a lot of weight to the idea that large populations can remain close-knit and the hypothesis continues to apply.
"Workaday affairs" means to assume that there's a preexisting set of ground rules, allowing group members to hold and trade personal property. (Which also requires, for example, rules against murder, theft and enslavement.) This is necessary because to calculate welfare, we need some way to measure people's values, and we can only do that if people can make voluntary exchanges. The hypothesis doesn't apply to those rules. Seems like a fair restriction.
A little more hackily, it also doesn't apply to "purely distributive" norms, like norms of charity. If you take wealth from one person and give it to another, the transfer process consumes resources and creates none, reducing aggregate welfare. (This is assuming Ellickson's strict definition of welfare, which he's explained by now but I haven't. Sorry.) But clearly norms of charity do exist. There are theories under which they do enhance welfare (through social insurance, or reciprocity). But those might be too simplistic, so Ellickson thinks it prudent to just exclude charity from the hypothesis.
Actually, he goes further than that. He cites Mitch Polinsky (An Introduction to Law and Economics) arguing that for a legal system, the cheapest way to redistribute wealth is (typically) through tax and welfare programs. And so, Polinsky argues, most legal doctrine should be shaped by efficiency concerns, not redistribution. That is, areas like tort and contract law should focus on maximizing aggregate welfare. In a dispute between a rich and a poor person, we shouldn't consider questions like "okay, but the poor person has much more use for the money". In such disputes we should assume the same amount of wealth has equal value whoever's hands it's in, and the point is just to maximize total wealth. Then, if we end up with people having too little wealth, we have a separate welfare system set up to solve that problem.
I can buy that. Ellickson doesn't actually present the argument himself, just says that Polinsky's explained it lucidly, but sure. Stipulated.
Ellickson assumes that the same argument holds for norms as it does for law. Not only that, he assumes that norm-makers subscribe to that argument.10 That… seems like a stretch.
But granted that assumption, norms would follow a similar pattern: most norms don't try to be redistributive, and if redistribution is necessary, there would be norms specifically for that. For example, the hypothesis predicts "that a poor person would not be excused from a general social obligation to supervise cattle, and that a rich person would not, on account of his wealth, have greater fencing obligations."
That seems entirely reasonable to me, and it's consistent with Shasta County practice. And actually, I don't think we need the strong assumption to get this kind of pattern? It's the kind of thing that plausibly could happen through local dynamics. I would have been happy if Ellickson had just assumed the result, not any particular cause for it. This is a fairly minor criticism though.
(It's a little weird. Normally I expect people to try to sneak in strong assumptions that are necessary for their arguments. Ellickson is explicitly flagging a strong assumption that isn't necessary.)
(I'm not sure the phrase "workaday affairs" was the best way to point at these restrictions. I think I see where he's coming from, but the name doesn't hook into the concept very well for me. But that's minor too.)
This gets its own section because apparently I have a lot to say about it.
The point of "welfare" maximization is to avoid subjectivity problems with utility maximization. I can work to satisfy my own preferences because I know what they are. But I don't have direct access to others' preferences, so I can't measure their utility and I can't work to maximize it.
In economics, the concepts of Pareto efficiency and Kaldor-Hicks efficiency both work with subjective valuations: people can just decide whether a particular change would make them better off or not. That works fine for people making decisions for themselves or entering voluntary agreements with others.
But third-party controllers are making rules that bind people who don't consent. They're making tradeoffs for people who don't get to veto them. And they can't read minds, so they don't know people's subjective utilities.
They could try to measure subjective utilities. Market prices are a thing - but at best, they only give the subjective preferences of marginal buyers and sellers. (That is, if I buy a loaf of bread for $1, I might still buy it for $2 and the seller might still sell it for $0.50.) And not everything is or can be bought and sold. We can slightly improve on this with, for example, the concept of shadow prices, but ultimately this just isn't going to work.
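A quick sketch of why prices only reveal marginal valuations (invented numbers, my example rather than Ellickson's): in a small market for identical loaves, the clearing price is pinned down by the marginal buyer-seller pair, so the inframarginal buyer who'd happily have paid $3 looks exactly like everyone else.

```python
# Invented willingness-to-pay and seller costs for identical loaves of bread.
buyers  = sorted([3.00, 2.00, 1.10, 0.80], reverse=True)  # max each would pay
sellers = sorted([0.30, 0.50, 0.90, 1.50])                # min each would accept

# Trades happen while the keenest remaining buyer values a loaf above the
# cheapest remaining seller's cost.
trades = 0
while trades < min(len(buyers), len(sellers)) and buyers[trades] > sellers[trades]:
    trades += 1

# The clearing price must sit between the marginal seller's cost and the
# marginal buyer's valuation; everyone else's surplus is invisible to it.
price_low, price_high = sellers[trades - 1], buyers[trades - 1]
print(trades, price_low, price_high)  # 3 0.9 1.1
```

Three loaves trade at a price somewhere around a dollar, and nothing in that price records that the first buyer valued bread at $3 - which is the sense in which prices undercount the subjective preferences of inframarginal participants.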
(Ellickson doesn't consider just asking people for their preferences. But that obviously doesn't work either because people can lie.)
And so third-party controllers need to act without access to people's subjective preferences, and make rules that don't reference them. Welfare serves as a crude but objective proxy to utility.
We can estimate welfare by using market prices, and looking at voluntary exchanges people have made. (Which is part of the reason for the "workaday affairs" restriction.) When a fence-maintenance credit is used to forgive a looking-after-my-house debit, that tells us something about how much one particular person values those things. This process is "sketchy and inexact", and we just admitted it doesn't give us subjective utilities - but that doesn't mean we can do any better than that.
To be clear, welfare doesn't just count material goods. Anything people might value is included, "such as parenthood, leisure, good health, high social status, and close personal relationships." Ellickson sometimes uses the word "wealth", and while he's not explicit about it, I take that to be the material component of welfare.
What welfare doesn't consider, as I understand it, is personal valuations of things. That is, for any given thing, its value is assumed to be the same for every member of society. "As a matter of personal ethics, you can aspire to do unto others as you would have them do unto you. Because norm-makers don't know your subjective preferences, they can only ask you to do unto others as you would want to have done unto you if you were an ordinary person."
Ellickson doesn't give examples of what this means, so I'll have to try myself. In Shasta County, there's a norm of not getting too upset when someone else's cattle trespass on your land, provided they're not egregious about it. So I think it's safe to suppose that the objective loss in welfare from cattle trespass in Shasta County is low. Suppose, by some quirk of psychology, you found cattle trespass really unusually upsetting. Or maybe you have a particular patch of grass that has sentimental value to you. Cattle trespass would harm your utility a lot, but your welfare only a little - no more than anyone else's - and you'd still be bound by this norm. But if you had an objective reason to dislike cattle trespass more - perhaps because you grow an unusually valuable crop - then your welfare would be harmed more than normal. And so norms might be different. One Shasta County rancher reported that he felt more responsibility than normal to maintain a fence with a neighbor growing alfalfa.
Or consider noisiness and noise sensitivity. Most people get some amount of value from making noise - or maybe more accurately, from certain noisy actions. Talking on the phone, having sex, playing the drums. And most people get some amount of disvalue from hearing other people's noise. In the welfare calculus, there'd be some level of noisemaking that's objectively valued equal to some level of noise exposure. Then (according to hypothesis, in a close-knit group) norms would permit people to be that amount of noisy. If someone was noisier than that, their neighbors would be permitted to informally punish them. If a neighbor tried to punish someone less noisy than that, the neighbor would risk punishment themselves. The acceptable noise level would change depending on the time (objective), but not depending on just "I happen to be really bothered by noise" (subjective). What about "I have young children"? (Or, "some of the inhabitants of that house are young children".) Maybe - that's an objective fact that's likely to be relevant to the welfare calculus. Or "I have a verifiably diagnosed hearing disorder"? Still maybe, but it feels less likely. In part because it's less common, and in part because it's less visible. Both of those seem like they'd make it less… accessible? salient? to whatever process calculates welfare. And if you're unusually noise sensitive and the welfare function doesn't capture that, the cost would fall on you. You could ask people to be quiet (but then you'd probably owe them a favor); or you could offer them something they value more than noise-making; or you could learn to live with it (e.g. by buying noise-cancelling headphones).
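That calculus can be sketched with made-up numbers (mine, not Ellickson's): the norm settles on the noise level that maximizes the noisemaker's average benefit minus a neighbor's average cost, and an individual's unusual sensitivity simply never enters the calculation.

```python
# Made-up schedules: average benefit to a noisemaker and average cost to a
# neighbor at each permitted noise level (0 = silence, 3 = drum practice).
avg_benefit = {0: 0, 1: 10, 2: 16, 3: 18}
avg_cost    = {0: 0, 1: 2,  2: 6,  3: 14}

# The welfare-maximizing norm uses these average ("objective") valuations.
norm_level = max(avg_benefit, key=lambda n: avg_benefit[n] - avg_cost[n])
print(norm_level)  # 2

# A neighbor who is unusually noise-sensitive (say, triple the average cost)
# would prefer a quieter norm, but the norm doesn't track subjective outliers:
sensitive_pref = max(avg_benefit, key=lambda n: avg_benefit[n] - 3 * avg_cost[n])
print(sensitive_pref)  # 1
```

The sensitive neighbor would set the norm at level 1, but the norm sits at level 2, and the gap between the two is exactly the cost that (per the hypothesis) falls on them to handle privately.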
So okay. One thing I have to say is, it seems really easy to fall into a self-justifying trap here. Ellickson criticizes functionalism for this, and maybe he doesn't fall into it himself. But did you notice when I did it a couple of paragraphs up? (I noticed it fairly fast, but it wasn't originally deliberate.) I looked at the norms in Shasta County and used those to infer a welfare function. If you do that, of course you find that norms maximize welfare.
To test the hypothesis, we instead need to figure out a welfare function without looking at the norms, and then show that the norms maximize it. In Shasta County, we'd need to figure out how much people disvalue cattle trespass by looking at those parts of their related behaviour that aren't constrained by norms. For example, there seems to be no norm against putting up more fences than they currently do, so they probably disvalue (the marginal cost of cattle trespass avoided by a length of fence) less than they disvalue (the marginal cost of that length of fence).
How much freedom do we have in this process? If two researchers try it out, will they tell us similar welfare functions? If we look at the set of plausible welfare functions for a society, is the uncertainty correlated between axes? (Can we say "X is valued between $A and $B, Y is valued between $C and $D" or do we have to add "…but if Y is valued near $C, then X is valued near B"?)
And even this kind of assumes there's no feedback from norms to the welfare function. Ellickson admits that possibility, and admits that it leads to indeterminacy, but thinks the risk is slight. (He seems to assume it would only happen if norms change the market price of a good - unlikely when the group in question is much smaller than the market.) I'm not so convinced. Suppose there's a norm of "everyone owns a gun and practices regularly". Then it's probably common for people to own super effective noise-cancelling headphones. And then they don't mind noisy neighbors so much, because they can wear headphones. That's… perhaps not quite changing the welfare function, because people still disvalue noisiness the same, they just have a tool to reduce noisiness? But it still seems important that this norm effectively reduces the cost of that tool. I dunno. (For further reading, Ellickson cites one person making this criticism and another responding to it. Both articles paywalled.)
Separately, I wish Ellickson was clearer about the sorts of things he considers acceptable for a welfare function to consider, and the sorts of calculations he considers acceptable for it to perform. Subjective information is out, sure. But from discussion in the "workaday affairs" section, it seems that "I give you a dollar" is welfare-neutral, and we don't get that result just from eliminating subjective information. We do get it if we make sure the welfare function is linear in all its inputs, but that seems silly. I think we also get it if we eliminate non-publicly-verifiable information. The welfare function would be linear in dollars, because I can pretend to have more or fewer dollars than I actually do. But it wouldn't need to be linear in the number of children I'm raising, because I can't really hide those. I feel like Ellickson may have been implicitly assuming a restriction along those lines, but I don't think he said so.
Separately again, how closely does welfare correspond to utility? A utility monster couldn't become a welfare monster; I'm not sure if that's a feature or a bug, but it suggests the two can diverge considerably. A few chapters down, Ellickson does some formal game theory where the payoffs are in welfare; is it safe to ignore the possibility of "player gets higher welfare from this quadrant, but still prefers that quadrant"? It seems inevitable that some group members' utilities will get higher weighting in the welfare function than others'; people with invisible disabilities are likely to be fucked over. Ellickson admits that welfare maximization isn't the only thing we care about, but that leaves open the question of how much we should value it at all.
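To make the welfare/utility gap concrete, here's a minimal sketch (all payoff numbers invented) of a 2x2 game in which the quadrant that maximizes the row player's welfare is not the quadrant their own utility ranks highest:

```python
# Each quadrant is (row_action, col_action), mapped to
# (row_welfare, row_utility) for the row player. Numbers are invented.
payoffs = {
    ("quiet", "quiet"): (3, 1),  # objectively best for row; row is bored
    ("quiet", "noisy"): (1, 0),
    ("noisy", "quiet"): (2, 5),  # row loves making noise more than
                                 # the welfare function credits
    ("noisy", "noisy"): (0, 2),
}

best_by_welfare = max(payoffs, key=lambda q: payoffs[q][0])
best_by_utility = max(payoffs, key=lambda q: payoffs[q][1])

print(best_by_welfare)  # -> ('quiet', 'quiet')
print(best_by_utility)  # -> ('noisy', 'quiet')
```

If the game-theoretic analysis uses only the welfare column, it predicts behavior this player wouldn't actually choose.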
Suppose Quiet Quentin is unusually sensitive to noise, and happy to wear drab clothing. Drab Debbie is unusually sensitive to loud fashion, and happy to be quiet. Each of them knows this. One day Debbie accidentally makes a normal amount of noise, which Quentin isn't (by norm) allowed to punish her for. But wearing a normally-loud shirt doesn't count as punishing her, so he does that. Debbie gets indignant, makes another normally-loud noise in retaliation, and so on. No one is acting badly according to the welfare function, but it still seems like something's gone wrong here. Is there anything to stop this kind of thing from happening?
It feels weird to me that things like parenthood and personal relationships are a component of the welfare function. Obviously they're a large part of people's subjective utility, but with so much variance that putting an objective value on them seems far too noisy. And what does a system of norms even do with that information?11
This one feels very out-there, but for completeness: the reason for using welfare instead of utility is that a norm can't reference people's individual preferences. Not just because they're subjective, but also because there's too many of them; "Alice can make loud noise at any time, but Bob can only make loud noise when Carol isn't home" would be far too complicated for a norm. But when people interact with people they know well, maybe subjectivity isn't a problem; maybe people get a decent handle on others' preferences. And then norms don't need to reference individual preferences, they can just tell people to take others' preferences into account. The norm could be "make loud noise if you value making noise more than others nearby value you not doing that". This feels like it wouldn't actually work at any sort of scale, and I don't fault Ellickson for not discussing it.
Despite all this, I do think there's some "there" there. A decent amount of "there", even. I think Ellickson's use of welfare should be given a long, hard look, but I think it would come out of that ordeal mostly recognizable.
There's another clarification that I think is needed. The phrase "develop and maintain" is a claim about dynamics, partial derivatives, conditions at equilibrium. It's not a claim that "all norms always maximize welfare" but that "norms move in the direction of maximizing welfare".
Ellickson never says this explicitly, but I think he'd basically agree. Partly I think that because the alternative is kind of obviously ridiculous - norms don't change immediately when conditions change. But he does also hint at it. For example, he speculates that a group of court cases around whaling arose because whalers were having trouble transitioning from one set of norms to a more utilitarian set (more on this later). Elsewhere, he presents a simple model of a society evolving over time to phase out certain rewards in favor of punishments.
Taken to an extreme, this weakens the hypothesis significantly. If someone points at a set of norms that seems obviously nonutilitarian, we can just say "yeah, well, maybe they haven't finished adapting yet". I don't think Ellickson would go that far. I think he'd say the dynamics are strong enough that he can write a 300-page book about the hypothesis, not explicitly admit that it's a hypothesis about dynamics, and it wouldn't seem all that weird.
Still, I also think this weakens the book significantly. When we admit that it's a hypothesis about dynamics, there's a bunch of questions we can and should ask. Probably the most obvious is "how fast/powerful are these dynamics". But there's also "what makes them faster or slower/more or less powerful" and "to what extent is the process random versus deterministic" and "how large are the second derivatives". (For those last two, consider: will norms sometimes update in such a way as to make things worse, and only on average will tend to make things better? Will norms sometimes start moving in a direction, then move too far in that direction and have to move back?) I'd be interested in "what do the intermediate states look like" and "how much do the individual personalities within the group change things".
I don't even necessarily expect Ellickson to have good answers to these questions. I just think they're important to acknowledge.
(I'd want to dock Ellickson some points here even if it didn't ultimately matter. I think "saying what we mean" is better than "saying things that are silly enough the reader will notice we probably don't mean them and figure out what we do mean instead".)
I think this is my biggest criticism of the book.
With all these clarifications weakening the hypothesis, does it still have substance?
Yes, Ellickson says. It disagrees with Hobbes and other legal centralists; with "prominent scholars such as Jon Elster who regard many norms as dysfunctional"; with Marxism, which sees norms as serving only a small subset of a group; with people who think norms are influenced by nonutilitarian considerations like justice; and with "the belief, currently ascendant in anthropology and many of the humanities, that norms are highly contingent and, to the extent that they can be rationalized at all, should be seen as mainly serving symbolic functions unrelated to people's perceptions of costs and benefits."
And it's falsifiable. We can identify norms, by looking at patterns of behavior and sanctions, and aspirational statements. And we can measure the variables affecting close-knittedness. ("For example, if three black and three white fire fighters belonging to a racially polarized union were suddenly to be adrift in a well-stocked lifeboat in the middle of the Pacific Ocean, as an objective matter the social environment of the six would have become close-knit and they would be predicted to cooperate."12)
But what we can't do is quantify the objective costs and benefits of various possible norm systems. So we fall back to intuitive assessments, looking at alternatives and pointing out problems they'd cause. This is not quite everything I'd hoped for from the word "falsifiable", but it'll do. Ellickson spends the next few chapters doing this sort of thing, at varying levels of abstraction but often with real-world examples. He also makes occasional concrete predictions, admitting that if those fail the hypothesis would be weakened. I'll only look at a few of his analyses.
A common contract norm forbids people from lying about what they're trading. The hypothesis predicts we'd find such norms among any close-knit group of buyers and sellers. I bring this up for the exceptions that Ellickson allows:
Falsehoods threaten to decrease welfare because they are likely to increase others' costs of eventually obtaining accurate information. Honesty is so essential to the smooth operation of a system of communication that all close-knit societies can be expected to endeavor to make their members internalize, and hence self-enforce, norms against lying. Of course a no-fraud norm, like any broadly stated rule, is ambiguous around the edges. Norms may tolerate white lies, practical joking, and the puffing of products. By hypothesis, however, these exceptions would not permit misinformation that would be welfare threatening. The "entertaining deceivers" that anthropologists delight in finding are thus predicted not to be allowed to practice truly costly deceptions. [Footnotes removed; one mentions that "A cross-cultural study of permissible practical joking would provide a good test of the hypothesis."]
It's not clear to me why norms would allow such exceptions, which still increase costs of information and are presumably net-negative. To sketch a possible answer: the edge cases are likely to be where the value of enforcing the norm is lower. I'd roughly expect the social costs of violations to be lower, and the transaction costs of figuring out if there was a violation to be higher. (I feel like I've read a sequence of three essays arguing about one particular case; they wouldn't have been necessary if the case had been a blatant lie.13) So, okay, minor violations don't get punished. But if minor violations don't get punished when they happen, then (a) you don't actually have a norm against them; and (b) to the extent that some people avoid those violations anyway, you've set up an asshole filter (that is, you're rewarding vice and punishing virtue).
So plausibly, the ideal situation is for it to be common knowledge that such things are considered fine to do. We might expect this to just push the problem one level up; so that instead of litigating minor deceptions, you're litigating slightly-less-minor deceptions. But these deceptions have a higher social cost, so more value to litigating them, so maybe it's fine.
(Aside, it's not clear to me why the hypothesis specifically expects such norms to be internalized, rather than enforced some other way. Possible answer: you do still need external enforcement of these norms, but that enforcement will be costly. It'll be cheaper if you can mostly expect people to obey them even if they don't expect to get caught, so that relies on self-enforcement. But is that a very general argument that almost all norms should be internalized? Well, maybe almost all norms are internalized. In any case, I don't think that clause was very important.)
The second-most-detailed case study in the book is whalers. If a whale is wounded by one ship and killed by another, who keeps it? What if a dead whale under tow is lost in a storm, and found by another ship? The law eventually developed opinions on these questions, but when it did, it enshrined preexisting norms that the whalers themselves had developed.
Ellickson describes a few possible norms that wouldn't be welfare maximizing for them, and which in fact weren't used. For example, a whale might simply belong to whichever ship physically held the carcass; but that would allow one ship to wait for another to weaken a whale, then attach a stronger line and pull it in. Or it might belong to the ship that killed it; but that would often be ambiguous, and ships would have no incentive to harvest dead whales or to injure without killing. Or it might belong to whichever ship first lowered a boat to pursue it, so long as the boat remained in fresh pursuit; but that would encourage them to launch too early, and give claim to a crew who might not be best placed to take advantage of it. Or it might belong to whichever ship first had a reasonable chance of capturing it, so long as it remained in fresh pursuit; but that would be far too ambiguous.
In practice they used three different sets of norms. Two gave full ownership to one party. The "fast-fish/loose-fish" rule said that you owned a whale as long as it was physically connected to your boat or ship. The "first iron" (or "iron holds the whale") rule said that the first whaler to land a harpoon could claim a whale, as long as they remained in fresh pursuit, and as long as whoever found it hadn't started cutting in by then.
Whalers used these norms according to the fishery14 they hunted from, and each was suited to the whales usually hunted from that fishery. Right whales are weak swimmers, so they don't often escape once you've harpooned them. Fast-fish works well for hunting them. Sperm whales do often break free, and might be hunted by attaching the harpoon to a drogue, a wooden float that would tire the whale and mark its location. The concept of "fresh pursuit" makes first-iron more ambiguous than fast-fish, which isn't ideal, but it allows more effective means of hunting.
(Sperm whales also swim in schools, so ideally you want to kill a bunch of them and then come back for the corpses. If you killed a whale, you could plant a flag in it, which gave you a claim for longer than a harpoon would. You had to be given reasonable time to come back, and might retain ownership even if a taker had started cutting in. Ellickson doesn't say explicitly, but it sounds like American whalers in the Pacific might have had this rule, but not American whalers operating from New England, for unclear reasons.)
The other was a split-ownership rule. A fishery in the Galápagos Islands split ownership 50/50 between whoever attached a drogue and whoever took the carcass. This norm gave whalers an incentive to fetter lots of whales and let others harvest them later, but it's not clear how or why that fishery developed different rules than others. On the New England coast, whalers would hunt fast finback whales with bomb-lances; the whales would sink and wash up on shore days later. The killer was entitled to the carcass, less a small fee to whoever found it. This norm was binding even on people unconnected with the whaling industry, and a court upheld it in at least one case. I'm not sure how anyone knew who killed any given whale. Perhaps there just weren't enough whalers around for it to be ambiguous?
(Ellickson notes that the "50/50 split versus small fee" question is one of rules versus standards. Standards let you consider individual cases in more detail, taking into account how much each party contributed to the final outcome, and have lower deadweight losses. But rules generate fewer disputes about how they should be applied, and thus lower transaction costs.)
So this is all plausibly welfare-maximizing, but that's not good enough. Ellickson admits that this sort of ex post explanation risks being "too pat". He points out two objections you could raise. First, why did the norms depend on the fishery, and not the fish? (That would have been more complicated, because there are dozens of species of whale. And you had to have your boats and harpoons ready, so you couldn't easily change your technique according to what you encountered.)
More interestingly, what about overfishing? If norms had imposed catch quotas, or protected calves and cows, they might have been able to keep their stock high. Ellickson has two answers. One is that that would have improved global welfare, but not necessarily the welfare of the current close-knit group of whalers, as they couldn't have stopped anyone else from joining the whaling business. This is a reminder that norms may be locally welfare-maximizing but globally harmful.
His other answer is… that that might not be the sort of thing that norms are good at? Which feels like a failure of the hypothesis. Here's the relevant passage:
Establishment of an appropriate quota system for whale fishing requires both a sophisticated scientific understanding of whale breeding and also an international system for monitoring worldwide catches. For a technically difficult and administratively complicated task such as this, a hierarchical organization, such as a formal trade association or a legal system, would likely outperform the diffuse social forces that make norms. Whalers who recognized the risk of overfishing thus could rationally ignore that risk when making norms on the ground that norm-makers could make no cost-justified contribution to its solution. [Footnote removed]
There's some subtlety here, like maybe he's trying to say "norms aren't particularly good at this, so if there's another plausible source of rules, norm-makers would defer to them; but if there wasn't, norm-makers would go ahead and do it themselves". That feels implausible on the face of it though, and while I'm no expert, my understanding is that no other group did step up to prevent overfishing in time.
This section is one place where Ellickson talks about the hypothesis as concerning dynamics. There are only five American court cases on this subject, and four of them involved whales caught between 1852 and 1862 in the Sea of Okhotsk; the other was an 1872 decision about a whale caught in that sea in an unstated year. Americans had been whaling for more than a century, so why did that happen? The whales in that area were bowheads, for which fast-fish may have been more utilitarian than first-iron. Ellickson speculates that "American whalers, accustomed to hunting sperm whales in the Pacific, may have had trouble making this switch."
(He does give an alternate explanation, that by that time the whaling industry was in decline and the community was becoming less close-knit. "The deviant whalers involved in the litigated cases, seeing themselves nearing their last periods of play, may have decided to defect.")
There's something that stuck out to me especially in this section, which I don't think Ellickson ever remarked upon. A lot of norms seem to bend on questions that are unambiguous given the facts but where the facts are unprovable. If I take a whale that you're in fresh pursuit of, I can tell everyone that you'd lost its trail and only found me days later. Who's to know?
Well, in the case of whalers, the answer is "everyone on both of our ships". That's too many people to maintain a lie. But even where it's just one person's word against another's, this seems mostly fine. If someone has a habit of lying, that's likely to build as a reputation even if no one can prove any of the lies.
In private (i.e. non-criminal) law, when someone is found to be deviant, the standard remedy is to award damages. That doesn't always work. They might not have the assets to make good; or they might just be willing to pay that price to disrupt someone's world. So the legal system also has the power of injunctions, requiring or forbidding certain future actions. And if someone violates an injunction, the legal system can incarcerate them.
Norms have analogous remedies. Instead of damages, one can simply adjust a mental account. Instead of an injunction, one can offer a more-or-less veiled threat. Instead of incarcerating someone, one can carry out that threat.
Incarceration itself isn't a credible threat ("kidnapping is apt both to trigger a feud and to result in a criminal prosecution"), but other forms of illegal violence are. ("Indeed, according to Donald Black, a good portion of crime is actually undertaken to exercise social control." cite)
Remedial norms require a grievant to apply self-help measures in an escalating sequence. Generally it starts at "give the deviant notice of the debt"; goes through "gossip truthfully about it"; and ends with "seize or destroy some of their assets". Gossip can be omitted when it would be obviously pointless, such as against outsiders. This is consistent with the hypothesis, since the less destructive remedies come first in the sequence. It's also consistent with practice in Shasta County, and we see it as well in the lobstermen of Maine when someone sets a trap in someone else's territory. They'll start by attaching a warning to a trap, or sometimes sabotaging it without damaging it. If that doesn't work, they destroy the trap. They don't seem to use gossip, perhaps because they can't identify the intruder or aren't close-knit with him.
"Seize or destroy" - which should you do? Destroying assets is a deadweight loss, so it might seem that seizing them would be better for total welfare. But destruction has advantages too. Mainly, it's more obviously punitive, and so less likely to be seen as aggression and to lead to a feud. The Shasta County practice of driving a cow somewhere inconvenient isn't something you'd do for personal gain. But also, it's easier to calibrate (you can't seize part of a cow, but you can wound it instead of killing it). And it can be done surreptitiously, which is sometimes desired (though open punishment is usually preferred, to maintain public records).
We don't have a good understanding of how norms work to provide order. But the key is "altruistic" norm enforcement by third parties. (Those are Ellickson's scare quotes, not mine.) How do we reconcile that with the assumption of self-interested behavior?
One possibility is internalized norms, where we feel guilty if we fail to act to enforce norms, or self-satisfied if we do act. (I feel like this is stretching the idea of self-interest, but then we can just say we reject that assumption, so whatever.)
Another is that the seemingly altruistic enforcers are themselves motivated by incentives supplied by other third parties. This leads to an infinite regress. Ellickson gives as an example a young man who tackled a pickpocket to retrieve someone's wallet. The woman he helped wrote in to the New York Times to publicly thank him, so there's his incentive. But we also need incentives for her to write that letter, and for the editor to publish it, and so on.
(I'm not actually entirely sure where "so on" goes. I guess we also need incentive for people to read letters like that. Though according to Yudkowsky's Law of Ultrafinite Recursion there's no need to go further than the editor.)
This infinite regress seems bad for the chances of informal cooperation. But it might actually help. Ellickson's not entirely detailed about his argument here, so I might be filling in the blanks a bit, but here's what I think he's going for. Suppose there's a virtuous third-party enforcer "at the highest level of social control". That is, someone who acts on every level of the infinite regress. They'll sanction primary behavior as appropriate to enforce norms; but also sanction the people who enforce (or fail to enforce) those norms themselves; and the people who enforce (or fail to enforce) the enforcement of those norms; and so on, if "so on" exists.
Then that enforcer could create "incentives for cooperative activity that cascade down and ultimately produce welfare-maximizing primary behavior." They don't need to do all the enforcement themselves, but by performing enforcement on every level, they encourage others to perform enforcement on every level.
This might work even with just the perception of such an enforcer. God could be this figure, but so could "a critical mass of self-disciplined elders or other good citizens, known to be committed to the cause of cooperation". Art and literature could help too.
Academia seems to have a disproportionate number of legal centralists. So you might think professors would be unusually law-abiding. Not when it comes to photocopying. The law says how they should go about copying materials for use in class: fair-use doctrine is quite restrictive unless they get explicit permission, which can be slow to obtain15. Professors decide they don't really like this, and they substitute their own set of norms.
The Association of American Publishers tells us that campus copyright violation is (Ellickson quotes) "widespread, flagrant, and egregious". They seem to be right. Ellickson asked law professors directly, and almost all admit to doing it - though not for major portions of books. The managers of law school copy rooms don't try to enforce the rules, they let the professors police themselves. Several commercial copy centres made multiple copies for him of an article from a professional journal. "I have overheard a staff member of a copy center tell a patron that copyright laws prevented him from photocopying more than 10 percent of a book presented as a hardcopy original; the patron then asked whether he himself could use the copy center's equipment to accomplish that task and was told that he could."16
So professors' norms seem to permit illegal repeated copying of articles and minor parts of books. That lets them avoid knowing fair-use doctrine in detail. And since the law would require them to write (and respond to) many requests for consent, it lets them avoid that too.
Professors sense that Congress is unlikely to make welfare-maximizing copyright law. (Publishers can hire better lobbyists than they can.) This lets them frame their norms as principled subversion. I'm not sure if it's particularly relevant though - if copyright law was welfare-maximizing overall, but not for the professors, I think the hypothesis would still predict them to develop their own norms. But thinking back to the stuff on symbolism, maybe "being able to frame your actions as principled subversion" is a component of welfare.
Why will they copy articles, but not large portions of books? Authors of articles don't get paid much for them, and for no charge will mail reprints to colleagues and allow excerpts to be included in compilations. "It appears that most academic authors are so eager for readers to know and cite their work that they usually regard a royalty of zero or even less as perfectly acceptable. For them small-scale copying is not a misappropriation but a service." But book authors do receive royalties, and large-scale copying would diminish those significantly. So according to the hypothesis, this restraint comes from wanting to protect author-professors' royalty incomes, not from caring about publishers' and booksellers' revenues. (Though they might start to care about those, if they thought there might be a shortage of publishers and booksellers. They also might care more about university-affiliated publishers and booksellers.)
(There's a question that comes to mind here, that Ellickson doesn't bring up. Why do professors decline to copy books written by non-academics? I can think of a few answers that all seem plausible: that this is a simpler norm; that it's not necessarily clear who is and isn't an academic; and that it makes it easier to sell the "principled subversion" thing.)
Notably, in the two leading cases around academic copying, the plaintiffs were publishers and the primary defendant was an off-campus copy center. This is consistent with the hypothesis. In these situations, those two parties have the most distant relationship. Publishers have no use for copy centers, and copy centers don't buy many books, so neither of them has informal power over the other. Even more notably, in one of these cases, university-run copy centers weren't included as defendants - that might anger the professors, who do have power over publishers.
But Ellickson admits that all of this could be cherry-picking. So he looks at two well-known cases that he expects people to point to as counterexamples. (I hadn't heard of either of them, so I can't rule out that he's cherry-picking here, too. But I don't expect it.)
The first is the Ik of northern Uganda. These are a once-nomadic tribe with a few thousand members. Colin Turnbull found an unsettling pattern of inhumanity among them. Parents were indifferent to the welfare of their children after infancy, and people took delight in others' suffering. In Turnbull's words: "men would watch a child with eager anticipation as it crawled toward the fire, then burst into gay and happy laughter as it plunged a skinny hand into the coals. … Anyone falling down was good for a laugh too, particularly if he was old or weak or blind."
Ellickson replies that the Ik were "literally starving to death" at the time of Turnbull's visit. A few years prior, their traditional hunting ground had been turned into a national park, and now they were forced to survive by farming a drought-plagued area. (Turnbull "briefly presented these facts" but didn't emphasize them.) "Previously cooperative in hunting, the Ik became increasingly inhumane as they starved. Rather than undermining the hypothesis, the tragic story of the Ik thus actually supports the hypothesis' stress on close-knittedness: cooperation among the Ik withered only as their prospects for continuing relationships ebbed." [Footnote removed.]
I note that Wikipedia disputes this account. "[Turnbull] seems to have misrepresented the Ik by describing them as traditionally being hunters and gatherers forced by circumstance to become farmers, when there is ample linguistic and cultural evidence that the Ik were farmers long before they were displaced from their hunting grounds after the formation of Kidepo National Park - the event that Turnbull says forced the Ik to become farmers." To the extent that Ellickson's reply relies on this change in circumstances, it apparently (according to Wikipedia) falls short. But perhaps the important detail isn't that they switched from hunting to farming, but that they switched from "not literally starving to death" to "literally starving to death" (because of a recent drought).
Ellickson also cites (among others) Peter Singer as criticising Turnbull in The Expanding Circle, pp 24-26. Looking it up, Singer points out that, even if we take Turnbull's account at face value, Ik society retains an ethical code.
Turnbull refers to disputes over the theft of berries which reveal that, although stealing takes place, the Ik retain notions of private property and the wrongness of theft. Turnbull mentions the Ik's attachment to the mountains and the reverence with which they speak of Mount Morungole, which seems to be a sacred place for them. He observes that the Ik like to sit together in groups and insist on living together in villages. He describes a code that has to be followed by an Ik husband who intends to beat his wife, a code that gives the wife a chance to leave first. He reports that the obligations of a pact of mutual assistance known as nyot are invariably carried out. He tells us that there is a strict prohibition on Ik killing each other or even drawing blood. The Ik may let each other starve, but they apparently do not think of other Ik as they think of any non-human animals they find - that is, as potential food. A normal well-fed reader will take the prohibition of cannibalism for granted, but under the circumstances in which the Ik were living human flesh would have been a great boost to the diets of stronger Ik; that they refrain from this source of food is an example of the continuing strength of their ethical code despite the crumbling of almost everything that had made their lives worth living.
This seems to support the hypothesis too. I do think there's some tension between these two defenses. Roughly: their circumstances made them the way they were; and anyway, they weren't that way after all. But they don't seem quite contradictory.
The other potential counterexample is the peasants of Montegrano, a southern Italian village, as studied by Edward Banfield.
Banfield found no horrors as graphic as [those of the Ik], but concluded that the Italian peasants he studied were practitioners of what he called "amoral familism," a moral code that asked its adherents to "maximize the material, short-run advantage of the nuclear family; assume all others will do likewise." According to Banfield, this attitude hindered cooperation among families and helped keep the villagers mired in poverty. [One footnote removed; minor style editing.]
Ellickson has two replies here. Firstly, the evidence is arguably consistent with the hypothesis: some of Banfield's reviewers suggested that, going by Banfield's evidence, the villagers had adapted as well as possible to their environment. Secondly, Banfield's evidence often seems to contradict Banfield's thesis: neighbors have good relationships and reciprocate favors. Banfield apparently discounted that because they did so out of self-interest, but it's still compatible with the hypothesis.
(I don't think these replies are in the same kind of tension.)
For a more general possible counterexample, Ellickson points at primitive tribes believing in magic and engaging in brutal rites. (This is something I did have in my mind while reading, so I'm glad he addressed it.) Some anthropologists are good at finding utilitarian explanations for such things, but Ellickson rejects that answer. Instead, he simply predicts that these practices would be abandoned as the tribe becomes better educated. "A tribe that used to turn to rain dancing during droughts thus is predicted to phase out that ritual after tribe members learn more meteorology. Tribes are predicted to abandon dangerous puberty rites after members obtain better medical information. As tribe members become more familiar with science in general, the status of their magicians and witch doctors should fall. As a more contemporary example, faith in astrology should correlate negatively with knowledge of astronomy. These propositions are potentially falsifiable."
This was my guess as to an answer before I reached this part of the book, which I think says good things about both myself and the book. And I basically agree with his prediction. But I also think it's not entirely satisfactory.
It seems like we need to add a caveat to the hypothesis for this kind of thing, "if people believe that rain dances bring rain, then norms will encourage rain dances". And I kind of want to say that's fair enough, you can't expect norms to be smarter than people. But on the other hand, I think the thesis of The Secret of Our Success and the like is that actually, that's exactly what you can expect norms to be. And it seems like a significant weakening of the hypothesis - do we now only predict norms to optimize in ways that group members understand? Or to optimize not for welfare but for "what group members predict their future welfare will be"? I dunno, and that's a bad sign. But if the hypothesis doesn't lose points for rain dances, it probably shouldn't gain points for manioc. (Though as Ben Hoffman points out, the cost-benefit of manioc processing isn't immediately obvious. Maybe the hypothesis should lose points for both manioc and rain dances.)
If a ritual is cheap to implement, I'd be inclined to give it a pass. There are costs to obtaining information, and those apply to whatever process develops norms just as they do to individuals. Plus, it would only take a small benefit to be welfare-maximizing, and small benefits are probably less obvious than larger ones. (Though if that's what's going on, it's not clear whether we should expect education to phase the rituals out.)
But for vicious and dangerous rituals, this doesn't seem sufficient. Ellickson mentions a tribe where they "cut a finger from the hand of each of a man's close female relatives after he dies"; what medical knowledge are they lacking that makes this seem welfare-maximizing?
I think this is my biggest criticism of the hypothesis.
Another possible counterexample worth considering would be Jonestown, and cults in general. (h/t whoever it was that brought this to my attention.) I don't feel like I know enough about these to comment, but I'm going to anyway. I wonder if part of what's going on is that cults effectively don't have the rule of law - they make it costly for you to leave, or to bring in outside enforcers, and so you can't really enforce your property rights or bodily autonomy. If so, it seems like the "workaday" assumption is violated, and the hypothesis isn't in play.
Or, what about dueling traditions? We might again say the "workaday" assumption (that brings rules against murder) is violated, but that seems like a cheat. My vague understanding, at least of pistol dueling as seen in Hamilton, is it was less lethal than we might expect; and fell out of favor when better guns made it more lethal. But neither of these feels enough to satisfy, and we should demand satisfaction. Did the group gain something that was worth the potential loss of life? Alternatively, were such things only ever a transition phase?
Something I haven't touched on is Ellickson's use of formal game theory. To do justice to that section, I split it into its own essay. The tl;dr is that I think he handled it reasonably well, with forgivable blind spots but no outright mistakes that I noticed. I don't feel like I need to discount the rest of the book (on subjects I know less well) based on his treatment of game theory.
Is this a good book? Yes, very much so. I found it fascinating both on the level of details and the level of ideas. Ellickson is fairly readable, and occasionally has a dry turn of phrase that I love. ("A drowning baby has neither the time nor the capacity to contract for a rescue.") And I don't know if this came across in my review, but he's an unusually careful thinker. He owns up to weaknesses. He rejects bad arguments in favor of his position. He'll make a claim and then offer citations of people disagreeing. He makes predictions and admits that if they fail the hypothesis will be weakened. I think he made some mistakes, and I think his argument could have been clearer in places, but overall I'm impressed with his ability to think.
Is the hypothesis true? I… don't think so, but if we add one more caveat, then maybe.
The hypothesis says that norms maximize welfare. Note that although Ellickson calls the welfare function "objective", I think a better word might be "intersubjective". The welfare function is just, like, some amorphous structure that factors out when you look at the minds of group members. Except we can't look at their minds, we have to look at behaviour. The same is true of norms themselves: to figure out what the norms are in a society we ultimately just have to look at how people in that society behave.
And so if we're to evaluate the hypothesis properly, I think we need to: look at certain types of behaviour, and infer something that's reasonable to call "norms"; and then look at non-normative behavior - the behaviour that the inferred system of norms doesn't dictate - and infer something that's reasonable to call a "welfare function". And then the hypothesis is that the set of norms will maximize the welfare function. ("Maximize over what?" I almost forgot to ask. I think, maximize over possible systems of norms that might have been inferred from plausible observed behaviour?)
Put like that it sounds kind of impossible. I suspect it's… not too hard to do an okay job? Like I'd guess that if we tried to do this, we'd be able to find things that we'd call "norms" and "a welfare function" that mostly fit and are only a little bit circular; and we wouldn't have an overabundance of choice around where we draw the lines; and we could test the hypothesis on them and the hypothesis would mostly come out looking okay.
But to the extent that we can only do "okay" - to the extent that doing this right is just fundamentally hard - I suspect we'll find that the hypothesis also fails.
There are problems which are known to be fundamentally hard in important ways, and we can't program a computer to reliably solve them. Sometimes people say that slime molds have solved them and this means something about the ineffable power of nature. But they're wrong. The slime molds haven't solved anything we can't program a computer to solve, because we can program a computer to emulate the slime molds.17 What happens is that the slime molds have found a pretty decent approach to solving the problem, that usually works under conditions the slime molds usually encounter. But the slime molds will get the wrong answer too, if the specific instance of the problem is pathological in certain ways.
In this analogy, human behavior is a slime mold. It changes according to rules evaluated on local conditions. (Ellickson sometimes talks about "norm-makers" as though they're agents, but that feels like anthropomorphising. I expect only a minority of norms will have come about through some agentic process.) It might be that, in doing so, it often manages to find pretty good global solutions to hard problems, and this will look like norms maximizing welfare. But on a sufficiently pathological problem, there'd be another, better solution that it misses.
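To make the local-rules point concrete, here's a toy sketch (mine, not Ellickson's): a hill-climbing process that only looks at neighboring options, which is roughly the failure mode I have in mind. The landscape and numbers are made up for illustration.

```python
# Toy illustration of "rules evaluated on local conditions": hill climbing
# usually finds a decent answer, but on a pathological landscape it gets
# stuck at a local optimum and never sees the better solution elsewhere.

def hill_climb(f, x, step=1, iters=1000):
    """Repeatedly move to whichever neighboring point scores highest."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            return x  # no neighbor is better: a local maximum
        x = best
    return x

# A made-up "welfare landscape": a small hill around x=0 and a much
# taller hill around x=20, separated by a flat valley.
def welfare(x):
    bump = max(0, 5 - abs(x))            # local peak: height 5 at x=0
    peak = max(0, 50 - 5 * abs(x - 20))  # global peak: height 50 at x=20
    return bump + peak

print(hill_climb(welfare, 0))   # prints 0: stuck on the small hill
print(hill_climb(welfare, 12))  # prints 20: from here it finds the tall peak
```

Started on the small hill, the process settles there even though a far better solution exists; whether it "maximizes welfare" depends entirely on where the group happened to start and how the landscape is shaped.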
(I'm not sure I've got this quite right, but I don't think Ellickson has, either.)
So I want to add a caveat acknowledging that sort of thing. I don't know how to put it succinctly. I suspect that simply changing "maximize" in the hypothesis to "locally maximize" weakens it too far, but I dunno.
With this additional caveat, is the hypothesis true? I still wouldn't confidently say "yes", for a few reasons. My main inside-view objections are the ritual stuff and duelling, but there's also the outside-view "this is a complicated domain that I don't know well". (I only thought of duelling in a late draft of this review; how many similar things are there that I still haven't thought of?) But it does feel to me like a good rule of thumb, at least, and I wouldn't be surprised if it's stronger than that.
I want to finish up with some further questions.
The world seems to be getting more atomic, with less social force being applied to people. Does that result in more legal force? Ellickson gives a brief answer: "In [Donald Black's] view, the state has recently risen in importance as lawmakers have striven to fill the void created by the decline of the family, clan, and village." (Also: "increasing urbanization, the spread of liability insurance, and the advent of the welfare state". But Black speculates it'll decline in future because of increasing equality. And although the number of lawyers in the US has increased, litigation between individuals other than divorce remains "surprisingly rare".)
Can I apply this to my previous thoughts on the responsibility of open source maintainers? When I try, two things come to mind. First, maintainers know more about the quality of their code than users. Thus, if we require maintainers to put reputation on the line when "selling" their code, we (partially) transfer costs of bad code to the people who best know those costs and are best able to control them. So that's a way to frame the issue that I don't remember having in mind when I wrote the post, that points in the same directions I was already looking. Cool. Second, I feel like in that post I probably neglected to think about transaction costs of figuring out whether someone was neglecting their responsibility? Which seems like an important oversight.
To test the hypothesis, I'd be tempted to look at more traditions and see whether those are (at least plausibly) welfare-maximizing. But a caveat: are traditions enforced through norms? I'd guess mostly yes, but some may not be enforced, and some may be enforced through law. In those cases the hypothesis doesn't concern itself with them.
Making predictions based on the hypothesis seems difficult. Saying that one set of norms will increase welfare relative to another set might be doable, but how can you be confident you've identified the best possible set? Ellickson does make predictions, and I don't feel like he's stretching too far - though I can't rule it out. But I'm not sure I'd be able to make the same predictions independently. How can we develop and test this skill?
Sometimes a close-knit group will fracture. What sort of things cause that? What does it look like when it happens? What happens to the norms it was maintaining?
What are some follow-up things to read? Ellickson approvingly cites The Behavior of Law a bunch. If we want skepticism, he cites Social Norms and Economic Theory a few times. At some point I ran across Norms in a Wired World which looks interesting and cites Ellickson, but that's about all I know of it.
How does this apply to the internet? I'd note a few things. Pseudonymity (or especially anonymity) and transience will reduce close-knittedness, as you only have limited power over someone who can just abandon their identity. To the extent that people do have power, it may not be broadly distributed; on Twitter for example, I'd guess your power is roughly a function of how many followers you have, which is wildly unequal. On the other hand, public-by-default interactions increase close-knittedness. I do think that e.g. LessWrong plausibly counts as close-knit. The default sanction on reddit is voting, and it seems kind of not-great that the default sanction is so low-bandwidth. For an added kick or when it's not clear what the voting is for, someone can write a comment ("this so much"; "downvoted for…"). That comment will have more weight if it gets upvoted itself, and/or comes from a respected user, and/or gets the moderator flag attached to it. Reddit also has gilding as essentially a super-upvote. For sanctioning remedial behavior, people can comment on it ("thanks for the gold", "why is this getting downvoted?") and vote on those comments. But some places also have meta-moderation as an explicit mechanism.
How much information is available about historical norms and the social conditions they arose in? Enough to test the hypothesis?
There's a repeated assumption that if someone has extraordinary needs then it's welfare maximizing for the cost to fall on them instead of other people. I'm not sure how far Ellickson would endorse that; my sense is that he thinks it's a pretty good rule of thumb, but I'm not sure he ever investigates an instance of it in enough detail to tell whether it's false. It would seem to antipredict norms of disability accommodation, possibly including curb cuts. (Possibly not, because those turn out to benefit lots of people. But then, curb cuts are enforced by law, not norm.) This might be a good place to look for failures of the hypothesis, but it's also highly politicized which might make it hard to get good data.
Ellickson sometimes suggests that poor people will be more litigious because they have their legal fees paid. We should be able to check that. If it's wrong, that doesn't necessarily mean the hypothesis is wrong; there are other factors to consider, like whether poor people have time and knowledge to go to court. But it would be a point in favor of something like "the hypothesis is underspecified relative to the world", such that trying to use it to make predictions is unlikely to work.
Is Congress close-knit? Has that changed recently? Is it a good thing for it to be close-knit? (Remember, norms maximizing the welfare of congresspeople don't necessarily maximize the welfare of citizens.)
Does this work at a level above people? Can we (and if so, when can we) apply it to organizations like companies, charities, and governments?
Suppose the book's analysis is broadly true. Generally speaking, can we use this knowledge for good?
In this review, I use the present tense. But the book was published in 1991, based on research carried out in the 1980s. ↩
Something like this is familiar to me from the days when most of my friendships took place in pubs. Small favours, even with specific monetary value, would typically be repaid in drinks and not in cash. Once, apparently after a disappointing sexual experience, I was asked my opinion on the exchange rate between drinks and orgasms. ↩
It strikes me that your neighbor is still clearly worse off than if your goat hadn't eaten his tomatoes. He's gone from having tomatoes-now to only having future-tomatoes. But that means your neighbor has no reason to be careless with his tomatoes. And helping to replant may encourage you to control your goat more than paying money would. ↩
Ellickson never says explicitly what one of these is, but my read is a small ranch, more a home than a business and operated more for fun than profit. Only a handful of animals or possibly none at all, and sometimes crops. ↩
"This counterintuitive proposition states, in its strongest form, that when transaction costs are zero a change in the rule of liability will have no effect on the allocation of resources. … This theorem has undoubtedly been both the most fruitful, and the most controversial, proposition to arise out of the law-and-economics movement." The paper which first presented the theorem used cattle trespass as an example, directly inspiring the study in part one. ↩
Ellickson offers citations but (apart from Shasta County) no elaboration on these. ↩
Ellickson makes two points. First, that ranchers are more familiar than ranchette owners with barbed-wire fencing. To some extent that seems circular, since they're the ones who are expected to know about it, but it's also in part because many ranchette owners have moved from the city. Second, that ranchers can fence in their own herds unilaterally, while victims would have to coordinate; motorists in particular would have trouble with that, and arguably they benefit the most. But motorists aren't part of the relevant close-knit group, so we should ignore them for this analysis. And as far as I know the other noteworthy victims are all landowners, who don't need to coordinate to protect their own interests.
But: even if victims wouldn't need to coordinate, they'd all need to act individually, and acquiring the skills would be a cost to them. Ranchers would presumably still need those skills, and even if not, there are presumably fewer of them. So it seems cheaper for all ranchers to acquire the skills, than all of their potential victims. ↩
Of course, since this example was a generator of the hypothesis, that says little by itself. This isn't a big deal, Ellickson looks outside Shasta County plenty, I'm just pointing it out because it's important to notice things like this. ↩
No, really. This isn't him saying one thing and me saying "well that only works if…". He says explicitly that the hypothesis "assumes that norm-makers in close-knit groups would subscribe to an unalloyed version of this principle". ↩
Actually, for parenthood, a plausible answer does come to mind: deciding who society celebrates as parents (rich couples who can mostly pay the costs of parenthood themselves) and who it shames (poor single mothers who socialize the costs). Then I guess the hypothesis predicts that you're allowed to socialize the costs of parenthood to the same extent that parenthood is welfare-positive. Except… that doesn't really work, because the total welfare change seems like it would be more-or-less the same whether the costs are borne by the parents or by society. I dunno, I think I'm still confused. ↩
This is kind of a weird example, as he all-but-admits in a footnote: "This statement assumes the continuing presence of foundational rules that forbid the firefighters from killing, maiming or imprisoning each other." If we want to know what actually happens in this situation, he points us to Dudley and Stephens and a book named Cannibalism and the Common Law. ↩
I haven't read these recently, and might be misremembering. ↩
Minor complaint: I wish Ellickson had been clearer about what exactly a "fishery" is. Did two boats from different fisheries ever encounter each other? ↩
In one case study, 23 permission letters were sent to publishers and only 17 received a response in six months. Ellickson doesn't say how many were denied. ↩
I looked it up out of curiosity. Although the 10% figure may have come from the relevant guidelines, they're unsurprisingly a lot more restrictive than that. For prose the maximum seems to be "1,000 words or 10% of the work, whichever is less, but in any event a minimum of 500 words." ↩
At least, if we can't in practice, there's nothing stopping us in theory. I'm not sure if we know exactly what the slime molds are doing. But I'm sure that if we did know, there wouldn't turn out to be anything fundamentally mysterious and unprogrammable-in-computers about it. ↩
Posted on 10 July 2021