The problem with probabilities without models

Scott Alexander writes in defense of probabilities without models. I denied the possibility of this before, also in the context of Scott’s steel-manning of Yudkowskyanism, but back then the focus of the discussion was slightly different. So this is a response to the new post, and if I wasn’t trying to revive this here blog it would probably be a comment. It’s not really intelligible without first reading the post I’m replying to.


For starters, what’s a probability model? Actually, while we’re at it, what’s a probability? Even more actually, that’s one step too far, because nobody has a general answer and perhaps there is none, though philosophers have lots of theories that seem wrong in different interesting ways. Modern mathematicians aren’t really bothered by this, because they don’t care what things are as long as they know how they behave. (In bigger words, this is called the axiomatic method.)

OK, so how do probabilities behave? Well, they are numbers and they belong to events. What’s an event? Well, the mathematician doesn’t know, but here’s how they behave: 1. There is one event that is always true. 2. If something is an event, then “not that” is also an event. 3. If we have a bunch of events, then “all of them” is also an event. In bigger words, the events, whatever they are, form a σ-algebra. Then the events get probabilities, and they behave like this: 1. All probabilities are between 0 and 1 (where 1 is sometimes written as 100%). 2. If we have a bunch of incompatible events, then the probability for “any of those” is just all the individual probabilities added together. In bigger words, the probabilities are the values of a probability measure. As far as the abstract mathematician is concerned, that’s it.

In the real world, we don’t really need to know what a probability is either, as long as we can make sure that the things we are talking about obey these rules. Stereotypical example: We want to do probabilities with dice throws. We have a bunch of events, such as “1”, “6”, “an even number”, “either 2 or 5” and so on. (64 events total if you are counting.) Then the events get probabilities. Often those will be \(\frac{1}{6}\) for each of “1”, “2”, “3”, “4”, “5” and “6”. But not always; perhaps we want to talk about loaded dice. But however we do it, the probability for “either 1 or 2” better be the summed probabilities of “1” and “2”.
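For the concretely minded, the dice model is small enough to write out in full. A minimal Python sketch, with the fair-die numbers from above:

```python
from itertools import combinations

# Outcome probabilities for a (possibly loaded) die; here the fair case.
outcomes = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

# The events are all subsets of the outcomes: 2^6 = 64 of them.
events = [frozenset(c)
          for r in range(len(outcomes) + 1)
          for c in combinations(outcomes, r)]

def prob(event):
    # The probability measure: add up the probabilities of the outcomes in the event.
    return sum(outcomes[o] for o in event)

assert len(events) == 64
assert abs(prob(frozenset(outcomes)) - 1) < 1e-9   # the sure event
# Additivity: P("either 1 or 2") is the summed probabilities of "1" and "2".
assert abs(prob(frozenset({1, 2})) - (prob({1}) + prob({2}))) < 1e-9
```

Swap in any other non-negative numbers summing to 1 for a loaded die; the additivity check still has to pass.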

In summary, when we use probabilities in the real world, we have some idea what the events are, we have probabilities of these events and these probabilities aren’t blatantly contradictory. This is – tada – called a probability model.


In practice we will also often need conditional probabilities, i.e. probabilities for something happening if something else happens first. For example, someone could have two different probabilities for being in a car accident, depending on whether they drive drunk or not. That doesn’t change the story much though, because thanks to the Rev. T. Bayes and his successors conditional probabilities are reducible to non-conditional ones: the probability of one event given another is just the probability of both together divided by the probability of the condition. In the example, if we have probabilities for drunken and non-drunken accidents and a probability for drinking, then we can calculate the probability for an accident given drinking and given non-drinking.
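In code the reduction is one division per conditional. All the numbers here are invented for illustration:

```python
# Invented numbers, purely for illustration.
p_drunk = 0.02                  # P(drives drunk)
p_accident_and_drunk = 0.006    # P(accident AND drunk)
p_accident_and_sober = 0.0098   # P(accident AND sober)

# Conditional probabilities reduce to unconditional ones:
# P(A | B) = P(A and B) / P(B)
p_accident_given_drunk = p_accident_and_drunk / p_drunk        # ≈ 0.3
p_accident_given_sober = p_accident_and_sober / (1 - p_drunk)  # ≈ 0.01
```

With these made-up numbers, drunk driving comes out thirty times as dangerous, which is the kind of comparison the conditional probabilities are for.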


Coming back to the dice example, what if the dice, while still in motion, gets swallowed by a dog? What’s the probability of that? Well, the model didn’t account for that, so there is no answer. I silently assumed this to be a non-event and only events get probabilities.

OK, so maybe I should have used a better model including “the dog will eat it”, for a total of 65 events. That probability will be small, but not quite 0, because the whole point of including it is that canine dice engobblement is actually possible. But note that the probabilities for all possible numbers used to add up to 1. Now they will have to add up to 1 minus that small number. So in other words, if my model changes, then so do the probabilities.
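The bookkeeping is easy to sketch; the 0.001 for the dog is of course made up:

```python
# Before: a model with only the six faces.
old = {face: 1/6 for face in range(1, 7)}

# After: make room for the dog. The 0.001 is invented; the point is only
# that the faces now share the remaining probability mass.
p_dog = 0.001
new = {face: (1 - p_dog) / 6 for face in range(1, 7)}
new["dog eats the die"] = p_dog

assert abs(sum(old.values()) - 1) < 1e-9
assert abs(sum(new.values()) - 1) < 1e-9
# Every face's probability changed, though nothing about the die itself did.
assert all(new[f] < old[f] for f in range(1, 7))
```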

Fine, but what if puritan space aliens destroy the dice with their anti-vice laser? If I want to account for that possibility, I’ll have to change my model again, and, in doing so, change my probabilities. And so on, every time I think of a new possibility the model changes and the probabilities change with it.

So what are the real probabilities? You might say those of the correct model. But then what’s the correct model? Well, if you need to bother with probabilities you’re not omniscient and if you’re not omniscient you can’t ever figure it out. Strictly speaking all the probabilities you’ll ever use are wrong.


Can we get around this by just adding an event “all the possibilities I didn’t think of”? Not really. Remember if you add a new event you must assign new probabilities. And you can’t just do that by taking from all other events equally. This is easiest to see in the case of conditional probabilities.

Contrived but sufficiently insightful example: For me it would be slightly dangerous to drive without my glasses. For most other people it would be more or less dangerous to drive with my glasses. Now suppose the puritanical space aliens, figuring that driving more dangerously than necessary is also a vice, try to figure out the best way for humans to drive. Naturally they also study the effect of glasses. Unfortunately they didn’t quite understand that glass-wearers are compensating for bad eyes, so they calculate accident-probabilities with and without glasses and universalize those to the entire population. Maybe their sample has a lot of glass-needers who occasionally forget their glasses. Then they will conclude everybody needs to drive with glasses. Or maybe people who need glasses do always wear them. Then they will probably decide glass-wearing is actually dangerous, because glasses aren’t perfect and glass-needers still have more accidents. Either way, their probabilities for some specific person wearing glasses and being in an accident can be almost arbitrarily wrong. After they start zapping the wrong people with their anti-vice laser someone may tell them why some people wear glasses and others don’t and they will have to revise their probabilities.

But suppose they had wanted to account for possibly missing a possibility beforehand. They could have assigned, say, a 30% probability to “we are missing something very important”. Fine, but for that to help them at all, they also need probabilities for you crashing while you wear glasses and they miss something very important. And they can’t come up with that probability without knowing exactly what they are missing. In other words, you can’t just use catchall misspecification events in probability models.


Then why are probabilities so useful? Because often we can make assumptions that are good enough for a given purpose. In the dice example we are making calculations in a game and are fine with assuming that game will proceed orderly. So we get probabilities and we don’t care about them being meaningless if the assumption turns out to be false.

Similarly, if we design an airplane, we might start with probabilities for its various parts failing and then calculate a probability for the whole thing falling down. This doesn’t tell us anything about planes crashing because of drunk pilots, but that’s not the question the model was made for. Actually, aviation has a few decades of experience with improving the models whenever a plane crashes and then adding regulations to make that very improbable. So nowadays commercial planes basically only crash for new reasons. There remain situations where the crash probabilities are meaningless, but they are still extremely useful in their proper context.
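A crude sketch of that kind of calculation, with invented per-flight failure probabilities and the strong simplifying assumptions that failures are independent and any single modeled failure downs the plane:

```python
# Invented per-flight failure probabilities for a few modeled parts.
parts = {"engine A": 1e-6, "engine B": 1e-6, "hydraulics": 5e-7}

# Crude model: the plane falls down if any modeled part fails,
# and failures are assumed independent.
p_fine = 1.0
for p_fail in parts.values():
    p_fine *= 1 - p_fail
p_crash = 1 - p_fine

# Sanity check: never more than the union bound (sum of the parts).
assert 0 < p_crash < sum(parts.values())
```

Of course this says nothing about drunk pilots, which is exactly the point: the number is meaningful only inside the model's assumptions.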

Similarly, an investment banker assumes that everything that could possibly happen has already happened in the last twenty years, and, well, in that case it turned out that wasn’t such a great idea.


Now that might sound fine in theory, but hasn’t Scott given practical examples of probabilities without models? Darn right he hasn’t. To see that, let me nitpick the alleged examples to explain why they don’t count.

In the president and aliens example, Scott himself considers the possibility that the involved probabilities are

only the most coarse-grained ones, along the lines of “some reasonable chance aliens will attack, no reasonable chance they will want bananas.” Where “reasonable chance” can mean anything from 1% to 99%, and “no reasonable chance” means something less than that.

But actual probabilities aren’t coarse-grained. In probabilistic reasoning you get to be uncertain, but you don’t get to be uncertain about how uncertain you are. All those theorems about Bayesian reasoning being the very bestest conceivable method of reasoning evar presume probabilities to be real numbers i.e. not coarse-grained. In other words, these coarse-grained probabilities are called so only by analogy. They don’t make for examples of real probabilities any more than my printer’s device driver is an example of vehicular locomotion.

At this point my mental model of Scott protests thusly: “Hey, I didn’t admit that! I mentioned it as something a doubter might say, but actually the president should be using real probabilities” (No actual Scott consulted, so my mental model may or may not be mental in more ways than intended.) Fine, but then the example doesn’t work anymore. The president can make his decisions without knowing anything about probability theory. He is making judgments about some things being more likely than others but not attaching numbers to it. In fact he could be more innumerate than a lawyer and it wouldn’t affect his decisions one bit. If we want to make it about real probabilities the example simply doesn’t show anything about their necessity.

Concerning the research funding agency, first of all I’ll question the hypothetical. It’s hard to imagine a proposal for a research project that has a \(\frac{1}{1000}\) chance of success given the best information. The starry-eyed idealists surely don’t think so. So if there is actual disagreement there will be reasons for that disagreement, and if the decision is important the correct answer is examining those reasons, i.e. improving the models. This is actually a large part of why real funding agencies rely on peer review. Using probabilities here is somewhat like the global warming example in my above-mentioned prior post on a similar subject, where the entire point is that the probability relates to a model I wouldn’t want to use for real decisions.

But perhaps all competent reviewers got killed in a fire at their last conference, so let’s skip that objection for least convenient possible world reasons. Also, Scott notes similar decisions can and often should be made informally, in which case it’s not real probabilities, exactly like in the president and aliens example.

Let’s advance to the interesting point though. Scott says:

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident. You can do other things. You can compare people’s success rates. You can do arithmetic on them (“if both these projects have 1/1000 probability, what is the chance they both succeed simultaneously?”), you can open prediction markets about them.

I’ll start with the accuracy checking use. If you’re guessing only once, the probability doesn’t help at all. You’ll either be right or wrong, but you’re not getting a correct probability to compare against. That’s why the foundation director makes the check for a thousand similar projects. If she has a thousand similar projects, that’s a strategy I can endorse. But at that point she has informally established a probability model. She has established events, namely combinations of succeeding and failing projects. This rules out any unforeseen possibilities, like the agency’s funding being slashed next year, nuclear war, the communist world revolution, and raptures both Christian and nerdy. That’s fine, because it’s not the kind of circumstances these probabilities are meant to think about. Furthermore, she has established probabilities of the events by (perhaps implicitly) assuming that the individual projects are equivalent for success-probability purposes and statistically independent, so they won’t all succeed or fail together. As long as she wants to reason within these assumptions, I’m fine with her doing so probabilistically. But notice that the probabilities get totally useless as soon as these assumptions fail. For example, there may be a new idea about what the natural laws might be and 200 proposals to exploit it. Those proposals will fail or succeed together and thus not be any help in predicting the rest. Or the next batch of proposals may be about ideas she knows more or less about, so they can’t be lumped with the old ones for accuracy judgment.
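Within her implicit model (equivalent, independent projects) the overconfidence check is a simple binomial tail computation, using the numbers from the quote:

```python
from math import comb

n, p = 1000, 1/1000

# Probability of 20 or more successes if the 1/1000 estimates were right
# and the projects independent -- the director's implicit model.
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(20, n + 1))

# Astronomically small, so observing about 20 successes means the
# 1/1000 estimates were overconfident, not that she got lucky.
assert tail < 1e-15
```

Note that the calculation itself leans on the independence assumption; with 200 correlated proposals in the batch, the same arithmetic would prove nothing.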

In summary, the probabilities are very useful as long as the implicit assumptions hold and totally worthless when they don’t. Also note that the probability calculations are the boring part of the judgment process. All the interesting stuff happens in deciding which projects are comparable, which is the non-probabilistic part of the thought process.

It’s very similar with comparing people’s success rates. Here the implicit model is that people have a given success rate and those are independent. That’s often fine, but as always it breaks down if the (implicit) modeling assumptions cease to apply. For example, people may be experts on some things and not others, and then their success rates will depend on what the problems of the day are. They may also listen to each other and come to a consensus or, worse, two consensuses neatly aligned with political camps. Those are situations where the probabilities won’t help much.

The prediction market is slightly different: Here we are assuming someone else has a better model than we do and is willing to bet money on it. This is indeed a case where we use probabilities in a model we don’t know. Still, someone else does and our probabilities won’t be any more correct than that model.

Bottom line: All examples of probabilities being really useful are also examples of (at least implicitly) established models. And then the usefulness extends exactly as far as the modeling assumptions.


Scott actually came up with this in a context: estimating the probability of Yudkowskyan eschatology.

Personally, I find this a lot less interesting than the part about probabilities without models. But still I’ll comment on it very briefly: I think in that context probability talk is a distraction from the real question. The real question is whether we should worry about those future scenarios or not.

That question can get clouded in discussions about whole categories of models. For example, I would be inclined to reason about Yudkowsky the way Scott reasons about Jesus: There are 7.3 billion people in the world and world-saving-wise most of them don’t seem to be at a disadvantage compared to Eliezer. So models assigning chances much above \(\small 1.3\times 10^{-10} \) are implausible. Scott would probably reply that Eliezer is more likely to save the world than other people, and that could lead to a very long argument if either of us had the time and nerve to follow it through.

But notice that that argument wouldn’t be about probabilities at all. The arguments would be about what kind of scenarios should be considered and compared with what, and the potential conclusions would be about what we should do with our money. Also, probabilities wouldn’t help with accuracy checking or comparing us, because we’re talking about a one-time event. There is nothing the number would add except an illusion of doing math.

Posted in Arguments, Math | 16 Comments

Last scene of the fourth act

[Hindsight note: This prediction was wrong.]

Most people won’t realize how serious this is until it happens. Within a week.

June 26th was the Greek tragedy’s third act: The conflict couldn’t be resolved short of a catastrophe. The Greeks wouldn’t accept the lender nations’ conditions, the lender nations wouldn’t lend without those conditions, neither side would fold, and without further lending Greece would be screwed.

Then the first scene of the fourth act: Tsipras calls a referendum. The Eurogroup hopes the Greeks will fold with a Yes vote. The Greeks hope that after the commitment of the No vote the Eurogroup will have to fold. For a week both sides have some hope, but neither side actually folds.

Second scene: Tones of reconciliation. The Eurogroup makes some fuss about still being willing to negotiate, and the Greeks seem to be trying for some goodwill points with Varoufakis resigning.

Third scene: This last Friday we were hearing about the French helping the Greeks with their proposal. Then the Institutions make some noises about taking it seriously. And it looks like both sides may be able to call it a victory: The Greeks could claim victory over debt relief, the lender nations about even harsher austerity than they had demanded. For a brief moment it looked like the catastrophe could be averted. I was surprised.

Fourth scene: Conflict in the Eurogroup, but basically the decision is to demand more specifics and improvements of details. That sounds basically like a return to the third act, where Greece would have a rephrased “new” proposal every few days and then the Eurogroup would demand specifics and improvements, to which Greece would respond with a new proposal, etc. But actually it’s the transition to the fifth act: Sometime next week the Greek banks will run out of money. There might be some pretense of negotiation until that moment, but basically all the fourth act’s hopes are shattered.

There are still some plot threads to resolve in the fifth act:

The narratives of blame are already being set up: The creditor narrative is that the Greeks never were willing to admit their wealth had been smoke and mirrors, tried to weasel out of that realization by refusing to commit to anything specific, blew away their best chance at the polls and then continued to make new demands drunk with overconfidence until the very moment they got crushed. The Syriza narrative will be that the creditors never were really willing to help them, even when they would have swallowed a complete and humiliating defeat days before the collapse. The ECB’s narrative will be that they didn’t decide this and helped delay the reckoning as long as possible until both sides’ politicians ran into the wall that had been plainly visible for weeks. All these narratives are basically true and have been since the day Syriza won the election. But now they will have to be worked out into actual blame.

Connected to that is how the Greeks will apportion blame between their government and the creditors. Most of the people who voted No just a few days ago expected to get a better deal out of it, which is what their government loudly promised them over voices to the contrary. If most of the internal blame goes to the creditors, Greece may manage the transition peacefully. If it goes to the government that lied to them and will be paying its pensioners and civil servants in scrip, there might be a revolution.

And the nature of the scrip isn’t quite clear yet: It may be drachmas or, more likely I think, it may officially be Euro IOUs.

Basically both sides expected to win so neither side really planned for what everybody should have expected. In a few days, the Greeks will be living in interesting times. Oremus.

Posted in Politics | Comments Off on Last scene of the fourth act

Assumptions behind a curtain

This is basically an overly long response to a recent blog post by Scott Alexander. It’s not very interesting outside of that context, so read that first unless you did so already. Also, most of this is further simplification of Cosma Shalizi’s ancient and semi-famous blog posts on IQ, so if you understood those well you probably won’t find much new here.

Let me start off with a factor analysis that happened long before we knew the math of factor analysis: female sexual attractiveness.

One could build a very IQ-like measure of that by having a few random men rank a sample of women and then taking average percentiles.

Then one could do a lot of correlational studies to find things correlated with beauty and assign them to input and output categories, drawing a flow-chart from inputs through attractiveness to outputs.

On the input side of the flow-chart one might have things like facial symmetry, make-up, breast size, skin clarity, waist-to-hip ratio, hair length and shininess, etc.

On the output side one would have things like the cost of favours men are likely to do for a woman, popularity, other women getting angry when their boyfriends look at the woman, etc.

Also on that side one would have things like the probability of her getting pregnant per sex event and the probability of the baby being born healthy if she gets pregnant.

In reality though only some of those arrows should flow through the central box. Some of the input factors are directly correlated with some of the output factors, namely those that contribute to prospects of offspring. Those arrows aren’t actually going through a real central box, it’s just that the outputs are relevant for basically the same reason.

On the other hand, some arrows here actually flow through a central box one might label “the brain stem’s estimate of biological procreation prospects” or female sexual attractiveness in the strict sense. Things correlating with the outputs not actually going through the central box still tend to correlate with it, because the brain stem is somewhat good at estimating such prospects.


Let’s look at how evolution designed the central box. (I’ll talk of evolution like an agent here. I know that’s not how it works in reality, but everyone talks that way for good reason.)

It had a lot of facts available, like “broad hips make it easier for the baby to get out alive”, “visible diseases are sometimes inherited by the baby”, etc. It also knew that some of these facts were more important than others. Using all those rules directly would be computationally inefficient though, and evolution didn’t want to waste too many resources on a large look-up table. So basically it created a weighted sum of many known female physiological influences on procreation and then tinkered with the weights until predictions with that sum became sufficiently similar to predictions made with the original data. Sufficient meaning, in that context, that the better results one would get from the real calculation are not worth the cost of doing that calculation.

Basically this is an efficiency hack for reasoning with the facts evolution had available.
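The hack is easy to imitate in Python. Everything here is invented for illustration: three made-up traits, a made-up "full calculation", and plain least squares standing in for evolution's tinkering with the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical traits (columns) for 1000 individuals, all made up.
traits = rng.normal(size=(1000, 3))  # e.g. hip width, symmetry, skin clarity

def true_prospects(t):
    # The "full calculation": some messy nonlinear rule evolution knows about.
    return 2.0 * t[:, 0] + 1.0 * t[:, 1] + 0.5 * t[:, 2] + 0.1 * t[:, 0] * t[:, 1]

y = true_prospects(traits)

# The efficiency hack: tinker with the weights of a plain weighted sum
# until it predicts almost as well as the full rule (here: least squares).
weights, *_ = np.linalg.lstsq(traits, y, rcond=None)
approx = traits @ weights

# The cheap linear score tracks the expensive calculation closely.
corr = np.corrcoef(y, approx)[0, 1]
assert corr > 0.95
```

The point is only that a cheap weighted sum can track an expensive calculation closely on the data it was fitted to, which is exactly what the central box is for.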


Let’s also look at where the central box works and where it doesn’t.

Nowadays there are a lot of ways to confuse the input variables, like make-up, chemical hair-shinyfication, breast implants, etc. They do change the decision the brain stem makes through the central box, like male favor cost, girlfriend look-triggered anger, etc. But they don’t change any of the outputs correlated with the original inputs of the central box, which it was supposed to optimize. In other words, from a designer’s perspective these are things the box is bad at.

On the other side, modern medicine has also changed the consequences of some of the traits the box is adding up. For example, the correlation between sexual attractiveness and probability of conception is probably smaller than it used to be in the ancestral environment, because nowadays people having sex might be contracepting. Also there are now caesarians, so the correlation between birth obstacles and the baby not getting out alive probably went down. In other words, from the designer’s perspective even the original inputs are now probably weighted wrongly.

Still, the central box works reasonably well where nothing has changed.


Now compare the situations that evolution considered in designing the central box with the situations it does a good job on. As you might have noticed, they are identical.

This is not a coincidence. Remember, that box was created by taking all the correlations available at the time and then throwing some of the data away until it was boiled down to a single score. In other words, it doesn’t contain any information that didn’t already go into constructing it, and not even all of that. So it does a reasonably good job on the kind of correlations it was built to simplify, but there is no reason why it should work on correlations that came up later, and so it doesn’t.

So even though make-up correlates positively with sexual attractiveness, and sexual attractiveness correlates negatively with miscarriage, even the dumbest conceivable doctor wouldn’t prescribe more make-up to prevent miscarriage: the make-up-attractiveness correlation is not among the ones the box summarizes, and reasoning through the box on unsummarized correlations is not a valid argument.


Compare this to one of Scott’s examples, blood pressure. I’ll come back to his points about blood pressure not working that well, but first let me talk about why it works when it does.

My oversimplified layman’s understanding is that we basically have a good idea of how blood pressure works. If it is high, blood will press against blood vessels more and sometimes that will make them break. This is a bad thing, particularly if it happens in the brain. On the other hand, more pressure makes the blood go faster, which means the cells get more oxygen per time. This is why people with a blood pressure of zero (Or maybe equal to ambient pressure, don’t want to figure the details out right now) tend to go brain-dead a few minutes later.

On the other side, we also know how it is influenced. Things changing blood pressure either change how hard the heart presses, or how big the blood vessels are, which in turn changes how hard they press back, or maybe how much blood there is in total, which determines how hard the body must push to keep it in.

So, while we only care about changing blood pressure because of its effects, we actually know these effects are mediated through a real thing called blood pressure. Eating too much salt will give you seizures by increasing blood pressure and not, say, by chemically corroding the blood vessels. Likewise, a heart attack will kill your brain by reducing blood pressure to zero, not by, say, just phoning the brain and telling it the apocalypse is here so it might as well go home now.

Blood pressure being real in that way means we can validly make arguments on newly found correlations between blood pressure and other things. So if we find a new drug increasing blood pressure and if that drug doesn’t have any other direct effects on the body (to be honest that latter one is the mother of all ifs, but I’m arguing principles here), then yes, that drug will cause the kind of things higher blood pressure causes.

On to the ways it doesn’t work. As Scott explains, pressure is different in different parts of the body. We actually care about blood pressure in the body parts where we care about effects, but measure it somewhere else, and that’s close enough except when it isn’t. So the explanation I gave above isn’t quite right, which in turn means it doesn’t work perfectly. Note however, that the explanation’s success is due to it being like actual reality and its failure is due to it not being like reality. Also, Scott notes measurement methods suck. Fine, but again measuring blood pressure works because the measurement is close enough to the actual physical quantity and fails because it isn’t.

Flow chart wise, we actually know the picture is similar to reality. In particular, the box in the middle corresponds to something in reality, there is only one such thing and the arrows are rightly drawn in going through that one box. They shouldn’t really go directly to the boxes on the right side or maybe through fifteen boxes we omitted that also have arrows between each other, some of which go backwards. Again, this isn’t perfectly true, but blood pressure is a good concept because and in so far as it is close enough and we have a justified expectation of it being close enough for newly discovered correlations.

In cool stats lingo, blood pressure is a causal node. Female attractiveness is a causal node only for the things evolution conditioned on it, but not for the things evolution was trying to achieve.

What evolution did with sexual attractiveness can be done with math for things we care about. The method doing that is called factor analysis. Sometimes we may be lucky and discover an actual causal node that way. But the way the math works, we almost always can build a box from a large bunch of correlations, even if no causal node is out there. Such a box is called a factor. Sometimes it represents something real. Sometimes it doesn’t.
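A toy demonstration of that last point, with everything invented: six variables driven by two independent causes, so there is no single causal node behind the data, yet the first factor (here approximated by the first principal component) still soaks up most of the variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two INDEPENDENT latent causes: no single causal node behind the data.
cause_a = rng.normal(size=n)
cause_b = rng.normal(size=n)

noise = lambda: 0.5 * rng.normal(size=n)
data = np.column_stack([
    cause_a + noise(), cause_a + noise(),   # driven by A
    cause_a + cause_b + noise(),            # driven by both
    cause_b + noise(), cause_b + noise(),   # driven by B
    cause_a + cause_b + noise(),            # driven by both
])

# First principal component as a stand-in for the first factor.
eigvals = np.linalg.eigvalsh(np.cov(data, rowvar=False))  # ascending order
first_factor_share = eigvals[-1] / eigvals.sum()

# The math happily produces a dominant "general factor" anyway.
assert first_factor_share > 0.4
```

Proper factor analysis differs from PCA in details, but the moral is the same: getting a big first factor out is nearly automatic and proves nothing about a causal node being out there.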

Spearman did this for the various parts of IQ-tests and came up with a box he called the g-factor. (For the pedantic: he used a now-obsolete predecessor of factor analysis proper, which hadn’t been invented yet.) This was particularly cool, because at the time he had good evidence for the g-factor being a causal node. That evidence turned out to be a fluke though.

So now we don’t have good proof of the g-factor actually being causal. Some people think it is though, and that matters because some arguments will be valid if it is but not if it isn’t. So they can think they proved things they actually only assumed in the disguise of assuming g to be causal.

At this point I’ll make a slight digression on heritability.

Scott thinks IQ-sceptics are trying to avoid thinking about claims like “Intelligence is at least 50% heritable”.

I’m actually fine with that claim, as long as we stick to its technical meaning.

Problem is, the word “heritable” sounds like it should mean but actually doesn’t mean “unchangeable short of bioengineering”.

To illustrate, let me make up a toy model where the two things differ. I’m not claiming that’s how it works, just showing one easy example of how they could differ. So in my fake model intelligence consists of 10000 binary abilities, each of which you either have or don’t have. Some of these abilities depend on each other, so you can’t learn the advanced ones before the primitive ones. All are learnable. The genetic part is that for each ability you have a genetically determined teaching time necessary to master it. For some abilities some people need less learning time than others, but given enough time everybody can learn every ability. If you get enough teaching time for a given ability you learn it, if not, not.

Actual teaching time isn’t strongly enough dependent on needed teaching time, so, ceteris paribus, people who would need more time on fundamental abilities (though perhaps less on advanced ones, for an equal total travel time to the ceiling) tend to learn fewer abilities. Thus intelligence is strongly heritable in our present environment.

As schooling improves we discover abilities many people didn’t learn previously and spend more time on them. Thus the Flynn effect.

Eventually we will figure out all the critical abilities and human differences in IQ will vanish without any bioengineering.
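For the sceptical, the toy model is easy to simulate. This sketch scales it down to 100 abilities and drops the dependency structure for brevity:

```python
import random

random.seed(0)
N_PEOPLE, N_ABILITIES = 500, 100  # scaled down from the story's 10000

# Genetically determined teaching time needed per ability.
needed = [[random.uniform(1.0, 10.0) for _ in range(N_ABILITIES)]
          for _ in range(N_PEOPLE)]

def score(person, teaching_time):
    # You learn an ability iff you get at least the time you genetically need.
    return sum(1 for t in person if t <= teaching_time)

# Today's schooling: a fixed time budget per ability.
# Genetic differences show up as score differences -> "IQ" is heritable.
today = [score(p, 5.0) for p in needed]
assert max(today) > min(today)

# Better schooling: enough time for everyone. The differences vanish
# without any bioengineering.
future = [score(p, 10.0) for p in needed]
assert min(future) == max(future) == N_ABILITIES
```

Heritable today, equal tomorrow, and no genes were edited along the way.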

Again, I’m not claiming this is how it works, mostly because it’s a complex just-so story I just made up. But it is very compatible with all the results we get from twin studies. In fact more so than the story the average “human biodiversity” guy on the intertubes professes as a necessary result of that data. Which is to say, heritability doesn’t prove strict biodetermination.

However, if you assume g is causal, add a second assumption (one that is actually falsified and that no one really believes) chosen to outlaw models like this one, and then add in the results of twin studies, then you can conclude intelligence differences can’t be much reduced short of bioengineering. Problem is, a lot of people think this follows from twin studies alone. It doesn’t.


Like Scott I’m concerned there’s a motte and bailey tactic involved here, only I think it’s on the other side of the controversy.

The motte:
When you need to screen people for the kind of properties IQ tests are designed to screen for, and when you don’t have more specific tests to screen for a more specific version of what you’re looking for (for example subject matter tests in college admissions), then IQ tests will do a better job than nothing. So far so good; this is pretty obviously true and actually not all that controversial.

Now for the bailey:
Almost all human societies have fairly hereditary social strata, where people tend to end up with approximately the same amounts of power, prestige, and money their parents also had, and so on until the tenth generation. This is somewhat embarrassing for societies that follow a nominal ideology of giving everyone equal chances. It is particularly embarrassing in America, where the underclass has a distinct skin colour, making it comparatively hard to just ignore the problem.

Here comes the IQ ideologue with a cruel but comforting story dressed up as science: The underclass is so stable because they are irredeemably dumb for genetic reasons. No kind of affirmative action can ever alleviate that genetic stupidity, so we had better not even try. See, it’s nature itself that is highly unfair, not, perish the thought, the social structure. So those of us getting fairly high positions on the totem pole won a genetic lottery, not a benefit-from-structural-sin lottery.

There actually is no good evidence for that story. To the extent it’s directly testable it’s wrong. For example, shithole countries tend to have low average IQs, and one could argue about what causes what. Except that occasionally countries do emerge from poverty and bad institutions, and when they do, the average achievements of their inhabitants go up to degrees this theory declares impossible for a single country.

But such stories are very attractive for an upper class; look at how Malthus previously Eulered basically the same consequences from then-exotic math (exponential growth), and some people still want to stick with that bullshit.

So if you put in some additional assumptions, assumptions that are so subtle you don’t even need to understand them or know you’re making them, then you can derive this bailey from the actually justifiable motte. And from the unfounded assumptions, but don’t look at that curtain too hard.


Against realism

Leah Libresco is stymied at this week’s edition of her Pope Francis bookclub. (You won’t be able to follow the rest of this without reading that post.)

I can’t vouch for my own interpretation here, but I think the problem is a scope error.

Leah seems to think of the general project of moral discernment, but right here Bergoglio seems to be talking about a particular sub-project. Going by the first page, the intent of the whole chapter is pre-discerning typical temptations affecting the making of “the various apostolic decisions we must make in our pastoral work/efficacy”. (Or something like that, I’m going by the German edition here.) So it’s not just discernment in general, it’s discernment on evangelizing and in particular evangelizing the culture.

In that context, the struggle at hand is not the struggle for every individual soul but the cosmic one, which in fact is already won. Or more precisely it is ongoing but we know the enemy is fighting in an irrecoverably lost position even if it looks like he’s winning a lot of the time.

The pessimism Bergoglio talks about would then basically boil down to losing sight of that victory and reducing the struggle to what it sometimes looks like.

For example, from the inside compromising eternal goods for temporal ones often looks like making the sacrifices necessary for winning the battle. An example of the example would be identifying one’s Christianity with a political side and then treating every election as a bonsai eschaton. Looking at it this way, the following section on separating wheat and chaff before the time looks very connected.

Now I’ll go on to blatantly substitute my reflection for Leah’s:

One of the underlying ideas here is that, once one has made a fairly stable decision for good, evil will mostly present dressed up as good, or, to say the same in Jesuit, sub angelo lucis. But then it needs a costume appropriate to the context. So basically examining different contexts and thinking of how evil may look there is a heuristic for figuring out costumes evil may be wearing. This is somewhat similar to the modernist project of cataloging fallacies and biases, just for morality rather than rationality in general.

Summarizing in a very “anacultural” way, I think this chapter is mainly about evil dressing up as practical rationality. Bergoglio has several examples of how it may do so: by the pessimism route (pessimism seems like realism and realism is rational), or by urging premature separation of wheat and chaff (categorizing feels super-rational), or, somewhat obviously, by general sterile over-intellectualization, or by slacking in petitionary prayer (petitionary prayer often seems impractical).

And he categorizes all of this as failures of faith, because they separate a theoretical faith from practical efficacy.  It’s interesting to translate this to my reframing: When evil dresses up as practical rationality, it’s actually trying to separate our epistemic and practical rationalities.

By the way, my spell-checker wants to correct eschaton to Charleston. I suspect this would be super-funny if I ever had been there.


Hi everybody

Just a note for folks coming over from the ITT at Unequally Yoked:

As you can see, this blog is presently inactive. That’s because I’m all stressed out on a new job and moving houses next week. The blog will rise again, but not very soon.

Meanwhile you might check out my archive or subscribe to my feed for the content I will be posting eventually.


Models as filthy water

[I was planning on this main metaphor before that other filthy water metaphor shook the Catholic blogosphere. Absolutely no reference intended.]

When journalists asked Konrad Adenauer, the first post-war German chancellor, why his foreign office had so many employees who had been Nazis just a few years earlier, he answered

Man schüttet kein dreckiges Wasser aus, wenn man kein reines hat!

(One doesn’t pour out filthy water if one doesn’t have pure [water]!)

I think he was right. Running a government with loads of allegedly reformed Nazis was terrible and had some very bad consequences in actual policy, but it’s not like the people complaining about it had any realistic alternatives to offer. So sometimes we must make do with filthy water.

This is one good objection to my last post, where I ranted about Less Wrong/MIRI/CFAR folks trying to eat the menu by promoting some simplified mathematical models as the definition of rationality. Sure, someone might say, these models are filthy water, but we can’t think without simplifying, so we don’t have pure water, so we can’t throw out the filthy water.

My main reply is that filthiness is contextual. For example, coffee is very filthy water for purposes of washing clothes but better than pure for drinking. On the other hand some dangerous bacteria can be killed off by drying, so they can make water too filthy for drinking but still pure enough for washing clothes if totally pure water isn’t available.

You may believe I’m getting carried away by my metaphor, but actually the metaphor is getting carried away by me. The thing is, models too can be pure enough for some purposes while being too filthy for others. In other words, they have a domain of things they describe fairly well and get worse as we extrapolate beyond that domain. So a model can easily be the best we can do for a certain kind of question but still give worse than worthless answers for others. Those other questions might be better answerable by other models (perhaps even the informal model of our intuitions) or they may not be describable by any available model (i.e. we may actually know nothing about them).
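To make the domain point concrete, here is a throwaway Python sketch (data and model both invented): a straight line fitted to a curved relationship is drinkable water inside the range it was fitted on, and poison far outside it.

```python
# Fit a straight line to y = x^2 on x in [0, 1] (its "domain"),
# then extrapolate far beyond that domain.
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]

# ordinary least-squares slope and intercept, by hand
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def line(x):
    return slope * x + intercept

# inside the domain the model is pure enough...
in_err = max(abs(line(x) - x * x) for x in xs)
# ...far outside it, it is worse than worthless
out_err = abs(line(10) - 10 * 10)
print(in_err, out_err)
```

On this data the worst in-domain error stays around 0.15 while the error at x = 10 is around 90, even though it is the very same model answering both questions.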

As an example, let’s look at probabilistic reasoning. I argued that it is useful if (a) the range of potentially relevant events is properly understood, (b) some way exists to assign them to categories, and (c) we have enough experience to have some idea of how often our guesses for a given category turn out correct. This is most paradigmatically the case if the events are arbitrarily repeatable (in which case probabilities are expected frequencies), but some other use cases are close enough. Essentially this makes probabilistic reasoning into a kind of meta-model: It works as long as simplifying assumptions making (a-c) true are pure enough water. Which, if any, simplifying assumptions are pure enough depends on the context, which is why we shouldn’t talk about probabilities without the simplifications being at least implicitly specified.

So probabilities are great for handling one specific kind of uncertainty. On the other hand they totally suck for the kind of uncertainty that is rooted in sui generis cases or the knowledge that our model is misspecified but we don’t have any better model yet. And we do have purer water for that kind of uncertainty: It’s the “classical rationality” the canonical writings of Less Wrong consider outdated.

For example, this recent post at Less Wrong is all about hanging epicycles on a probability model used outside the domain probability models are good for. These problems are entirely homemade; they only arise from the assumption that probabilistic reasoning always has better water purity than the old-fashioned methods used by hoi polloi.

I’m not saying probabilities are bad, I’m saying they are sometimes good and sometimes bad and we have at least some vague rules of thumb for when they are good and when they are bad.

So illustrated on the example of probability, this is my criticism of the thinking system promoted at Less Wrong: They take mathematical (or sometimes mathematical sounding) models that are somewhat pure for some purposes, canonize them  as the definition of rationality, and then use them for other purposes they are too filthy for.


The disappointing rightness of Scott Alexander

It turns out that Scott Alexander is even smarter than I thought. This is somewhat disappointing. Perhaps I should slow down and explain that.

The proof of his smartness is of course in agreeing with me. On his blog he has a defense of handwavy utilitarianism as a false but still useful heuristic. His conclusion:

It’s not that I think it will work. It’s that I think it will fail to work in a different way than our naive opinions fail to work, and we might learn something from it.

You still need to read  the whole thing or else what follows won’t make much sense.

Well I agree with his main point and that is sad because now who will live up to my stereotype?

More specifically, if I met a CFAR employee in real life and asked them to pitch their version of rationality to me, I could easily imagine his dialog happening with me in the role of the student. And I would be that stubborn because giving them a number – any number – would seem like a concession that there is a correct if unknown number and only one such number and I don’t believe that because I’m a frequentist. Let me explain that in a more long-winded way.

CFAR has a page on what (they think) is rationality and I think it’s simply wrong. According to them

For a cognitive scientist, “rationality” is defined in terms of what a perfect reasoner would look like.[…]its beliefs would obey the laws of logic[…]  its degrees of belief would obey the laws of probability theory[…]  its choices would obey the laws of rational choice theory.

(and they clearly want to use the word like that cognitive scientist)
According to their claims, the reason we aren’t perfect reasoners is basically that we run on sucky hardware.

And that actually  is one reason, but another reason is simply that a perfect reasoner is a mathematical model abstracting from many of the most interesting problems actual reasoning deals with.

For starts, a perfect reasoner lives in a probability space and all its reasoning is about  the events of that probability space. It can be mistaken, but, by definition, it can’t fail to at least consider the correct answer. And then it has consistent if possibly wrong probabilities for every one of those possibilities.

A perfect reasoner never needs to ask what probability space a probability lives on, because, by definition, every probability it encounters lives on the same one. The lack of that convenience is a problem for any real reasoner, because you can’t have a probability without a probability space. Probabilities without probability spaces are quite simply meaningless, exactly like asking if a crocodile is greener than long.

Fortunately we have a practical solution to that problem. We just define a probability space and then we come up with some rules of which real world circumstances we will map to which events of that probability space. In other words we make models. Or, again in other words, we assume for the purpose of a particular probability argument that the world is structured in a certain way. And there is nothing wrong with that as long as we remember two things: First, outside of the model the probability is not even wrong; it is simply meaningless. And second, the model is not true; it is at best useful.
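This model-making can be spelled out in a few lines of Python for the die example from above. The fair weights are the modeling assumption here; swap in other weights for a loaded die. The probability space is just a finite set of outcomes with weights, events are subsets, and the probabilities of compound events fall out by additivity.

```python
from fractions import Fraction

# A finite probability space for one die. Nothing here is "the" probability
# of a real die; the outcome set and the weights ARE the model.
outcomes = {1, 2, 3, 4, 5, 6}
weight = {o: Fraction(1, 6) for o in outcomes}   # modeling assumption: fair die

def prob(event):
    # an event is just a subset of the outcomes
    return sum(weight[o] for o in event)

even = {2, 4, 6}
print(prob(even))   # 1/2

# additivity: the probability of a disjoint union is the sum of the parts
assert prob({1} | {2}) == prob({1}) + prob({2})
```

Note that a real thrown die flying into the fire is simply not representable here; questions about it are meaningless inside this model, exactly as the text says.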

The most clear-cut example of probability models being useful is for random events that can be arbitrarily repeated. The classical examples are throwing dice, flipping coins, etc. but also e.g. repeated error-prone measurement of a physical property. In those cases probabilities will correspond to frequencies in the long run. There are still some imposed assumptions in such models. For example, in the real world a thrown die could fly into the fire and burn rather than producing a number from one to six, and the probability model doesn’t account for that. But still, the numbers have a very clear meaning in that kind of model. If I say there’s a 50% chance of the coin coming up heads, I know what those words mean. I’m not radical enough to call those the only kind of good probabilities (let’s call that radical frequentism) but I do think it’s the prototype and other probabilities are probabilities by analogy to this kind.
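The long-run reading is easy to check by simulation; a minimal sketch (the fair-coin model is the assumption, and the samples come from that same model):

```python
import random

random.seed(42)

# Flip a modeled fair coin many times; the empirical frequency of heads
# approaches the model probability 0.5 in the long run.
flips = [random.random() < 0.5 for _ in range(100_000)]
freq = sum(flips) / len(flips)
print(freq)   # close to 0.5
```
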

The next best kind is if I have non-repeatable events that are still similar enough for me to make good guesses by forcing them into a model that treats them as identical and thus repeatable. This is e.g. what an insurer does. Of course there aren’t thousands of identical houses occupied by identical residents, but treating many different houses and residents as identical is close enough to mostly work. Sometimes I can do the same. For example, I don’t get a few dozen identical worlds to see in how many of them a prediction turns out true, but I can come up with categories of predictions I am about equally sure about and then see how often I’m right on that kind of prediction. If I am reasonably sure about my categorization I’m fairly comfortable doing that. A popular way to formalize this kind of thing is betting odds, and in some situations that works fairly well. To be honest, I probably would be willing to grant a probability for my computer breaking down under this rubric, as long as that interpretation is understood. So if I say there is a 0.6% chance of my computer breaking down this month, I know what those words mean.
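That categorize-and-count move is what calibration checking does; a sketch, with invented prediction records:

```python
# Group past predictions into confidence buckets and check how often each
# bucket turned out right -- the frequency-by-analogy reading from the text.
records = [
    # (claimed confidence, turned out true?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False),
]

def calibration(records):
    buckets = {}
    for conf, correct in records:
        hits, total = buckets.get(conf, (0, 0))
        buckets[conf] = (hits + correct, total + 1)
    # observed hit rate per claimed-confidence bucket
    return {conf: hits / total for conf, (hits, total) in buckets.items()}

print(calibration(records))   # {0.9: 0.8, 0.6: 0.4}
```

The output only means anything to the extent the bucketing itself was sound, which is exactly the caveat the paragraph above makes.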

This approach gets gradually worse as my similarity judgments get less confident or relevant. For example, what’s the probability that human emissions of greenhouse gases play a large role in global warming? A few years ago I looked into what I would have to read before I would feel confident actually arguing on that question, and it turned out it would take about two months of full time work before I would think myself worth listening to. Given that I don’t have any power or influence, I decided my having a correct opinion on this question isn’t important enough to justify that investment. This doesn’t quite prevent me from categorizing and assigning a probability. I know that that question is studied in comparatively hard sciences and that organizations aggregating the opinions of the scientists involved report the answer to probably be yes. I also know that the area of research is fairly young, that there are some dissenting voices, and that the question is politically charged on both sides. Ex ante et ex ano, I’d guess that the majority will be right about two out of three situations like this one. So technically I can assign a probability of about 67% to human emissions of greenhouse gases playing a large role in global warming. But this probability is far less useful than the ones I talked of before. Because if I ever needed to make an important decision this question was relevant to, the correct approach would not be to plug 2/3 into any calculation; it would be to do my homework. I wouldn’t even offer bets on this kind of probability, because that would just invite better informed people to take my money. Given that probability models are justified by being useful, that is quite an indictment. Still there is some connection to reality here. In fifty years everyone will know the answer and then I can see if the scientists were actually right on two out of three such questions.
So if I say there is a 67% chance of them being right, I know what those words mean.

OK, let’s add another straw. According to present science, the law of gravitation is \(F_g=\gamma \frac{m_1 m_2}{r^2}\). In principle there could be another small term, so maybe it really is \(F_g=\gamma \frac{m_1 m_2}{r^2} +\gamma^\prime \frac{m_1 m_2}{r^{42}}\). If \(\gamma^\prime\) wasn’t too large, our measurements wouldn’t pick up on this and science wouldn’t know about it. I don’t believe this to be true because of Friar Occam’s razor, but I do think it’s possible. So if it’s possible, what is the probability of it being true? If you shrug, that is exactly the right answer, because if you told me that chance was \(x\), I seriously, honestly wouldn’t know what those words mean. This is a different kind of not knowing, because my uncertainty about models is so much more important than my uncertainty in models. With global warming my model sucked, but I at least knew which sucky model I was reasoning in. Here I don’t even know that.

Or what’s the chance of murder being evil? I’m quite sure it is evil and I do think that is a question about objective reality rather than mere preference, but I’m also quite sure it is not an empirical question. There is just no way to tie this up with any kind of prediction so talking about probabilities in this context is simply a category error.

Some people try to get around this by just pretending they can do probability without models. This strategy could be called by names like hubris, superstition, or delusion, but the most commonly accepted euphemism is Bayesianism. (Or maybe radical Bayesianism, because there are some people who mysteriously call themselves Bayesians despite knowing this.) My reaction to this is similar to what Bertrand Russell said in another context:

The method of “postulating” what we want has many advantages; they are the same as the advantages of theft over honest toil.

Of course in the real world communication is contextual and often people talk about probabilities with the model being understood implicitly. And that is fine as long as it isn’t the question at hand, but if I’m talking Bayesianism with a Bayesian, giving them a belief number they will interpret in a model-free way is just letting them beg the question. So if I was the student in Scott’s dialog I wouldn’t let Ms Galef put any belief number on the computer breaking down either, even if she acknowledged that number was a guess, unless she also acknowledged even the right number relating to a specific model and possibly being different in other models. It’s not just that this number may be wrong, it’s that there may be no such thing as the right number. And behold Scott getting this, in his comments he even talks about an example of  “weird failure[s] of math to model the domain space like Pascal’s Mugging”.

So much for probability theory, now on to the same complaint for rational choice theory. The rational agent of rational choice theory is basically a simplified consequentialist. It has ranges of possible consequences and preferences among them, and ranges of possible actions and probability distributions for what the actions might lead to. Then it acts in a way it expects to result in the consequences it likes best. Depending on what additional simplifications we are willing to make, the math of those choices can sometimes be made fairly simple. Of course the probability distribution part incorporates all the problems I already mentioned, so consider them repeated. In addition to that, the rational agent can’t consider actions themselves, only their consequences. For example, its choices would be obviously wrong on at least one of the trolley and the fat man problems. In the mathematically most simple versions it also can’t have lexical preferences. Sometimes this model is useful, but on other occasions it fails not only as a description of how people do work but also as a description of how people should work. So in addition to all the problems of the probability theory part, using rational choice theory in the definition of rationality also assumes rather than argues for a whole lot of ethical assumptions I disagree with.
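The agent that paragraph describes fits in a few lines. The insurance-flavored numbers below are invented, and notice what the code structurally cannot express: it only ranks lotteries over consequences, never the actions themselves, which is the trolley-style limitation just mentioned.

```python
# A minimal expected-utility maximizer: pick the action whose
# probability-weighted consequences score best. All numbers invented.
actions = {
    # action -> list of (probability, utility of consequence)
    "insure":      [(1.0, -100)],                  # sure small loss
    "dont_insure": [(0.99, 0), (0.01, -20_000)],   # rare big loss
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)   # "insure": a sure -100 beats an expected -200
```
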

Finally, let’s consider utilitarianism. This is actually not something CFAR seems to talk about, but it clearly is a major defining belief of the cognitive culture both Scott and CFAR hail from. Basically the idea here is that the preferences all agents have can somehow be aggregated into one preference, and then morality consists in acting so as to get the best result according to those aggregated preferences. As a fundamental account of ethics that is pretty much hopeless, because we know such an aggregation is mathematically impossible (that’s Arrow’s impossibility theorem). And unlike the rest of the Less Wrong collective, Scott is actually aware of that fact. So now he basically says that even if it is not a correct model it is still often a useful model.
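A taste of why aggregation fails, sketched in Python. This is a Condorcet cycle, the simplest symptom of the impossibility the paragraph invokes, not the full theorem: three agents with individually consistent rankings whose majority aggregate is cyclic, so no single aggregated preference ordering exists.

```python
# Three agents, each with a perfectly consistent ranking (best first).
rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    # does a strict majority rank x above y?
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

print(majority_prefers("A", "B"),   # True
      majority_prefers("B", "C"),   # True
      majority_prefers("C", "A"))   # True -- A > B > C > A, a cycle
```
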

So looking at it we still have some differences (for example, while he no longer seems like an orthodox utilitarian, I think he still is an orthodox consequentialist), but on the big picture our attitudes to mathematical decision making now mainly differ in emphasis. I say those models may be very useful, but they are still wrong, and he says those models may be wrong, but they are still useful. The still-useful part may be captured in his assertion that “imperfect information still beats zero information”, and that is quite true as long as we remember the complement that I might formulate as “but zero information still beats false information”.

The disappointing part of this comes from me having assigned Scott the role of the intelligent champion for the Yudkowskian/MIRI/Less Wrong/CFAR world-view. From my viewpoint the entire point of that philosophy is believing that those models are not only useful but correct. They actually want to build a perfect reasoner and expect it to be a perfect reasoner in the non-technical sense of the word. And when they don’t call it an AI they call it a Bayesian superintelligence because they actually expect it to work by manipulating Bayesian probabilities all the way down. (Which is presumably why they don’t call it a polyheuristic superintelligence.) And they talk of programming that machine to maximize “the coherent extrapolated volition of humankind” as if those words had a meaning. Actually that’s my central criticism of Yudkowskyanism: Basically all its teachings are based on this one mistake of taking a vaguely mathematical or at least math-sounding model vaguely appropriate for some problem domain, absolutizing it and then running far far beyond the problem domain it was vaguely appropriate for.

Of course if your role is defending that philosophy, casually converting from Bayesianism and fundamental utilitarianism to moderate frequentism and some weird kind of contractualism, even while noting the old views are still great heuristics, is just a failure to stay in character.

Of course more seriously I do realize Scott isn’t defined by the roles I privately assign him and I do think it is better for him to be less wrong than before and I’ll just have to reassign him as the champion of his own syncretic philosophy. But now I have an unpaid vacant position for the champion of Yudkowskyanism, for Scott is no longer qualified. So there.


Samurai Mormon

This cracks me up.


Oversimplification by Catholic cardinals and atheist bloggers

Atheist blogger JT Eberhard has a piece accusing the Catholic Church of supporting the Pinochet regime in Chile. This is somewhat surprising given the conventional wisdom that the Chilean Catholic Church under Cardinal Silva basically ran what little internal organized opposition there was to that regime. So let’s look at the details.

The new evidence is a then-secret but meanwhile declassified cable from the US Roman embassy to the state department that was recently published by Wikileaks. The cable in question reports on what Archbishop Benelli, then a high-ranking Vatican diplomat, said in a rant to Robert Illing, then a US diplomat at the Holy See. And a lot of what he said is indeed very, very bad. Basically Benelli thought things were going fairly well in Chile and the reports of the new regime systematically murdering its opponents – which we now know to have been clearly true – were just so much communist propaganda designed to detract from this setback for communism. He based this on the report of cardinal Silva – the guy who around that time was founding his first oppositional human-rights organization – whom he thought more sympathetic to the old regime than to the new one. He noted that

Despite its conviction that truth far from picture found in media, Vatican has been impotent in its quiet efforts to convince anyone of same.

and complains that

leftist propaganda has been remarkably successful even with number of more conservative cardinals and prelates who seem incapable of viewing situation objectively. Result is that leftists have managed to create situation in which pope would be attacked by moderates if he defends truth on Chile.

Then for icing he relays a story about the old regime having itself planned a coup and stockpiled weapons at the Cuban embassy for that purpose. We now know that to have been a lie the new regime made up to justify the coup.

So basically the Vatican’s diplomacy department fell for the new regime’s propaganda hook, line, and sinker, unsuccessfully tried to convince other people of it, and blamed the toppled regime for the coup, all while raving about everybody else being victims of communist propaganda. I don’t know if I should laugh or cry.

To this JT adds a link to a 2010 media story about the Chilean bishops proposing an amnesty from which some of the human-rights violating officers would also have benefited. He thinks

This is incredibly odd behavior if the Catholic Church is representative of a loving god’s will on earth.  Of course, stuff like this is exactly what we’d expect to see if the Catholic Church was a political organization interested primarily in its own power, run by corruptible mortals.

I agree this is evidence of the Church being run by corruptible mortals, not to mention the occasional idiot. But the “political organization interested primarily in its own power” part doesn’t fit the facts either. (At least not in this case, history does of course have examples of that too).

The thing is, if you’re trying to maximize your own power, founding an internal human-rights organization criticizing the local powers in one place while simultaneously telling their foreign enemies they are not that bad is about the worst possible way to go about it. And then calling for amnesty for people who lost power twenty years ago isn’t exactly power-maximizing either, particularly if you originally argued for prosecuting them. Nor does calling the dictator a dictator while he is still in power and saying you must struggle to bring democracy to that country sound like a particularly promising plan for power-maximization.

So let’s try for an understanding that can handle the complexity. First, treating the Church as a unitary subject on that kind of thing is already a mistake. Catholics can be found on both sides of nearly every political issue. So it does seem kinda relevant that Benelli and Silva were different people. So is it good Catholics in Chile, bad ones in the Vatican? Even that would be an oversimplification, because people see things differently at different points in time. For example, another cable almost four weeks later in the same declassified collection reports on cardinal Silva visiting Rome and the Chilean press vilifying him for parts of the statement he made there. The cable concludes (emphasis is mine):

With Allende gone and under the benevolent gaze of a govt many of whose members view the cardinal’s left-of-center stance with some suspicion, the attack on him has been renewed and may be continued. The latter is especially possible if Silva follows advice given to him in Rome (as reported REFTEL) to press Junta for greater moderation.

And a few months later a report on the new Chilean ambassador to the Holy See notes:

Diplomatic observers here anticipate a certain amount of rough going for Riesle who arrives at a time when Vatican patience is somewhat exasperated over continuation of what church considers unnecessary repression in  Chile.

(Considers? Remember this is the state department under Nixon and Kissinger, which actually did support the Pinochet regime.) So the Vatican picture of the situation didn’t remain that rosy either.

Basically that leaves us with Benelli’s first reaction of totally falling for the regime’s propaganda. As far as I can ascertain, the credibility of the atrocity reports was debatable at the time, though of course now we know they were true. For example, about three weeks earlier, a New York Times article explains how the Junta sucks but hedges in passing that

the reports of largescale executions of alleged leftists may not be true

And a few days before that the US state department cabled loads of institutions telling them

Casualty estimates continue to vary widely. Junta has listed only 115 dead; unofficial reports contend toll much higher.

Of course “debatable” doesn’t mean “whatever”. Benelli confidently picked the wrong option, decided everyone else was a deluded propaganda victim and then actively tried to convince others of that standpoint.

Now let me speculate about the motivation of that failure. This was during the cold war. A cleric of the time would have been very invested in anti-communism, because the Church suffered very real persecution in all communist countries. He would also have known that communists lie whenever it’s politically convenient, both by experience and because that’s what Lenin advised as the scientifically correct way to do communism. He would remember that Allende was in the process of nationalizing the means of production in blatant disregard both of the law and of the will of the democratic majority. (Yep, Pinochet was a bazillion times worse, but there is no cosmic law guaranteeing a hero to balance every villain. Allende was way better than Pinochet but still pretty bad.) And he would have observed that the people talking about atrocities were mainly the same people claiming Allende had been totally law-abiding, which actually was obvious propaganda. None of that is actually relevant to the truth of the atrocity reports, because all of it would be (actually was) also true in a world where they were true. But it would have a massive emotional impact. He had a story about the world, and one way of oversimplifying complex data fit that story very, very nicely. So he went ahead believing it. And ignored the reports he could have studied if he had also been looking for evidence disconfirming his story. Luckily he didn’t convince even people friendly to the Vatican.

Did I already mention this was a failure? Let me do so again: this was a failure. But it’s not like that kind of failure only happens to cardinals.

For example, JT Eberhard has a story about the Catholic Church. In that story it is a political organization interested primarily in its own power. And he thinks he has very good reasons to believe this. Then he came across media reports of this cable. Most of those media reports actually were of the “man were those folks dumb” variety, which is a fairly obvious take. But with just a little bit of oversimplification it fits the story of the power-hungry Church really, really nicely. So he concluded the Catholic Church must have been in league with Pinochet. And if you don’t look for evidence incompatible with that simple story, you don’t even have to suspect it’s an over-simplification. To be fair, I didn’t notice any other atheist bloggers talking about that story, so he doesn’t seem to have been very convincing either.

Epistemic morals:

  • Beware of interpreting complex events in preconceived stories about the good and evil guys. Sometimes that will be correct, but it should be a warning light for likely over-simplification.
  • If you do interpret complex events in preconceived stories about the good and evil guys, make sure to also look for disconfirming evidence.
  • There seems to be good hope that this particular failure mode isn’t especially viral, so thinking in communities might help.
  • Also, did you notice how most of this is speculation concordant with a simple story? It is my best guess, and short of telepathy I can’t disconfirm it. But it is also quite possible that Benelli and/or Eberhard went wrong for totally different reasons.
Posted in Arguments | Comments Off on Oversimplification by Catholic cardinals and atheist bloggers

Confusion and the Morning After Pill

About two months ago the German bishops made the news with a statement on the Morning After Pill and rape. I was dissatisfied with basically all sides’ knee-jerk reactions but also too busy to explain lots of details. So here’s my “it’s complicated” post on a question essentially nobody cares about anymore.

First some cultural background. In Germany, abortion is not a matter of intense political discourse. The law is that a woman can get an abortion within 12 weeks of conception after a forced counseling session. Often the government will pay for it. Abortions later in pregnancy are available for health reasons. There is an increasing psychology-creep in those health reasons, but mostly they are still either fetal deformities or actual health reasons. Technically abortion is illegal but not punished, but most Germans aren’t even aware of that distinction. You might notice that this is pretty much the wishy-washy situation large parts of the United States would end up with if the question was up for democratic decision. That’s exactly what happened in Germany decades ago, and while the result is totally incoherent it also enjoys such broad popular support that the question has basically dropped off the public agenda. Of course the Catholic Church is opposed, but then it isn’t particularly fond of divorce either, and political discourse pays about equal attention to both positions.

With most people not caring and most of those who do care fighting more realistic battles, the cranks tend to stand out among those who remain. When I was at the March for Life a year and a half ago (I was too sick this year), I was very uncomfortably aware of the large-sign-guy whose reason for opposing abortion actually is that the race will be harmed if women aren’t forced to breed. Of course there were more than two thousand protesters more sane than that guy as well as a few dozen counter-protesters less sane than him, but still the cranks stand out in a small movement. And even on the comparatively normal side the evaporative cooling is very noticeable. For example, I returned home from that march in a bus full of Evangelicals. There was a young girl running up and down the bus, and when she saw my rosary she squealed “coooool chaiiiin” and wanted to touch it. I let her, but I sat there scared stiff, because if she had taken it to her parents they probably would have thought I was seducing her into idolatry. Or maybe I’m just paranoid, but then I had spent the last hour talking to seat neighbors who weren’t quite sure if Catholics can be saved. This is a very unusual experience for a German.

So basically the pro-life movement is very small and even many good pro-life people wouldn’t want to stand too close to it. I think the bishops should try to take it over rather than standing there without any plausible course of action, but then that’s a different question.

The point here is that there basically is no abortion debate in Germany. And if you can’t even get a hearing on the simple cases, there is very little pressure to sort out the details of hard cases. So while all German bishops are honestly pro-life, until a few months ago most of them probably hadn’t spent as much as ten minutes thinking about what consequences that may have for the Morning After Pill in cases of rape. The average German bishop probably knows a lot less about this than the average Catholic blogger in America.

On to the medical background: the Morning After Pill’s primary mechanism is preventing ovulation, i.e. the egg actually becoming available for fertilization. In cases of rape, that mechanism is totally OK with Catholic teaching. The bad thing about contraception is the separation of the unitive and procreative aspects of sex, but that’s clearly not what a rape victim is trying to do. In more modern words, the Catholic Church knows that rape isn’t sex and has known so since long before that particular question arose. For example, celibate nuns in dangerous countries are allowed to use the normal Pill in case they might be raped. Some trads are grumpy about this, but they are just ignorant of actual magisterial teaching.

The problem is that the Morning After Pill probably also has a secondary effect of preventing implantation. That leads to the baby’s death and is not something we can set out to do.

The gray area is the question of double effect. Basically it is OK to do something good even if one is aware that some unintended bad side effects can occur. We actually apply that logic to pregnancy already. For example, a pregnant woman can drink coffee for enjoyment, even if there is a very small risk of that contributing to a miscarriage. Depending on the probabilities of the two effects of the Morning After Pill, the same logic might be applied to it. If a woman takes it with the good intention of preventing ovulation, the risk of killing a fertilized egg can be acceptable. The same thoughts apply to the doctor prescribing it in that situation. Of course the situation is entirely different if they know only the bad effect is relevant, which would be the case if they know ovulation has already occurred. I hear this can be tested, but I got conflicting information on how invasive the test is. If it actually requires a second vaginal invasion, I think that wouldn’t be proportional to the risk involved and it would be OK to go without the test. If it’s basically doing a two-second test on blood already drawn for other purposes, I would think that a morally mandatory precaution. If it’s somewhere between those extremes, the question gets very hairy. So, bottom line, the Catholic rules on the Morning After Pill in cases of rape are more complicated than yes or no.

The final piece of necessary background is the turn of events that gave rise to the bishops’ statement:

A few months earlier some of the pro-lifers I would prefer not to stand so close to me visited Catholic hospitals in Cologne pretending they had been raped, demanded the Morning After Pill, and got it. Then they started a trad-media campaign about Catholic hospitals dispensing abortifacients. They have some flimsy excuses for this being OK Catholicism-wise, but basically this is the one serious sin involved in the whole affair. For a while it seemed to work: the diocesan authorities instructed the hospitals not to dispense abortifacients, including the Morning After Pill. At that point nobody seems to have thought about non-abortive effects. In consequence of that decision Catholic hospitals got dropped from the government-sponsored rape evidence preservation network, which requires dispensing the Morning After Pill, and then no longer had rape evidence preservation kits.

Then in January a raped woman visited a government-run emergency room on a complex of buildings that also hosts a Catholic hospital. The doctor on duty prescribed the Morning After Pill and then called the hospital to arrange sending the patient over for evidence preservation – which it of course no longer could provide. Then she called another Catholic hospital and got the same response. After getting the patient admitted to a non-Catholic hospital she contacted the media. And the story immediately got shortened to “Two Catholic hospitals refuse to admit rape victim because of Morning After Pill concerns”. And there was a great shit-storm.

This is probably the first time the matter really came to any German bishop’s attention. And while Cardinal Meisner – one of the most conservative German bishops and the one in whose diocese all this had happened – consulted the experts real quick, the story seems to have gotten mangled. While actually the pill has two effects, he seems to have understood there were two pills, one working against ovulation and one against implantation. And then he announced the (obvious in Catholic moral theology) conclusion that the ovulation-preventing pill was OK while the implantation-preventing one wasn’t. He also noted that a Catholic hospital could also give out honest information on what was available elsewhere, provided the Catholic position was also explained without exercising pressure.

And then, a week later, the German bishops’ conference discussed the same matter. And then they announced this:

The plenary assembly affirmed that women who have been victims of rape of course will receive human, medical, psychological and pastoral help in Catholic hospitals. This can include the administration of the “morning-after pill” as long as it has a preventive and not an abortive effect. Medical and pharmaceutical methods which result in the death of an embryo still may not be used. The German bishops trust that in facilities run by the Catholic Church decisions on the practical treatment will be taken on the basis of these moral and theological guidelines. In every case, the decision of the woman concerned must be respected. In addition to first statements on the “morning-after pill”, the plenary assembly recognizes the need to study in more detail other implications of this issue – also in contact with those responsible in Rome – and to make the necessary differentiations. The bishops will have talks on this issue with Catholic hospitals, Catholic gynaecologists and consultants.

Now that trust in Catholic hospitals figuring it out seems a bit optimistic, given that even the bishops haven’t figured it out yet and doctors in Catholic hospitals aren’t moral theologians or even necessarily Catholic. So basically it boils down to “We’re starting to work on the guidelines now and until we’re done it’s all up to the individual doctor”. It’s reasonable to suspect that some doctors might decide according to very different standards. To be honest, I think that is a bit of a cop-out. But then it is quite obvious the bishops didn’t have a clue about the practical side of the question and couldn’t get one in time. So it’s not very satisfying, but I don’t see much better options either. Of course nobody paid any attention to the part about making the necessary differentiations later. And the part about the doctors deciding until then essentially got shortened into “German bishops allow the Morning After Pill”. And that is the story that went around the world.

The fairly obvious part of the moral is that consequentialism backfires even on consequentialist grounds. The false rape victims triggering this entire chain of events thought fighting for a policy that might save lives was well worth breaking the 8th commandment, so they did evil that good may come from it. Not only did they fail, they also caused great harm to the Church while doing so.

Other than that, I didn’t find any side’s reaction convincing. The liberal reaction was basically “Great, so now that’s settled”. No, it isn’t. Absolutely nothing changed about Catholic moral theology, trying for the abortive effect is still a no-no, and all the hard questions are still hard. But then what I heard from most conservative Catholics wasn’t much better. A lot of them announced that we basically must go back to a total ban, because the good pill Meisner talked about doesn’t exist and we can never be 100% sure the Morning After Pill doesn’t prevent implantation. Which is equally wrong, because it totally ignores the doctrine of double effect. And almost everyone is glad that the shit-storm is over. That kind of misses the question of what will happen when the differentiations are made and the media discovers the “everything permitted now” narrative isn’t quite right. So basically everyone reacted to caricature versions of the story, and my reply to basically everyone is “it’s more complicated than that”.

Kind of boring for a moral, huh?

Posted in Politics | 18 Comments