The problem with probabilities without models

Scott Alexander writes in defense of probabilities without models. I denied the possibility of this before, also in the context of Scott’s steel-manning of Yudkowskyanism, but back then the focus of the discussion was slightly different. So this is a response to the new post, and if I wasn’t trying to revive this here blog it would probably be a comment. It’s not really intelligible without first reading the post I’m replying to.

I

For starters, what’s a probability model? Actually, while we’re at it, what’s a probability? Even more actually, that’s one step too far, because nobody has a general answer and perhaps there is none, though philosophers have lots of theories that seem wrong in different interesting ways. Modern mathematicians aren’t really bothered by this, because they don’t care what things are as long as they know how they behave. (In bigger words, this is called the axiomatic method.)

OK, so how do probabilities behave? Well, they are numbers and they belong to events. What’s an event? Well, the mathematician doesn’t know, but here’s how they behave: 1. There is one event that is always true. 2. If something is an event, then “not that” is also an event. 3. If we have a bunch of events, then “all of them” is also an event. In bigger words, the events, whatever they are, form a σ-algebra. Then the events get probabilities, and they behave like this: 1. All probabilities are between 0 and 1 (where 1 is sometimes written as 100%). 2. If we have a bunch of incompatible events, then the probability for “any of those” is just all the individual probabilities added together. In bigger words, the probabilities are the values of a probability measure. As far as the abstract mathematician is concerned, that’s it.
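
For readers who want the same thing in symbols, here it is in the usual compact form (my notation, nothing beyond the standard Kolmogorov setup; given “not that”, closure under “all of them” and closure under “any of them” come to the same thing):

\[ \Omega \in \mathcal{F}; \qquad A \in \mathcal{F} \Rightarrow \Omega \setminus A \in \mathcal{F}; \qquad A_1, A_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_i A_i \in \mathcal{F} \]

\[ P(\Omega) = 1; \qquad 0 \le P(A) \le 1; \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \text{ for pairwise incompatible } A_i \]

Here \(\Omega\) is the always-true event, \(\mathcal{F}\) is the collection of events (the σ-algebra), and \(P\) is the probability measure.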

In the real world, we don’t really need to know what a probability is either, as long as we can make sure that the things we are talking about obey these rules. Stereotypical example: We want to do probabilities with dice throws. We have a bunch of events, such as “1”, “6”, “an even number”, “either 2 or 5” and so on. (64 events total if you are counting.) Then the events get probabilities. Often those will be \(\frac{1}{6}\) for each of “1”, “2”, “3”, “4”, “5” and “6”. But not always; perhaps we want to talk about loaded dice. But however we do it, the probability for “either 1 or 2” better be the summed probabilities of “1” and “2”.
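
To make the dice model completely concrete, here is a minimal sketch (plain Python, with made-up names, assuming a fair die) of the 64 events and their probabilities:

```python
from itertools import combinations
from fractions import Fraction

# The six elementary outcomes of a die throw.
outcomes = {1, 2, 3, 4, 5, 6}

# Events are subsets of the outcomes: "1", "an even number", "either 2 or 5", ...
# A 6-element set has 2**6 = 64 subsets, hence 64 events.
events = [frozenset(c) for r in range(7) for c in combinations(sorted(outcomes), r)]
assert len(events) == 64

# The model: each outcome gets a weight; 1/6 each for a fair die,
# something else for a loaded one.
weights = {o: Fraction(1, 6) for o in outcomes}

def prob(event):
    """Probability of an event: the summed weights of the outcomes in it."""
    return sum(weights[o] for o in event)

# The additivity rule: P("either 1 or 2") must equal P("1") + P("2").
assert prob({1, 2}) == prob({1}) + prob({2})
print(prob({2, 4, 6}))  # "an even number" -> 1/2
```

A loaded die would just get different weights; the additivity check at the end is the part that has to hold no matter what.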

In summary, when we use probabilities in the real world, we have some idea what the events are, we have probabilities of these events and these probabilities aren’t blatantly contradictory. This is – tada – called a probability model.

II

In practice we will also often need conditional probabilities, i.e. probabilities for something happening if something else happens first. For example, someone could have two different probabilities for being in a car accident, depending on whether they drive drunk or not. That doesn’t change the story much though, because by the Rev. T. Bayes’ theorem conditional probabilities are reducible to non-conditional ones. In the example, if we have probabilities for drunken and non-drunken accidents and a probability for drunk driving, then we can calculate the probability of an accident given drunk driving and given sober driving.
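
For concreteness, a small worked version with made-up numbers: the conditional probability is just a ratio of unconditional ones,

\[ P(\text{accident} \mid \text{drunk}) = \frac{P(\text{accident} \wedge \text{drunk})}{P(\text{drunk})}, \]

so if, say, \(P(\text{accident} \wedge \text{drunk}) = 0.02\) and \(P(\text{drunk}) = 0.05\), then \(P(\text{accident} \mid \text{drunk}) = 0.4\), and the analogous ratio gives the sober accident probability.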

III

Coming back to the dice example, what if the dice, while still in motion, gets swallowed by a dog? What’s the probability of that? Well, the model didn’t account for that, so there is no answer. I silently assumed this to be a non-event and only events get probabilities.

OK, so maybe I should have used a better model including “the dog will eat it”, for a total of 65 events. That probability will be small, but not quite 0, because the whole point of including it is that canine dice engobblement is actually possible. But note that the probabilities for all possible numbers used to add up to 1. Now they will have to add up to 1 minus that small number. So in other words, if my model changes, then so do the probabilities.
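
To see the numbers move, suppose (purely for illustration) the dog event gets probability \(0.01\). Then a fair die’s numbers can no longer get \(\frac{1}{6}\) each; they each drop to

\[ \frac{1 - 0.01}{6} = 0.165, \]

because everything together still has to add up to 1.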

Fine, but what if puritan space aliens destroy the dice with their anti-vice laser? If I want to account for that possibility, I’ll have to change my model again, and, in doing so, change my probabilities. And so on, every time I think of a new possibility the model changes and the probabilities change with it.

So what are the real probabilities? You might say those of the correct model. But then what’s the correct model? Well, if you need to bother with probabilities you’re not omniscient and if you’re not omniscient you can’t ever figure it out. Strictly speaking all the probabilities you’ll ever use are wrong.

IV

Can we get around this by just adding an event “all the possibilities I didn’t think of”? Not really. Remember if you add a new event you must assign new probabilities. And you can’t just do that by taking from all other events equally. This is easiest to see in the case of conditional probabilities.

Contrived but sufficiently insightful example: For me it would be slightly dangerous to drive without my glasses. For most other people it would be more or less dangerous to drive with my glasses. Now suppose the puritanical space aliens, figuring that driving more dangerously than necessary is also a vice, try to figure out the best way for humans to drive. Naturally they also study the effect of glasses. Unfortunately they didn’t quite understand that glasses-wearers are compensating for bad eyes, so they calculate accident probabilities with and without glasses and universalize those to the entire population. Maybe their sample has a lot of glasses-needers who occasionally forget their glasses. Then they will conclude everybody needs to drive with glasses. Or maybe the people who need glasses always wear them. Then they will probably decide glasses-wearing is actually dangerous, because glasses aren’t perfect and glasses-needers still have more accidents. Either way, their probabilities for some specific person wearing glasses and being in an accident can be almost arbitrarily wrong. After they start zapping the wrong people with their anti-vice laser, someone may tell them why some people wear glasses and others don’t, and they will have to revise their probabilities.

But suppose they had wanted to account for possibly missing a possibility beforehand. They could have assigned, say, a 30% probability to “we are missing something very important”. Fine, but for that to help them at all, they also need probabilities for you crashing while you wear glasses and they miss something very important. And they can’t come up with that probability without knowing exactly what they are missing. In other words, you can’t just use catchall misspecification events in probability models.
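
The same point in symbols, writing \(M\) for “we are missing something very important” (just a sketch of the bookkeeping): by the law of total probability,

\[ P(\text{crash} \mid \text{glasses}) = P(\text{crash} \mid \text{glasses} \wedge M)\,P(M \mid \text{glasses}) + P(\text{crash} \mid \text{glasses} \wedge \neg M)\,P(\neg M \mid \text{glasses}). \]

The catchall 30% supplies at most the weight \(P(M \mid \text{glasses})\); the factor \(P(\text{crash} \mid \text{glasses} \wedge M)\) is exactly the number they cannot produce without knowing what they are missing.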

V

Then why are probabilities so useful? Because often we can make assumptions that are good enough for a given purpose. In the dice example we are making calculations in a game and are fine with assuming the game will proceed in an orderly fashion. So we get probabilities and we don’t care about them being meaningless if that assumption turns out to be false.

Similarly, if we design an airplane, we might start with probabilities for its various parts failing and then calculate a probability for the whole thing falling down. This doesn’t tell us anything about planes crashing because of drunk pilots, but that’s not the question the model was made for. Actually, aviation has a few decades of experience with improving the models whenever a plane crashes and then adding regulations to make that very improbable. So nowadays commercial planes basically only crash for new reasons. There remain situations where the crash probabilities are meaningless, but they are still extremely useful in their proper context.
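
As a toy version of that kind of calculation (illustrative numbers only, and assuming the parts fail independently, which is itself a modeling assumption): if each of \(n = 100\) critical parts fails on a given flight with probability \(p = 10^{-6}\), the probability that at least one of them fails is

\[ 1 - (1 - p)^n = 1 - (1 - 10^{-6})^{100} \approx 10^{-4}. \]

Nothing in that number says anything about drunk pilots, because they are not in the model.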

Similarly, an investment banker assumes that everything that could possibly happen has already happened in the last twenty years, and, well, in that case it turned out it wasn’t such a great idea.

VI

Now that might sound fine in theory, but hasn’t Scott given practical examples of probabilities without models? Darn right he hasn’t. To see that, let me nitpick the alleged examples to explain why they don’t count.

In the president and aliens example, Scott himself considers the possibility that the involved probabilities are

only the most coarse-grained ones, along the lines of “some reasonable chance aliens will attack, no reasonable chance they will want bananas.” Where “reasonable chance” can mean anything from 1% to 99%, and “no reasonable chance” means something less than that.

But actual probabilities aren’t coarse-grained. In probabilistic reasoning you get to be uncertain, but you don’t get to be uncertain about how uncertain you are. All those theorems about Bayesian reasoning being the very bestest conceivable method of reasoning evar presume probabilities to be real numbers, i.e. not coarse-grained. In other words, these coarse-grained probabilities are called so only by analogy. They don’t make for examples of real probabilities any more than my printer’s device driver is an example of vehicular locomotion.

At this point my mental model of Scott protests thusly: “Hey, I didn’t admit that! I mentioned it as something a doubter might say, but actually the president should be using real probabilities.” (No actual Scott consulted, so my mental model may or may not be mental in more ways than intended.) Fine, but then the example doesn’t work anymore. The president can make his decisions without knowing anything about probability theory. He is making judgments about some things being more likely than others but not attaching numbers to them. In fact he could be more innumerate than a lawyer and it wouldn’t affect his decisions one bit. If we want to make it about real probabilities, the example simply doesn’t show anything about their necessity.

Concerning the research funding agency, first of all I’ll question the hypothetical. It’s hard to imagine a proposal for a research project that has a \(\frac{1}{1000}\) chance of success given the best information. The starry-eyed idealists surely don’t think so. So if there is actual disagreement there will be reasons for that disagreement, and if the decision is important the correct answer is examining those reasons, i.e. improving the models. This is actually a large part of why real funding agencies rely on peer review. Using probabilities here is somewhat like the global warming example in my above-mentioned prior post on a similar subject, where the entire point is that the probability relates to a model I wouldn’t want to use for real decisions.

But perhaps all competent reviewers got killed in a fire at their last conference, so let’s skip that objection for least convenient possible world reasons. Also, Scott notes similar decisions can and often should be made informally, in which case it’s not real probabilities, exactly like in the president and aliens example.

Let’s advance to the interesting point though. Scott says:

But refusing to frame choices in terms of probabilities also takes away a lot of your options. If you use probabilities, you can check your accuracy – the foundation director might notice that of a thousand projects she had estimated as having 1/1000 probabilities, actually about 20 succeeded, meaning that she’s overconfident. You can do other things. You can compare people’s success rates. You can do arithmetic on them (“if both these projects have 1/1000 probability, what is the chance they both succeed simultaneously?”), you can open prediction markets about them.

I’ll start with the accuracy-checking use. If you’re guessing only once, the probability doesn’t help at all. You’ll either be right or wrong, but you’re not getting a correct probability to compare against. That’s why the foundation director makes the check over a thousand similar projects. If she has a thousand similar projects, that’s a strategy I can endorse. But at that point she has informally established a probability model. She has established events, namely combinations of succeeding and failing projects. This rules out any unforeseen possibilities, like the agency’s funding being slashed next year, nuclear war, the communist world revolution, and raptures both Christian and nerdy. That’s fine, because those aren’t the kinds of circumstances these probabilities are meant to deal with. Furthermore, she has established probabilities of the events by (perhaps implicitly) assuming that the individual projects are equivalent for success-probability purposes and statistically independent, so they won’t all succeed or fail together. As long as she wants to reason within these assumptions, I’m fine with her doing so probabilistically. But notice that the probabilities become totally useless as soon as these assumptions fail. For example, there may be a new idea about what the natural laws might be and 200 proposals to exploit it. Those proposals will fail or succeed together and thus not be any help in predicting the rest. Or the next batch of proposals may be about ideas she knows more or less about, so they can’t be lumped with the old ones for accuracy judgment.
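
As an aside, the accuracy check itself is a small calculation inside that implicit model. A sketch of it (plain Python with made-up names, numbers taken from Scott’s example, comparability and independence assumed):

```python
from math import comb

# Implicit model: 1000 comparable projects, each succeeding independently
# with the director's stated probability of 1/1000.
n, p = 1000, 1 / 1000
observed_successes = 20

# Chance of seeing at least 20 successes if 1/1000 were right:
# the upper tail of a Binomial(1000, 0.001) distribution.
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(observed_successes, n + 1))
print(tail)  # on the order of 1e-19: 20 successes would be astronomically unlikely
```

The conclusion that she was overconfident comes entirely from the comparability and independence assumptions; the arithmetic is the easy part.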

In summary, the probabilities are very useful as long as the implicit assumptions hold and totally worthless when they don’t. Also note that the probability calculations are the boring part of the judgment process. All the interesting stuff happens in deciding which projects are comparable, which is the non-probabilistic part of the thought process.

It’s very similar with comparing people’s success rates. Here the implicit model is that people have a given success rate and that those rates are independent. That’s often fine, but as always it breaks down if the (implicit) modeling assumptions cease to apply. For example, people may be experts on some things and not others, and then their success rates will depend on what the problems of the day are. They may also listen to each other and come to a consensus or, worse, two consensuses neatly aligned with political camps. Those are situations where the probabilities won’t help much.

The prediction market is slightly different: Here we are assuming someone else has a better model than we do and is willing to bet money on it. This is indeed a case where we use probabilities in a model we don’t know. Still, someone else does and our probabilities won’t be any more correct than that model.

Bottom line: All examples of probabilities being really useful are also examples of (at least implicitly) established models. And then the usefulness extends exactly as far as the modeling assumptions.

VII

Scott actually came up with this in a context: estimating the probability of Yudkowskyan eschatology.

Personally, I find this a lot less interesting than the part about probabilities without models. But still I’ll comment on it very briefly: I think in that context probability talk is a distraction from the real question, which is whether we should worry about those future scenarios or not.

That question can get clouded in discussions about whole categories of models. For example, I would be inclined to reason about Yudkowsky the way Scott reasons about Jesus: There are 7.3 billion people in the world and, world-saving-wise, most of them don’t seem to be at a disadvantage compared to Eliezer. So models assigning chances much above \(\small 1.3\times 10^{-10} \) are implausible. Scott would probably reply that Eliezer is more likely to save the world than other people, and that could lead to a very long argument if either of us had the time and nerve to follow it through.

But notice that that argument wouldn’t be about probabilities at all. The arguments would be about what kind of scenarios should be considered and compared with what, and the potential conclusions would be about what we should do with our money. Also, probabilities wouldn’t help with accuracy checking or comparing us, because we’re talking about a one-time event. There is nothing the number would add except an illusion of doing math.


16 Responses to The problem with probabilities without models

  1. Probabilities are a mathematical formalization of degrees of belief. Obviously they do not exactly correspond to what they formalize, any more than mathematical formalizations of physical situations exactly correspond to the real physical situations, which are always much more complicated. But they seem to help to prevent many kinds of errors and inconsistencies which we are otherwise prone to fall into.

    Overall are you simply saying that they only help in this way when we have a model of the situation? It seems to me that to the degree that this is true, I can make a model of situations where I don’t have a model, and say, “In situations where I felt this uncertain, and I didn’t have a model, I guessed correctly about what would happen about 65% of the time, based on counting those situations,” and then conclude that my current guess has a probability of about 65%. What do you think of this way of thinking?

    I agree with you that Eliezer isn’t much more likely than anyone else to save the world, but I’m also not sure that Scott believes this. Mainly he thinks that people should be concerned about risk from AI, which does not necessarily have much to do with Eliezer personally.

    • Gilbert says:

      I have no problem with models being simpler than the real world. But both in physics and in probability that limits the model’s scope. For a physics example, there’s no problem with omitting air resistance in the description of a coin falling a few meters. But the model fails to describe a parachute falling out of an airplane. Similarly a probability model will work to the extent its simplifying assumptions don’t matter and fail when they do.

      I think radical Bayesians are missing those restrictions in two ways:

      • Some uncertainties are best handled by probabilities and others not. In particular, when real people learn something cognitive they make new distinctions, consider new hypotheses, etc., i.e. understand the question. After that they may wonder which of the newly understood possibilities is true, and if that can somehow be bound to something repeatable probabilities may be a useful tool. But the most interesting part happened before probabilities.
      • We have many probability models used in different contexts. Any given probability relates to a specific probability model. That model is defined by assumptions good enough for some purposes and not others. All probabilistic reasoning happens in one probability model. If we use probabilities we should be sure we are still in that range of purposes.

      As for your suggested model, it depends. You want to group together guesses you are about equally sure of. That can work if they are sufficiently analogous and sufficiently independent. This is basically how insurance works. But notice that insurances sometimes still go bust if their modeling assumptions turn out wrong. You may at some point figure out that a set of things you lumped together for probability purposes really should be split into two sets. At that point probabilities won’t help you. And if you are unsure now, that’s a kind of uncertainty not well modeled by probabilities. Also, if you calibrate your judgment to the real world, that will only work for things repeatable fairly often, so you won’t be able to handle small probabilities. In summary, this can work, but not always.

      I don’t think Scott just believes people should be concerned with AI risk, he also thinks they should donate money to AI risk institutions like MIRI. And that only makes sense if those institutions have a reasonable chance of saving the world. You’re right that the Jesus example doesn’t go well with a 5% chance for religion, but Scott mentioned it explicitly in that overconfidence post, and I think I remember it being Yudkowskyan canon. Anyway, my main point is not what the probabilities are, but rather that attaching numbers is a distraction from the actual questions. Whether Jesus is Christ or MIRI will bring about omnipotent FAI are both questions where the main disagreements are of the kind probabilities don’t help with.

      • I agree that the fact that probability is simplified relative to real life implies that it won’t apply to every possible situation. For example, sometimes people argue that Pascal’s Wager fails because there is some probability that God will give an infinite reward to atheists for their unbelief, and that infinity multiplied by a finite value is infinity regardless of the finite value. Therefore, according to this reasoning, if you are allowed to consider infinite values, you have no way to prefer one alternative to another.

        Of course this particular response fails simply because if your mathematical model suggests that you shouldn’t prefer a higher chance of an infinite reward over a lower chance, that is a problem with your model, not with reality.

        Another example is our uncertainty about mathematical claims; probability theory will derive a contradiction if we admit that we are not completely certain about them, but in reality we are not completely certain anyway.

        I agree with the second restriction you mention but I am not entirely sure I understand what you meant by the first one. Of course a lot of thinking and analysis goes on which does not involve any explicit probabilities. And if you simply mean to say that you shouldn’t distinguish two possibilities and then arbitrarily say something like, “The first possibility has a probability of 87.6% and the second a probability of 12.4%”, then I agree. But I also think that it can be useful to think in terms of probabilities to the extent of thinking that each of the two possibilities has a roughly equal chance of being true unless I have some reason to prefer one over the other.

        It seems to me that at least a good deal of what you are saying is that probability theory won’t necessarily prevent you from falling into error, as in “But notice that insurances sometimes still go bust if their modeling assumptions turn out wrong.” Of course this is true, but not using probability theory also won’t prevent you from falling into error. People are sometimes wrong; there is no way to infallibly avoid this possibility. And it seems to me to help if people think at least vaguely in terms of probabilities, as in the example I gave of guessing that two possibilities are about equally possible, even though this won’t necessarily prevent people from being wrong either.

        I don’t think there’s a reasonable chance of MIRI in particular saving the world, and I wouldn’t personally donate to them, but I don’t think it’s crazy for someone to donate to them because he believes that there is a reasonable chance of AI destroying the world. It wouldn’t be necessary for him to hold that MIRI in particular is going to save the world, but just that they might contribute something to analyzing the problems, just as you might donate to a cancer research institute without necessarily thinking that there is a reasonable chance that this particular institute is going to cure cancer.

        I think Scott mentioned the complexity argument against religion because it is one that people make, not because he actually thinks religion is that improbable. You are right about Eliezer, who repeatedly has said that he thinks religion is approximately infinitely improbable. I think this is basically insane, but anyway it isn’t Scott’s position.

        • Gilbert says:

          One way to look at what I meant in the first bullet point is in terms of known and unknown unknowns. Uncertainties your model accounts for are known unknowns. In the insurance example, the insurance used a model, assigned probabilities to damages and then calculated a payout distribution. Then it set reserves so that the bankruptcy chance is less than, say, 1%/year. That’s a good thing. In reality though, insurances rarely go bankrupt because of a 1% event; they go bankrupt because their modeling assumptions were wrong, e.g. because nobody knew asbestos was dangerous back when they designed their loss model. So the probability represents part of the risk and then there’s another part hiding in unknown unknowns like asbestos. It’s not just that you still can go wrong with probabilities, it’s also that probabilities don’t represent all of your uncertainty.
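
          In symbols, as a sketch: the loss model gives a distribution for the yearly losses \(L\), and the reserves \(R\) are chosen so that \(P(L > R) < 1\%\). The asbestos-type risk is the possibility that the assumed distribution of \(L\) is itself wrong, and “our model of \(L\) is wrong” is not an event inside that model, so it never gets a probability at all.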

          And then there are questions where the unknown unknowns are so much more important than the known unknowns that bothering with probabilities isn’t worth it. We can think about such questions, but probabilities are not the way to do it.

          In summary, probabilities model a certain type of uncertainty, not uncertainty per se.

          • I still feel like I don’t understand something about this, but I’m not sure what it is.

            It makes sense to me that the uncertainty in unknown unknowns wouldn’t be modeled well by probability. The example of mathematical uncertainty illustrates this, since a proof may seem perfectly valid, but may not be valid anyway.

            At the same time, saying that “bothering with probabilities isn’t worth it” in the situations you are talking about may be technically true but misleading. That is, it may not be worthwhile discussing particular numbers, but even uncertainty which isn’t modelled well by probabilities is going to have to act in many ways like a probability. For example either you think something is more likely than not, or you don’t. Likewise, evidence that favors a position should make it seem more likely, and evidence against it less likely, even when you don’t have anything like a number. It seems to me that talking this way can lead people to ignore these basic things that will still apply even when you don’t actually have a probability.

          • Gilbert says:

            This seems like a special case of reasoning by analogy being a double-edged sword in general.

            On one hand, reasoning by analogy is immensely useful. For example, reasoning about empirical reality always involves (perhaps hidden) analogies, analogies allow us to transform questions into less emotion-laden forms, etc.

            On the other hand analogies are always wrong in some way, so we must be careful about carrying them too far.

            Probabilities are a well-behaved and well-understood picture of some uncertainties so there may be something to be gained by using them as metaphors for other kinds of uncertainty. As long as the metaphorical character of that reasoning is clear, that can be a good thing. The analogy may help us remember that even non-probabilistic thinking still has rules.

            But, on the downside, I think Internet Bayesians, in particular the Less Wrong folk, generally aren’t aware of any distinction between actual and metaphorical probabilities, so they end up unknowingly treating analogies as deductions.

            Made explicit, the fallacy runs like this: “This is (actually only somewhat like) probability” \(\land\) “probabilities are updated according to Bayes’ rule / are subject to Aumann’s agreement theorem / etc.” \(\Rightarrow\) “This is updated according to Bayes’ rule / subject to Aumann’s agreement theorem / etc.”

            Two (of “probably” many) ways the analogy doesn’t carry:

            1. Where real probabilities are applicable, there are theorems about Bayesian updating being the single correct way of reasoning. In other domains there are still rules, but they are much less well understood. There is a risk of transferring normative rules beyond their grounding.

              One example of this is Less Wrongers being permanently befuddled by Pascal’s muggings(*). The seemingly mathematical answer is clear and obviously wrong. Less Wrongers have various lame ad hoc ideas to deal with this, but the real answer is that the math simply doesn’t apply.

            2. I think talking of new evidence making a belief more or less likely in non-probabilistic contexts masks the actually interesting part.

              Suppose I talk with someone about some random belief B. After that I still believe B, but I’m less sure. In (idealized) probabilistic reasoning that means I have made a one-time calculation, which is now over, and resulted in my confidence in B decreasing. In reality a network of related beliefs has become unstable, I’m looking for ways to make it coherent again and notice B is in the general zone where something will have to change. It’s not a small change in the past, it’s the anticipation of a possible big change in an ongoing update. But maybe the required change is elsewhere, and a week later the sentiment I was expressing as B seeming less likely might rightly have been resolved without a change to B.

              By the way, this is why sudden conversions seem fishy. Someone who hears an argument and changes his mind on the spot didn’t have the time human rationality actually takes. Much more impressive if he starts on a lengthy examination and somewhat anticlimactically changes his mind a few weeks or years later.

            * Using existing terminology for practicality, even though it is defamatory of the real Blaise Pascal.

          • I agree that would be a problem if someone tried in every case to change how he felt about something, and succeeded in making such a change. And that’s not an unreasonable way to understand what I said, namely “consequently I am a bit less sure of B than I was at first,” but it isn’t what I meant.

            Basically I think that direct doxastic voluntarism is true. That’s probably worth a whole separate discussion in itself, but in any case the basic reason I think it is true is because it is obvious to me that it is within my power to assert, defend, and to act upon any belief I choose. And it seems to me clear that someone behaving in that way believes something, so it is clear to me that I am choosing my beliefs.

            On the other hand, if this is right, then how I feel about it is something different, even if somewhat related, because I can’t directly choose to feel more or less sure of something. Rather, my actions may have some influence on this, but it is more like trying to influence an emotion like love or anger. I can’t simply choose to be in love with someone or to be angry with someone, but my choices can indirectly affect this. Likewise my choices can have some indirect effect on how I feel about various beliefs. But I don’t consider this feeling to be the belief. Rather the belief is the whole pattern of thinking, speaking, and acting, or anyway the act or habit of behaving that way. Or if we limit it to something mental, it would be the voluntary mental behavior that results in this whole pattern.

            Now there aren’t going to be that many obviously different levels of feeling sure or unsure about a statement, as in your link, although I think the number can be somewhat increased by practice, just as someone can develop perfect pitch with practice. In any case, with a limited number, it would be a bad idea to try to change this with every argument or with any random evidence you discover.

            But patterns of thinking, speaking, and acting can have unlimited degrees of variation, and it is here, precisely where there is something voluntary, that I am proposing that one should be “a bit less certain.”

            That could mean any number of things in practice, as for example simply acknowledging the existence of a new argument against my position, or being just a little bit more likely to spend time considering the opposite position, or giving a bit more respect on an intellectual level to people who hold the opposite, and so on.

            The variation in such things is evidently basically unlimited, just like the unlimited number of arguments and unlimited amount of evidence for and against things, so it will not necessarily have the consequence in your link. And the opposite behavior will have the bad consequence of not changing your mind about something wrong, as long as the evidence against your position comes in bit by bit and not all at once.

            You might say that it would still be impossible to do this without sometimes going overboard and giving arguments more weight than they deserve. But this would be no different from saying that it is impossible for human beings to be perfectly chaste or temperate or charitable and so on without ever varying in the slightest from the mean of virtue, and concluding that there are times when it’s not worth bothering to be chaste or temperate or charitable.

          • Gilbert says:

            I think we don’t have much substantial disagreement left, just disagreements about semantics.

            If keeping the new argument in mind, respecting its proponents more etc. all count as updating, then yes, one should update on very little evidence. But at that point the similarity to what Bayesians mean by updating seems very remote.

            On beliefs, I see the distinction between what you call the belief and what you call the feeling, and I agree what you call the belief is voluntary. But in common usage I think the word “belief” refers to what you call the feeling.

          • Ok. Yes, that’s what I meant. I agree that it isn’t very similar to a numerical update, but in any case I did say in the beginning that I was talking about an analogy with probability theory rather than an actual mathematical calculation.

            I also agree that the way I’m defining “belief” doesn’t match up entirely with the way people generally use it, but I don’t think it’s all that diverse either. It seems to me that people use it in a more vague way which sometimes refers to one thing and sometimes to another, or to some combination.

            I also think limiting it to the voluntary aspects eliminates the vagueness while preserving most individual cases, i.e. I would speak of people believing things in most of the same cases that people ordinarily do.

            One particular problem I have with the vaguer usage (and also with limiting it to the non-voluntary part) is that a person’s voluntary and non-voluntary assessments of something can sometimes come apart, and then you get what Less Wrong people usually call “belief in belief.” I would rather just admit that people believe what they say they believe. If you don’t, then e.g. you might say that St. Therese doubted or even disbelieved in heaven for many years, even though obviously according to the voluntary aspects she chose to believe in it.

            In any case I agree that’s a semantic issue and there isn’t all that much need to resolve that completely.

        • Ok, thanks for the last comment. I started to write a reply, realized that it was going to go on for many pages, started over, and quit again when the same thing was going to happen. So I’ll just talk about what you called the “actually interesting part.”

          I agree that you are correctly describing what frequently happens when people think about things. But I think that this process can easily ignore objective features of reality that you can take into account by talking about evidence making a belief more or less likely, even without numerical probabilities.

          Let’s say B is my belief, A is a new argument I hear against it, and N is my network of related beliefs.

          After hearing A, I am less sure of B. But I realize that both A and B are related to N. After thinking about it for a while, suppose I resolve this by concluding that I should modify N into N’, which leaves B unchanged, but which perhaps reverses another belief C (part of N), or changes my confidence in various opinions contained in N. So I end up feeling that B is basically “just as likely” as it was at first.

          It seems to me that if I do this, I have almost certainly made a mistake.

          Basically what was happening there is that I saw that A could mean that B was false; but then I saw that it could also mean that B was true but C was false. But if I end up thinking B “just as likely” as at first, that means I concluded that A could only mean that C was false, and could not possibly mean that B was false.

          If it’s incorrect to claim that kind of absolute certainty, as I suppose, then even after the whole process I should end up saying something like: I think most likely A means that C is false, and B is actually still true. But it’s possible I’m mistaken about the meaning of A, and it still could mean that B is false. I wasn’t previously aware of this particular possible way for B to be wrong, and consequently I am a bit less sure of B than I was at first.

          In other words, I think some element of the “one-time calculation” that is implied by probability theory needs to be preserved in order to be reasonable, even though I fully admit that in practice people do not always do this.

  2. Also, Scott said in his previous post on overconfidence that he estimated the probability of God’s existence at 5%, and he characterized this in the comments as his “probability of religion being true.” Based on the way he talks about various issues I think most of that weight goes into the probability that Catholicism is true, so he doesn’t appear to be claiming much certainty about Jesus at all.

  3. Felipe says:

    I’ve been thinking a lot about probabilities without models lately, and since I also have an interest in this blog reviving (though I only found it a few months ago), I thought I’d comment. This is sort of a boring comment, though, because I think I agree with the points you make.

    As a mathematics student, I find myself hesitant to call something a “probability” if it is not clear what the underlying probability space is. I think that in many cases, the language of probability theory can be useful to model real-world situations (like dice rolling), where an event is to be repeated several times and an empirical probability can be computed. I even accept that if you make several somewhat unrelated predictions using a common algorithm, it’s fair to state your confidence in terms of a probability, because you can assess the accuracy of your algorithm over time. But in all of these cases, I think probability theory is to be thought of as a useful tool that can be used to combine your uncertainties and update them, and nothing more.

    If you think of probability theory as just another mathematical tool, like calculus or linear algebra, then it’s clear that there are certain situations in which it doesn’t apply. I happen to think that this is the case with discussions of whether a superintelligent AI will be created that will destroy humanity, or of the existence of God. What’s the probability space? I can’t satisfactorily answer that question to myself, so I don’t use probabilities to answer those questions. (Though if pressed, I might assign “probability-like” numbers to them .. )

    So now I can finally come to my point. To this, my internal version of Scott says that even if I don’t have a probability, I still have to make a decision, and the fact that I made a decision reveals something about the probability I assign to each possible outcome. For example, I might not assign a probability to the event that purple fuzzy aliens will land in my living room and steal the die out of the air, but the fact that I bet it won’t happen reveals that my “probability” is indistinguishable from 0. For a less silly example, the fact that the President puts the military on alert says something about the President’s probability that the aliens are hostile. My response is that in these situations, it is not necessarily rational to act as an “expected utility maximizer.” The “expected utility maximizer” strategy works well when it is available, and to the extent that the computed expected utility reflects something in the real world. But I believe this is sometimes impossible, and decisions must be made by other means. Don’t ask me how .. I don’t know.

  4. 27chaos says:

    Scott’s argument is not about “probabilities without models” so much as “probabilities with really shitty vague models and a lot of black boxes inside the human brain”.

    • Gilbert says:

      I don’t think it’s that simple. He doesn’t seem to get that this makes probabilities contextual and that lack of knowledge about useful models can make his probabilities completely useless.

  5. Jeremy says:

    I like the illustration of “unknown unknowns” preventing you from calculating conditional probabilities.

    As other people have commented, I think you have to be careful about disregarding probabilities in situations where you have to act. Even if the method you use to make your decision doesn’t explicitly invoke probabilities, it still implicitly reveals them in how you weight different outcomes.
