[I was planning on this main metaphor before that other filthy water metaphor shook the Catholic blogosphere. Absolutely no reference intended.]
When journalists asked Konrad Adenauer, the first post-war German chancellor, why his foreign office had so many employees who had been Nazis just a few years earlier, he answered:
Man schüttet kein dreckiges Wasser aus, wenn man kein reines hat!
(One doesn't pour out filthy water if one doesn't have pure [water]!)
I think he was right. Running a government with loads of allegedly reformed Nazis was terrible and had some very bad consequences in actual policy, but it's not like the people complaining about it had any realistic alternatives to offer. So sometimes we must make do with filthy water.
This is one good objection to my last post, where I ranted about Less Wrong/MIRI/CFAR folks trying to eat the menu by promoting some simplified mathematical models as the definition of rationality. Sure, someone might say, these models are filthy water, but we can't think without simplifying, so we don't have pure water, so we can't throw out the filthy water.
My main reply is that filthiness is contextual. For example, coffee is very filthy water for the purpose of washing clothes but better than pure water for drinking. On the other hand, some dangerous bacteria are killed off by drying, so water contaminated with them can be too filthy for drinking but still pure enough for washing clothes, if totally pure water isn't available.
You may believe I'm getting carried away by my metaphor, but actually the metaphor is getting carried away by me. The thing is, models too can be pure enough for some purposes while being too filthy for others. In other words, they have a domain of things they describe fairly well and get worse as we extrapolate beyond that domain. So a model can easily be the best we can do for a certain kind of question but still give worse than worthless answers for others. Those other questions might be better answerable by other models (perhaps even the informal model of our intuitions) or they may not be describable by any available model (i.e. we may actually know nothing about them).
As an example, let's look at probabilistic reasoning. I argued that it is useful if (a) the range of potentially relevant events is properly understood, (b) some way exists to assign them to categories, and (c) we have enough experience to have some idea of how often our guesses for a given category turn out correct. This is most paradigmatically the case if the events are arbitrarily repeatable (in which case probabilities are expected frequencies), but some other use cases are close enough. Essentially this makes probabilistic reasoning into a kind of meta-model: it works as long as the simplifying assumptions making (a)-(c) true are pure enough water. Which simplifying assumptions, if any, are pure enough depends on the context, which is why we shouldn't talk about probabilities without the simplifications being at least implicitly specified. So probabilities are great for handling one specific kind of uncertainty. On the other hand, they totally suck for the kind of uncertainty that is rooted in sui generis cases, or in the knowledge that our model is misspecified even though we don't have a better one yet. And we do have purer water for that kind of uncertainty: it's the "classical rationality" the canonical writings of Less Wrong consider outdated.
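To make the repeatability condition concrete, here is a minimal sketch (not from the original post; the biased-coin setup and function name are my own illustration). When an event is arbitrarily repeatable, the empirical frequency converges on a stable number, which is exactly what lets us treat probabilities as expected frequencies; for a one-off, sui generis event there is no such run of trials to count.

```python
import random

def empirical_frequency(p, n, rng):
    """Estimate the probability of a repeatable event by counting
    how often it occurs in n independent trials.

    Works only because the event is repeatable: conditions (a)-(c)
    hold, so probability-as-frequency is pure enough water here.
    """
    hits = sum(1 for _ in range(n) if rng.random() < p)
    return hits / n

rng = random.Random(0)
true_p = 0.3  # hypothetical bias of a repeatable chance event

# With more repetitions the estimate settles near the true value.
for n in (10, 1000, 100_000):
    print(n, empirical_frequency(true_p, n, rng))
```

For a sui generis question ("will this particular treaty hold?") there is no `n` to crank up, and the function above has nothing to count, which is the point of the distinction.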
For example, this recent post at Less Wrong is all about hanging epicycles on a probability model used outside the domain probability models are good for. These problems are entirely homemade: they arise only from the assumption that probabilistic reasoning always has better water purity than the old-fashioned methods used by hoi polloi.
I'm not saying probabilities are bad. I'm saying they are sometimes good and sometimes bad, and we have at least some vague rules of thumb for telling which is which.
So, illustrated with the example of probability, this is my criticism of the thinking system promoted at Less Wrong: they take mathematical (or sometimes mathematical-sounding) models that are somewhat pure for some purposes, canonize them as the definition of rationality, and then use them for other purposes they are too filthy for.