This is the Title of My 300th Post

(For more on self-referentiality, click here.)

It’s a momentous occasion, and it calls for something special. So I’m going to “take down the barriers” and let people see the real me, the sensitive Neuropoison that you always knew was in there somewhere. Here’s a little ditty I wrote, inspired by an old song from like medeival times or whatever. (The meadeiaval period is defined as the period so long ago that no one knows, or cares, how to spell “miedeivael” any more.)

Are you going to Scarborough Fair?
Parsley, sage, rosemary and thyme.
Remember Mita Whonhoo lives there.
She once was a true love of mine.

Tell her to make me a sandwich right now.
Parsley, sage, rosemary and thyme.
And if she delays, say, “Do it, you cow!”
Then she’ll start making that sandwich of mine.

Tell her to make it with sourdough bread.
Parsley, sage, rosemary and thyme.
And then when I’m done, she’s to give me some head.
Then she’ll be a true love of mine.

Tell her to lube up her asshole for me.
Parsley, sage, rosemary and thyme.
I’ll assfuck her hard, and I’ll do it for free.
Then she’ll be a true love of mine.

Then when we’re through she’s to fetch me a beer.
Parsley, sage, rosemary and thyme.
Then I’ll fuck her so hard that the neighbors will hear.
Then she’ll be a true love of mine.

Damn, I’m sensitive. Sometimes I’m so sensitive I almost make myself cry. I like taking long walks on the beach too. At sunset. It’s so good to be an enlightened, sensitive, twenty-first century man.

Straightening out the “Anthropic Principle”


This is my second and presumably last post on the anthropic principle. The first one is here.

The anthropic principle per Wikipedia is the “philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it.” In popular discourse, this notion often manifests something like this: “How is the outrageously unlikely fact of our existence explained? Well, if the universe weren’t consistent with human life, we wouldn’t be here to ask that question!” I have beaten this formulation with a big heavy stick before (see the foregoing link) and I’ve now figured out how to frame the issue in a different but equally clear way.

First, note that probabilities from your point of view depend on how much you know. For example, there's the probability that it will rain on any random day in Boston, given no other information. Then there's the probability that it will rain today in Boston, given that it rained yesterday. These are generally going to be different probabilities.

Stat folks say “conditional on” instead of “given that.” E.g. where a normal person would say “the probability that it will rain today, given that it rained yesterday,” a Stat person would say “the probability that it will rain today, conditional on the fact that it rained yesterday.” And the probability that it will rain on any random day, given no other information, is called the unconditional probability.
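The distinction can be sketched in a few lines of Python, using a made-up weather model in which rain on consecutive days is correlated:

```python
import random

random.seed(0)

# Toy weather model (numbers invented): if it rained yesterday, it rains
# today with probability 0.6; otherwise with probability 0.2.
def simulate_days(n):
    days = [False]
    for _ in range(n):
        p = 0.6 if days[-1] else 0.2
        days.append(random.random() < p)
    return days[1:]

days = simulate_days(100_000)

# Unconditional probability: fraction of all days with rain.
p_rain = sum(days) / len(days)

# Conditional probability: fraction of rainy days among days
# that immediately follow a rainy day.
after_rain = [today for yesterday, today in zip(days, days[1:]) if yesterday]
p_rain_given_rain = sum(after_rain) / len(after_rain)

print(round(p_rain, 2), round(p_rain_given_rain, 2))
```

In this model the unconditional rain probability settles near 1/3, while the probability conditional on rain the day before sits near the 0.6 the model builds in. Same weather, two different probabilities, because they answer two different questions.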

On the “anthropic principle”: When people ask things like, “How is the outrageously unlikely fact of our existence explained?” they are interested in the unconditional probability that the universe has properties that can support human life (and that human life actually did evolve, but let’s just stick with the first part). Whereas the “anthropic principle” answers the completely trivial question, “What is the probability that the universe can support human life, conditional on the observation that human life actually exists?” The answer to that utterly trivial question is 100%, obviously.

Literally no human being ever, in the history of the world, meant to ask, “What’s the probability that the universe can support human life, given that it actually does support human life?” Yet that is the question that the so-called “anthropic principle” answers. Seriously, here’s the Wikipedia formulation again: The anthropic principle is the “philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it.” That is literally saying, “The probability that the universe can support life, given that there is life to observe it, is 1.”

So much for the “anthropic principle.”

So what’s the honest answer to the unconditional question? I don’t see how anyone could know, because to answer this we’d need to know the probability distribution from which the actual universe was drawn. We don’t know that. Of course one can put forth if-then propositions about it. A common one is, “Suppose all universes which are physically possible exist. (The many worlds hypothesis.) Then the probability of any particular universe existing (including ones with humans) is 1.” Sure. But we don’t know whether the many worlds hypothesis is true.

On the Possibility of Having True Beliefs

I vaguely recall a blog in which people were debating the role of self-confidence in pickup. (I forget whether it was a Game blog or a “rationalist” blog.) One person in the comments made the well-known point, a la Heartiste, that it’s better to be irrationally self-confident than rationally pessimistic, because with confidence you’ll do better with chicks.

Some doofus disputed this, saying it’s not good, because then “you’ll have beliefs that are demonstrably false.” Who cares, doofus? I’d rather have the false belief that I’ll score with 99% of chicks, which self-confidence leads me to score with say 20% of them, than have the belief that I’ll only score with 1%, if that pessimism would be a self-fulfilling prophecy and lead me to score with 1% of them. Or even worse, what if you had the belief that you’d score with zero chicks, and that became a self-fulfilling prophecy?!

The second commenter missed that while having true beliefs is good, there are other things that are also good. Like sex, for example.

Nietzsche: “Knowledge for its own sake”—that is the last snare laid by morality: we are thereby completely entangled in morals once more. The opening sections of Beyond Good and Evil engage with this in more depth. Old Fred was an interesting guy.

I could just stop right there, but I want to springboard from here to make a broader point about beliefs and outcomes. Consider general situations in which beliefs affect reality. For example self-fulfilling prophecies (SFPs) are common in economics, e.g. if enough people think a recession is coming, that can make them freak out and behave in ways— cutting back on consumer spending, laying off workers— that bring on a recession.

You can also have the opposite of an SFP. Example: I hear tell that chicks don’t like wearing the same dress as other chicks. Let’s suppose that every chick who’s going to a certain party tonight believes that lots of other chicks will be wearing a certain off-the-rack dress. Since they hate wearing what other chicks are wearing, none of them wears that dress. So the belief prevents itself from coming true.
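The dress story can be written as a toy iteration, with every number invented: each guest's belief about how many others will wear the dress determines how many actually do, and updating on what was last observed just oscillates forever.

```python
# Self-blocking belief as a toy dynamic (all numbers hypothetical).
# Each guest believes a fraction `belief` of the others will wear dress X.
# A high belief means nobody wears it; a low belief means a 30% baseline do.
def actual_wear_rate(belief):
    return 0.0 if belief >= 0.1 else 0.3

belief = 0.3  # initial belief: "lots of others will wear it"
history = []
for _ in range(6):
    actual = actual_wear_rate(belief)
    history.append((belief, actual))
    belief = actual  # next party, believe what you last observed

print(history)  # belief and outcome flip-flop; they never agree
```

No belief here ever matches its own outcome: believing 0.3 produces 0.0, believing 0.0 produces 0.3, and round it goes.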

In general beliefs can affect reality in ways more complex than self-fulfilling prophecies or self-blocking prophecies.

Here is my main point: In situations in which beliefs matter, it’s not at all obvious that there always even exist equilibrium beliefs, that is, beliefs that are both true and game-theoretically stable.

In math-speak, the item of interest is the mapping between beliefs and reality and the question is, Does that mapping even have a fixed point? That is, are there any beliefs that are self-confirming? It’s far from obvious whether the answer is always Yes.

(In the pickup example, a fixed point would be any SFP about your success rate. E.g. if you think you’ll score with 60% of chicks and that level of confidence causes you to indeed score with 60% of chicks.)

It’s possible that there are kinds of interactions in which any given belief is like the dress belief, in the sense that any particular belief will prevent itself from being true. In this kind of situation, hectoring someone because he has beliefs “which are demonstrably false” is even more idiotic, because it’s not even possible to have beliefs which won’t be demonstrably false!

(In theory an external observer— someone who’s not going to the party, in the dress example— could make correct predictions about the situation, but my focus here is the beliefs of people involved in the situation, e.g. you’re a guy going to a bar to try to pick up a chick and that’s what you’re forming beliefs about. By the way, even an external observer can’t form correct predictions without knowing all participants’ beliefs. That would require reading people’s minds, so no.)

On this claim that equilibrium beliefs may not be possible, people familiar with game theory may say “Ha! Nash’s Theorem, bitch!” But of course Nash’s Theorem makes certain assumptions about the environment, and uses a fixed-point theorem to prove the existence of equilibrium. If the mapping from beliefs to outcomes isn’t continuous, standard fixed-point theorems don’t apply, so that kind of proof doesn’t work.

(Note to nerdlingers: Nash’s Theorem deals with the continuity problem by letting agents’ moves be chosen probabilistically; this makes the relevant strategy sets continuous. But here, an agent’s “move” is his belief. The beliefs are about probabilities, but the beliefs are not themselves chosen probabilistically. There is a difference between (A) believing that a certain coin has a 0.5 probability of coming up heads, and (B) randomly switching between believing that it has a 100% probability and a 0% probability of coming up heads. If agents change their beliefs randomly, Nash’s Theorem might apply, but that’s not what we mean when we talk about beliefs, and certainly not rational beliefs. Changing your behavior with a random component can be rational, in adversarial games where you don’t want to be predictable, but changing your beliefs at random is not rational.)

The relevant mapping doesn’t even necessarily fill up the entire space. (Nerds: The mapping needn’t even be a surjective, i.e. “onto” mapping.) Indeed, there’s no man in the world who can be guaranteed to score with every woman in the world if he tries. Doesn’t matter if you believe you’ll have a 100% success rate; you won’t. So not all success probabilities are even in the range of the mapping.

“Damn it, Neuropoison; you’re really ass-raping my attention span here!” Okay sugar-tits, look at the pretty picture:

A mapping from believed probabilities (horizontal axis) to actual probabilities (vertical axis).

The diagram has the same variable on both axes, probabilities in [0,1]. Thus the entire admissible space is a 1 x 1 square, though I prefer to call it a “2-dimensional hypercube” as that helps me to score with intellectual chicks. Any point on the identity line is a fixed point, a self-confirming belief about the probability of some event. The issue is that the mapping drawn here is not continuous, and as drawn it jumps right over the identity line, so there is no fixed point. That is, there are no self-confirming beliefs.

Just eye-balling it, it looks like if your belief is 100%, the reality is about 60%, and that’s as high as it gets. If this described your F-close rate with chicks, your best belief (if you could choose your beliefs purposefully) would be that you’d score with 100% of chicks you hit on, which would lead to a success rate of 60%. Obviously I just pulled these numbers out of my ass, but the point is, anyone who says, “Your beliefs are demonstrably false” should be given a wedgie for various reasons, among them that there are no beliefs which will self-confirm as demonstrably true anyway.
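Here is a minimal numerical sketch of the same situation, with a made-up piecewise mapping (chosen so that believing 100% yields about 60%, echoing the eyeballed numbers above). A fine grid search finds no self-confirming belief:

```python
# A discontinuous belief -> reality map on [0,1] with no fixed point.
# The shape is loosely inspired by the figure; the numbers are invented.
def outcome(belief):
    if belief < 0.5:
        return 0.55 + 0.1 * belief   # this branch sits above the 45-degree line
    return 0.3 + 0.3 * belief        # this branch sits below it; jump at 0.5

# Scan a fine grid for self-confirming beliefs, i.e. fixed points b = outcome(b).
grid = [i / 10_000 for i in range(10_001)]
fixed = [b for b in grid if abs(outcome(b) - b) < 1e-9]

print(fixed)                 # [] -- no belief confirms itself
print(outcome(1.0))          # 0.6: total confidence caps out at 60% success
```

The discontinuity at 0.5 is doing all the work: a continuous map from [0,1] to [0,1] would be forced (by Brouwer's fixed-point theorem) to cross the identity line somewhere, but this one hops over it.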

Now that I’ve finished writing this I’m wondering whether it’s mathematically robust. It seems to be, but did I miss something? Is there some way to do a Nash on this and guarantee that everything is actually continuous in the relevant way, thus guaranteeing at least one fixed point? If not, it’s unsettling, as it illustrates that there can be situations in which having correct beliefs is not even theoretically possible.

UPDATE a few days later: It turns out I was right. Nerdlinger explanation: The reason you can’t “do a Nash” on this is that Nash’s theorem applies to game theory, in which all players are best responding to other players’ moves. (From now on I’m going to write playahs because that amuses me.) That is, each playah’s move is his best option given the other playahs’ moves. And “best responding” means optimizing, which (with other features of Nash’s setup) allows the Theorem of the Maximum to be applied. And that theorem implies the continuity of best-response mappings, which in turn implies the existence of at least one fixed point. But here, there is no optimization/best responding. You believe some probability, then cause and effect kicks in and results in some actual probability. There’s no other playah who is choosing the actual probabilities to optimize some goal function. Therefore, nothing prevents the relevant mapping from being discontinuous, so there is not necessarily a fixed point.

Sweet, an opportunity to use my Epistemology tag!

Slate Star Codex:

“I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.

Suppose that one of my friends says she saw a coyote walk by her house. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.

Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it.”

No, you’re not wrong to do this; you’re using your beliefs for their proper purpose: making judgments about the world. The whole reason you have a belief that polar bears are extremely rare or non-existent in Berkeley is so that if you think you see a polar bear, you’ll look again more carefully, or that if your friend says “Polar bear!” you’ll consider that she might be playing a joke on you, etc.

The point of having beliefs is not just to have them. It’s to use them to guide yourself through the world. You use them to, e.g. make judgments about how likely it is that your friend is lying or playing a joke on you, etc.

Furthermore, it’s a known fact that people sometimes joke, lie, are mistaken, etc. What entitles you to dismiss that fact? If you believe your friend, you’re abandoning your well-founded belief that people sometimes say false things AND your well-founded belief that there are no polar bears in Berkeley. That’s a weird decision to make.

If you disbelieve your friend, you are retaining your well-founded beliefs that people sometimes say false things and that there are no polar bears in Berkeley. That seems sensible, given the monstrously large number of times humans are observed to say false things, and the large number of times you’ve failed to observe any polar bears in Berkeley.

If I said I saw gnomes dancing on my roof, what would you actually do? Slightly raise your probability that there are gnomes, or significantly raise your probability that I’m a jokester?
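For what it’s worth, the coyote/polar-bear asymmetry drops straight out of a one-line Bayes update. All the numbers below are invented for illustration:

```python
# Bayes update on "my friend reports a polar bear" (all numbers invented).
prior_bear = 1e-6         # prior probability a polar bear is loose in Berkeley
p_report_if_bear = 0.95   # chance she'd report one if it really were there
p_report_if_none = 1e-3   # chance of a joke / mistake / hallucination otherwise

joint_bear = prior_bear * p_report_if_bear
joint_none = (1 - prior_bear) * p_report_if_none
posterior_bear = joint_bear / (joint_bear + joint_none)

print(posterior_bear)  # rises roughly a thousandfold over the prior, yet stays tiny
```

With these numbers the report moves the polar-bear probability up enormously in relative terms and almost not at all in absolute terms, so disbelieving it isn’t bias; it’s the priors doing their job.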

Pro-Natalist Ben Gadwin

I just serendipitously found Ben Gadwin’s excellent pro-natalist Twitter account.

Update: Some of his stuff is actually kinda weird, like his notion that he’ll give all his wealth to his first-born son and nothing to any of his other children. WTF? And I don’t trust this:

Four dates a day? On average? Hard to take that seriously, even if he’s independently wealthy and doesn’t have to work.

A random selection of good stuff from it:


1) If your religion doesn’t build healthy, loving, ambitious, and large families that want to spread life to the stars, there’s something wrong with it.


2) Dates in Western Europe vs Eastern Europe (I wish I was making these up):
Western Europe: “I’m studying political science. It’s fascinating. I’m learning about critical theory and how capitalism oppresses all of us.”
Eastern Europe: “Feminists are crazy. Why would I want a job when I can stay at home, cook, and raise kids?”

3) Some stories still have happy endings:

Son of psychotically evil woman, who tried to turn him into a girl, will be allowed to attend school as a boy. Next step: Applying the death penalty to his “mother” (who is not even his actual, biological mother).

4) A sad case:

Creator of Sex and the City, single at 60, regrets choosing a career over children as she is now “truly alone.”

5) Normie/Rebel


Not sure I’d even fuck the girl on the left. Holy moly, would I fuck the girl on the right, and I’d cum so hard I’d blow the top of her head off like a shotgun.

I Don’t Know How to “Solve” Banking, and I Doubt Anyone Does

Unless I’m wrong, and you know how to solve it, in which case let me know in the comments. I’m curious. Or get it published in an Economics journal, and get the Nobel prize.


Consider this scenario: Vinny the Loan Shark, whom you owe $50,000, came up to you at 9:00 this morning and said, “Gimmee my fifty back by 5:00 tonight or I’ll break both your legs.” Vinny is a very direct guy. Isn’t it great to get away from stuffy circumlocutions?

Fortunately, you own a house worth $100,000. No problem! You’ll just sell it by 5:00 this evening, and you’ll have enough money to pay Vinny back and then some. You and your legs will be fine. What’s that? You can’t liquidate your house that quickly? Oh dear, you do have a problem. I hope your health insurance is paid up.

Before you censure Vinny too harshly, grok this: When it comes to banking, you and I are Vinny.

See, we all want banks to offer checking accounts so we can use them to pay for stuff. Obviously this only works if you can spend the money in those accounts any time you want. So, from your point of view, the funds in that account are a zero-maturity asset. From the bank’s point of view, since they have to make payment whenever a check is cleared, those funds are a zero-maturity liability.

The problem, of course, is that most of the bank’s assets are in rather illiquid forms like 10-year loans they made to businesses, 30-year mortgage loans they made to families to buy houses, etc. They can’t liquidate all that instantly. So if a bunch of checking account owners – Vinnies – come up to them at once and say, “We all want our money back right now,” the bank can’t comply.

This is called a bank run and leads to that exciting scene in the movie It’s a Wonderful Life in which the bank where the hero works suffers a run and he has to deal with the emergency. It also leads to the same thing happening in real life, which is more exciting but less entertaining.

“OK,” you say, “just prohibit banks from holding very much of their assets in 10-year business loans, 30-year mortgage loans, etc.” Alas, that won’t work, because you and I demand that banks provide mortgage loans so we can buy houses. And businesses need loans for various purposes, etc.

So here’s the problem: You and I, the public, demand that banks borrow money from us with zero-maturity checking accounts and lend to us with 30-year mortgage loans. This is called “maturity mismatch,” for obvious reasons, and it’s not just an unfortunate accident that banks are set up this way. The public demands that they be set up this way. How are we supposed to square this circle?
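The squeeze can be shown with a toy balance sheet (all figures invented): the bank is solvent at face value, but a run outstrips anything it can liquidate today.

```python
# Toy bank balance sheet illustrating maturity mismatch (numbers invented).
# Deposits are zero-maturity; most assets are long-term loans that can
# only be sold today at a steep fire-sale discount.
deposits = 100.0
cash_reserves = 10.0
long_term_loans = 95.0      # face value; the bank is solvent: 105 > 100
fire_sale_discount = 0.5    # loans fetch 50 cents on the dollar if sold today

def can_pay_today(withdrawals):
    # Cash first, then fire-sale proceeds from liquidating every loan.
    liquid = cash_reserves + long_term_loans * fire_sale_discount
    return withdrawals <= liquid

# Normal day: 5% of depositors want cash.
print(can_pay_today(0.05 * deposits))   # fine
# Bank run: everyone wants out at once. The bank fails even though its
# assets exceed its liabilities at face value.
print(can_pay_today(1.00 * deposits))   # not fine
```

Note that nothing here requires the bank to be crooked or even badly run; the failure comes entirely from funding long assets with zero-maturity liabilities.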

I don’t know. So far, no one else knows, either.

Deposit insurance has greatly reduced the problem of bank runs, but as the crisis of 2007-2009 showed, we are far from Eden.

I once heard an economist say that he thinks the current system— riven though it is with occasional banking panics, waves of failures, etc.— may actually be the best that can be achieved! It was much worse in the bad old days. At the start of the Great Depression, there was a banking crisis that forced about 30% of the nation’s banks to shut their doors! Believe me, the regulators are very aware of the maturity mismatch problem. We’ve been tinkering away with banking policy in the last 80 years and seem to have improved things, but certainly haven’t achieved perfection.

So: If you know how to totally fix banking, send a quick email to Jerome Powell. He’ll appreciate it.

Just Some Inside Baseball

Update: Alf links here and weighs in with:

“No room for blue pilled men, which is to say, they are free to join in on other topics, but if they start spouting off bs about women, they are put in place. See Neurotoxin.”

Jesus, you’re still butt-hurt about losing a debate that ended more than a week ago. If it bugs you that much, don’t engage in debates.

Most readers of this blog won’t know or care what this is about; it’s in reference to some horseshit from another forum where my posts are likely to be deleted. Just getting some stuff on the record. I regret that this clutter is necessary. Regular posts resume next time.

You (you who know who you are): “I demand examples!”

Me: “OK, here.”

You: “That’s not an example of what I’m talking about! Provide examples!”

Me: “Yes, it is. Also here’s another example.”

You: “No it’s not! Also, where’s your argument? I demand that you make an argument.”

Me: “The argument is (etc.)”

You: “Hey, stop rambling for screen after screen.”

Cute. This kind of behavior is characteristic of malignant narcissists. Caught.

And… bye.

Funny, I thought I’d feel regret exiting that forum. Instead I feel this flood of relief.

Female Captive Synd… I mean, Stockholm Syndrome

On the subject of so-called Stockholm Syndrome – really, the correct term is female captive syndrome – conventional discourse has usually been coy about the captives’ sex as a relevant aspect. I once did a Net search for some terms like Stockholm Syndrome, gender breakdown, etc., and came up with nothing. It really is astonishing how much our broader culture is invested in denial about female nature. This has only started to change recently, and you have to go looking for it to find it.

Here are some links, with varying degrees of explicitness about the gendered nature of the phenomenon:

(1) On the original Stockholm Syndrome case, this link doesn’t mention the sex of the captives for the first couple of paragraphs, and never says anything about it, except mentioning, eventually, that one captive was named Kristin:

Olofsson… became friendly with one of the hostages, Kristin Ehnemark; they met occasionally and even their families became friends.

Another notorious case of Stockholm Syndrome is that of millionaire heiress Patty Hearst, who… was kidnapped from her apartment in Berkeley, California by a left-wing urban guerrilla group calling itself the Symbionese Liberation Army (SLA). On April 3 Hearst announced on an audiotape that she had joined the SLA under the pseudonym of ‘Tania’, and on April 15 she was photographed wielding an M1 carbine while robbing a bank in San Francisco.

Eventually the author does get around to saying,

… Most of human history has been played out in hunter-gatherer societies in which abductions, particularly of women and their dependent children, must have been a very common occurrence. Thus, it is possible to envisage that the capture-bonding psychological response exhibited by Kristin Ehnemark, Patty Hearst, and countless others is not just an ego defense, but also an adaptive trait that promotes survival in times of war and strife.

(2) An acknowledgement of the peculiarly female nature of Stockholm Syndrome in the academic literature. None of it is surprising to a red-pilled person – it’s just common-sense evolutionary game theory – but it’s nice to see the level of the discussion rising in academia:

Through the ages, women have suffered greatly because of wars. [Unlike defeated men, who are more likely to have been slaughtered.] Consequently, to protect themselves and their offspring, our female ancestors may have evolved survival strategies specific to problems posed by warfare, says Michelle Scalise Sugiyama of the University of Oregon…

Scalise Sugiyama believes that ancestral women may have developed certain strategies to increase their odds of survival and their ability to manage their reproduction in the face of warfare… The so-called Stockholm Syndrome, in which hostages bond with their captors, could have ancestral roots… as a way to help captives identify and ultimately integrate with enemy groups. This then motivates acceptance of the situation and reduces attempts to resist the captor — which may ultimately increase a woman’s chances of survival.

“Lethal raiding has recurrently imposed fitness costs on women. Female cognitive design bears reexamination in terms of the motivational and decision-making mechanisms that may have evolved in response to them,” says Scalise Sugiyama.

(3) This one has 15 examples of Stockholm Syndrome. Notably, only two of them involve male captives, and about one of those they say, “There is some debate as to whether Shawn Hornbeck suffered from Stockholm Syndrome, but we’ll lay out the facts and allow you to judge.” Warning: some of the examples are disgusting, e.g., involving incest. And a couple are misplaced, for example, it’s not really Stockholm Syndrome when the victim is abducted at a couple of days old and believes her captors to be her actual family. Still, some interesting examples.

(4) Rational Male: War Brides.

The Rational Male blogger, “Rollo Tomassi,” dances around the evo psych of the topic in a way that seems surprising, for a red-pilled man, when you first encounter it. You have to read between the lines. The reason for this, as I recall, is that his wife knows about his blog, and so he’s elliptical about some topics, for the sake of domestic tranquility. (To what extent that should be a consideration for a red-pilled dude is another post.)

Should You Trust “Experts”?

The sometimes perceptive, sometimes bizarre, but usually interesting Eliezer Yudkowsky has a new set of posts up, excerpts from his book Inadequate Equilibria, at the successor site (established in 2017) to the rationalist site Less Wrong. The first post of interest is

Inadequacy and Modesty.

Yudkowsky has two main topics: One is when to trust one’s own judgment, when one disagrees with experts, versus going with the experts’ opinion. The link above talks about informationally efficient and inefficient situations, where we can roughly define an informationally efficient situation as one where the experts are as right as possible given currently available information.

His second topic is the set of ways that a society can get stuck in a suboptimal equilibrium. That’s an enormous topic, which I’ll take up in a later post, but the connection is this: One example of such suboptimality is when there are insufficient incentives for the discovery and spread of information. The dissemination of information – who knows what – is obviously connected to the topic of expertise.

For me, there are two main items of interest in all this. One is random walks and their links to rational beliefs, and the other is the question of expertise itself.

A random walk is a kind of variable that often arises in informationally efficient situations. Yudkowsky uses the classic example of stock prices, though he doesn’t use the term “random walk.” He discusses the reasons to believe that stock markets are informationally efficient, which means that all relevant information known to market participants is already incorporated into stock prices. That in turn implies that you can’t profit by second-guessing the market, because the expected (i.e., mean) change in the price is zero. That in turn is the definition of a random walk. (If you’ve ever taken a Finance class this may sound familiar; it’s the Efficient Markets Hypothesis.)
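A quick simulation of this toy model (not real market data): if each price change is a mean-zero innovation, the average change over many periods is indistinguishable from zero, so there is no drift to exploit.

```python
import random

random.seed(1)

# Toy random-walk price series: each period's change is a mean-zero
# innovation, the defining property discussed above.
price = 100.0
changes = []
for _ in range(200_000):
    step = random.gauss(0.0, 1.0)   # mean-zero, unit-variance innovation
    price += step
    changes.append(step)

mean_change = sum(changes) / len(changes)
print(round(mean_change, 3))  # close to 0: no predictable drift to trade on
```

The individual steps are large (standard deviation 1), but their average shrinks toward zero as the sample grows; any strategy that tries to predict the next step from past steps is, in this model, predicting noise.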

See my post The Mind Cannot Foresee its Own Advance, which presents a generalization of this point.

There are good reasons to think that stock markets are pretty damn informationally efficient:

Empirically, it’s extremely difficult to beat the market – for periods of time long enough to not just be temporary luck – even with Cray big iron and a truckload of quant PhDs on staff. This empirical regularity is the real meat of the argument. But why is that the case? Several reasons, noted by Yudkowsky (but my wording here):

1. There are enormous incentives ($$$!!!) to uncover patterns that other market participants haven’t uncovered, so you can second-guess them and make big bucks,

2. There are lots of people involved in attempting to do this constantly, which tends to push securities prices in the correct direction,(*)

(*) If a security is underpriced you should buy it to profit when people eventually realize it’s underpriced and the price rises. But also, the very fact of your buying it constitutes an increase in demand for it, which tends to push the price up. A symmetric phenomenon happens when you bet on a price falling in the future.

3. There is fast feedback from empirical reality telling them whether their trading strategies are successful, so fast error correction,

4. You can bet either way in the stock markets. That means that whether everybody else is too optimistic or too pessimistic, there are bets you can place to profit when the current mispricing is eventually corrected.

At the link above Yudkowsky says, somewhat floridly,

In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely mistaken than the markets. That is what it feels like to look upon a civilization doing something adequately.

A very important point here is that stock markets are exceptional! As points 1 – 4 explain, there are reasons to think the experts – in this case, securities trading firms – are correct on average.

But in the generic case – consider the field of history, e.g. – none of those things is true. And the whole question of who is an expert basically scuttles this attempt to say “You should often defer to experts.”

For example, Marxist historians like to say that they’re experts on history – they’ve figured out its ineluctable laws! – but they’re actually a bunch of ideological fuckwits who can’t think themselves out of a paper bag. But they’re also professors at many a US university (all the universities, I think). They’ll tell you they’re experts, man. It says so right here on the label, “I’m an expert.”

So is that a reason to regard them as experts?

Plainly not. Well, but we don’t have to trust them on this; after all, they’re hired by universities! Universities must be unbiased; it says so right on the label! Right? No, actually universities (outside of STEM fields, and increasingly even there) are also largely a bunch of ideological fuckwits.

Or so I claim. Am I right or wrong? Before you respond, I claim that I’m an expert on this topic. Hmmmm, we’d need to assess their degree of ideological fuckwittery empirically, wouldn’t we?

Or are we going to count heads? Whose heads? The historians’? But they’re the ones whose very credibility is being questioned, so that would be going in a circle.

How about the average person? Well, the Marxists lose that one, since most people aren’t Marxists. More to the point, if we’re asking people other than soi-disant “experts,” we’ve already departed from the dictum “trust experts.”

(By the way, should we trust the pollsters who are doing the head counting? Are their polling methods unbiased? Are they competent polling experts? How do you know?)

Furthermore, there are no particular incentives to be correct in this area. Your fellow historians are largely leftists who will grant you tenure for saying, in various ways, “Socialism is nice. Capitalism is bad.” You are not betting your own money or your life by predicting that next time – in China… Cuba… Venezuela! – socialism is sure to work.

Marxism is not intended to factually describe reality anyway. It’s an ideology of power, a convenient pre-made language for people who want to seize power, and realize they need a veneer of justice to help them gain adherents, put their opposition on the defensive, etc. Or as Marx himself put it, more discreetly: “The philosophers have only interpreted the world, in various ways. The point, however, is to change it.”

Once it’s recognized that you can’t just trust anyone who says “I’m an expert” – anyone can say that – the whole epistemic question returns to the forefront. How are you going to judge who’s an expert? Hmmm, maybe we need some standards regarding the use of evidence. (Frequentist? Bayesian?) Also some procedures, like whether testable empirical claims are being made, whether they’re replicable and actually replicated, etc.

You see the problem: In order to judge who’s an expert, you have to be damn far along the trajectory of knowledge in the relevant subject, far enough along that you’d be a jack-leg expert yourself on the topic. You might as well just assess the evidence for yourself, ignoring the purported “experts.”

What I do, and I hope everyone does, is try to identify areas in which there don’t seem to be monetary or ideological incentives to be biased, and provisionally trust the experts in those areas, and ignore experts in areas where the incentives are bad. This is far from foolproof, of course. (I’m perpetually surprised by how politicized nutrition science is.) But given the impossibility of becoming experts ourselves in all topics, we use heuristics, imperfect though they are, to try to avoid getting scammed by fraudulent “experts.”

There are many reasons, not just malign intent, that this instinct to mistrust “experts” is sound. Yudkowsky mentions some and I will mention others in my next post.

Here’s a teaser: In my next post I will cite a paper by two game theorists on the topic of expertise. This paper was published in The Quarterly Journal of Economics, one of the top Economics journals in the world. A major conclusion of the paper: In general, the equilibrium outcome is that experts will deliberately not inform you perfectly. Now here’s my question for anyone who says “trust experts”: Do you trust the conclusion of these two game theory experts?