“My god is stronger than your god!”

Severian at Founding Questions has recently been thinking about Julian Jaynes’s “bicameral mind” theory. Halfway through the book I’ve decided I’m not a fan of Jaynes – his logical leaps based on irrelevant evidence can be quite, er, impressive. But as he notes, his theory has different parts, and some may be sorta true for some people even if other aspects are completely false. One part is that people in olden times used to hallucinate voices which they interpreted as instructions from gods or kings.

This part could apply in particular to leftists, by which I mean street-level leftists (not the leadership). Leftists were hit with the tag “NPC”— “non-player character”— for a reason. They can’t think, they have no desire to think, they don’t even know what thinking is. They just look around for authority to obey.

(Women are particularly notorious for this, among red-pilled men, but it’s a general phenomenon on the left.)

Leftists only care about which god/king is the strongest, and which god/king is the strongest is decided by which one has more adherents.

(Or perhaps it’s power-weighted adherents that matters. One famous sports star has as much weight as ten regular people, or whatever.)

This certainly could be one way of accounting for leftists’ baffling and infuriating total indifference to fact, truth, common sense, or even internal consistency. They simply don’t care about those things. They’re just looking around for the most powerful god. And the most powerful god is the one whose voice is most powerful. This accounts for the fact that to them, what’s on TV is what matters, pretty much by definition. TV is the voice of the god in modern society. If you’re saying something that’s opposed to TV, you’re opposing the only thing that matters, the voice of Authority. Leftists are utterly baffled by why you’d want to do that.

This is one of the reasons that their opposition to President Trump was so shriekingly disproportionate. Imagine that nothing matters to you but obeying and publicly repeating what Authority says. As long as Harvard, the New York Times, and the President all speak with one voice, no problem. But if Harvard and the New York Times say one thing and the President says the opposite… Disaster! Catastrophe! The end of the world! Two camps, both of indisputable Authority, saying opposite things! You don’t know whom to obey! This is what leftists have in place of cognitive dissonance. And it’s agonizing for them. Hypocrisy obviously does not bother them in the least. Saying that white people who flee black neighborhoods are evil, while being a white person fleeing a black neighborhood, doesn’t even register with them. But not knowing whom to obey, that is the worst emotional and intellectual torture that it is possible for them to experience. And yet… of course they know that really it was Trump who was out of step with True Authority: He disagreed with what “journalists” and college professors said! That is literally the most heretical of all possible heresies! But still, the Presidency is nevertheless very real, undeniably important and valid Authority. Anything to stop this pain of divided Authority!

This explains the enraging tendency for leftists, noted by pretty much every person on the right, to be utterly immune to fact and logic 99% of the time… and if you do, miraculously, manage to wrest a concession about some issue from them, the next time you see them they’ll have done a complete memory wipe of the debate and tell you they kicked your ass in that debate. All the facts, logic, reason, arguments that you marshaled will have been forgotten, absolutely forgotten. Why? And how? Simple: They don’t actually care about any of those things. The voice of Authority is telling them something different from what you proved yesterday, ergo you cannot have proved it.

Today you get one of them to admit that, say, government policies were the preponderant cause of the mortgage bubble and collapse of circa 2008. “Thank God!” you think. “That took seven hours of debate but at least I made a tiny dent in leftism.” But lo and behold! When you see him the next day he tells you that you proved no such thing and that the mortgage collapse was entirely caused by greedy white male capitalist loan officers. He totally denies everything that he conceded less than 24 hours before, to an utterly shameless extent that would be gaslighting if he were doing it on purpose. But that’s not really what’s going on; he’s not thinking “Bwah ha ha; I shall now gaslight this person.” What’s going on is that within 5 seconds of leaving your presence he reboots and reinstalls the Official Party Line of Authority, and the Official Party Line of Authority is that the mortgage meltdown was caused by greedy white male capitalist loan officers.

(If he’s not a totally goodthinkful leftist— that is, if facts sorta enter his head, in a dim way, once or twice a year— the rebooting may require that he hop onto the Net and read a sentence or two of some Op-Ed that appeared on CNN’s web site, which reminds him of what Authority’s official position is.)

Authority’s official position is a substitute for the truth in his little leftist NPC mind. It is isomorphic to the truth in the NPC mental topology. We have truth; they have “what Authority says.”

Authority’s Official Party Line is not necessarily truth— which does not exist in any important sense in the leftist mind— rather, Authority’s Official Party Line is What We Are Saying. What We Say is what Authority Says. Repeating What Authority Says is What We Do, and the truth of the claims, assertions, propositions, strings of words is not even a question that it occurs to leftists to ask, let alone care about.

We have truth; leftists have What Authority Says.

An example. Remember when you got up this morning. Think about the shirt you’re wearing now (shirt, dress, blouse, whatever). Remember how, before you put it on, you counted every thread in it to make sure that it doesn’t have a prime number of threads? Wait, what? You didn’t do that? It didn’t even OCCUR to you to do that? The thought never even entered your head? That’s how leftists are about the truth of the claims, er, wordstrings they repeat. It literally never enters their heads to care whether they’re true or not. If they think about it at all, they think WE’RE the weird ones because we care about truth!

This also accounts for the bizarre leftist tactic in debate of saying “No one else agrees with you.” To them this is a devastating nuclear bomb. They expect you to be crushed. Or at least to care. That we brush it off and go on talking about facts probably baffles and frustrates them as much as their total indifference to facts baffles and frustrates us.

So we won’t make progress in defeating the broad mass of the leftist cadre until we take over the educational system and the media. Yikes. That’s not going to happen short of a no-foolin’ civil war. Luckily— “luckily,” snort— the left is bumbling us into one with their insanity and hysterical refusal to compromise about anything ever. Also, the domestic situation will be shaken up enormously, natch, if we get involved in a war with Russia, the world’s largest nuclear power. And that could happen if our insane NPCs don’t back off. And so far they’re refusing to back off…

An observable characteristic of p-zombies

In case you’re not familiar with this concept: “p-zombies” is short for “philosophical zombies.” This does not mean zombies who are interested in Aristotle. It means biological robots who are human in every respect except that they have no consciousness. They are not self-aware, just very sophisticated robots made out of flesh. The concept is used in philosophy, neuroscience, etc. as a foil for thinking about consciousness.

The standard view of p-zombies is that they’d be observationally indistinguishable from humans as we know them. This is because, if we take a purely materialist view of consciousness, consciousness does not actually do anything; it’s just “along for the ride” as they say. All the neural structures that support cognition can do so without requiring that there be any subjectivity involved. (That’s one theory, anyway.) So p-zombies would talk, laugh, do mathematics, have sex, fight, etc. There just wouldn’t be anybody home inside their skulls.

But I was thinking about this recently because I’ve been reading Jaynes’s Bicameral Mind (due to the fact that Severian at Founding Questions has been talking about it). And it occurred to me…

There is something normal humans do that p-zombies wouldn’t: Talk about consciousness. P-zombies can talk about anything that’s empirically observable in the classic positivist sense of observable, e.g. they can say “red” when electromagnetic radiation of a certain wavelength hits their eyes. But there is no consciousness in a world of p-zombies. There’s nothing for them to observe, no empirical phenomenon to provoke any kind of response from their nervous systems. No talk about consciousness could occur on a planet inhabited by such beings.

If, by assumption, there is no consciousness, then there is no analogue of “light of a certain wavelength” for p-zombies’ nervous systems to respond to.

This is not like saying that p-zombies couldn’t talk about centaurs because centaurs don’t exist. Sure they could; they could combine the ideas of horse and a person. But a notable feature of consciousness is that it’s not observable (in a classical positivist sense). Consciousness is not like centaurs.

(Consciousness is sui generis; that’s why there’s a philosophical and scientific problem of consciousness but no philosophical and scientific problem of centaurs.)

The situation is fragile, though. A planet inhabited by nothing but p-zombies could never invent the term “consciousness,” but it would only take one normal human to exist and start saying things like “I have a subjective consciousness” to get the p-zombies to start uttering similar sentences by imitation. So the very existence of word-concepts like “consciousness” is proof that somewhere, somewhen, there has been at least one self-aware human. Paraphrasing Heinlein from The Moon is a Harsh Mistress: “Are humans self-aware? Well, I don’t know about you, tovarisch, but…”

“I Fucking Love Science!”

(1) People who fancy themselves “rationalists” or “lovers of science,” or, to use an older and now dorky-sounding term, “secular humanists,” wanted to join a community of rational people who would get everything right. Or at least, they’d get everything rightER over time, because they were committed to the right methods—logic, the scientific method, etc.—and that’s what matters, yes? Their notion was “We’ll shed horrors of the past like (literal) witch hunts based on ignorant religious beliefs” and they’d obtain all the benefits of keeping up with the latest discoveries in nutrition, etc. And— whether they admitted this to themselves on a conscious level or not— at the same time they’d receive the emotional and social benefits of joining a church. And all this while exorcizing the demons of irrationality! Awesome!

But that’s not what happens. What happens is that the minnow swallows the whale. What happens is that humans are humans, and so the demons of irrationality assimilate the “rationalistic pursuit of truth” so that that becomes just another empty slogan. The slogan is more-or-less immediately turned toward vicious witch hunts.

Instead of pointing at people and screaming Witch! “scientific rationalists” now point at them and scream Mask denier!

This may require some reminders about rhetoric from February and March 2020 which turned out to be politically inconvenient and so has been memory-holed. In case you’ve forgotten: In February and March 2020 the standard leftist line on masks was that masks were somehow a fascist plot of then-president Trump. The argument was never made coherent, but the basic structure was clear:

1. Trump might support masking.

2. ?

3. Therefore, Fascism.

Soon after that the left executed one of its impressively disciplined U-turns, from declaring on date X that anyone who supported masking was a fascist to declaring on date X+2 that anyone who did not support masking was a fascist.

My point: If any of these people who claimed “I fucking love science!” actually cared about science they would have taken to heart the traditional rhetoric of science, which declares that all knowledge is provisional, subject to future revision, etc. (Whether this is literally always true doesn’t matter; it’s true at least a large fraction of the time.) Thus they would have been less quick to accuse pro-maskers of being Trumpian fascists in February 2020, and would have been hesitant about declaring anti-maskers to be Trumpian fascists in April.

Which episode demonstrates that…

If you make science your religion substitute you don’t get a “rational belief system.” You get all the worst aspects of religion assimilating the intended rationality.

Why the worst? (Why not the best?) Because traditional religions have been around for hundreds or thousands of years and (a) have had their most destructive moral fires burn out long ago, and (b) have learned certain lessons about real-world humans over time. Thus they’ve accumulated brake pedals, negative feedback mechanisms, procedures for cooling down excessively fervent new converts, etc. New religions, whether of “pure reason” or Marxism, etc., don’t have any of that. They’re pure moralistic assholery, which is why they do things like murder people by the tens of millions in the name of peace and enlightenment.

You cannot have a religion, or religion substitute, based on science or rationality.

Fundamentally, trying to have a religion based on (the search for) pure truth is a category mistake: It’s like expecting that taking a music class will give you big muscles or that lifting weights will teach you about music. And that category mistake, in which all that happens is that your hopes are disappointed, is the best-case scenario. The worst-case scenario is worse, oh so much worse. See: the French Revolution, the entire twentieth century, etc., etc.

(2) Suppose you could get people to take the pursuit of truth seriously. Humans being humans, a holiness spiral would start on that, sooner or later. (Translation: Sooner.)

Think you can’t holiness spiral on the value of truth?

You work in the lab searching for truth 17 hours a day. I say everyone should be forced to work in the lab 18 hours per day!

You only found the mass of the electron out to 11 decimal places! I say we should never rest until we’ve figured it out to 12 decimal places! A thousand decimal places!


(3) A certain amount of social coordination might require some false statements. I’m not sure about this; it’s just a hypothesis.

But suppose it’s snowing out and I say “It’s snowing out.” You say, “Yes, indeed it is.” We’re not doing anything but noting a fact.

Suppose I say, “Snow isn’t white; it’s orange with green polka dots.” If you say “Yup, it sure is,” something else is going on. Plainly we don’t have this exchange to note a fact. I’m basically saying “I’m a member of the Orange Snow Cult” and you’re saying, “Hey, me too!” This is a social function that has nothing to do with sharing observations about empirical reality.

And it’s a really strong signal of shared group membership precisely because it’s so obviously false.

Comprehension quiz: Why does Catholic doctrine say the bread is actually, literally, not metaphorically the body of Christ? Why is it not an important aspect of Catholic doctrine, or any ideology, that snow is cold? (Explain your answer, 10 points.)

If we’re going to create a world with a sane public ideology/religion/cohesiveness-fostering memeplex, we need to consider this aspect of communication, er, verbal behavior. I haven’t yet given up on the idea that a state religion could consist solely of (a) prescriptive statements (“Don’t steal”), (b) empirically true statements (“Snow exists”) and (c) safely meaningless statements (“The Transcendental is infinite”). But it’s possible that we’re going to need some flagrantly false ones too, to facilitate social cohesion. In fact history rather strongly suggests that’s the case. (Fuck me sideways, but humans are weird.) If it is necessary, then we should devote some thought to blatantly false statements that are harmless. Something like “Snow is orange with green polka dots.”

We’ll have to navigate between a Scylla and Charybdis: We want to avoid statements like “Men and women are equal” that stupid people might take seriously, or evil people might pretend to take seriously. But at the same time, “Snow is orange with green polka dots” plainly won’t do: it’s not nearly grand-sounding enough. We need something that sounds important like “Men and women are equal”—something which, if it were true, would be important. At the same time it can’t actually be important because then it lends itself to being taken seriously and holiness spiraled. On the other hand, “Snow is orange with green polka dots” does not lend itself to being holiness spiraled, because it’s obviously fanciful, but for that very reason it won’t win any hearts and minds. No 17-year-old is going to become a fervent Orange Snower the way they become fervent Christians or Marxists or Objectivists etc. We need something that’s grand-sounding like “The Transcendental is infinite” but less vague. It must sound like it might actually mean something.

In the Hitchhiker’s Guide to the Galaxy series some aliens have a problem. Billions of years ago they created an enormous computer to find the Ultimate Answer, the answer to the question of Life, the Universe, and Everything. After calculating for a long time, it comes up with the answer, which is 42. However, it doesn’t know what the question is. The two aliens who are supposed to convey the Ultimate Answer to the rest of their species try to just bluff their way out of this by making up a question.

“The only thing we can do now,” said Benjy, crouching and stroking his whiskers in thought, “is to try and fake a question, invent one that will sound plausible.”

“Difficult,” said Frankie. He thought. “How about, What’s yellow and dangerous?”

Benjy considered this for a moment.

“No, no good,” he said. “Doesn’t fit the answer.”

They sank into silence for a few seconds.

“All right,” said Benjy. “What do you get if you multiply six by seven?”

“No, no, too literal, too factual,” said Frankie, “wouldn’t sustain the punter’s interest.”

Again they thought.

Then Frankie said: “Here’s a thought. How many roads must a man walk down?”

“Ah!” said Benjy. “Aha, now that does sound promising!” He rolled the phrase around a little. “Yes,” he said, “that’s excellent! Sounds very significant without actually tying you down to meaning anything at all. How many roads must a man walk down? Forty-two. Excellent, excellent, that’ll fox ’em. Frankie, baby, we are made!”

We need a “How many roads must a man walk down?” The challenge is that we are not writing comedy SF but trying to design a workable state religion. We must be cynical and not cynical at the same time. We must be cynical because we are trying to design a state religion that will work for actual humans. And we must not be cynical because we’re the good guys and we’re trying to design a state religion that is sane and good.

Well… no one can accuse us of insufficient ambition in our goals!

Political Philosophy and the Theory of Martingales

My political philosophy is the theory of martingales.

A martingale is a dynamic stochastic process which— don’t fucking freak out. “Dynamic” simply means changing over time and “stochastic” means having a random component. In other words, dynamic stochastic processes are Life, the Universe, and Everything.

Anyway, a martingale is a dynamic stochastic process whose expected change is zero; it’s a generalization of the concept of a random walk. That’s all, no biggie. If you can’t handle this go look at some funny cat videos on YouTube and I’ll see you for my next post. Besides, I’m shuffling some ignorable technical asides off to the footnotes.(1) Maybe that will get this published in USA Today; they fucking love the theory of dynamic stochastic processes.

It’s trivially easy to prove that the evolution of a rational person’s beliefs over time is a martingale.(2) Basic idea: If you’re rational you’ll have anticipated all future events that are anticipatable, so the only thing left over to change your beliefs over time is events that weren’t anticipatable. That is, you cannot predict the future evolution of your own beliefs.

The more precise statement of this explicitly mentions events’ probabilities, but it’s the same idea.
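The point can be made concrete with a toy model. The following is a minimal sketch, not anything from the post itself: I’m assuming a Bayesian watching coin flips, with a Beta posterior over the coin’s heads-probability. Whatever her current belief, her expected belief after the next flip, computed under her own predictive probabilities, is exactly her current belief: the expected revision is zero.

```python
from fractions import Fraction

def posterior_mean(a, b):
    """Mean of a Beta(a, b) posterior over a coin's heads-probability."""
    return Fraction(a, a + b)

def expected_next_belief(a, b):
    """Expected posterior mean after observing one more flip.

    The agent's own predictive probability of heads is her current
    posterior mean; we average the two possible updated beliefs
    under that predictive distribution.
    """
    m = posterior_mean(a, b)
    mean_if_heads = posterior_mean(a + 1, b)
    mean_if_tails = posterior_mean(a, b + 1)
    return m * mean_if_heads + (1 - m) * mean_if_tails

# Whatever the agent currently believes, the expected revision is zero
# (exact equality, thanks to Fraction arithmetic).
for a, b in [(1, 1), (3, 7), (50, 2)]:
    assert expected_next_belief(a, b) == posterior_mean(a, b)
```

The equality is exact, not approximate: it falls straight out of the Law of Iterated Expectations mentioned in footnote (2).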

Sample path: Rational agent’s belief over time in 2-space.

I briefly discussed some implications that this fact has for politics in one of my first posts, The Mind Cannot Foresee Its Own Advance, and I want to return to this theme. Despite the opening sentence of this post, my entire political philosophy is not really the theory of martingales; I just wanted to open with a strong statement. Like Beethoven’s famous duh duh duh DUH.

Now the human race (SPOILER ALERT) is not rational, but we are learning over time in a way that involves paying attention to data. (Some of us are, anyway.) This is true at least for subjects that aren’t too politicized, e.g. astronomy, and for pretty much all subjects in investigations and discussions outside of our formal institutions (which are hotbeds of reality-hating dogmatism).

But note that crucial caveat: for subjects that aren’t too politicized. We need to keep the “politicize everything” crowd disempowered and at the margins of society so they can’t step on the human race’s ability to advance. Things must be kept as loose as possible so we can continually follow the unpredictable martingale of our evolving beliefs toward the truth.

I mentioned astronomy as a subject that’s not too politicized, but of course no subject is safe from the left. Apparently the USSR had a Marxist dogma about whether the universe is finite. I forget which way Marxist dogma landed on that, but the point is, the dogma didn’t come from data; it came from some notion about whether an infinite universe was consistent with dialectical materialism. (LOL, WTF?) And of course there was the Lysenkoist period, which— according to a Soviet biologist in the Gorbachev era— set Soviet biology back by at least a decade.

Obviously neither astronomy nor anything else is safe from the increasingly insane left, a mob that has decided that acronyms are racist and the statement that 2+2 = 4 is western cultural imperialism.

Not that further examples are needed these days, but an example of how this insanity affects hard science: Bill Nye the Science Guy used to have a video explaining how XY and XX determine a person to be male or female. Netflix censored that video. Nye himself, apparently quite the screaming pussy, disavowed reality and embraced “gender fluidity” theory around that time.

These are examples of the human race moving backwards, but it’s not enough to not regress; we need to advance. And no dogma can ever say “We’ve figured it out; no further intellectual innovations are valid,” because the future evolution of our beliefs is unpredictable.

This doesn’t mean you can’t hold the opinion that certain matters are for practical purposes settled. Often the probability of further significant revisions is small enough (based on current information) that that’s a reasonable belief. But it does mean that no person or group should ever be allowed the power to stop other people from investigating the allegedly settled subject.

Einstein sure as shit wasn’t predictable based on Newton. If future discoveries were predictable they wouldn’t be discoveries.

I’ll return again (probably ad nauseam) to a theme of this blog: The things that many of us used to believe about women were largely the exact opposite of the truth. God forbid that we not be allowed to revise our beliefs over time! I won’t get into specifics much— I rehash them enough— but let me mention that some of this body of knowledge about female psychology is relevant for a single man on the dating market and some of it is relevant from a point of view of “social policy.” (A lot of it is relevant for both.) An example: That a lot of women want to play a game of “Let’s you and him fight” between men, including groups of men at a societal level, has existential implications for the survival of our society. That’s something I never could have foreseen when I was 15, or probably when I was 25.

As Eliezer Yudkowsky said in a lucid moment: “Let the winds of evidence blow you about as though you are a leaf.” This is nothing but a poetic formulation of the martingale proposition.

Does all this imply that free speech absolutism should be non-negotiable? Actually, yes, in principle that’s exactly what it implies. But. We have seen in the last 75 years or so that that may not be a long-run stable situation. Total freedom of speech, as well as providing enormous benefits, also provides malign power-seekers unrestricted opportunity to coordinate and plan with each other. This is a problem because (along with 99 other reasons) when they acquire power one of their first moves is to crush all ideas they don’t like. So it’s possible that level-1 censorship might be necessary to prevent level-10,000 censorship. E.g. we might have to exclude communists from universities because if we let them in, they’ll soon take over and exclude everyone but communists.

Or maybe it’s not that simple. There are enormous practical problems with ceding anything to the idea of censorship because that abandons the clearest Schelling point on the issue. Maybe the best formulation of the problem is not “Choose zero censorship,” because that might not be a long-run tenable situation, but “Choose the minimum sustainable level of censorship.” Not to cop out, but: It’s complicated.

In any case…

The human race faces fearsome challenges, as it always has and always will. We must be free to have our beliefs changed unpredictably by new evidence if we are to learn, adapt, and overcome the tests.

Or to put it more tersely: Rational learners’ belief revisions are mean-zero, so kill all the censors!

(1) A martingale is a generalization of the concept of random walk because the only requirement for a variable to be a martingale is that it have an expected (mean) change of zero. A random walk, at least the versions that I’m familiar with, also typically assumes that the probabilities are symmetric about that mean and indeed, frequently assumes that the probabilities are Normal. It also assumes that the probability distribution governing the dynamics is constant over time. A martingale allows the probability distribution to mutate all it wants, provided that one feature, the mean-zero change, always holds. For example, a martingale needn’t have a constant variance.
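To illustrate the footnote’s distinction, here is a hedged sketch (my own toy example, with made-up numbers): a process whose increments are asymmetric and whose scale mutates over time, so it’s no classic random walk, yet every increment has mean zero, so it’s still a martingale. A Monte Carlo check confirms the average terminal value stays near the starting point.

```python
import random

random.seed(0)

def skewed_step():
    """Mean-zero but asymmetric increment: +2 w.p. 1/3, -1 w.p. 2/3.
    (2 * 1/3 + (-1) * 2/3 = 0.)"""
    return 2 if random.random() < 1 / 3 else -1

def martingale_path(n):
    """A martingale that is not a classic random walk: the increment
    distribution is asymmetric and its scale changes over time."""
    x, path = 0.0, [0.0]
    for t in range(n):
        scale = 1 + (t % 5)          # the variance mutates with t
        x += scale * skewed_step()   # but each increment has mean zero
        path.append(x)
    return path

# Monte Carlo check: the average terminal value stays near the start (0),
# even though no single path looks anything like a symmetric random walk.
trials = [martingale_path(50)[-1] for _ in range(20000)]
avg = sum(trials) / len(trials)
assert abs(avg) < 1.0
```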

(2) It’s an immediate consequence of the Law of Iterated Expectations. Here’s another way of seeing it: If you’ve ever studied Statistics, you’ll remember the obvious fact that a rational forecast algorithm has zero-mean forecast errors. So if you’re rational, then the mean revision to your beliefs as you correct your forecast errors will be… See? Not that hard.
By the way, note that if Joe has data that Jill doesn’t, then Joe can predict how Jill’s beliefs will change when she gets the data, but Jill can’t predict that. She has to wait until she gets the data.
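The Joe-and-Jill asymmetry is easy to make concrete. A sketch under an assumed Beta-Bernoulli setup (my toy illustration, not anything in the post): Joe has already seen the next coin flip, so he can compute Jill’s revised belief exactly, while Jill’s own best forecast of her future belief is just her current belief.

```python
from fractions import Fraction

def belief(heads, tails):
    """Jill's posterior mean for the coin's heads-probability,
    starting from a uniform Beta(1, 1) prior."""
    return Fraction(1 + heads, 2 + heads + tails)

# Jill has seen 3 heads and 7 tails.
jill_now = belief(3, 7)

# Joe has also seen the NEXT flip (say it's heads), so he can
# predict Jill's revised belief exactly:
joe_predicts = belief(4, 7)

# Jill's own expected future belief (heads with probability jill_now)
# is just her current belief: she can't predict her own revision.
jill_expects = jill_now * belief(4, 7) + (1 - jill_now) * belief(3, 8)

assert jill_expects == jill_now       # Jill: mean-zero revision
assert joe_predicts != jill_now       # Joe: a definite predicted change
```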

(3) Invisible bonus footnote only for those who read the other two! If all this sounds vaguely familiar but you can’t quite place it, it might be because you once read about the Efficient Markets Hypothesis in Finance. Note the EMH assumes rational market participants. Its random walk implication is an example of the point I’m making in this post. I exposited this idea here.

Thoughts from a Random Sampling of Beyond Good and Evil

A pic of Nietzsche from before his moustache grew to the point where it had its own detectable gravitational field.

Since I’ve read Nietzsche’s Beyond Good and Evil a couple of times I allow myself the liberty of dipping back into it at random every now and then when I have some free minutes. Here are some reactions from a random sampling from June 2020:

1. Nietzsche as Frenchman

The opening of the Preface:

Supposing truth is a woman— what then? Are there not grounds for suspecting that all philosophers, insofar as they have been dogmatists, have been very inexpert about women? That the gruesome seriousness and clumsy obtrusiveness with which they have usually approached truth so far have been awkward and very improper methods for winning a woman’s heart? What is certain is that she has not allowed herself to be won— and today every kind of dogma is left standing dispirited and discouraged. If it is left standing at all!

My first reaction: Interesting opening. Unfortunately the book does not consistently live up to this opening, though it has its moments.

My second reaction: How very French! If you didn’t know N. was German, you’d probably guess that was written by a Frenchman. You can see why post-moderns like Nietzsche: For stylistic as well as substantive reasons.

Also, note the red pill knowledge here: The gruesome seriousness and clumsy obtrusiveness with which they have usually approached truth so far have been awkward and very improper methods for winning a woman’s heart. Indeed. The essence of seduction is obliqueness.

Speaking of French sensibilities, try this from Section 1:

The will to truth, which will still tempt us to many a hazardous venture, the famous truthfulness of which all philosophers have hitherto spoken with respect— what questions has this will to truth not laid before us! What strange, wicked, questionable questions! It is already a long story— yet it seems as if it had hardly begun. Is it any wonder if we at last grow distrustful, lose patience, and turn impatiently away? That we should finally learn from this Sphinx to ask questions too? Who is it really that puts questions to us here? What in us really wants “truth”?

Indeed we made a long halt at the question about the cause of this will—until at last we came to a complete stop before a yet more fundamental question. We asked about the value of this will. Suppose we want the truth: Why not rather untruth? And uncertainty? Even ignorance?

The problem of the value of truth came before us—or was it we who came before the problem? Which of us is the Oedipus here? Which the Sphinx? It is a rendezvous, it seems, of questions and question marks.

Not sure what the hell all that means, but it sure sounds profound if you don’t think about it too carefully. And oh so French.

This part again: The problem of the value of truth came before us—or was it we who came before the problem? That sounds like a low-brow person’s stereotypical image of “intellectual bullshit spewed by ivory tower eggheads.” Nietzsche’s not doing intellectuals any favors here.

Nietzsche employed his style… or did his style employ Nietzsche? Which is the artist here, and which the art? Or is this questionable question too dangerous to ask? Perhaps it will take harder men, sterner philosophers of the future, to look this question in the eye and (blah blah).

I had a friend who maintained that N. took himself way too seriously. I think not even N’s fiercest partisans can deny his guilt on that count. Here is a relatively mild sampling, from the end of the same section:

It almost seems to us as if the problem [of the desirability of truth] had never been put so far— as if we were the first to see it, fix it with our eyes, and risk it.

2. Nietzsche as Stereotypical “Nietzschean”

The first sentence of Section 29:

Independence is for the very few; it is a privilege of the strong.

That sounds like something that stereotypical Nietzsche would say. There actually is a lot of justice in Nietzsche’s popular reputation. What people like Otto from A Fish Called Wanda miss is that N’s interest in strength and independence was primarily about emotional and intellectual virtues.

3. Nietzsche as Post-Modernist

From Section 38:

The French Revolution… noble and enthusiastic spectators all over Europe have contemplated it from a distance and interpreted it according to their own indignations and enthusiasms for so long, and so passionately, that the text finally disappeared under the interpretation. So a noble posterity might once more misunderstand the whole of the past, and in that way alone make it tolerable to look at.

Or rather, isn’t this what has already happened? Have not we ourselves been that “noble posterity”? And isn’t now precisely the moment when, insofar as we comprehend this, it is all over?

In this passage, especially the emphasized part— the emphasis is in the original— one again can see why the post-moderns liked N.

4. Nietzsche as (Possible) Social Darwinist

From Section 62:

The hitherto paramount religions… are among the principal causes which have kept the type of “man” upon a lower level: they have preserved too much that which should have perished… when they had given comfort to the sufferers, courage to the oppressed and despairing, a staff and support to the helpless, and when they had allured from society into convents and spiritual penitentiaries the broken-hearted and distracted: what else had they to do in order to work systematically in that fashion, and with a good conscience, for the preservation of all the sick and suffering, which means, in deed and in truth, to work for the deterioration of the European race?

It’s not clear to me whether N. is actually talking about genetics here or purely cultural effects. But he is obviously saying that coddling the weak is bad.

5. Nietzsche as Intellectual and Would-Be Prophet

From Section 212:

The philosopher, being of necessity a man of tomorrow and the day after tomorrow, has ever found himself, and had to find himself, in contradiction to his today: his enemy was ever the ideal of today.

A fun perspective on intellectuality… but it has been the source of much mischief in the last 100 years. Think of Marxism, for example, with its hordes of genocidal disciples convinced they were leading humanity into the future.

Also, this quote exemplifies a hilarious theme in Nietzsche’s thought: he’s a real believer in progress! This from the guy who sees himself as a provocative Loki, running around questioning everything, kicking out the legs from the sanctified beliefs of his day by means of radical skepticism! In the passage I quoted from Section 1 he even goes so far as to question whether truth is valuable. Yet he believes it’s possible to anticipate the future’s major beliefs. (Or perhaps he saw himself as creating those beliefs.) The guy who sees himself as a sort of épater le bourgeois crusader, attacking with glee the cherished beliefs of his day as the horrified gentility look on— that guy couldn’t even see through the 19th century belief in Progress! My God, that is funny! That. Is. Hilarious.

N’s belief in progress is in fact a major theme of all his writing, manifest in his constant talk about new philosophers coming up. E.g. the end of Section 2: “One must await the advent of a new order of philosophers, such as will have other tastes and inclinations, the reverse of those hitherto prevalent—philosophers of the dangerous ‘maybe’ in every sense of the term. And to speak in all seriousness: I see such new philosophers beginning to appear.”

6. Nietzsche as Actual Prophet

Nietzsche, writing in the 1880s, looks forward to the twentieth century:

From Section 251:

I have never yet met a German who was favorably inclined to the Jews; and however decided the repudiation of actual anti-Semitism may be on the part of all prudent and political men, this prudence and policy is not perhaps directed against the nature of the sentiment itself, but only against its dangerous excess… That Germany has amply enough Jews, that the German stomach, the German blood, has difficulty (and will long have difficulty) in digesting even this quantum of “Jew” …is the unmistakable declaration and language of a general instinct, to which one must listen… “Let no more Jews come in!” …thus commands the instinct of a people…

A thinker who has the future of Europe at heart, in all his perspectives concerning the future, will figure the Jews, as the Russians, as above all the surest and likeliest factors in the great play and battle of forces.

Well! After World War II and the Cold War, that seems pretty damn prescient!

From Section 208:

I do not say this as one who desires it, in my heart I should rather prefer the contrary—I mean such an increase in the threatening attitude of Russia, that Europe would have to make up its mind to become equally threatening—namely, to acquire one will, by means of a new caste to rule over the Continent, a persistent, dreadful will of its own, that can set its aims thousands of years ahead; so that the long spun-out comedy of its petty-statism, and its dynastic as well as its democratic many-willed-ness, might finally be brought to a close. The time for petty politics is past; the next century will bring the struggle for the dominion of the world—the compulsion to large-scale politics.

This passage prefigures the Cold War and the European Union project!

One could also do a section on “Red Pill Nietzsche” or “Nietzsche on Women” or something, but that topic deserves its own blog post.

Straightening out the “Anthropic Principle”


This is my second and presumably last post on the anthropic principle. The first one is here.

The anthropic principle per Wikipedia is the “philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it.” In popular discourse, this notion often manifests something like this: “How is the outrageously unlikely fact of our existence explained? Well, if the universe weren’t consistent with human life, we wouldn’t be here to ask that question!” I have beaten this formulation with a big heavy stick before (see the foregoing link) and I’ve now figured out how to frame the issue in a different but equally clear way.

First note that probabilities from your point of view depend on how much you know. For example, there’s the probability that it will rain on any random day in Boston, given no other information. Then there’s the probability that it will rain today in Boston, given that it rained yesterday. These are generally going to be different probabilities.

Stat folks say “conditional on” instead of “given that.” E.g. where a normal person would say “the probability that it will rain today, given that it rained yesterday,” a Stat person would say “the probability that it will rain today, conditional on the fact that it rained yesterday.” And the probability that it will rain on any random day, given no other information, is called the unconditional probability.
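
In code, with made-up counts (nothing here is real Boston weather data), the distinction looks like this:

```python
# Toy numbers (invented): conditional vs. unconditional probability of rain.
days = 1000                 # total days tallied
rain_today = 300            # days it rained
rain_yesterday = 300        # days it rained the day before
rain_both = 150             # days it rained both today and the day before

p_uncond = rain_today / days          # unconditional: P(rain today)
p_cond = rain_both / rain_yesterday   # conditional: P(rain today | rain yesterday)
print(p_uncond, p_cond)               # 0.3 0.5
```

Same event, different probabilities, depending on what you condition on.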

On the “anthropic principle”: When people ask things like, “How is the outrageously unlikely fact of our existence explained?” they are interested in the unconditional probability that the universe has properties that can support human life (and that human life actually did evolve, but let’s just stick with the first part). Whereas the “anthropic principle” answers the completely trivial question, “What is the probability that the universe can support human life, conditional on the observation that human life actually exists?” The answer to that utterly trivial question is 100%, obviously.

Literally no human being ever, in the history of the world, meant to ask, “What’s the probability that the universe can support human life, given that it actually does support human life?” Yet that is the question that the so-called “anthropic principle” answers. Seriously, here’s the Wikipedia formulation again: The anthropic principle is the “philosophical consideration that any data we collect about the universe is filtered by the fact that, in order for it to be observable in the first place, it must be compatible with the conscious and sapient life that observes it.” That is literally saying, “The probability that the universe can support life, given that there is life to observe it, is 1.”
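
A toy simulation makes the triviality explicit. Everything here is invented for illustration: the universes, the 0.001 chance of supporting life, the seed.

```python
import random

# Draw a million hypothetical universes; a tiny fraction support life.
random.seed(42)
universes = [random.random() < 0.001 for _ in range(1_000_000)]  # True = supports life

# The interesting (unconditional) question: what fraction support life at all?
p_unconditional = sum(universes) / len(universes)   # ~0.001 by construction

# The trivial (conditional) question: among universes that have observers,
# what fraction support life? Only life-bearing universes have observers.
observed = [u for u in universes if u]
p_conditional = sum(observed) / len(observed)       # 1.0, always
print(p_unconditional, p_conditional)
```

The conditional answer is 1.0 no matter what the unconditional probability is, which is exactly why it answers nothing.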

So much for the “anthropic principle.”

So what’s the honest answer to the unconditional question? I don’t see how anyone could know, because to answer this we’d need to know the probability distribution from which the actual universe was drawn. We don’t know that. Of course one can put forth if-then propositions about it. A common one is, “Suppose all universes which are physically possible exist. (The many worlds hypothesis.) Then the probability of any particular universe existing (including ones with humans) is 1.” Sure. But we don’t know whether the many worlds hypothesis is true.

On the Possibility of Having True Beliefs

I vaguely recall a blog in which people were debating the role of self-confidence in pickup. (I forget whether it was a Game blog or a “rationalist” blog.) One person in the comments made the well-known point, a la Heartiste, that it’s better to be irrationally self-confident than rationally pessimistic, because with confidence you’ll do better with chicks.

Some doofus disputed this, saying it’s not good, because then “you’ll have beliefs that are demonstrably false.” Who cares, doofus? I’d rather have the false belief that I’ll score with 99% of chicks, if that self-confidence leads me to score with, say, 20% of them, than have the belief that I’ll only score with 1%, if that pessimism would be a self-fulfilling prophecy and lead me to score with 1% of them. Or even worse, what if you had the belief that you’d score with zero chicks, and that became a self-fulfilling prophecy?!

The second commenter missed that while having true beliefs is good, there are other things that are also good. Like sex, for example.

Nietzsche: “Knowledge for its own sake”—that is the last snare laid by morality: we are thereby completely entangled in morals once more. The opening sections of Beyond Good and Evil engage with this in more depth. Old Fred was an interesting guy.

I could just stop right there, but I want to springboard from here to make a broader point about beliefs and outcomes. Consider general situations in which beliefs affect reality. For example self-fulfilling prophecies (SFPs) are common in economics, e.g. if enough people think a recession is coming, that can make them freak out and behave in ways— cutting back on consumer spending, laying off workers— that bring on a recession.

You can also have the opposite of an SFP. Example: I hear tell that chicks don’t like wearing the same dress as other chicks. Let’s suppose that every chick who’s going to a certain party tonight believes that lots of other chicks will be wearing a certain off-the-rack dress. Since they hate wearing what other chicks are wearing, none of them wears that dress. So the belief prevents itself from coming true.

In general beliefs can affect reality in ways more complex than self-fulfilling prophecies or self-blocking prophecies.

Here is my main point: In situations in which beliefs matter, it’s not at all obvious that there always even exist equilibrium beliefs, that is, beliefs that are both true and game-theoretically stable.

In math-speak, the item of interest is the mapping between beliefs and reality and the question is, Does that mapping even have a fixed point? That is, are there any beliefs that are self-confirming? It’s far from obvious whether the answer is always Yes.

(In the pickup example, a fixed point would be any SFP about your success rate. E.g. if you think you’ll score with 60% of chicks and that level of confidence causes you to indeed score with 60% of chicks.)
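
Here’s a minimal sketch of the fixed-point question. The mapping below is invented, dress-style: a timid belief overshoots reality, a confident belief undercuts it, and the jump in the middle lets the mapping dodge the identity line entirely.

```python
# Hypothetical belief -> reality mapping (not from any data): discontinuous,
# so it can avoid ever crossing the line where f(b) == b.
def actual_probability(believed: float) -> float:
    return 0.75 if believed < 0.5 else 0.25

# Scan a fine grid of beliefs for a fixed point, i.e. a self-confirming belief.
grid = [i / 1000 for i in range(1001)]
fixed_points = [b for b in grid if abs(actual_probability(b) - b) < 1e-9]
print(fixed_points)  # [] -- no belief here confirms itself
```

With a continuous mapping from [0,1] to [0,1] a fixed point is guaranteed (that’s the intermediate value theorem at work), but nothing guarantees continuity when beliefs feed back into behavior.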

It’s possible that there are kinds of interactions in which any given belief is like the dress belief, in the sense that any particular belief will prevent itself from being true. In this kind of situation, hectoring someone because he has beliefs “which are demonstrably false” is even more idiotic, because it’s not even possible to have beliefs which won’t be demonstrably false!

(In theory an external observer— someone who’s not going to the party, in the dress example— could make correct predictions about the situation, but my focus here is the beliefs of people involved in the situation, e.g. you’re a guy going to a bar to try to pick up a chick and that’s what you’re forming beliefs about. By the way, even an external observer can’t form correct predictions without knowing all participants’ beliefs. That would require reading people’s minds, so no.)

On this claim that equilibrium beliefs may not be possible, people familiar with game theory may say “Ha! Nash’s Theorem, bitch!” But of course Nash’s Theorem makes certain assumptions about the environment, and uses a fixed-point theorem to prove the existence of equilibrium. If the mapping from beliefs to outcomes isn’t continuous, standard fixed-point theorems don’t apply, so that kind of proof doesn’t work.

(Note to nerdlingers: Nash’s Theorem deals with the continuity problem by letting agents’ moves be chosen probabilistically; this makes the relevant strategy sets continuous. But here, an agent’s “move” is his belief. The beliefs are about probabilities, but the beliefs are not themselves chosen probabilistically. There is a difference between (A) believing that a certain coin has a 0.5 probability of coming up heads, and (B) randomly switching between believing that it has a 100% probability and a 0% probability of coming up heads. If agents change their beliefs randomly, Nash’s Theorem might apply, but that’s not what we mean when we talk about beliefs, and certainly not rational beliefs. Changing your behavior with a random component can be rational, in adversarial games where you don’t want to be predictable, but changing your beliefs at random is not rational.)

The relevant mapping doesn’t even necessarily fill up the entire space. (Nerds: The mapping needn’t even be a surjective, i.e. “onto” mapping.) Indeed, there’s no man in the world who can be guaranteed to score with every woman in the world if he tries. Doesn’t matter if you believe you’ll have a 100% success rate; you won’t. So not all success probabilities are even in the range of the mapping.

“Damn it, Neuropoison; you’re really ass-raping my attention span here!” Okay sugar-tits, look at the pretty picture:

A mapping from believed probabilities (horizontal axis) to actual probabilities (vertical axis).

The diagram has the same variable on both axes, probabilities in [0,1]. Thus the entire admissible space is a 1 x 1 square, though I prefer to call it a “2-dimensional hypercube” as that helps me to score with intellectual chicks. Any point on the identity line is a fixed point, a self-confirming belief about the probability of some event. The problem is that the mapping from beliefs to reality pictured here is not continuous, which lets it jump over the identity line entirely, so there is no fixed point. That is, there are no self-confirming beliefs.

Just eye-balling it, it looks like if your belief is 100%, the reality is about 60%, and that’s as high as it gets. If this described your F-close rate with chicks, your best belief (if you could choose your beliefs purposefully) would be that you’d score with 100% of chicks you hit on, which would lead to a success rate of 60%. Obviously I just pulled these numbers out of my ass, but the point is, anyone who says, “Your beliefs are demonstrably false” should be given a wedgie for various reasons, among them that there are no beliefs which will self-confirm as demonstrably true anyway.
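
The same idea in code, with a made-up mapping chosen to echo the figure: no belief is self-confirming anywhere, and believing 100% yields the best achievable actual rate, about 60%.

```python
# Invented mapping echoing the figure: discontinuous, no fixed point,
# and monotone enough that maximum confidence gives the maximum actual rate.
def actual_rate(believed: float) -> float:
    return 0.55 if believed < 0.5 else 0.6 * believed

grid = [i / 100 for i in range(101)]
# No self-confirming belief: f(b) never equals b.
assert all(abs(actual_rate(b) - b) > 1e-9 for b in grid)

# If you could pick your belief, pick the one maximizing the *actual* rate.
best_belief = max(grid, key=actual_rate)
print(best_belief, actual_rate(best_belief))  # 1.0 0.6
```

So “your belief is demonstrably false” is true of every possible belief here, and the falsest one (100%) is also the most useful one.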

Now that I’ve finished writing this I’m wondering whether it’s mathematically robust. It seems to be, but did I miss something? Is there some way to do a Nash on this and guarantee that everything is actually continuous in the relevant way, thus guaranteeing at least one fixed point? If not, it’s unsettling, as it illustrates that there can be situations in which having correct beliefs is not even theoretically possible.

UPDATE a few days later: It turns out I was right. Nerdlinger explanation: The reason you can’t “do a Nash” on this is that Nash’s theorem applies to game theory, in which all players are best responding to other players’ moves. (From now on I’m going to write playahs because that amuses me.) That is, each playah’s move is his best option given the other playahs’ moves. And “best responding” means optimizing, which (with other features of Nash’s setup) allows the Theorem of the Maximum to be applied. And that theorem implies the continuity of best-response mappings, which in turn implies the existence of at least one fixed point. But here, there is no optimization/best responding. You believe some probability, then cause and effect kicks in and results in some actual probability. There’s no other playah who is choosing the actual probabilities to optimize some goal function. Therefore, nothing prevents the relevant mapping from being discontinuous, so there is not necessarily a fixed point.

Sweet, an opportunity to use my Epistemology tag!

Slate Star Codex:

“I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.

Suppose that one of my friends says she saw a coyote walk by her house. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.

Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.

Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it.”

No, you’re not wrong to do this; you’re using your beliefs for their proper purpose: making judgments about the world. The whole reason you have a belief that polar bears are extremely rare or non-existent in Berkeley is so that if you think you see a polar bear, you’ll look again more carefully, or that if your friend says “Polar bear!” you’ll consider that she might be playing a joke on you, etc.

The point of having beliefs is not just to have them. It’s to use them to guide yourself through the world. You use them to, e.g. make judgments about how likely it is that your friend is lying or playing a joke on you, etc.

Furthermore, it’s a known fact that people sometimes joke, lie, are mistaken, etc. What entitles you to dismiss that fact? If you believe your friend, you’re abandoning your well-founded belief that people sometimes say false things AND your well-founded belief that there are no polar bears in Berkeley. That’s a weird decision to make.

If you disbelieve your friend, you are retaining your well-founded beliefs that people sometimes say false things and that there are no polar bears in Berkeley. That seems sensible, given the monstrously large number of times humans are observed to say false things, and the large number of times you’ve failed to observe any polar bears in Berkeley.

If I said I saw gnomes dancing on my roof, what would you actually do? Slightly raise your probability that there are gnomes, or significantly raise your probability that I’m a jokester?
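
The polar bear logic is just Bayes’ rule. Here’s a sketch with invented numbers for the priors and for how often people report things falsely:

```python
# Bayes' rule: P(animal really there | friend reports seeing it).
# All numbers are made up for illustration.
def posterior(prior, p_report_if_true=0.9, p_report_if_false=0.01):
    num = p_report_if_true * prior
    return num / (num + p_report_if_false * (1 - prior))

print(round(posterior(0.10), 3))  # coyote, prior 10%: ~0.909 -- believe her
print(posterior(1e-7))            # polar bear, prior ~0: ~9e-06 -- she's joking
```

Same testimony, same reliability, wildly different conclusions, purely because of the priors. That’s not a bias; that’s the machinery working.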

Stop Talking About the Anthropic Principle, Idiots

There’s a thread at Slate Star Codex about the Fermi Paradox and the Great Filter:

Don’t Fear The Filter

In the comments, the “anthropic principle” comes up several times. The notion is well summarized by the Infogalactic article’s second sentence: “Some proponents of the anthropic principle reason that it explains why the universe has the age and the fundamental physical constants necessary to accommodate conscious life.” I’m getting really tired of this as an “explanation” of anything or an “answer” to any question. It’s a testament to the stupidity of so many people who consider themselves “wonks” or whatever. Very plainly, people:

Say you have an enemy who is an expert marksman. One day he shoots at you and misses. You might be interested in why he missed. Saying, “If he hadn’t missed I wouldn’t be here to ask the question” IS NOT A FUCKING ANSWER TO THE QUESTION! One wants to know WHY he missed. Was he on drugs? Was he sick? Nervous? Just bad luck, e.g. was he distracted at the wrong instant? That is, this is about cause and effect. The effect was him missing. We want to know what caused that effect. Your being alive to ask the question is not the cause. It can’t be, since it’s happening after the event it’s purported to explain, fucking duh! So unless you claim to have a time machine, STOP CLAIMING THIS IS AN ANSWER TO THE GODDAM QUESTION!

Robert Heinlein once said that a touchstone for how intellectually serious a person is, is what they think about astrology (he wrote that in the 1970s, back when astrology was a fad). I’m about ready to use mentioning the anthropic principle in the same way. If you mention it, unless you’re being ironic, YOU ARE AN IDIOT. YES, LITERALLY, AN ACTUAL IDIOT.

Suppose your kid asks you, “Hey Dad, how did you and Mom meet?” and you respond, “Well, if we hadn’t met, you wouldn’t be here to ask the question.” Seriously? Anyone who says with a straight face that this is a satisfactory answer – or any kind of answer at all – should be forced to wear underwear made out of steel wool and given a nuclear wedgie.

Cut it out, morons.

Update: I have a follow-up post here: https://neurotoxinweb.wordpress.com/2020/05/27/straightening-out-the-anthropic-principle/

Should You Trust “Experts”?

The sometimes perceptive, sometimes bizarre, but usually interesting Eliezer Yudkowsky has a new set of posts up at lesserwrong.com, excerpts from his book Inadequate Equilibria. (Lesserwrong.com is the successor site, established in 2017, to the rationalist site Less Wrong.) The first post of interest is

Inadequacy and Modesty.

Yudkowsky has two main topics: One is when to trust one’s own judgment, when one disagrees with experts, versus going with the experts’ opinion. The link above talks about informationally efficient and inefficient situations, where we can roughly define an informationally efficient situation as one where the experts are as right as possible given currently available information.

His second topic is the set of ways that a society can get stuck in a suboptimal equilibrium. That’s an enormous topic, which I’ll take up in a later post, but the connection is this: One example of such suboptimality is when there are insufficient incentives for the discovery and spread of information. The dissemination of information – who knows what – is obviously connected to the topic of expertise.

For me, there are two main items of interest in all this. One is random walks and their links to rational beliefs, and the other is the question of expertise itself.

A random walk is a kind of variable that often arises in informationally efficient situations. Yudkowsky uses the classic example of stock prices, though he doesn’t use the term “random walk.” He discusses the reasons to believe that stock markets are informationally efficient, which means that all relevant information known to market participants is already incorporated into stock prices. That in turn implies that you can’t profit by second-guessing the market, because the expected (i.e., mean) change in the price is zero. That in turn is the definition of a random walk. (If you’ve ever taken a Finance class this may sound familiar; it’s the Efficient Markets Hypothesis.)
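
A quick simulated sketch of that claim (made-up prices, not market data): when the expected price change is zero, even a plausible-looking trading rule averages out to roughly nothing.

```python
import random

# Simulate a random walk: each price change has mean zero.
random.seed(0)
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] + random.gauss(0, 1))  # mean-zero innovation

# A "momentum" rule: hold +1 after an up move, -1 after a down move.
# Against a random walk, its average profit per step hovers near zero.
per_step = [(prices[t + 1] - prices[t]) * (1 if prices[t] > prices[t - 1] else -1)
            for t in range(1, 5000)]
print(sum(per_step) / len(per_step))
```

Any deterministic rule you swap in for the momentum signal gives the same result in expectation, because tomorrow’s change is uncorrelated with everything you can observe today.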

See my post The Mind Cannot Foresee its Own Advance, which presents a generalization of this point.

There are good reasons to think that stock markets are pretty damn informationally efficient:

Empirically, it’s extremely difficult to beat the market – for periods of time long enough to not just be temporary luck – even with Cray big iron and a truckload of quant PhDs on staff. This empirical regularity is the real meat of the argument. But why is that the case? Several reasons, noted by Yudkowsky (but my wording here):

1. There are enormous incentives ($$$!!!) to uncover patterns that other market participants haven’t uncovered, so you can second-guess them and make big bucks,

2. There are lots of people involved in attempting to do this constantly, which tends to push securities prices in the correct direction,(*)

(*) If a security is underpriced you should buy it to profit when people eventually realize it’s underpriced and the price rises. But also, the very fact of your buying it constitutes an increase in demand for it, which tends to push the price up. A symmetric phenomenon happens when you bet on a price falling in the future.

3. There is fast feedback from empirical reality telling them whether their trading strategies are successful, so fast error correction,

4. You can bet either way in the stock markets. That means that whether everybody else is too optimistic or too pessimistic, there are bets you can place to profit when the current mispricing is eventually corrected.

At the link above Yudkowsky says, somewhat floridly,

In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely mistaken than the markets. That is what it feels like to look upon a civilization doing something adequately.

A very important point here is that stock markets are exceptional! As points 1 – 4 explain, there are reasons to think the experts – in this case, securities trading firms – are correct on average.

But in the generic case – consider the field of history, e.g. – none of those things is true. And the whole question of who is an expert basically scuttles this attempt to say “You should often defer to experts.”

For example, Marxist historians like to say that they’re experts on history – they’ve figured out its ineluctable laws! – but they’re actually a bunch of ideological fuckwits who can’t think themselves out of a paper bag. But they’re also professors at many a US university (all the universities, I think). They’ll tell you they’re experts, man. It says so right here on the label, “I’m an expert.”

So is that a reason to regard them as experts?

Plainly not. Well, but we don’t have to trust them on this; after all, they’re hired by universities! Universities must be unbiased; it says so right on the label! Right? No, actually universities (outside of STEM fields, and increasingly even there) are also largely a bunch of ideological fuckwits.

Or so I claim. Am I right or wrong? Before you respond, I claim that I’m an expert on this topic. Hmmmm, we’d need to assess their degree of ideological fuckwittery empirically, wouldn’t we?

Or are we going to count heads? Whose heads? The historians’? But they’re the ones whose very credibility is being questioned, so that would be going in a circle.

How about the average person? Well, the Marxists lose that one, since most people aren’t Marxists. More to the point, if we’re asking people other than soi-disant “experts,” we’ve already departed from the dictum “trust experts.”

(By the way, should we trust the pollsters who are doing the head counting? Are their polling methods unbiased? Are they competent polling experts? How do you know?)

Furthermore, there are no particular incentives to be correct in this area. Your fellow historians are largely leftists who will grant you tenure for saying, in various ways, “Socialism is nice. Capitalism is bad.” You are not betting your own money or your life by predicting that next time – in China… Cuba… Venezuela! – socialism is sure to work.

Marxism is not intended to factually describe reality anyway. It’s an ideology of power, a convenient pre-made language for people who want to seize power, and realize they need a veneer of justice to help them gain adherents, put their opposition on the defensive, etc. Or as Marx himself put it, more discreetly: “The philosophers have only interpreted the world, in various ways. The point, however, is to change it.”

Once it’s recognized that you can’t just trust anyone who says “I’m an expert” – anyone can say that – the whole epistemic question returns to the forefront. How are you going to judge who’s an expert? Hmmm, maybe we need some standards regarding the use of evidence. (Frequentist? Bayesian?) Also some procedures, like whether testable empirical claims are being made, whether they’re replicable and actually replicated, etc.

You see the problem: In order to judge who’s an expert, you have to be damn far along the trajectory of knowledge in the relevant subject, far enough along that you’d be a jack-leg expert yourself on the topic. You might as well just assess the evidence for yourself, ignoring the purported “experts.”

What I do, and I hope everyone does, is try to identify areas in which there don’t seem to be monetary or ideological incentives to be biased, and provisionally trust the experts in those areas, and ignore experts in areas where the incentives are bad. This is far from foolproof, of course. (I’m perpetually surprised by how politicized nutrition science is.) But given the impossibility of becoming experts ourselves in all topics, we use heuristics, imperfect though they are, to try to avoid getting scammed by fraudulent “experts.”

There are many reasons, not just malign intent, that this instinct to mistrust “experts” is sound. Yudkowsky mentions some and I will mention others in my next post.

Here’s a teaser: In my next post I will cite a paper by two game theorists on the topic of expertise. This paper was published in The Quarterly Journal of Economics, one of the top economics journals in the world. A major conclusion of the paper: In general, the equilibrium outcome is that experts will deliberately not inform you perfectly. Now here’s my question for anyone who says “trust experts”: Do you trust the conclusion of these two game theory experts?