Less Wrong/2007 Articles/Summaries

Some Claims Are Just Too Extraordinary

Publications in peer-reviewed scientific journals are more worthy of trust than what you detect with your own ears and eyes.

Outside the Laboratory

Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.

(alternate summary:)

Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, e probably doesn't understand the foundations of science very well.

Politics is the Mind-Killer

In your discussions, beware, for people have great difficulty being rational about current political issues. This is no surprise to someone familiar with evolutionary psychology.

(alternate summary:)

People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

Just Lose Hope Already

Admit it when the evidence goes against you, or else things can get a whole lot worse.

(alternate summary:)

Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years, and then lost it all when they overleveraged. Each profession has rules on how to be successful, which makes rationality seem unlikely to help greatly in life. Yet it seems that one of the greater skills is not being stupid, which rationality does help with.

You Are Not Hiring the Top 1%

The pool of interviewees is subject to a selection bias, skewed toward those who are not successful or happy in their current jobs.

(alternate summary:)

Software companies may see themselves as being very selective about who they hire. Out of 200 applicants, they may hire just one or two. However, that doesn't necessarily mean that they're hiring the top 1%. The programmers who weren't hired are likely to apply for jobs somewhere else. Overall, the worst programmers will apply for many more jobs over the course of their careers than the best. So programmers who are applying for a particular job are not representative of programmers as a whole. This phenomenon probably shows up in other places as well.

Policy Debates Should Not Appear One-Sided

Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.

(alternate summary:)

Robin Hanson proposed a "banned products shop" where things that the government ordinarily would ban are sold. Eliezer responded that this would probably cause at least one stupid and innocent person to die. He was surprised when people inferred from this remark that he was against Robin's idea. Policy questions are complex actions with many consequences. Thus they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of traditional libertarian ideas of tough-mindedness (people who do stupid things deserve their consequences).

Burch's Law

Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.

(alternate summary:)

Just because your ethics require an action doesn't mean the universe will exempt you from the consequences. Manufactured cars kill an estimated 1.2 million people per year worldwide. (Roughly 2% of the annual planetary death rate.) Not everyone who dies in an automobile accident is someone who decided to drive a car. The tally of casualties includes pedestrians. It includes minor children who had to be pushed screaming into the car on the way to school. And yet we still manufacture automobiles, because, well, we're in a hurry. The point is that the consequences don't change no matter how good the ethical justification sounds.

The Scales of Justice, the Notebook of Rationality

People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

(alternate summary:)

In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.

Blue or Green on Regulation?

Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.

(alternate summary:)

In a rationalist community, it should not be necessary to talk in the usual circumlocutions when talking about empirical predictions. We should know that people think of arguments as soldiers and recognize the behavior in ourselves. When you consider the actual truth values, you come to see that much of what the Greens said about the downside of the Blue policy was true - that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And most of what the Blues said about the downside of the Green policy was also true - that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.

(alternate summary:)

Burch's law isn't a soldier-argument for regulation; estimating the appropriate level of regulation in each particular case is a superior third option.

Superstimuli and the Collapse of Western Civilization

As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting and will likely continue to get worse.

(alternate summary:)

At least 3 people have died by playing online games non-stop. How is it that a game is so enticing that after 57 straight hours playing, a person would rather spend the next hour playing the game than sleeping or eating? A candy bar is a superstimulus: it corresponds overwhelmingly well to the EEA's markers of healthy food, concentrated sugar and fat. If people enjoy these things, the market will respond to provide as much of them as possible, even if other considerations make that undesirable.

Useless Medical Disclaimers

Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

(alternate summary:)

Eliezer complains about a disclaimer he had to sign before getting toe surgery because it didn't give numerical probabilities for the possible negative outcomes it described. He guesses this is because of people afflicted with "innumeracy" who would over-interpret small numbers. He proposes a tax wherein folks are asked whether they are innumerate and, if so, pay in proportion to their innumeracy. This tax is revealed in the comments to be a state-sponsored lottery.

Archimedes's Chronophone

Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...

(alternate summary:)

Imagine that Archimedes of Syracuse invented a device that allows you to talk to him. Imagine the possibilities for improving history! Unfortunately, the device will not literally transmit your words - it transmits cognitive strategies. If you advise giving women the vote, it comes out as advising finding a wise tyrant, the Greek ideal of political discourse. Under such restrictions, what do you say to Archimedes?

Chronophone Motivations

If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.

(alternate summary:)

The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.

Self-deception: Hypocrisy or Akrasia?

It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.

(alternate summary:)

If part of a person--for example, the verbal module--says it wants to become more rational, we can ally with that part even when weakness of will pulls the person's actions the other way; hypocrisy need not be assumed.

Tsuyoku Naritai! (I Want To Become Stronger)

Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.

Tsuyoku vs. the Egalitarian Instinct

There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.

"Statistical Bias"

There are two types of error: systematic error and random variance error. By repeating experiments you can average out and drive down the variance error.
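
A note on the arithmetic (standard statistics, not spelled out in the summary): if each independent measurement has variance sigma^2, the average of n measurements has variance sigma^2/n, so the random component of the error shrinks as n grows, while the systematic component stays fixed no matter how often you repeat the experiment.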

Useful Statistical Biases

If you know an estimator has high variance, you can intentionally introduce bias by choosing a simpler hypothesis, and thereby lower expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff". Keep in mind that while statistical bias might be useful, cognitive biases are not.
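
A minimal simulation sketch of that tradeoff (illustrative Python; the numbers and the shrinkage factor are arbitrary choices, not from the post):

    import random

    # Estimate a true mean from a few noisy samples, comparing an unbiased
    # estimator (the sample mean) against a deliberately biased one that
    # shrinks the sample mean toward zero. Shrinking adds bias but cuts
    # variance; with few samples the total error can come out lower.
    random.seed(0)
    true_mean, noise, n_samples, trials = 5.0, 10.0, 4, 100_000
    shrink = 0.7  # arbitrary illustrative shrinkage factor

    sq_err_unbiased = sq_err_shrunk = 0.0
    for _ in range(trials):
        sample_mean = sum(random.gauss(true_mean, noise)
                          for _ in range(n_samples)) / n_samples
        sq_err_unbiased += (sample_mean - true_mean) ** 2
        sq_err_shrunk += (shrink * sample_mean - true_mean) ** 2

    print(sq_err_unbiased / trials)  # ~ noise**2 / n_samples = 25
    print(sq_err_shrunk / trials)    # typically smaller: bias traded for variance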

The Error of Crowds

Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing square error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.

(alternate summary:)

Mean squared error drops when we average our predictions, but only because it uses a convex loss function. If you faced a concave loss function, you wouldn't isolate yourself from others, which casts doubt on the relevance of Jensen's inequality for rational communication. The process of sharing thoughts and arguing differences is not like taking averages.
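
A sketch of the loss-function point in Python (illustrative numbers, not from the post): squared error is convex, so by Jensen's inequality the average prediction never scores worse than the predictors' average score; square-root error is concave on errors of a single sign, so that guarantee disappears once every prediction errs on the same side, i.e. when bias dominates variance.

    # Jensen's inequality at work (illustrative numbers).
    truth = 0.0
    predictions = [1.0, 9.0]  # every prediction errs high: bias, not variance

    def mean(xs):
        return sum(xs) / len(xs)

    crowd = mean(predictions)  # the averaged "crowd" prediction: 5.0

    # Convex loss (squared error): the crowd never does worse than the
    # predictors' average score.
    print((crowd - truth) ** 2, mean([(p - truth) ** 2 for p in predictions]))
    # -> 25.0 vs 41.0: crowd wins

    # Concave loss (square-root error) with same-side errors: the ordering
    # reverses.
    print(abs(crowd - truth) ** 0.5,
          mean([abs(p - truth) ** 0.5 for p in predictions]))
    # -> ~2.24 vs 2.0: crowd loses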

The Majority Is Always Wrong

Anything worse than the majority opinion should get selected out, so the majority opinion is rarely strictly superior to existing alternatives.

Knowing About Biases Can Hurt People

Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire.

Debiasing as Non-Self-Destruction

Not being stupid seems like a more easily generalizable skill than breakthrough success. If debiasing is mostly about not being stupid, its benefits are hidden: lottery tickets not bought, blind alleys not followed, cults not joined. Hence, checking whether debiasing works is difficult, especially in the absence of organizations or systematized training.

"Inductive Bias"

Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.

Suggested Posts

This is an obsolete "meta" post.

Futuristic Predictions as Consumable Goods

The Friedman Unit is named after Thomas Friedman who called "the next six months" the critical period in Iraq eight times between 2003 and 2007. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not provide useful future advice.

Marginally Zero-Sum Efforts

After a point, labeling a problem as "important" is a commons problem. Rather than increasing the total resources devoted to important problems, resources are taken from other projects. Some grant proposals need to be written, but eventually this process becomes zero- or negative-sum on the margin.

Priors as Mathematical Objects

A prior is an assignment of a probability to every possible sequence of observations. In principle, the prior determines a probability for any event. Formally, the prior is a giant look-up table, which no Bayesian reasoner would literally implement. Nonetheless, the formal definition is sometimes convenient. For example, uncertainty about priors can be captured with a weighted sum of priors.
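
A toy illustration in Python (hypothetical numbers; the post itself gives no code): a prior over all length-3 binary observation sequences really can be written out as a finite look-up table, and uncertainty between two such priors collapses into a single prior by taking a weighted sum.

    from itertools import product

    # A "giant look-up table" prior over all length-3 sequences of binary
    # observations. Prior A expects 1s with probability 0.9; prior B with
    # probability 0.5.
    def iid_prior(p):
        return {seq: (p ** sum(seq)) * ((1 - p) ** (3 - sum(seq)))
                for seq in product((0, 1), repeat=3)}

    prior_a, prior_b = iid_prior(0.9), iid_prior(0.5)

    # 70%/30% uncertainty between the two priors, expressed as one prior.
    mixed = {seq: 0.7 * prior_a[seq] + 0.3 * prior_b[seq] for seq in prior_a}

    print(sum(mixed.values()))  # 1.0 -- still a valid probability assignment
    print(mixed[(1, 1, 1)])     # probability this prior assigns to seeing 1,1,1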

Lotteries: A Waste of Hope

Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.

New Improved Lottery

If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.

Your Rationality is My Business

As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.

Consolidated Nature of Morality Thread

This post was a place for debates about the nature of morality, so that subsequent posts touching tangentially on morality would not be overwhelmed.

Examples of questions to be discussed here included: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary, while others do not? Do children believe that God can change what is moral? Is there a direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements? If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences?

Feeling Rational

Strong emotions can be rational. A rational belief that something good happened leads to rational happiness. But your emotions ought not to change your beliefs about events that do not depend causally on your emotions.

Universal Fire

You can't change just one thing in the world and expect the rest to continue working as before.

Universal Law

In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.

Think Like Reality

"Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."

Beware the Unsurprised

If reality consistently surprises you, then your model needs revision. But beware those who act unsurprised by surprising data. Maybe their model was too vague to be contradicted. Maybe they haven't emotionally grasped the implications of the data. Or maybe they are trying to appear poised in front of others. Respond to surprise by revising your model, not by suppressing your surprise.

The Third Alternative

People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. How? You have to search for one. Beware the temptation not to search or to search perfunctorily. Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?"

Third Alternatives for Afterlife-ism

One source of hope against death is Afterlife-ism. Some say that this justifies it as a Noble Lie. But there are better (because more plausible) Third Alternatives, including nanotech, actuarial escape velocity, cryonics, and the Singularity. If supplying hope were the real goal of the Noble Lie, advocates would prefer these alternatives. But the real goal is to excuse a fixed belief from criticism, not to supply hope.

Scope Insensitivity

The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.

One Life Against the World

Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

Risk-Free Bonds Aren't

There are no risk-free investments. Even US treasury bills would fail under a number of plausible "black swan" scenarios. Nassim Taleb's own investment strategy doesn't seem to take sufficient account of such possibilities. Risk management is always a good idea.

Correspondence Bias

Correspondence bias, also known as the fundamental attribution error, is the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

(alternate summary:)

Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.

Are Your Enemies Innately Evil?

People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.

Open Thread

This obsolete post was a place for free-form comments related to the project of the Overcoming Bias blog.

Two More Things to Unlearn from School

School encourages two bad habits of thought: (1) equating "knowledge" with the ability to parrot back answers that the teacher expects; and (2) assuming that authorities are perfectly reliable. The first happens because students don't have enough time to digest what they learn. The second happens especially in fields like physics because students are so often just handed the right answer.

Making Beliefs Pay Rent (in Anticipated Experiences)

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"
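
To make the bowling-ball example concrete (hypothetical numbers, not from the post): believing "gravity is 9.8 m/s^2" and taking, say, a 45-meter building, you should anticipate hearing the crash about t = sqrt(2 x 45 / 9.8), or roughly 3 seconds, after the drop - a definite anticipation your watch can confirm or refute.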

Belief in Belief

Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.

Bayesian Judo

You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.

Professing and Cheering

A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.

Belief as Attire

When you've stopped anticipating-as-if something is true, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

Religion's Claim to be Non-Disprovable

Religions used to claim authority in all domains, including biology, cosmology, and history. Only recently have religions attempted to be non-disprovable by confining themselves to ethical claims. But the ethical claims in scripture ought to be even more obviously wrong than the other claims, making the idea of non-overlapping magisteria a Big Lie.

The Importance of Saying "Oops"

When your theory is proved wrong, just scream "OOPS!" and admit your mistake fully. Don't just admit local errors. Don't try to protect your pride by conceding the absolute minimal patch of ground. Making small concessions means that you will make only small improvements. It is far better to make big improvements quickly. This is a lesson of Bayescraft that Traditional Rationality fails to teach.

Focus Your Uncertainty

If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.

The Proper Use of Doubt

Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility and recognition of your place in the hierarchy. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to confirm the reason for doubting, or to show the doubt to be baseless. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.

The Virtue of Narrowness

One way to fight cached patterns of thought is to focus on precise concepts.

(alternate summary:)

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

You Can Face Reality

This post quotes a poem by Eugene Gendlin, which reads, "What is true is already so. / Owning up to it doesn't make it worse. / Not being open about it doesn't make it go away. / And because it's true, it is what is there to be interacted with. / Anything untrue isn't there to be lived. / People can stand what is true, / for they are already enduring it."

The Apocalypse Bet

If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. Yudkowsky later realized that this wouldn't work.

Your Strength as a Rationalist

A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

I Defy the Data!

If an experiment contradicts a theory, we are expected to throw out the theory, or else break the rules of Science. But this may not be the best inference. If the theory is solid, it's more likely that an experiment got something wrong than that all the confirmatory data for the theory was wrong. In that case, you should be ready to "defy the data", rejecting the experiment without coming up with a more specific problem with it; the scientific community should tolerate such defiances without social penalty, and reward those who correctly recognized the error if it fails to replicate. In no case should you try to rationalize how the theory really predicted the data after all.

Absence of Evidence Is Evidence of Absence

Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
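
The rule is one line of algebra away from the law of total probability:

    P(H) = P(H|E)P(E) + P(H|~E)P(~E)

The prior P(H) is a weighted average of P(H|E) and P(H|~E), so if observing E would push your confidence above the prior, then failing to observe E must pull it below.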

Conservation of Expected Evidence

If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
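
In symbols, this is the same law of total probability as in the previous entry:

    P(H) = P(H|E)P(E) + P(H|~E)P(~E)

The right-hand side is exactly the expectation of your posterior over the possible observations, so any anticipated update in one direction must be balanced by a counter-update in the other: a high probability of a small shift one way against a low probability of a large shift the other way.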

Update Yourself Incrementally

Many people think that you must abandon a belief if you admit any counterevidence. Instead, change your belief by small increments. Acknowledge small pieces of counterevidence by shifting your belief down a little. Supporting evidence will follow if your belief is true. "Won't you lose debates if you concede any counterarguments?" Rationality is not for winning debates; it is for deciding which side to join.

One Argument Against An Army

It is tempting to weigh each counterargument by itself against all supporting arguments. No single counterargument can overwhelm all the supporting arguments, so you easily conclude that your theory was right. Indeed, as you win this kind of battle over and over again, you feel ever more confident in your theory. But, in fact, you are just rehearsing already-known evidence in favor of your view.

Hindsight bias

Hindsight bias makes us overestimate how well our model could have predicted a known outcome. We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are.

Hindsight Devalues Science

Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.

Scientific Evidence, Legal Evidence, Rational Evidence

For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.

Is Molecular Nanotechnology "Scientific"?

The belief that nanotechnology is possible is based on qualitative reasoning from scientific knowledge. But such a belief is merely rational. It will not be scientific until someone constructs a nanofactory. Yet if you claim that nanomachines are impossible because they have never been seen before, you are being irrational. To think that everything that is not science is pseudoscience is a severe mistake.

Fake Explanations

People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.

Guessing the Teacher's Password

In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.

Science as Attire

You don't understand the phrase "because of evolution" unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the "scientific" tribe. Similarly, it isn't scientific to reject strongly superhuman AI only because it sounds like science fiction. A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don't constrain anticipation, they are probably just passwords or attire.

Fake Causality

It is very easy for a human being to think that a theory predicts a phenomenon, when in fact the theory was fitted to the phenomenon. Properly designed reasoning systems (general AIs) would be able to use probability theory to avoid this mistake, but humans have to write down a prediction in advance in order to ensure that our reasoning about causality is correct.

Semantic Stopsigns

There are certain words and phrases that act as "stopsigns" to thinking. They aren't actually explanations, and they don't help to resolve the actual issue at hand, but they act as a marker saying "don't ask any questions."

Mysterious Answers to Mysterious Questions

The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, compared to nonliving matter, were due to an "elan vital". This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.

The Futility of Emergence

The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions.

Positive Bias: Look Into the Dark

Positive bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

Say Not "Complexity"

The concept of complexity isn't meaningless, but too often people assume that adding complexity to a system they don't understand will improve it. If you don't know how to solve a problem, adding complexity won't help; better to say "I have no idea" than to say "complexity" and think you've reached an answer.

My Wild and Reckless Youth

Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.

Failing to Learn from History

There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it. It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.

Making History Available

It's easy not to take the lessons of history seriously; our brains aren't well-equipped to translate dry facts into experiences. But imagine living through the whole of human history - imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again - and you'll be less shocked by the strangeness of the next era.

Stranger Than History

Imagine trying to explain quantum physics, the internet, or any other aspect of modern society to people from 1900. Technology and culture change so quickly that our civilization would be unrecognizable to people 100 years ago; what will the world look like 100 years from now?

Explain/Worship/Ignore?

When you encounter something you don't understand, you have three options: to seek an explanation, knowing that that explanation will itself require an explanation; to avoid thinking about the mystery at all; or to embrace the mysteriousness of the world and worship your confusion.

"Science" as Curiosity-Stopper

Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.

Absurdity Heuristic, Absurdity Bias

Under some circumstances, rejecting arguments on the basis of absurdity is reasonable. The absurdity heuristic can allow you to identify hypotheses that aren't worth your time. However, detailed knowledge of the underlying laws should allow you to override the absurdity heuristic. Objects fall, but helium balloons rise. The future has been consistently absurd and will likely go on being that way. When the absurdity heuristic is extended to rule out crazy-sounding things with a basis in fact, it becomes absurdity bias.

Availability

Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model and a biased estimate.

Why is the Future So Absurd?

New technologies and social changes have consistently happened at a rate that would seem absurd and impossible to people only a few decades before they happen. Hindsight bias causes us to see the past as obvious and as a series of changes towards the "normalcy" of the present; availability biases make it hard for us to imagine changes greater than those we've already encountered, or the effects of multiple changes. The future will be stranger than we think.

Anchoring and Adjustment

Exposure to numbers affects guesses on estimation problems by anchoring your mind to a given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability.

The Crackpot Offer

If you make a mistake, don't excuse it or pat yourself on the back for thinking originally; acknowledge you made a mistake and move on. If you become invested in your own mistakes, you'll stay stuck on bad ideas.

Radical Honesty

The Radical Honesty movement requires participants to speak the truth, always, whatever they think. The more competent you grow at avoiding self-deceit, the more of a challenge this would be - but it's an interesting thing to imagine, and perhaps strive for.

We Don't Really Want Your Participation

Advocates for the Singularity sometimes call for outreach to artists or poets; we should move away from thinking of people as if their profession is the only thing they can contribute to humanity. Being human is what gives us a stake in the future, not being poets or mathematicians.

Applause Lights

Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.

Rationality and the English Language

George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.

Human Evil and Muddled Thinking

It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.

Doublethink (Choosing to be Biased)

George Orwell wrote about what he called "doublethink", in which a person is able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.

Why I'm Blooking

Eliezer explains that he is overcoming writer's block by writing one Less Wrong post a day.

Planning Fallacy

We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.

(alternate summary:)

Planning Fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of simple cached ways in which you move about accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the answer comes out systematically wrong.

Kahneman's Planning Anecdote

Nobel Laureate Daniel Kahneman recounts an incident where the inside view and the outside view of the time it would take to complete a project of his were widely different.

Conjunction Fallacy

Elementary probability theory tells us that the probability of one thing (written P(A)) is necessarily greater than or equal to the probability of the conjunction of that thing and another thing (written P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
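
The rule itself is a single step of probability algebra: P(A&B) = P(A)P(B|A), and since P(B|A) cannot exceed 1, P(A&B) can never exceed P(A). Adding a detail can only leave the probability unchanged or lower it.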

Conjunction Controversy (Or, How They Nail It Down)

When it seems like an experiment that's been cited does not provide enough support for the interpretation given, remember that scientists are generally pretty smart, especially if the experiment was done a long time ago or is described as "classic" or "famous". In that case, you should consider the possibility that there is more evidence that you haven't seen. Instead of saying "This experiment could also be interpreted in this way", ask "How did they distinguish this interpretation from ________________?"

Burdensome Details

If you want to avoid the conjunction fallacy, you must try to feel a stronger emotional impact from Occam's Razor. Each additional detail added to a claim must feel as though it is driving the probability of the claim down towards zero.

What is Evidence?

Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted.

The Lens That Sees Its Flaws

Part of what makes humans different from other animals is our own ability to reason about our reasoning. Mice do not think about the cognitive algorithms that generate their belief that the cat is hunting them. Our ability to think about what sort of thought processes would lead to correct beliefs is what gave rise to Science. This ability makes our admittedly flawed minds much more powerful.

How Much Evidence Does It Take?

If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.
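
A minimal sketch of the bookkeeping in Python (illustrative numbers, not from the post): each independent piece of evidence multiplies your odds by its likelihood ratio, so singling one hypothesis out of a million takes about 20 bits of evidence, because 2^20 is roughly a million.

    import math

    # Start at million-to-one odds against the hypothesis and accumulate
    # independent observations, each twice as likely if the hypothesis is
    # true (a 2:1 likelihood ratio, i.e. 1 bit of evidence apiece).
    prior_odds = 1 / 1_000_000
    likelihood_ratio = 2.0

    print(math.log2(1_000_000))  # ~19.93 bits needed just to reach even odds

    posterior_odds = prior_odds * likelihood_ratio ** 20
    print(posterior_odds)  # ~1.05, i.e. roughly 1:1 after 20 such observations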

Einstein's Arrogance

Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for [the experimenter]. The theory is correct." While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.

Occam's Razor

To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, the Maxwell program will be far shorter. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.
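
In the Solomonoff formalism this summary gestures at, a hypothesis expressible as a program L bits long receives prior probability proportional to 2^-L, so each additional bit of program complexity halves the hypothesis's prior weight.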

9/26 is Petrov Day

September 26th is Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

How to Convince Me That 2 + 2 = 3

The way to convince Eliezer that 2+2=3 is the same as the way to convince him of any proposition: give him enough evidence. If all available evidence - social, mental, and physical - starts indicating that 2+2=3, then you will shortly convince Eliezer that 2+2=3 and that something is wrong with his past or his recollection of the past.

The Bottom Line

If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong.

What Evidence Filtered Evidence?

Someone tells you only the evidence that they want you to hear. Are you helpless? Forced to update your beliefs until you reach their position? No, you also have to take into account what they could have told you but didn't.

Rationalization

Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization".

Recommended Rationalist Reading

Book recommendations by Eliezer and readers.

A Rational Argument

You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you.

We Change Our Minds Less Often Than We Think

We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.

Avoiding Your Belief's Real Weak Points

When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most.

The Meditation on Curiosity

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

Singlethink

The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.

No One Can Exempt You From Rationality's Laws

Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.

A Priori

The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts are existent in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason.

Priming and Contamination

Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. See also Never Leave Your Room by Yvain, and Cached Selves by Salamon and Rayhawk.

(alternate summary:)

Contamination by Priming is a problem that relates to the process of implicitly introducing the facts in the attended data set. When you are primed with a concept, the facts related to that concept come to mind easier. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.

Do We Believe Everything We're Told?

Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.

Cached Thoughts

Brains are slow. They need to cache as much as they can. They store answers to questions, so that no new thought is required to answer. Answers copied from others can end up in your head without you ever examining them closely. This makes you say things that you'd never believe if you thought them through. So examine your cached thoughts! Are they true?

The "Outside the Box" Box

When asked to think creatively, there's always a cached thought to fall into. To be truly creative you must avoid the cached thought and think something actually new, not something that you heard was the latest innovation. Striving for novelty for novelty's sake is futile; instead you must aim to be optimal. People who strive to discover truth or to invent good designs may in the course of time attain creativity.

Original Seeing

One way to fight cached patterns of thought is to focus on precise concepts.

How to Seem (and Be) Deep

Just find ways of violating cached expectations.

(alternate summary:)

To seem deep, find coherent but unusual beliefs, and concentrate on explaining them well. To be deep, you actually have to think for yourself.

The Logical Fallacy of Generalization from Fictional Evidence

The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions from statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.

Hold Off On Proposing Solutions

Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. The data set you think about becomes tilted toward those premature conclusions, which are likely to be wrong, and less representative of the phenomenon you are trying to model than the initial facts you started from.

"Can't Say No" Spending

Medical spending and aid to Africa have no net effect (or worse). But it's heartbreaking to just say no...

Congratulations to Paris Hilton

Eliezer offers his congratulations to Paris Hilton, who he believed had signed up for cryonics. (It turns out that she hadn't.)

Pascal's Mugging: Tiny Probabilities of Vast Utilities

An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?

Illusion of Transparency: Why No One Understands You

Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.

Self-Anchoring

Related to contamination and the illusion of transparency, we "anchor" on our own experience and under-adjust when trying to understand others.

Expecting Short Inferential Distances

Humans evolved in an environment where we almost never needed to explain long inferential chains of reasoning. This fact may account for the difficulty many people have when trying to explain complicated subjects. We only explain the last step of the argument, and not every step that must be taken from our listener's premises.

Explainers Shoot High. Aim Low!

Humans greatly overestimate how much sense our explanations make. In order to explain something adequately, pretend that you're trying to explain it to someone much less informed than your target audience.

Double Illusion of Transparency

In addition to the difficulties encountered in trying to explain something so that your audience understands it, there are other problems associated in learning whether or not you have explained something properly. If you read your intended meaning into whatever your listener says in response, you may think that e understands a concept, when in fact e is simply rephrasing whatever it was you actually said.

No One Knows What Science Doesn't Know

In the modern world, unlike our ancestral environment, it is not possible for one person to know more than a tiny fraction of the world's scientific knowledge. Just because you don't understand something, you should not conclude that not one of the six billion other people on the planet understands it.

Why Are Individual IQ Differences OK?

People act as though it is perfectly fine and normal for individuals to have differing levels of intelligence, but that it is absolutely horrible for one racial group to be more intelligent than another. Why should the two be considered any differently?

Bay Area Bayesians Unite!

An obsolete post in which Eliezer queried Overcoming Bias readers to find out if they would be interested in holding in-person meetings.

Motivated Stopping and Motivated Continuation

When the evidence we've seen points towards a conclusion that we like or dislike, there is a temptation to stop the search for evidence prematurely, or to insist that more evidence is needed.

Torture vs. Dust Specks

If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?

A Case Study of Motivated Continuation

When you find yourself considering a problem in which all visible options are uncomfortable, making a choice is difficult. Grit your teeth and choose anyway.

A Terrifying Halloween Costume

The day after Halloween, Eliezer made a joke related to Torture vs. Dust Specks, which he had posted just a few days earlier.

Fake Justification

We should be suspicious of our tendency to justify our decisions with arguments that did not actually factor into making said decisions. Whatever process you actually use to make your decisions is what determines your effectiveness as a rationalist.

An Alien God

Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot god burbling chaotically at the center of everything.

The Wonder of Evolution

...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.

(alternate summary:)

The wonder of the first replicator was not how amazingly well it replicated, but that a first replicator could arise, at all, by pure accident, in the primordial seas of Earth. That first replicator would undoubtedly be devoured in an instant by a sophisticated modern bacterium. Likewise, the wonder of evolution itself is not how well it works, but that a brainless, accidentally occurring optimization process can work at all. If you praise evolution for being such a wonderfully intelligent Creator, you're entirely missing the wonderful thing about it.

Evolutions Are Stupid (But Work Anyway)

Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

(alternate summary:)

Modern evolutionary theory gives us a definite picture of evolution's capabilities. If you praise evolution one millimeter higher than this, you are not scoring points against creationists, you are just being factually inaccurate. In particular, we can calculate the probability and time for advantageous genes to rise to fixation. For example, a mutation conferring a 3% advantage would have only a 6% probability of surviving, and if it did survive, would take about 875 generations (on average) to rise to fixation in a population of 500,000.
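
As a sanity check on those figures, here is a minimal Python sketch using two textbook population-genetics approximations (Haldane's rule that a new beneficial mutation fixes with probability about 2s, and a time to fixation on the order of (2/s) ln N generations). These formulas reproduce the numbers above, but they are a gloss on standard theory, not code from the post:

    import math

    s = 0.03       # selective advantage of the new mutation
    N = 500_000    # population size

    # Haldane's approximation: a new beneficial mutation escapes
    # random drift with probability about 2s.
    p_fix = 2 * s                    # 0.06, i.e. a 6% chance

    # Rough time to rise to fixation, conditional on surviving drift:
    # on the order of (2 / s) * ln(N) generations.
    t_fix = (2 / s) * math.log(N)    # about 875 generations

    print(f"P(fixation) ~ {p_fix:.0%}, fixation time ~ {t_fix:.0f} generations")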

Natural Selection's Speed Limit and Complexity Bound

Tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Human Genome Project discovered only 25,000 genes in the human genome - but this particular math may not be the correct argument.

Beware of Stephen J. Gould

A lot of people have gotten their grasp of evolutionary theory from Stephen J. Gould, a man who committed the moral equivalent of fraud in a way that is difficult to explain. At any rate, he severely misrepresented what evolutionary biologists believe, in the course of pretending to attack certain beliefs. One needs to clear from memory, as much as possible, not just everything that Gould positively stated but everything he seemed to imply the mainstream theory believed.

The Tragedy of Group Selectionism

A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.

(alternate summary:)

Describes a key case where some pre-1960s evolutionary biologists went wrong by anthropomorphizing evolution - in particular, Wynne-Edwards, Allee, and Brereton, among others, believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat. Since evolution does not usually do this sort of thing, their rationale was group selection - populations that restrained their breeding would survive better. But group selection is extremely difficult to make work mathematically, and an experiment under conditions extreme enough to actually permit group selection had rather different results: the group-selected insects took to cannibalizing eggs and larvae, rather than restraining their own breeding.

Fake Selfishness

Many people who espouse a philosophy of selfishness aren't really selfish. If they were, there would be far more productive things to do with their time than espouse selfishness. Instead, individuals who proclaim themselves selfish do whatever it is they actually want, including altruism, and can always find some sort of self-interest rationalization for their behavior.

Fake Morality

Many people provide fake reasons for their moral behavior. Religious people claim that the only reason people don't murder each other is God. Professed egoists provide altruistic justifications for selfishness; altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do.

Fake Optimization Criteria

Why study evolution? For one thing, it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien criterion like inclusive genetic fitness. Humans, who are used to persuading other humans to do things their way, feel that such a criterion ought to require predators to restrain their breeding and live in harmony with prey; the true result is something that humans find less aesthetic.

Adaptation-Executers, not Fitness-Maximizers

A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that humans fed a diet too high in calories and too low in micronutrients would begin to find lettuce delicious and cheeseburgers distasteful. But it is better to regard taste buds as executing an adaptation: they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.

Evolutionary Psychology

The human brain, and every ability for thought and emotion in it, are all adaptations selected for by evolution. Humans have the ability to feel angry for the same reason that birds have wings: ancient humans and birds with those adaptations had more kids. But, it is easy to forget that there is a distinction between the reason humans have the ability to feel anger, and the reason why a particular person was angry at a particular thing. Human brains are adaptation executors, not fitness maximizers.

Protein Reinforcement and DNA Consequentialism

Brains made of protein can learn much faster than DNA can evolve, yet DNA captures something that brains usually cannot: adaptation to actual long-term consequences. The evolutionary hypothesis - that a behavior spreads because it promotes reproductive fitness - is so complex that no species other than humans is capable of explicitly thinking it, and yet DNA implicitly embodies it. This is because DNA learns through the actual consequences, while protein brains can simply imagine the consequences.

Thou Art Godshatter

Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.

(alternate summary:)

Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.

Terminal Values and Instrumental Values

Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.
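
A minimal sketch of the distinction in Python - the actions, world states, and numbers here are invented for illustration, not taken from the post. Terminal values score world states directly; an action's instrumental value is the expected worth of the world states it leads to:

    # Terminal values: worth assigned directly to world states.
    terminal_value = {"cancer_cured": 100.0, "status_quo": 0.0}

    # A hypothetical world model: each action leads to world states
    # with some probability.
    world_model = {
        "fund_research": {"cancer_cured": 0.3, "status_quo": 0.7},
        "do_nothing":    {"cancer_cured": 0.0, "status_quo": 1.0},
    }

    def instrumental_value(action):
        """Expected worth of the world states this action leads to."""
        return sum(p * terminal_value[state]
                   for state, p in world_model[action].items())

    best = max(world_model, key=instrumental_value)
    print(best, instrumental_value(best))   # fund_research 30.0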

Evolving to Extinction

Contrary to the naive view that evolution works for the good of a species, natural selection merely favors genes that outreproduce their alternative alleles within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.

(alternate summary:)

On how evolution could be responsible for the bystander effect.

(alternate summary:)

It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.

No Evolutions for Corporations or Nanodevices

Price's Equation describes quantitatively how the change in the average value of a trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations, with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked wherever "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.
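
In symbols, for the simple case with no transmission bias, Price's Equation reads Δz̄ = Cov(w, z) / w̄, where z is the trait and w is fitness. A small Python check on an invented toy population, assuming offspring inherit the parental trait exactly:

    # Toy population: trait value z_i and fitness w_i (offspring count).
    z = [1.0, 2.0, 3.0, 4.0]
    w = [1.0, 1.0, 2.0, 4.0]

    n = len(z)
    z_bar = sum(z) / n
    w_bar = sum(w) / n
    cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n

    # Mean trait in the next generation, weighting parents by fitness.
    z_bar_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

    print(z_bar_next - z_bar)   # observed change: 0.625
    print(cov_wz / w_bar)       # Price's prediction: 0.625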

The Simple Math of Everything

It is enormously advantageous to know the basic mathematical equations underlying a field. Understanding a few simple equations of evolutionary biology, knowing how to use Bayes' Rule, and understanding the wave equation for sound in air are not enormously difficult challenges - but if you know them, your capabilities are greatly enhanced.
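
As one example, here is Bayes' Rule applied to a stock diagnostic-test problem; the numbers (1% prevalence, 80% true-positive rate, 9.6% false-positive rate) are a common illustration, not figures from the post:

    prior = 0.01                 # P(condition)
    p_pos_given_cond = 0.80      # P(positive test | condition)
    p_pos_given_healthy = 0.096  # P(positive test | no condition)

    # Bayes' Rule: P(condition | positive) =
    #   P(positive | condition) * P(condition) / P(positive)
    p_pos = prior * p_pos_given_cond + (1 - prior) * p_pos_given_healthy
    posterior = prior * p_pos_given_cond / p_pos
    print(round(posterior, 3))   # 0.078 - far lower than most people guess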

Conjuring An Evolution To Serve You

If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs. Sounds logical, right? But this selection may actually favor the most dominant hen, the one that pecked its way to the top of the pecking order at the expense of other hens. Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death. Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning the awesome power of evolution to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...

Artificial Addition

If you imagine a world where people are stuck on the "artificial addition" (i.e. machine calculator) problem, the way people currently are stuck on artificial intelligence, and you saw them trying the same popular approaches taken today toward AI, it would become clear how silly they are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. -- because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.

Truly Part Of You

Any time you believe you've learned something, you should ask yourself, "Could I re-generate this knowledge if it were somehow deleted from my mind, and how would I do so?" If the supposed knowledge is just empty buzzwords, you will recognize that you can't, and therefore that you haven't learned anything. But if it's an actual model of reality, this method will reinforce how the knowledge is entangled with the rest of the world, enabling you to apply it to other domains, and know when you need to update those beliefs. It will have become "truly part of you", growing and changing with the rest of your knowledge.

Not for the Sake of Happiness (Alone)

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.

Leaky Generalizations

The words and statements that we use are inherently "leaky": they do not precisely convey absolute and perfect information. Most humans have ten fingers, but knowing that someone is human does not let you conclude (with probability 1) that they have ten fingers. The same holds for planning and ethical advice.

The Hidden Complexity of Wishes

There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.

Lost Purposes

On noticing when you're still doing something that has become disconnected from its original purpose.

(alternate summary:)

It is possible for the various steps in a complex plan to become valued in and of themselves, rather than as steps to achieve some desired goal. It is especially easy if the plan is being executed by a complex organization, where each group or individual in the organization is only evaluated by whether or not they carry out their assigned step. When this process is carried to its extreme, we get Soviet shoe factories manufacturing tiny shoes to increase their production quotas, and the No Child Left Behind Act.

Purpose and Pragmatism

It is easier to get trapped in a mistake of cognition if you have no practical purpose for your thoughts. Although pragmatic usefulness is not the same thing as truth, there is a deep connection between the two.

The Affect Heuristic

Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.

Evaluability (And Cheap Holiday Shopping)

It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.

(alternate summary:)

Is there a way to exploit human biases to give the impression of largess with cheap gifts? Yes. Humans compare the value/price of an object to other similar objects. A $399 Eee PC is cheap (because other laptops are more expensive), yet a $399 PS3 is expensive (because the alternatives are less expensive). To give the impression of an expensive gift, choose a cheap class of item (say, a candle) and buy the most expensive one around.

Unbounded Scales, Huge Jury Awards, & Futurism

Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.

The Halo Effect

Positive qualities seem to correlate with each other, whether or not they actually do.

Superhero Bias

It is better to risk your life to save 200 people than to save 3. But someone who risks their life to save 3 people is revealing a more altruistic nature than someone risking their life to save 200. And yet comic books are written about heroes who save 200 innocent schoolchildren, and not police officers saving three prostitutes.

Mere Messiahs

John Perry, an extropian and transhumanist, died when the north tower of the World Trade Center fell. He knew he was risking his existence to save other people, and he had hoped he might be able to avoid death altogether, but he still helped them. That takes far more courage than dying while expecting to be rewarded in an afterlife for one's virtue.

Affective Death Spirals

Human beings can fall into a feedback loop around something that they hold dear. Every situation they consider, they use their great idea to explain. Because their great idea explained this situation, it now gains weight. Therefore, they should use it to explain more situations. This loop can continue, until they believe Belgium controls the US banking system, or that they can use an invisible blue spirit force to locate parking spots.

Resist the Happy Death Spiral

You can avoid a Happy Death Spiral by (1) splitting the Great Idea into parts; (2) treating every additional detail as burdensome; (3) thinking about the specifics of the causal chain instead of the good or bad feelings; (4) not rehearsing evidence; and (5) not adding happiness from claims that "you can't prove are wrong" - but not by (6) refusing to admire anything too much, (7) conducting a biased search for negative points until you feel unhappy again, or (8) forcibly shoving an idea into a safe box.

Uncritical Supercriticality

One of the most dangerous mistakes that a human being with human psychology can make, is to begin thinking that any argument against their favorite idea must be wrong, because it is against their favorite idea. Alternatively, they could think that any argument that supports their favorite idea must be right. This failure of reasoning has led to massive amounts of suffering and death in world history.

Fake Fake Utility Functions

Describes Eliezer's motivations in the sequence leading up to his post on Fake Utility Functions.

Fake Utility Functions

Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Evaporative Cooling of Group Beliefs

When a cult encounters a blow to its beliefs (a prediction fails to come true, the leader is caught in a scandal, etc.), it will often become more fanatical. In the immediate aftermath, the members who leave will be the ones who were previously the voice of opposition, skepticism, and moderation. Without those members, the cult slides further in the direction of fanaticism.

When None Dare Urge Restraint

The dark mirror to the happy death spiral is the spiral of hate. When everyone looks good for attacking someone, and anyone who disagrees with any attack must be a sympathizer of the enemy, the results are usually awful. It is dangerous for there to be anyone in the world about whom we would rather say negative things than accurate things.

The Robbers Cave Experiment

The Robbers Cave Experiment, by Sherif, Harvey, White, Hood, and Sherif (1954/1961), was designed to investigate the causes of, and remedies for, problems between groups. Twenty-two middle-school-aged boys were divided into two groups and placed in a summer camp. From the first time the groups learned of each other's existence, a brutal rivalry began. The only way the counselors managed to bring the groups together was by giving them a common enemy. Any resemblance to modern politics is just your imagination.

Misc Meta

An obsolete meta post.

Every Cause Wants To Be A Cult

Simply having a good idea at the center of a group of people is not enough to prevent that group from becoming a cult. As long as the idea's adherents are human, they will be vulnerable to the flaws in reasoning that cause cults. Simply basing a group around the idea of being rational is not enough. You have to actually put in the work to oppose the slide into cultishness.

Reversed Stupidity Is Not Intelligence

The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.

Argument Screens Off Authority

There are many cases in which we should take the authority of experts into account, when we decide whether or not to believe their claims. But, if there are technical arguments that are available, these can screen off the authority of experts.

Hug the Query

The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

Guardians of the Truth

There is an enormous psychological difference between believing that you absolutely, certainly, have the truth, versus trying to discover the truth. If you believe that you have the truth, and that it must be protected from heretics, torture and murder follow. Alternatively, if you believe that you are close to the truth, but perhaps not there yet, someone who disagrees with you is simply wrong, not a mortal enemy.

Guardians of the Gene Pool

It is a common misconception that the Nazis wanted their eugenics program to create a new breed of supermen. In fact, they wanted to breed back to the archetypal Nordic man. They located their ideals in the past, which is a counterintuitive idea for many of us.

Guardians of Ayn Rand

Ayn Rand, the leader of the Objectivists, praised reason and rationality. The group she created became a cult. Praising rationality does not provide immunity to the human trend towards cultishness.

The Litany Against Gurus

A piece of poetry written to describe the proper attitude to take towards a mentor, or a hero.

Politics and Awful Art

When producing art that has some sort of political purpose behind it (like persuading people, or conveying a message), don't forget to actually make it art. It can't just be politics.

Two Cult Koans

Two koans about individuals concerned that they may have joined a cult.

False Laughter

Finding a blow against a hated enemy funny is a dangerous feeling, especially if that is the only reason the joke is funny. Jokes should be funny on their own merits before they deserve laughter.

Effortless Technique

People treat things like the amount of effort put into a project, or the number of lines in a computer program, as positive things to be maximized. But this is silly: surely it is better to accomplish the same task with less effort and fewer lines of code.

Zen and the Art of Rationality

The propositional content of rationality is very different from that of Eastern religions like Taoism or Buddhism. But it is sometimes easier to express rationalist ideas using the language of Zen or the Tao.

The Amazing Virgin Pregnancy

A story in which Mary tells Joseph that God made her pregnant so Joseph won't realize she's been cheating on him with the village rabbi.

Asch's Conformity Experiment

The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.

On Expressing Your Concerns

A way of breaking the conformity effect in some cases.

Lonely Dissent

Joining a revolution does take courage, but it is something that humans can reliably do. It is comparatively more difficult to risk death. But it is more difficult than either of these to be the first person in a rebellion - to be the only one saying something different. That doesn't feel like going to school dressed in black; it feels like going to school wearing a clown suit.

To Lead, You Must Stand Up

To take a leadership role, you really have to get people's attention first, and this is often harder than it seems. If what you attempt fails, or if people don't follow you, you risk embarrassment. Deal with it.

Cultish Countercultishness

People often nervously ask, "This isn't a cult, is it?" when encountering a group that thinks something weird. There are many reasons why this question doesn't make sense. For one thing, if the group really were a cult, its members would not say so. Instead, when considering whether to join a group, look at the details of the group itself: Is their reasoning sound? Do they do awful things to their members?

My Strange Beliefs

Eliezer explains that he references transhumanism on Overcoming Bias not for the purpose of proselytization, but because it is rather impossible for him to share lessons about rationality from his personal experiences otherwise, as he happens to be highly involved in the transhumanist community.

End of 2007 articles