What Happens When You Are Under An Illusion?

An illusion can become so ingrained in us that we are not even aware that what we perceive may not be what actually is. This explains why letting go of your illusions is frightening. The artificial belief system that has supported you collapses, and for many this change is a painful one. Everything that you have lived for suddenly seems worthless, and you begin to feel skeptical about humanity. The result, however, is the 'new you' that emerges, and that makes all the pain of change worthwhile.



We can be so deceived by an illusion that it is difficult to see behind it. Some are so scared of revealing their unique selves that they go to ludicrous lengths to conceal their authentic exclusiveness. They believe that they will only be accepted if they are like others. Yet trying to be what you are not is a daunting task; beneath the facade you feel disgusted with yourself, and every day of living in pretence drains your physical and mental energy. Never bargain with your integrity. Many politicians, actors, and millionaires who practise this bluster, deny or run away when they are exposed. How can any man be his genuine self when he has violated his integrity? He lives in weariness from pretence, and much of his life is judged by how others gauge him. He may possess all his material wants, but living in peace with himself is another matter altogether.

Causes Of Illusion

Believing that people will like us better if we try to be someone that we are not is a common illusion. Why are we so afraid of being ourselves? What has brought about this fear of revealing our true selves?

To avoid pain, we often deceive ourselves. When we know that we are not what we would ideally like to be, we naturally attempt to adjust ourselves to come as close to that ideal image as possible. However, living in pretence exacts a high price: we lose out on love. We forget that we were of value; we stop loving ourselves and we stop trusting others. If we tell ourselves something often enough, we end up believing it, and we trick ourselves into believing that the pretences, the illusions and the lies are more important than the reality. We should place emphasis on how we feel instead of how we look. Allowing others to become more important than ourselves invokes a sense of inferiority, and we lose the power to do anything about the way we are feeling.

We cannot begin to change until we face the truth and see where we can become better at being ourselves rather than mimicking someone else, or trying to live up to what we believe is acceptable behavior.

Ways To Handle Illusions

Pretensions are not necessary when you like who you are. Think about some behavior, or some way in which you present yourself, that does not feel comfortable to you. You may feel inferior to people you see as cleverer or better educated; behave in a subservient manner to people you see as important or influential; dress in expensive clothes so that you can get on in the world; agree with people in order to be liked or accepted; or speak with a certain accent so that you will be taken seriously. After identifying your pretensions, the first requirement is to let go of your body and relax. Then proceed to identify what you like about yourself, and you will realize that many of the pretensions you are living with are actually unnecessary.

Ask yourself why you get uncomfortable feelings when you behave in a certain way. Is it because you are acting a part and not being true to yourself? (This may be because you lack confidence.) Do you feel life will treat you more kindly if you act that way? Have you stopped trusting who you really are? Are you afraid that if you reveal your true self, people will reject you? Are you afraid of being labeled stupid or unkind?

Having identified your feelings, can you see that they are controlled by illusions? No one has more rights than you, no one is better than you. Therefore you do not need to try to be anything but what you are.

Imagine how it would feel if you let go of that illusion, or stopped supporting someone else's illusion, and behaved with complete honesty.


Do you feel better about yourself when you do this? Do you feel more at ease, less bored or intolerant? Move the images around in your mind until you feel happy with them. Remember, this exercise is not about keeping everyone else happy but about feeling better about the way you behave.

When you are ready to make this change, say quietly to yourself, "I can be completely honest with myself and with others. I only have to be myself."

Slowly open your eyes, breathe deeply and feel good about your new resolution to be honest in your behavior and in your treatment of others. Repeat this exercise daily for at least two weeks, visualizing yourself using this new behavior and stating your affirmations. It is best done at night, when you are relaxed in bed just before you settle down to sleep.

Some of the foregoing factors and principles are involved in illusions. An illusion is a misinterpretation of the stimuli your senses receive. Since illusions are deceiving, they are often deliberately used to create desired effects. People can try to make themselves appear slimmer by wearing vertical stripes. Decorators make rooms look larger with color, wallpaper, design, and furniture arrangement. Makeup and hairstyles can be used to accent a person's best physical features and minimize weak points. It is interesting to try to figure out why your senses deceive you. It is important to keep illusions in mind when you are making major purchases, to guard against being deceived.

Dealing With Illusion In Love Relationships

At the outset of a relationship, we usually present our best selves in order to be accepted. To conceal our flaws, we try not to offend and we mind our manners. Men usually project an impression of competence while keeping a low profile on how needy or possessive they can be. Few will show vulnerability, for doing so would be to admit they can be hurt, which to them is tantamount to being weak. Most women, on the other hand, attempt to downplay their strengths or accomplishments to avoid intimidating men. Women fear they will drive men away if they come across as too demanding or insecure.

However, flaws inevitably leak out as the relationship develops. The longer and harder one tries to hide them, the stronger the shock and the greater the disruption to the relationship once the hidden imperfections become apparent. The other party then realizes that what they loved was not real, and feels deceived. Mutual trust becomes even more difficult to build, and more misunderstanding and tension are created. This problem is often aggravated by another form of not being real: allowing someone's insensitive or hurtful behavior to pass without comment. Unacceptable traits can turn habitual if we fail to express disapproval. Greater resentment then arises, which can make us overreact and seem even more demanding, intolerant and abrasive.

Suggestions

In a relationship, it is crucial to put your real (not best) foot forward as early as possible. It can only spell trouble if the other party grows to like someone you are not. You eventually feel worn out and come to perceive the relationship as a chore rather than a connection with your partner. Intimacy is built on trust: if you do not trust your partner, neither of you can become intimate, and a person living a lie usually comes across as weak. For instance, a woman who pretends in order to please a man will invariably lose his respect, and her own as well. If the man cannot handle the real her, he is the wrong partner for her.

Therefore, the next time you find yourself putting up a false front, ask yourself why you would want to be with someone who likes what you are not. Yet it is also important to understand that if you reveal too much too soon, the other person can be put off. A woman who presents virtually her entire biography, along with a checklist of her expectations for the relationship, the moment she meets an attractive man will make him feel he is at an audition, not on a date. On the other hand, tension accumulates if the wait is too long.

Here are some methods which you can try:

* Be your real self right from the very beginning of a relationship. Would you want to be involved with someone who doesn't like you for what you really are?

* Be honest without being blunt. Express your needs, wishes or frustrations as statements of your feelings, not as demands or ultimatums. For example, "It frustrates me when you say we need to talk and then interrupt me if I say something you don't like."

* Do not make the other person feel sorry for you or responsible for solving your problems when you share your troubles.

* Speak without arrogance or conceit when you talk about things you are proud of.

* When you express disapproval of someone's behavior:

a)  preface it by saying something positive about him;

b)  use non-judgmental phrases such as "It upsets me when. . . ";

c)  invite him to mention something about you that upsets him.

Of What Value Is Illusion in Life?

Summary
The following is transcribed from a session by The Wonders entitled "Of What Value Is Illusion in Life?" This teaching explains how we expend a great deal of energy creating illusions about ourselves for others to see. The illusions are only "valid" if others buy into them and we fail to present ourselves as we truly are. We feel illusions give us power, but really that's an illusion. The Wonders provide guidance in moving past these illusions and realizing our true power is in just being who we are.

Transcript:
View it from this perspective, dear friends. Everything that you do, everything that you say, everything that you present yourself as, is in truth, an illusion - an illusion to others [who] see you.

As you walk through life on a daily basis, move from place to place, point to point, you present yourself as being something that is not necessarily the totality of yourself, but is an aspect of yourself. In interactions with friends, acquaintances, mates, you present certain aspects of yourself that you decide and choose are beneficial to [that] aspect of yourself and, therefore, beneficial to the other person.

Eventually, what comes about is that, as you move through life, you eventually present all aspects of yourself from beginning to end, so to speak, yet not at the same time. So, if one were truly to get to "know" you, they would have to be with you from the time you are born to the time you die. And even then, there is not, in truth, the capacity to get to know all the aspects that you present of yourself.

Now, having recognized this - that each and every aspect of yourself is presented in part only - those that see you, that interact with you are, in truth, making a judgment based upon the limitation of the interaction. And when that judgment is made, therefore, they begin to view you as being a kind person, a loving person, an angry person, a joyful person. Yet, in truth, they are only seeing but an aspect of yourself - that aspect that you choose to demonstrate, that you choose to allow through the interaction. Recognizing this, dear friends, then you begin to recognize the amount of power that exists within yourself and how you yourself control or command - preferably, command - the interactions that exist around you.

Now in life, as you move through life, individuals around you will say, "But that is not the case. For, in truth, people judge us, they make assumptions about [us] or they know us very well." But do they, dear friends? Do they truly "know" you very well, or do they only "know" the aspect that, in truth, you yourself "know" about yourself?

How can you possibly allow another individual to know an aspect of yourself unless you yourself know those aspects of yourself as well? As a result, the interactions that create the illusion are based upon the self-knowledge that exists within the self.

Now, if you as an individual were to begin to recognize that the knowledge that exists of yourself, within yourself, is rather limited and allow yourself to move through a certain process that would allow you to develop that self-knowledge, then at that point, dear friends, you will all recognize or observe the alteration of the interaction between yourself and all those [who] exist around you. Other individuals will begin to recognize you as being this type person, that type person, based upon your own self-discovery of your own knowledge.

Have you ever had the experience where, upon seeing friends again after a period of time apart, you discover that they are new individuals, new people; that they have grown, they have expanded? In truth, what you are noticing when you are noticing their growth and expansion, is the growth and expansion that you yourself have created within yourself and that which they themselves recognize within themselves, that they may then allow this particular interaction to occur.

Now, having said all that we have said, recognize one thing, dear friends. The illusion of reality, the illusion of yourself, cannot exist without another person's acceptance of that illusion.

We did say that the illusion is created by yourself based upon your own self-knowledge. As well, another person's acceptance of that illusion completes the illusion and, therefore, determines how you are viewed by those around you. As you yourself expand your own self-awareness, self-knowledge and self-growth, then so too will the aspect of yourself that you demonstrate to others expand and, therefore, your "illusion" will expand.

However, dear friends, it is subject to others accepting the illusion. If you have friends around you that, from their perspective, do not choose to accept your alteration of the illusion, then eventually these friends, acquaintances, will remove themselves from your sphere of influence. Which eventually will mean that you will replace them with others that are more suitable to yourself and that will accept your new reference of illusion. As a result, you then move through life changing friends, altering acquaintances, growing, expanding, leaving one mate, growing to another mate, and so on, and so on.

Now, if you encounter someone who is aware of themselves, someone who knows the majority of the aspects of themselves, then we assure you, dear friends, that the illusion you perpetrate on yourself and others around you will be seen as the illusion. That individual will be able to see to the truth of your existence, the truth of the essence of yourself and, as a result, will see beyond the illusion.

Now, this can be to your advantage, if you but recognize the other individual for whom they are. However, if you refuse to recognize that - because you refuse to see their illusion which is based upon their reality or their knowledge of self reality - then we assure you, dear friends, that in that case you will not [view] the interaction as being beneficial to yourself. But rather will view it, from one perspective, as being arrogance, as being something that is unacceptable to yourself to such a degree as to not allow yourself to enjoy the interaction of one who can see through your illusion.

Now, for many people this is very disconcerting, for you have placed a great deal of effort and time and energy to build this illusion. You have done this from the moment you were born. From that very instant when you came into this reality, and that very instant when your ego-personality was created, you have begun to create the illusion of self, and, from that perspective then, dear friends, a lot of energy has been expended.

However, recognize that that particular illusion of self is self-created, self-perpetuating, moves of its own, and, we assure you, dear friends that, as a result, the only person that you are "kidding" is yourself. For, in truth, you have created your own illusion and have begun to believe it.

How many people do you see around you that see themselves as being benevolent individuals, yet, as you yourself view them, you recognize that they have certain aspects within themselves that in fact preclude benevolence? And, therefore, you see them as something different.

The reason that you see them for something different is that you recognize within yourself the truth of your own aspect around benevolence and, as a result, it is easy to view that in another. However, if you yourself were not aware of your own aspect of benevolence, and another presented themselves as benevolent, you would see them as that, for you would accept their perspective of their own self-illusion. Do you see how this works, dear friends?

Now, to yourselves - each and every one of you here this evening - each and every one of you has spent a great deal of time, effort and energy to create for others around you the illusion of reality, the illusion of what you see yourselves as or how you wish to portray yourselves as.

We assure you, dear friends, that though this illusion is beneficial to yourselves - for it has helped you survive what you call 3rd dimensional reality - though it has had certain benefits to yourselves, we assure you, dear friends, [that] of greater benefit to yourselves would be to recognize that it is a self-illusion. And, once that recognition is made, then you can begin the process of rediscovery of yourself - rediscovery of the true essence of yourselves, the true power that exists within yourselves. For, in each and every one of you, recognize [that] you are energy. And energy, in and of itself, is power, from that perspective.

The power is not in the power of being able to control another, being able to move another at will, forcing another to do as you wish, being able to create an illusion that forces another to move or to manipulate others - or even to manipulate yourselves, or control yourselves, or force yourselves in a certain direction.

True power exists in being, dear friends - in simply existing. Not existing from the perspective of "existence from beginning to end," but existing from the perspective of beingness.

If you allow yourselves to be that which you are, then, dear friends, you will be power, for that which you are is power - power as part of your existence. Not the totality of yourselves, but, we assure you, the manifestation of yourself is energy, which is power.

As a result, dear friends, there would be no need for you then to force illusion upon others - or, even more insidiously, force illusion upon yourself. For as you recognize your own truth, you shed for yourself the aspects of illusion that, therefore, bring to you the greatest truth, which is your own self-truth. Do you see, dear friends?

Now, having said all of this, how can you apply this in your daily life?

We have suggested many times, dear friends, that each and every one of you begin the process of observation. Observe yourself! Not from the perspective of judgment, not from the perspective of right or wrong, good or bad, better or worse, but rather from the perspective of what is.

Begin to observe yourself - observe your thoughts, observe your actions, observe your feelings, observe your emotional state, your mental state, your physical state, as well as your spiritual state.

Observe how you interact with others. Observe how you move through daily life. How do you move through the jungle of humanity? How do you move through the living, the aspect of 3rd dimensional reality you call Earth? How do you relate? How do you interact? And in the observation, dear friends, you begin to discover or begin to paint a picture that is more accurate of that which you are.

Now, as this picture is being painted, we assure you, dear friends, it is changing. However, that it is changing is beneficial in that, as you observe, you now can also observe change. And, as a result, you are constantly updating your own particular self image to being that which you are - not that which you think you are, not that which you believe you are, but, in truth, that which you are.

Simple, pure essence, of that which you are.

And, as you begin to discover these aspects of yourselves, dear friends, you will then move to certain aspects and to present yourself as that which you are. No need for illusion, no need for manipulation, no need for "games to be played."

Rather, it is of greater benefit to yourself to present yourself as that which you are. That way you yourself can then begin to love yourself for whom you are, and therefore, in truth, create an environment around you where others can also love that which you are, for whom you are, as opposed to for whom you are not.

Allow yourselves to move in this direction, dear friends, and, we assure you, you will discover greater power than you have ever imagined possible.

Many, many in this world today think that power comes from the ability to manipulate others at their own will, to create certain environments that move others in certain directions as they themselves choose. We assure you, dear friends, that that is not power, that is the illusion of power. For, in truth, it is only power as long as the others are willing to go along with your own manipulations, your own creations of reality. As they choose to step aside, [to] remove themselves from your manipulations, your games, you lose power, and, therefore, it no longer exists. That is the illusion that people see within power.

However, if you yourself are that which you are, there is no need to create manipulations, therefore, but just to be. And as you are, or are into your beingness, others will interact with your beingness, and will do so through choice, dear friends, through their free will, their free choice, and, as a result, will interact with you in a fashion that is to them beneficial, as well as to you. And will move through free will and free choice - and, we assure you, dear friends, there are no stronger bonds in the universe today than the bond of free will and free choice.

Once an individual chooses to interact with you, we assure you, dear friends, that nothing - absolutely nothing - can break that bond until such time as that individual chooses to no longer interact with you. And no matter how many individuals, how many people try to manipulate or force that separation, it cannot occur until an individual themselves chooses to remove the interaction, the bond of interaction.

Recognize your powers, dear friends. For, in truth, that is where you yourselves will come into your own essence of self.

Some of you this evening are able to pursue the aspect of what we are speaking of at an earlier age than others. We assure you, as you move through your process of life, if you choose to manipulate, if you choose to control, if you choose to create an illusion of yourself in order to force others to believe in your own self-illusion, dear friends, you will also choose to shorten your life. For every time that a man chooses to create an illusion, or chooses to use self-illusion as an aspect of interaction, that person reduces their life span by approximately one tenth of one day.

And as a result, if they would only recognize that by just being themselves, today, mankind could exist for thousands of years, without altering themselves, [and] therefore, they would choose to do so. Many individuals, however, are so afraid to believe the aspect of beingness, that they prefer to choose the aspect of the illusion of their own reality, and, as a result, manifest themselves for periods of 50, 60, 70, 80 years, some 90, a few in the hundreds. But we assure you, dear friends, all of you could live for thousands of years, if you were but to choose it.

For the amount of energy that you expend to maintain your illusion of reality is so vast as to reduce your life span by one tenth of twenty-four hours every day, every time you expend that energy to create the illusion, and, we assure you, dear friends, that you have expended a great amount of energy from the time you were born till now. Do you see?

Social evolution and social influence: selfishness, deception, self-deception

Mario F. Heilmann
University of California at Los Angeles

Running Head: Social Power, Evolution and Deception

I. TABLE OF CONTENTS

I. Table of contents

II. Rationality, consciousness, sincerity

A. Unconsciousness and irrationality: the myth of rationality

B. Deception: the myth of sincerity

C. Hypotheses of this paper: an overview

III. Evolutionary theory

A. Ultimate reasons

B. The survival of the fittest

C. Inclusive fitness and altruism: the selfish gene

D. Validity of evolutionary theory for humans

E. The influence of group living

F. War and intergroup violence: group selection revisited

G. Learning and culture

IV. Deception and impression management

A. Deception

B. Countermeasures against influence and deception

C. Self-deception

D. The cost of impression management

E. The cost of courtship

F. Unconsciousness

V. Some aspects of Raven's power interaction model under an evolutionary point of view

A. Motivation: Why social influence

B. Coercion and reward

C. Referent power

D. Expert and informational power of medical doctors

a) Weaknesses of expert, informational power and statistics

b) Overconfidence in choice of medical treatment

c) Mistrust towards expert power

VI. An alternative utopia

VII. Summary

VIII. References

II. RATIONALITY, CONSCIOUSNESS, SINCERITY

A. Unconsciousness and irrationality: the myth of rationality

The model of the human as a "naive scientist", a rational decision maker, prevailed in social psychology for several years after cognitive psychologists had proved it wrong by demonstrating a myriad of biases (Kahneman, Slovic & Tversky, 1982). The notion that we are basically rational beings still predominates in intuitive and popular thinking, in spite of proof to the contrary (Taylor, 1989; Taylor & Brown, 1988; Nisbett & Ross, 1980).

Men tend to value a car more if it is introduced in the presence of an attractive woman; we all tend to vote for the taller and more attractive political candidate (Cialdini, 1993, p. 140); and we are fonder of people and things presented to us while we are eating (Razran, 1938, 1940; cited in Cialdini, 1993, p. 158). In these and similar cases, the targets of influence, full of honest conviction, vehemently deny having been influenced by such irrelevant factors.

In spite of numerous findings to the contrary, the myth of human rationality and consciousness continues to pervade our thinking and our literature. It was difficult for authors like Ury (1993) to overcome these ideas: "Because what I learned at Harvard Law School is that all that counts in life are the facts- who's right and who's wrong. It's taken me twenty-five years to learn that just as important as the facts, if not more important, are people's perception of those facts" (p. 18). He concludes that "humans are reaction machines" (p. 8). Pushing will make them more resistant. Indirect actions are needed. "It requires you to do the opposite of what you naturally feel like doing in difficult situations" (p. 10).

B. Deception: the myth of sincerity

Making the target of social influence falsely believe we are not trying to push him satisfies intuitive as well as formal definitions (see Mitchell, 1986) of deception. Only on rare occasions do authors dare to call manipulative influencing strategies deception: "Many ploys depend on your not knowing what is being done to you. . . . If you don't realize that he is using his partner as a "bad guy", you may agree innocently to the changes" (Ury, 1993, p. 42). But, generally, the myth of human sincerity prevails.

"The more common everyday self-presenter who wants others to perceive, validate, and be influenced by his selfless integrity, even though he might vigorously deny such motivation and, indeed, be unaware of it" (Jones & Pittman, 1982, p. 246). "A tantalizing conspiracy of cognitive avoidance is common to the actor and his target. The actor does not wish to see himself as ingratiating; the target wants also to believe that the ingratiator is sincere" (p. 236).

I believe that self-presentational concerns and preoccupation with saving other people's face prevent us from seeing the pervasiveness of deception. Furthermore, our egocentrism provides us with the wrong model of human behavior. Intuitively, we seem to think that human biology and social dispositions made us apt to be rational scholars in a just and free society. Evolutionary theorists point out that our phylogenesis should have provided us with very different dispositions. Their most extreme proponent, R. D. Alexander, states that "human society is a network of lies and deception, persisting only because systems of conventions about permissible kinds and extents of lying have arisen" (1975, p. 96). Lazarus (1979, p. 47) notes that there is a "collective illusion that our society is free, moral, and just".

Evolutionary theory can causally explain why humans tend to deceive themselves and others about the fact that they are deceiving. It can tie together all the topics of this paper: deception, irrationality in human impression management and social influence techniques. It can elucidate why we are willing to pay such a high cost for impression management. Jones and Pittman (1982) state this last point very candidly: "For many of us, self-promotion is almost a full-time job."

C. Hypotheses of this paper: an overview

This paper endeavors to point out that the selfish interests of individuals caused deception, and countermeasures against deception, to become driving forces behind social influence strategies. The expensive and wasteful nature of negotiation and impression management is a necessary and unavoidable consequence of this arms race between deception and detection. Natural selection created genetic dispositions to deceive, and to constantly and unconsciously suspect deception attempts. In a competitive, selfish, and war-prone world, these techniques, proven over billions of years of evolution, are still optimal. Therefore they are reinforced by cultural selection and learning. Conscious awareness of deception and countermeasures is not required, and is often even counterproductive, because conscious deception is easier to detect and carries harsher sanctions. Humans not only deceive; they also deceive themselves and others into believing that they do not deceive. This double deception makes the system so watertight that it tends to evade detection even by psychologists.

III. EVOLUTIONARY THEORY

A. Ultimate reasons

Due to the failure of prior grand theories, psychologists tend to satisfy themselves with micro-theories that describe only a narrow domain. They tend to equate description or prediction with explanation. Don MacKay (1993) deplores these limitations and suggests that a true theory should be "explanatory, not just descriptive." He complains that "miniature models have only proliferated rather than merged" into "ever larger theories." In his proposed rational epistemology, "observations often do not count as scientific facts until a plausible theoretical mechanism for explaining them is proposed" (MacKay, 1988).

Nobel laureate Tinbergen (1963) distinguishes between proximate explanations (how physiology or behavior work) and ultimate explanations (why they work this way). Even if every single cognitive process and every single neuron connection were known, the question would remain why the organism is the way it is.

Ultimate explanations historically were the domain of religions and myths. To my knowledge, evolutionary theory is the only scientific theory that plausibly proposes ultimate explanations.

B. The survival of the fittest

Charles Darwin (1859) established the theory of evolution. This theory suggests that those species and individuals that are best equipped for survival and procreation survive. Genes that determine or mediate a behavior proliferate if the behavior helps survival, mate finding, and finally the creation of viable offspring that will have offspring of their own.

This theory could not explain altruistic behavior. Altruism is defined as behavior that gives (reproductive) advantage to another individual at some (reproductive) cost to the altruist. To solve the riddle of altruism, it was proposed that individuals act for the best interests of their group or species.

This "group selection fallacy" is still often invoked, even though it was soundly disproved (see chapter 4 of Trivers, 1985). A group of altruists would not be evolutionarily stable. A single individual without the altruistic "group benefit gene" would reap the benefit of the other individuals' altruism without paying the price for his or her own altruism. And a gene that brings about a mere 1% higher number of offspring will, by exponential growth, crowd out competing alleles and become the dominant gene within five hundred generations. Therefore, the altruists would be extinguished and selfish individuals would take over.
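The arithmetic behind the five-hundred-generation claim can be sketched in a few lines. This is my own illustration, not from the paper; the function name is invented for the example. A small per-generation reproductive edge compounds exponentially:

```python
# Minimal sketch (illustrative, not from the paper): how fast an allele
# with a small per-generation reproductive edge outgrows a competitor.
def relative_advantage(edge: float, generations: int) -> float:
    """Factor by which the fitter allele outgrows its rival after n generations."""
    return (1 + edge) ** generations

# A mere 1% edge compounds to roughly a 145-fold relative advantage
# after 500 generations, enough to crowd out the competing allele.
factor = relative_advantage(0.01, 500)
```

Since (1.01)^500 is roughly 145, the "group benefit gene" would be reduced to a vanishing minority well within the five hundred generations the text mentions.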

C. Inclusive fitness and altruism: the selfish gene

W. D. Hamilton solved the puzzle of how altruism could possibly have developed and survived. He noted that close relatives, like brothers, parents, sons and daughters, have 50% of their genes in common with us. Therefore, a sacrifice that gives more than twice as much benefit to our brother as it costs us has an indirect net reproductive benefit to our genes, via our relative's offspring. The reproductive success that accounts for both direct and indirect (via relatives) reproduction is called "inclusive fitness". Maximization of inclusive fitness "means that an organism behaves over a lifetime in such a way as to maximize the copies of its genes, or alleles, which by one route or another it projects into the gene pools of future generations" (Irons, 1991). It explains altruistic behavior of bees and ants, as well as human altruism towards kin and human nepotism. One theorist said, jokingly: "I would not give my life for my brother, but maybe for 3 brothers or 9 cousins" (J. B. S. Haldane, cited in Daly & Wilson, Sex, Evolution and Behavior, p. 30).
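Hamilton's argument reduces to a simple inequality, commonly written rb > c: a sacrifice pays when relatedness times benefit exceeds cost. The sketch below is my own illustration of the joke's arithmetic; the function name and unit convention (one life = 1.0) are mine, not Hamilton's notation:

```python
# Sketch (my own illustration): Hamilton's rule says altruism is favored
# when r * b > c, where r is the coefficient of relatedness, b the benefit
# to the relatives, and c the cost to the altruist (here, lives saved vs.
# one life given up).
def sacrifice_pays(r: float, lives_saved: int, cost: float = 1.0) -> bool:
    return r * lives_saved > cost

# Full brothers share r = 1/2; first cousins share r = 1/8.
sacrifice_pays(0.5, 2)    # 2 brothers: exact break-even, does not pay
sacrifice_pays(0.5, 3)    # 3 brothers: pays
sacrifice_pays(0.125, 9)  # 9 cousins: pays
```

This is exactly why the joke specifies 3 brothers or 9 cousins: 2 brothers (0.5 × 2) or 8 cousins (0.125 × 8) only break even against one's own life.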

Reciprocal altruism is another way in which altruism can bring about a reproductive advantage. If we can be sufficiently sure that a favor will be returned to us, a temporary sacrifice can be to our own long-term advantage. This seems to be so strongly built into human genes (or culture?) that Raven (1992) calls the reciprocity norm a legitimate power basis that tends to be willingly accepted by the obliged person. It is so strong that people used to feel obliged to donate to Krishna solicitors who had given them an unsolicited flower as a present (Cialdini, 1993, p. 21).

Equally, the door-in-the-face or rejection-then-retreat technique (Cialdini, 1993, p. 36), which involves a large request followed by a retreat to a smaller one, makes the recipient of the request feel obliged to retreat, too. He gives in to a smaller demand he would not have given in to had he been asked directly. These phenomena are often (proximately) described but rarely (ultimately) explained.

For the reciprocity rule to be maintained, punishment for non-compliance is a must, to avoid invasion by cheaters. "The fitness of the reciprocator must be greater than the fitness of the cheater" (Kaplan, 1987). And the fitness of the punisher must be at least as great as the fitness of the non-punisher, because otherwise nobody would take on the altruistic task of spending energy to punish noncompliant people, at a personal cost and for the benefit of society. Righteous moralizing indignation seems to be one of the elements that mediate the distribution of punishment. I believe mob lynching is one such way of punishing perceived deviants at low cost to the individuals involved.

Once compliance with the reciprocity norm has become automatic, it works, unexpectedly, even with Krishna solicitors who cannot punish a non-reciprocator. But the arms race between influencer and influencee continues, on a non-genetic or learning basis. Over the years, most Americans have become immune to the Hare Krishna adepts' tactics. The same tactics, though, are said to be very successful with still inexperienced and unsuspecting Russians.

If the selfish desire to gain personal advantage through reciprocity is one major reason for altruistic behavior (the other is reputation, which also pays in the long run (Irons, 1991)), we would predict altruism to be stronger towards people whom we expect to return the favor. Essock, McGuire and Hooper (1988) at UCLA studied the self-reported helping patterns of 300 Los Angeles women. They concluded that "help was distributed neither randomly nor altruistically, but in a strategic manner which, however (un)consciously, favored the biological goals of survival and reproduction." For example, rich relatives received more help than poor ones. Poor people may need more help, but it is advantageous to help rich relatives who have more means to reciprocate.

Additionally, the authors report: "Subjects were significantly more likely to report that they had given help than that they had received help." In random samples we should expect equal amounts of helping and receiving. Therefore, impression management and/or self deception were at work. The authors explained the value of deception and self deception in impression management: "All else being equal, the individual who successfully masquerades as an altruistic, beneficent person would be more likely to attract a mate and friends than one who displays his or her selfishness unmasked. Likewise, the individual convinced of his or her own beneficence has a greater chance of convincing others than the individual who, with false conviction, attempts to deceive." This strategy is optimal both in today's civilization and in past evolutionary times. Of course, everyone is, necessarily, convinced that he does not use this self- and other-deceptive behavior.

D. Validity of evolutionary theory for humans

The entire book by Trivers (1985) demonstrates the extremely strong empirical support evolutionary hypotheses have in the biological sciences. Their application to humans meets strong resistance in the social sciences. The reason, I think, is emotional. Partially this is because sociobiology has historically been abused for conservative political purposes, as a defense of the status quo, of the survival of the powerful in society at the cost of the poor. This politically motivated abuse was based on misunderstanding: first, because evolutionary biologists describe natural laws, not moral imperatives; and second, because the poor tend to have many viable offspring and therefore may even have superior fitness.

The only legitimate reason to reject the theory would be the contention that human behavior is totally independent of genetics, a position that is being disproved by twin research at the University of Minnesota (Bouchard & McGue, 1981, and Segal, 1984, cited in Shaw & Wong, 1989, p. 37).

Evolutionary theory contends that humans have changed very little over the last hundred thousand years. "Thus, paleoanthropology, studies of free-living primates and modern hunter-gatherer societies are important sources of information about personality dynamics" (Hogan, 1982). Some behavior, like preference for fat and sweet food, is very adaptive in a society without overabundant food supply, but is harmful in our affluent supermarket society. Other behavior patterns were useful then and still are useful now. Finally, humans have built in flexibility that often optimally adapts to new situations.

E. The influence of group living

Historically, people have always lived in groups. First, a "selfish group" confers advantages against predators: a hawk can only kill one bird at a time, so it is safer to be among 50 conspecifics than to be alone. Second, the group is more likely to detect the hawk's approach and to escape unharmed. Third, groups of primates and humans can fend off predators. Finally, groups of men can hunt large animals. In addition to the obvious nutritional benefits, the possession and distribution of large amounts of meat confers social power. It increases a male's chances of gaining sexual favors from females, much as today's dinner date does.

"The behavior of other pack-hunting animals (e.g. lions, wolves, hyenas), along with evidence of ritualized burial practices at least 50,000 years ago, suggests that hominid social life has been carefully structured (i.e. rule governed) from the beginning. . . . Every group is organized in terms of status hierarchy. This suggests that the two most important problems in life concern attaining status and popularity" (Hogan, 1982). Status provides "opportunity for preferential breeding and reproductive success". Because "homicide rates among hunter-gatherers are high even by modern urban standards ... , popularity has substantial survival value." This explains a powerful drive for social approval and avoidance of disapproval and criticism. It also explains personal coercive and personal reward power, the power of approval or rejection by someone we value or like (Raven, 1992), in other words, of a potential ally. In monkey groups, an allied pair can gain enormous advantages by dominating an entire group or it can defend females against more dominant individuals. The stronger one of the pair usually has to respect the weaker monkey's sensitivities; he forfeits the use of coercive power against the ally and does not take his bananas by force (de Waal, 1987, p. 429).

Shaw and Wong (1989, p. 53) suggest that weapons development caused a major shift in human evolution. The development of arms reduced the cost of attacking (weapons can even be thrown) while increasing the cost of being attacked. The "new high costs of within-group aggression would act to change the character of the dominance system. Insofar as dominant individuals could not afford to be injured in rank-order fighting, there would be an increased selection for social skills in attaining and maintaining status, and decreased emphasis on overt aggression. . . . intergroup conflict would select for greatly increased human capacity to establish and accept group hierarchy as well as to recognize enemies versus relatives and friends." Thus, in negotiations, it is all-important to be categorized by the other party as a friend (i.e., a reciprocal altruist), not as a (totally selfish) enemy. Ury (1993, p. 53) suggests that "stepping to their side" is an essential step in "getting past no." If the target of influence rates us as inimical, we lose all the subtle power bases that alliance sensitivities bring with them.

F. War and intergroup violence: group selection revisited

Humans face an additional selection factor that is rarely found in primates: tribal raids and war. Entire villages and populations could be exterminated by their neighbors. Group extermination is one obvious exception where group selection can occur. It is not very costly to harbor unwarranted suspicion of outgroups a hundred times, but one single instance of unwarranted trust may spell annihilation of the individual or even the tribe. A group that is less aggressive and less suspicious of out-groups is more likely to be eradicated. So is a group that splits up easily and cannot maintain a large size.

In an evolutionary perspective, group size increased over time. The small kin-groups that stayed together for protection against predators and to hunt large animals fused into larger groups, "largely or entirely because of the threat of other, similar nearby groups of humans" (Shaw and Wong, 1987, p. 54). This required the social and cultural organization necessary to hold larger groups together.

"The more the brain evolved and the more intelligence was utilized to insure within-group solidarity, including the sharing of information, the more the group would likely have succeeded in driving competing groups into less desirable peripheral areas. . . . successful human groups may have been the selective forces which pushed less intelligently cooperative groups into inhospitable habitats, severely lessening their chances of contributing to the genetic future of the species" (Shaw and Wong, 1987, p. 58).

In my opinion, humans differ from animals in that group selection factors come back into play. There is an exquisite balance between individual selfishness against other members of the ingroup and cooperation against the outgroup. It seems that enmity and threats from outgroups increase ingroup cohesion. From the standpoint of inclusive fitness maximization this makes sense. If there is no outside threat, then individual selfishness against other members of the ingroup should be the best strategy. If survival of the entire group is threatened, then, obviously, ingroup cohesion and cooperation are in the best interest of the individual. I believe that rituals, beliefs and religions are pervasive factors in all human groups because groups without this bond would have been dispersed and exterminated. This is especially noteworthy because virtually every human society has a religion its members truly believe in, while laughing off all other religions as ridiculous, absurd and false. Simple logic shows that at least 80% of the world's population must hold false religious beliefs. I assume that a mixture of cultural transmission and genetic propensity maintains these cultural artifacts.

G. Learning and culture

Genetic change is very slow; it takes many generations, or even millions of years. Therefore we would expect adaptations to the more "recent" changes of the last 50,000 years to be based on learning and cultural transmission.

But even the "process of learning itself is often controlled by instinct", "various animals are smart in the ways natural selection has favored and stupid where their life-style does not require a customized learning program. The human species is similarly smart in its own adaptive ways and almost embarrassingly stupid in others" (Gould and Marler, 1987, cited in Shaw & Wong, 1989, p. 70). "Innate tendencies in mental development are most obvious (and least disputed) in humanity's capacity for learning language and culture, but they are also evident in the manifestation of phobias or tendencies to lean toward certain choices over others" (Shaw & Wong, 1989, p. 67). We humans are blissfully unaware that we are driven to behave in ways that maximize inclusive fitness. Because of the advantages of unawareness of our own deceptive tactics and of our suspicion, I suggest that innate tendencies made us "embarrassingly stupid" as far as conscious awareness of these facts is concerned.

Opponents of genetic theories often confuse genetic propensities with genetic determinism. This is a misunderstanding. People can learn to avoid fatty food counter to their genetic programming. Even birds adapt the number of eggs they lay to environmental conditions. Even the staunchest plant geneticist is well aware that peas grow much taller when planted in fertile soil than their genetically identical brothers and sisters that received inferior nurturing on bad soil.

IV. DECEPTION AND IMPRESSION MANAGEMENT

A. Deception

Evolutionary theory predicts the inherent selfishness of the individual. Therefore, we would not expect communication to develop as a means of informing others of the truth, if that truth gives the recipient an advantage at the expense of the sender. Cronk (1991) suggests that we "follow the example of animal behavior studies in seeing communication more as a means to manipulate others than as a means to inform them". In other words, most communication serves the purpose of social influence, defined as "change in one person's beliefs, attitudes, behavior, or emotions brought about by some other person or persons" (Raven, 1983, p. 8).

Evolution produced deceptive mechanisms frequently. Mitchell (1986) lists four levels of deception. Level one is permanent appearance: for example, a butterfly whose tail looks like a head, so it can escape when a bird attacks the tail thinking it is the head, or animals that look like wasps or other unpalatable species. Level two is coordinated action. Examples are fireflies that mimic the mating flashes of females of another firefly species in order to prey on the males. It also includes birds' injury feigning in order to distract predators from their nest. Level three involves learning: a dog that feigns injury because he was petted more when he had a broken leg. Deceit may depend on the deceived organism's learning, too: a blue jay learns to avoid a palatable butterfly after experiencing the nausea of eating a similar-looking distasteful one. Level four involves planning: a chimp who misleads about the location of food, or a human who lies on purpose.

This demonstrates, first, that deceit as an influence strategy is neither new nor a human invention. Second, it is likely that humans employ strategies as low as level two (body language signals of strength or submission) or maybe even level one (immature, baby-like facial features in an adult).

B. Countermeasures against influence and deception

Of course, evolution also favored the capacity to detect deception, because someone who is not easily deceived has higher inclusive fitness. "Deceit selects for efficient mind-readers." "Bluff by signalers can be countered in a variety of ways and if honest signals are costly they may be impossible to mimic" (Harper, 1992).

In interpersonal influence, elaborate stage setting techniques are often applied (Raven, 1992). My interpretation of this is that to avoid bluff it is often necessary to demonstrate that one has the means for the use of power. Coercive power, for example, requires the agent to show not only that he has the means, but also the determination and ruthlessness to carry out his threat. Street gang toughs need to rough up innocents to gain respect of their peers. And Adolf Hitler went into maniacal fits to convince the Austrian chancellor von Schuschnigg he had the resolve to commit crazy acts of violence and thus coerced him to give in to his demands (Raven, 1986).

Of course, the next step in the arms race is counter-countermeasures: how to deceive without being caught. It is not a good strategy to honestly admit that we are not truthful. Rather, it is more useful to deny our lies, to deceive about the fact that we are deceiving. This way we can reap the benefits of a good reputation: according to Anderson (1968b, cited in Sears et al., 1991, p. 270) the most liked personality traits are sincere, honest, understanding, loyal, and truthful. The authors of the book do not note the absurdity of this result and the apparent deceptiveness and self deception of the respondents. Imagine the husband or boyfriend of a sincerity-loving respondent to Anderson's questionnaire telling her about his attraction to other women: "Honey, I really enjoyed my visit to the strip joint". Or picture her son telling her about his drug habits or the hate he feels for her. I am certain their honesty would not be greeted with high praise. Her love for honesty is quite limited; it is another self deception. In other words, the appropriate tactic is not to be actually honest, like the naive and misguided individuals in the above examples. Rather, the best strategy is to appear honest. But who would admit he likes people who merely appear honest?

My contention that deceit and self deception are the rule sounds so provocative, because we have large investments to camouflage deception. But social psychology research sometimes confirms this unflattering picture: The textbook by the UCLA professors Sears, Peplau and Taylor (1991, p. 224) states that "the most influential perspective on social interaction is social exchange theory". This theory proposes that we are "attracted to those partners we think are best able to reward us" and "try to arrange our interactions to maximize our own rewards". Again, unawareness, deception and self deception are quite obvious. I have never met a person who told me he likes to be my friend because he thinks I am best able to reward him.

In summary, we should expect a good strategist to strive to maintain an image of being a truthful person. He or she should be prepared to deceive whenever it confers a sizable advantage against a much smaller risk.

C. Self deception

If we believe our own lies it is much more difficult to be caught, because we are not making conscious efforts to lie. Furthermore, moral codes and laws punish the conscious lie much more stringently than the "honest" error.

Gur and Sackheim (1979) defined self deception as the motivated unawareness of one of two conflicting cognitions. They required that (i) the individual holds two contradictory beliefs (p and not-p); (ii) these beliefs are held simultaneously; (iii) the individual is not aware of holding one of the beliefs (for example p); and (iv) the mental operation that determines which mental content is, and which is not, subject to awareness is motivated.

They managed to prove the existence of self deception even according to these stringent requirements. It surprises me that knowledge of the repressed truth (not-p) remains stored somewhere in the brain. Jokes that induce laughter by alluding to taboos seem to tap into these secret memories. Maybe there is a fitness advantage to having access to the truth. Maybe the truth is required in some emergency situations.

Paulhus (1986) introduces a less restrictive definition of self-deception in a more general sense, which he terms auto-illusion: an honest belief in a false characterization of the self, due to cognitive or informational biases. This term is probably more useful, as self-deception in the most stringent sense has been shown in only two studies (Gur & Sackheim, 1979; Sackheim, 1983, cited in Paulhus, 1986).

Paulhus (1986) shows the relationship between self deception and various other constructs: "The SDQ [Self Deception Questionnaire] is highly negatively correlated with standard measures of psychopathology, including Beck's Depression Inventory and the Manifest Anxiety Scale." This counterintuitive result supports the evolutionary hypothesis, that high self deception is natural. I propose that people low on self deception are at such a disadvantage in social life that this increases their anxiety levels. Alternatively, low self deception may be a part of psychopathological personality patterns.

Factor analyses show that social desirability scales diverge into two factors, into self-deception or "autistic bias" and impression management or "propagandistic bias" (Paulhus, 1986).

D. The cost of impression management

It is quite surprising to me that authors on impression management and social power rarely ponder the cost issue. I do not just mean the cost of maintaining an army or of waging war (coercive power or defense against coercive power). I am concerned about people wearing Armani suits in a tropical climate, with ties strangling their throats, when a four-dollar thrift shop outfit would be more comfortable and appropriate to the climate. It is obviously wasteful to drive an expensive $50,000 car when a bicycle or a simple $2,000 car would do. But a high-powered real estate broker would undermine his power were he to drive a 1983 Ford Pinto or come to a board meeting dressed in bicycling shorts. It is important to note that the price, not the age or functionality, of the car counts, because he or she could get away with driving an antique 1935 Ford.

The time and money spent for this impression management could be used to directly increase inclusive fitness by increasing the number or the quality of offspring.

Ury (1993, p. 111) states that "negotiation is not just a technical problem-solving exercise but a political process in which the different parties must participate and craft an agreement together. The process is just as important as the product. . . . negotiation is a ritual". In other words, it takes 3 months of negotiations, strikes, lockouts etc. to arrive at an agreement of, say, a 5.1% wage increase, a result that could have been reached in 5 minutes.

I propose that deception avoidance is one of the main reasons for this drawn-out and expensive process. Participation is a good strategy to minimize the chance of being deceived.

Jones and Pittman (1982) contend that the "trappings of power" reassure the client that the professional knows what he is doing. If he were incompetent, he could hardly afford a Lear Jet or a traveling secretary. "For many of us, self-promotion is almost a full-time job", they conclude.

These aspects of intra-species competition can be found in animals. Deer carry the dead weight of elaborate antlers. The peacock's long tail and the stickleback's brilliant colors, as well as the songs of birds, make these animals more prone to be preyed upon. Evolutionary biologists think that expensive signals are more difficult to falsify. So the fact that they are wasteful and expensive makes them more credible. Zahavi (1975) goes even further; he suggests that the very fact that an individual survived in spite of the unwieldy tail is a signal of his superior qualities. "To avoid deception, females choose on the basis of characteristics that second-rate males are incapable of faking, and that would seem to mean characteristics that cannot be produced cheaply" (Daly & Wilson, p. 133). A second-rate deer cannot survive with enormous antlers, and a second-rate lawyer cannot afford a Lear Jet.

False advertising, when detected, may cause problems for the impostor. The more dark feathers a Harris's sparrow has in his winter plumage, the higher his rank in the dominance hierarchy. S. Rohwer (cited in Daly & Wilson, p. 133) asked why low ranking birds do not lie. He painted low ranking males' feathers dark. "The dominance hierarchy is generally maintained without much overt aggression, but the relative rank of birds of similar status is occasionally tested. And when advertising is then revealed to be false, the aggression persists and intensifies. Honesty seems to be the best policy for a Harris's sparrow" (Daly & Wilson, p. 133). Among humans, a homeless person in an impeccable custom-made suit or a martial arts dud with a black belt around his waist would probably share the same fate: initially undeserved respect; later, when the bluff is detected, strong aggression.

There are several factors which render these strategies stable and self perpetuating in spite of their cost. For example, if all birds of a species raise their feathers in order to appear 30% heavier and more intimidating, a lone individual cannot simply step out of the routine. He would be underestimated and would have to waste energy fighting adversaries who would usually give in voluntarily. Similarly, if every successful professional buys the most expensive car he can possibly afford on credit, the rare corporate executive who would buy a plain car would be underestimated by everyone.

An "intelligent" female peacock who wisely chose a capable mate without the impediment of a long tail would produce sons that are unattractive to other females, and hence reduce her own reproductive success.

Successful sons are especially important because males usually have more variance in reproductive success. Therefore, high-ranking sons confer more reproductive success than high-ranking daughters, while low-ranking daughters confer more reproductive success than low-ranking sons. Surprisingly, statistics show that even in humans the sex ratio varies with socioeconomic status. In the United States, in the lowest socioeconomic groups 96 males are born for every 100 females; in the highest, about 104 males per 100 females (Teitelbaum & Mantel, 1971, cited in Trivers, 1985, p. 298).

E. The cost of courtship

"Animals - including humans - spend an inordinate amount of time getting ready to have sex. Something that could be achieved by mutual agreement in a minute or two is commonly drawn out into hours, days, even weeks of assiduous pursuit, comical misadventure, and brain-numbing stress. In a word: courtship" (LeVay, 1993, p. 57). Because fathering is cheap (one male can fertilize a large number of females), females have acquired the power to choose a mate among a large number of male suitors. In using this power they tend to choose a male with qualities that improve the offspring's chances of survival: either one who provides "good genes" (the football star) or one who promises to be a "good father" (the reliable husband) who will invest in raising the offspring. Female choice actually produces superior offspring, at least in fruit flies. Experiments have shown that female fruit flies that had the chance to pick among several males have fitter offspring than females in the no-choice condition (Daly & Wilson, 1983, p. 131).

It is well known that human males tend to be deceptive about their reliability as long term fathers, and both sexes tend to deceive about their faithfulness. Similarly, in animals "we may see very costly signals and very cautious receivers. Courtship displays are often remarkable for the ridiculous contortions of males and the apparent indifference of females" (Harper, 1992). I suggest that the large expense of time in courtship is due to the arms race between deception and attempts to foil deception.

Actually, sexual reproduction itself seems wasteful. Males of most species are almost useless; they provide only sperm. Females who could produce genetically identical copies of themselves by simple cell division would easily outreproduce sexually reproducing females. A sexually reproducing couple needs an average of two surviving and reproducing offspring to keep the number of members of the species constant. With non-sexual reproduction, two offspring per mother means doubling the population with every generation, increasing population size 128-fold in 7 generations. Researchers of the few asexually reproducing species have arrived at the consensus that parasites would quickly decimate the asexually produced identical clones. The wasteful effort of sexual reproduction provides the genetic variety needed to resist disease and survive in ever-changing environments (Trivers, 1985, pp. 315-330).
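The doubling arithmetic can be checked in a few lines. This sketch is my own illustration, not from the paper; the function names are invented for the example:

```python
# Illustrative sketch (my own, not from the paper): lineage growth factor
# after n generations, given offspring per brood and parents per brood.
def asexual_growth(offspring_per_mother: int, generations: int) -> float:
    # One parent per brood: each mother is replaced by all her offspring.
    return float(offspring_per_mother) ** generations

def sexual_growth(offspring_per_couple: int, generations: int) -> float:
    # Two parents per brood: each parent is "replaced" by half the brood.
    return (offspring_per_couple / 2) ** generations

asexual_growth(2, 7)  # 2**7 = 128-fold increase in 7 generations
sexual_growth(2, 7)   # stays at 1.0: the population merely holds constant
```

With two offspring per brood, the asexual lineage grows 128-fold in seven generations while the sexual lineage only replaces itself, which is the "twofold cost of sex" the paragraph describes.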

F. Unconsciousness

Of course, a man is not aware of all these biological considerations when he courts a woman, and women do not know the evolutionary reasons for their choice criteria. As I said, consciousness is not required for an evolutionary mechanism to function. In fact, the amount of non-verbal body language transpiring in social interaction exceeds the processing capacity of our conscious mind (see Moscovici, 1992, 1981). Nature did not create any species that consciously pursues the strategy of inclusive fitness maximization and calculates which actions are most apt to achieve this goal. Rather, our instincts and feelings tend to lead us in this direction unknowingly (Hogan, 1982).

V. SOME ASPECTS OF RAVEN'S POWER INTERACTION MODEL UNDER AN EVOLUTIONARY POINT OF VIEW

A. Motivation: Why social influence

Millions of years of evolutionary arms race have produced optimized and sophisticated influence and counterinfluence techniques. They are optimized for the primitive circumstances of hunter-gatherer society. Due to the flexibility of the human brain, we continually develop new techniques based on culture and learning (= software) rather than on genetics (= hardware). These techniques are usually optimal for the purpose of inclusive fitness, especially for avoiding extinction of the tribe through war and assault.

I surmise that it is hard to find a new influencing tactic that cannot be found in some other species. Evolution tends to find all possible strategies in order to occupy diverse ecological niches. The capacity of the human brain should allow a great variety of techniques to be used flexibly by one single individual. It also allows the complexity and flexibility of the strategies to be elevated to heights that simpler organisms cannot reach. Some theorists (Tooby & Cosmides, 1992) think that our brain grew more capable because this made us more efficient at detecting cheaters and deceivers.

Why would one want to use social influence? Raven (1992) describes motives such as the need for power and dominance, the need for status, role requirements, the desire to adhere to social norms, concern for one's image, and the desire to attain extrinsic goals. It is intuitively clear how all these motivations serve inclusive fitness and hence are consistent with the model described so far. Additional motives cited by Raven (1992) are the attainment of extrinsic goals and the desire to benefit or harm the target. These motives, too, usually tend to be in the service of inclusive fitness.

Raven's model also deals with the question of why one would let oneself be influenced, or why one would resist. "Needs for independence, for power, for self esteem, may mitigate against influence, and may indeed lead to reactance" (Raven, 1992). The evolutionary model would predict that people resist influence attempts because these usually serve the influencing agent's selfish interests. Additionally, the target of influence may be "concerned about how s/he would look to third parties if s/he complied" (Raven, 1992). A major factor contributing to the arrest of drunk, boisterous males was the "presence or absence of female onlookers" (Kipnis, 1986, cited in Raven, 1992). This looks very much like a straightforward attempt by the drunks to increase inclusive fitness by impressing the females. The police may be doing the same. I suppose that the females present were young and attractive, and not the arrestees' grandmothers or school principals.

B. Coercion and reward

Coercion and reward come first in Raven's (1992) list of bases of social power (the others are legitimate, expert, referent, and informational power). Coercion and reward function in animals and were, in simplified form, the bread and butter of behaviorist learning experiments. Trivers (1985) suggests that closeness in time between stimulus and reward is the best heuristic nature could have found for inferring causality. As support he cites experiments by Garcia, who demonstrated that nausea induced by x-rays makes rats avoid food ingested many hours earlier rather than the most recently executed action. "In life, some causal connections involve a long time delay, yet they are important for the animal to comprehend. . . . the animal gains from the assumption that bad food or water causes sickness and a whole series of other activities do not" (Trivers, 1985, pp. 105-106). Consciousness is not needed: nature made us find pleasurable what helps survival and offspring production, and aversive what hinders them.

Coercion and reward power require surveillance by the influencing agent (Raven, 1992). The model of the selfish influencing agent explains this well: the target tends to suspect that the agent's desires are to the target's detriment.

C. Referent power

"Referent power depends upon a person's identification with the influencing agent, or at least his or her desire for such identification" (Raven & Rubin, 1983, p. 413); the influencing agent serves as a model. For example, we wear the type of clothes a famous baseball player wears. In this case, we do not suspect that the model is trying to manipulate us selfishly to our disadvantage. Rather, he is acting independently of us (unless we suspect that he or she does so as a manipulative display to influence us, which would undermine referent power). Referent power needs no surveillance, because targets feel they are acting in their own best interest. Parents often find out that children do not do as parents (in their sometimes selfish interest) say, but as parents (without manipulative intent) act. Children are good at detecting, and not following, "manipulative" models who display behavior with the intent that children follow the model.

Referent power facilitates learning from positive models and therefore enhances inclusive fitness. Following the example of popular people also tends to increase liking by third parties, which again increases inclusive fitness.

D. Expert and informational power of medical doctors

Expert power involves following the person who knows best; informational power involves doing what is best for us after an analysis of the facts. If the information is not perceived as given with manipulative intent (which, alas, is often suspected), compliance can be explained by the target's selfish interest in acting optimally, by doing what he perceives to be correct.

a) Weaknesses of expert power, informational power, and statistics

Raven (1992) and Raven and Litman-Adizes (1986) deplore the ineffectiveness of medical expert and informational power. People behave in unhealthy ways in spite of better knowledge.

One reason for this is that evolution made us choose fitness-enhancing behavior not as a result of logical analysis, but through pleasure and aversion. Therefore, it is hard for us to override our liking for sweets with logical nutritional information, and to overcome our aversion to restraint with information about the life-saving features of safety belts.

Furthermore, there is no evolutionary precedent for peer-reviewed research and unbiased statistics on large random samples. We are not prepared to value them as highly as we should. And even this research is not bulletproof; it often succumbs to the researcher's basic belief system or his greed for recognition. The evolutionary precedent of selfishness, of everyone for himself, pollutes even academic research.

The evolutionary hypothesis does not merely suggest that our genetic hard-wiring predisposes us to such behavior. Additionally, I suggest that our present environment is of the same kind as before: individuals using all available methods to pursue goals to their individual advantage. Therefore, the old strategies, based on the survival of the selfish individual and tested over billions of years of evolution, are still the most successful ones, even if those strategies are not transmitted genetically. Communism failed, in my opinion, because it was vulnerable to invasion by cheaters and because it required pure altruism. Voluntary and unsurveilled altruism towards non-kin and non-reciprocators is not an evolutionarily stable strategy. In nature, only closely related individuals, like ants and bees, display totally unselfish behavior. This behavior is detrimental to individual fitness but has been shown to be optimal for inclusive fitness.

So why do people tend not to follow doctors' orders? The assumption of an arms race between deception and countermeasures agrees with the observations. Patients are predisposed to distrust the doctor; they may even meet him with more distrust than he deserves, since there is no evolutionary precedent for the kind of controls we have on medical research. But the mistrust is not totally unjustified. Human nature's inherent selfishness lurks and finds its way wherever it can. Raven and Litman-Adizes (1986) suggest that "health professionals tended to discourage the use of informational influence in relating to patients, since it was looked upon as a threat to the medical profession. . . . Indeed, the patient may become more self-sufficient and less dependent upon the practitioner." Furthermore, professional "ethics" tend to defend the professional's private interests against the client's. Finally, it is intriguing that medical science insists that today's state of the art is the truth and that patients should trust it. This occurs in spite of the fact that, historically, a very large percentage of one decade's scientific "truths" turned out to be the next decade's untruths or laughing stocks.

But modesty and excessive realism were not advantageous in prehistoric times, and neither are they today. Self-confidence is impressive even when it is false. Patient "satisfaction is inversely and significantly correlated with the patient's perception of uncertainty in the physician." "Clinicians often equate confidence with competence, a perception that may be shared by patients" (Baumann et al., 1991, p. 167). This is also dangerous in court proceedings, as the "jury may well accept the opinion of an expert who exudes confidence over the opinion of an opposing expert who expresses appropriate caution" (p. 173).

b) Overconfidence in choice of medical treatment

Hence, overconfidence is advantageous for status and success, and therefore for reproductive success. And, as predicted, this type of deception becomes automatic, and the influencing agent himself becomes more credible by believing in his own false confidence.

For example, among people with a cough who were diagnosed as having pneumonia with 88% confidence, only 20% actually had pneumonia (Lichtenstein, Fischhoff, & Phillips, 1982, p. 321). Baumann et al. (1991) tested physicians with precise descriptions of a woman's breast cancer case. They found micro-certainty (high confidence expressed by the individual physician about his decision) in spite of great macro-uncertainty (great variation of chosen actions across individuals). This macro-uncertainty expresses the uncertainty of the profession as a whole. A woman might have a radical mastectomy, chemotherapy, or perhaps no treatment at all, depending on which doctor she happens to meet. And she will not be told that the treatment she receives depends on her doctor's individual preference, which differs greatly from other doctors' choices. "Micro-certainty [. . .] is likely to mislead patients as to the true state of clinical opinion, and lessen their role in decision making about their own health" (p. 173). It may also "impede the self-scrutiny required to implement quality assurance programmes".
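The micro/macro distinction can be made concrete with a small sketch. All numbers below are invented for illustration and are not Baumann et al.'s data: micro-certainty is measured as the mean confidence each clinician states in his own choice, macro-uncertainty as the rate of disagreement across clinicians.

```python
# Hypothetical illustration of the micro-certainty / macro-uncertainty gap.
# Micro-certainty: each clinician's stated confidence in his own choice.
# Macro-uncertainty: disagreement across clinicians about what to choose.
from collections import Counter

cases = [
    # (clinician's chosen treatment, stated confidence in that choice)
    ("mastectomy", 0.95), ("chemotherapy", 0.90), ("mastectomy", 0.85),
    ("no treatment", 0.92), ("chemotherapy", 0.88), ("lumpectomy", 0.91),
]

micro_certainty = sum(conf for _, conf in cases) / len(cases)

counts = Counter(choice for choice, _ in cases)
# share of clinicians NOT choosing the single most common option
macro_uncertainty = 1 - max(counts.values()) / len(cases)

print(f"mean stated confidence: {micro_certainty:.2f}")   # high (~0.90)
print(f"disagreement rate:      {macro_uncertainty:.2f}") # also high (~0.67)
```

Both numbers come out high at once, which is exactly the pattern the text describes: individually confident doctors who collectively disagree.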

As a result of the arms race between deception and counter measures, expert power's credibility is enhanced by the expert's deceivingly secure attitude and self deception. Overconfidence seems to be as necessary and adaptive as the positive illusions described by Taylor and Brown (1988).

Of course, overconfidence backfires when it is exposed. Therefore confidence should be higher than warranted, but not exceedingly so. In Baumann et al. (1991), the danger of detection was minimal. I would predict that a high probability of being exposed decreases overconfidence: if the target were not an unsuspecting patient but a professor and cancer expert examining the doctor's knowledge for continuing-education credit, I would predict greatly reduced overconfidence.

c) Mistrust towards expert power

Mistrust towards influencing agents can also explain negative expert power: "But it has been observed that sometimes we may do exactly the opposite of what the influencing agent does or desires that we do [what Hovland, Janis, & Kelley (1953) called the 'boomerang effect']. . . . We assume that he [an aggressive used car salesman] is using his expertise in his own best interests, not in ours" (Raven, 1992). In other words, if he warmly recommends a certain car, it might be the most overpriced or problematic car in the lot. Honesty does not pay unless it brings future gains in reputation, and used-car salespeople often do one-shot deals. Selfish defection is the best strategy in short-term relationships, as shown by Axelrod's game-theoretical work on the prisoner's dilemma (cited in Dawkins, 1989). In long-term relationships, cooperation is advantageous. It develops so naturally and spontaneously that it frequently made soldiers of opposing armies in World War I trenches cooperate by staging noisy mock shootouts while carefully avoiding hurting any opponent who in turn would not hurt them (Dawkins, 1989, p. 228). And people who expect long-term interactions with us are more likely to give honest information about the car they sell. This does not occur because of the person's inherent "goodness," but because his selfish long-term interests are best served by a good reputation and a continued alliance with us. Of course, he would not want to admit this to us (= deception) nor to himself (= self-deception).
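Axelrod's point can be sketched in a short simulation. The payoff values below (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0) are the conventional ones from the game-theory literature, not taken from Dawkins's text: defection wins a one-shot encounter, while reciprocal cooperation (tit-for-tat) earns far more over a long relationship.

```python
# Iterated prisoner's dilemma: one-shot defection vs. long-term reciprocity.
# Standard payoffs: (my move, your move) -> (my points, your points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds):
    """Play two strategies against each other; each sees the other's history."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa; score_b += pb
        hist_a.append(move_a); hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # cooperate, then copy
always_defect = lambda opp: "D"                        # the one-shot salesman

print(play(always_defect, tit_for_tat, 1))    # (5, 0): defector wins one-shot
print(play(always_defect, tit_for_tat, 100))  # (104, 99): defection gains little
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300): cooperation pays
```

This mirrors the used-car example: in a one-shot deal, defection (dishonesty) is the winning move; once interactions repeat, mutual cooperation outearns exploitation.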

To summarize: expert power is prone to be used for selfish and manipulative purposes, even if the expert denies this and even if the expert himself believes it is not the case. Therefore it is met with innate distrust, even when the target cannot point out the nature of the suspected manipulation. The same is valid for informational power, because information is rarely free-standing but usually depends on expert information. A simple piece of information like "You should eat apples because they contain lots of Vitamin C" implicitly requires the target to believe that Vitamin C exists, that it is contained in apples, and that it is good for us.

This mistrust hypothesis would lead to the prediction that the more the target can verify the data and ascertain that he or she is not being deceived, the more rationally he or she will act in his own best interest. The positive results of the "mutual participation model" (Raven and Litman-Adizes, 1986) seem to confirm this prediction.

VI. AN ALTERNATIVE UTOPIA

"Imagine all the people / living life in peace" (John Lennon, 1971). Nothing to kill or die for, no hunger, no heaven, and no religion. I would add more points. Imagine we could use our cars and clothes until they wear out, did not need to pay for prestigious brands in the first place, and the goods were made for maximum usefulness, not for flashiness and planned obsolescence. Imagine we used cars only when absolutely necessary, because we unselfishly concluded that bicycles are better for the air we all breathe and for our natural environment. Imagine we enjoyed the benefit of healthy exercise by walking and bicycling instead of succumbing to our preprogrammed tendency to evade avoidable physical effort, and hence were not beset by the health damage that comes with our sedentary lifestyles.

Imagine what would happen if we renounced social influence through violence (war and crime) and through deception (marital infidelity, tax fraud): 95% of all topics for novels and movies would disappear! Imagine we could find sexual partners without lies and manipulations, without having to spend decades acquiring useless status and beauty. And imagine humanity would, for the good of our own and other species, voluntarily cease selfish behavior and even stop the population explosion. I estimate that over 90 percent of our working time and financial expenses would immediately be freed.

But I forgot! Even if our general predispositions allowed the utopia, a few selfish individuals could take advantage of the system. Then we would need to be careful to prevent cheating. The cheaters would evade our precautions by cheating in more sophisticated ways. It would pay to trust only expensive impression-management displays that are hard to falsify. Sorry! We're back to square one.

VII. SUMMARY

People tend to influence others for selfish reasons. They tend to hide this fact from others and even from themselves. Targets of influencing attempts act as if they knew the influencing agent cannot be trusted. An extraordinary amount of effort is devoted to impression management, the effort to establish credibility.

The arms race between influencing agent and target, between deception and defenses against deception, is very expensive. Impression management is a full-time job, and the other full-time job in life serves to acquire the finances needed to buy the paraphernalia (designer clothes, cars, condos, and prestigious schools) to impress with. The very fact that these items are expensive and difficult to obtain makes them hard to fake and therefore more credible.

All animals are genetically programmed to maximize their inclusive fitness (the number of their genes in the gene pool of future generations). Humans have genetic and cultural tendencies to maximize inclusive fitness. Social influence and even altruism tend to be in the service of inclusive fitness maximization. Everyone seeks his maximum benefit. Alliances with nonkin are utilitarian.

Deception will be used whenever useful. This fact should be hidden: one's reputation is enhanced by being seemingly altruistic. We would want to deceive others about our selfishness and deception. Furthermore, we can deceive better when we ourselves are convinced of what we say. We tend to deceive ourselves, but often a part of us knows the repressed truth.

After all this pessimistic outlook, is there any reason for optimism? Maybe we can change if we become aware of our unawareness, if we stop deceiving ourselves and others about the fact that we are deceiving. Change would require that true and ruthless honesty be socially acceptable, and mere attempts at deceiving be stigmatized. If true honesty and awareness pay, if they increase inclusive fitness, our fitness maximizing instincts will embrace them.

VIII. REFERENCES

Baumann, A. O., Deber, R. B., Thompson, G. G. (1991). Overconfidence among physicians and nurses: The 'micro-certainty, macro uncertainty' phenomenon. Social Science and Medicine, 32, 167-174.

Cialdini, R. B. (1993). Influence: science and practice. New York: Harper Collins.

Cronk, L. (1991). Communication as manipulation: Implications for biosociological research. Unpublished paper presented at the American Sociological Association Annual Meetings. Cincinnati, Ohio.

Daly, M., & Wilson, M. (1983). Sex, evolution, and behavior (2nd ed.). Boston, Ma.: PWS Publishers.

Darwin, C. R. (1859). The origin of species. London: John Murray.

Dawkins, R. (1989). The selfish gene (New ed.). Oxford, Great Britain: Oxford University Press.

de Waal, F. B. M. (1987). Dynamics of social relationships. In Smuts, B. B., Cheney, D. L., Seyfarth, R. M., Wrangham, R. W., & Struhsaker, T. T. (Eds.), Primate societies, 421-429. Chicago, Ill.: University of Chicago Press.

Essock, S. M., McGuire, M. T., & Hooper, B. (1988). Self-deception in social-support networks. In Joan S. Lockard & Delroy L. Paulhus (Eds.) Self deception: an adaptive mechanism? Englewood Cliffs, N. J.: Prentice Hall.

Gur, R. C., & Sackeim, H. A. (1979). Self-deception: A concept in search of a phenomenon. Journal of Personality and Social Psychology, 37, 147-169.

Harper, D. G. (1992) Communication. In Krebs and Davies, Behavioral ecology, (3rd ed.), pp 375-397

Hogan, R. (1982). A socioanalytic theory of personality. In M. M. Page and L. A. Pervin (Eds.) Nebraska Symposium on Motivation. Lincoln: University of Nebraska Press.

Irons, W. (1991). How did morality evolve? Zygon, 26, 49-89.

Jones, E. E., Pittman, T. S. (1982). Toward a general theory of self presentation. In Jerry Suls (Ed.) Psychological Perspectives on the Self. Vol. 1, (pp. 231-263). Hillsdale, NJ: Erlbaum.

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kaplan, H. (1987). Human communication and contemporary evolutionary theory. In Stuart J. Sigman (Ed.) Research on Language and Social Interaction, 20, 79-139.

LeVay, S. (1993). The sexual brain. Cambridge, Ma.: MIT Press.

Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306-334). New York: Cambridge University Press.

MacKay, D. G. (1988). Under what conditions can theoretical psychology survive and prosper? Integrating the rational and empirical epistemologies. Psychological Review, 93, 559-565.

MacKay, D. G. (1993). The theoretical epistemology: A new perspective on some longstanding methodological issues in psychology. In G. Keren & C. Lewis (Eds.), Methodological and quantitative issues in the analysis of psychological data, pp. 229-255. Hillsdale, N.J.: Erlbaum.

Mitchell, R. W. (1986). A framework for discussing deception. In R. W. Mitchell and N. S. Thompson (Eds.) Deception, perspectives on human and nonhuman deceit. Albany, N.Y.: State University of New York Press.

Moscovici, S. (1992). The return of the unconscious. Social Research, 39-93.

Moscovici, S. (1981). Bewußte und unbewußte Einflüsse in der Kommunikation. Zeitschrift für Sozialpsychologie, 12, 93-103.

Nisbett, R. E., Ross, L. (1980). Human inference: strategies and shortcomings of social judgment. Englewood Cliffs, N.J.: Prentice Hall.

Paulhus, D. (1986). Self-deception and impression management in test responses. In Angleitner, A. & Wiggins, J. S., Personality assessment via questionnaires. New York, NY: Springer.

Paulhus, D. (1982). Individual differences, self-presentation, and cognitive dissonance: their concurrent operation in forced compliance. Journal of Personality and Social Psychology, 43, 838-852.

Raven, B. H. (1992). A power/interaction model of interpersonal influence. Journal of Social Behavior and Personality, 7, 217-244.

Raven, B. H., & Litman-Adizes, T. (1986). Interpersonal influence and social power in health promotion. Advances in Health Education and Promotion, Vol. 1, Pt. A, 181-209.

Raven, B. H., & Rubin, J. Z. (1983). Social psychology (2nd ed.). New York, N.Y.: Wiley.

Sears, D. O., Peplau, L. A., & Taylor, S. E. (1991). Social psychology. Englewood Cliffs, N.J.: Prentice Hall.

Shaw, R. P., & Wong, Y. (1989). Genetic seeds of warfare: Evolution, nationalism, and patriotism. Boston, Ma.: Unwin Hyman.

Taylor, S. E. (1989). Positive illusions: Creative self-deception and the healthy mind. New York: Basic Books.

Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193-210.

Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20, 410-433.

Tooby, J., & Cosmides, L. (1992). The psychological foundations of culture. In Barkow, Cosmides, & Tooby (Eds.), The adapted mind. New York: Oxford University Press.

Trivers, R. (1985). Social evolution. Menlo Park, Ca.: Benjamin/Cummings.

Ury, W. (1993). Getting past no: Negotiating your way from confrontation to cooperation. New York: Bantam.

Zahavi, A. (1975). Mate selection- a selection for a handicap. Journal of Theoretical Biology, 53, 205-214.


Janus Head

The Psychology of Self-Deception as Illustrated in Literary Characters

Christopher Frost

Southwest Texas State University

Michael Arfken

Dylan W. Brock

Few people nowadays know what man is. Many sense this ignorance and die the more easily because of it . . . I do not consider myself less ignorant than most people . . . I have been and still am a seeker, but I have ceased to question stars and books; I have begun to listen to the teachings my blood whispers to me. My story is not a pleasant one; it is neither sweet nor harmonious as invented stories are; it has the taste of nonsense and chaos, of madness and dreams like the lives of all men who stop deceiving themselves. (Hesse 105)

Introduction

We have all experienced insight resulting from the recognition that some prior belief or perception was incorrect. In this instance, pleasure and happiness may result from the intrinsic delight that often accompanies authentic learning. Conversely, anxiety and fear may result from a disturbing realization: if what I once believed to be true now appears false, other beliefs may prove to be false as well. The intensity of the response to each insight is relative to the salience of the knowledge domain: namely, how central the notion is to an individual’s sense of self. Therefore, if the new insight involves self-understanding, accepting the new information would obviously entail altering self-perception. In this case, the “saliency test”—a test to see whether information is relevant to self and hence worthy of attention—is met, regardless of how inconsequential the information might appear to an outside observer. Thus, the potential exists for any kind of new self-referential information to be emotionally laden, which means that the potential for invoking anxiety or fear is exacerbated.
We are continually flooded with information that could challenge self-image. In an effort to avoid damaging it, we often deceive ourselves. The purpose of this inquiry is to define self-deception, its potential, its functions, and the range of strategies that are employed in avoiding or distorting information that conflicts with self-perception. In doing so, we attempt a phenomenology of self-deception. Given the inherent paradox of the subject matter—the possibility that anything we bring to bear based on our own experiences might itself be a deception—we turn to literary characters for insight, namely Jean-Baptiste in Camus’s The Fall, Captain Vere in Melville’s Billy Budd, Sailor, Howard Campbell in Vonnegut’s Mother Night, and the Mariner in Coleridge’s “The Rime of the Ancient Mariner.”

I. Definitional and Conceptual Issues

According to Freud, knowledge begins with perception and ends with responses. As information flows, it can be diverted, transformed, or erased. A modification begins at the first perceptible moment, when information passes through a “first memory system,” or what contemporary cognitive psychologists refer to as the “sensory memory”:

Freud’s prescience is exemplified in his positing a perceptual capacity that has no memory of its own, takes fleeting note of the sensory world, but stores no lasting impressions. He saw that the functions of receiving sensory signals and registering them are separate, a fact later borne out by the neurophysiology of the sensory cortex. It was not until 1960 that his description of perception found a scientific basis with the experimental discovery of what we today call “sensory storage,” a fleeting, immediate impression of our sensory world. (Goleman 58)

From this first memory system, information can either dissipate or continue to flow to one of a number of other memory systems. As it does, only a small percentage enters conscious awareness; the rest resides below its threshold. According to Freud, the key tenet of self-deception is that, though we are not aware of the existence of this information, it exerts considerable influence over our behavior. Once memories are somehow designated as “threatening,” the information is either transformed (via mechanisms of defense) or barred from conscious awareness by cognitive censors. The censors filter out information likely to provoke pain or anxiety, while allowing non-threatening information to flow. The immediate relevance of Freud’s model to the phenomenon of self-deception is readily apparent. Each lacuna (perceptual gap or cognitive omission) prevents an accurate or complete perception of reality. But, because we are seldom aware of the lacuna, we believe our cognitions accurate.

Jean-Paul Sartre also addressed self-deception, or, as he termed it, “mauvaise foi” (“bad faith”). In his discussion of “bad faith,” he defines consciousness as “a being, the nature of which is to be conscious of the nothingness of its being” (Sartre 147); like William James, he perceives that consciousness would be more accurately conveyed as a verb than as a noun. The apprehension of its own “nothingness,” which creates a sense of “lacking” or “need,” directs itself towards some type of understanding, similar to William James’ link between attention and meaning. A thing may be present to a person a thousand times, but if it goes completely unnoticed by the individual, it cannot be said to enter his experience. A person’s “empirical thought depends on the things he has experienced, but what these shall be is to a large extent determined by his habits of attention” (James 286). Thus James concludes that all of our consciousness—our sense of meaning, our very sense of self—must be constructed from material to which we have attended. The meaning we derive as we experience life, the consciousness that is a stream of this ongoing experience, and the self that we construct as a personal representation of consciousness are all dependent upon our habits of attention. Take the slave, for example, who, unmindful of his severe constraints, suddenly realizes his current position and now attends to the advantages enjoyed by his master. The freedom his master enjoys becomes very appealing. His awareness, however, of severe punishment or even death for pursuing this freedom causes him to bury this realization in a morass of reasons why the life of the slave is enviable. In Sartre’s view, he now exists in “bad faith.”

Sartre concedes that “bad faith” can best be understood as “a lie to oneself, on condition that we distinguish the lie to oneself from lying in general” (Sartre 148), which requires another person. Herein lies a key distinction in his formulation: The liar, in order to complete his task, must maintain complete lucidity about some truth that he possesses. One cannot lie without possessing some personal truth, and lying is different than simply being in error. Taking it further, Sartre directly criticizes Freud’s model, especially the concept of the censor:

Thus, psychoanalysis substitutes for the notion of bad faith, the idea of a lie without a liar; it allows me to understand how it is possible for me to be lied to without lying to myself since it places me in the same relation to myself that the Other is in respect to me. (Sartre 154)

In other words, Sartre argues that the censor must know a truth in order to provide the resistance that Freud describes. “There must be an original intention and a project of ‘bad faith’; this project implies a comprehension of ‘bad faith’ as such and a pre-reflective apprehension [of] consciousness as affecting itself with ‘bad faith’” (Sartre 150-151). To deceive ourselves “successfully,” we must pre-reflectively be aware that we are acting in “bad faith.” Placing the source of “bad faith” in a “location” of the mind that cannot be easily accessed (like the Freudian unconscious) renders the project of authenticity virtually impossible. Sartre believed that adhering to a “unity of consciousness” allows the project of “bad faith” to be a conscious project and places the locus of control for “bad faith” with the individual.

Herbert Fingarette attempts to avoid the paradox at the heart of the Freud–Sartre debate (how can one know something and, at the same time, not know it?):

Rather than the paradox of knowing ignorance, I have treated as central the capacity of a person to identify himself to himself as a particular person engaged in the world in specific ways, the capacity of a person to reject such identification, and the supposition that an individual can continue to be engaged in the world in a certain way even though he does not acknowledge it as his personal engagement and therefore displays none of the evidence of such acknowledgment. (Fingarette 91)

We all engage ourselves with the world in some way, but one does not necessarily articulate this engagement; that is, one may fail to reflect on it. According to Fingarette, an individual may either avow this engagement as his own, or disavow it altogether. To disavow selected sequences of engagement is similar to denying responsibility. Although the original project of “bad faith” (disavowing elements of one’s experience) is itself a decision made in “bad faith,” it does not begin as intentional deception. Instead, the decision to avow or disavow is influenced by the threat or reward such an apprehension poses toward self. If choosing to avow a particular engagement with reality threatens the current self-schema, then attention may be directed to another aspect of one’s engagement with the world. In a manner consistent with Festinger’s description of cognitive dissonance, anxiety is avoided by not “noticing” the very thing that threatens one’s identity. The crucial step toward “bad faith” is thus rooted in a failure of attention: we disavow by “not noticing,” and then by failing to notice that we have not noticed. By adhering to this model, Fingarette avoids the infinite regress into which a Freudian view may lapse: if part of our psyche shields another part from awareness, and one defensive maneuver covers another, it becomes impossible to distinguish the last defense from those that preceded it.

But, according to Fingarette, one is not destined to a life of denial and deception. To the contrary, he believes that one may choose the careful, painstaking path of avowing one’s engagements with the world (Fischer 148). Avowing those engagements, however, entails making our motivations apparent, and this is precisely what the individual engaged in self-deception refuses to do. With each omission, the project of deception becomes more rooted in the nature of self.

How well do the theories of Freud, Sartre, and Fingarette capture the essence of self-deception? Is it ever possible to know when one is deceiving oneself or can we only become aware of the deception after it occurs? To answer these questions and more, we examine four literary characters actively involved in self-deception.

II. Jean-Baptiste

Jean-Baptiste, the main character in Camus’ The Fall, serves as a fine example of an individual practicing self-deception. The work describes Jean-Baptiste’s confession to a man in a bar, and throughout, he emphasizes his extraordinary ability to forget: “To be sure, I knew my failings and regretted them. Yet I continued to forget them with a rather meritorious obstinacy” (Camus 76). This admission seems peculiar; most people do not boast of their forgetfulness. But, as our self-deception theorists remind us, forgetting something—especially something relevant to self—can be a useful tool for maintaining consistency and avoiding anxiety or pain. In this case, the fact that Jean-Baptiste regretted his failings illustrates that he was aware of them. In addition, the pleasure derived from his superior ability to forget indicates that these failings must have initially created considerable anxiety. The following passage suggests a purpose to his motivated forgetting:

In the interest of fairness, it should be said that sometimes my forgetfulness was praiseworthy. You have noticed that there are people whose religion consists in forgiving all offenses, and who do in fact forgive them but never forget them?  I wasn’t good enough to forgive offenses, but eventually I always forgot them. And the man who thought I hated him couldn’t get over seeing me tip my hat to him with a smile. According to his nature, he would then admire my nobility of character or scorn my ill breeding without realizing that my reason was simpler: I had forgotten his very name. The same infirmity that often made me indifferent or ungrateful in such cases made me magnanimous. (Camus 49-50)

Notice that at the same time Baptiste is confessing his forgetfulness, he paradoxically identifies the individuals he has supposedly forgotten. Therefore, he has not really forgotten, nor has he exchanged forgetting for forgiveness. Consider another instance of his lapse in memory:

I contemplated, for instance, jostling the blind on the street; and from the secret, unexpected joy this gave me, I recognized how much a part of my soul loathed them; I planned to puncture the tires of invalids’ vehicles, to go and shout “lousy proletarian” under the scaffoldings on which laborers were working, to slap infants in the subway. I dreamed of all that and did none of it, or if I did something of the sort, I have forgotten it. (Camus 91-92; italics ours)

Baptiste’s desire to engage in destructive and antisocial behavior is set against his ability to forget these impulses; motivated forgetting contributes to his positive self-image.

Jean-Baptiste avoided telling the man in the bar that he did nothing to prevent a woman from committing suicide (only later does the reader make this unsettling discovery). The following passage suggests that though he avoided dealing with the woman at the time, it affected him:

Whether ordinary or not, it served for some time to raise me above the daily routine and I literally soared for a period of years, for which to tell the truth, I still long in my heart of hearts. I soared until the evening when . . . But no, that’s another matter and it must be forgotten . . . I ran on like that, always heaped with favors, never satiated, without knowing where to stop, until the day—until the evening rather when the music stopped and the lights went out. (Camus 29-30)

Again we confront a paradox: it is Jean-Baptiste alone who broaches suicide, while he simultaneously suppresses the thoughts from consciousness in order to forget.

Psychological research on memory suggests that the suppression of a painful thought can lead to an obsession with the suppressed memory (Wegner et al.). In fact, the difficulty in suppressing even a simple, non-painful thought can be easily illustrated in Wegner’s challenge, which we urge the reader to undertake: “Right now, try not to think of a white bear. Keep trying. Do not think of a white bear. Remember, don’t think of a white bear.” The dilemma is evident, suggesting the complexity of mental processing required in simply forgetting a white bear. When we attempt to forget an experience that is rooted in reality and painful to behold, the difficulty can only be compounded.

Camus, a keen observer of human experience, recognized that multiple themes define the overall project of self-deception. While motivated forgetting provides one possibility, another is laughter, which appears throughout Baptiste’s confession. At one point, Jean-Baptiste states, “I again began to laugh. But it was another kind of laugh; rather like the one I had heard on the Pont des Arts. I was laughing at my speeches and my pleadings in court” (Camus 65). Baptiste realizes the absurdity of his actions as a lawyer when he questions his own arguments. Laughter serves to close the gap between what he believes and how he presents himself; Jean-Baptiste laughs to avoid the pain of incongruity.

Jean-Baptiste is playing the part of a lawyer, and as Sartre contends, we assume any convenient role in order to avoid making decisions. When the role gains ascendance over self, we can simply respond reflexively to its demands by thinking and feeling nothing. His lack of awareness is evident throughout his confession:

Why, shortly after the evening I told you about, I discovered something. When I would leave a blind man on the sidewalk to which I had convoyed him, I used to tip my hat to him. Obviously the hat tipping wasn’t intended for him, since he couldn’t see it. To whom was it addressed?  To the public. After playing my part, I would take the bow. Not bad, eh?  (Camus 47)

To be sure, I occasionally pretended to take life seriously. But very soon the frivolity of seriousness struck me and I merely went on playing my role as well as I could. (Camus 87)

Playing a role, or as Fromm would put it, “escaping from freedom,” allowed Jean-Baptiste to avoid responsibility. By purposefully forgetting aspects of himself, and laughing at the “frivolity” of his endeavors, he continues playing a role. The dictates of the role, in turn, provide a false sense of consistency.

Ultimately, being judged is Baptiste’s greatest fear, and the avoidance of judgment his greatest motivation:

But I was aware only of the dissonance and disorder that filled me; I felt vulnerable and open to public accusation. In my eyes my fellows ceased to be the respectful public to which I was accustomed. The circle of which I was the center broke and they lined up in a row as on the judges’ bench. In short, the moment I grasped that there was something to judge in me, I realized that there was in them an irresistible vocation for judgment. Yes, they were there as before, but they were laughing. (Camus 78)

Baptiste’s awareness of this internal conflict leads him to the realization that if he can judge himself, then so can everyone else. By refusing to acknowledge faults in himself and constructing a view of self without them, he can easily defend against the judgments of others.

In spite of Baptiste’s self-deceptive behaviors, evidence that is “out of character,” contradicting his self-concept, still manages to break through to awareness. To combat it, Baptiste practices diffusion:

But to be happy it is essential not to be concerned with others. Consequently, there is no escape. Happy and judged, or absolved and wretched. (Camus 80)

That is what no man (except those who are not really alive—in other words, wise men) can endure. Spitefulness is the only possible ostentation. People hasten to judge in order not to be judged themselves. (Camus 80)

Now my words have a purpose. They have the purpose, obviously, of silencing the laughter, of avoiding judgment personally, though there is apparently no escape. Is not the great thing that stands in the way of our escaping it the fact that we are the first to condemn ourselves? Therefore it is essential to begin by extending the condemnation to all, without distinction, in order to thin it out at the start. (Camus 131)

The sting of incongruent information can be softened: It is not that “I” am that way; it is rather that “everyone” is that way. In one sense, Jean-Baptiste’s strategy to avoid judgment is similar to the idea of diffusion of responsibility (Darley & Latané; Latané & Nida). Rather than hold himself responsible, he attributes the characteristics to everyman. In another sense, this mirrors Freud’s notion of projection, save for a minor modification: I see the characteristic as “within me” at the same time that I project it onto “you.” In Freud’s scheme, we make such projections while denying projected content as relevant to self.

The more I accuse myself, the more I have a right to judge you. Even better, I provoke you into judging yourself, and this relieves me of that much of the burden. (Camus 140)

Jean-Baptiste’s strategy follows a certain logic: If I find something undesirable within myself, then, whether they are aware of it or not, other people must have this same attribute. If everyone else possesses this negative characteristic, then there is nothing particularly wrong with me. The undesirable attribute is not distinctive to self, and therefore, does not need to be incorporated into my self-concept; the “saliency test” is no longer met.

III. Captain Vere

In Melville’s novella, Billy Budd, Sailor, the simple story of a conflict between shipmates plays a subservient role to the discussion of self-deception. At the center of this discussion is the Captain, Edward Vere. The plot concerns Billy Budd, a moral young sailor, who accidentally kills his superior, John Claggart, the evil master-at arms. When this event occurs, Vere must make a crucial decision: Should he uphold naval law and condemn Billy to death, or do what is morally right, opt for another punishment, and let him live? He knows that Claggart falsely accused Billy of mutiny. He also knows that Billy has a speech impediment, and therefore has to resort to using his fist to defend against the accusation. No sooner than Billy accidentally lands the fatal blow, Vere has already sealed the sailor’s fate. He states of Claggart: “Struck dead by an angel of God, yet the angel must hang” (Melville 101). The reader immediately notices a conflict arising in Vere. He considers Billy an angel, but believes that he must sentence him to death. The reader asks: How can one condemn an angel to death? The answer

lies in a study of Vere’s self-deception.

There is a constant struggle between Vere’s morality and the naval laws he must uphold as captain. In deceiving himself, Vere, like our other literary characters, is able to justify his actions and resolve the struggle. The work itself is rather deceptive, so when an index is given, we must look beyond what is stated to what is implied about this struggle. In Chapter 11, the dialogue between the narrator and “his senior” is one such time. The narrator states:

Knowledge of the world assuredly implies the knowledge of human nature, and in most of its varieties.

His “senior” replies:

Yes, but a superficial knowledge of it, serving ordinary purposes. But for anything deeper, I am not certain whether to know the world and to know human nature be not two distinct branches of knowledge, which while they may coexist, yet either may exist with little or nothing of the other.  (Melville 75)

This exchange suggests that one may be knowledgeable of the world, or reality, yet create a division between an understanding of human nature, or the identity of true self, and consciousness. For one who accepts reality and perceives himself accurately, there is no division: “human nature,” or the identity of true self, would be included in “knowledge of the world.” In Vere’s case, they are “branched” by his self-deception.

 
As the previous example illustrates, the reader must recognize the “double meanings” (Melville 49) inherent in almost every aspect of the work. The “right” meaning is sometimes hidden. “Plain readings” do not go well with Melville—the reader must delve deeper. As Watson states, “Though the book be read many times, the student may still remain baffled by Melville’s arrangement of images. The story is so solidly filled out as to suggest dimensions in all directions. As soon as the mind fastens upon one subject, others flash into being” (Watson 44). The following passage may “baffle,” but if we delve further, we can uncover Vere’s deception of self. The narrator states:

Forty years after a battle it is easy for a non-combatant to reason about how it ought to have been fought. It is another thing personally and under fire to direct the fighting while involved . . . Much so with respect to other emergencies involving considerations both practical and moral, and when it is imperative promptly to act . . . Little ween the snug card-players in the cabin of the responsibilities of the sleepless man on the bridge. (Melville 114)

The “battle” metaphorically represents what happened to Billy. Vere’s self-deception is the “noncombatant” who is “reasoning” about those events: “It” was not there; therefore “it’s reasoning” is not based in reality. It speculates about the “oughts.” Its very purpose is to distort reality. His consciousness is “personally under fire” and “involved” in the actual events. Notice how Melville hints at this correlation by describing it as similar to “other emergencies involving considerations both practical and moral.” To drive this home, he utilizes another clever metaphor in the last sentence: Vere’s self-deception is the “snug card players,” which “little weens . . . the responsibilities of the sleepless man on the bridge” (Melville 114, italics ours) or, in other words, his consciousness.

 
Vere knows what is morally right, yet tries to deceive not only himself, but others as well. In Sartre’s terminology, he lives with “bad faith.” He demands “the maintenance of secrecy” (Melville 103) in what turns out to be the fatal meeting between himself, Claggart, and Billy. He knows that this decision is questionable, and, in the end, an open meeting might have prevented the homicide. Additionally, Vere forbids emotions from swaying the jurors’ verdict in the trial. He says that the heart “must here be ruled out” (Melville 111), and they must “strive against scruples that may tend to enervate decision,” due to “paramount obligations” (obligations to a man who is practicing self-deception). He also knows that there is good reason for the jurors’ “troubled hesitancy” (Melville 110) in sentencing Billy to death, but he tells the jurors to follow his example, and “to challenge” their “scruples.” He pleads that they “recur to the facts: In war-time at sea a man-of-war’s-man strikes his superior in grade, and the blow kills” (Melville 111). Billy did kill Claggart, but it was unintentional and precipitated by a serious, false accusation. Therefore, Vere does not really adhere to the facts in the case, and by doing so, he displays a definite “disdain for innocence” (Melville 78).

 
Another example of Vere’s self-deception is the following broad description: “His settled convictions were as a dyke against those invading waters of novel opinion, social, political and otherwise” (Melville 123-124). “Settled convictions” is closed-mindedness, the enemy of accurately perceiving reality, and “otherwise” is all-inclusive: The “invading waters” of accurate self-perception would definitely fall under this description.

 
The following passage describes Vere’s attitude towards his companions concerning their conversations, and it gives the reader additional insight into his self-deception:

Since not only did the Captain’s discourse never fall into the jocosely familiar, but in illustrating of any point touching the stirring personages and events of the time he would be as apt to cite some historic character or incident of antiquity as that he would cite from the moderns. He seemed unmindful of the circumstance that to his bluff company such remote allusions, however pertinent they might really be, were altogether alien to men whose reading was mainly confined to the journals. But considerateness in such matters is not easy to natures constituted like Captain Vere’s. Their honesty prescribes to them directness, sometimes far-reaching like that of a migratory fowl that in its flight never heeds when it crosses a frontier. (Melville 63)

This description clearly illustrates that Vere is not concerned with reality. Instead, his attention is directed at “historic characters” and “incidents of antiquity.” Therefore, he is “unmindful” of the fact that his allusions are lost on his listeners. He bars this information from awareness with cognitive censors in order to reduce the likelihood of experiencing the pain or anxiety inherent in facing reality. Vere fails to be attentive (a fatal step toward “bad faith”); he “never heeds the frontier.” He disavows, and in doing so, fails to accept responsibility. His denial ultimately destroys both himself and Billy.

 
Scholars familiar with the work of Melville know that he is a master at the art of ambiguity, a deceptive, yet effective literary device. He uses ambiguities as sly indexes to how we should read the narrative. The narrative should bring us to certain realizations concerning self-deception, not personal opinions concerning specific events. The following passage describes the closeted interview between Vere and Billy. The scene takes place before Billy’s trial and contains interesting ambiguities that further illustrate Vere’s self-deception:

That the condemned one suffered less than he who mainly had effected the condemnation was apparently indicated by the former’s exclamation in the scene soon perforce to be touched upon . . . Between the entrance into the cabin of him who never left it alive, and him who when he did leave it left it as one condemned to die. (Melville 115-116)

The reader must ask himself: Is the narrator referring to Vere or to Billy? Who is the “condemned one,” and who “effected” the condemnation? Who is the “former”? Who “never left it alive”? This leads to some very weighty conclusions when we consider that the descriptions apply as much to Vere as they do to Billy. Vere’s self-deception would definitely cause him to suffer. Self-deception, in general, can be described as the condemnation of the truth and the killing of reality.

 
When Billy is hanged, the narrator describes Vere’s reaction: “Vere, either thro’ stoic self-control or a sort of momentary paralysis induced by emotional shock, stood erectly rigid as a musket in the ship-armorer’s rack” (Melville 87). This “momentary” paralysis is his consciousness creeping in, but he blocks it from awareness with cognitive censors in order to reduce anxiety and stands rigidly defiant. Like most of us, Vere is not a one-dimensional deviant who enthusiastically embraces evil, but as he continues down a path of deception, he is more than able to sacrifice a human life. Melville attempts to convey to the reader that it doesn’t have to be this way. The novella concludes with Vere murmuring, “Billy Budd, Billy Budd” (Melville 129) on his deathbed. He is remorseful for his actions, and has perhaps gained insight—but much too late and at such a cost.


IV. Howard Campbell

In Mother Night, Vonnegut’s characterization of Howard Campbell, a renowned American-born playwright living in Germany during the Nazis’ ascent to power, illustrates a classic account of self-deception. The work revolves around the repercussions of Campbell’s decision to pose as a Nazi propagandist. The plan is that Nazi war secrets will be encoded in his radio broadcasts, thereby aiding Allied forces. On the surface, Campbell will appear to be a Nazi, but he is actually an Allied supporter. Note that Vonnegut begins the work with a moral to the tale: “We are what we pretend to be, so we must be careful about what we pretend to be” (Vonnegut V). Campbell relays secret messages to Allied forces, but because they are embedded in Nazi propaganda and delivered so persuasively, he inspires the Germans. In the end, we must ask: “Who is Campbell really helping?” The answer to that question turns on the question of identity: Which identity is the “real” Howard Campbell? The following dialogue between Campbell and another character expounds on this question:

“Three people in all the world knew me for what I was–” I said.
“And all the rest–” I shrugged. “They knew you for what you were too,” he said abruptly.
“That wasn’t me,” I said, startled by his sharpness.
“Whoever it was–” said Wirtanen, “he was one of the most vicious sons of bitches who ever lived.” (Vonnegut 138)

The character, Wirtanen, poses the haunting question: If not Campbell, who was this renowned Nazi propagandist? Campbell did not know the answer, and did not realize the effects of his “playing a role.” Vonnegut delves into this further in a conversation between Campbell and his proud Nazi father-in-law:

“And do you know why I don’t care now if you were a spy or not?” he said. “You could tell me now that you were a spy, and we would go on talking calmly, just as we’re talking now. I would let you wander off to wherever spies go when a war is over. You know why?” he said.
“No,” I said.
“Because you could never have served the enemy as well as you served us,” he said. “I realized that almost all the ideas that I hold now, that make me unashamed of anything I may have felt or done as a Nazi, came not from Hitler, not from Goebbels, not from Himmler—but from you.”  He took my hand. “You alone kept me from concluding that Germany had gone insane.” (Vonnegut 80-81)

Further developing the self-deception theme, Vonnegut relates a dialogue between Campbell and Adolf Eichmann that takes place in an Israeli prison following the war:

“May I ask a personal question?” I said . . .
“Certainly . . . ”
“Do you feel that you are guilty of murdering six million Jews?” I said.
“Absolutely not,” said the architect of Auschwitz . . .
“Listen–” he said, “about those six million–"
“Yes?”  I said.
“I could spare you a few for your book,” he said. “I don’t think I really need them all . . . ” It’s possible that Eichmann wanted me to recognize that I had killed a lot of people, too, by the exercise of my fat mouth. But I doubt that he was that subtle a man, man of many parts as he was. I think if we got right down to it, that, out of the six million murders generally regarded as his, he wouldn’t lend me so much as one. If he were to start farming out all those murders, after all, Eichmann as Eichmann’s idea of Eichmann would disappear. (Vonnegut 123-125)

By having Campbell describe Eichmann, Vonnegut offers us a keen glimpse into self-deception: The comments concerning Eichmann can easily be applied to Campbell. If he acknowledged his actions, his false self-concept would collapse. But he doesn’t, and the web of self-deception remains intact.  In the end, the reader is left at precisely the same point as Campbell himself: with a question, but no answer, as to which is the “real” identity.

V. The Mariner

In considering Coleridge’s dark nineteenth-century ballad, “The Rime of the Ancient Mariner,” we organize our treatment of the title character’s self-deception around two central questions. First, what exactly is the Mariner’s “fault”? And second, how does that fault relate to both his and the reader’s perception of reality? In answering these questions, the ballad’s classic interpretations of “sin and redemption” or “crime and punishment” are helpful, but not exhaustive. A deeper analysis of the Mariner’s self-deception hinges on four themes: the Mariner’s insistence on continually relating his story (even after his redemption), the reader’s desire to hear it, the significance of vision, and most important, the concept of relatedness (between the Mariner and fellow beings).

 
To begin, the bird appears and is greeted with unmitigated enthusiasm:

As if it had been a Christian Soul,

       We hailed it in God’s name (Lines 65-66).

Coleridge paints a portrait of relatedness that is positive and glowing, ending with the literal sheen of the moon: “glimmered the white Moon-shine” (78). It is precisely at this point that the listener interrupts and asks: “Why look’st thou so?” (81). And it is with no hesitation and no explanation that Coleridge’s ancient Mariner responds: “With my crossbow I shot the Albatross” (81-82). The following lines illustrate the Mariner’s failure of interpersonal relatedness:

He [the spirit] loved the bird that loved

The man who shot him with his bow (404-405, italics ours).

The Mariner’s real fault lies in the senselessness of the act: Lacking any apparent motive, he slays the bird just the same. This act stems from his will, yet lacks conscious intention. It was committed not as an expression of self, but for reasons unknown. The Mariner’s fault is rooted in self-deception, which, in his case, is rooted in perception.

 
The subtlety with which Coleridge conveys self-deception becomes apparent when, even at the moment the Albatross falls away, the Mariner remains unaware:

A spring of love gushed from my heart,

And I blessed them unaware:

Sure my kind saint took pity on me,

And I blessed them unaware (284-287).

The Mariner’s fault lies in his unawareness. But a shift in perception does occur, and the Mariner perceives the beauty of the water snakes, whereas only moments before he saw “a thousand slimy things” (238). Consequently, this shift causes the Albatross to fall from his neck. But, as the ballad continues, the Mariner confronts the voices within or his “inner self”:

 
But ere my living life returned,

I heard and in my soul discerned

Two voices in the air. (395-397)

The voices point to dissociation, a failure of integration, on the part of the Mariner. Ignoring these “inner voices” allows him to act out without realizing it.

 
Jung believed that the demon we fear the most lies within the psyche, and Coleridge captures this view in the Mariner’s continued fearfulness—even after the Albatross had dropped from his neck:

Like one, that on a lonesome road

Doth walk in fear and dread,

And having once turned round walks on,
And turns no more his head;

Because he knows, a frightful fiend

Doth close behind him tread. (446-451)

An accurate perception of self can be upsetting. The Mariner has gained insight, but has not achieved an integration of self. According to Freud, the price of repression is repetition. The Mariner, though absolved of shooting the Albatross, must nevertheless repeat his narrative to keep from repeating his horrible deed:

Since then, at an uncertain hour,

That agony returns:

And till my ghastly tale is told,

This heart within me burns. (582-585)

Nevertheless, as long as there are inner voices, there is a possibility for the Mariner, and for us, to change. It is here that Coleridge answers the critical question: How does the Mariner’s “fault” relate to readers of his tale? He answers it by framing the tale as an account told by the Mariner to an “innocent” wedding guest, who, upon its conclusion, leaves the Mariner just as readers should: “sadder and wiser.” Like the guest, readers are now wiser because, grasping the same insight as the Mariner, we perceive reality more accurately. We depart sadder, however, because we recognize that the path from self-deception toward self-integration is long, painful, and fraught with obstacles: discordant voices within are not so easily harmonized into a cohesive arrangement. More to the point, the sadness of the wedding guest, and of the reader, stems from a stunning recognition: “I am that Mariner.”

He went like one that had been stunned,

And is of sense forlorn:

A sadder and a wiser man,

He rose the morrow morn. (622-625)

VI. Conclusion

We began this essay with an assertion that everyone has experienced insight that altered some prior perception. As we began to question false ideas concerning self-insight, the complexity of our task grew exponentially. Having turned to literature as a potential source for illumination, what have we learned? Answering this question requires working towards a theory of self that not only allows for the possibility of mistaken or deceptive beliefs, but also embraces them as fundamental to the construction of a self-concept. Although a completely accurate reading of all dimensions of self is impossible, our literary characters suggest that relative degrees of accuracy are attainable. Therefore, the dilemma of self-deception is best approached not as a phenomenal thing, but as a phenomenal process (much like consciousness itself). At every moment of existence, we are flooded with information that potentially challenges our current perception of self. We say, “potentially challenges” because, as our literary characters instruct us, we ignore a large amount of information that conflicts with prior perceptions.

 
Leon Festinger’s theory of cognitive dissonance (1957), for example, addresses the way individuals avoid potentially discrepant information in order to avoid discomfort. Festinger shares a presupposition with many identity theorists: consistency of self and world is a primary motivational attribute. We briefly register new information in sensory memory, giving primacy to information that matches what is already stored in long-term memory, while simultaneously blocking information that contradicts what we already know. Jean-Baptiste illustrates most vividly the role memory plays in self-deception: If he “did something of the sort,” he has “forgotten it” (Camus 92). There are, however, problems with Festinger’s theory: it addresses the “discrepancy” test (information is perceived that conflicts) without adequately considering the “validity” test (the issue of whether that information is accurate or not). Here is a rather mundane example. Consider an advertisement that states: “You are what you wear.” A consumer named Charles hears this message, tests it against his own belief system, and then rejects the proposition as false. For Charles, a person is not reducible to what he wears. A couple of hours pass and he simply forgets about the message. Is he guilty of self-deception? No. Later that day, someone walks up to Charles and says: “You are wearing Flash sneakers, which were manufactured in a sweatshop in Indonesia. You are supporting the oppression of innocent people.” Charles acknowledges that the sneakers are of that brand, and that he is wearing them, but he quickly rejects the accusation of being an “oppressor,” because it is discrepant from his self-identity. As for the information regarding the sweatshop, he eventually “forgets” it, acts as if he never “knew” it, and reminds himself that millions of people wear Flash sneakers. Is he now guilty of self-deception? Assuming the validity of the information regarding the origin of the sneakers, yes. The first case may be seen as an act of self-affirmation, while the second clearly suggests self-deception.

 
Festinger’s theory of cognitive dissonance and Freud’s model of defense take an important step toward understanding self-deception: they describe a cognitive mechanism by which individuals unconsciously reject information that is dissonance-producing. The problem is that one can reject information that is both dissonance-producing and threatening (“you are what you wear”) without practicing self-deception. How, then, are we to distinguish between the two?

 
Sartre’s “bad faith” and Fingarette’s “disavowal” move us toward a theory framed by “integrative” or “harmonizing” motifs, while both Festinger’s and Freud’s theories suggest a dissociative element implicit in self-deception. In a sense, the two sets of theorists provide a glimpse of opposite sides of the same coin. To act in “bad faith,” to “disavow” some facet of our engagement with reality, creates conditions of incongruence and dissociation within our psyche. The disparate elements are buffered and separated by lacunae—blind spots that literally block perception of self, or of reality. Sartre’s concept of “bad faith” can be understood as a full-view mirror for the psyche. To understand the self accurately, we must be aware of our motivations—a process that requires absolute attention to consciousness. While it is possible to split the consciousness, or hide from oneself, we may choose not to do so. Sartre describes this as unity of consciousness, Jung as integration of self. The full-view mirror, however, is not sufficient: we also need a full-length window onto the outside world, because self emerges when reality is accurately perceived. The crucial question, then, and the problem that our literary characters each faced in his own way, is this: Is it possible to look both into a full-view mirror and out of a full-length window at the same time?

 
The integrative task is complex and elusive, it would seem, because it demands a bi-directional gaze: accuracy of both self-perception and world-perception is required, all from a cognitive system that first seeks “consistency” with what knowledge (of both self and world) already exists. At each moment, we are presented with a range of stimuli that far exceeds the capacity of our selective attention. As our focus shifts, the contents of consciousness also imperceptibly shift in pursuit, transferring awareness to our memory. And, as memory researchers warn, when we attend to material previously stored, we reconstruct it in a manner more fitting to our current attentive gaze. Perceiving the reality of self and world is no simple matter. We can understand why Jean-Baptiste begins to remember the night of the woman’s suicide only to relegate it to a matter of lesser importance that must be forgotten; how Vere can become so immersed in his position that he perverts the very justice he is supposed to uphold; how Howard Campbell can assume a role, pretending to be a Nazi, only to become so immersed in the part that he “forgets” he is pretending; and why the ancient Mariner must maintain vigilance for “a frightful fiend,” and shoot the bird that loved him.

 
To speak of complexity and difficulty, however, is not to speak of impossibility. If cognitive psychologists are correct, human beings are capable of “divided attention”; it is possible to gaze into both a full-view mirror and through a full-length window simultaneously. Awareness, attendance to self, and articulation of engagement with the world free the mind from self-deceptive tendencies. Both attentiveness to deception and maintenance of attention become the prime prerequisites of integration. The puzzling paradox of self-deception is that it bestows short-term benefits on the self by helping us maintain consistency in order to avoid anxiety. But this comes at a great price to ourselves in the long term, as well as to others—in both the short term and the long term. If our literary characters are instructive, the greater an individual’s proclivity for self-deception, the more pronounced is that person’s capacity to harm others—without even perceiving his or her actions as harmful. In this way, cruelty becomes deceptively camouflaged. By the time our literary characters witness the suicide of another human being, the hanging of an angel, the murder of six million Jews, or the killing of something that only loves, they have inadvertently turned their attention to the fatal acts themselves and ignored their cause. It may be that the telling of their stories, and the consequences implicit therein, constitute essential first steps for the re-direction of our own attentive processes.

References

 
Camus, Albert. The Fall. New York: Knopf, 1956.

Coleridge, Samuel Taylor. “The Rime of the Ancient Mariner.” Poems. Ed. John Beer. London: J.M. Dent, 1993. 214-255.

Darley, J., and B. Latane. “Bystander Intervention in Emergencies: Diffusion of Responsibility.” Journal of Personality and Social Psychology 8 (1968): 377-383.

Festinger, L. A Theory of Cognitive Dissonance.  Stanford: Stanford UP, 1957.

Fingarette, Herbert. Self-Deception. New York: Humanities P, 1969.

Fischer, W.F. “Self-Deception: An Empirical-Phenomenological Inquiry into its Essential Meanings.” Phenomenology and Psychological Research. Ed. A. Giorgi. Pittsburgh: Duquesne UP, 1985. 118-154.

Goleman, Daniel. Vital Lies, Simple Truths. New York: Simon & Schuster, 1985.

Hesse, Hermann. Siddhartha, Demian, and Other Writings. Ed. Egon Schwarz. New York: Continuum, 1992.

Latane, B., and S. Nida. “Ten Years of Research on Group Size and Helping.” Psychological Bulletin 89 (1981): 308-324.

Melville, Herman. Billy Budd, Sailor. Eds. Harrison Hayford and Merton M. Sealts, Jr. Chicago: U of Chicago P, 1962.

Sartre, J.P. “Bad Faith.” Jean-Paul Sartre: Essays in Existentialism. Ed. W. Baskin. New Jersey: Carol Publishing Group, 1956. 147-186.

Vonnegut, Kurt. Mother Night. New York: Dell, 1961.

Watson, E.L. Grant. “Acceptance.” Critical Essays On Melville’s Billy Budd, Sailor. Ed. Robert Milder. Boston: G.K. Hall,
    1989. 41-45.

Wegner, D.M., D.J. Schneider, S. Carter, and L. White. “Paradoxical Consequences of Thought Suppression.” Journal of Personality and Social Psychology.

How We Perceive Self-Deception

By: Dan Schulman

Summary: Men value competence, women value hard work

When it comes to "buying" excuses, women aren't exactly in the market, according to a new study that explores how men and women perceive self-deception. Men and women alike have long claimed everything from sleep deprivation to debilitating hangovers in an attempt to excuse poor academic, athletic or job performance. Creating a rationale for our shortcomings, or self-handicapping, sidesteps the issue of innate ability--or lack thereof. "Self-handicapping seems to buffer people's self-esteem when they fail," explains study co-author Edward Hirt, Ph.D., a social psychologist at Indiana University Bloomington. "It's also an impression-management strategy, a way to make other people perceive them as competent."

The researchers presented 888 subjects with a scenario in which a student named Chris (the gender was randomly assigned) forgoes studying for an important exam to go to the movies. The marks Chris received on the test varied, as did his/her reasons for slacking off. In one instance Chris' self-sabotage is overt: s/he invites a friend to the movies. In the other, Chris' indiscretion is subtle: a friend invites Chris to the movies. Afterward, participants were surveyed on their perceptions of Chris.

Hirt found that whether Chris was said to be a man or woman did not influence participants' assessments. But men and women viewed Chris' behavior differently. The study, published in the Journal of Personality and Social Psychology, determined that both men and women readily discounted ability as the cause of Chris' poor test results. But women often viewed self-handicapping as a personality flaw: Chris was inherently lazy, unmotivated and lacking in self-control. Male respondents were more hesitant to condemn Chris, though many did admit that s/he would make a poor study buddy. Women also picked up on more obscure forms of handicapping. When Chris accepted an invitation to the movies instead of initiating the trip, women still regarded his/her motives dubiously, but men attributed the act to peer pressure.

Hirt's research appears to pinpoint a fundamental difference in the qualities that men and women value in others. "Guys seem to value competence to a greater extent. They don't really see effort as inherently good by itself.... Women have very strong belief systems about effort withdrawal. They pride themselves on being hard workers," says Hirt.

self-deception

Ninety-four percent of university professors think they are better at their jobs than their colleagues.

Twenty-five percent of college students believe they are in the top 1% in terms of their ability to get along with others.

Seventy percent of college students think they are above average in leadership ability. Only two percent think they are below average. --Thomas Gilovich, How We Know What Isn't So

Eighty-five percent of medical students think it is improper for politicians to accept gifts from lobbyists. Only 46 percent think it's improper for physicians to accept gifts from drug companies. --Dr. Ashley Wazana, JAMA Vol. 283, No. 3, January 19, 2000

People tend to hold overly favorable views of their abilities in many social and intellectual domains....This overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
--"Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," by Justin Kruger and David Dunning Department of Psychology Cornell University, Journal of Personality and Social Psychology December 1999 Vol. 77, No. 6, 1121-1134.

Self-deception is the process or fact of misleading ourselves to accept as true or valid what is false or invalid. Self-deception, in short, is a way we justify false beliefs to ourselves.

When philosophers and psychologists discuss self-deception, they usually focus on unconscious motivations and intentions. They also usually consider self-deception a bad thing, something to guard against. To explain how self-deception works, they focus on self-interest, prejudice, desire, insecurity, and other psychological factors that unconsciously affect the will to believe in a negative way. A common example is that of a parent who believes his child is telling the truth even though the objective evidence strongly supports the claim that the child is lying. The parent, it is said, deceives himself into believing the child because the parent desires that the child tell the truth. A belief so motivated is usually considered more flawed than one due to a lack of ability to evaluate evidence properly. The former is considered a kind of moral flaw, a kind of dishonesty, and irrational. The latter is considered a matter of fate: some people are simply not gifted enough to make proper inferences from the data of perception and experience.

However, it is possible that the parent in the above example believes the child because he or she has intimate and extensive experience with the child but not with the child's accusers. The parent may be unaffected by unconscious desires and may be reasoning on the basis of what he or she knows about the child but does not know about the others involved. The parent may have very good reasons for trusting the child and not trusting the accusers. In short, an apparent act of self-deception may be explicable in purely cognitive terms without any reference to unconscious motivations or irrationality. The self-deception may be neither a moral nor an intellectual flaw. It may be the inevitable existential outcome of a basically honest and intelligent person who has extremely good knowledge of his or her child, knows that things are not always as they appear to be, has little or no knowledge of the child's accusers, and thus lacks sufficient reason for doubting the child. An independent party might examine the situation and agree that the evidence is overwhelming that the child is lying; yet if the parent were simply wrong, we would say that the parent was mistaken, not self-deceived. We consider the parent to be self-deceived because we assume that he or she is not simply mistaken, but is being irrational. How can we be sure?

A more interesting case would be one where (1) a parent has good reason to believe that his or her child is likely to tell the truth in any given situation, (2) the objective evidence points to innocence, (3) the parent has no reason to especially trust the child's accusers, but (4) the parent believes the child's accusers anyway. Such a case is so defined as to be practically impossible to explain without assuming some sort of unconscious and irrational motivation (or brain disorder) on the part of the parent. However, if cognitive incompetence is allowed as an explanation for apparently irrational beliefs, then appeals to unconscious psychological mechanisms are not necessary even in this case.

Fortunately, it is not necessary to know whether self-deception is due to unconscious motivations or not in order to know that there are certain situations where self-deception is so common that we must systematically take steps to avoid it. Such is the case with belief in paranormal or occult phenomena such as ESP, prophetic dreams, dowsing, therapeutic touch, facilitated communication, and a host of other topics taken up in the Skeptic's Dictionary.

In How We Know What Isn't So, Thomas Gilovich describes the details of many studies which make it clear that we must be on guard against the tendencies to

misperceive random data and see patterns where there are none;

misinterpret incomplete or unrepresentative data and give extra attention to confirmatory data while drawing conclusions without attending to or seeking out disconfirmatory data;

make biased evaluations of ambiguous or inconsistent data, tending to be uncritical of supportive data and very critical of unsupportive data.

It is because of these tendencies that scientists require clearly defined, controlled, double-blind, randomized, repeatable, publicly presented studies. Otherwise, we run a great risk of deceiving ourselves and believing things that are not true. It is also because of these tendencies that in trying to establish beliefs non-scientists ought to try to imitate science whenever possible. In fact, scientists must keep reminding themselves of these tendencies and guard against pathological science.
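The first of Gilovich's tendencies, seeing patterns in random data, is easy to demonstrate numerically. The following Python sketch is our own illustration, not drawn from any of the cited studies: it simulates sequences of fair coin flips and counts how often a "suspicious-looking" streak of five or more identical outcomes appears. Such streaks turn out to be the norm in genuinely random data, yet observers readily read them as meaningful patterns.

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical consecutive outcomes."""
    longest = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        longest = max(longest, current)
    return longest

random.seed(0)  # reproducible demonstration

# Simulate many sequences of 100 fair coin flips and see how often a
# streak of 5+ identical outcomes shows up purely by chance.
trials = 1000
with_streak = 0
for _ in range(trials):
    flips = [random.choice("HT") for _ in range(100)]
    if longest_streak(flips) >= 5:
        with_streak += 1

print(f"{with_streak}/{trials} random sequences contained a streak of 5+")
```

The expected longest run in 100 fair flips is roughly log2(100), about six or seven, so the large majority of these perfectly random sequences contain a streak a casual observer would call "a pattern".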

Many people believe, however, that as long as they guard themselves against wishful thinking they are unlikely to deceive themselves. Actually, if one believes that all one must be on guard against is wishful thinking, then one may be more rather than less liable to self-deception. For example, many intelligent people have invested in numerous fraudulent products that promised to save money, the environment, or the world, not because they were guilty of wishful thinking but because they weren't. Since they were not guilty of wishful thinking, they felt assured that they were correct in defending their product. They could easily see the flaws in critical comments. They were adept at finding every weakness in opponents. They were sometimes brilliant in defense of their useless devices. Their errors were cognitive, not emotional. They misinterpreted data. They gave full attention to confirmatory data, but were unaware of or oblivious to disconfirmatory data. They sometimes were not aware that the way in which they were selecting data made it impossible for contrary data to have a chance to occur. They were adept at interpreting data favorably when either the goal or the data itself was ambiguous or vague. They were sometimes brilliant in arguing away inconsistent data with ad hoc hypotheses. Yet, had they taken the time to design a clear test with proper controls, they could have saved themselves a great deal of money and embarrassment. The defenders of the DKL LifeGuard and the many defenders of perpetual motion machines and free energy devices are not necessarily driven by the desire to believe in their magical devices. They may simply be the victims of quite ordinary cognitive obstacles to critical thinking. Likewise for all those nurses who believe in therapeutic touch and those defenders of facilitated communication, ESP, astrology, biorhythms, crystal power, dowsing, and a host of other notions that seem to have been clearly refuted by the scientific evidence. 
In short, self-deception is not necessarily a weakness of will, but may be a matter of ignorance, laziness, or cognitive incompetence.

On the other hand, self-deception may not always be a flaw and may even be beneficial at times. If we were too brutally honest and objective about our own abilities and about life in general, we might become debilitated by depression.

PSYCHOLOGY OF SAFETY: Who are you kidding?

How self-deception blinds us to safety

By Scott Geller

POSTED: 08/01/2003

If only we could all view our organization as a family of people working together to achieve common goals. We see our own family members as people, and we never hesitate to actively care for their safety and health. In fact, we inflate the virtues of those in our family and often communicate these virtues to others. Such "positive gossip" increases appreciation and admiration within the family, making it natural to actively care for each other.

Actively caring behavior in an organization increases directly with the number of employees (including managers) who view their coworkers as "family." We show empathy for family members, we don't betray family members, and any distortions of the qualities of family members are more likely to be positive than negative. People-based safety enables us to successively approximate a family atmosphere in the workplace.

People-based safety

For many years, I've claimed that a comprehensive approach to address the human dynamics of industrial safety requires "people-based safety." This perspective combines the objective, research-supported tools of behavior-based safety (BBS) with person-based safety or the internal, feeling states of individuals.

After submitting my July ISHN article on the soft side of psychology, I came across an intriguing book that was uncannily consistent with my key points last month. The book is "Leadership and Self-Deception" by the Arbinger Institute (San Francisco, CA: Berrett-Koehler Publishers, Inc., 2002). This article reviews the key points of this book as they relate to people-based safety.

Betrayal and deception

We often find ourselves in situations where we consider helping another person but don't follow through, according to the Arbinger Institute. We observe an environmental hazard in a person's work area but don't remove it. We see someone working without the proper safeguards or personal protective equipment (PPE) and walk on by. Or, while buckled-up in a vehicle, we notice that another occupant is not using the available safety belt, but we don't say anything.

This experience of not actively caring when you know you should leads to self-betrayal, and then to self-deception, according to the Arbinger Institute. Since it's uncomfortable to accept and maintain self-betrayal, we engage in a variety of self-deceiving thought processes to justify our lack of caring.

To live with our self-betrayal, we inflate the faults of the person we didn't help while inflating our own virtues. For example, we might presume the worker not using proper PPE is incompetent, lazy, unmotivated, uncaring, or not a team player. At the same time, we pump up our own positive characteristics, including the qualities that make us effective at our important work. We conclude we have better things to do than to help an incompetent, mindless worker decrease risks of personal injury.

Faults over facts

This kind of distorted self-deception encourages fault-finding over fact-finding - a common problem in the safety world. The easiest way to justify one's self-betrayal or lack of actively caring is to blame someone else. It's not my fault for not helping; it's their fault for not being responsible or self-accountable. The defensive personal script might be something like, "Safety is a personal issue. If those workers don't care enough to protect themselves, why should I?"

Notice the self-serving cycle of distortion in fault-finding. The more blame we connect to others, the less possible fault we find in ourselves. To reduce the tension that comes from betraying our better selves, we find the other person(s) blameworthy. This leads to more self-deception, feeding an already distorted reality.

Breaking the cycle

So what can we do about this universal problem?

First, we need to recognize that we don't actively care for others' safety as much as we should and could. Next, we need to own up to the personal misrepresentations of reality we hold in order to justify this self-betrayal. Third, we need to realize the power of seeing others as people with their own feelings, intentions, and aspirations rather than viewing them as merely objects providing a service. When we embrace the human side of our coworkers, we are more likely to actively care for their well-being and less likely to practice self-deception to justify our self-betrayal.

By apologizing for not actively caring and finding other opportunities to actively care and follow through, we break the cycle of self-serving self-deception. Focus your mental scripts on what you do right rather than what you do wrong. This prevents feelings of self-betrayal and a need for self-deception.

Don't blame others for not actively caring. See these individuals as people with a variety of interpersonal and cultural constraints inhibiting their helping behavior. Then, set the actively-caring example yourself. Show you have overcome personal and environmental factors that hold back actively caring behavior and facilitate self-betrayal and self-deception.

Bottom line: Take a people-based approach to industrial safety and health. Realize the power of empathy when listening, recognizing, helping, and leading people. Anything less will initiate a cycle of self-deception that distorts perceptions of people, confines the benefits from interpersonal interaction, and makes it impossible to bring out the best in people and achieve an injury-free workplace.

AUTHOR INFORMATION

Professor of psychology, Virginia Tech, and senior partner with Safety Performance Solutions. For more information call (540) 951-7233 or visit www.safetyperformance.com.

Abstract:

This paper argues that self-deception cannot be explained without employing a depth-psychological ("psychodynamic") notion of the unconscious, and therefore that mainstream academic psychology must make space for such approaches. The paper begins by explicating the notion of a dynamic unconscious. Then a brief account is given of the "paradoxes" of self-deception. It is shown that a depth-psychological self of parts and subceptive agency removes any such paradoxes. Next, several competing accounts of self-deception are considered: an attentional account, a constructivist account, and a neo-Sartrean account. Such accounts are shown to face a general dilemma: either they are able only to explain unmotivated errors of self-perception--in which case they are inadequate for their intended purpose--or they are able to explain motivated self-deception, but do so only by being instantiation mechanisms for depth-psychological processes. The major challenge to this argument comes from the claim that self-deception has a "logic" different to other-deception--the position of Alfred Mele. In an extended discussion it is shown that any such account is explanatorily adequate only for some cases of self-deception--not by any means all. Concluding remarks leave open to further empirical work the scope and importance of depth-psychological approaches.

Do the Self-Deceived Get What They Want?

Eric Funkhouser

Abstract: Two of the most basic questions regarding self-deception remain unsettled: What do self-deceivers want? What do self-deceivers get? I argue that self-deceivers are motivated by a desire to believe. However, in significant contrast with Alfred Mele’s account of self-deception, I argue that self-deceivers do not satisfy this desire. Instead, the end-state of self-deception is a false higher-order belief. This shows all self-deception to be a failure of self-knowledge.

In the last few decades several anthologies and numerous philosophical articles have been dedicated to analyzing self-deception. Though almost all parties agree that self-deception is some type of motivated irrationality, the most fundamental questions about its nature remain unsettled. I aim to go some way towards resolving two of the most basic of such questions: What do self-deceivers want? And what do self-deceivers get? The answers I will offer are in significant contrast with the recently influential account of self-deception offered by Alfred Mele.1

Section 1 addresses the question of what self-deceivers want. In the current debate over the nature of self-deception, a division exists between those who judge self-deception to result from an intention to deceive (intentionalism) and those who hold that the deception is merely the result of some motivational state, typically a desire (motivationalism).2 Though I favor motivationalism, I will not argue against the intentionalist here—motivationalism will simply be assumed. Instead, I am concerned with an internal dispute amongst motivationalists that results from the following question: What is the content of the operative motivational state of self-deceivers?

Two answers have been prominent. Let ‘p’ stand for the proposition that the deception is about. These two candidates for the operative desire in all cases of self-deception are:

World-focused desire: Self-deceivers desire that the world be such that p.

Self-focused desire: Self-deceivers desire that they be such that they believe that p.3

The self-focused version is endorsed in section 1. Those who endorse the self-focused version typically assume, or argue, that self-deceivers satisfy this operative desire by acquiring the belief that p. However, in section 2 I argue that self-deceivers do not satisfy this desire. An important distinction is made between cases in which this desire is satisfied and cases in which it is not. This is the distinction between self-delusion and self-deception. (This special terminology is explained later.) While Mele has offered a strong account of self-delusion, his proposal does not fit the realities of self-deception.

If they do not satisfy their operative desire, what is it that self-deceivers achieve that makes them self-deceived? In section 3 I argue that successful self-deception results in a false higher-order belief that the self-focused desire is satisfied. Self-deception is then a failure of self-knowledge. This account fits the practical irrationality of self-deceivers, explains the cognitive and behavioral “tension” characteristic of self-deceivers, and explains why self-deceivers cannot be fully aware of their self-deception.

1. What self-deceivers want

Given the assumption that self-deception is motivated, we can inquire about the content of this motivation. Considering cases can help us in this endeavor.

Case 1: Mitchell is a vain man, sensitive about his receding hairline. He has taken to combing his hair over from one side in a rather exaggerated and distasteful manner. Though he takes such obvious steps to disguise his baldness, he fails to acknowledge that he is bald. His friends find it embarrassing that Mitchell often makes a point of mockingly referring to the baldness of other men, while failing to recognize that he is one of them. Mitchell’s directions to his barber are very precise, and he invariably poses at a certain angle whenever being photographed or viewing himself in the mirror. Mitchell does not even allow his wife to tousle his hair.

Mitchell, I take it, is self-deceived about his hair. One might think that the motive for this deception is obvious—Mitchell desires that he have a full head of hair. This suggests a general account of the motivation of self-deception. Whenever a person is self-deceived about p, that person’s self-deception is motivated by a desire that p. This is the world-focused version.

As sensible as that suggestion might sound for Case 1, it does not generalize to other cases.

Case 2: Joey is a jealous man. Objective observers are convinced that it is highly unlikely that his wife Marcia is having an affair. But Joey says otherwise, often calling her names that shock his friends. They tell him that he has no reason to think that she’s sneaking off to see another guy whenever she visits with her female friends. They say that her recently increased sex drive is probably not an act to cover her guilt. But Joey violently protests. Unsurprisingly, Joey does not like to be proven wrong and refrains from calling Marcia’s friends to confirm her stories.

Joey, I take it, is self-deceived about his wife’s fidelity. But the motive for his deception is not so obvious. Perhaps some men have unconscious desires to have their relationships sabotaged, but Joey does not—he genuinely wants Marcia to be faithful. The suggested generalization from Case 1 fails because Joey lacks a desire corresponding to the content of his deception. Mele (2001) termed cases of this form twisted self-deception. Case 1, in contrast, is straight self-deception because the agent desires that what he is deceived about is the case.

The distinction between straight and twisted self-deception suggests two possibilities about motivation. First possibility: Corresponding to these two types of self-deception, there are two versions of motivational content. Second possibility: There is still a unified account of motivation, but it is more sophisticated than we originally thought. Unified explanations generally being preferred, it would be better if we can vindicate the second possibility.4

But what motivation is common to both Case 1 and Case 2, as well as all other paradigmatic examples of self-deception? Nelkin (2002) has been the most recent advocate of what I take to be the correct answer. Whenever a person is self-deceived about p, that person’s self-deception is motivated by a desire to believe that p. The first step to confirming this proposal is establishing that such desires are always true of self-deceivers. Establishing this for Cases 1 and 2 is a good start.

Mitchell not only desires a full head of hair, he desires to believe that he has a full head of hair (even if he does not have one). What evidence is there for this latter desire? His avoidance behavior5 (e.g., always looking in the mirror at a certain angle and his comb-over technique) suggests that he is not motivated to believe the truth. Rather, he wants to believe that he has a full head of hair because that belief is valuable for its own sake. This belief is comforting to his vanity, and he wishes to believe it regardless of its truth-value. Joey, too, engages in avoidance behavior (e.g., not checking on his wife’s stories) that reveals his desire to believe for non-truth-conducive reasons. But for Joey the deception regarding his wife’s infidelity is not intrinsically pleasant. Still, he is motivated to acquire this belief even though the evidence available to him does not support it. Being the jealous, insecure type, and having been cheated on before, Joey desires, out of caution, to believe that his wife is unfaithful.6

Avoidance behavior and other indications that self-deceivers know the truth “deep down” provide support for the self-focused account of motivation. The best explanation for such phenomena is that the self-deceiver is guided by a desire for a certain state of mind. Perhaps the desire can be given greater specificity in certain cases, but the most appropriate generalization we can truthfully make of self-deceivers, limiting ourselves to the traditional categories of folk psychology, is that they are guided by a desire to believe. But this “guiding” need not be construed as either conscious or intentional. In fact, for most cases it is neither.

Further support for the desire to believe account might be sought by making comparisons to interpersonal cases of deception. When person A deceives some other person B about p, A desires that B believe that p. Person A certainly need not desire that p. The self-focused account preserves this form of motivation even in cases when A=B. However, making this parallel offers little support. Well-known paradoxes arise when self-deception is viewed on the model of interpersonal deception,7 and theorists must deny a parallel at some point. In section 2 I argue for placing this break elsewhere. But it is not merely accidental that the motivation aligns for interpersonal and self-deceivers. It is the nature of deception to aim at changing someone’s mind, not the world.

The desire-to-believe account of motivation suggests a third type of self-deception, in addition to the now standard straight and twisted varieties. In straight cases of self-deception the agent desires that p, and in twisted cases the agent desires that not-p. But it is not necessary that self-deceivers have any such world-focused motivation. It is possible for there to be self-deceivers who neither desire that p nor desire that not-p. That is, one can desire to believe that p, while lacking any world-focused desire regarding p. Such self-deception would be appropriately termed indifferent or apathetic self-deception. It should be stressed that such self-deceivers still have some motivation (i.e., a self-focused motivation); they are simply indifferent or apathetic at the world-level. We should expect such cases to be unusual, however, because it would be odd to have a vested interest in believing that p without having a vested interest in p. But there are such cases. One example is self-deception prompted by peer pressure. We often have desires (more generally: motivations) to be like those around us. Common examples include desires to dress and talk like our peers. Sometimes we even have desires to believe as those around us believe. Just as it can be awkward to be the oddball in dress or speech, it is sometimes awkward to hold a minority belief. And one could be motivated to self-deception by having a desire to believe what one’s peers believe, while being indifferent to the truth or falsity of what is believed. The belief is simply desired for its utility. Such a case does not fall under either the straight or twisted varieties, but is accounted for by our self-focused desire.

Nelkin (2002) argues that the desire to believe account reveals self-deception to be an understandable example of practical rationality.8 Self-deceivers have a desire to believe that p, and they come to believe it. Self-deception is a species of goal or desire satisfaction, she reasons. In this regard, I think Nelkin’s desire-to-believe account is flawed. In the next section we will argue that self-deceivers, or an interesting class of self-deceivers at least, do not get what they want. This claim is heavily supported by the avoidance behavior, and other such indicators, that we have previously discussed.

2. Do self-deceivers get what they want?

Accurately characterizing the belief states of self-deceivers is no easy task. Mele and Nelkin, in their motivationalist analyses, take it as uncontroversial that self-deceivers are successful in coming to believe as they desire.9 In our examples, they would interpret Mitchell as believing that he has a full head of hair and Joey as believing that his wife is having an affair. (Better: Either these interpretations hold, or Mitchell and Joey are not self-deceived.) While Mele spends considerable time arguing against “dual-belief” views10—views that interpret self-deceivers as both believing that p and believing that not-p—he has surprisingly little to say in support of his positive position that self-deceivers believe that p. An analogy to interpersonal cases would support attributing this belief to the self-deceived, but Mele certainly is not sympathetic to such analogies.

A number of critics have objected to Mele by noting that there is a tension in self-deceivers that Mele’s analysis fails to capture.11 Robert Audi has characterized the tension of self-deceivers as a tendency to say one thing while believing another.12 Applying this to Case 1: Mitchell will sincerely assert that he is not bald, though he knows that he is bald. Kent Bach has argued that, even if the self-deceiver does not believe the truth, he at least has nagging suspicions.

In self-deception, unlike blindness or denial, the truth is dangerously close at hand. His would not be a case of self-deception if it hardly ever occurred to him that his wife might be playing around and if he did not appreciate the weight of the evidence, at least to some extent.13

Others have noted that suspicions, coupled with avoidance behavior, are characteristic of self-deceivers.14

I agree with Audi that self-deceivers tend to say one thing, while believing another. It would be helpful to have this judgment backed by an account of belief that explains why, in our belief attributions, non-linguistic behavior should be privileged over linguistic behavior (at least in the case of self-deceivers). In particular, why should Mitchell’s avoidance behavior count as sufficient evidence that he believes he is bald, but his avowals not count as sufficient evidence for the contradictory belief? Fortunately, we do not need to develop a full theory of belief to answer this question. Two very general and widely accepted claims about beliefs and desires will help us towards this end. The first claim is that beliefs and desires earn their keep by the role they occupy in predicting/explaining behavior, interacting with other mental states (including other beliefs and desires), and being caused by external stimuli. This is a statement of the Functionalist’s position. We will give special attention to the role of beliefs and desires in explaining and predicting behavior—a role that reaches further back, brought to the forefront by the Behaviorists. I assume that there is some necessary connection between beliefs/desires and behavior. Our second claim about beliefs is that the psychological explanation of human behavior is not simplistic. Even assuming a belief-desire psychology, it is not the case that a single belief-desire pair explains any actual human behavior. Rather, it is an entire network of beliefs and desires that interacts to prompt behavior. Because of these complex interactions, it is possible for an agent to have a single belief-desire pair that suggests a certain behavior, and yet the behavior be thwarted by pressures elsewhere in the network.

Given this complexity, it is not possible to look at one tidbit of behavior and infer the beliefs and desires of the agent. (This is the general point of evidence underdetermining theory, as made famous in Quine’s applications to the translation of languages and psychological states.) When someone claims “p”, we generally take this as some evidence that that person believes that p. This is because we generally attribute to people a desire to tell the truth and assume that they have some privileged access regarding what they believe. So, given the proper motive and ability, the utterances of others can provide us with windows to their psychology. But if either this motive or ability is lacking, then we have a defeating condition. I suggest that when the motive is lacking, they are deceiving us; but when the ability is lacking, they are often deceiving themselves. One way in which this privileged access can become tainted is by the presence of a desire to believe. We can often “find” what we want to find, even if it is supposed to be in our own head. Such a result is to be expected given the well-known confirmation biases and (at least possible) opacity of the mind.

We are now in a position to identify the reason for giving greater weight to Mitchell’s non-linguistic behavior. There are other desires in the network that explain why Mitchell’s general desire to tell the truth does not combine with his belief that he is bald and cause him to utter, “I am bald!” Namely, Mitchell desires to believe that he is not bald, and this desire explains why he gives utterances supporting the content of that desire. A natural question arises, however: Why doesn’t the desire to believe that he is not bald similarly prompt Mitchell to the non-linguistic behavior of one who actually believes this? For example, why doesn’t this desire cause Mitchell to quit the combover tactic? An answer to this question should point to a difference between the non-linguistic and linguistic behavior with regard to the desire to believe. The most important difference seems to be as follows. By asserting “I am not bald” Mitchell is not jeopardizing his goal of so believing. If anything, such repetition would further that goal. However, by allowing his wife to tussle his hair Mitchell is jeopardizing this goal. It would be quite difficult for Mitchell to refuse to acknowledge his baldness were that to happen.

This explanation also reveals that Mitchell does believe that he is bald. It is no accident that Mitchell systematically avoids evidence of his baldness. There is no other plausible psychological explanation for Mitchell’s avoidance behavior but to attribute this belief to him.15 However, we have seen that there is another, plausible psychological explanation for his contrary avowals. We know, as a matter of empirical fact, that such desires can cause one to avow as a true believer. Plus, such avowals do not frustrate the desire to believe.

We have seen that we must look at the entire network of beliefs and desires, and consider alternative explanations, when attributing beliefs and desires. Indeed, it sometimes happens in our belief attributions that, contrary to the situation with self-deceivers, avowals that “p” will trump behavior generally indicative of a belief that not-p. But, again, there will have to be a special story as to why this behavior does not warrant attributing the belief that not-p. Here is one such example: I am playing a game of Old Maid with a child. It is obvious to me which card is the “Old Maid”, but I pick that card anyway and lose the game. An adolescent observer later asks how I could fail to notice which card was the “Old Maid.” I respond, “Of course I knew. I was just allowing the child to win.” When we understand this supplemental desire to allow the child to win, we can then understand why I behaved contrary to how someone with my beliefs would generally play the game (i.e., in a competitive mode).

It is possible to complete the story about Mitchell in such a way that his combover technique and refusal to let his wife tussle his hair are not sufficient evidence for attributing to Mitchell the belief that he is bald. But, such a tale would require attributing fanciful beliefs and/or desires elsewhere in Mitchell’s network of beliefs. For example, Mitchell refuses to let his wife tussle his hair because he believes it is an unmanly thing to allow. And, he opts for the combover technique because he genuinely likes that style. If all of Mitchell’s “avoidance” behavior can be explained in such a manner, then we are no longer justified in attributing to Mitchell the true belief. And, as a matter of fact, many self-deceivers eventually do acquire these auxiliary beliefs and desires, and engage in their “avoidance” behavior for these reasons. When this happens, however, they have passed from being self-deceived to self-deluded (a distinction to be explained shortly). For, we are no longer entitled to attribute to them the true belief.16 As a matter of psychological epistemology, it might not be an easy task to determine when such a transition occurs. But this is because there is a general problem with attributing beliefs and desires, and determining an agent’s real reasons for action. This is not a special problem for self-deception.

Cases 1 and 2 display the tension characteristic of self-deception—avoidance behavior that points to the agent possessing the true belief, but the agent avowing otherwise. But let’s consider a final pair of cases to illustrate the distinction between the presence and absence of such tension.

Case 3: Nicole possesses much evidence that her husband Tony is having an affair with her friend Rachel. Nicole’s other friends have reported to her that Tony’s car is often seen parked in Rachel’s driveway, at times when he claims to be with his male friends. Tony has lost sexual interest in Nicole, and other suspicious behavior provides sufficient evidence for Nicole to be more than skeptical. Yet she laughs off the concerns of her girlfriends, and thinks to herself that Tony is certainly a faithful husband. (“After all, I am still an intelligent, charming, and attractive woman—certainly more so than Rachel!”) Yet, in the evenings when Tony claims to be with his male friends, Nicole avoids driving by Rachel’s house—even when it requires her to drive out of her way.

Nicole’s avoidance behavior might be unconscious or it might be rationalized away, but this behavior certainly suggests that she has beliefs contrary to what she tells herself and her girlfriends. Why else would she go out of her way to avoid Rachel’s house? Similarly, why else would Mitchell refuse to let his wife tussle his hair? Why else would Joey refrain from confirming his wife’s stories?

The tension dissipates if we change the story at the end:

Case 4: Nicole possesses the same evidence as in Case 3, and she laughs off the allegations just as before. The key difference is that she exhibits no behavior indicating that she suspects, or outright believes, that her husband is having an affair with Rachel. As such, Nicole confidently drives by Rachel’s house even when Tony claims to be with his male friends. Why wouldn’t she? She believes that he isn’t there.

The absence of suspicion and avoidance behavior separates Case 4 from the previous three. Her behavior and accompanying first-person phenomenology wholeheartedly indicate that she believes as she desires.

We can grant that Mele’s model, describing the self-deceived as believing as they desire, accounts for cases of the fourth type. But we should question whether cases of the fourth type are what raise the interesting problems of self-deception. Following Mele, we can distinguish between the dynamic puzzle of how an agent can set about deceiving himself and the static puzzle of how he can simultaneously possess the requisite doxastic states for self-deception. We can agree with Mele’s proposal that no intention to deceive is necessary, and that standard biases can resolve the dynamic puzzle. However, there is also a static puzzle regarding how to characterize the belief-states of self-deceivers. While Mele is correct that no such static puzzle arises for cases of type 4, his proposal is inadequate for handling examples like our Cases 1-3. I suggest that cases of this latter type have provided the philosophically interesting cases of self-deception all along.

The current debate over self-deception would benefit from terminology that can separate these two types of cases. I suggest the following terminology. Let us reserve the term self-deception for cases like 1-3. In cases of self-deception the agent has a desire to believe that p, and this motivates her to engage in biased reasoning, avoidance behavior, and similar deceptive measures that have been extremely well characterized by Mele and other theorists. For self-deceivers, this desire does not result in a belief that p, however. (What does it result in? This question is answered in the next section.) Self-deceivers engage in behavior that reveals that they know, or at least believe, the truth (not-p). I hope the cases of Mitchell, Joey, and our original Nicole have adequately illustrated this combination.

With regard to the static puzzle, Case 4 is uninteresting. Let us call such cases self-delusion. Self-delusion is the state self-deceivers enter once they believe what they want to believe.17 Lest anyone protest, such cases surely are deception in some sense, but we should mark this kind of deception off from our previous category. ‘Delusion’ seems to capture the full-blown misjudgment which separates the Nicole of Case 4 from the Nicole of Case 3.

This distinction is important to note. When Mele presents examples of (what he calls) self-deception, he invariably presents cases of what we call self-delusion.18 Such examples include a survey which shows that 94% of university professors think they are better at their jobs than their colleagues.19 A good many of these professors are presumably guilty of motivated irrationality. Mele’s examples of “garden-variety” self-deception are almost all of the same form: parents believe against the evidence that their child has not committed treason or has not experimented with drugs, a scholar inappropriately concludes that his paper should not have been rejected for publication, or a young man misinterprets a woman’s behavior as evidence that she is in love with him.20 Mele is right that in such cases, as he describes them, it is clear that the agent believes what is false. These are cases of self-delusion, and Mele presents a plausible account of self-delusion. But who would have thought that such cases pose a static puzzle? They clearly lack the necessary ingredient found in our Cases 1-3 that separates self-deception from self-delusion—the presence of behavior that points against the avowed belief.

Mele has considered similar criticisms before, and responds with comments like the following:

For example, it is alleged that although self-deceivers like Sam [from an earlier example] sincerely assure their friends that their spouses are faithful, they normally treat their spouses in ways that manifest distrust. This is an empirical matter on which I cannot pronounce.21

But this response is not sufficient to handle the present objection. My claim is not an empirical claim (or a claim of fiction-interpretation) about whether the characters in Mele’s examples really would behave in ways that manifest distrust. They very well might not. But then those would be cases of self-delusion, and why would we think that such cases pose a static puzzle? Instead of such an empirical objection, my point is that there are cases in which people sincerely avow one thing while behaving otherwise. These are the philosophically interesting cases that pose the static puzzle. So I suggest that we limit the term ‘self-deception’ to these interesting cases and reserve ‘self-delusion’ for Mele-style examples. And Mele does not offer an account of self-deception so defined.

There is no parallel to self-deception, so defined, in cases of interpersonal deception. All cases of interpersonal deception are analogous to self-delusion. In self-delusion the agent believes as he desires, and in interpersonal deception the victim believes as the deceiver desires. Once the self has been deluded, or the interpersonal victim deceived, the deceiver can “walk away” from the act and the deluded state can still remain. Self-deception, in contrast, is a continual process of believing truly, but hiding this belief from oneself out of a desire to believe otherwise.22

3. What self-deceivers get

Self-deceivers do not believe as they desire to believe. But there must be some state that they enter into which separates them from those who have a motive to believe but are not self-deceived. What is this end-state that marks the transition from self-deceiving to self-deceived? I suggest the following: self-deceivers (falsely) believe that they believe as they desire.23

We find initial support for this suggestion by observing that it seems to capture the irrationality of self-deceivers. Self-deceivers do not get what they want. This type of irrationality is recognized by those who advocate the world-focused version of the motivating desire. Recall that according to the world-focused version the self-deceived desire that p, and this then motivates them to acquire the belief that p. But this end-state is not a satisfaction of the operative desire—believing that p typically does not make it so. On the world-focused version the self-deceived mistakenly believe that the end they desire has been achieved—p. (Clarification: The self-deceived need not believe that they have satisfied their operative desire as such. They simply believe that p, and this, as a matter of fact, is the satisfying end-state.) Call this type of irrationality Mistaken Ends Irrationality.

In section 1 we argued that the world-focused account of motivation is mistaken. Still, such theorists seem to correctly capture the type of irrationality displayed by self-deceivers. Let us now apply Mistaken Ends Irrationality to the self-focused version. On our account the self-deceived desire to believe that p. In their self-deception they mistakenly believe that they have attained the end they desire. That is, the self-deceived (falsely) believe that they believe that p.

While Mistaken Ends Irrationality does ring true of self-deceivers, the reasoning here might seem a bit too clever. And, we have already rejected the world-focused version of motivation, so why assume that the world-focused account is right in this regard? To better support our chosen end-state, we should explain what is characteristic of higher-order belief failures and how these characteristics are found in the self-deceived.

It takes a reflective creature to have second-order beliefs—a creature that is thinking about its own mental states—and as such there is good reason to associate second-order beliefs with conscious reasoning and introspection.24 Another reason why second-order beliefs might be more associated with consciousness and cognition, as compared to their first-order counterparts, is that it is difficult to see how a person’s behavior could directly reflect the second-order belief that p (i.e., the belief that they believe that p) apart from simply reflecting the first-order belief that p. Indeed, some philosophers have gone so far as to analyze all conscious experience in terms of higher-order thoughts or beliefs.25 While I do not accept a higher-order belief account of consciousness in general, there are certainly significant connections and correlations between higher-order beliefs and conscious awareness. As has been recently argued, higher-order theories are most appropriate for explaining self-consciousness and mental states that we are aware of.26

I propose that second-order beliefs endow their possessor with four distinct abilities. An agent possessing a second-order belief can: a) report, b) entertain as true, c) use in practical reasoning, and d) integrate in theoretical reasoning the embedded first-order belief. These are not necessarily abilities one possesses simply in virtue of believing that p. For example, assume Jill believes that a childhood friend of hers lived in a certain house. Jill might retain this belief from childhood, but only in the remote recesses of her mind. Jill cannot report this belief to anyone, nor does she ever entertain it as true. The belief is not given any consideration. It is an isolated tidbit, removed from the premises Jill uses (even implicitly) in her practical and theoretical reasoning. In short, she does not believe that she believes that a childhood friend of hers lived in a certain house. Returning to her old neighborhood might stir this hibernating belief, causing her to gain the higher-order knowledge. Then the first-order belief can be entertained and utilized.

It sometimes happens that people have false higher-order beliefs. Such cases are failures of self-knowledge—having mistaken beliefs about what one believes. I have suggested that this is the case for the self-deceived. The self-deceived do not believe that p, but they believe that they believe that p. We might wonder if the four abilities that typically come with higher-order beliefs hold even when the embedded first-order belief is lacking.27 The first two abilities do hold even when the higher-order belief is false. The self-deceived can, and do, report that they possess the false first-order belief and entertain this thought as true. These two abilities are public and private versions of truth-avowals. Mitchell would claim to others, and to himself, that he is not bald, even though he truly believes that he is bald. These first two abilities reside more in the realm of contemplation than action, and thus relate more to the first-person character of beliefs. When one reports and entertains something as true one can give the superficial appearance of genuine first-order belief, as well as possess its characteristic phenomenology. And this is at least part of what self-deceivers want. Their Mistaken Ends Irrationality is such that they desire a certain peace of mind, even if “deep down” they know that the world is not as they desire to believe that it is.

The comments at the end of the last paragraph suggest an alternative account of the motivational content of self-deceivers. Maybe the self-deceived do not desire to believe that p, but desire simply to have the first-person qualities associated with believing that p. This is roughly the desire to have the second-order belief that p. Since this is the end-state that the self-deceived get, on this alternative the self-deceived are not guilty of Mistaken Ends Irrationality. To settle this question we would need to know if Mitchell, Joey, Nicole, and other self-deceivers desire the behavioral dispositions associated with the deceived belief, in addition to desiring the phenomenology associated with the deceived belief. We know that they do not get these behavioral dispositions in any robust sense (e.g., Mitchell won’t let his wife tussle his hair and Joey won’t check to confirm if his wife is telling the truth). But do both straight and twisted self-deceivers desire such dispositions? Given what we have already established, this is equivalent to asking if self-deceivers are guilty of Mistaken Ends Irrationality. At this point I am willing to hedge a bit and leave this as an open question. (The title is then disappointing if the reader was expecting a definitive answer.) But we have supported some definite answers in the vicinity. The self-deceived desire either to believe that p or merely to have the first-person qualities associated with such a belief. The self-deceived get these first-person qualities, but do not get the belief itself.28

Let us return to our discussion of the four abilities associated with higher-order beliefs. When the first-order belief is lacking, the last two abilities that typically come with higher-order beliefs—use in practical reasoning and integration in theoretical reasoning—are severely tempered. Nicole, the self-deceived wife of Case 3, truly believes that her husband is having an affair with Rachel. Although Nicole believes that she believes that her husband is not having such an affair, this higher-order belief is false. Since she does not believe that her husband is not having an affair, Nicole cannot let the thought that he is faithful guide her actions in any robust sense. For example, she does not let that thought guide her to Rachel’s house at times that she should believe (and, indeed, does believe) that he is there. In a weak sense the self-deceived present behavior that is indicative of believing as they desire. They report such a belief, and utterances are behavior. But they also exhibit contrary avoidance behavior. Again, the view of belief favored here is one that places greater emphasis on non-linguistic behavioral dispositions than first-person phenomenology and linguistic reports. And the behavioral dispositions of the self-deceived, especially when in situations where the costs of mistake are high, are tipped toward believing the truth. It is unlikely that Mitchell would be willing to have a camera focus on his scalp for a shampoo commercial—there the costs are simply too high.29 If Mitchell were to consent to such a public and critical examination of his scalp, that would be strong evidence that he is self-deluded, not self-deceived.

We can see how this higher-order account of self-deception explains the features that many have noted of self-deceivers—features that Mele’s theory does not explain. The main feature that is missing in Mele’s theory is the tension characteristic of self-deceivers. Audi explains it as follows:

My positive suggestion here is that what is missing (above all) [in Mele’s theory] is a certain tension that is ordinarily represented in self-deception by an avowal of p (or tendency to avow p) coexisting with knowledge or at least true belief that not-p.30

Audi is correct that self-deceivers do tend to say one thing, while truly believing otherwise. Private and public avowals fall under the first two abilities of higher-order beliefs. So the higher-order account explains why self-deceivers avow what they do, but nevertheless are guided in their actions by a contradictory belief. And the disharmony between the first-order and second-order beliefs clearly demonstrates a tension. For self-deceivers the first-person and third-person aspects of belief are not in accord. From the inside it seems that they believe one thing, but from the outside their behavior reveals otherwise. This tension is sometimes described as a conflict between a subconscious true belief and conscious false belief.31 The higher-order theory also confirms this claim to the extent that higher-order beliefs correlate with consciousness and mere first-order beliefs (i.e., first-order beliefs that are not embedded as the content of a second-order belief) do not. We should add that not only must self-deceivers have a false second-order belief, they must also lack the true second-order belief that not-p. This would explain why the false claim is before consciousness, but the true claim is not. Because the self-deceived do not have contradictory beliefs at the same level, this is not a dual-belief account.

A final advantage of this second-order account is that it explains why self-deceivers cannot know of their self-deceived status.32 Call this the opacity of self-deception. L. Jonathan Cohen remarks on this apparent feature of self-deception when he writes:

Spotting self-deceit in yourself is a lot more difficult than spotting it in others, but your own self-deceit is intrinsically easier to eliminate once you have spotted it. For, once you accept that you have spotted self-deceit in yourself on some issue, it has presumably thereby ceased to exist in you on that issue.33

If this is not convincing, try to imagine someone remaining in a state of self-deception while knowing that they are self-deceived.

Let’s return to the example of the self-deceived wife to illustrate how our second-order theory explains the opacity of self-deception. If Nicole truly believes that she is self-deceived about whether her husband is cheating on her, then she believes that she has hidden the true belief (that he is cheating on her) from herself. So, she would believe that she believes that he is cheating on her. But on our account we have stated that Nicole lacks this second-order belief. Instead, she believes that she believes that he is not cheating on her. The relevant belief-states are not transparent, and this lack of self-knowledge is essential to self-deception.

The discussion in sections 1-3 suggests the following sketch of an analysis of self-deception.

An agent is self-deceived at time t if and only if:

1. The agent at t possesses sufficient evidence to warrant a belief that not-p.

2. The agent at t believes that not-p.

3. However, the agent at (and since sometime before) t desires to believe that p.34

4. This desire, by prompting characteristic deceptive strategies, causes the agent to believe, at t, that she believes that p.

5. The agent at t does not believe that she believes that not-p.35
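The five clauses can be compressed into a schematic form. This is a notational restatement only, nothing beyond the clauses above, writing $B_S(q)$ for "the agent believes that q" and $D_S(q)$ for "the agent desires that q":

```latex
% requires amssymb (for \leadsto)
\begin{enumerate}
  \item $E_t$ warrants $\neg p$ \quad (the agent's evidence at $t$)
  \item $B_S(\neg p)$
  \item $D_S\bigl(B_S(p)\bigr)$
  \item $D_S\bigl(B_S(p)\bigr) \leadsto B_S\bigl(B_S(p)\bigr)$ \quad (via characteristic deceptive strategies)
  \item $\neg B_S\bigl(B_S(\neg p)\bigr)$
\end{enumerate}
```

Note that clause 2 is first-order while clauses 4 and 5 are second-order, so the tension between them never amounts to contradictory beliefs at a single level.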

This is only a sketch of an analysis, because certain key concepts have been left unexplained—e.g., ‘possessing sufficient evidence’ and ‘characteristic deceptive strategies.’ The self-deceived employ the same deceptive strategies as the self-deluded, but with less success (in that they do not attain the desired belief). So, in clarifying line 4, we can borrow heavily from Mele’s excellent work in identifying the deceptive tactics of the self-deluded—such as selective attention to evidence, selective acquisition of evidence, confirmation bias, etc.

While this is only a sketch of an analysis, our discussion has yielded considerable new insights into the problem of self-deception. Our section 1 discussion of the motive given in line 3 revealed that there is self-deception that is neither straight nor twisted, but indifferent. And this sketch, unlike the proposals in Mele (1997, 2001), offers a plausible explanation of cases of self-deception in our more restricted sense, as opposed to the less puzzling cases of self-delusion. Future discussions of motivated irrationality would benefit if the deception/delusion distinction were kept in mind. The self-deceived do not believe as they want to believe. Instead, they possess a false belief about what they believe. This higher-order belief theory explains the tension that separates self-deception from self-delusion.36

Endnotes

Mele (1997, 2001). The answers I defend are most in line with the view of self-deception advanced by Robert Audi (1985, 1988).

Donald Davidson (1982, 1986) offers the most prominent and explicit philosophical version of intentionalism. Quattrone and Tversky (1984) support this approach through an interpretation of their psychological experiments. Intentionalism has been recently criticized in the philosophical literature by Mele (2001) and Lazar (1999).

The world-focused version has been supported by Bach (1981) and Cohen (1992). Pears (1984) and Nelkin (2002) endorse the self-focused version. Davidson (1986) and Audi (1985) are explicitly agnostic. Mele (2001) also leaves the motivational content open for cases of “garden-variety” self-deception. There are also non-desire motivational accounts that I am presently ignoring. For example, Dalgleish (1997) and Lazar (1999) consider emotion-driven motivational accounts.

Mele (2001), in contrast, offers distinct accounts for straight and twisted self-deception.

This notion of avoidance behavior will be prominent in our discussion. By ‘avoidance behavior’ I mean the sophisticated behavior of avoiding evidence that not-p in a way that shows the agent already possesses sufficient information that not-p.

In this regard, the present view is compatible with Mele’s psychologically well-informed account of self-deceivers as attempting to minimize costly errors (e.g., Joey falsely believing that his wife is faithful) rather than being primarily concerned with true belief. See Mele (2001), Chapter 2.

See Mele (2001), Chapter 1 for a good introduction to these puzzles.

Nelkin (2002), p. 396.

Mele seems to simply assume that the self-deceived are successful. He does not include acquiring the belief that p as one of the sufficient conditions, but as a presupposition of an analysis! Before listing his sufficient conditions he states: “I suggest that the following conditions are sufficient for entering self-deception in acquiring a belief that p.” Mele (2001), p. 50 (italics in original). Also see Nelkin (2002), p. 394.

Mele (2001), Chapter 4.

The analysis of self-deception in Mele (2001) is as follows: “I suggest that the following conditions are jointly sufficient for entering self-deception in acquiring a belief that p.

1. The belief that p which S acquires is false.

2. S treats data relevant, or at least seemingly relevant, to the truth of p in a motivationally biased way.

3. This biased treatment is a nondeviant cause of S’s acquiring the belief that p.

4. The body of data possessed by S at the time provides greater warrant for ~p than for p.” (pp. 50-51)

Audi (1985), pp. 173-177; Audi (1988), p. 94; Audi (1997), p. 104.

Bach (1997), p. 105.

Martin (1997) and Perring (1997).

It might be challenged that some other intentional state, besides belief, could account for Mitchell’s avoidance behavior. For example, perhaps Mitchell is motivated by a fear of going bald, or by a mere suspicion that he is bald. Note, however, that a purely emotive state, such as a fear or worry of going bald, is not sufficient to account for such avoidance behavior. There must also be some belief-like state coupling with this fear to guide Mitchell in his specific behaviors. For example, Mitchell could easily have feared going bald while he still had a full head of hair. At that time, he did not (let us plausibly assume) resort to a combover technique. This is because Mitchell lacked a belief that this fear was actualized. At this juncture, one might take a different tack and suggest that a mere suspicion of baldness, rather than full-fledged belief, would be enough to explain avoidance behavior. This possibility is not as damaging to the present view, since suspecting and believing are in the same family of intentional states. Indeed, doubting, suspecting and believing can be seen as three parts of one spectrum. Still, the sophistication and apparent well-informedness of the avoidance behavior of self-deceivers point toward attributing full-fledged true beliefs. More importantly, when the stakes become too high for self-deceivers and it becomes important that they act on their sincere beliefs, we see that self-deceivers do act in a way that shows they believe the truth. An example of this is provided in Section 3. I thank an anonymous referee for bringing these concerns to my attention.

I thank an anonymous referee for bringing this complication to my attention.

The deception/delusion distinction is also made in Audi (1988), p. 109: “Take a case of self-deception in avowing that one is courageous (p). There is a certain tension here which is characteristic of self-deception and partly explains its typical instability. The sense of evidence against p pulls one away from the deception and threatens to lift the veil concealing from consciousness one’s knowledge that one is not courageous, but the desires or needs in which the self-deception is grounded pull against one’s grasp of the evidence and threaten to block one’s perception of the truth. If the first force prevails, one sees the truth plainly and is no longer deceived; if the second force prevails, one passes from self-deception to single-minded delusion and does not see the truth at all. Self-deception exists, I think, only where there is a balance between these two forces.” See also Audi (1985), p. 171.

In contrast, Audi (1988) presents examples of self-deception in our limited sense. (See, in particular, pp. 94-97.)

Mele (2001), p. 3.

Mele (2001), pp. 36-37, 32, 26.

Mele (2001), pp. 79-80.

The comments in Audi (1997) and Bach (1997) regarding the tension characteristic of self-deception also describe self-deception as a process.

Audi (1985), pp. 177 and 182 suggests, but does not develop, this possibility.

This requirement for the capacity of reflection explains why animals and young children cannot self-deceive. Compare with Nelkin (2002), p. 398.

Rosenthal (1986, 1997) offers some of the more prominent articulations of this view.

Lycan (2001) and Manson (2001), for example, see higher-order (representational) theories in this way. However, they are not discussing higher-order thought theories in particular.

Rosenthal mentions subjects with higher-order thoughts of non-existent mental states. It should be pointed out that his higher-order thoughts are not beliefs, but his observation regarding such higher-order thoughts is still relevant. He writes:

“On the present account, conscious mental states are mental states that cause the occurrence of higher-order thoughts that one is in those mental states. And, since those higher-order thoughts are distinct from the mental states that are conscious, those thoughts can presumably occur even when the mental states that the higher-order thoughts purport to be about do not exist. But such occurrences would not constitute an objection to this account. It is reasonable to suppose that such false higher-order thoughts would be both rare and pathological.” (1986), p. 338. Rosenthal holds that the higher-order thought makes the lower-order mental state conscious. I think that the higher-order belief makes it more likely that the lower-order belief is conscious, and when the lower-order belief is non-existent a “false consciousness” still likely occurs.

Although, because even the self-deceived generally respect the weight of evidence, these first-person qualities will likely not be as constant as those of genuine believers (e.g., the self-deluded). It might take some effort for self-deceivers to maintain these first-person qualities. So, in this regard, we can concede somewhat to Bach when he writes: “The self-deceiver is disposed to think the very thing he is motivated to avoid thinking, and this is the disposition he resists.” (1997), p. 105. We can say the following: In virtue of believing the truth, the self-deceived are disposed to think the truth. But in virtue of having a false higher-order belief, this disposition is overridden and the self-deceived think what they do not believe.

Compare with Audi (1985), pp. 188-189. This is the example promised in footnote 15.

Audi (1997) p. 104.

Audi (1985), p. 173; Audi (1988), p. 94.

Knowledge of deception, as I am using the terms, does not require knowing what is really true and false of the world. Rather, it requires knowing your evidence, motives, deceptive tactics, and belief states.

Cohen (1992), p. 147.

This motive might be narrowed to a desire merely for the first-person aspects associated with believing that p, as discussed earlier.

Many would want to include a sixth clause stating that not-p is true. I do not think this is necessary, however. The tension that is symptomatic of self-deception can certainly arise without this requirement. But, if one thinks that as a matter of proper usage deception requires a motivation to believe what is false, one can simply add such a sixth clause.

Thanks to Steffen Borge, Tamar Gendler, John Hawthorne, Karson Kovakovich, and Ted Sider for helpful comments on an earlier version of this paper.

REFERENCES

Audi, Robert. (1985). “Self-Deception and Rationality,” in Self-Deception and Self-Understanding, ed. Mike W. Martin (Lawrence: University of Kansas), pp. 169-194.

Audi, Robert. (1988). “Self-Deception, Rationalization, and Reasons for Acting,” in Perspectives on Self-Deception, eds. Brian McLaughlin and Amelie Rorty (Berkeley: University of California Press), pp. 92-120.

Audi, Robert. (1997). “Self-Deception vs. Self-Caused Deception: A Comment on Professor Mele,” Behavioral and Brain Sciences 20, p. 104.

Bach, Kent. (1981). “An Analysis of Self-Deception,” Philosophy and Phenomenological Research 41, pp. 351-370.

Bach, Kent. (1997). “Thinking and Believing in Self-Deception,” Behavioral and Brain Sciences 20, p. 105.

Cohen, L. Jonathan. (1992). An Essay on Belief and Acceptance, (Oxford: Clarendon Press).

Dalgleish, Tim. (1997). “Once More with Feeling: The Role of Emotion in Self-Deception,” Behavioral and Brain Sciences 20, pp. 110-111.

Davidson, Donald. (1982). “Paradoxes of Irrationality,” from Philosophical Essays on Freud, Richard Wollheim and James Hopkins, eds. (New York: Cambridge University Press), pp. 289-305.

Davidson, Donald. (1986). “Deception and Division,” from The Multiple Self, Jon Elster, ed. (New York: Cambridge University Press).

Lazar, Ariela. (1999). “Deceiving Oneself or Self-Deceived? On the Formation of Beliefs ‘Under the Influence’,” Mind 108, pp. 265-290.

Lycan, William. (2001). “A Simple Argument for a Higher-order Representation Theory of Consciousness,” Analysis 61, pp. 3-4.

Manson, Neil C. (2001). “The Limitations and Costs of Lycan’s ‘Simple’ Argument,” Analysis 61.4, pp. 319-323.

Martin, Mike W. (1997). “Self-Deceiving Intentions,” Behavioral and Brain Sciences 20, pp.

Mele, Alfred. (1997). “Real Self-Deception,” Behavioral and Brain Sciences 20, pp. 91-102.

Mele, Alfred. (2001). Self-Deception Unmasked, (Princeton, NJ: Princeton University Press).

Nelkin, Dana. (2002). “Self-Deception, Motivation, and the Desire to Believe,” Pacific Philosophical Quarterly 83.4, pp. 384-406.

Pears, David. (1984). Motivated Irrationality, (Oxford: Clarendon Press).

Perring, Christian. (1997). “Direct, Fully Intentional Self-Deception is also Real,” Behavioral and Brain Sciences 20, pp. 123-124.

Quattrone, G., and A. Tversky. (1984). “Causal versus Diagnostic Contingencies: On Self-Deception and on the Voter’s Illusion,” Journal of Personality and Social Psychology 46, pp.

Rosenthal, David. (1986). “Two Concepts of Consciousness,” Philosophical Studies 49, pp. 329-359.

Rosenthal, David. (1997). “A Theory of Consciousness,” reprinted in The Nature of Consciousness, eds. N. Block, O. Flanagan, and G. Guzeldere (Cambridge, MA: The MIT Press).

Defense Mechanisms

To cope with unconscious conflicts or anxiety, people unconsciously use defense mechanisms. These mechanisms protect us from painful experiences or emotions, and the resulting patterns can be adaptive or maladaptive. Projection, splitting, and acting-out are usually maladaptive; use of humor is considered adaptive.

Projection - When a person falsely attributes his own feelings, thoughts, or impulses to others. Adults with low self-esteem can become very critical of others who show the same problems they unconsciously perceive in themselves.

Splitting - When a person views himself or others as all good or all bad, and alternates between these extremes. 

Acting-out - When a person acts without thinking or regard for the consequences.

Denial - When a person denies the reality of the situation. For example, I really do not have a drinking problem. A counselor should try to help the person identify the problem for himself instead of pointing it out to him. 

Repression - When a person unconsciously hides his uncomfortable feelings, thoughts, or experiences so they are not remembered. This is probably the most commonly used defense mechanism, and the basis for all the others.

Suppression - When a person consciously tries to hide his uncomfortable feelings, thoughts, or experiences.

Rationalization - When a person gives an incorrect explanation for his behavior to justify what he did. 

Reaction Formation - When a person does the opposite of his own unacceptable thoughts, feelings, or actions. 

Somatization - When a person becomes overly preoccupied with his health. 

Autistic Fantasy - When a person becomes preoccupied with day-dreaming, instead of pursuing relationships. 

Idealization - When a person thinks of himself or others in unrealistically positive terms.

Devaluation - When a person thinks of himself or others in unrealistically negative terms.

Displacement - When a person redirects his feelings onto someone else. For example, the boss yells at the husband, the husband comes home and yells at the wife, the wife in turn yells at the kids, and the kids kick the dog.

Intellectualization - When a person uses excessive abstract thinking to avoid his feelings.

Passive Aggression - When a person indirectly expresses aggression toward others. Instead of directly saying no to a task, he drags it out or makes mistakes doing it.

Isolation - When a person is unable to experience the thoughts and feelings of an experience together. His feelings remain hidden.

Undoing - When a person's actions are meant to symbolically atone for his bad thoughts, feelings, or actions.

Dealing with Defense Mechanisms

Self-deceiving defense mechanisms should be replaced by proper ways of coping.

The first step is to realize what defense mechanisms you are using. Conscious control helps overcome unconscious defense mechanisms. 

Forgive others that have wronged you.

Forgive yourself when you have done wrong.

Forgiveness is the best defense against unhappiness and depression. 

Forget the things of the past and press on toward the future.

Patience is a virtue. The immature want it their way now, and become easily frustrated and unhappy.

Accepting and giving love is a great defense against loneliness and inferiority feelings.

Learn to laugh at yourself and your mistakes.

Redirect your energy in positive ways. Hostile energy can be channeled into sports and exercise.  

Getting enough sleep and dreaming can help restore your strength. 

NEH Grant Aids Davidson Philosopher's Study of Self-Deception


There are some very good reasons for Professor Al Mele to leave his work at the office. The Davidson College philosophy professor has just received a grant from the National Endowment for the Humanities (NEH) to study and write about "twisted self deception."

He says it's not the type of research that invites family involvement. "You need a lot of data to do any legitimate analysis of people's actual motivations, and I'm only close enough to my family to get the data I'd need," he explained. "But trying to do that wouldn't make for a very happy life around the house!"

Instead, Mele will investigate the concept from the point of view of language and theory, leaving its application to human services professionals like psychologists and psychiatrists. His work over the past 15 years in weakness of will, motivation, self-deception and intentional action has captured the interest of some psychologists and psychiatrists, who apply it to their case work. Last year he presented an invited talk to a convention of psychiatrists in Manhattan who were considering "Weakness of Will." Two years ago he presented an invited paper on "Addiction and Self-Control" to a multidisciplinary conference on addiction in Oslo, Norway. Likewise, he reads psychology journals to gain insights into his subjects.

Mele is just not interested in case histories. "I like to have a nice rosy view of people, and think it would be depressing to work on real people’s actual problems," he said. "But I get very excited about trying to understand how this stuff works theoretically."

The latest grant, his eighth from the NEH, gives him $4,000 for two months of summer study. The NEH funded just 130 of 1,035 applications from professors around the country. All reviewers who read Mele's proposal gave it the highest possible rating. One stated, "Alfred Mele is doing some of the most important work there is today on irrationality..." and said Mele's proposal was the best he had read. Another called Mele "the leading philosophical theorist of action."

Mele has studied the idea of self deception since 1983. He theorizes that most self deception is based on motivational bias. For example, consider the case in which a man could believe that his son is lying to him, but convinces himself that the son is not. There may be enough evidence of lying that an impartial observer would believe it is happening, yet still the man decides there is no lie. Mele believes the man deceives himself because his desire that his son should tell the truth leads him to misinterpret the facts. This strong motivation can lead the man to be self-deceiving, Mele believes, without the man's explicit intent. "We all have defense mechanisms that help us believe something even though evidence is contrary," he said.

This earlier work on self deception has now led Mele to consider "twisted self deception," a term he developed. In this case, a man may believe his son is lying even though the evidence of lying is slim. In this case, Mele believes that the man's emotional insecurity leads him to focus attention on the slim evidence of lying, and render it more vivid than it would otherwise be. He will argue in a paper he will write this summer with the NEH grant that in some cases human emotion can lead us to believe something we don't want to believe.

He concludes, "No goal is more central to humanistic inquiry than understanding human beings. Given the centrality of motivation in explaining human behavior, a proper understanding of motivation will promote the achievement of this goal."

Mele plans to convert the NEH-funded essay into a chapter of his next book, which will address fundamental questions in the theory of motivation. He plans to write it during a sabbatical year in 1999-2000, building logically on the work he has developed in previous books: Springs of Action (1992) explains the motivation for intentional action; Irrationality (1987) addresses action exhibiting weakness of will and self-deception; and Autonomous Agents (1995) explains the control humans have over their motivation, action, belief and emotion. All of those books were published by Oxford University Press. His latest book is last year's The Philosophy of Action, which he edited as part of Oxford Press's prestigious philosophy series.

In addition to his books, Mele has written scores of articles for professional philosophy, psychology and literary journals. An article he wrote on "Real Self Deception" has been selected to appear soon in the quarterly journal "Behavioral and Brain Sciences." That journal takes the unusual step of simultaneously publishing an author's original paper along with commentaries on it by about 30 peer reviewers, as well as the author's response to those commentaries.

Though he is one of the college's most published writers, Mele carries a regular teaching load and believes students benefit greatly from his professional work in the field. "The more writing and research you do, the more you have to bring to the classroom," he said. "I enjoy seeing students come to understand complicated and important things, and develop an ability for themselves to understand them."

The secret to publishing and teaching at the same time is no secret at all, he said. "The trick is to work seven days a week, ten hours a day, and anyone can have two careers as a researcher and teacher!" he said. In addition to his NEH grants, Mele has received other funding from the Sloan Foundation and Mellon Foundation for development of a general account of how our beliefs, desires and intentions help explain our behavior. He has long felt driven by a desire to understand human behavior, particularly intentional behavior. As an undergraduate at Wayne State University, he tried and rejected psychology as a means toward that end. He found himself drawn toward Aristotle's behavioral theories instead. He switched to the study of philosophy and earned his Ph.D. from the University of Michigan in 1979. He has been teaching at Davidson since that time.

Self-deception, intentions and contradictory beliefs

José Luis Bermúdez

Department of Philosophy

University of Stirling

Stirling FK9 4LA

Scotland

jb10@stir.ac.uk

Philosophical discussion of self-deception aims to provide a conceptual model for making sense of a puzzling but common mental phenomenon. The models proposed fall into two groups – the intentionalist and the anti-intentionalist. Broadly speaking, intentionalist approaches to self-deception analyse the phenomenon on the model of other-deception - what happens in self-deception is parallel to what happens when one person deceives another, except that deceiver and deceived are the same person. Anti-intentionalist approaches, in contrast, stress what they take to be the deep conceptual problems involved in trying to assimilate self-deception to other-deception.

Many of the arguments appealed to by anti-intentionalists suffer from failing to specify clearly enough what the intentionalist is actually proposing. In this paper I distinguish three different descriptions that an intentionalist might give of an episode of self-deception. Separating out the different possible ways in which an intentionalist might characterise an episode of self-deception allows us to evaluate some of the commoner objections that are levelled against the intentionalist approach. I end by offering a tentative argument in favour of intentionalism, suggesting that only intentionalist accounts of self-deception look as if they can deal with the selective nature of self-deception.

I am concerned here only with self-deception issuing in the acquisition of beliefs, putting to one side both cases of self-deception in which what is acquired is a propositional attitude that is not a belief (such as a hope or a fear), and cases in which the self-deception generates the retention rather than the acquisition of a belief. In general terms the intentionalist view is that a subject forms a self-deceiving belief that p when they intentionally bring it about that they believe that p. But there are, of course, different ways in which this might happen. Consider the following three situations.

(1) S believes that ~p but intends to bring it about that he acquires the false belief that p.

(2) S believes that ~p but intends to bring it about that he acquires the belief that p.

(3) S intends to bring it about that he acquires the belief that p.

There are three dimensions of variation here. First, there is the question of whether the subject actually holds the belief that p to be false. Second, there is the question of whether the subject actually intends to bring it about that he believes a falsehood. Third, there is the question of whether there is a belief such that the subject actually intends to bring it about that he acquires that belief. We can accordingly identify the following three claims, in ascending order of strength, that an intentionalist might make about the phenomenon of self-deceiving belief acquisition:

(A) A given episode of self-deception can involve intending to bring it about that one acquires a certain belief.

(B) A given episode of self-deception can involve holding a belief to be false and yet intending to bring it about that one acquires that belief.

(C) A given episode of self-deception can involve intending to bring it about that one acquires a false belief.

It is clear that holding (A) to be true is a bottom-level requirement on any intentionalist account of self-deception.

But (A) does not entail either (B) or (C). There is something puzzling about the idea that one might hold it to be the case that p and intend to bring it about that one acquires the belief that p. If one holds it to be the case that p then one already believes that p - and any intention to bring it about that one believe that p would be pointless. But one might, of course, have no views whatsoever about whether or not it is the case that p and yet intend to bring oneself to believe that p. This would be the position of a complete agnostic persuaded by Pascal's Wager. Alternatively, one might have evidence for or against p that was too inconclusive to warrant anything more than the judgment that p is possibly true, and yet intend to bring it about that one believe that p. So, there is clearly no sense in which one can only intend to bring it about that one believes p when one holds p to be false. So (A) does not entail (B).

Nor does (B) entail (C). I can believe that p is false and intend to bring it about that I acquire the belief that p without it actually being part of the content of my intention that I come to believe a falsehood. That is to say, to put it in a way that parallels a familiar point in epistemology, intentions are not closed under known logical implication. I can know that x entails y and intend to bring about x without ipso facto intending to bring about y.

The epistemological parallel is worth pursuing. The thesis that knowledge is not closed under known logical implication can be motivated by placing a tracking requirement on knowledge (Nozick 1981). I cannot know that p unless my belief that p tracks the fact that p. That is to say, I cannot know that p unless

(a) were it to be the case that not-p, I wouldn't believe that p

(b) were it to be the case that p, I would believe that p.

So, I might know that p entails q and also know that p without knowing that q – because my belief that q, unlike my belief that p, does not satisfy the tracking requirements (a) and (b).
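Writing $\Box\!\rightarrow$ for the subjunctive conditional, the tracking requirement and the resulting closure failure can be set out schematically. This is merely a restatement of conditions (a) and (b), with $K_S$ and $B_S$ for the subject's knowledge and belief:

```latex
% requires amssymb (for \Box)
K_S(p) \;\text{only if}\;\;
  B_S(p)
  \;\wedge\; \bigl(\neg p \,\Box\!\rightarrow\, \neg B_S(p)\bigr)
  \;\wedge\; \bigl(p \,\Box\!\rightarrow\, B_S(p)\bigr)
```

Since the belief that q may fail the first conditional even when the belief that p satisfies it, $K_S(p)$ and $K_S(p \rightarrow q)$ do not jointly yield $K_S(q)$.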

We can place a similar tracking condition upon intentions. Suppose I know that if I go out for a walk on the moor this afternoon it is inevitable that I will end up killing midges (because they are all over the place and I know that I cannot stop myself from crushing them if they land on me, as they are bound to do). Yet I form the intention to go out for a walk and end up killing midges. Have I killed those midges intentionally? It is plausible that I haven't. After all, the killing of the midges was not my aim in going out, nor was it a means to my achieving my aim. My aim was simply to go out for a walk, and even if my going out hadn't resulted in the death of a single midge I still would have gone out.

Here is a case in which it seems that I know that one action entails another and yet intentionally perform the first without intentionally performing the second. The case of self-deception seems exactly parallel. I know that my bringing it about that I come to acquire the belief that p will have the consequence that I come to believe a falsehood. Nonetheless, I can intend to bring it about that I believe that p without intending to bring it about that I believe a falsehood – because in a counterfactual situation in which my bringing it about that I believe that p does not result in my believing a falsehood I would nonetheless still bring it about that I believe that p. In other words, I do not intentionally bring it about that I believe a falsehood because that is not my aim. So (B) does not entail (C).

Although there is no entailment from (A) to (B) to (C) there is an entailment from (C) to (B) to (A). (A) is a minimal requirement on intentionalist theories of self-deception. One can be an intentionalist without accepting (B) or (C), but not without accepting (A). We can see this by a quick comparison with Mele's anti-intentionalist approach to self-deception (Mele 1997, 1998).

Mele holds that self-deception occurs when:

(i) The belief that p which S acquires is false.

(ii) S treats data seemingly relevant to the truth-value of p in a motivationally biased way.

(iii) This motivationally biased treatment non-deviantly causes S to acquire the belief that p.

(iv) The evidence that S possesses at the time provides greater warrant for ~p than for p.
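Read as a definition, the four conditions form a simple conjunctive test. The following is a toy rendering of my own, not Mele's formalism; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One candidate episode of belief acquisition by a subject S."""
    p_is_false: bool               # (i) the acquired belief that p is false
    biased_treatment: bool         # (ii) S treats the data in a motivationally biased way
    bias_caused_belief: bool       # (iii) the bias non-deviantly caused S's belief that p
    evidence_warrants_not_p: bool  # (iv) S's evidence favours ~p over p

def meets_mele_conditions(e: Episode) -> bool:
    # On Mele's analysis the four conditions must hold jointly.
    return (e.p_is_false and e.biased_treatment
            and e.bias_caused_belief and e.evidence_warrants_not_p)

# Motivated belief formation that happens to land on a truth fails (i):
lucky_guess = Episode(False, True, True, True)
```

On this rendering an episode satisfying only three of the conditions - say, biased belief formation that happens to issue in a true belief - is correctly excluded from counting as self-deception.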

Examples of the motivational biases that Mele mentions in spelling out the second condition are: selective attention to evidence that we actually possess; selective means of gathering evidence; negative misinterpretation (failing to count as evidence against p data that we would easily recognise as such were we not motivationally biased); positive misinterpretation (counting as evidence for p data that we would easily recognise as evidence against p were we not motivationally biased), and so forth (Mele 1997). The non-deviant causal chain that leads from motivationally biased treatment of evidence to the acquisition of the belief that p does not proceed via an intention to bring it about that one believes that p.

Let me turn now to what is often taken to be an obvious objection to any intentionalist account of self-deception, but which is actually only applicable to (B) and (C), namely, that such accounts assume that the self-deceiver forms an intention to bring it about that he acquire a belief that he thinks is false. There are two main reasons for holding this to be incoherent. First, one might argue that one cannot intend to bring it about that one acquires a belief that one thinks to be false without simultaneously having contradictory beliefs – the belief that p and the belief that ~p. Yet it is impossible to be in any such state. Second, one might think that the project is guaranteed to be self-defeating. Quite simply, if one knows that the belief is false then how can one get oneself to believe it?

The first of these lines of argument is extremely unconvincing. The key claim that it is impossible simultaneously to possess contradictory beliefs is highly implausible. There is certainly something very puzzling about the idea of an agent simultaneously avowing two contradictory beliefs or avowing the contradictory belief that p & ~p. But nothing like this need occur in either (B) or (C), since the two beliefs could be inferentially insulated. Positing inferential insulation is not just an ad hoc manoeuvre to deal with the problem of self-deception (in the way that dividing the self into deceiver and deceived would be), since there are familiar computational reasons for denying that an agent's beliefs are all inferentially integrated (the limitations of memory search strategies etc.). In any case, it is a simple logical point that 'S believes p at time t' and 'S believes q at time t' do not jointly entail 'S believes p & q at time t'. So, an account of self-deception can involve the simultaneous ascription of beliefs that p and that not-p without assuming that those two contradictory beliefs are simultaneously active in any way that would require ascribing the contradictory belief that p & ~p.
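The point about inferential insulation can be made concrete with a toy belief store. This is entirely my own illustration, not a serious cognitive model: beliefs held in separate fragments are each individually available, but no conjunction across fragments is ever derived.

```python
class FragmentedBeliever:
    """Toy agent whose beliefs live in separate fragments that are
    never inferentially integrated with one another."""

    def __init__(self):
        self.fragments: list[set[str]] = []

    def add(self, belief: str, fragment: int) -> None:
        # Store a belief in a numbered fragment, creating it if needed.
        while len(self.fragments) <= fragment:
            self.fragments.append(set())
        self.fragments[fragment].add(belief)

    def believes(self, belief: str) -> bool:
        # Each fragment is individually available to queries.
        return any(belief in frag for frag in self.fragments)

    def believes_conjunction(self, a: str, b: str) -> bool:
        # A conjunctive belief is formed only when both conjuncts sit
        # in the same fragment; cross-fragment conjunctions never are.
        return any(a in frag and b in frag for frag in self.fragments)

s = FragmentedBeliever()
s.add("p", fragment=0)       # e.g. the comforting belief
s.add("not-p", fragment=1)   # the insulated contrary belief
```

The agent satisfies both 'S believes p' and 'S believes not-p' without there being any state with the conjunctive content p & not-p, which is just the logical point made above.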

But there is a sense in which this is peripheral, because it is far from clear that either (B) or (C) do require the ascription of simultaneous contradictory beliefs. I can start from a state in which I believe that ~p and then intentionally bring it about that I acquire the belief that p without there being a point at which I simultaneously believe that p and believe that ~p. This becomes clearer when one reflects that the best way of bringing it about that one moves from a state in which one believes ~p to a state in which one believes p is to undermine one's reasons for believing ~p and hence to weaken one's belief in ~p. It seems plausible that one's confidence in p will be inversely proportional to one's confidence in ~p.

It might be argued that one cannot do something intentionally without doing it knowingly. Hence one cannot intentionally bring it about that one believes p when p is false without knowing that p is false - and so one will have simultaneous contradictory beliefs after all. I shall shortly argue that the premise is false, but let me concede it for the moment. The first thing to notice is that this only threatens intentionalists who espouse (C). Those who espouse (B) are left untouched. Presumably what one knows is the content of one's intention and in (B) the content of that intention does not include the falsehood of the belief that one is trying to get oneself to believe.

But should we conclude that a (C)-type intentionalist is committed to the ascription of simultaneous contradictory beliefs? Not at all. During the process of intentionally bringing it about that one comes to believe a falsehood one will, on the current assumption, know that one is bringing it about that one will acquire a false belief. But during that time one has presumably not yet acquired the false belief in question. So there is no conflict. And when the process of acquiring the belief has come to an end, so too does the intentional activity and the concomitant knowledge that the belief thus acquired is false. Someone might argue that this will still not allow the self-deceived believer to believe that p with impunity – because one cannot believe p without believing that p is true and one cannot believe that p is true if one believes that one has caused oneself to believe it. But this confuses the normative and the descriptive. No doubt one ought not to believe that p if one believes that one caused oneself to believe that p. But as a simple matter of psychological fact people can reconcile those two beliefs. One might believe, for example, that although one initially set out to cause oneself to believe that p the evidence in favour of p was so completely overwhelming that one would have come to believe that p regardless.

In any case, it seems false that one cannot do something intentionally without doing it knowingly. There is certainly a clear sense in which it is pretty implausible that one might intentionally perform a simple action like pulling a trigger or switching on a light without knowing that that is what one is doing. And it also seems pretty clear that if such a simple action has consequences which one has no means of recognising or predicting (like assassinating the only woman to believe that Arkansas is in Australia or frightening away a tawny owl) then those consequences are not achieved intentionally. But most intentional actions do not fit neatly into one or other of those categories. Suppose I have a long-term desire to advance my career. This long-term desire influences almost everything I do in my professional life, so that it becomes correct to describe many of my actions as carried out with the intention of advancing my career. Does this mean that I carry them out knowing that they are being done with that intention? Surely not.

The intention to bring it about that one acquire a certain belief is closer to the intention to advance one's career than it is to the intention to switch on a light. As Pascal pointed out, acquiring a belief is a long-term process involving much careful focusing of attention, selective evidence gathering, acting as if the belief was true, and so forth. It seems likely that the further on one is in the process, and the more successful one has been in the process of internalising the belief, the more likely one will be to have lost touch with the original motivation.

It would seem, then, that the standard objections to intentionalist accounts of self-deception are less than convincing. This goes some way towards weakening the case for anti-intentionalism, simply because a considerable part of the appeal of anti-intentionalism comes from the puzzles or paradoxes that are supposed to beset intentionalist approaches. But what about the positive case for intentionalism?

The positive case for intentionalism is based on inference to the best explanation. The intentionalist proposal is that we cannot understand an important class of psychological phenomena without postulating that the subject is intentionally bringing it about that he come to have a certain belief. The situation may be more accurately characterised in terms of one of the three models I have identified - the most common, no doubt, will be (A)-type intentional self-deception. It is not enough for the intentionalist to show that such situations sometimes occur - perhaps by citing extreme examples like the neurological patients who deny that there is anything wrong with them, despite being cortically blind (Anton's syndrome) or partially paralysed (anosognosia for hemiplegia). The intentionalist needs to capture the middle ground by showing that many of the everyday, "common-or-garden" episodes that we would characterise as instances of self-deception need to be explained in intentionalist terms. The task is obviously too large to undertake here, but I will make a start on it by trying to show that the sophisticated mechanisms that Alfred Mele proposes for an anti-intentionalist and deflationary analysis of everyday self-deception look very much as if they can only do the explanatory work required of them when supplemented by an intentionalist explanation.

Let us look again at the four conditions that Mele places upon self-deception:

(i) The belief that p which S acquires is false.

(ii) S treats data seemingly relevant to the truth-value of p in a motivationally biased way.

(iii) This motivationally biased treatment non-deviantly causes S to acquire the belief that p.

(iv) The evidence that S possesses at the time provides greater warrant for ~p than for p.

I would like to focus upon the second condition. Mele requires that S treat data seemingly relevant to determining the truth-value of p in a motivationally biased way. It is presumably a desire that p be the case that results in the motivationally biased treatment. Because we desire that p be the case we engage in selective evidence-gathering, various forms of misinterpretation of evidence, and so forth, eventually resulting in the acquisition of the belief that p. This account is radically incomplete, however. What is the connection between our desire that p be the case and our exercise of motivational bias?

We can get a sense of how Mele would respond from his (1998) discussion of the model of everyday hypothesis testing developed by Trope and Liberman (1996). The basic idea is that people have different acceptance/rejection thresholds for hypotheses depending upon the expected subjective cost to the individual of false acceptance or false rejection relative to the resources required for acquiring and processing information. The higher the expected subjective cost of false acceptance, the higher the threshold for acceptance - and similarly for rejection. Hypotheses which have a high acceptance threshold will be more rigorously tested and evaluated than those which have a low acceptance threshold. Mele proposes that, in many cases of self-deception, the expected subjective cost associated with the acquired false belief is low. So, for example, the complacent husband would be much happier falsely believing that his wife is not having an affair than he would be falsely believing that she was having an affair – because he desires that she not be having an affair. So the acceptance threshold for the hypothesis that she is not having an affair will be low, and it is this low acceptance threshold which explains the self-deceiving acquisition of the belief that she is not having an affair.
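The threshold mechanism can be illustrated with a toy model. This is my own sketch, not Trope and Liberman's actual model (which is far richer); the numbers, function names and the specific linear rule are invented purely for illustration.

```python
def acceptance_threshold(cost_of_false_verdict: float,
                         processing_resources: float = 1.0) -> float:
    """Toy rule: the costlier a false verdict (relative to the resources
    available for testing), the more evidence is demanded before that
    verdict is reached. Costs and evidence are scaled to [0, 1]."""
    return min(0.99, 0.5 + 0.4 * cost_of_false_verdict / processing_resources)

def verdict(evidence_for_p: float, cost_false_acceptance: float,
            cost_false_rejection: float) -> str:
    accept_at = acceptance_threshold(cost_false_acceptance)
    reject_at = 1.0 - acceptance_threshold(cost_false_rejection)
    if evidence_for_p >= accept_at:
        return "accept"
    if evidence_for_p <= reject_at:
        return "reject"
    return "keep testing"

# The complacent husband: p = "my wife is not having an affair". The
# subjective cost of falsely accepting p is low for him, so his
# acceptance threshold drops and marginal evidence suffices.
husband = verdict(evidence_for_p=0.6,
                  cost_false_acceptance=0.1, cost_false_rejection=0.9)

# A wary spouse with the very same evidence, but a high subjective
# cost of false acceptance, goes on testing the hypothesis instead.
wary = verdict(evidence_for_p=0.6,
               cost_false_acceptance=0.9, cost_false_rejection=0.1)
```

The point of the sketch is only that identical evidence can yield different verdicts once the thresholds are set by expected subjective costs rather than by the evidence itself.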

We can see how Mele would most likely respond to the question of how the second condition of his account of self-deception comes about. S's desire that p be true results in a motivationally biased treatment of data by affecting the acceptance and rejection thresholds of the hypothesis that p. It lowers the acceptance threshold and raises the rejection threshold, thus opening the door to biased treatment of the data. This account is ingenious and no doubt provides a good explanation of why different people will draw different conclusions from similar bodies of information. It also provides a good explanation of at least some instances of belief acquisition that one might intuitively classify as instances of self-deception. It is not clear, however, that it can be extended to provide a general account of self-deceiving belief acquisition. There is a fundamental problem that the theory does not seem to address.

Self-deception is paradigmatically selective. Any explanation of a given instance of self-deception will need to explain why motivational bias occurred in that particular situation. But the desire that p should be the case is insufficient to motivate cognitive bias in favour of the belief that p. There are all sorts of situations in which, however strongly we desire it to be the case that p, we are not in any way biased in favour of the belief that p. How are we to distinguish these from situations in which we desire p and are biased in favour of the belief that p? I will call this the selectivity problem.

In response to an earlier presentation of the selectivity problem (Bermúdez 1997), and to a related but different problem identified by William Talbott (1995), Mele (1998) gives the following illustration of how his theory might cope with it. He imagines Gordon, a CIA agent who has been accused of treason. Although Gordon's parents and his staff of intelligence agents have access to roughly the same information relative to Gordon's alleged crime, and they all want Gordon to be innocent, they come to different verdicts. Gordon's parents decide that he is innocent, while his colleagues decide that he is guilty. How can this be, given that the intelligence agents and the parents have the same desire and access to similar information? Here is Mele's response.

Here it is important to bear in mind a distinction between the cost of believing that p and the cost of believing falsely that p. It is the latter cost, not the former, that is directly relevant to the determination of the confidence thresholds on Trope and Liberman's view. For Gordon's parents, the cost of believing falsely that their son is innocent may not differ much from the cost of believing that he is innocent (independently of the truth or falsity of the belief). We may suppose that believing that he is innocent has no cost for them. Indeed, the belief is a source of comfort, and believing that Gordon is guilty would be quite painful. Additionally, their believing falsely that he is innocent may pose no subjectively significant threat to the parents. However, with Gordon's staff matters are very different. The cost to them of believing falsely that Gordon is innocent may be enormous, for they recognise that their lives are in his hands. And this is so even if they very much want it to be true that he is innocent. His parents have a much lower threshold for accepting the hypothesis that Gordon is innocent than for rejecting it, whereas in the light of the relative costs of "false acceptance" and "false rejection" for the staff, one would expect their thresholds to be quite the reverse of this. (Mele 1998, p. 361)

In essence, the divergence arises because the CIA agents' desire that Gordon be innocent is trumped by their desire not to be betrayed, and their acceptance and rejection thresholds accordingly differ from the thresholds of Gordon's parents.

The cost-benefit analysis provides a plausible explanation of what might be going on in the Gordon case, which is particularly interesting since there are ways of describing the situation on which Gordon's parents come out as self-deceived (as indeed there are for describing the CIA agents as being self-deceived). But it is not at all clear, despite what Mele claims, that it provides a response to the selectivity problem. The selectivity problem is not a problem of how two people in similar situations can acquire different beliefs. It arises, rather, from the fact that possessing a desire that p be true is not sufficient to generate cognitive bias, even if all other things are equal (which they are, perhaps, for Gordon's parents but not for his subordinates). It is simply not the case that, whenever my motivational set is such as to lower the acceptance threshold of a particular hypothesis, I will end up self-deceivingly accepting that hypothesis.1 The selectivity problem reappears. There are many hypotheses for which my motivational set dictates a low acceptance and high rejection threshold and for which the evidence available to me is marginal enough to make self-deception possible. But I self-deceivingly come to believe only a small proportion of them. Why those and not the others?

Intentionalist accounts of self-deception have a clear and straightforward answer to the selectivity problem. The self-deceiving acquisition of a belief that p requires more than simply a desire that p be the case and a low acceptance threshold/high rejection threshold for the hypothesis that p. It requires an intention on the part of the self-deceiver to bring it about that he acquires the belief that p. The fact that intentionalist theories can solve the selectivity problem in this way seems at least a prima facie reason for thinking that one cannot entirely abandon an intentionalist approach to self-deception.

1 I leave aside the problem that it seems perfectly possible to deceive oneself into believing a hypothesis

Let me end, however, with two qualifications. The first is that the argument from the selectivity problem can only be tentative, because it remains entirely possible that an anti-intentionalist response to the selectivity problem might be developed. It is hard to see what sort of argument could rule this out. Second, even if sound, the argument does not compel recognition of the existence of anything stronger than what I have called (A)-type self-deception. I have suggested that (B)- and (C)-type self-deception are not conceptually incoherent. But it remains to be seen whether there are situations for which inference to the best explanation demands an analysis in terms of (B)- or (C)-type self-deception.

Bibliography

Bermúdez, J. L. 1997. 'Defending intentionalist accounts of self-deception', Behavioral and Brain Sciences 20, 107-108.

Mele, A. 1997. 'Real self-deception', Behavioral and Brain Sciences 20, 91-102.

Mele, A. 1998. 'Motivated belief and agency', Philosophical Psychology 11, 353-369.

Nozick, R. 1981. Philosophical Explanations. Cambridge, MA: Harvard University Press.

Talbott, W. 1995. 'Intentional self-deception in a single, coherent self', Philosophy and Phenomenological Research 55, 27-74.

Trope, Y. and Liberman, A. 1996. 'Social hypothesis testing: cognitive and motivational mechanisms', in E. Higgins and A. Kruglanski (eds.), Social Psychology: Handbook of Basic Principles. New York: Guilford Press.


Self-Deception, Intentions and Contradictory Beliefs

José Luis Bermúdez

(University of Stirling)

(Forthcoming in Analysis, September 2000)

Philosophical accounts of self-deception can be divided into two broad groups - the intentionalist and the anti-intentionalist. On intentionalist models what happens in the central cases of self-deception is parallel to what happens when one person intentionally deceives another, except that deceiver and deceived are the same person.1 In the classic case of self-deceiving belief formation, the self-deceiver brings it about that they themselves acquire (or retain) a particular belief that p - just as in other-deception one person intentionally brings it about that another person acquires a belief. In neither case is the (self-)deceiver motivated by conviction of the truth of p. The interpersonal deceiver wants it to be the case that his victim forms the belief that p, while the self-deceiver wants it to be the case that he himself forms the belief that p. Both the self-deceiver and the interpersonal deceiver act intentionally to bring it about that the relevant desire is satisfied.2

According to anti-intentionalist accounts, in contrast, self-deceiving belief formation can be explained simply in terms of motivational bias, without bringing in appeals to intentional action. Self-deception is not structurally similar to intentional interpersonal deception. Whereas intentionalism is generally defended by arguments to the best explanation, anti-intentionalism is often argued for indirectly, through attacking alleged incoherences in intentionalist accounts. Many of the arguments appealed to by anti-intentionalists suffer from failing to specify clearly enough what the intentionalist is actually proposing. This paper distinguishes three different descriptions that an intentionalist might give of an episode of self-deception. Separating out the different possible ways in which an intentionalist might characterise an episode of self-deception allows us to evaluate some of the commoner objections that are levelled against the intentionalist approach. I end with a positive argument in favour of intentionalism, suggesting that only intentionalist accounts of self-deception look as if they can deal with the selective nature of self-deception.

I

I am concerned here only with self-deception issuing in the acquisition of beliefs, putting to one side both self-deceiving acquisition of non-doxastic propositional attitudes (such as hope or fear), and cases of retention rather than acquisition. In general terms the intentionalist view is that it is a necessary condition of a subject's forming a self-deceiving belief that p that they should intentionally bring it about that they believe that p. But there are different ways in which this might happen. For example:

(1) S believes that not-p but intends to bring it about that he acquires the false belief that p.

(2) S believes that not-p but intends to bring it about that he acquires the belief that p.

(3) S intends to bring it about that he acquires the belief that p.

Correspondingly, there are three different ways, in ascending order of strength, in which an intentionalist might characterise core episodes of self-deception:

(A) Core episodes of self-deception involve intending to bring it about that one acquires a certain belief.

(B) Core episodes of self-deception involve holding a belief to be false and yet intending to bring it about that one acquires that belief.

(C) Core episodes of self-deception involve intending to bring it about that one acquires a false belief.

(A)-type intentionalism attributes intentions of the third type discussed above; (B)-type intentionalism intentions of the second type and (C)-type intentionalism intentions of the first type. It is clear that holding (A) to be true is a bottom-level requirement on any intentionalist account of self-deception. But (A) does not entail either (B) or (C).

(A) does not entail (B) because it is false that one can only intend to bring it about that one believes p when one holds p to be false. One might have no views whatsoever about whether or not it is the case that p and yet intend to bring oneself to believe that p. This would be the position of a complete agnostic persuaded by Pascal's Wager. Alternatively, one might have evidence for or against p that was too inconclusive to warrant anything more than the judgment that p is possibly true, and yet intend to bring it about that one believe that p.

Nor does (B) entail (C). I can believe that p is false and intend to bring it about that I acquire the belief that p without it actually being part of the content of my intention that I come to believe a falsehood. That is to say, to put it in a way that parallels a claim sometimes made in epistemology, intentions are not closed under known logical implication. I can know that x entails y and intend to bring about x without ipso facto intending to bring about y.

The epistemological parallel is worth pursuing. The thesis that knowledge is not closed under known logical implication can be motivated by placing the following requirements on knowledge (Nozick 1981); I cannot know that p unless

(a) were it to be the case that not-p, I wouldn't believe that p

(b) were it to be the case that p I would believe that p.

So, I might know that p entails q and also know that p without knowing that q - because my belief that q, unlike my belief that p, does not satisfy the equivalent of requirements (a) and (b).
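Nozick's two tracking requirements, and the resulting failure of closure, can be set out schematically. The notation is my gloss on the text, with $K$ and $B$ abbreviating 'knows' and 'believes' and $\Box\!\rightarrow$ standing for the subjunctive conditional:

```latex
% S knows that p only if S's belief tracks the truth:
%   (a) if p were false, S would not believe p;
%   (b) if p were true, S would believe p.
\begin{align}
  Kp &\;\Rightarrow\; \bigl(\neg p \;\Box\!\!\rightarrow\; \neg Bp\bigr) \tag{a}\\
  Kp &\;\Rightarrow\; \bigl(p \;\Box\!\!\rightarrow\; Bp\bigr) \tag{b}
\end{align}
% Closure then fails: Kp and K(p -> q) do not jointly yield Kq,
% since the belief that q may violate (a): in the nearest worlds in
% which q is false, S might still believe q.
```

Because the subjunctive conditionals in (a) and (b) are evaluated at the nearest relevant worlds, they are not preserved under known logical implication, which is exactly the structural point carried over to intentions below.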

We can place similar conditions upon intentions. Suppose I know that if I go out for a walk on the moor this afternoon it is inevitable that I will end up killing midges (because they are all over the place and I know that I cannot stop myself from crushing them if they land on me, as they are bound to do). Yet I form the intention to go out for a walk and end up killing midges. Have I killed those midges intentionally? It is plausible that I haven't. After all, the killing of the midges was not my aim in going out, nor was it a means to my achieving my aim. My aim was simply to go out for a walk, and even if my going out hadn't resulted in the death of a single midge I still would have gone out. The equivalent of requirement (a) is not met.

The case of self-deception seems exactly parallel. I know that my bringing it about that I come to acquire the belief that p will have the consequence that I come to believe a falsehood. Nonetheless, I can intend to bring it about that I believe that p without intending to bring it about that I believe a falsehood - because in a counterfactual situation in which my bringing it about that I believe that p does not result in my believing a falsehood I would nonetheless still bring it about that I believe that p. This reflects the intuitive idea that I do not intentionally bring it about that I believe a falsehood because that is not my aim. So (B) does not entail (C).

Having separated out the three different types of intentionalist position one might still wonder, however, whether they all really count as instances of self-deception. In what sense are subjects who form one or other of intentions (1) to (3) properly described as deceiving themselves?

It is pretty clear where the deception comes in (B)- and (C)-type intentionalism, since in both cases the self-deceiver knows that p is false and sets out to manipulate himself into believing that p - just as in intentional interpersonal deception one person sets out to manipulate another into believing something that he (the deceiver) holds to be false. But what about (A)-type self-deception? Things are more delicate here since in (A)-type self-deception the self-deceiver does not necessarily hold the belief they intend to acquire to be false. Nonetheless, they do know that the belief they intend to bring it about that they acquire is not currently warranted by the available evidence. Is this genuine deception? Well, it would count as deception in the interpersonal case. If I intend to bring it about that someone I'm talking to at a dinner-party comes to believe that a particular mutual friend is untrustworthy even though I know that the evidence available to either of us does not warrant that conclusion then I would properly be described as deceiving them, simply because I would knowingly be contravening the tacit principle that beliefs should be transmitted only when they are held to be true. The case of (A)-type self-deception is exactly parallel. I intend to bring it about that I acquire a belief in ways that are not simply non-standard, but that contravene the norms of truth governing belief formation. I am manipulating my own belief-forming mechanisms.

In all three versions of the intentionalist claim, therefore, the deception comes in because the intention is to bring it about that one acquires a belief that one knows one would not have acquired in the absence of that intention. The distinction between self-deception as the intentionalist construes it and wishful thinking (or other comparable modes of motivationally biased belief formation) should be clear. The wishful thinker acquires the belief that p because they want it to be the case that p. The self-deceiver, in contrast, will usually want it to be the case that p - but will also want it to be the case that they believe that p. Moreover, the self-deceiver forms the belief that p as a result of having intended to form the belief that p, whereas there is no such intention in wishful thinking.

II

Let me turn now to what is often taken to be an obvious objection to any intentionalist account of self-deception, but which is actually only applicable to the suggestion in (B) and (C) that the self-deceiver forms an intention to bring it about that he acquire a belief that he thinks is false. There are two main reasons for holding this to be incoherent. First, one might argue that one cannot intend to bring it about that one acquires a belief that one thinks to be false without simultaneously having contradictory beliefs. Yet it is impossible to be in any such state. Second, one might think that the project is guaranteed to be self-defeating. If one knows that the belief is false then how can one get oneself to believe it?

The first line of argument is unconvincing. There is certainly something very puzzling about the idea of an agent avowing the contradictory belief that p & not-p. But nothing like this need occur in either (B) or (C), since the two beliefs could be inferentially insulated from each other. It is clear that 'S believes p at time t' and 'S believes q at time t' do not jointly entail that S at time t has a single belief with the conjunctive content that p & q. So, an account of self-deception can involve the simultaneous ascription of beliefs that p and that not-p without assuming that those two contradictory beliefs are simultaneously active in any way that would require ascribing the contradictory belief that p & not-p.

But there is a sense in which this is peripheral, because it is unclear that either (B) or (C) do require the ascription of simultaneous contradictory beliefs. I can start from a state in which I believe that not-p and then intentionally bring it about that I acquire the belief that p without there being a point at which I simultaneously believe that p and believe that not-p. The best way of bringing it about that one moves from a state in which one believes not-p to a state in which one believes p is to undermine one's reasons for believing not-p and hence to weaken one's belief in not-p. It seems plausible that one's confidence in p will be inversely proportional to one's confidence in not-p.

It might be argued that one cannot do something intentionally without doing it knowingly. Hence one cannot intentionally bring it about that one believes p when p is false without knowing that p is false - and so one will have simultaneous contradictory beliefs after all. I shall shortly argue that the premise is false, but let me concede it for the moment. The first thing to notice is that this only threatens intentionalists who espouse (C). Those who espouse (B) are left untouched. Presumably what one knows is the content of one's intention and in (B) the content of that intention does not include the falsehood of the belief that one is trying to get oneself to believe. Of course, it is an episode of (B)-type self-deception because one starts off believing that not-p. But one might have forgotten this in the process of tricking oneself into acquiring the belief.

But should we conclude that a (C)-type intentionalist is committed to the ascription of simultaneous contradictory beliefs? Not at all. During the process of intentionally bringing it about that one comes to believe a falsehood one will, on the current assumption, know that one is bringing it about that one will acquire a false belief. But during that time one has presumably not yet acquired the false belief in question. So there is no conflict. And when the process of acquiring the belief has come to an end, so too does the intentional activity and the concomitant knowledge that the belief thus acquired is false. Someone might argue that this will still not allow the self-deceived believer to believe that p with impunity - because one cannot believe p without believing that p is true and one cannot believe that p is true if one believes that one has caused oneself to believe it. But this confuses the normative and the descriptive. No doubt one ought not to believe that p if one believes that one caused oneself to believe that p. But as a simple matter of psychological fact people can reconcile those two beliefs. One might believe, for example, that although one initially set out to cause oneself to believe that p the evidence that emerged in favour of p was so overwhelming that one would have come to believe that p regardless.

In any case, it seems false that one cannot do something intentionally without doing it knowingly. It is certainly implausible that one might intentionally perform a simple action like pulling a trigger or switching on a light without knowing that that is what one is doing. And it also seems pretty clear that if such a simple action has consequences which one has no means of recognising or predicting (like assassinating the only woman to believe that Arkansas is in Australia or frightening away a tawny owl) then those consequences are not achieved intentionally. But most intentional actions do not fit neatly into one or other of those categories. Suppose I have a long-term desire to advance my career. This long-term desire influences almost everything I do in my professional life, so that it becomes correct to describe many of my actions as carried out with the intention of advancing my career. Does this mean that I carry them out knowing that they are being done with that intention? Surely not.

The intention to bring it about that one acquire a certain belief is closer to the intention to advance one's career than it is to the intention to switch on a light. As Pascal pointed out, acquiring a belief is a long-term process involving much careful focusing of attention, selective evidence gathering, acting as if the belief were true, and so forth. It seems likely that the further on one is in the process, and the more successful one has been in the process of internalising the belief, the more likely one will be to have lost touch with the original motivation. This is not to say, however, that all cases of intentional self-deception will involve unconscious intentions. Far from it. An action can be performed unknowingly even though the intention that gave rise to it was (at the time it was formulated) fully conscious. The point is simply that one can lose touch with an intention while one is in the process of implementing it, particularly when that implementation is a long-drawn-out process. The fact that an action is precipitated by a conscious intention does not entail that while carrying out the action one remains constantly conscious of the intention that gave rise to it. By the same token, the fact that one is not conscious of the intention while carrying out the action does not undermine the action's status as intentional. Nor does it threaten the idea that the project is one of deception - any more than in the interpersonal case the fact that I am not at the moment conscious of my original intention to deceive means that my ongoing attempts to manipulate my dinner companion no longer count as deception.3

So, the standard objections to intentionalist accounts of self-deception are less than convincing. This goes some way towards weakening the case for anti-intentionalism, simply because a considerable part of the appeal of anti-intentionalism comes from the puzzles or paradoxes that are supposed to beset intentionalist approaches. But what about the positive case for intentionalism?

III

The positive case for intentionalism is based on inference to the best explanation. The intentionalist proposal is that we cannot understand an important class of psychological phenomena which would normally be labelled self-deception without postulating that the subject is intentionally bringing it about that he come to have a certain belief. It is not enough for the intentionalist to show that such situations sometimes occur - perhaps by citing extreme examples like the neurological patients who deny that there is anything wrong with them, despite being cortically blind (Anton's syndrome) or partially paralysed (anosognosia for hemiplegia). The intentionalist needs to capture the middle ground by showing that many of the everyday, "common-or-garden" episodes that we would characterise as instances of self-deception need to be explained in intentionalist terms. The task is too large to undertake here, but I will make a start on it by trying to show that one sophisticated version of the anti-intentionalist and deflationary approach to self-deception suffers from a fatal defect unless supplemented by an intentionalist explanation. It seems likely that the problem will generalise to any anti-intentionalist strategy.

Here are the four conditions that Al Mele has deployed to characterise self-deception (Mele 1997, 1998):

(i) The belief that p which S acquires is false.

(ii) S treats data seemingly relevant to the truth-value of p in a motivationally biased way.

(iii) This motivationally biased treatment non-deviantly causes S to acquire the belief that p.

(iv) The evidence that S possesses at the time provides greater warrant for not-p than for p.

Examples of the motivational biases that Mele mentions in spelling out the second condition are: selective attention to evidence that we actually possess; selective means of gathering evidence; negative misinterpretation (failing to count as evidence against p data that we would easily recognise as such were we not motivationally biased); positive misinterpretation (counting as evidence for p data that we would easily recognise as evidence against p were we not motivationally biased), and so forth (Mele 1997).

This account is radically incomplete, however. What is the connection between our desire that p be the case and our exercise of motivational bias? Where does the motivational bias come from? We can get a sense of how Mele would respond from his (1998) discussion of the model of everyday hypothesis testing developed by Trope and Liberman (1996). The basic idea is that people have different acceptance/rejection thresholds for hypotheses depending upon the expected subjective cost to the individual of false acceptance or false rejection relative to the resources required for acquiring and processing information. The higher the expected subjective cost of false acceptance the higher the threshold for acceptance - similarly for rejection. Hypotheses which have a high acceptance threshold will be more rigorously tested and evaluated than those which have a low acceptance threshold. Mele proposes that, in many cases of self-deception, the expected subjective cost associated with the acquired false belief is low. So, for example, the complacent husband would be much happier falsely believing that his wife is not having an affair than he would be falsely believing that she was having an affair - because he desires that she not be having an affair. So the acceptance threshold for the hypothesis that she is not having an affair will be low, and it is this low acceptance threshold which explains the self-deceiving acquisition of the belief that she is not having an affair.

Clearly, then, Mele would say that S's desire that p be true lowers the acceptance threshold and raises the rejection threshold of the hypothesis that p, thus opening the door to motivationally biased treatment of the data. This account is ingenious and provides a good explanation of at least some instances of belief acquisition that one might intuitively classify as instances of self-deception. But there is a fundamental problem that the theory does not seem to address.

Self-deception is paradigmatically selective. Any explanation of a given instance of self-deception will need to explain why motivational bias occurred in that particular situation. But the desire that p should be the case is insufficient to motivate cognitive bias in favour of the belief that p. There are all sorts of situations in which, however strongly we desire it to be the case that p, we are not in any way biased in favour of the belief that p. How are we to distinguish these from situations in which our desires result in motivational bias? I will call this the selectivity problem.

In response to an earlier presentation of the selectivity problem (Bermúdez 1997), and to a related but different problem identified by William Talbott (1995), Mele (1998) gives the following illustration of how his theory might cope with it. He imagines Gordon, a CIA agent who has been accused of treason. Although Gordon's parents and his staff of intelligence agents have access to roughly the same information relative to Gordon's alleged crime, and they all want Gordon to be innocent, they come to different verdicts. Gordon's parents decide that he is innocent, while his colleagues decide that he is guilty. How can this be, given that the intelligence agents and the parents have the same desire and access to similar information? Mele's response is that the cost of falsely believing that p is far higher for the intelligence agents than it is for Gordon's parents. The agents' desire that Gordon be innocent is trumped by their desire not to be betrayed, and their acceptance and rejection thresholds differ accordingly from those of Gordon's parents (who might be comforted by believing falsely that their son is innocent).

The cost-benefit analysis provides a plausible explanation of what might be going on in the Gordon case, but it does not seem to solve the selectivity problem. The selectivity problem is not a problem of how two people in similar situations can acquire different beliefs. It arises, rather, from the fact that possessing a desire that p be true is not sufficient to generate cognitive bias, even if all other things are equal (which they are, perhaps, for Gordon's parents but not for his subordinates). It is simply not the case that, whenever my motivational set is such as to lower the acceptance threshold of a particular hypothesis, I will end up self-deceivingly accepting that hypothesis.4 The selectivity problem reappears. There are many hypotheses for which my motivational set dictates a low acceptance and high rejection threshold and for which the evidence available to me is marginal enough to make self-deception possible. But I self-deceivingly come to believe only a small proportion of them. Why those and not the others?

One might suggest that self-deceiving belief formation arises when the self-deceiver possesses, not simply the desire that p be the case but also the desire that he come to believe that p. But the selectivity problem seems to arise once more. Even if one desires both that p be true and that one come to believe that p it is not inevitable that one will form the belief that p. We still need an explanation of why one does in some cases and not in others.

Intentionalist accounts of self-deception have a clear and straightforward answer to the selectivity problem. The self-deceiving acquisition of a belief that p requires more than simply a desire that p be the case and a low acceptance/high rejection threshold for the hypothesis that p. It requires an intention on the part of the self-deceiver to bring it about that he acquires the belief that p. The fact that intentionalist theories can solve the selectivity problem in this way seems at least a prima facie reason for thinking that one cannot entirely abandon an intentionalist approach to self-deception. One might add, moreover, that including an intention in the account makes the psychological phenomenon look far more like a genuine case of deception, for the reasons canvassed at the end of section I.

Let me end with two qualifications. First, the argument from the selectivity problem can only be tentative, because it remains entirely possible that an anti-intentionalist response to the selectivity problem might be developed. It is hard to see what sort of argument could rule this out. Second, even if sound, the argument does not compel recognition of the existence of anything stronger than what I have called (A)-type self-deception. I have suggested that (B)- and (C)-type self-deception are not conceptually incoherent. But it remains to be seen whether there are situations for which inference to the best explanation demands an analysis in terms of (B)- or (C)-type self-deception.5

The University of Stirling

Stirling FK9 4LA

jb10@stirling.ac.uk

Bibliography

Bermúdez, J. L. 1997. Defending intentionalist accounts of self-deception. Behavioral and Brain Sciences 20: 107-108.

Bermúdez, J. L. 2000. Autoinganno, intenzioni e credenze contraddittorie. Un commento a Mele [Self-deception, intentions and contradictory beliefs: a comment on Mele]. Sistemi Intelligenti 3.

Haight, M. 1980. A Study of Self-Deception. Brighton: Harvester Press.

Mele, A. 1997. Real self-deception. Behavioral and Brain Sciences 20: 91-102.

Mele, A. 1998. Motivated belief and agency. Philosophical Psychology 11: 353-369.

Nozick, R. 1981. Philosophical Explanations. Cambridge Mass.: Harvard University Press.

Talbott, W. 1995. Intentional self-deception in a single, coherent self. Philosophy and Phenomenological Research 55: 27-74.

Trope, Y. and Liberman, A. 1996. Social hypothesis testing: cognitive and motivational mechanisms. In Social Psychology: Handbook of Basic Principles, eds. E. Higgins and A. Kruglanski. New York: Guilford Press.

Below is the unedited preprint (not a quotable final draft) of:

Mele, A.R. (1997). Real self-deception. Behavioral and Brain Sciences 20 (1): 91-136.


REAL SELF-DECEPTION

Alfred R. Mele

Department of Philosophy,

Davidson College

Davidson, NC 28036

almele@davidson.edu

Keywords

belief; bias; contradictory beliefs; intention; motivation; self-deception; wishful thinking

Abstract

Self-deception poses tantalizing conceptual conundrums and provides fertile ground for empirical research. Recent interdisciplinary volumes on the topic feature essays by biologists, philosophers, psychiatrists, and psychologists (Lockard & Paulhus 1988, Martin 1985). Self-deception's location at the intersection of these disciplines is explained by its significance for questions of abiding interdisciplinary interest. To what extent is our mental life present--or even accessible--to consciousness? How rational are we? How is motivated irrationality to be explained? To what extent are our beliefs subject to our control? What are the determinants of belief, and how does motivation bear upon belief? In what measure are widely shared psychological propensities products of evolution?<1>

A proper grasp of the dynamics of self-deception may yield substantial practical gains. Plato wrote, "there is nothing worse than self-deception--when the deceiver is at home and always with you" (Cratylus 428d). Others argue that self-deception sometimes is beneficial; and whether we would be better or worse off, on the whole, if we never deceived ourselves is an open question.<2> In any case, ideally, a detailed understanding of the etiology of self-deception would help reduce the frequency of harmful self-deception. This hope is boldly voiced by Jonathan Baron in a book on rational thinking and associated obstacles: "If people know that their thinking is poor, they will not believe its results. One of the purposes of a book like this is to make recognition of poor thinking more widespread, so that it will no longer be such a handy means of self-deception" (1988, p. 39). A lively debate in social psychology about the extent to which sources of biased belief are subject to personal control has generated evidence that some prominent sources of bias are to some degree controllable.<3> This provides grounds for hope that a better understanding of self-deception would enhance our ability to do something about it.

My aim in this article is to clarify the nature and (relatively proximate) etiology of self-deception. Theorists have tended to construe self-deception as largely isomorphic with paradigmatic interpersonal deception. Such construals, which have generated some much-discussed puzzles or "paradoxes," guide influential work on self-deception in each of the four disciplines mentioned (e.g., Davidson 1985, Gur & Sackeim 1979, Haight 1980, Pears 1984, Quattrone & Tversky 1984, Trivers 1985).<4> In the course of resolving the major puzzles, I will argue that the attempt to understand self-deception on the model of paradigmatic interpersonal deception is fundamentally misguided. Section 1 provides background, including sketches of two familiar puzzles: one about the mental state of a self-deceived person at a given time, the other about the dynamics of self-deception. Section 2, drawing upon empirical studies of biased belief, resolves the first puzzle and articulates sufficient conditions for self-deception. Section 3 challenges some attempted empirical demonstrations of the reality of self-deception, construed as requiring the simultaneous possession of beliefs whose propositional contents are mutually contradictory. Section 4 resolves the dynamic puzzle. Section 5 examines intentional self-deception.

Readers should be forewarned that the position defended here is deflationary. If I am right, self-deception is neither irresolvably paradoxical nor mysterious and it is explicable without the assistance of mental exotica. Although a theorist whose interest in self-deception is restricted to the outer limits of logical or conceptual possibility might view this as draining the topic of conceptual fun, the main source of broader, enduring interest in self-deception is a concern to understand and explain the behavior of real human beings.

1. Three Approaches to Characterizing Self-Deception and a Pair of Puzzles

Defining 'self-deception' is no mean feat. Three common approaches may be distinguished. One is lexical: a theorist starts with a definition of 'deceive' or 'deception', using the dictionary or common usage as a guide, and then employs it as a model for defining self-deception. Another is example-based: one scrutinizes representative examples of self-deception and attempts to identify their essential common features. The third is theory-guided: the search for a definition is guided by common-sense theory about the etiology and nature of self-deception. Hybrids of these approaches are also common.

The lexical approach may seem safest. Practitioners of the example-based approach run the risk of considering too narrow a range of cases. The theory-guided approach (in its typical manifestations) relies on common-sense explanatory hypotheses that may be misguided: ordinary folks may be good at identifying hypothetical cases of self-deception but quite unreliable at diagnosing what transpires in them. In its most pristine versions, the lexical approach relies primarily on a dictionary definition of 'deceive'. And what could be a better source of definitions than the dictionary?

Matters are not so simple, however. There are weaker and stronger senses of 'deceive' both in the dictionary and in common parlance, as I will explain. Lexicalists need a sense of 'deceive' that is appropriate to self-deception. On what basis are they to identify that sense? Must they eventually turn to representative examples of self-deception or to common-sense theories about what transpires in instances of self-deception?

The lexical approach is favored by theorists who deny that self-deception is possible (e.g., Gergen 1985, Haight 1980, Kipp 1980). A pair of lexical assumptions are common:

1. By definition, person A deceives person B (where B may or may not be the same person as A) into believing that p only if A knows, or at least believes truly, that ~p (i.e., that p is false) and causes B to believe that p.

2. By definition, deceiving is an intentional activity: nonintentional deceiving is conceptually impossible.

Each assumption is associated with a familiar puzzle about self-deception.

If assumption 1 is true, then deceiving oneself into believing that p requires that one know, or at least believe truly, that ~p and cause oneself to believe that p. At the very least, one starts out believing that ~p and then somehow gets oneself to believe that p. Some theorists take this to entail that, at some time, self-deceivers both believe that p and believe that ~p (e.g., Kipp 1980, p. 309). And, it is claimed, this is not a possible state of mind: the very nature of belief precludes one's simultaneously believing that p is true and believing that p is false. Thus we have a static puzzle about self-deception: self-deception, according to the view at issue, requires being in an impossible state of mind.

Assumption 2 generates a dynamic puzzle, a puzzle about the dynamics of self-deception. It is often held that doing something intentionally entails doing it knowingly. If that is so, and if deceiving is by definition an intentional activity, then one who deceives oneself does so knowingly. But knowingly deceiving oneself into believing that p would require knowing that what one is getting oneself to believe is false. How can that knowledge fail to undermine the very project of deceiving oneself? It is hard to imagine how one person can deceive another into believing that p if the latter person knows exactly what the former is up to. And it is difficult to see how the trick can be any easier when the intending deceiver and the intended victim are the same person.<5> Further, deception normally is facilitated by the deceiver's having and intentionally executing a deceptive strategy. If, to avoid thwarting one's own efforts at self-deception, one must not intentionally execute any strategy for deceiving oneself, how can one succeed?

In sketching these puzzles, I conjoined the numbered assumptions with subsidiary ones. One way for a proponent of the reality of self-deception to attempt to solve the puzzles is to attack the subsidiary assumptions while leaving the main assumptions unchallenged. A more daring tack is to undermine the main assumptions, 1 and 2. That is the line I will pursue.

Stereotypical instances of deceiving someone else into believing that p are instances of intentional deceiving in which the deceiver knows or believes truly that ~p. Recast as claims specifically about stereotypical interpersonal deceiving, assumptions 1 and 2 would be acceptable. But in their present formulations the assumptions are false. In a standard use of 'deceived' in the passive voice, we properly say such things as "Unless I am deceived, I left my keys in my car." Here 'deceived' means 'mistaken'. There is a corresponding use of 'deceive' in the active voice. In this use, to deceive is "to cause to believe what is false" (my authority is the Oxford English Dictionary). Obviously, one can intentionally or unintentionally cause someone to believe what is false; and one can cause someone to acquire the false belief that p even though one does not oneself believe that ~p. Yesterday, mistakenly believing that my son's keys were on my desk, I told him they were there. In so doing, I caused him to believe a falsehood. I deceived him, in the sense identified; but I did not do so intentionally, nor did I cause him to believe something I disbelieved.

The point just made has little significance for self-deception, if paradigmatic instances of self-deception have the structure of stereotypical instances of interpersonal deception. But do they? Stock examples of self-deception, both in popular thought and in the literature, feature people who falsely believe--in the face of strong evidence to the contrary--that their spouses are not having affairs, or that their children are not using illicit drugs, or that they themselves are not seriously ill. Is it a plausible diagnosis of what transpires in such cases that these people start by knowing or believing the truth, p, and intentionally cause themselves to believe that ~p? If, in our search for a definition of self-deception, we are guided partly by these stock examples, we may deem it an open question whether self-deception requires intentionally deceiving oneself, getting oneself to believe something one earlier knew or believed to be false, simultaneously possessing conflicting beliefs, and the like. If, instead, our search is driven by a presumption that nothing counts as self-deception unless it has the same structure as stereotypical interpersonal deception, the question is closed at the outset.

Compare the question whether self-deception is properly understood on the model of stereotypical interpersonal deception with the question whether addiction is properly understood on the model of disease. Perhaps the current folk-conception of addiction treats addictions as being, by definition, diseases. However, the disease model of addiction has been forcefully attacked (e.g., Peele 1989). The issue is essentially about explanation, not semantics. How is the characteristic behavior of people typically counted as addicts best explained? Is the disease model of addiction explanatorily more fruitful than its competitors? Self-deception, like addiction, is an explanatory concept. We postulate self-deception in particular cases to explain behavioral data. And we should ask how self-deception is likely to be constituted--what it is likely to be--if it does help to explain the relevant data. Should we discover that the behavioral data explained by self-deception are not explained by a phenomenon involving the simultaneous possession of beliefs whose contents are mutually contradictory or intentional acts of deception directed at oneself, self-deception would not disappear from our conceptual map--any more than addiction would disappear should we learn that addictions are not diseases.

A caveat is in order before I move forward. In the literature on self-deception, "belief," rather than "degree of belief," usually is the operative notion. Here, I follow suit, primarily to avoid unnecessary complexities. Those who prefer to think in terms of degree of belief should read such expressions as "S believes that p" as shorthand for "S believes that p to a degree greater than 0.5 (on a scale from 0 to 1)."

2. Motivated Belief and the Static Puzzle

In stock examples of self-deception, people typically believe something they want to be true--that their spouses are not involved in extramarital flings, that their children are not using drugs, and so on. It is a commonplace that self-deception, in garden-variety cases, is motivated by wants such as these.<6> Should it turn out that the motivated nature of self-deception entails that self-deceivers intentionally deceive themselves and requires that those who deceive themselves into believing that p start by believing that ~p, theorists who seek a tight fit between self-deception and stereotypical interpersonal deception would be vindicated. Whether self-deception can be motivated without being intentional--and without the self-deceiver's starting with the relevant true belief--remains to be seen.

A host of studies have produced results that are utterly unsurprising on the hypothesis that motivation sometimes biases beliefs. Thomas Gilovich reports:

A survey of one million high school seniors found that 70% thought they were above average in leadership ability, and only 2% thought they were below average. In terms of ability to get along with others, all students thought they were above average, 60% thought they were in the top 10%, and 25% thought they were in the top 1%! . . . A survey of university professors found that 94% thought they were better at their jobs than their average colleague. (1991, p. 77)

Apparently, we have a tendency to believe propositions we want to be true even when an impartial investigation of readily available data would indicate that they probably are false. A plausible hypothesis about that tendency is that our wanting something to be true sometimes exerts a biasing influence on what we believe.

Ziva Kunda, in a recent review essay, ably defends the view that motivation can influence "the generation and evaluation of hypotheses, of inference rules, and of evidence," and that motivationally "biased memory search will result in the formation of additional biased beliefs and theories that are constructed so as to justify desired conclusions" (1990, p. 483). In an especially persuasive study, undergraduate subjects (75 women and 86 men) read an article alleging that "women were endangered by caffeine and were strongly advised to avoid caffeine in any form"; that the major danger was fibrocystic disease, "associated in its advanced stages with breast cancer"; and that "caffeine induced the disease by increasing the concentration of a substance called cAMP in the breast" (Kunda 1987, p. 642). (Since the article did not personally threaten men, they were used as a control group.) Subjects were then asked to indicate, among other things, "how convinced they were of the connection between caffeine and fibrocystic disease and of the connection between caffeine and . . . cAMP on a 6-point scale" (pp. 643-44). In the female group, "heavy consumers" of caffeine were significantly less convinced of the connections than were "low consumers." The males were considerably more convinced than the female "heavy consumers"; and there was a much smaller difference in conviction between "heavy" and "low" male caffeine consumers (the heavy consumers were slightly more convinced of the connections).

Given that all subjects were exposed to the same information and assuming that only the female "heavy consumers" were personally threatened by it, a plausible hypothesis is that their lower level of conviction is due to "motivational processes designed to preserve optimism about their future health" (Kunda 1987, p. 644). Indeed, in a study in which the reported hazards of caffeine use were relatively modest, "female heavy consumers were no less convinced by the evidence than were female low consumers" (p. 644). Along with the lesser threat, there is less motivation for skepticism about the evidence.

How do the female heavy consumers come to be less convinced than the others? One testable possibility is that because they find the "connections" at issue personally threatening, these women (or some of them) are motivated to take a hyper-critical stance toward the article, looking much harder than other subjects for reasons to be skeptical about its merits (cf. Kunda 1990, p. 495). Another is that, owing to the threatening nature of the article, they (or some of them) read it less carefully than the others do, thereby enabling themselves to be less impressed by it.<7> In either case, however, there is no need to suppose that the women intend to deceive themselves, or intend to bring it about that they hold certain beliefs, or start by finding the article convincing and then get themselves to find it less convincing. Motivation can prompt cognitive behavior protective of favored beliefs without the person's intending to protect those beliefs. Many instances of self-deception, as I will argue, are explicable along similar lines.

Beliefs that we are self-deceived in acquiring or retaining are a species of biased belief. In self-deception, on a widely held view, the biasing is motivated. Even so, attention to some sources of unmotivated or "cold" biased belief will prove salutary. A number of such sources have been identified in psychological literature. Here are four.<8>

1. Vividness of information. A datum's vividness for an individual often is a function of individual interests, the concreteness of the datum, its "imagery-provoking" power, or its sensory, temporal, or spatial proximity (Nisbett & Ross 1980, p. 45). Vivid data are more likely to be recognized, attended to, and recalled than pallid data. Consequently, vivid data tend to have a disproportionate influence on the formation and retention of beliefs.<9>

2. The availability heuristic. When we form beliefs about the frequency, likelihood, or causes of an event, we "often may be influenced by the relative availability of the objects or events, that is, their accessibility in the processes of perception, memory, or construction from imagination" (Nisbett & Ross, p. 18). For example, we may mistakenly believe that the number of English words beginning with 'r' greatly outstrips the number having 'r' in the third position, because we find it much easier to produce words on the basis of a search for their first letter (Tversky & Kahneman 1973). Similarly, attempts to locate the cause(s) of an event are significantly influenced by manipulations that focus one's attention on a potential cause (Nisbett & Ross, p. 22; Taylor & Fiske 1975, 1978).

3. The confirmation bias. People testing a hypothesis tend to search (in memory and the world) more often for confirming than for disconfirming instances and to recognize the former more readily (Baron 1988, pp. 259-65; Nisbett & Ross, pp. 181-82). This is true even when the hypothesis is only a tentative one (as opposed, e.g., to a belief one has). The implications of this tendency for the retention and formation of beliefs are obvious.

4. Tendency to search for causal explanations. We tend to search for causal explanations of events (Nisbett & Ross, pp. 183-86). On a plausible view of the macroscopic world, this is as it should be. But given 1 and 2 above, the causal explanations upon which we so easily hit in ordinary life may often be ill-founded; and given 3, one is likely to endorse and retain one's first hypothesis much more often than one ought. Further, ill-founded causal explanations can influence future inferences.

Obviously, the most vivid or available data sometimes have the greatest evidential value; the influence of such data is not always a biasing influence. The main point to be made is that although sources of biased belief can function independently of motivation, they may also be primed by motivation in the production of particular motivationally biased beliefs.<10> For example, motivation can enhance the vividness or salience of certain data. Data that count in favor of the truth of a hypothesis that one would like to be true might be rendered more vivid or salient given one's recognition that they so count; and vivid or salient data, given that they are more likely to be recalled, tend to be more "available" than pallid counterparts. Similarly, motivation can influence which hypotheses occur to one (including causal hypotheses) and affect the salience of available hypotheses, thereby setting the stage for the confirmation bias.<11> When this happens, motivation issues in cognitive behavior that epistemologists shun. False beliefs produced or sustained by such motivated cognitive behavior in the face of weightier evidence to the contrary are, I will argue, beliefs that one is self-deceived in holding. And the self-deception in no way requires that the agents intend to deceive themselves, or intend to produce or sustain certain beliefs in themselves, or start by believing something they end up disbelieving. Cold biasing is not intentional; and mechanisms of the sort described may be primed by motivation independently of any intention to deceive.

There are a variety of ways in which our desiring that p can contribute to our believing that p in instances of self-deception. Here are some examples.<12>

1. Negative Misinterpretation. Our desiring that p may lead us to misinterpret as not counting (or not counting strongly) against p data that we would easily recognize to count (or count strongly) against p in the desire's absence. For example, Don just received a rejection notice on a journal submission. He hopes that his article was wrongly rejected, and he reads through the comments offered. Don decides that the referees misunderstood a certain crucial but complex point and that their objections consequently do not justify the rejection. However, as it turns out, the referees' criticisms were entirely justified; and when, a few weeks later, Don rereads his paper and the comments in a more impartial frame of mind, it is clear to him that the rejection was warranted.

2. Positive Misinterpretation. Our desiring that p may lead us to interpret as supporting p data that we would easily recognize to count against p in the desire's absence. For example, Sid is very fond of Roz, a college classmate with whom he often studies. Wanting it to be true that Roz loves him, he may interpret her refusing to date him and her reminding him that she has a steady boyfriend as an effort on her part to "play hard to get" in order to encourage Sid to continue to pursue her and prove that his love for her approximates hers for him. As Sid interprets Roz's behavior, not only does it fail to count against the hypothesis that she loves him, it is evidence for the truth of that hypothesis.

3. Selective Focusing/Attending. Our desiring that p may lead us both to fail to focus attention on evidence that counts against p and to focus instead on evidence suggestive of p. Attentional behavior may be either intentional or unintentional. Ann may tell herself that it is a waste of time to consider her evidence that her husband is having an affair, since he loves her too much to do such a thing; and she may intentionally act accordingly. Or, because of the unpleasantness of such thoughts, Ann may find her attention shifting whenever the issue suggests itself.

4. Selective Evidence-Gathering. Our desiring that p may lead us both to overlook easily obtained evidence for ~p and to find evidence for p that is much less accessible. A historian of philosophy who holds a certain philosophical position hopes that her favorite philosopher (Plato) did so too; consequently, she scours the texts for evidence of this while consulting commentaries that she thinks will provide support for the favored interpretation. Our historian may easily miss rather obvious evidence to the contrary, even though she succeeds in finding obscure evidence for her favored interpretation. Selective evidence-gathering may be analyzed as a combination of 'hyper-sensitivity' to evidence (and sources of evidence) for the desired state of affairs and 'blindness'--of which there are, of course, degrees--to contrary evidence (and sources thereof).<13>

In none of the examples offered does the person hold the true belief that ~p and then intentionally bring it about that he or she believes that p. Yet, assuming that my hypothetical agents acquire relevant false beliefs in the ways described, these are garden-variety instances of self-deception. Don is self-deceived in believing that his article was wrongly rejected, Sid is self-deceived in believing certain things about Roz, and so on.

It sometimes is claimed that while we are deceiving ourselves into believing that p we must be aware that our evidence favors ~p, on the grounds that this awareness is part of what explains our motivationally biased treatment of data (Davidson 1985, p. 146). The thought is that without this awareness we would have no reason to treat data in a biased way, since the data would not be viewed as threatening, and consequently we would not engage in motivationally biased cognition. In this view, self-deception is understood on the model of intentional action: the agent has a goal, sees how to promote it, and seeks to promote it in that way. However, the model places excessive demands on self-deceivers.<14> Cold or unmotivated biased cognition is not explained on the model of intentional action; and motivation can prime mechanisms for the cold biasing of data in us without our being aware, or believing, that our evidence favors a certain proposition. Desire-influenced biasing may result both in our not being aware that our evidence favors ~p over p and in our acquiring the belief that p. This is a natural interpretation of the illustrations I offered of misinterpretation and of selective focusing/attending. In each case, the person's evidence may favor the undesirable proposition; but there is no need to suppose the person is aware of this in order to explain the person's biased cognition.<15> Evidence that one's spouse is having an affair (or that a scholarly paper one painstakingly produced is seriously flawed, or that someone one loves lacks reciprocal feelings) may be threatening even if one lacks the belief, or the awareness, that that evidence is stronger than one's contrary evidence.

Analyzing self-deception is a difficult task; providing a plausible set of sufficient conditions for self-deception is less demanding. Not all cases of self-deception need involve the acquisition of a new belief. Sometimes we may be self-deceived in retaining a belief that we were not self-deceived in acquiring. Still, the primary focus in the literature has been on self-deceptive belief-acquisition, and I will follow suit.

I suggest that the following conditions are jointly sufficient for entering self-deception in acquiring a belief that p.

1. The belief that p which S acquires is false.

2. S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way.

3. This biased treatment is a nondeviant cause of S's acquiring the belief that p.

4. The body of data possessed by S at the time provides greater warrant for ~p than for p.<16>

Each condition requires brief attention. Condition 1 captures a purely lexical point. A person is, by definition, deceived in believing that p only if p is false; the same is true of being self-deceived in believing that p. The condition in no way implies that the falsity of p has special importance for the dynamics of self-deception. Motivationally biased treatment of data may sometimes result in someone's believing an improbable proposition, p, that, as it happens, is true. There may be self-deception in such a case; but the person is not self-deceived in believing that p, nor in acquiring the belief that p.<17>

My brief discussion of various ways of entering self-deception serves well enough as an introduction to condition 2. My list of motivationally biased routes to self-deception is not intended as exhaustive; but my discussion of these routes does provide a gloss on the notion of motivationally biased treatment of data.

My inclusion of the term 'nondeviant' in condition 3 is motivated by a familiar problem for causal characterizations of phenomena in any sphere (see, e.g., Mele 1992a, ch. 11). Specifying the precise nature of nondeviant causation of a belief by motivationally biased treatment of data is a difficult technical task better reserved for another occasion. However, much of this article provides guidance on the issue.

The thrust of condition 4 is that self-deceivers believe against the weight of the evidence they possess. For reasons offered elsewhere, I do not view 4 as a necessary condition of self-deception (Mele 1987a, pp. 134-35). In some instances of motivationally biased evidence-gathering, e.g., people may bring it about that they believe a falsehood, p, when ~p is much better supported by evidence readily available to them, even though, owing to the selectivity of the evidence-gathering process, the evidence that they themselves actually possess at the time favors p over ~p. As I see it, such people are naturally deemed self-deceived, other things being equal. Other writers on the topic do require that a condition like 4 be satisfied, however (e.g., Davidson 1985, McLaughlin 1988, Szabados 1985); and I have no objection to including 4 in a list of jointly sufficient conditions. Naturally, in some cases, whether the weight of a person's evidence lies on the side of p or of ~p (or equally supports each) is subject to legitimate disagreement.<18>

Return to the static puzzle. The primary assumption, again, is this: "By definition, person A deceives person B (where B may or may not be the same person as A) into believing that p only if A knows, or at least believes truly, that ~p and causes B to believe that p." I have already argued that the assumption is false and I have attacked two related conceptual claims about self-deception: that all self-deceivers know or believe truly that ~p while (or before) causing themselves to believe that p, and that they simultaneously believe that ~p and believe that p. In many garden-variety instances of self-deception, the false belief that p is not preceded by the true belief that ~p, nor are the two beliefs held simultaneously. Rather, a desire-influenced treatment of data has the result both that the person does not acquire the true belief and that he or she does acquire (or retain) the false belief. One might worry that the puzzle emerges at some other level; but I have addressed that worry elsewhere and I set it aside here (Mele 1987a, pp. 129-30).

The conditions for self-deception that I have offered are conditions specifically for entering self-deception in acquiring a belief. However, as I mentioned, ordinary conceptions of the phenomenon allow people to enter self-deception in retaining a belief. Here is an illustration from Mele 1987a (pp. 131-32):

Sam has believed for many years that his wife, Sally, would never have an affair. In the past, his evidence for this belief was quite good. Sally obviously adored him; she never displayed a sexual interest in another man . . .; she condemned extramarital sexual activity; she was secure, and happy with her family life; and so on. However, things recently began to change significantly. Sally is now arriving home late from work on the average of two nights a week; she frequently finds excuses to leave the house alone after dinner; and Sam has been informed by a close friend that Sally has been seen in the company of a certain Mr. Jones at a theater and a local lounge. Nevertheless, Sam continues to believe that Sally would never have an affair. Unfortunately, he is wrong. Her relationship with Jones is by no means platonic.

In general, the stronger the perceived evidence one has against a proposition that one believes (or "against the belief," for short), the harder it is to retain the belief. Suppose Sam's evidence against his favored belief--that Sally is not having an affair--is not so strong as to render self-deception psychologically impossible and not so weak as to make an attribution of self-deception implausible. Each of the four types of data-manipulation I mentioned may occur in a case of this kind. Sam may positively misinterpret data, reasoning that if Sally were having an affair she would want to hide it and that her public meetings with Jones consequently indicate that she is not sexually involved with him. He may negatively misinterpret the data, and even (nonintentionally) recruit Sally in so doing by asking her for an "explanation" of the data or by suggesting for her approval some acceptable hypothesis about her conduct. Selective focusing may play an obvious role. And even selective evidence-gathering has a potential place in Sam's self-deception. He may set out to conduct an impartial investigation, but, owing to his desire that Sally not be having an affair, locate less accessible evidence for the desired state of affairs while overlooking some more readily attainable support for the contrary judgment.

Here again, garden-variety self-deception is explicable independently of the assumption that self-deceivers manipulate data with the intention of deceiving themselves, or with the intention of protecting a favored belief. Nor is there an explanatory need to suppose that at some point Sam both believes that p and believes that ~p.

3. Conflicting Beliefs and Alleged Empirical Demonstrations of Self-Deception

I have argued that in various garden-variety examples, self-deceivers do not simultaneously possess beliefs whose propositional contents are mutually contradictory ("conflicting beliefs," for short). This leaves it open, of course, that some self-deceivers do possess such beliefs. A familiar defense of the claim that the self-deceived simultaneously possess conflicting beliefs proceeds from the contention that they behave in conflicting ways. For example, it is alleged that although self-deceivers like Sam sincerely assure their friends that their spouses are faithful, they normally treat their spouses in ways manifesting distrust. This is an empirical matter on which I cannot pronounce. But suppose, for the sake of argument, that the empirical claim is true. Even then, we would lack sufficient grounds for holding that, in addition to believing that their spouses are not having affairs, these self-deceivers also believe, simultaneously, that their spouses are so engaged. After all, the supposed empirical fact can be accounted for on the alternative hypothesis that, while believing that their spouses are faithful, these self-deceivers also believe that there is a significant chance they are wrong about this. The mere suspicion that one's spouse is having an affair does not amount to a belief that he or she is so involved. And one may entertain suspicions that p while believing that ~p.<19>

That said, it should be noted that some psychologists have offered alleged empirical demonstrations of self-deception, on a conception of the phenomenon requiring that self-deceivers (at some point) simultaneously believe that p and believe that ~p.<20> A brief look at some of this work will prove instructive.

Ruben Gur and Harold Sackheim propose the following statement of "necessary and sufficient" conditions for self-deception:

1. The individual holds two contradictory beliefs (p and not-p).

2. These two contradictory beliefs are held simultaneously.

3. The individual is not aware of holding one of the beliefs (p or not-p).

4. The act that determines which belief is and which belief is not subject to awareness is a motivated act. (Sackheim & Gur 1978, p. 150; cf. Gur & Sackheim 1979; Sackheim & Gur 1985)

Their evidence for the occurrence of self-deception, thus defined, is provided by voice-recognition studies. In one type of experiment, subjects who wrongly state that a tape-recorded voice is not their own nevertheless show physiological responses (e.g., galvanic skin responses) that are correlated with voice recognition. "The self-report of the subject is used to determine that one particular belief is held," while "behavioral indices, measured while the self-report is made, are used to indicate whether a contradictory belief is also held" (Sackheim & Gur 1978, p. 173).

It is unclear, however, that the physiological responses are demonstrative of belief (Mele 1987b, p. 6).<21> In addition to believing that the voice is not their own (assuming the reports are sincere), do the subjects also believe that it is their own, or do they merely exhibit physiological responses that often accompany the belief that one is hearing one's own voice? Perhaps there is only a sub-doxastic (from 'doxa': belief) sensitivity in these cases. The threshold for physiological reaction to one's own voice may be lower than that for cognition (including unconscious belief) that the voice is one's own. Further, another team of psychologists (Douglas & Gibbins 1983; cf. Gibbins & Douglas 1985) obtained similar results for subjects' reactions to voices of acquaintances. Thus, even if the physiological responses were indicative of belief, they would not establish that subjects hold conflicting beliefs. Perhaps subjects believe that the voice is not their own while also "believing" that it is a familiar voice.

George Quattrone and Amos Tversky, in an elegant study (1984), argue for the reality of self-deception satisfying Sackheim and Gur's conditions. The study offers considerable evidence that subjects required on two different occasions "to submerge their forearm into a chest of circulating cold water until they could no longer tolerate it" tried to shift their tolerance on the second trial, after being informed that increased tolerance of pain (or decreased tolerance, in another sub-group) indicated a healthy heart.<22> Most subjects denied having tried to do this; and Quattrone and Tversky argue that many of their subjects believed that they did not try to shift their tolerance while also believing that they did try to shift it. They argue, as well, that these subjects were unaware of holding the latter belief, the "lack of awareness" being explained by their "desire to accept the diagnosis implied by their behavior" (p. 239).

Grant that many of the subjects tried to shift their tolerance in the second trial and that their attempts were motivated. Grant, as well, that most of the "deniers" sincerely denied having tried to do this. Even on the supposition that the deniers were aware of their motivation to shift their tolerance, does it follow that, in addition to believing that they did not "purposefully engage in the behavior to make a favorable diagnosis," these subjects also believed that they did do this, as Quattrone and Tversky claim? Does anything block the supposition that the deniers were effectively motivated to shift their tolerance without believing, at any level, that this is what they were doing? (My use of "without believing, at any level, that [p]" is elliptical for "without believing that p while being aware of holding the belief and without believing that p while not being aware of holding the belief.")

The study does not offer any direct evidence that the sincere deniers believed themselves to be trying to shift their tolerance. Nor is the assumption that they believed this required to explain their behavior. (The required belief for the purpose of behavior-explanation is a belief to the effect that a suitable change in one's tolerance on the second trial would constitute evidence of a healthy heart.) From the assumptions (1) that some motivation M that agents have for doing something A results in their doing A and (2) that they are aware that they have this motivation for doing A, it does not follow that they believe, consciously or otherwise, that they are doing A (in this case, purposely shifting their tolerance).<23> Nor, a fortiori, does it follow that they believe, consciously or otherwise, that they are doing A for reasons having to do with M. They may falsely believe that M has no influence whatever on their behavior, while not possessing the contrary belief.

The following case illustrates the latter point. Ann, who consciously desires her parents' love, believes they would love her if she were a successful lawyer. Consequently, she enrolls in law school. But Ann does not believe, at any level, that her desire for her parents' love is in any way responsible for her decision to enroll. She believes she is enrolling solely because of an independent desire to become a lawyer. Of course, I have simply stipulated that Ann lacks the belief in question. But my point is that this stipulation does not render the scenario incoherent. My claim about the sincere deniers in Quattrone and Tversky's study is that, similarly, there is no explanatory need to suppose they believe, at any level, that they are trying to shift their tolerance for diagnostic purposes, or even believe that they are trying to shift their tolerance at all. These subjects are motivated to generate favorable diagnostic evidence and they believe (to some degree) that a suitable change in their tolerance on the second trial would constitute such evidence. But the motivation and belief can result in purposeful action independently of their believing, consciously or otherwise, that they are "purposefully engaged in the behavior," or purposefully engaged in it "to make a favorable diagnosis."<24>

As Quattrone and Tversky's study indicates, people sometimes do not consciously recognize why they are doing what they are doing (e.g., why they are now reporting a certain pain-rating). Given that an unconscious recognition or belief that they are "purposefully engaged in the behavior," or purposefully engaged in it "to make a favorable diagnosis," in no way helps to account for what transpires in the case of the sincere deniers, why suppose that such recognition or belief is present? If one thought that normal adult human beings always recognize--at least at some level--what is motivating them to act as they are, one would opt for Quattrone and Tversky's dual belief hypothesis about the sincere deniers. But Quattrone and Tversky offer no defense of the general thesis just mentioned. In light of their results, a convincing defense of that thesis would demonstrate that whenever such adults do not consciously recognize what they are up to, they nevertheless correctly believe that they are up to x, albeit without being aware that they believe this. That is a tall order.

Quattrone and Tversky suspect that (many of) the sincere deniers are self-deceived in believing that they did not try to shift their tolerance. They adopt Sackheim and Gur's analysis of self-deception (1984, p. 239) and interpret their results accordingly. However, an interpretation of their data that avoids the dual belief assumption just criticized allows for self-deception on a less demanding conception. One can hold (a) that sincere deniers, due to a desire to live a long, healthy life, were motivated to believe that they had a healthy heart; (b) that this motivation (in conjunction with a belief that an upward/downward shift in tolerance would constitute evidence for the favored proposition) led them to try to shift their tolerance; and (c) that this motivation also led them to believe that they were not purposely shifting their tolerance (and not to believe the opposite). Their motivated false beliefs that they were not trying to alter their displayed tolerance can count as beliefs that they are self-deceived in holding without their also believing that they were attempting to do this.<25>

How did the subjects' motivation lead them to hold the false belief at issue? Quattrone and Tversky offer a plausible suggestion (p. 243): "The physiological mechanism of pain may have facilitated self-deception in this experiment. Most people believe that heart responses and pain thresholds are ordinarily not under an individual's voluntary control. This widespread belief would protect the assertion that the shift could not have been on purpose, for how does one 'pull the strings'?" And notice that a belief that one did not try to alter the amount of time one left one's hand in the water before reporting a pain-rating of "intolerable," one based (in part) upon a belief about ordinary uncontrollability of "heart responses and pain thresholds," need not be completely cold or unmotivated. Some subjects' motivation might render the "uncontrollability" belief very salient, e.g., while also drawing attention away from internal cues that they were trying to shift their tolerance, including the intensity of the pain.

Like Quattrone and Tversky, biologist Robert Trivers (1985, pp. 416-17) endorses Gur and Sackheim's definition of self-deception. Trivers maintains that self-deception has "evolved ... because natural selection favors ever subtler ways of deceiving others" (p. 282, cf. pp. 415-20). We recognize that "shifty eyes, sweaty palms, and croaky voices may indicate the stress that accompanies conscious knowledge of attempted deception. By becoming unconscious of its deception, the deceiver hides these signs from the observer. He or she can lie without the nervousness that accompanies deception" (pp. 415-16). Trivers's thesis cannot adequately be assessed here; but the point should be made that the thesis in no way depends for its plausibility upon self-deception's requiring the presence of conflicting beliefs. Self-deception that satisfies the set of sufficient conditions I offered without satisfying the "dual belief" requirement is no less effective a tool for deceiving others. Trivers's proposal hinges on the idea that agents who do not consciously believe the truth (p) have an advantage over agents who do in getting others to believe the pertinent falsehood (~p): consciousness of the truth tends to manifest itself in ways that tip one's hand. But notice that an unconscious belief that p provides no help in this connection. Indeed, such a belief might generate tell-tale physiological signs of deception (recall the physiological manifestations of the alleged unconscious beliefs in Gur and Sackheim's studies). If unconscious true beliefs would make self-deceivers less subtle interpersonal deceivers than they would be without these beliefs, and if self-deception evolved because natural selection favors subtlety in the deception of others, better that it evolve on my model than on the "dual belief" model Trivers accepts.

In criticizing attempted empirical demonstrations of the existence of self-deception on Sackheim & Gur's model without producing empirical evidence that the subjects do not have "two contradictory beliefs," have I been unfair to the researchers? Recall the dialectical situation. The researchers claim that they have demonstrated the existence of self-deception on the model at issue. I have shown that they have not demonstrated this. The tests they employ for the existence of "two contradictory beliefs" in their subjects are, for the reasons offered, inadequate. I have no wish to claim that it is impossible for an agent to believe that p while also believing that ~p.<26> My claim, to be substantiated further, is that there is no explanatory need to postulate such beliefs either in familiar cases of self-deception or in the alleged cases cited by these researchers and that plausible alternative explanations of the data may be generated by appealing to mechanisms and processes that are relatively well understood.

4. The Dynamic Puzzle

The central challenge posed by the dynamic puzzle sketched in Section 1 calls for an explanation of the alleged occurrence of garden-variety instances of self-deception. If a prospective self-deceiver, S, has no strategy, how can S succeed? And if S does have a strategy, how can S's attempt to carry it out fail to be self-undermining in garden-variety cases?

It may be granted that self-deception typically is strategic at least in the following sense: when people deceive themselves they at least normally do so by engaging in potentially self-deceptive behavior, including cognitive behavior of the kinds catalogued in Section 2. Behavior of these kinds can be counted, in a broad sense of the term, as strategic, and the behavioral types may be viewed as strategies of self-deception. Such strategies divide broadly into two kinds, depending on their locus of operation. Internal-biasing strategies feature the manipulation of data that one already has. Input-control strategies feature one's controlling (to some degree) which data one acquires.<27> There are also mixed strategies, involving both internal biasing and input control.

Another set of distinctions will prove useful. Regarding cognitive activities that contribute to motivationally biased belief, there are significant differences among (1) unintentional activities (e.g., unintentionally focusing on data of a certain kind), (2) intentional activities (e.g., intentionally focusing on data of a certain kind), and (3) intentional activities engaged in with the intention of deceiving oneself (e.g., intentionally focusing on data of a certain kind with the intention of deceiving oneself into believing that p). Many skeptical worries about the reality of self-deception are motivated partly by the assumption that 3 is characteristic of self-deception.

An important difference between 2 and 3 merits emphasis. Imagine a twelve-year-old, Beth, whose father died some months ago. Beth may find it comforting to reflect on pleasant memories of playing happily with her father, to look at family photographs of such scenes, and the like. Similarly, she may find it unpleasant to reflect on memories of her father leaving her behind to play ball with her brothers, as he frequently did. From time to time, she may intentionally focus her attention on the pleasant memories, intentionally linger over the pictures, and intentionally turn her attention away from memories of being left behind. As a consequence of such intentional activities, she may acquire a false, unwarranted belief that her father cared more deeply for her than for anyone else. Although her intentional cognitive activities may be explained, in part, by the motivational attractiveness of the hypothesis that he loved her most, those activities need not also be explained by a desire--much less an intention--to deceive herself into believing this hypothesis, or to cause herself to believe this. Intentional cognitive activities that contribute even in a relatively straightforward way to self-deception need not be guided by an intention to deceive oneself.<28>

For the record, I have defended a detailed account of intentions elsewhere (Mele 1992a, chs. 7-13). Intentions, as I view them, are executive attitudes toward plans, in a technical sense of "plan" that, in the limiting case, treats an agent's mental representation of a prospective "basic" action like raising his arm as the plan-component of an intention to raise his arm. However, readers need not accept my view of intention to be persuaded by the arguments advanced here. It is enough that they understand intentions as belonging no less to the category "mental state" than beliefs and desires do and that they view intending to do something, A, as involving being settled (not necessarily irrevocably) upon A-ing, or upon trying to A.<29> Notice that one can have a desire (or motivation) to A without being at all settled upon A-ing. Desiring to take my daughter to the midnight dance while also desiring to take my son to the midnight movie, I need to make up my mind about what to do. But intending to take my daughter to the dance (and to make it up to my son later), my mind is made up. The "settledness" aspect of intentions is central to their "executive" nature, an issue examined in Mele 1992a.<30>

My resolution of the dynamic puzzle about self-deception is implicit in earlier sections. Such strategies of self-deception as positive and negative misinterpretation, selective attending, and selective evidence-gathering do not depend for their effectiveness upon agents' employing them with the intention of deceiving themselves. Even the operation of cold mechanisms whose functioning one does not direct can bias one's beliefs. When, under the right conditions, such mechanisms are primed by motivation and issue in motivated false beliefs, we have self-deception. Again, motivation can affect, among other things, the hypotheses that occur to one and the salience of those hypotheses and of data. For example, Don's motivational condition favors the hypothesis that his paper was wrongly rejected; and Sid's favors hypotheses about Roz's behavior that are consistent with her being as fond of him as he is of her. In "testing" these hypotheses, these agents may accentuate supporting evidence and downplay, or even positively misinterpret, contrary data without intending to do that, and without intending to deceive themselves. Strategies of self-deception, in garden-variety cases of this kind, need not be rendered ineffective by agents' intentionally exercising them with the knowledge of what they are up to; for, in garden-variety cases, self-deceivers need not intend to deceive themselves, strategically or otherwise. Since we can understand how causal processes that issue in garden-variety instances of self-deception succeed without the agent's intentionally orchestrating the process, we avoid the other horn of the puzzle, as well.

5. Intentionally Deceiving Oneself

I have criticized the assumptions that self-deception entails intentionally deceiving oneself and that it requires simultaneously possessing beliefs whose propositional contents are mutually contradictory; and I have tried to show how occurrences of garden-variety self-deception may be explained. I have not claimed that believing that p while also believing that ~p is conceptually or psychologically impossible. But I have not encountered a compelling illustration of that phenomenon in a case of self-deception. Some might suggest that illustrations may be found in the literature on multiple personality. However, that phenomenon, if it is a genuine one, raises thorny questions about the self in self-deception. In such alleged cases, do individuals deceive themselves, with the result that they believe that p while also believing that ~p? Or do we rather have interpersonal deception--or at any rate something more closely resembling that than self-deception?<31> These are questions for another occasion. They take us far from garden-variety self-deception.

Intentionally deceiving oneself, in contrast, is unproblematically possible. Hypothetical illustrations are easily constructed. It is worth noting, however, that the unproblematic cases are remote from garden-variety self-deception.

Here is an illustration. Ike, a forgetful prankster skilled at imitating others' handwriting, has intentionally deceived friends by secretly making false entries in their diaries. Ike has just decided to deceive himself by making a false entry in his own diary. Cognizant of his forgetfulness, he writes under today's date, "I was particularly brilliant in class today," and counts on eventually forgetting that what he wrote is false. Weeks later, when reviewing his diary, Ike reads this sentence and acquires the belief that he was brilliant in class on the specified day. If Ike intentionally deceived others by making false entries in their diaries, what is to prevent us from justifiably holding that he intentionally deceived himself in the imagined case? He intended to bring it about that he would believe that p, which he knew at the time to be false; and he executed that intention without a hitch, causing himself to believe, eventually, that p. Again, to deceive, on one standard definition, is to cause to believe what is false; and Ike's causing himself to believe the relevant falsehood is no less intentional than his causing his friends to believe falsehoods (by doctoring their diaries).<32>

Ike's case undoubtedly strikes readers as markedly dissimilar to garden-variety examples of self-deception--for instance, the case of the woman who falsely believes that her husband is not having an affair (or that she is not seriously ill, or that her child is not using drugs), in the face of strong evidence to the contrary. Why is that? Readers convinced that self-deception does not require the simultaneous presence of beliefs whose propositional contents are mutually contradictory will not seek an answer in the absence of such beliefs in Ike. The most obvious difference between Ike's case and garden-variety examples of self-deception lies in the straightforwardly intentional nature of Ike's project. Ike consciously sets out to deceive himself and intentionally and consciously executes his plan for so doing; ordinary self-deceivers behave quite differently.<33>

This indicates that in attempting to construct hypothetical cases that are, at once, paradigmatic cases of self-deception and cases of agents intentionally deceiving themselves, one must imagine that the agents' intentions to deceive themselves are somehow hidden from them. I do not wish to claim that "hidden intentions" are impossible. Our ordinary concept of intention leaves room, e.g., for "Freudian" intentions, hidden in some mental partition. And if there is conceptual space for hidden intentions that play a role in the etiology of behavior, there is conceptual space for hidden intentions to deceive ourselves, intentions that may influence our treatment of data.

As I see it, the claim is unwarranted, not incoherent, that intentions to deceive ourselves, or intentions to produce or sustain certain beliefs in ourselves--normally, intentions hidden from us--are at work in ordinary self-deception.<34> Without denying that "hidden-intention" cases of self-deception are possible, a theorist should ask what evidence there may be (in the real world) that an intention to deceive oneself is at work in a paradigmatic case of self-deception. Are there data that can only--or best--be explained on the hypothesis that such an intention is operative?

Evidence that agents desirous of its being the case that p eventually come to believe that p owing to a biased treatment of data is sometimes regarded as supporting the claim that these agents intended to deceive themselves. The biasing apparently is sometimes relatively sophisticated purposeful behavior, and one may assume that such behavior must be guided by an intention. However, as I have argued, the sophisticated behavior in garden-variety examples of self-deception (e.g., Sam's case in Sec. 2) may be accounted for on a less demanding hypothesis that does not require the agents to possess relevant intentions: e.g., intentions to deceive themselves into believing that p, or to cause themselves to believe that p, or to promote their peace of mind by producing in themselves the belief that p. Once again, motivational states can prompt biased cognition of the sorts common in self-deception without the assistance of such intentions. In Sam's case, a powerful motivational attraction to the hypothesis that Sally is not having an affair--in the absence both of a strong desire to ascertain the truth of the matter and of conclusive evidence of Sally's infidelity--may prompt the line of reasoning described earlier and the other belief-protecting behavior. An explicit, or consciously held, intention to deceive himself in these ways into holding on to his belief in Sally's fidelity would undermine the project; and a hidden intention to deceive is not required to produce these activities.

Even if this is granted, it may be held that the supposition that such intentions always or typically are at work in cases of self-deception is required to explain why a motivated biasing of data occurs in some situations but not in other very similar situations (Talbott 1995). Return to Don, who is self-deceived in believing that his article was wrongly rejected. At some point, while revising his article, Don may have wanted it to be true that the paper was ready for publication, that no further work was necessary. Given the backlog of work on his desk, he may have wanted that just as strongly as he later wanted it to be true that the paper was wrongly rejected. Further, Don's evidential situation at these two times may have been very similar: e.g., his evidence that the paper was ready may have been no weaker than his later evidence that the paper was wrongly rejected, and his evidence that the paper was not ready may have been no stronger than his later evidence that the paper was rightly rejected. Still, we may suppose, although Don deceived himself into believing that the article was wrongly rejected, he did not deceive himself into believing that the article was ready for publication: he kept working on it--searching for new objections to rebut, clarifying his prose, and so on--for another week. To account for the difference in the two situations, it may be claimed, we must suppose that in one situation Don decided to deceive himself (without being aware of this) whereas in the other he did not so decide; and in deciding to do something, A, one forms an intention to A. If the execution of self-deceptive biasing strategies were a nonintended consequence of being in a motivational/evidential condition of a certain kind, the argument continues, then Don would have engaged in such strategies either on both occasions or on neither: again, to account for the difference in his cognitive behavior on the earlier and later occasions, we need to suppose that an intention to deceive himself was at work in one case and not in the other.

This argument is flawed. If on one of the two occasions Don decides (hence, intends) to deceive himself whereas on the other he does not, then, presumably, there is some difference in the two situations that accounts for this difference. But if there is a difference, D, in the two situations aside from the intention-difference that the argument alleges, an argument is needed for the claim that D itself cannot account for Don's self-deceptively biasing data in one situation and his not so doing in the other. Given that a difference in intention across situations (presence in one vs. absence in the other) requires some additional difference in the situations that would account for this difference, why should we suppose that there is no difference in the situations that can account for Don's biasing data in one and not in the other in a way that does not depend on his intending to deceive himself in one but not in the other? Why should we think that intention is involved in the explanation of the primary difference to be explained? Why cannot the primary difference be explained instead, e.g., by Don's having a strong desire to avoid mistakenly believing the paper to be ready (or to avoid submitting a paper that is not yet ready) and his having at most a weak desire later to avoid mistakenly believing that the paper was wrongly rejected? Such a desire, in the former case, may block any tendency to bias data in a way supporting the hypothesis that the paper is ready for publication.<35>

At this point, proponents of the thesis that self-deception is intentional deception apparently need to rely on claims about the explanatory place of intention in self-deception itself, as opposed to its place in explaining differences across situations. Claims of that sort have already been evaluated here; and they have been found wanting.

Advocates of the view that self-deception is essentially (or normally) intentional may seek support in a distinction between self-deception and wishful thinking. They may claim that although wishful thinking does not require an intention to deceive oneself, self-deception differs from it precisely in being intentional. This may be interpreted either as stipulative linguistic legislation or as a substantive claim. On the former reading, a theorist is simply expressing a decision to reserve the expression 'self-deception' for an actual or hypothetical phenomenon that requires an intention to deceive oneself or an intention to produce in oneself a certain belief. Such a theorist may proceed to inquire about the possibility of the phenomenon and about how occurrences of self-deception, in the stipulated sense, may be explained. On the latter reading, a theorist is advancing a substantive conceptual thesis: the thesis that the concepts (or our ordinary concepts) of wishful thinking and of self-deception differ along the lines mentioned.

I have already criticized the conceptual thesis about self-deception. A comment on wishful thinking is in order. If wishful thinking is not wishful believing, one difference between wishfully thinking that p and being self-deceived in believing that p is obvious. If, however, wishful thinking is wishful believing--in particular, motivationally biased, false believing--then, assuming that it does not overlap with self-deception (an assumption challenged in Mele 1987a, p. 135), the difference may lie in the relative strength of relevant evidence against the believed proposition: wishful thinkers may encounter weaker counter-evidence than self-deceivers (Szabados 1985, pp. 148-49). This difference requires a difference in intention only if the relative strength of the evidence against the propositions that self-deceivers believe is such as to require that their acquiring or retaining those beliefs depends upon their intending to do so, or upon their intending to deceive themselves. And this thesis about relative evidential strength, I have argued, is false.

Consciously executing an intention to deceive oneself is possible, as in Ike's case; but such cases are remote from paradigmatic examples of self-deception. Executing a "hidden" intention to deceive oneself is possible, too; but, as I have argued, there is no good reason to maintain that such intentions are at work in paradigmatic self-deception. Part of what I have argued, in effect, is that some theorists--philosophers and psychologists alike--have made self-deception more theoretically perplexing than it actually is by imposing upon the phenomena a problematic conception of self-deception.

6. Conclusion

Philosophers' conclusions tend to be terse; psychologists favor detailed summaries. Here I seek a mean. My aim in this paper has been to clarify the nature and relatively proximate etiology of self-deception. In sections 1-4, I resolved a pair of much-discussed puzzles about self-deception, advanced a plausible set of sufficient conditions for self-deception, and criticized empirical studies that allegedly demonstrate the existence of self-deception on a strict interpersonal model. In section 5, I argued that intentionally deceiving oneself is unproblematically possible (as in Ike's case), but that representative unproblematic cases are remote from garden-variety instances of self-deception. Conceptual work on self-deception guided by the thought that the phenomenon must be largely isomorphic with stereotypical interpersonal deception has generated interesting conceptual puzzles. But, I have argued, it also has led us away from a proper understanding of self-deception. Stereotypical interpersonal deception is intentional deception; normal self-deception, I have argued, probably is not. If it were intentional, "hidden" intentions would be at work; and we lack good grounds for holding that such intentions are operative in self-deception. Further, in stereotypical interpersonal deception, there is some time at which the deceiver believes that ~p and the deceived believes that p; but there is no good reason to hold, I have argued, that self-deceivers simultaneously believe that ~p and believe that p. Recognizing these points, we profitably seek an explanatory model for self-deception that diverges from models for the explanation of intentional conduct. I have not produced a full-blown model for this; but, unless I am deceived, I have pointed the way toward such a model--a model informed by empirical work on motivationally biased belief and by a proper appreciation of the point that motivated behavior is not coextensive with intended behavior.

I conclude with a challenge for readers inclined to think that there are cases of self-deception that fit the strict interpersonal model--in particular, cases in which the self-deceiver simultaneously believes that p and believes that ~p. The challenge is simply stated: Provide convincing evidence of the existence of such self-deception. The most influential empirical work on the topic has not met the challenge, as I have shown. Perhaps some readers can do better. However, if I am right, such cases will be exceptional instances of self-deception--not the norm.<36>

REFERENCES

Ainslie, G. (1992) Picoeconomics. Cambridge University Press.

Audi, R. (1989) Self-deception and practical reasoning. Canadian Journal of Philosophy 19:247-66.

Audi, R. (1985) Self-deception and rationality. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Bach, K. (1981) An analysis of self-deception. Philosophy and Phenomenological Research 41:351-70.

Baron, J. (1988) Thinking and deciding. Cambridge University Press.

Baumeister, R. & Cairns, K. (1992) Repression and self-presentation: When audiences interfere with self-deceptive strategies. Journal of Personality and Social Psychology 62:851-62.

Davidson, D. (1985) Deception and division. In: Actions and events, ed. E. LePore & B. McLaughlin. Basil Blackwell.

Davidson, D. (1982) Paradoxes of irrationality. In: Philosophical essays on Freud, ed. R. Wollheim & J. Hopkins. Cambridge University Press.

Douglas, W. & Gibbins, K. (1983) Inadequacy of voice recognition as a demonstration of self-deception. Journal of Personality and Social Psychology 44:589-92.

Festinger, L. (1964) Conflict, decision, and dissonance. Stanford University Press.

Festinger, L. (1957) A theory of cognitive dissonance. Stanford University Press.

Fingarette, H. (1969) Self-deception. Routledge & Kegan Paul.

Frey, D. (1986) Recent research on selective exposure to information. In: Advances in experimental social psychology, vol. 19, ed. L. Berkowitz. Academic Press.

Gergen, K. (1985) The ethnopsychology of self-deception. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Gibbins, K. & Douglas, W. (1985) Voice recognition and self-deception: A reply to Sackheim and Gur. Journal of Personality and Social Psychology 48:1369-72.

Gilovich, T. (1991) How we know what isn't so. Macmillan.

Greenwald, A. (1988) Self-knowledge and self-deception. In: Self-deception: An adaptive mechanism? ed. J. Lockard & D. Paulhus. Prentice-Hall.

Gur, R. & Sackheim, H. (1979) Self-deception: A concept in search of a phenomenon. Journal of Personality and Social Psychology 37:147-69.

Haight, M. (1980) A study of self-deception. Harvester Press.

Higgins, R., Snyder, C. & Berglas, S. (1990) Self-handicapping: The paradox that isn't. Plenum Press.

Johnston, M. (1988) Self-deception and the nature of mind. In: Perspectives on self-deception, ed. B. McLaughlin & A. Rorty. University of California Press.

Kipp, D. (1980) On self-deception. Philosophical Quarterly 30:305-17.

Kunda, Z. (1990) The case for motivated reasoning. Psychological Bulletin 108:480-98.

Kunda, Z. (1987) Motivated inference: Self-serving generation and evaluation of causal theories. Journal of Personality and Social Psychology 53:636-47.

Lockard, J. & Paulhus, D. (1988) Self-deception: An adaptive mechanism? Prentice-Hall.

Martin, M. (1985) Self-deception and self-understanding. University of Kansas Press.

McLaughlin, B. (1988) Exploring the possibility of self-deception in belief. In: Perspectives on self-deception, ed. B. McLaughlin & A. Rorty. University of California Press.

Mele, A. (1995) Autonomous agents: From self-control to autonomy. Oxford University Press.

Mele, A. (1992a) Springs of action. Oxford University Press.

Mele, A. (1992b) Recent work on intentional action. American Philosophical Quarterly 29:199-217.

Mele, A. (1987a) Irrationality. Oxford University Press.

Mele, A. (1987b) Recent work on self-deception. American Philosophical Quarterly 24:1-17.

Mele, A. (1983) Self-deception. Philosophical Quarterly 33:365-77.

Mele, A. & Moser, P. (1994) Intentional action. Noûs 28:39-68.

Nisbett, R. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment. Prentice-Hall.

Pears, D. (1991) Self-deceptive belief-formation. Synthese 89:393-405.

Pears, D. (1984) Motivated irrationality. Oxford University Press.

Peele, S. (1989) Diseasing of America: Addiction treatment out of control. Lexington Books.

Plato (1953) Cratylus. In: The dialogues of Plato, trans. B. Jowett. Clarendon Press.

Quattrone, G. & Tversky, A. (1984) Causal versus diagnostic contingencies: On self-deception and on the voter's illusion. Journal of Personality and Social Psychology 46:237-48.

Rorty, A. (1980) Self-deception, akrasia, and irrationality. Social Science Information 19:905-22.

Sackheim, H. (1988) Self-deception: A synthesis. In: Self-deception: An adaptive mechanism? ed. J. Lockard & D. Paulhus. Prentice-Hall.

Sackheim, H. & Gur, R. (1985) Voice recognition and the ontological status of self-deception. Journal of Personality and Social Psychology 48:1365-68.

Sackheim, H. & Gur, R. (1978) Self-deception, self-confrontation, and consciousness. In: Consciousness and self-regulation, vol. 2, ed. G. Schwartz & D. Shapiro. Plenum Press.

Silver, M., Sabini, J. & Miceli, M. (1989) On knowing self-deception. Journal for the Theory of Social Behaviour 19:213-27.

Sorensen, R. (1985) Self-deception and scattered events. Mind 94:64-69.

Szabados, B. (1985) The self, its passions, and self-deception. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Talbott, W. (1995) Intentional self-deception in a single, coherent self. Philosophy and Phenomenological Research 55:27-74.

Taylor, S. (1989) Positive illusions. Basic Books.

Taylor, S. & Fiske, S. (1978) Salience, attention and attribution: Top of the head phenomena. In: Advances in experimental social psychology, vol. 11, ed. L. Berkowitz. Academic Press.

Taylor, S. & Fiske, S. (1975) Point of view and perceptions of causality. Journal of Personality and Social Psychology 32:439-45.

Taylor, S. & Thompson, S. (1982) Stalking the elusive "vividness" effect. Psychological Review 89:155-81.

Trivers, R. (1985) Social evolution. Benjamin/Cummings.

Tversky, A. & Kahneman, D. (1973) Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-32.

Weiskrantz, L. (1986) Blindsight: A case study and implications. Oxford University Press.

NOTES

1. I have addressed many of these questions elsewhere. Mele 1987a argues that proper explanations of self-deception and of irrational behavior manifesting akrasia or "weakness of will" are importantly similar and generate serious problems for a standard philosophical approach to explaining purposive behavior. Mele 1992a develops an account of the psychological springs of intentional action that illuminates the etiology of motivated rational and irrational behavior alike. Mele 1995 defends a view of self-control and its opposite that applies not only to overt action and to belief but also to such things as higher-order reflection on personal values and principles; this book also displays the place of self-control in individual autonomy. Several referees noted connections between ideas explored in this article and these issues; some expressed a desire that I explicitly address them here. Although I take some steps in that direction, my primary concern is a proper understanding of self-deception itself. Given space constraints, I set aside questions about the utility of self-deception; but if my arguments succeed, they illuminate the phenomenon whose utility is at issue. I also lack space to examine particular philosophical works on self-deception. On ground-breaking work by Audi (e.g., 1985), Bach (1981), Fingarette (1969), Rorty (e.g., 1980), and others, see Mele 1987b; on important work by Davidson (1982) and Pears (1984), see Mele 1987a, ch. 10.

2. On the occasional rationality of self-deception, see Audi 1985, 1989, and Baron 1988, p. 40. On the question whether self-deception is an adaptive mechanism, see Taylor 1989 and essays in Lockard & Paulhus 1988.

3. For example, subjects instructed to conduct "symmetrical memory searches" are less likely than others to fall prey to the confirmation bias (see Sec. 2), and subjects' confidence in their responses to "knowledge questions" is reduced when they are invited to provide grounds for doubting the correctness of those responses (Kunda 1990, pp. 494-95). Presumably, people aware of the confirmation bias may reduce biased thinking in themselves by giving themselves the former instruction; and, fortunately, we do sometimes remind ourselves to consider both the pros and the cons before making up our minds about the truth of important propositions--even when we are tempted to do otherwise. For a review of the debate, see Kunda 1990. For a revolutionary view of the place of motivation in the etiology of beliefs, see Ainslie 1992.

4. Literature on the "paradoxes" of self-deception is reviewed in Mele 1987b.

5. One response is mental partitioning: the deceived part of the mind is unaware of what the deceiving part is up to. See Pears 1984 (cf. 1991) for a detailed response of this kind and Davidson 1985 (cf. 1982) for a more modest partitioning view. For criticism of some partitioning views of self-deception, see Johnston 1988 and Mele 1987a, ch. 10, 1987b, pp. 3-6.

6. This is not to say that self-deception is always "self-serving" in this way. See Mele 1987a, pp. 116-18; Pears 1984, pp. 42-44. Sometimes we deceive ourselves into believing that p is true even though we would like p to be false.

7. Regarding the effects of motivation on time spent reading threatening information, see Baumeister & Cairns 1992.

8. The following descriptions derive from Mele 1987a, pp. 144-45.

9. For a challenge to studies of the vividness effect, see Taylor & Thompson 1982. They contend that research on the issue has been flawed in various ways, but that studies conducted in "situations that reflect the informational competition found in everyday life" might "show the existence of a strong vividness effect" (pp. 178-79).

10. This theme is developed in Mele 1987a, ch. 10 in explaining the occurrence of self-deception. Kunda 1990 develops the same theme, paying particular attention to evidence that motivation sometimes primes the confirmation bias. Cf. Silver et al. 1989, p. 222.

11. For a motivational interpretation of the confirmation bias, see Frey 1986, pp. 70-74.

12. Cf. Mele 1987a, pp. 125-26. Also cf. Bach 1981, pp. 358-61 on "rationalization" and "evasion," Baron 1988, pp. 258 and 275-76 on positive and negative misinterpretation and "selective exposure," and Greenwald 1988 on various kinds of "avoidance." Again, I am not suggesting that, in all cases, agents who are self-deceived in believing that p desire that p (see n. 6). For other routes to self-deception, including what is sometimes called "immersion," see Mele 1987a, pp. 149-51, 157-58. On self-handicapping, another potential route to self-deception, see Higgins et al. 1990.

13. Literature on "selective exposure" is reviewed in Frey 1986. Frey defends the reality of motivated selective evidence-gathering, arguing that a host of data are best accommodated by a variant of Festinger's (1957, 1964) cognitive dissonance theory.

14. For references to work defending the view that self-deception typically is not intentional, see Mele 1987b, p. 11; also see Johnston 1988.

15. This is not to deny that self-deceivers sometimes believe that p while being aware that their evidence favors ~p. On such cases, see Mele 1987a, ch. 8 and pp. 135-36.

16. Condition 4 does not assert that the self-deceiver is aware of this.

17. On a relevant difference between being deceived in believing that p and being deceived into believing that p, see Mele 1987a, pp. 127-28.

18. Notice that not all instances of motivationally biased belief satisfy my set of sufficient conditions for self-deception. In some cases of such belief, what we believe happens to be true. Further, since we are imperfect assessors of data, we might fail to notice that our data provide greater warrant for p than for ~p and end up believing that p as a result of a motivationally biased treatment of data.

19. This is true, of course, on "degree-of-belief" conceptions of belief, as well.

20. Notice that simultaneously believing that p and believing that ~p--i.e., Bp & B~p--is distinguishable from believing the conjunction of the two propositions: B(p & ~p). We do not always put two and two together.

21. In a later paper, Sackheim grants this (1988, pp. 161-62).

22. The study is described and criticized in greater detail in Mele 1987a, pp. 152-58. Parts of this section are based on that discussion.

23. For supporting argumentation, see Mele 1987a, pp. 153-56.

24. As this implies, in challenging the claim that the sincere deniers have the belief at issue, I am not challenging the popular idea that attempts are explained at least partly in terms of pertinent beliefs and desires.

25. Obviously, whether the subjects satisfy the conditions offered in Section 2 as sufficient for self-deception depends on the relative strength of their evidence for the pertinent pair of propositions.

26. Locating such cases is not as easy as some might think. One reader appealed to blindsight. There is evidence that some people who believe themselves to be blind can see (e.g., Weiskrantz 1986). They perform much better (and in some cases, much worse) on certain tasks than they would if they were simply guessing, and steps are taken to ensure that they are not benefitting from any other sense. Suppose some sighted people in fact believe themselves to be blind. Do they also believe that they are not blind, or, e.g., that they see x? If it were true that all sighted people (even those who believe themselves to be blind) believe themselves to be sighted, the answer would be yes. But precisely the evidence for blindsight is evidence against the truth of this universal proposition. The evidence indicates that, under certain conditions, people may see without believing that they are seeing. The same reader appealed to a more mundane case of the following sort. Ann set her watch a few minutes ahead to promote punctuality. Weeks later, when we ask her for the time, Ann looks at her watch and reports what she sees, "11:10." We then ask whether her watch is accurate. If she recalls having set it ahead, she might sincerely reply, "No, it's fast; it's actually a little earlier than 11:10." Now, at time t, when Ann says "11:10," does she both believe that it is 11:10 and believe that it is not 11:10? There are various alternative possibilities. Perhaps, e.g., although she has not forgotten setting her watch ahead, her memory of so doing is not salient for her at t and she does not infer at t that it is not 11:10; or perhaps she has adopted the strategy of acting as if her watch is accurate and does not actually believe any of its readings. (Defending a promising answer to the following question is left as an exercise for the reader: What would constitute convincing evidence that, at t, Ann believes that it is 11:10 and believes that it is not 11:10?)

27. Pears identifies what I have called internal biasing and input-control strategies and treats "acting as if something were so in order to generate the belief that it is so" as a third strategy (1984, p. 61). I examine "acting as if" in Mele 1987a, pp. 149-51, 157-58.

28. For further discussion of the difference between 2 and 3 and of cases of self-deception in which agents intentionally selectively focus on data supportive of a preferred hypothesis (e.g.) without intending to deceive themselves, see Mele 1987a, pp. 146, 149-51.

29. Readers who hold that intending is a matter of degree should note that the same may be said about being settled upon doing something.

30. For criticism of opposing conceptions of intention in the psychological literature, see Mele 1992a, ch. 7. On connections between intention and intentional action, see Mele 1992a, 1992b, and Mele & Moser 1994.

31. Similar questions have been raised about partitioning hypotheses that fall short of postulating multiple personalities. For references, see Mele 1987b, p. 4; cf. Johnston 1988.

32. On "time-lag" scenarios of this general kind, see Davidson 1985, p. 145; McLaughlin 1988, pp. 31-33; Mele 1983, pp. 374-75, 1987a, pp. 132-34; Sackheim 1988, p. 156; Sorensen 1985.

33. Some readers may be attracted to the view that although Ike deceives himself, this is not self-deception at all (cf. Davidson 1985, p. 145; McLaughlin 1988). Imagine that Ike had been embarrassed by his performance in class that day and consciously viewed the remark as ironic when he wrote it. Imagine also that Ike strongly desires to see himself as exceptionally intelligent and that this desire helps to explain, in a way psychotherapy might reveal to Ike, his writing the sentence. If, in this scenario, Ike later came to believe that he was brilliant in class that day on the basis of a subsequent reading of his diary, would such readers be more inclined to view the case as one of self-deception?

34. Pears 1991 reacts to the charge of incoherence, responding to Johnston 1988.

35. Talbott suggests that there are different preference rankings in the two kinds of case. (The preferences need not be objects of awareness, of course.) In cases of self-deception, the agents' highest relevant preference is that they believe "that p is true, if p is true"; and their second-highest preference is that they believe "that p is true, if p is false": self-deceiving agents want to believe that p is true whether or not it is true. In the contrasting cases, agents have the same highest preference, but the self-deceiver's second-highest preference is the lowest preference of these agents: these agents have a higher-ranking preference "not to believe that p, if p is false." Suppose, for the sake of argument, that this diagnosis of the difference between the two kinds of case is correct. Why should we hold that in order to account for the target difference--namely, that in one case there is a motivated biasing of data and in the other there is not--we must suppose that an intention to deceive oneself (or to get oneself to believe that p) is at work in one case but not in the other? Given our understanding of various ways in which motivation can bias cognition in the absence of such an intention, we can understand how one preference ranking can do this while another does not. An agent with the second preference ranking may be strongly motivated to ascertain whether p is true or false; and that may block any tendency toward motivated biasing of relevant data. This would not be true of an agent with the first preference ranking.

36. Parts of this article derive from my "Two Paradoxes of Self-Deception" (presented at a 1993 conference on self-deception at Stanford). Drafts were presented at the University of Alabama, McGill University, and Mount Holyoke College, where I received useful feedback. Initial work on this article occurred during my tenure of a 1992/93 NEH Fellowship for College Teachers, a 1992/93 Fellowship at the National Humanities Center, and an NEH grant for participation in a 1993 Summer Seminar, "Intention," at Stanford (Michael Bratman, director). For helpful written comments, I am grateful to George Ainslie, Kent Bach, David Bersoff, John Furedy, Stevan Harnad, Harold Sackheim, and BBS's anonymous referees.

The Spandrels of Self-Deception: Prospects for a Biological Theory of a Mental Phenomenon

D. S. Neil Van Leeuwen

Stanford University

Abstract: Three puzzles about self-deception make this mental phenomenon an intriguing explanatory target. The first relates to how to define it without paradox; the second concerns why it exists at all; and the third is about how to make sense of self-deception in light of the interpretive view of the mental that has become widespread in philosophy. In this paper I address the first two puzzles. First, I define self-deception. Second, I criticize Robert Trivers’ attempt to use evolutionary psychology to solve the second puzzle (existence). Third, I sketch a theory to replace that of Trivers. Self-deception is not an adaptation, but a spandrel in the sense that Gould and Lewontin (1978) give the term: a byproduct of other features of human (cognitive) architecture.

Self-deception is so undeniable a fact of human life that if anyone tried to deny its existence, the proper response would be to accuse this person of it.

—Allen Wood

Introduction to the Problems

Even prior to a characterization of what self-deception is, it’s easy to see that it’s widespread. This is puzzling, because self-deception is paradoxical. Here are three apparent problems. First, self-deception seems to involve a conceptual contradiction; in order to deceive one must believe the contrary of the deception one is perpetrating, but if one believes the contrary, it seems impossible for that very self to believe the deception.1 Second, a view of the mind has become widely accepted that makes rationality constitutive and exhaustive of the mental; this is the interpretive view of the mental put forth by Donald Davidson and followers.2 According to this view, the idea of attributing irrational beliefs to agents makes little sense, for we cannot make sense of what someone’s beliefs are unless they are rational. But self-deception is highly irrational.3 Third, a brief survey of other cognitive mechanisms (such as vision) reveals that most are well suited for giving us reliable information about ourselves and our environment. Self-deception, however, undermines knowledge of ourselves and the world. If having good information brings fitness benefits to organisms, how is it that self-deception was not weeded out by natural selection? (For convenience, I shall refer to these as puzzles 1, 2, and 3.)

Puzzles such as these make self-deception an intriguing explanatory target. Philosophical literature on self-deception abounds, as does psychological literature. Much of the philosophical literature, however, focuses on characterizing the phenomenon in a way that avoids the paradoxes while making literal sense of calling the phenomenon “self-deception.” Such investigations, while being interesting with respect to puzzles 1 and 2, do little to advance an understanding of why the capacity for self-deception exists as a property of the human mind. Puzzle 3 is widely overlooked. The story often told is that self-deception is the mind’s means of protecting itself from psychological pain. But such an account does not solve puzzle 3, for misinformation, regardless of how content it might leave the organism, would seem inevitably to decrease fitness in the long run. Pain, physical and psychological, occurs for biological reasons.

Given that puzzle 3 remains unsolved, the time seems right for evolutionary psychology to enter the picture with an adaptationist theory of self-deception. After all, the initial conditions seem right: self-deception (as Wood points out in the quote above) is, it seems, universal; its universality may be taken to suggest that the capacity for it was a trait that was selected for in the ancestral environment. All that remains is for the evolutionary psychologist to posit a way in which self-deception could have conferred a fitness benefit.

This strategy is precisely what Robert Trivers adopts in his article “The Elements of a Scientific Theory of Self-Deception.”4 On his account, self-deception serves at least two purposes that confer fitness benefits on the organism. First, the ability to self-deceive increases one’s ability to lie effectively. Second, self-deception can serve the purpose of helping to orient one positively toward the future. If Trivers’ theory is correct, we may take him to have solved puzzle 3. Further, if his work coheres conceptually with appropriate work in philosophy aimed at puzzles 1 and 2, we might take it to offer a definitive solution to the paradoxes of self-deception.

1 Sartre (1956: 89) raises a closely analogous paradox regarding his concept bad faith.

2 See especially Essays 9-11 in Inquiries into Truth and Interpretation and Essays 11 and 12 in Essays on Actions and Events.

3 I am not so concerned to deal with puzzles 1 and 2 in this paper, although I believe my next section on the concept of self-deception offers a start. The literature, however, is extensive. For work relevant to puzzle 1, see (for example) McGlaughlin (1988) and Rorty (1988). Davidson’s own work (1982, 1985, 1993) is largely addressed at puzzle 2. See Johnston (1988) for a view that takes puzzle 2 as a reductio of the interpretive view.

4 Trivers (2000).

While Trivers’ theory looks promising, I shall argue here that it fails for a number of reasons. Three criticisms are salient. First, the loose, qualitative predictions that the theory seems to have as consequences do not stand up to many kinds of cases of self-deception. Second, it makes implausible assumptions about what the phenotype set or variation would have been for natural selection to choose from. Third, the adaptationist/evolutionary psychology approach to self-deception seems to make the capacity for self-deception a modular feature of the mind; such a view makes little sense of the connections that self-deception has to other aspects of human cognition. After leveling these critiques against Trivers, I develop my own approach to the question of why the capacity for self-deception exists. Self-deception is a spandrel, in the sense that Gould and Lewontin (1978) give that term, of more general features of the Bauplan of human cognitive architecture. Taking this view avoids the criticisms to which Trivers’ theory is susceptible and yields empirical predictions whose prospects of confirmation appear much better.

The Quarterback and the Cuckold: Conceptual Groundwork on Self-Deception

As a starting point to discussion, let me first define self-deception.

Trivers characterizes self-deception as “active misrepresentation of reality to the conscious mind.”5 This definition is inadequate. First, “misrepresentation of reality” suggests simply incorrect representation of facts.6 But on this analysis any false belief that is consciously adhered to would count as self-deception, even ones that are clearly not self-deceptive (like the belief that the water cooler is in the hall, although someone has recently moved it to the lounge). Furthermore, this definition of self-deception excludes paradigm cases of the phenomenon, since a belief does not count as a “misrepresentation” if it turns out true. But take the case of the abused spouse who convinces herself that her husband will stop beating her for good after this time. I am inclined to count her mental state as self-deception, and to think so regardless of whether or not he (say) gets hit by a truck the next day and so does not beat her. But if he gets hit by a truck, the apparently self-deceptive belief will be true, and thus not count as “misrepresentation.”

5 Trivers (2000: 114).

6 The first entry in the online OED for “misrepresentation” is: “Wrong or incorrect representation of facts, statements, the character of a person, etc.”

Rejecting Trivers’ definition, I think we can best conceive of self-deception by considering two paradigm cases: the cuckold and the nervous quarterback. To start, imagine a man whose wife is cheating but who convinces himself that she’s faithful. We are inclined to label this “self-deception” just in case the man in question has compelling evidence that she is cheating. She stays out all night, comes home with messed-up hair, is secretive about where she goes at night, etc. In fact, the husband has evidence such that, if he were to have that evidence about the behavior of another man’s wife, he would conclude the woman was cheating. What makes the difference? The obvious suggestion is that he has a desire that his wife is faithful. This desire makes the causal difference in what he comes to believe. Now consider the nervous quarterback. He has strong reasons to believe his coach will be furious if he delivers a bad performance. Furthermore, he knows that this belief makes him nervous and that his resulting anxiety will cause him to perform badly. With this background he convinces himself that his coach will not be angry if he delivers a bad performance (for he desires this belief), despite compelling evidence to the contrary. To do this, he attends to what scanty evidence there is that the coach is a nice guy; if he succeeds in convincing himself, he’s self-deceived.

These cases have two points in common. First, the self-deceived agent has epistemic access to compelling evidence to the contrary of the belief that he self-deceptively adopts. By “compelling evidence to the contrary” I mean evidence that under the usual application of the agent’s epistemic norms would yield the belief opposed to the self-deceptive belief.7 Second, in both cases there is a desire that is making the difference in what the agent believes. Importantly, the existence of the desire confers no further epistemic warrant on the adopted belief. For, to take the first example, the agent’s desire that his wife was faithful last night makes it no more likely that it is true that she was faithful. With these two points in mind, we arrive at the following definition of self-deception: an agent is self-deceived if and only if (i) she arrives at a belief that is contrary to what her epistemic norms in conjunction with available evidence would usually dictate and (ii) a desire for a certain state of affairs to obtain, or to have a certain belief, causally makes the difference in what belief is arrived at in an epistemically illegitimate fashion.

7 It may be further asked what I mean by “usual” here, since if “usual” just means non-self-deceptive then we are stuck with a circular definition. This problem, however, can be solved by saying that “usual” just means “in absence of the kind of belief-influencing desire that constitutes the second point of commonality between the two cases.”

The difference between the quarterback and the cuckold is that the cuckold’s desire that makes the difference is a desire for the world to be a certain way, whereas the quarterback desires a certain belief—it is not so much that he desires that his coach not be angry; mainly he desires that he believe that his coach will not be angry, since having this belief will calm his nerves. This analysis makes the cuckold’s self-deception a species of wishful thinking, while the quarterback’s might be called intentional self-deception.8 One might be tempted to call one self-deception and not the other, but I consider them both to be genuine cases of self-deception in virtue of intuitions about common usage (I think general usage would lean in the direction of labeling both as cases of “self-deception”). And I also think my definition conforms to the semantics of the term self-deception nicely for the following reasons. First, it stipulates that self-deception involves subversion of the evidence-based belief formation process (hence deception); second, it stipulates that the subversion is due to one of the agent’s own desires (hence self).

These conceptual considerations, of course, do not spell doom for Trivers’ project. It may be that his adaptationist explanation of the phenomenon is correct, despite his bad definition. I do, however, think that Trivers should be more precise in his definition than he is, for hypotheses about traits as adaptations need to employ precise characterizations of what exactly those traits are if those hypotheses are to admit any kind of testing that would count as scientific.

8 There is disagreement about which of these two notions is the right analysis of self-deception. McGlaughlin (1988) is explicit about the view that self-deception is on a continuum with wishful thinking, while Talbott (1995) adopts the “intentional” self-deception analysis. By defining the term in the inclusive way that I do, I avoid the controversy. Of course, I have independent reasons for adopting the analysis I do.

A Brief Overview of Trivers’ Account

Trivers offers essentially three hypotheses for why humans engage in self-deception. First, self-deception may aid the deception of others, so it’s adaptive. Second, self-deception may arise from internal representation of voices of significant others, where such “voices” can be ingrained genetically (“internal genetic conflict”) or learned through contact. (This part of the theory I won’t discuss for reasons of space; I include it in the summary for completeness.) Third, self-deception is adaptive insofar as it orients one favorably toward the future.

With respect to the first hypothesis, Trivers notes that hiding the fact that one is deceiving from one’s own consciousness makes it less likely that one will inadvertently betray the fact that one is deceiving, since one does not realize it oneself. He then elaborates on the hypothesis by giving a loose taxonomy of deceits that may be aided by self-deception: self-promotion, the construction of a biased social theory, and fictitious narratives of intention.

Turning to the second hypothesis, Trivers first focuses on parent-offspring interaction, noticing two facts: (i) a parent has a 1/2 degree of relatedness to its offspring, whereas the offspring (naturally) has a degree of relatedness of 1 to itself; (ii) the instruction of parents can be enormously valuable to offspring. As a result, the offspring can benefit from parental instruction, although the interests of the parent and offspring are expected to diverge. From this Trivers concludes: “. . . it can easily be imagined that selection has accentuated the parental voice in the offspring to benefit the parent and that some conflict is expected within an individual between its own self-interest and the internal representation of its parents’ view of its self-interest.”9 The conflict of internal voices is then supposed to pave the way for self-deception.

9 Trivers (2000: 122).

Third, Trivers briefly considers the idea that positive illusions benefit humans. “Life is intrinsically future-oriented and mental operations that keep a positive future orientation at the forefront result in better future outcomes (though perhaps not as good as those projected).”10

Do Trivers’ Views Stand Up?

I will focus here on two aspects of Trivers’ theory (and more the first than the second): his view that self-deception was selected for because it increases the ability to lie;11 his view that self-deception was selected for because it orients one positively toward the future.

I take it as an assumption that having reliable information about the environment is beneficial for the survival and reproduction of any organism.12 I also take it as an assumption that this largely explains the cognitive mechanisms we have, such as vision, hearing, smell, touch, and taste. Faculties that provide a pathway for reliable information were selected for in the process of natural selection, not just in the more recent human environment, but also deeper in our evolutionary history. I doubt Trivers would disagree with these basic assumptions as a starting point for investigation. Against this background, however, Trivers’ view is surprising. He is committed to the claim that a specific capacity for self-deception was selected for; such a capacity, if it did come about in this way, goes against our expectations about natural selection pushing us in the direction of having better information, because self-deception will generally (although not necessarily) tend in the direction of falsehood.

Trivers’ answer to this worry has to be that the ability to lie convincingly (hypothesis 1) has so great a fitness benefit that self-deception’s contribution to it outweighs the loss of fitness that we would expect to accompany something that detracts from reliable information flow. To start, one might doubt whether lying itself is really that beneficial in the first place, but this is not my criticism. It seems, rather, that self-deception would have to be highly targeted—that is, almost uniquely associated with situations in which lying is beneficial—in order to have enough fitness benefit to be selected for overall. To make this clear, consider an early human who is asked whether there is food by the river. Perhaps this human is helped by being able to lie, because then he has the food to himself. So given that there are many situations like this, self-deception, which will increase the ability to lie, will be selected for. But what would the self-deception here consist in? Well, convincing himself in the dialogue that there actually is no food by the river. But if he continues to believe this, then he ends up not getting the food at the river himself, because he believes there is none. In fact, the self-deceiver is worse off than if he had not lied at all, because then he at least would have gotten half the food. Thus in order to confer the fitness benefit, the supposedly selected-for capacity for self-deception would have to produce self-deceptions that hold almost exclusively when lying is useful. Even if such a cognitive feature were possible, it sounds very unlike the self-deception that we encounter in the real world.

10 Trivers (2000: 126).

11 Properly speaking, if one is truly self-deceived in having a certain belief, then propagating that belief will not count as “lying,” because one has the belief oneself. But I trust it will be clear what is meant in this context: “lying” is standing in for “conveying beliefs that one would not under normal circumstances believe oneself.”

12 Skittish animals notwithstanding.

Furthermore, although Trivers says nothing about how his theory might be tested, the considerations of the last paragraph suggest that there is at least one prediction that his theory makes that might facilitate testing of some sort. If Trivers were right, then cases in which humans engage in self-deception could be expected to map closely to cases in which it is useful (potentially yields fitness benefits) to lie. A systematic psychological study would be needed to test this, but I think there are two strong reasons for expecting this prediction will fail. First, it is perfectly possible (even frequent) for human self-deception to occur in cases where lying confers no benefit and is not even attempted. Second, human lying often occurs (and may even confer fitness benefits) without self-deception to help it along. To see the first point, consider again the quarterback from our earlier discussion who deceives himself into thinking that his coach will not be angry if he plays poorly. He may have no intention of spreading the belief to others; he engages in the self-deception for its effect on how he will play (better if less nervous), not in order to lie. He may even hope that others think that the coach will be angry, because then they might pity him.13 Such cases make it unlikely that the capacity for self-deception exists because it enhances the ability to lie. To see the second point, consider again the early human who lies about there being food by the river; here, presumably, there is no self-deception at all about where the food actually is. Putting the two points together, we see there must be plenty of self-deception without lying and plenty of lying without self-deception. So the main prediction that Trivers’ theory would seem to make fails.14

In addition, if Trivers is to tell the adaptive story he wants to tell, he, like other adaptationists, has to assume that there was a high degree of variation in the phenotype set from which selection occurred. Otherwise nature will not have provided enough options for natural selection to sort through and pick out the things that ultimately are adaptations. But here Trivers is in a delicate situation. If we assume too diverse a phenotype set, then it becomes clear that self-deception for the sake of lying will not have the optimal fitness value in comparison to the other possibilities. The reason is that one relevant possibility that we might put in the phenotype set assumed in our optimality model is the capacity to lie effectively without deceiving oneself. Then, because of the advantages of having reliable information over not having reliable information, this phenotype will win out over that of the self-deceptive liar (even if we suppose that self-deceptive lying is advantageous over not lying at all). If this is the case, then we do not expect self-deception that is tied to lying to win out. In order for Trivers’ account to work as an adaptationist story, he would have to assume a phenotype set from which selection occurred that did not include the possibility of non-self-deceptive liars competing with the hypothesized self-deceptive liars. Such an assumption would be highly puzzling, especially since there are many effective liars in existence who do not self-deceive. In addition, we can extend an analogous criticism to Trivers’ hypothesis that self-deception may have been selected also for its ability to orient one positively toward the future. Why would a self-deceptive positive future orientation win out over a self-honest positive future orientation? It seems obvious that any benefits that arise from positive illusions could just as well have existed as independent phenotypic characteristics. Against these, it is hard to see why the self-deceptive positive illusions would have won out.

13 I note that it would be a weak objection to this point to say that it does not hold because there were not quarterbacks in the ancestral environment; whatever that environment was, it will not be that difficult to construct analogous cases (assuming the environment was social and people were at all disposed to get nervous for social reasons).

14 Trivers might respond to one of these points by saying that the part of his theory about self-deception yielding a positive orientation toward the future explains the quarterback’s self-deception. But now it becomes increasingly unclear how Trivers thinks this chapter in the history of natural selection is supposed to go. Is the capacity for self-deception supposed to be one trait or two—one to help lying and one to help positive orientation toward the future? If it is one capacity, what was the reason for its initial selection (if it was to aid lying, then my criticisms apply)? If it is supposed to be two capacities, why has Trivers not said so and how are they supposed to be different? Trivers’ best hope might be to theorize that the capacity for self-deception, however it arose, was later maintained as an adaptation by the means that he describes, but then his story lacks an explanation—and I take it this was what he thinks he is giving in this article—for why self-deception became prevalent in the first place.

There are many other problems that can be raised about Trivers’ views on self-deception. One obvious one is the “grain” problem—the problem that evolutionary psychologists face in individuating what precisely were the problems set by nature that given adaptations were to solve.15 I wish to conclude my criticisms here by noting what on my view may be the greatest problem for Trivers’ theory. He holds that the capacity for self-deception was a particular adaptation that was selected for. This seems to have as a consequence that there is some self-deception “module” that humans have that arose in the history of our cognitive evolution. Presumably, then, this module would turn on whenever the adaptive benefits of deceiving oneself are at hand. This view (whether Trivers holds it or not) is problematic insofar as it fails to relate self-deception to other features of cognition. Talbott (1995), for example, notes that biases of attention, memory, reasoning, and evidence gathering may all be involved in self-deception. A view that makes the capacity for self-deception out to be a modular adaptation does not sit well with a view that makes it dependent on other features of human cognition. In the next section, I develop a view that locates self-deception within a broader mental context. That is, I supply a (more or less) structuralist alternative to the adaptationist view.

15 For a discussion of this see Sterelny and Griffiths (1999: 352).

Self-Deception as a Spandrel

Gould and Lewontin (1978) begin their critique of adaptationism with the observation that any architectural structure that involves a dome mounted on top of rounded arches will have as a by-product of this design what are called spandrels, tapered triangular surfaces that reside beneath the dome in the space between the arches. Gould and Lewontin’s point is that many phenotypic traits are analogous to spandrels; they are the result of the organism’s structure, and it would be wrong to construe them as adaptations that were selected for in their own right, just as it would be wrong to construe the spandrels of a cathedral as spaces that the architect decided to include independently of the overall structural design. (The example often alluded to is the chin, which is the result of other structural features of the jaw, not a trait selected for in its own right.)

Contra Trivers, I think it is most fruitful to view self-deception as a spandrel, not as an adaptation. That is, I think humans so pervasively have the propensity for self-deception not because the trait was selected for in our evolutionary history, but because other aspects of our cognitive Bauplan have the capacity for self-deception as a by-product. Here I identify four such features and attempt to explain how they may be involved in self-deception. I hasten to add the qualification that my account here may be—and probably is—incomplete, since (i) other features may also be involved and (ii) I suspect these features by themselves are not sufficient to make complete sense of the phenomenon. I am neutral on whether or not the features I identify here themselves are adaptations, although I suspect some of them are.

The four features I have in mind are:

The inertia of the web of belief. It is widely acknowledged in philosophy that beliefs do

not typically occur without relation to a wide web of other beliefs that in some way

justify them and give them content. I believe that one fact of human cognition is that the

webs that constitute our belief sets typically have a degree of inertia; that is, a system of

beliefs does not easily change entirely due to the existence of facts that are anomalous

from the perspective of particular beliefs. For the most part, I think this aspect of our

cognitive economy is advantageous; for if our web of beliefs underwent revolution with

each discovery of anomalous fact, we would be in a perpetual state of cognitive flux.

Disposition to favor theories that are on the whole less complex. The human mind (to use

some terminology from computing) has limited storage space and limited computational

power. Thus it is inclined to make sense of as much of reality as it can with the aid of as

few informational resources as possible. This propensity may be seen as informationally

advantageous.

Selective attention to evidence. Humans form beliefs in response to evidence that they

find for those beliefs in the world. Often, evidence will uniquely determine what the

beliefs are—if the wall appears white to me, I cannot do anything to make myself believe

that it’s purple. However, what evidence we attend to is influenced by what our interests

are, and this will have an effect on what we come to believe.

Awareness of self-fulfilling beliefs. The holding of many beliefs contributes to the

fulfillment of those beliefs. For example, I won’t be able to jump across the ditch unless I

believe that I can; but if I believe, I’m able. Thus it makes sense to form the belief. But

this awareness that beliefs can be self-fulfilling can be misapplied. Believing I will win

the lottery doesn’t make it so.

The inertia of the web of beliefs is a factor in self-deception as follows. The human mind

is not easily disposed to giving up beliefs that are central to the web. Typically, much disturbance

at the periphery is necessary. The fact that the web has inertia to begin with makes it possible to

hang on to unjustified beliefs even when evidence to the contrary is compelling. Thus the inertia I

allude to is a factor in the human capacity for self-deception.

Second, suppose adopting a certain belief will add a great deal of complexity to one’s

belief set. The cuckold from our earlier examples may have good reason to believe that his wife is

a good person, that she is honest, that he himself is a good judge of character, that her past

promises were made sincerely, etc. Against this background, acknowledging her infidelity would

require the acceptance of a large number of additional facts. His wife is a good person, except in

certain particular ways; she is honest usually, but not in certain situations; he himself is

sometimes a good judge of character, sometimes not; her past promises were sincere, but she has

started to have complex second thoughts about them, etc. Of course, the cuckold might have to

accept complex explanations about where she was last night, but not acknowledging her infidelity

16 These four features of human cognition are, of course, not at all new ideas. What I think is original, if

anything, is the way I employ 1 and 2 to understand the phenomenon of self-deception (3 and 4 have

figured in other discussions) and the insight that the way such features work together to produce self-deception

affects what sort of biological stance we may take on the phenomenon. My inspiration for 1

comes from Quine (1953) and Kuhn (1962). I realized that 2 may play a role in self-deception on reading

Sober (1981: 103 ff.). 3 is discussed in Talbott (1995).

may reduce the informational complexity of his beliefs overall. We might generalize this and say

that a greater degree of complexity in the web of beliefs locally can lead to less complexity overall.

Of course, when the local complexity becomes too much, the phenomenon of overall

informational efficiency may start to turn into the phenomenon of self-deception. Thus having

informationally advantageous dispositions as part of our cognitive architecture contributes to our

capacity for self-deception.

Third, we attend to evidence to varying degrees. This is generally beneficial, because if

we were to attend equally to all evidence for any belief we might have, we would know a lot

about many things that are of no interest to us and not enough about many things that are of much

greater interest. But failure to attend to evidence that by our usual epistemic standards we should

attend to, when that evidence is compelling and to the contrary of a belief we have, can leave us

with beliefs that we ourselves ought not regard as justified. When that failure to attend to the

evidence is due to a desire that has no epistemic relevance to the truth of the belief in question,

holding that belief becomes a case of self-deception.17

Fourth, many aspects of human performance require confidence. I must believe that I can

do P in order to be able to do it. Thus many humans make a habit of pressuring themselves to

believe they can do something as a normal means of trying to be able to. That is, they desire a

belief of the form I can accomplish P. In cases where confidence actually affects performance,

such belief formation can be justified, since the having of the belief that I can do something

actually increases the likelihood that I really can. Such belief formation is easily extended to

produce self-deception—particularly in cases where it would be manifestly irrational for me to

believe (say) that I can do something very difficult. Or worse, it extends to cases where

performance plays no role; people convince themselves they’ll win the lottery “if I only believe.”

17 I note here that this analysis leads to a possible connection between self-deception and deception in

which the direction of explanation is the opposite of what Trivers gives (by this I mean that attempts to

deceive can explain some cases of self-deception). Often in order to deceive, one must gather evidence in

support of the deception. The biased accumulation of evidence can then lead the deceiver himself to be

deceived.

Finally, these four features can work in groups. 2 and 3 can combine as follows: I don’t

want to increase the complexity of my belief set, so I ignore certain evidence. 1 can combine with

any of the other three, for example, with 4: if I used to be able to do something, then inertia of the

web of beliefs combined with the thought that I only need confidence may help me self-deceptively

hold on to the belief that I can. 3 can also combine with 4: I ignore evidence that

confidence makes no difference to a task.

Conclusion

I have argued here that self-deception is not an adaptation, but a spandrel, a by-product of

other features of our cognitive architecture. My criticisms were targeted most specifically at the

theory of Robert Trivers, but it should be clear by now that I am arguing against adaptationist

theories of self-deception generally. Indeed, it should not be surprising that the capacity for self-deception

is not an adaptation, since self-deception so often leads to consequences that are

destructive for its practitioners. Self-deceptive conviction that someone is a friend can lead to

being swindled, whereas one might not have been swindled if she were honest with herself. Self-deception

about a partner’s fidelity may lead one to put time and energy into raising a child that is

not one’s own. Self-deception can lead to the pursuit of goals that are a waste of time. The list

goes on. Two examples of how maladaptive self-deception can be that are relevant to

contemporary society are the abused wife who convinces herself that her husband will change and

the drunk driver who convinces himself that he is fit to drive. Theories like Trivers’ lead me to

think that one of humanity’s greatest self-deceptions is that self-deception is beneficial. Perhaps

it is sometimes, but, I believe, not generally.

One advantage of treating self-deception as a spandrel is that we can explain its continued

existence despite the salient possibility that it is maladaptive. Architectural plans often have

undesirable by-products (this, of course, is where the analogy breaks down, because real

spandrels are often beautiful). I also wish to suggest that the qualitative predictions that my

schematic theory seems to make are all quite plausible. First, if I am right about the role of the

inertia of the web of belief, we would expect many cases of self-deception to be cases of

maintaining old beliefs despite contrary evidence that has piled up. For an example, just consider

Kuhn’s (1962) examples of scientists who die without ever giving up their paradigms. Second,

if my considerations about the web of belief and the role of favoring informational simplicity are

correct, we would expect not that self-deception occurs regarding things that are close to the

sensory periphery of the web, but rather that it would occur regarding more central beliefs that

play a greater role in the simplicity of the overall system. This also seems plausible, for it is hard

to imagine one deceiving oneself regarding whether one’s socks are yellow, but self-deception

often happens regarding central beliefs that have many connections to other practical beliefs

about one’s life (like whether or not a spouse is faithful). Finally, if the overextended connection

between belief that I can do P and actually doing P plays the role in self-deception that I think it

does, then we would expect our self-deceptive realities to correspond more often to our desired

pictures of reality than to pictures of reality that we shun. This, too, I think fits the facts; consider

the many initially overly-optimistic young lovers whose love ultimately goes unrequited.

To conclude, I wish to draw one general moral for the research program of evolutionary

psychology. Trivers’ main mistake was to attempt to explain self-deception without considering

any of the many other features of the human mind to which it relates. Thus he found himself in

the position of attempting to explain why a feature of human psychology exists as though it were

a feature in isolation, and the prevalence of this feature led him to consider it an adaptation. This

mistake is easy to make. That this mistake is being made often is suggested by the fact that the

picture of the mind that evolutionary psychology often paints is a picture of massive modularity.

The mind, it can be argued, does have modules, but it is not all modules. I believe that stepping

back and considering the relations between features of the human mind (either from the

perspective of psychology or philosophy of mind) before adverting to adaptationist stories will

yield psychological theories that are both richer and more accurate.

References

1. Davidson, D. (1980) Essays on Actions and Events, New York: Oxford University Press.

2. Davidson, D. (1982) “Paradoxes of irrationality,” in Philosophical Essays on Freud, R.

Wollheim and J. Hopkins (eds.), New York: Cambridge University Press.

3. Davidson, D. (1984) Inquiries into Truth and Interpretation, New York: Oxford

University Press.

4. Davidson, D. (1985) “Deception and Division,” in Actions and Events, E. LePore and B.

McLaughlin (eds.), New York: Basil Blackwell.

5. Davidson, D. (1993) “Who is fooled?,” in Self-Deception and Paradoxes of Rationality

J. Dupuy (ed.), Stanford: CSLI Publications.

6. Gould, S. J. and Lewontin, R. (1979) “The Spandrels of San Marco and the Panglossian

Paradigm,” Proceedings of the Royal Society of London.

7. Johnston, M. (1988) “Self-Deception and the Nature of Mind,” in Perspectives on Self-

Deception [henceforth PSD], B. McLaughlin and A. O. Rorty (eds.), Berkeley:

University of California Press.

8. Kuhn, T. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago

Press.

9. McLaughlin, B. (1988) “Exploring the Possibility of Self-Deception in Belief,” in PSD.

10. Pears, D. (1984/1998) Motivated Irrationality, Oxford University Press/St. Augustine

Press.

11. Quine, W. V. (1953) “Two Dogmas of Empiricism,” in From a Logical Point of View,

Cambridge: Harvard University Press.

12. Rorty, A. O. (1988) “The Deceptive Self,” in PSD.

13. Sober, E. (1981) “The Evolution of Rationality,” Synthese.

14. Sterelny, K. and Griffiths, P. E. (1999) Sex and Death, Chicago: University of Chicago

Press.

15. Talbott, W. (1995) “Intentional Self-Deception in a Single Coherent Self,” Philosophy

and Phenomenological Research.

16. Trivers, R. (2000) “Elements of a Scientific Theory of Self-Deception,” Annals of the

New York Academy of Sciences.

MOTIVATED AVERSION: NON-THETIC AWARENESS IN BAD FAITH

Jonathan Webber

Sartre Studies International vol. 8, no. 1 (2002)

– please cite only the publication –

Abstract: Sartre's concept of ‘non-thetic awareness’ must be understood as equivalent to the concept of

‘nonconceptual content’ currently discussed in anglophone epistemology and philosophy of mind, since it could

not otherwise play the role in the structure of ‘bad faith’, or self-deception, that Sartre ascribes to it. This

understanding of the term makes sense of some otherwise puzzling features of Sartre's early philosophy, and has

implications for understanding certain areas of his thought.

Exactly what does Jean-Paul Sartre mean by describing some conscious awareness as ‘non-thetic’?

He does not explicitly say. Yet this phrase, sprinkled liberally throughout his early

philosophical works, is germane to some of the distinctive and fundamental theories of

Sartrean existentialism. My aim in this paper is to examine the concept in terms of the role

that Sartre claims it plays in bad faith (mauvaise foi), the deliberate and motivated project of

refusing to face or consider the consequences of some fact or facts. I will argue that non-thetic

awareness could play the role Sartre ascribes to it in bad faith only if it is understood as being

equivalent to the nonconceptual representational content currently discussed in anglophone

philosophy of mind. I will proceed by first providing an initial rough characterisation of ‘non-thetic’

awareness through a discussion of the philosophical background to Sartre’s term, then

showing how this rough characterisation needs to be refined in order that bad faith may evade

the two paradoxes of self-deception, next drawing the distinction between conceptual and

nonconceptual content, and then arguing that non-thetic awareness must be construed as

nonconceptual content. This clarification of one of the most pervasive and one of the most

obscure concepts in Sartrean existentialism will have the additional ramifications that Sartre’s

theory of consciousness in general must be understood as involving both conceptual and

nonconceptual structures and that his discussion of the interplay of these structures can

provide innovative and valuable contributions to the debates over the role of conceptual and

nonconceptual contents in perception and action currently raging in anglophone discussions of

mind.


‘THETIC’ AND ‘POSITIONAL’

Sartre’s frequent use of the term ‘thetic’ (thétique) and its negation without definition

is paralleled by his similarly unexplained use of the term ‘positional’ (positionnelle) and its

cognates. These two terms, which Sartre seems to consider co-extensive, have been taken over

from Edmund Husserl, so Sartre’s lack of definitions seems to indicate that he considered

himself to be using the stock terminology of the fledgling Phenomenological tradition,

stemming from Husserl’s work, in which he was keen to situate himself.

The thetic (thetischen) character of an experience, for Husserl, is equivalent to its

positional (Setzungs) character (e.g. Ideas, § 129), but Husserl is not consistent on the nature

of this character. In Logical Investigations, ‘positing’ (Setzung) awareness affirms the

existence of its object whereas ‘nonpositing’ awareness suspends judgement about the

existence or non-existence of the object (Inv. V § 34). In Ideas, ‘positing’ is used in a variety

of senses, the widest of which encompasses the varieties of mental attitudes towards objects,

such that judging, wishing, and perceiving, for example, are different forms of positing (§ 129).

And in Cartesian Meditations, ‘positing’ is defined as ‘taking a position as to being’, and there

are a wide variety of such positions, including ‘certainly existing, being possible, being

probable, also being beautiful and being good, being useful, etc.’ (§ 15).1 The positional or

thetic character of an experience for Husserl, then, is the experience’s character of explicitly

classifying the experienced object under some category or other, but Husserl is not consistent

over the kinds of categories involved. This sense is related to the sense of G. W. F. Hegel’s

use of the term ‘setzen’, usually translated as ‘to posit’, but which means to articulate or make

explicit something that was already implicit.2

Sartre does not use ‘positional’ in this way. To call a consciousness ‘positional’, for

Sartre, is to say that ‘it transcends itself in order to reach an object’ (B&N: xxvii).3 The object

posited in an experience is the object singled out, to which I ‘direct my attention’ (B&N: 95).

Looking at a photograph of my friend Peter, for example, I may inspect the shapes and

colours on the card, or I may see it as an image of Peter. Only in the former case, according to

Sartre, am I seeing the photograph: it is the object posited. In the latter case I am imagining

Peter: he is the object posited (PI: 17-8).4 The positional character of experience, for Sartre,

then, is its direction on or towards some particular object, the object posited.

The thetic component of experience in Sartre’s theory of consciousness is roughly the

aspect of Husserl’s notion of the positional character of awareness that is missing from

Sartre’s notion of positing. Where Sartrean positing is just directedness towards an object,

Husserlian positing is directedness towards an object that classifies it in certain ways. The

thetic component of an act of consciousness, for Sartre, consists in a thesis or proposition

(thèse) classifying the object posited (see B&N: 90); it is the set of ways in which the object

is understood. For example:

‘In the case of the perception of the chair, there is a thesis — that is, the

apprehension and affirmation of the chair as the in-itself which consciousness

is not’ (B&N: 140).

Perception involves, for Sartre, positing the seen object as present and existing; the

thetic component of perception, that is, represents the object posited as present and existing.


Sartre calls the components of the thetic component of awareness ‘determinations’:

determinations are the category headings that the thetic component of a consciousness

classifies its object under; they are the way the object is intended. The thetic component of

perceptual experience, according to Sartre, ascribes the determinations ‘present’ and ‘existent’

to its object. But the thetic character of perceptual experience is by no means restricted to

this. There are, for Sartre, two further varieties of determination that can be involved in the

thetic component of a perceptual experience.

The first variety tracks what Sartre calls the ‘qualities’ of the object. Perceiving a pool,

for example, involves awareness of qualities such as ‘[t]he fluidity, the tepidity, the bluish

colour, the undulating restlessness of the water in the pool’ (B&N: 186). Each of these

qualities of the pool may or may not be referred to in the thetic component of my awareness

of the pool. If the pool is seen as fluid, blue, or restless, then these are the determinations

ascribed to the pool in my experience. The various qualities of the object are undifferentiated

in experience unless the experience contains corresponding determinations (see B&N: 10,

188). Sartre often talks of the thetic component of experience in terms of the ‘intentions’ of

that experience: the determinations are the way in which the object is intended, the way it is

presented in intentional experience. Using this terminology, he explains that in any given

visual perception there are intentions that are ‘motivated’ by the seen qualities of the object,

and others that are not (B&N: 26-7); some thetic determinations track the object’s qualities,

others do not.

The second variety of determination is of more interest to Sartre, for determinations of

this sort are motivated by the aims and projects of the perceiver. This claim is at the heart of

Sartrean existentialism, since it grounds the claim that an individual’s project provides the lens

through which that individual experiences being in-itself as a world of tools, obstacles, and

values. My attempt to realise one of my possibilities is partly responsible for the way reality

seems to me: ‘this projection of myself toward an original possibility … causes the existence

of values, appeals, expectations, and in general a world’ (B&N: 39); ‘perception is in no way

to be distinguished from the practical organisation of existents into a world’ (B&N: 321). The

‘world’, for Sartre, is not the mass of being in-itself but the complex of instruments and values

that appears to consciousness (B&N: 24, 139, 617-8). Mere chunks of being in-itself are thus

experienced as tools or obstacles, as themselves having ‘potentialities’ in relation to my

projects: ‘the order of instruments in the world is the image of my possibilities projected into

the in-itself; that is, the image of what I am’ (B&N: 292). This is what Sartre refers to as ‘the

potentializing structure of perception’ (B&N: 197): the fact that being in-itself is perceived as

a world of tools, obstacles, and values relating to my projects (see also B&N: 199). And this

relation between projects and the experienced structure of the world is captured in the key

existentialist term ‘situation’. Sartre introduces this term in his discussion of the project of

looking through a keyhole to observe a scene, claiming that ‘there is a spectacle to be seen

behind the door only because I am jealous, but my jealousy is nothing except the simple

objective fact that there is a sight to be seen behind the door’ (B&N: 259). Being jealous and

experiencing being in-itself as structured in this way are one and the same. This ‘situation’ is

the combination of facts about the environment, such as the existence of a door with a

keyhole, with facts about my aims and projects, such as my wish to see the scene beyond the

door: a situation always involves determinations imposed by the projects of the situated

individual as well as those that track the qualities of the individual’s immediate environment.


The thetic component of experience, and the two varieties of determination it involves,

is what Sartre is alluding to when he describes focusing on an object as making it ‘the object of

a detailed attention’ (B&N: 95). The term translated as ‘detailed’ is ‘circonstanciée’, which

implies appropriateness to the circumstances. Both varieties of determination are ideally

appropriate to the circumstances, but can fail to be. The determination ‘clear’ is appropriate

to a glass of water in that it refers to a manifest quality of the object, but if I am thirsty the

determination ‘inviting’ is also appropriate to the object in a way that it would not be if the

glass was empty. So when Sartre talks of only the first sort of determinations being

‘motivated’ by the qualities of the object (B&N: 27), he is best understood as claiming that

only the first sort are motivated purely by the qualities of the object: the second sort are

motivated by qualities plus the seer’s aims and projects. Sartre uses the terms ‘positional’ and

‘thetic’ co-extensively, then, not simply because the two aspects of experience that they pick

out are conflated by Hegel and Husserl under the concept of positing. More importantly, it is

because the thetic character of an experience is the characterisation of the object singled out,

and the positional character is the singling out of the object: the thetic character refers to

whatever is posited, and so is not independent of the positional character.

BAD FAITH AND THE PARADOXES OF SELF-DECEPTION

Bad faith, in Sartre’s philosophy, is the deliberate and motivated project of concealing some

unpleasant truth from oneself. It is not simply a mistake, but a form of self-deception. And

herein lies a theoretical problem: the idea of deceiving oneself seems to generate two logical

paradoxes, as Sartre is aware (B&N: 43, 47-54). Sartre employs his notion of non-thetic

awareness to provide a non-paradoxical account of bad faith. Before going on to show that

non-thetic awareness can play this role only if it is understood as the nonconceptual content

currently discussed by anglophone philosophers, I will explain the paradoxes of self-deception

in order to clarify the role of non-thetic awareness in bad faith.

The two paradoxes of self-deception arise from the fact that deception is not an honest

mistake: the deceiver deliberately inculcates in the deceived a false belief. The first paradox of

self-deception concerns the self-deceiver’s awareness of the unpleasant fact to be concealed.

In ordinary deception, the deceiver must be aware of the truth to be hidden from the deceived,

but the deceived cannot be aware of this truth if the deception is to succeed. So the self-deceiver,

it seems, must both be aware of the truth and not be aware of the truth, which seems to be a

contradiction. The second arises from the self-deceiver’s awareness of the intention to deceive:

the deceiver must be aware of this for the deception to be deliberate, but the deceived must

not be aware of it if the deception is to succeed. So the self-deceiver, it seems, must both be

aware of the intent to self-deceive and not be aware of the intent to self-deceive, which seems

to be a contradiction. If Jeffrey Archer tries to deceive us over his dealings with prostitutes,

then he must be aware of the truth about his dealings with prostitutes and be aware of his

intention to deceive. If his deception is to succeed, we must not be aware of either of these

things. But if he is to deceive himself, then he needs to be both aware and unaware of the truth

and both aware and unaware of his intention, which seems to involve two contradictions.

Some philosophers point out that the paradoxes arise only if self-deception is thought

of as a state in which the self-deceiver must believe and not believe the same things at the

same time. If self-deception is instead thought of as a process or a scattered event, the thought


runs, then successful self-deception involves forgetting the truth and the intention to deceive

by the time the new belief is in place.5 Jeffrey could happily deceive himself, so long as he

forgot about the prostitutes and about his intention to deceive. This suggestion, however, will

not do for Sartre’s purposes. Sartre is concerned with cases in which the evidence against the

self-induced belief is continuous or at least frequent, yet the self-deceiver persists in holding

that belief. The anguish that continually reveals our freedom and responsibility, for example,

does not prevent people from deceiving themselves into believing that they are not free and

responsible (see B&N: 43). In such cases, the self-deceiver must continue to intend to hide the

truth. ‘When reality (or memory) continues to threaten the self-induced belief of the selfdeceived,

continuing motivation is necessary to keep the happy thought in place.’6

In order to maintain the claim that one can be continually motivated to deceive oneself,

we must distinguish two senses in which one can be aware of the truth and of one’s intention,

so that the paradoxes can be resolved by the claim that the self-deceiver is aware of these

things in one sense, and unaware of them in another. The mind could be divided, for example,

between conscious and unconscious mental activity, which would allow Jeffrey to deceive

himself by allowing him unconscious awareness of his dealings with prostitutes and his

intention to deceive, but no conscious awareness of these things. Sartre, however, wishes to

retain the unity and integrity of the conscious subject, and thinks that the distinction between

conscious and unconscious fails to maintain this unity, but also that it fails for other reasons

(see B&N: 47-54).

Instead of distinguishing conscious from unconscious mental activity, Sartre

distinguishes thetic from non-thetic awareness, allowing Jeffrey to deceive himself so long as

he has non-thetic awareness of his dealings with prostitutes and his intention to deceive, but

no thetic awareness of them. We will see later on that this distinction is not so different from

Freud’s distinction between conscious and unconscious mental activity as Sartre seems to

think. But first I will argue that Sartre’s distinction should be construed as the distinction

between conceptual and nonconceptual representational content.

REPRESENTATIONS AND CONCEPTS

It may seem odd to claim that Sartre’s distinction between thetic and non-thetic should be

construed as a distinction between two types of representational content, given that Sartre

declared that ‘[r]epresentations … are idols invented by the psychologists’ (B&N: 125), and

that:

‘All consciousness … is consciousness of something. This means that there is

no consciousness that is not a positing of a transcendent object, or, if you

prefer, that consciousness has no “content” … A table is not in consciousness

– not even in the capacity of a representation. A table is in space, beside the

window, etc.’ (B&N: xxvii)

The term ‘representational content’, however, no longer has the meaning in philosophical

parlance that it did when Sartre wrote Being and Nothingness. Sartre is opposed to any theory

of mind that holds us to be aware only of private images that represent the real world

beyond.7 When Sartre writes disparagingly of ‘representations’ and ‘contents’, he means


purported private, subjective objects of awareness, which have been variously dubbed ideas,

impressions, sensa, sensations (see B&N: 314-5), sense data, and percepts. But philosophers

who deny the existence of such entities continue to talk of conscious events representing

aspects of the world, and distinguish this from the claim that conscious events involve

awareness of the representation. Rather, current anglophone ‘intentionalist’ theorists argue, to

be in a mental state that represents a certain object is to be aware of that object, not to be

aware of the representation.8

This is not to say that Sartre’s theory of the intentionality of consciousness is to be

understood in terms of the current anglophone notion of intentionality, merely to point out

that Sartre’s statements opposing the notions of mental representation and mental content do

not preclude describing his position in terms of those notions as they are employed in current

anglophone philosophy: the terms have shifted meanings since Sartre used them. To say that a

mental state or event has representational content, these days, is just to say that it picks out

an object or state of affairs and presents it in some way or other. You might believe that the

cat is on the mat, in which case your belief represents the cat being on the mat. This does not

require awareness of a representation of a cat and a mat. It is quite clear that Sartre believes

consciousness to classify its objects, as cat and mat for example, rather than simply present

their bare sensuous properties like colour and shape (see ‘‘Thetic’ and ‘Positional’’, above),

and this classification is part of representation.

Anglophone philosophers distinguish two kinds of representational content: those that

are composed of concepts and those that are not. A concept is an inferentially relevant

constituent of a representation, and possessing a concept consists in having a set of

inferentially related representations with a common constituent. Possessing the concept ‘cat’,

for example, consists in possessing a set of inferentially related representations concerned

with cats, such as the beliefs that cats are domestic pets, are tame, and are smaller than

houses. A representation is conceptual, then, only if it is composed of concepts, which means

that it stands in inferential relations to a set of other representations possessed by the same

organism. A nonconceptual representation, on the other hand, does not stand in inferential

relations. It can be possessed by an organism that does not possess the concepts required to

express that representation.9 A nonconceptual representation is a representation: it specifies a

possible state of affairs. But it is independent of what Wilfrid Sellars called ‘the logical space

of reasons’: it cannot be inferred from other mental representations, other mental

representations cannot be inferred from it, and it cannot be linguistically articulated, say in

response to a question.10 An intention, for example, would consist in a nonconceptual

representation if it consisted in a state of the brain or mind that specified a possible future

state of affairs as something to be aimed for, but was independent of the logical space of

reasons.

I am going to argue that Sartre’s distinction between thetic and non-thetic awareness

should be understood as a distinction not between representational and non-representational

forms of awareness, but between conceptual and non-conceptual representational awareness if

it is to play the role in bad faith that Sartre ascribes to it.

NON-THETIC AWARENESS AS NONCONCEPTUAL CONTENT

Motivated Aversion

page 7 of 11

Bad faith, as we have seen, is a form of self-deception, and so seems to require awareness of

the unpleasant fact to be hidden and of the intention to deceive along with simultaneous lack

of awareness of these things. Sartre attempts to avert these paradoxes without distinguishing

conscious from unconscious awareness by distinguishing thetic from non-thetic awareness: the

self-deceiver has non-thetic awareness of the awful truth and of the intention to deceive, but

has no thetic awareness of these things.

It is clear, then, that non-thetic awareness must be a form of representational

awareness: unless the awful truth is classified as unpleasant, then its unpleasantness cannot

motivate the self-deception, and unless the intention is classified as an intention to deceive,

then the intention cannot be acted on. The aversion to the truth and the carrying out of the

intention cannot be explained by an awareness of the truth and intention that does not classify

them, for such awareness would not be distinguishable from awareness of a pleasant truth and

an intention to be honest.

But this non-thetic awareness must also be nonconceptual, for otherwise it would

stand in rational relations to other conceptual representations had by the self-deceiver. In

particular, the self-deceiver would not be capable of believing the opposite of the awful truth

while having conceptual representations of the truth of the awful truth and of the intention to

self-deceive: the contradiction would simply be obvious. But only conceptual representations

are inferentially and rationally linked to one another (that is what makes them conceptual

representations), so only conceptual representations can contradict or be contradicted. If non-thetic

awareness is understood as involving

inferential or rational relations to explicit and articulable beliefs and so cannot threaten the

subject’s cognitive ignorance of the thing to be avoided or cognitive assent to the happy

thought.

This distinction between conceptual and nonconceptual content can help to resolve a

seeming contradiction in Sartre’s account of non-thetic awareness of one’s current activities.

He claims that when engaged in a project such as counting cigarettes, non-thetic consciousness

allows one to answer the question ‘what are you doing?’, yet he also claims that ‘children who

are capable of making an addition spontaneously cannot explain subsequently how they set

about it’ (B&N: xxix). Why is it that non-thetic awareness allows one to report current

activities but not how they are carried out? If non-thetic awareness is understood as involving

nonconceptual content, then this seeming contradiction can be resolved. The awareness does

not stand in the space of reasons, so one cannot form linguistically articulable beliefs on the

basis of it: it allows one to be aware of what one is doing without being able to explain how it

is happening. But also, nonconceptual awareness may be responsible for an action feeling

appropriate or inappropriate to a conceptually formed intention: if I conceptually intend to

count cigarettes, then proceed to do so, the non-thetic awareness of my activity may be an

awareness of the appropriateness of my activity given the initial intention.11 If the initial

intention is not itself conceptually structured, of course, then no linguistically articulable

belief about it can be formed.

This is what I take Sartre to mean by describing consciousness as ‘translucent’

(translucide) as opposed to ‘transparent’ (transparent). The difference between transparency

and translucency is best illustrated by the difference between an ordinary window and one

made of frosted glass; as Larousse puts it, a translucide body diffuses light so that objects

‘are not clearly visible’ (ne sont pas visible avec netteté) through it.12 For consciousness to be

translucent, then, is for there to be some awareness of one’s own consciousness, but for this awareness not to be

structured in the right way to allow the formation of articulable beliefs about that

consciousness.

This reading of Sartre also clears him of the charge that he ‘unjustifiably ignores a

number of different forms of knowledge’, concentrating too narrowly on conceptual or

propositional knowledge and overlooking such forms of knowledge as the unarticulable know-how

required for successful action.13 When Sartre talks of knowledge revealing truths to us

‘with an orientation in relation to other truths, to certain consequences’ (B&N: 155), he can be

understood as describing conceptually structured knowledge, the kind of knowledge displayed

in response to questions. Know-how, on the other hand, should be understood as involving

nonconceptual content. Sartre’s limitation of the scope of ‘knowledge’ to the conceptual realm

can then be seen simply as stipulating a use of the term, rather than denying the existence of

know-how.14

If I am right that non-thetic awareness should be understood as involving

nonconceptual content, then Sartre’s theory of mind is much closer to Sigmund Freud’s than

Sartre seems to have understood: the Freudian unconscious consists of representations that

are not rationally related either to each other or to conscious, cognitive beliefs about reality,

and are thereby nonconceptual. This is why they are not easily reportable or captured in

rational thought. But there is more to the Freudian unconscious than its nonconceptual

structure. Freud thought that the representations in the unconscious were charged with

‘cathectic energy’, which drove them to attempt to find expression in clear consciousness.15

This theory of cathectic energy, the theory of ‘blind forces’ that Sartre is opposed to (B&N:

52), however, is not entailed by the theory of nonconceptual content, and so can be

consistently rejected by Sartre.

CONCLUSIONS

Self-deception is a seemingly paradoxical notion, as it seems to require awareness of a truth to

be concealed and of the intent to conceal it whilst also requiring ignorance of these things.

Since Sartre’s bad faith is a form of self-deception, his account of the structures of bad faith

needs to show how this is possible. I have argued that Sartre’s account can succeed only if

non-thetic awareness is understood as involving nonconceptual content: the undesirable fact is

classified as undesirable and the intention to conceal it is classified as an intention to conceal

it, but these classifications are not structured in such a way that articulable beliefs can be

inferred from them or in such a way as to stand in the rational relation of contradiction to the

pleasing belief the self-deceiver engenders.

This construal of Sartre’s distinction between thetic and non-thetic awareness draws

out the need to understand his theory of consciousness in terms of both conceptual and

nonconceptual aspects. Sartre’s ‘conscious’ cannot be equated with ‘conceptual’, even though

Freud’s use of ‘conscious’ can. Moreover, this construal of non-thetic consciousness does not

conflict with Yiwei Zheng’s innovative account of Sartre’s notion of non-positional awareness

as a feel that feels to be what it is, as opposed to a positional consciousness that reveals

something other than itself. My claim is simply that such a feel, if Zheng is right about this,

must involve nonconceptual representation if it is to play the role Sartre wants it to play in

bad faith.16


My account of Sartre’s distinction between thetic and non-thetic awareness in terms

of the distinction between conceptual and nonconceptual content also has ramifications for the

use of Sartre’s work as a resource in current debates in anglophone philosophy of mind:

Sartre’s use of the notion of non-thetic awareness in the motivation of bad faith marks out a

distinctive contribution to the theory of action as well as providing an innovative dissolution

of the paradoxes of self-deception. There may also be potential contributions to anglophone

epistemology and the philosophy of perception to be made through disentangling Sartre’s

dense discussion of the relation between ‘determinations’ and ‘qualities’ in perception, and his

theory of the formation of determinations, in the light of this distinction between conceptual

and nonconceptual content (see B&N: 186-204).17

Jonathan Webber


NOTES

1 See Edmund Husserl, Logical Investigations second edition, translated by J. N. Findlay,

London: Routledge and Kegan Paul, 1970; Ideas Pertaining to a Pure Phenomenology and to a

Phenomenological Philosophy, First Book: General Introduction to a Pure Phenomenology,

translated by F. Kersten, The Hague: Martinus Nijhoff, 1982; Cartesian Meditations

translated by Dorion Cairns, The Hague: Martinus Nijhoff, 1950.

2 See G. W. F. Hegel, The Encyclopaedia Logic, translated by T. F. Geraets, W. A. Suchting,

and H. S. Harris (Indianapolis: Hackett Publishing Co., 1991). See especially p. 352, n. 39.

3 Jean-Paul Sartre, Being and Nothingness, translated by Hazel E. Barnes (London: Methuen, 1958).

4 Jean-Paul Sartre, The Psychology of Imagination, translation anonymous (London:

Methuen, 1972).

5 See David Pears, ‘Motivated Irrationality’, in Philosophical Essays on Freud, edited by

Richard Wollheim and James Hopkins (Cambridge: Cambridge University Press, 1982); Roy

Sorensen, ‘Self-Deception and Scattered Events’, Mind 94, no. 373, January 1985.

6 Donald Davidson, ‘Deception and Division’, in The Multiple Self, edited by Jon Elster

(Cambridge: Cambridge University Press, 1985), p. 90.

7 See, for example, David Hume, An Enquiry Concerning Human Understanding in Enquiries

Concerning Human Understanding and Concerning the Principles of Morals, edited by L. A.

Selby-Bigge and P. H. Nidditch (Oxford: Clarendon, 1975), § 12, or Bertrand Russell, The

Problems of Philosophy (Oxford: Oxford University Press, 1912), chapters 1-2.

8 See John Searle, Intentionality, (Cambridge: Cambridge University Press, 1983), pp. 57-61.

9 See Tim Crane, ‘The Nonconceptual Content of Experience’, in The Contents of Experience

edited by Tim Crane (Cambridge: Cambridge University Press, 1992); Robert Brandom,

Making It Explicit: Reasoning, Representing, and Discursive Commitment (Cambridge MA:

Harvard University Press, 1994), pp. 88-9; John McDowell, Mind and World (Cambridge

MA: Harvard University Press, 1994), ch. 1.

10 Wilfrid Sellars, ‘Empiricism and the Philosophy of Mind’, in Minnesota Studies in the

Philosophy of Science volume 1: The Foundations of Science and the Concepts of Psychology

and Psychoanalysis, edited by Herbert Feigl and Michael Scriven (Minneapolis: University of

Minnesota Press, 1956), pp. 298-9.

11 For more on the relation between nonconceptual content and the feeling of

appropriateness, see my ‘Doing Without Representation: Coping with Dreyfus’,

Philosophical Explorations 5, no. 1, January 2002.


12 Grand Larousse Universel, 1994 edn., s.v. 'Translucide' (15: 10365). Sartre does use the

term ‘transparency’ (transparence) once in Being and Nothingness (B&N: 164), but since this

repeats a point which Sartre first made using the term ‘translucidité’ (B&N: 103), this use of

‘transparence’ should be considered a slip of the pen or a printer's error, not taken as

indicating a commitment to the transparency (as opposed to translucency) of consciousness.

See also Phyllis Sutton Morris, ‘Sartre on the Self-Deceivers’ Translucent Consciousness’,

Journal of the British Society for Phenomenology 23, no. 2: 103-119, 1992.

13 See David A. Jopling, ‘Sartre’s Moral Psychology’, in The Cambridge Companion to

Sartre, edited by Christina Howells (Cambridge: Cambridge University Press, 1992), pp. 124-

14 For a more detailed discussion of know-how and nonconceptual content, see my ‘Doing

Without Representation: Coping With Dreyfus’, op. cit. n. 11.

15 Sigmund Freud, ‘The Unconscious’, in The Standard Edition of the Complete Psychological

Works of Sigmund Freud, translated and edited by James Strachey, Anna Freud, Alix

Strachey, and Alan Tyson, volume 14 (London: The Hogarth Press and the Institute of

Psycho-Analysis, 1957), pp. 186-7.

16 Yiwei Zheng, ‘On Pure Reflection in Sartre’s Being and Nothingness’, Sartre Studies

International 7, no. 1, 2001. This construal of non-thetic awareness does, however, undermine

the exegetical aspect of Fiona Ellis’s otherwise interesting article discussing Sartre’s theory of

consciousness and world in the light of John McDowell’s theory of the relation between

conceptual thought and reality, in her ‘Sartre on Mind and World’, Sartre Studies

International 6, no. 1, 2000.

17 I am grateful to Andy Leak, Sarah Richmond, and an anonymous reviewer for this journal

for comments on earlier drafts of this material.

Self-Deception and Emotional Coherence

BALJINDER SAHDRA and PAUL THAGARD

University of Waterloo, Department of Psychology, Faculty of Arts, 200 University Avenue West,

N2L 3G1, Waterloo, Ontario, Canada

Abstract. This paper proposes that self-deception results from the emotional coherence of beliefs

with subjective goals. We apply the HOTCO computational model of emotional coherence to simulate

a rich case of self-deception from Hawthorne’s The Scarlet Letter. We argue that this model

is more psychologically realistic than other available accounts of self-deception, and discuss related

issues such as wishful thinking, intention, and the division of the self.

Key words: coherence, desire, emotion, self, self-deception, simulation, wishful thinking

1. Introduction

Skeptics such as Paluch (1967) and Haight (1980) think that the very notion of self-deception

is implausible. However, there is empirical evidence that self-deception

is not only possible but also highly pervasive in human life. It accounts for positive

illusions of opponents in battle and their belief that they will win (Wrangham,

1999). It is involved in denial in physical illness (Goldbeck, 1997). It has been

shown to account for unrealistic optimism of the self-employed (Arabsheibani et

al., 2000). It has been observed in traffic behavior of professional drivers (Lajunen

et al., 1996). And it has been shown to mediate cooperation and defection in a

variety of social contexts (Surbey and McNally, 1997).

What is self-deception? How do we deceive ourselves? Researchers have attempted

to answer these questions in various ways. Some thinkers argue that self-deception

involves a division in the self where one part of the self deceives the

other (Davidson, 1985; Pears, 1986; Rorty, 1988, 1996). Others, however, maintain

that such division is not necessary (Demos, 1960; Elster, 1983; Johnston, 1988;

McLaughlin, 1988; Rey, 1988; Talbott, 1995; Mele, 2001). Some consider self-deception

to be intentional (Sackeim and Gur, 1978; Davidson, 1985; Pears, 1986;

Rorty, 1988; Talbott, 1995), while others insist that it is non-intentional (Elster,

1983; Johnston, 1988; McLaughlin, 1988, 1996; Lazar, 1999; Mele, 2001). Some

think that self-deception is a violation of general maxims of rationality (Pears,

1982; Davidson, 1985), while others argue that self-deception is consistent with

practical rationality (Rey, 1988; Rorty, 1988).

We propose that self-deception can result from emotional coherence directed to

approach or avoid subjective goals. We will show this by modeling a specific case

of self-deception, namely that of Dimmesdale, the minister in Hawthorne’s The

Scarlet Letter (1850).

Minds and Machines. Kluwer Academic Publishers. Printed in the Netherlands.

The model has two parts, namely, a “cold” or non-emotional model and a “hot” or emotional model. The emotional aspect of self-deception

may be implicit in some accounts, but no one has brought it to the forefront of the

discussion. Two notable exceptions are Ronald de Sousa (1988) and Ariela Lazar

(1999). Our computational account is more precise than de Sousa’s. Lazar (1999)

argues that self-deceptive beliefs are partially caused by emotions whose effects

are not mediated by practical reasoning. Our account differs from Lazar’s in that

we show that self-deception can arise even in those cases in which self-deceivers

are highly motivated to be rational; the effects of emotions are just as mediated by

rational thought as the effects of rational thought are by emotions. In other words,

we will show that it is the interaction of cognitive and emotional factors that plays

the pivotal role in self-deception.

After giving our model, we will also compare it to two other models of self-deception,

namely Rey’s (1988) and Talbott’s (1995). We will argue that our model

is more psychologically plausible than their models.

2. What Is Self-Deception?

Self-deception involves a blind or unexamined acceptance of a belief that can easily

be seen as “spurious” if the person were to inspect the belief impartially or from the

perspective of the generalized other (Mitchell, 2000, p. 145). Consider an example

from Mele (2001, p. 26): Don deceives himself into believing the belief p that his

research paper was wrongly rejected for publication. Indubitably, there are two

essential characteristics of Don’s self-deception:

1 Don falsely believes p.

2 Either someone other than Don, or he himself at a later impartial examination,

observes (or accuses) that he is deceiving himself into believing p.

The two points are related. The reason we know that Don falsely believes p is

that an impartial examination of evidence suggests that Don ought not to believe p.

Most of the time, “the impartial observer”, to use Mele’s (2001) terms, is someone

other than the self-deceiver, but it is also possible that the self-deceiver herself may

realize the spuriousness of and self-deception involved in her beliefs, at a later

careful examination.

In 2 above, one might even use the term “interpretation” instead of observation

or accusation. Goldberg notes, “accusations of self-deception are only as strong as

the interpretation of [the] behavior” (Goldberg, 1997). Thus, one might say that

the model of Dimmesdale’s self-deception that we will present shortly is based on

our interpretation of his speech and beliefs as they are revealed in the narrative.

But we cannot have just any interpretation. The best interpretation is the one that

is consistent with all the available information.

The minimal description of self-deception that we have given above is not finely

tuned to distinguish self-deception from wishful thinking and denial. We will make

such distinctions after we give a much more precise account of self-deception in our

computational model. Given these remarks, we can begin the impartial or external

analysis of the self-deception of Dimmesdale, the minister in The Scarlet Letter.

3. Dimmesdale’s Self-Deception

Before we present our analysis, we must clarify that our purpose is not to morally

judge Dimmesdale. Some theorists hold that self-deception is intrinsically wrong in

that it is a sort of spiritual failure (Sartre, 1958; Fingarette, 1969). At the same time,

many philosophers also argue that self-deception is not always wrong, and may

even be beneficial; for example, see Rorty (1996) and Taylor (1989). Nevertheless,

in the era in which Hawthorne wrote The Scarlet Letter, it was taken for granted

that being dishonest with oneself was somehow always wrong. Hawthorne gives us

the famous advice: “Be true! Be true! Be true! Show freely to the world, if not your

worst, yet some trait whereby the worst may be inferred!” (Hawthorne, 1850, p.

260). For Hawthorne, self-deception, like hypocrisy, is morally wrong because self-deceivers

are largely out of touch with their true selves within them and mistakenly

place their trust in what the reader recognizes to be a false outer appearance (Harris,

1988, p. 21). We think that the issue of moral reprehensibility of self-deception is

important. However, our objective is much humbler in that we only aim to explain

how Dimmesdale deceived himself, and thus suggest a new way of conceiving of

self-deception.

The Scarlet Letter is particularly attractive for a study of self-deception because

of the richness of detail of the self-deceiving characters in it. This degree of detail

is invariably missing in the philosophical literature on self-deception; the most

commonly cited case is Elster’s (1983) example of sour grapes. In The Scarlet

Letter, almost everybody is a hypocrite. More importantly, Hawthorne’s hypocrites

are almost always self-deceivers as well. The reader easily infers that from

the narrative. On one occasion, however, the narrator explicitly informs us that

Dimmesdale, the minister, had deceived himself; and later, Dimmesdale himself

unfolds his character to reveal the complexity of his self-deception. See Figure 1

for a pictorial analysis of Dimmesdale’s self-deception. The solid lines in the figure

represent positive associations, and the dotted lines represent negative associations.

The narrator informs us that Dimmesdale deceives himself when he tells himself

that his satisfaction at knowing he can still preach the Election Day sermon before

running off with Hester arises from his wish that people will think of him as an

admirable servant of the Church. He says to Hester: People “shall say of me that

I leave no public duty unperformed, nor ill performed” (Hawthorne, 1850, p. 215).

“Sad, indeed,” the narrator tells us, “that an introspection so profound and acute

as this poor minister’s should be so miserably deceived!” (p. 215) Dimmesdale

deceives himself in not wanting to have it said that he failed to carry out his “public

duty,” even if it were to be revealed almost immediately, as it obviously would, to

the same public that his dutiful service to them was sheer hypocrisy. In other words,

his self-deception consists in believing that he can fill his sacramental office and

still be hypocritical.

Figure 1. ECHO analysis of Dimmesdale’s self-deception. Solid lines represent positive associations. Dotted lines represent negative associations.

Dimmesdale deceives himself in believing that he is no more hypocritical now

than he has been for seven years. Previously, he has known that he has been

hypocritical because of his hidden sinfulness. However, at one point during his

conversation with Chillingworth, he concedes that a priest who knows he is guilty

may still have the obligation to continue as a priest, and in that sense he would not

necessarily be a hypocrite. He claims that some people, “by the very constitution

of their nature,” may be forced to live with the “unutterable torment” of hidden

guilt and still carry out ministerial, even sacramental functions (p. 132). Therefore,

as Harris puts it, “in the past, Dimmesdale has been a good clergyman not only

in spite of his hidden guilt and his consciousness of his hypocrisy, but precisely

because of those very factors — because of his excruciating situation” (Harris,

1988, p. 84). He has been a hypocrite because he has allowed people to think that

he is a saint; but his motive in doing that has been to fulfill his duty. Thus, he was

a good clergyman in spite of his hypocrisy because his motives were selfless.

The situation is different however, when he deceives himself. His motives have

changed and he deceives himself in believing that his motives are the same. This

time, his motive is to pass himself off as righteous. His main concern is that

people say of him that he “leaves no public duty unperformed, nor ill performed!”

(Hawthorne, 1850, p. 215). In the past, Dimmesdale has been a hypocrite but still a

good clergyman. Now, he is a hypocrite and a bad clergyman because his motives

are selfish. He believes that he can still sincerely preach about his suffering even

though he rejects his suffering now. In short, he deceives himself in believing that

he is a good clergyman now as he was in the past.

The fact that Dimmesdale’s self-deception is intertwined with his hypocrisy

causes him much confusion. Harris thinks that his self-deception is of the “deepest,

most unconscious sort, compounded by deliberate hypocrisy, and the prognosis

calls for an ever-increasing confusion of identity” (Harris, 1988, p. 75). The narrator

in the novel describes this complexity: “No man, for any considerable period,

can wear one face to himself, and another to the multitude, without getting bewildered

as to which may be the true” (Hawthorne, 1850, pp. 215–216). In the

chapter, “The Minister in a Maze,” as Dimmesdale’s identity unravels, his impulses

seem to be “at once involuntary and intentional: in spite of himself, yet

growing out of a profounder self than that which opposed the impulse” (p. 217).

This “profounder self” incites him into blaspheming in front of a deacon; laughing

at the good and holy deacon; promulgating atheism to a helpless old lady who has

nothing but faith in God to sustain her; teaching “some very wicked words” to a

few kids playing on a street; and giving a “wicked look” to a young girl with a

spiritual crush on him.

His “profounder self” has much to do with his sexuality as manifested in three

things: First, his impulsive affair with Hester; second, his likely relations with “the

many blooming damsels, spiritually devoted to him” from his congregation (p.

125). Third, as mentioned previously, when he meets one young girl while he is

lost in his private “maze,” the reader is informed, “that the minister felt potent to

blight all the field of innocence with but one wicked look” (p. 220).

It is important to note that Dimmesdale is able to end his self-deception. This

will factor significantly in our model, as we will explain later. Throughout the

novel, Hawthorne is very sarcastic and scornful of Dimmesdale for his hypocrisy

and self-deception, but in the end he makes him appear as a kind of a hero or saint.

Dimmesdale, while giving the sermon, “stops deceiving himself into thinking that

he could preach about his suffering at the same time he was planning to reject his

suffering” (Harris, 1988, p. 86). He preaches in the same spirit as before and to

the same effect. Thus, he returns to his earlier state of hypocrisy. He escapes his

hypocrisy at the time of his death when he declares it in front of all the people of his

congregation. The reason he is able to escape his self-deception and his hypocrisy

is that he knows, more than anybody else, that he is unworthy of redemption.

4. General Description of Our Model of Dimmesdale’s Self-Deception

We used the simulators ECHO and HOTCO 2 to computationally model Dimmesdale’s

self-deception. The two sections following this one contain the detailed

descriptions of these simulators. In this section, we give a general description of

our model.

The model has two parts: (1) Cold Clergyman, a cold or emotionless explanation,

and (2) Hot Clergyman, an emotional explanation. The first part is the test

to see what Dimmesdale would believe given the evidence. This experiment would

serve as a rational baseline. In other words, this would be the impartial or external

observation of the situation. The second part is the model of what he does believe

in spite of the evidence, given his goals and emotional preferences.

In the first experiment, Cold Clergyman, the input is simply the observed propositions

(that is, the evidence), and the negative and positive associations between

propositions (see Figure 1). After the experiment is run, we expect that Dimmesdale

would believe the belief-set A:

1 I am a bad clergyman.

2 I will give my sermon in bad faith.

3 I cannot preach about my suffering.

4 I am a hypocritical, selfish, and righteous minister.

In the second experiment, Hot Clergyman, in addition to the evidence, propositions,

and all the associations, he is given two goals: (1) approach redemption,

and (2) avoid damnation. Also, he is given ‘likes’ and ‘dislikes’, based on whether

a proposition has negative or positive emotional valence (see Table 1).

Table I. Dimmesdale’s likes and dislikes

Propositions Goals Likes Dislikes

1 I am a good minister.

2 I am a bad minister.

3 I deserve redemption. Approach redemption

4 I deserve damnation. Avoid damnation

5 I will give my sermon in good faith.

6 I will give my sermon in bad faith.

7 I can preach about my suffering in spite of my

hypocrisy.

8 I am hypocritical but selfless.

9 I always perform my duty.

10 I cannot preach about my suffering.

11 I am a hypocritical, selfish, and righteous minister.

12 I reject my suffering.

13 I am blasphemous.

14 I am hypocritical.

15 I am selfish.

16 I have sinned.

17 I preach people not to sin, yet I sin myself.

18 I had an affair with Hester. (E)

19 I had feelings for a young girl. (E)

20 I had relations with a lady. (E)

21 I want people to think of me as a great minister. (E)

22 People will discover my guilt.

23 Chillingworth knows about my affair with Hester. (E)

24 I plan to run away with Hester. (E)

25 I wanted to laugh at a holy deacon. (E)

26 I wanted to utter blasphemous words in front of a

deacon. (E)

27 I wanted to teach wicked words to kids. (E)

28 I wanted to promulgate atheism to an old lady. (E)

29 I have an argument against the immortality of soul. (E)

(E) = Evidence.

For example, he likes being a good clergyman and dislikes being a bad clergyman. In this

experiment, he should be able to deceive himself into believing the belief-set B:

1 I am a good clergyman.

2 I will give my sermon in good faith.

3 I can preach about my suffering in spite of my hypocrisy.

4 I am hypocritical but selfless.

Cold Clergyman is run in the explanatory coherence program, ECHO. Hot Clergyman

is run in the emotional coherence program, HOTCO 2. We devote the

following two sections to describing ECHO and HOTCO 2 in detail.
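The difference between the two runs can be summarized as plain data. The proposition names and the dictionary layout below are assumptions introduced for exposition; they are not the published ECHO or HOTCO input:

```python
# Hypothetical encoding of the Cold and Hot Clergyman inputs.
# All names here are illustrative assumptions, not the actual input files.

cold_input = {
    # Observed propositions, given data priority.
    "evidence": ["affair_with_Hester", "plans_to_flee_with_Hester",
                 "wants_admiration"],
    # Positive associations between propositions (solid lines in Figure 1).
    "excitatory": [("bad_clergyman", "sermon_in_bad_faith"),
                   ("selfish", "bad_clergyman"),
                   ("wants_admiration", "selfish")],
    # Negative associations (dotted lines in Figure 1).
    "inhibitory": [("good_clergyman", "bad_clergyman"),
                   ("sermon_in_good_faith", "sermon_in_bad_faith")],
}

# Hot Clergyman shares all of the above but adds goals and emotional
# valences, giving liked propositions support beyond the evidence.
hot_input = dict(
    cold_input,
    goals=["approach_redemption", "avoid_damnation"],
    likes=["good_clergyman", "sermon_in_good_faith", "deserve_redemption"],
    dislikes=["bad_clergyman", "sermon_in_bad_faith", "deserve_damnation"],
)
```

On the cold input, only evidence and associations drive acceptance, yielding belief-set A; on the hot input, the added valences are what make belief-set B winnable despite the same evidence.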

5. ECHO and Explanatory Coherence

ECHO is an implementation of the theory of explanatory coherence that can be

summarized in the following principles, discussed at length elsewhere (Thagard, 1992; Thagard, 2000):

Principle E1. Symmetry. Explanatory coherence is a symmetric relation, unlike,

say, conditional probability. That is, two propositions p and q cohere with

each other equally.

Principle E2. Explanation. (a) A hypothesis coheres with what it explains,

which can either be evidence or another hypothesis; (b) hypotheses that together

explain some other proposition cohere with each other; and (c) the more

hypotheses it takes to explain something, the lower the degree of coherence.

Principle E3. Analogy. Similar hypotheses that explain similar pieces of evidence

cohere.

Principle E4. Data priority. Propositions that describe the results of observations

have a degree of acceptability on their own.

Principle E5. Contradiction. Contradictory propositions are incoherent with

each other.

Principle E6. Competition. If P and Q both explain a proposition, and if P

and Q are not explanatorily connected, then P and Q are incoherent with each

other. (P and Q are explanatorily connected if one explains the other or if

together they explain something.)

Principle E7. Acceptance. The acceptability of a proposition in a system of

propositions depends on its coherence with them.
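Principles E2 and E5 do most of the structural work when a network is built. As a rough illustration (our own sketch in Python, not the actual LISP input syntax used by ECHO), explanation relations can be compiled into excitatory links whose weight is divided among co-explaining hypotheses, and contradictions into inhibitory links:

```python
def build_links(explanations, contradictions, excite=0.04, inhibit=-0.06):
    """explanations: list of (hypotheses, explained) pairs;
    contradictions: list of (p, q) pairs.
    Returns a dict of symmetric link weights keyed by sorted pairs."""
    weights = {}

    def add(i, j, w):
        key = tuple(sorted((i, j)))
        weights[key] = weights.get(key, 0.0) + w

    for hyps, target in explanations:
        w = excite / len(hyps)        # E2(c): more hypotheses, lower coherence
        for h in hyps:
            add(h, target, w)         # E2(a): hypothesis coheres with what it explains
        for h1 in hyps:
            for h2 in hyps:
                if h1 < h2:
                    add(h1, h2, w)    # E2(b): co-explaining hypotheses cohere
    for p, q in contradictions:
        add(p, q, inhibit)            # E5: contradictory propositions get inhibitory links
    return weights
```

On this construction, E2(c) falls out automatically: the more hypotheses share an explanation, the weaker each resulting link. The default weights here are illustrative, not the parameter values used in the authors' simulations.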

ECHO shows precisely how coherence can be calculated. Hypotheses and evidence

are represented by units, which are highly simplified artificial neurons that can have

excitatory and inhibitory links with each other. When two propositions cohere, as

when a hypothesis explains a piece of evidence, then there is an excitatory link

between the two units that represent them. When two propositions are incoherent

with each other, either because they are contradictory or because they compete to

explain some of the evidence, then there is an inhibitory link between them. Standard

algorithms are available for spreading activation among the units until they

reach a stable state in which some units have positive activation, representing the


acceptance of the propositions they represent, and other units have negative activation,

representing the rejection of the propositions they represent. Thus algorithms

for artificial neural networks can be used to maximize explanatory coherence, as

can other kinds of algorithms (Thagard and Verbeurgt, 1998; Thagard, 2000).
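To make the settling process concrete, here is a minimal Python sketch (our own illustration, not the authors' code; the update rule is the one given in the appendix, with decay 0.05 and activations bounded by [-1, 1]). Evidence units are held at activation 1, a simplification of data priority (E4), and the remaining units are updated until the network stabilizes:

```python
def settle(weights, n_units, evidence=(0,), d=0.05, amin=-1.0, amax=1.0, iters=100):
    """weights: dict mapping (i, j) pairs to symmetric link weights.
    Units listed in `evidence` are clamped at activation 1 (data priority)."""
    a = [0.01] * n_units              # small initial activations
    for e in evidence:
        a[e] = 1.0
    for _ in range(iters):
        new = a[:]
        for j in range(n_units):
            if j in evidence:
                continue
            # net input: sum of link weight times neighbor activation
            net = sum(w * a[i] for (i, k), w in weights.items() if k == j) + \
                  sum(w * a[k] for (i, k), w in weights.items() if i == j)
            if net > 0:
                new[j] = a[j] * (1 - d) + net * (amax - a[j])
            else:
                new[j] = a[j] * (1 - d) + net * (a[j] - amin)
            new[j] = max(amin, min(amax, new[j]))
        a = new
    return a

# Hypothesis H1 (unit 1) explains the evidence (unit 0, excitatory link);
# H2 (unit 2) contradicts H1 (inhibitory link). After settling, H1 ends up
# with positive activation (accepted) and H2 with negative (rejected).
acts = settle({(0, 1): 0.04, (1, 2): -0.06}, 3)
```

With these weights, `acts[1]` settles positive and `acts[2]` negative, which is the acceptance/rejection reading described above.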

6. HOTCO and Emotional Coherence

When people make judgments, they not only come to conclusions about what to

believe, but they also make emotional assessments. For example, the decision to

trust people is partly based on purely cognitive inferences about their plans and

personalities, but also involves adopting emotional attitudes toward them (Thagard,

2000, Ch. 6). The theory of emotional coherence serves to explain how people’s

inferences about what to believe are integrated with the production of feelings

about people, things, and situations. On this theory, mental representations such

as propositions and concepts have, in addition to the cognitive status of being

accepted or rejected, an emotional status called a valence, which can be positive or

negative depending on one’s emotional attitude toward the representation. For example,

just as one can accept or reject the proposition that Dimmesdale committed

adultery with Hester, one can attach a positive or negative valence to it depending

on whether one thinks this is good or bad.

The computational model HOTCO implements the theory of emotional coherence

by expanding ECHO to allow the units that stand for propositions to have

valences as well as activations. Valences are affective tags attached to the elements

in coherence systems. Valences can be positive or negative. In addition, units can

have input valences to represent their intrinsic valences. In the original version

of HOTCO (Thagard, 2000), the valence of a unit was calculated on the basis

of the activations and valences of all the units connected to it. Hence valences

could be affected by activations and emotions, but not vice versa: HOTCO enabled

cognitive inferences such as ones based on explanatory coherence to influence

emotional judgments, but did not allow emotional judgments to bias cognitive

inferences. HOTCO and the overly rational theory of emotional coherence that it

embodied could explain a fairly wide range of cognitive-emotional judgments involving

trust and other psychological phenomena, but could not adequately explain

Dimmesdale’s self-deception.

Thagard (forthcoming) altered HOTCO to allow a kind of biasing of activations

by valences. This version of the program, HOTCO 2, allows biasing for all

units. For instance, consider the proposition, “I will be redeemed.” This proposition

can be viewed as having an activation that represents its degree of acceptance or

rejection, but it can also be viewed as having a valence that corresponds to Dimmesdale’s

emotional attitude toward redemption. Since he deems redemption as of

great importance, this proposition has a positive valence. In HOTCO 2, therefore,


truth and desirability of a proposition become interdependent. Technical details

concerning explanatory and emotional coherence are provided in an appendix.

7. Results of Cold Clergyman and Hot Clergyman

As expected, in the cold experiments run in ECHO, the system yields acceptance

of all four propositions of the belief-set A:

1 I am a bad clergyman.

2 I will give my sermon in bad faith.

3 I cannot preach about my suffering.

4 I am a hypocritical, selfish, and righteous minister.

Also, the system rejects the belief-set B:

1 I am a good clergyman.

2 I will give my sermon in good faith.

3 I can preach about my suffering in spite of my hypocrisy.

4 I am hypocritical but selfless.

On the other hand, in the hot experiments run in HOTCO 2, given that the weight

of the input valence is equal to or greater than 0.06, the system successfully self-deceives;

that is, the belief-set B is accepted and A is rejected (except for proposition

4 in A). The ideal solution to model Dimmesdale’s self-deception, however, is

when the weight is 0.07. At this degree of input valence, Dimmesdale successfully

deceives himself into believing the belief-set B while he rejects the proposition that

he will be redeemed. In other words, self-deception occurs, but the proposition that

he will be redeemed has a negative activation, that is, it is rejected. Also, Dimmesdale

fully accepts that he has sinned. This is consistent with the novel, The Scarlet

Letter, in which Dimmesdale never denies that he has sinned and experiences much

pain due to his guilt. The result is also consistent with the fact that Dimmesdale is

able to escape his self-deception, and admit his sin in front of his congregation

before he dies with a cry for forgiveness. Thus, HOTCO 2 successfully models that

although Dimmesdale is trying to approach redemption while deceiving himself,

he never fully approaches it. This allows him to get out of his self-deception later

in the novel when he realizes that he can never be redeemed unless he escapes his

self-deception and hypocrisy.

8. Self-deception and Subjective Well-Being

According to our model, self-deception occurs via emotional biasing of people’s

beliefs, while people attempt to avoid or approach their subjective goals. This

account is consistent with Erez et al.’s (1995) psychological theory of subjective


Figure 2. The Psychological Causal Model of Self-deception. Adapted from Erez et al. (1995)

Figure 3. The two-way causal-link between self-deception and subjective well being.

well being according to which dispositional tendencies, such as, affective disposition

and locus of control, influence subjective well being through self-deception.

According to this theory, certain individuals tend to use self-deception in order to

maintain their happiness. Such individuals are either positively disposed or they

have high expectations of control. They tend to ignore failure, for instance, if they

are positively disposed. They unrealistically think that they control their environment,

if they have high expectations of control. Also, individuals who tend to

evaluate stimuli in a positive manner or tend to think they can control their environment

do so by actively searching for positive and desirable cues while denying

negative and undesirable ones (Erez et al., 1995) (see Figure 2).

Thus, focusing on the bottom section of Erez et al.'s hypothesized causal
model, two things may cause self-deception: affective disposition and locus of

control. However, we hypothesize that there is a two-way causal-link between

self-deception and subjective well being (see Figure 3).

We think that the causal-link is bi-directional because:

(a). There is evidence that self-deception causes subjective well being:


Self-deception is one of the mental mechanisms that increase the subjects’

positive assessments of situations (ignoring minor criticisms, discounting

failure, and expecting success) (Zerbe and Paulhus, 1987).

Self-deceivers continually distort daily events to build positive esteem (Paulhus

and Reid, 1991).

Self-deception improves the motivation and performance of competitors by
reducing their stress and bolstering their confidence through repressing their
knowledge of the self-interest and confidence of their opponents (Starek
and Keating, 1991).

(b). Also, subjective well being causes self-deception:

The positive esteem, when sufficiently strong, may act as a buffer to soften

the impact of negative information (Erez et al., 1995). Thus, having high

subjective well being may cause one to self-deceive by causing one to

ignore negative information.

When threatening or “harmful-to-the-self” information is presented to ego

enhancers, they turn to their assets and emphasize them to neutralize the

threat (Taylor, 1989). Thus, an enhanced ego can cause one’s self-deception

in that it neutralizes the influence of any evidence that diminishes ego.

In our model described in the previous sections, self-deception is directed toward

approaching or avoiding certain subjective goals, which presumably increase or

decrease subjective well being if approached or avoided. For instance, in Dimmesdale’s

case, the causal model can be depicted as shown in Figure 4, which

shows that Dimmesdale’s subjective well being depends on his prospect of being

redeemed. Being a good clergyman is essential for redemption. He may be disposed

to ignore any evidence that may suggest that he is a bad clergyman. This

is consistent with HOTCO 2 experiments in which the proposition that people will

know when he runs away with Hester is rejected. Also, he may have a false sense of

control that he will give his sermon in good faith, and make people believe that he

is a good minister. His false sense of control and his disposition to ignore certain

evidence cause his self-deception, which in turn causes his subjective well being,

which feeds back into his self-deception.

One might argue that the notion of cause is ambiguous or mysterious. We are

suggesting a way to disambiguate or demystify the causal relations involved in

self-deception by proposing that the mechanism of the causal relations is emotional

coherence. Thus, the causal links in self-deception may be as depicted in

Figure 4, but the way different causes lead to self-deception is through emotional

coherence. The successful modeling of Dimmesdale’s self-deception in HOTCO 2,

an implementation of emotional coherence, supports this conclusion.


Figure 4. Causal Model of Dimmesdale’s Self-deception. Adapted from Erez et al. (1995).

9. Wishful Thinking, Denial and Self-Deception

Wishful thinking is importantly different from self-deception. In wishful thinking

an agent believes whatever he or she wants. Elster (1983), Mele (2001), and Johnston

(1988) propose that at least some cases of self-deception can be explained in

terms of wishful thinking. Although it is important that the agent desires that p
in order to deceive herself into believing that p, we think that self-deception is not just

wishful thinking. In HOTCO 2, the valence input can be varied so as to make the

system less or more emotional. After a certain degree of valence input, the system

becomes so emotional that it models an agent who believes every single proposition

that he or she deems important. In a sense, emotions completely override reason.

At such a degree of emotional input, the model shows that Dimmesdale not only

believes that he is a good minister, he also believes that he will be redeemed. However,

he does not think of himself as worthy of redemption in the experiments in

which he successfully deceives himself into believing that he is a good clergyman.


Thus, in wishful thinking, people believe everything that they want to believe.

Self-deception, however, is a ‘weaker’ state in that we may successfully deceive

ourselves into believing something, but not everything that we wish to believe.

This claim is supported by psychological studies on motivated inference (Kunda,

1999; Sanitioso et al., 1990); psychologists have shown that our judgments are

colored by our motivations because the process of constructing justifications is

biased by our goals. Nevertheless, we cannot believe whatever we want to believe.

As Kunda (1999, p. 224) puts it, “Even when we are motivated to arrive at a particular

conclusion, we are also motivated to be rational and to construct a justification

for our desired conclusion that would persuade a dispassionate observer.” There

are constraints on what we can believe. In self-deception, we succeed in believing

some (false) beliefs but not in believing everything we want to believe. Because

some wishes remain unfulfilled, anxiety or internal conflict typically, though not necessarily,
accompanies self-deception (as we will discuss in a later section), but not

wishful thinking.

Denial is also different from self-deception in that it is a kind of direct lie

that self-deception is not. In denial, a person knowingly or consciously lies that

p. In self-deception, however, the person really believes that p. Both claim

something false, but in self-deception the correct belief (that not-p) is ‘held’ nonconsciously,

whereas in denial, it is believed consciously. Also, in addition to denial,

self-deception contains a very strong ego-enhancement component (Paulhus and

Reid, 1991). Thus, self-deception and denial are importantly different.

10. Debates

In the beginning of our paper, we mentioned the debates over whether self-deception

is intentional or not, and whether it involves a divided self or not. In this section we

briefly comment on these debates. We also discuss the issue of whether the desire

that p has to be “anxious” or not.

Regarding the issue of whether self-deception is intentional or not, we think

that the debate is misplaced. To the extent that Dimmesdale intends to be redeemed,

his self-deception can be seen as intentional. However, it would be absurd

to claim that he intends to have the emotional preferences that he does. He may

not have any control over his emotions at all. Emotional coherence occurs unconsciously.

In their classic experiment pioneering the psychological studies of

self-deception, Sackeim and Gur found that there were discrepancies in subjects’

conscious misidentifications and nonconscious correct identifications (as indicated
by Galvanic Skin Responses) of voices as their own or others’ (Sackeim and

Gur, 1979). They found that such discrepancies were a function of the subjects’ independent

manipulation of their self-esteem. (The subjects misidentified their own

voices as others’ when they thought ill of themselves, and others’ voices as their

own when they thought well of themselves.) We are proposing that the unconscious


mechanism behind self-deception is emotional coherence. The subjective goal of

redemption, in Dimmesdale’s case for instance, may be a conscious goal. However,

the goal is approached through nonconscious emotional coherence. Is having the

intent to be redeemed sufficient to call Dimmesdale’s self-deception intentional?

No, for self-deception is not just approaching or avoiding one’s goals. How the goal

is approached or avoided is crucially a part of self-deception. One may fully intend

to do whatever is necessary to achieve the desired goal, but at the same time, one

does not intend to achieve emotional coherence involved in the approach of the

goal. Therefore, until there is a good account of the relation between intentions

and emotions, we cannot decide whether self-deception is intentional or not.

On the issue of a possible division of self involved in self-deception, we think

that the issue arises from a misunderstanding of the notion of the self itself. Researchers

have mainly focused on the deception side of self-deception, while rarely

talking about the self of self-deception. What is the self that is deceived? In Dimmesdale’s

case, the narrator informs us of a tension between his “profounder self”

and his presentational or outer self. However, this does not imply that there is necessarily

a Freudian split in his self. It is not that Dimmesdale has two selves, self-A

and self-B, and that self-A deceives self-B. There is a sense in which the self has a

kind of continuity or oneness to it. However, the self is a “decentered, distributed,

and multiplex” phenomenon that is “the sum total of its narratives, and includes

within itself all the equivocations, contradictions, struggles and hidden messages

that find expression in personal life” (Gallagher, 2000, p. 20). It is because the self

is multiplex and devoid of any center, that self-deception is possible. Thus, if there

is a ‘split’ in self, it is not at one place, but all over the place, and even in the self

that does not deceive itself.

There is another debate worth paying attention to. Everybody agrees that in self-deception
the agent, say A, is motivationally biased to believe p. Mele (2001) holds

that the biasing role is played by A’s desire that p. However, following Johnston

(1988), Barnes (1997) insists that the desire that p must be anxious in that the

person is uncertain of the truth of p. We can easily think of Dimmesdale’s case

as involving the desire to be redeemed. There is no doubt that he wants to be
redeemed. We can also say that he desires to be a good clergyman, and successfully

deceives himself into believing that he is a good clergyman. Hawthorne makes it

clear that Dimmesdale’s self-deception causes him so much confusion that he experiences

profound identity crises. It appears that in Dimmesdale’s case his desire

is anxious. However, we are inclined to agree with Mele that in self-deception the

desire that p does not have to be an anxious desire.

There is good, although not conclusive, computational evidence from our model
that inclines us to say that Mele is probably right on this issue. We conducted

several hot experiments with varying degrees of emotional (valence) input in the

system. The general trend was that the greater the valence input, that is, the more

emotional the system, the easier (that is, faster) it was for the system to model self-deception.
This suggests that the ‘influence’ of emotions on the system was a matter


of degree. The same may be true in humans. There is psychological evidence to

suggest that self-deception is a dispositional tendency (Sackeim and Gur, 1979). It

is plausible to hypothesize that this tendency is due to emotions and that depending

on the degree to which people are emotional, they may be less or more disposed to

deceive themselves.

11. Our Model Compared to Other Computational Models of Self-Deception

Rey (1988) gives a computational model based on the distinction between “central”

and “avowed” attitudes of a self-deceiver. According to Rey, self-deception arises

due to the discrepancies between the two kinds of attitudes. However, as Rey correctly

notes, it is crucial that the discrepancies be motivated (p. 281). Otherwise,

the agent would be self-ignorant and not self-deceiving. What is missing in Rey’s

model is any detailed account of what plays the motivated biasing role essential

for self-deception. Our model shows that emotional coherence involving goals can

provide the necessary motivated biasing.

Another notable model is Talbott’s (1995) Bayesian model. Insofar as Talbott

bases his model on the assumption of the self as a practically rational Bayesian

agent, Talbott’s model inherits the problems of a probabilistic approach to human

thinking. The problems with probabilistic models of human thinking are discussed

at length in Thagard (2000, Ch. 8). Such accounts assume that quantities that

comply with the mathematical theory of probability can adequately describe the

degrees of belief that people have in various propositions. However, there is much

empirical evidence to show that human thinking is often not in accord with the notions

of probability theory (see, e.g., Kahneman et al., 1982; Tversky and Koehler,

1994). On the other hand, as discussed in detail in Thagard (2000), coherencebased

reasoning (on which our model is based) is pervasive in human thinking,

in domains such as perception, decision making, ethical judgments, and emotion.

Thus, our model is much more psychologically realistic than Talbott’s model. In

addition, Talbott fails to note the role of emotions in self-deception, whereas we

have shown that emotions play a pivotal role in this phenomenon.

12. Conclusion

We have given a detailed analysis of a complex case of self-deception, namely, that

of Dimmesdale in The Scarlet Letter. We have shown, by modeling Dimmesdale’s

self-deception in HOTCO 2, that self-deception can be seen as resulting from emotional

coherence involving beliefs and goals. We have also compared our model to

other models and have argued that our model is more psychologically realistic.


Appendix: Technical Details

The explanatory coherence program ECHO creates a network of units with explanatory

and inhibitory links, then makes inferences by spreading activation through

the network (Thagard, 1992). The activation a_j of a unit j is updated according to
the following equation:

a_j(t+1) = a_j(t)(1 - d) + net_j(max - a_j(t)) if net_j > 0,
otherwise a_j(t+1) = a_j(t)(1 - d) + net_j(a_j(t) - min).

Here d is a decay parameter (say 0.05) that decrements each unit at every cycle,
min is the minimum activation (-1), and max is the maximum activation (1). Based on the
weight w_ij between each unit i and j, we can calculate net_j, the net input to a unit,
by:

net_j = Σ_i w_ij a_i(t).

In HOTCO, units have valences as well as activations. The valence of a unit u_j
is the sum of the results of multiplying, for all units u_i to which it is linked, the
activation of u_i times the valence of u_i times the weight of the link between u_i and
u_j. The actual equation used in HOTCO to update the valence v_j of unit j is similar
to the equation for updating activations:

v_j(t+1) = v_j(t)(1 - d) + net_j(max - v_j(t)) if net_j > 0,
otherwise v_j(t+1) = v_j(t)(1 - d) + net_j(v_j(t) - min).

Again d is a decay parameter (say 0.05) that decrements each unit at every cycle,
min is the minimum valence (-1), and max is the maximum valence (1). Based on the weight
w_ij between each unit i and j, we can calculate net_j, the net valence input to a unit,
by:

net_j = Σ_i w_ij v_i(t) a_i(t).

Updating valences is just like updating activations plus the inclusion of a multiplicative

factor for valences.

HOTCO 2 allows units to have their activations influenced by both input activations

and input valences. The basic equation for updating activations is the same

as the one given for ECHO above, but the net input is defined by a combination of

activations and valences:

net_j = Σ_i w_ij a_i(t) + Σ_i w_ij v_i(t) a_i(t).

ECHO and HOTCO both proceed in two stages. First, input about explanatory

and other relations generates a network of units and links. The LISP input for all

simulations used in this paper is available on the Web at
https://cogsci.uwaterloo.ca/coherencecode/cohere/hotco-input.lisp.html. Second, activations and (for HOTCO)


valences are updated in parallel in accord with the above equations. Updating proceeds

until all activations have reached stable values, which usually takes about

100 iterations of updating.
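Putting the appendix equations together, the following toy Python sketch (again our own illustration, not the distributed LISP code) shows the biasing effect that HOTCO 2 adds: a hypothesis that loses on evidential grounds alone can settle into acceptance once the valence term Σ_i w_ij v_i(t) a_i(t) contributes net input from a positively valenced goal unit:

```python
def hotco2_settle(weights, n_units, clamped, valences,
                  d=0.05, lo=-1.0, hi=1.0, iters=100):
    """weights: dict of symmetric link weights keyed by (i, j) pairs.
    clamped: units held at activation 1 (evidence and, here, the goal).
    valences: input valences; nonzero entries bias the net input."""
    a = [0.01] * n_units
    for c in clamped:
        a[c] = 1.0
    for _ in range(iters):
        new = a[:]
        for j in range(n_units):
            if j in clamped:
                continue
            neigh = [(i, w) for (i, k), w in weights.items() if k == j] + \
                    [(k, w) for (i, k), w in weights.items() if i == j]
            # HOTCO 2 net input: activation term plus valence term
            net = sum(w * a[i] for i, w in neigh) + \
                  sum(w * valences[i] * a[i] for i, w in neigh)
            step = net * (hi - a[j]) if net > 0 else net * (a[j] - lo)
            new[j] = max(lo, min(hi, a[j] * (1 - d) + step))
        a = new
    return a

# Unit 0: evidence against the hypothesis (inhibitory link).
# Unit 1: the hypothesis ("I am a good clergyman").
# Unit 2: the subjective goal ("redemption"), clamped active.
w = {(0, 1): -0.06, (1, 2): 0.04}
cold = hotco2_settle(w, 3, clamped={0, 2}, valences=[0.0, 0.0, 0.0])
hot = hotco2_settle(w, 3, clamped={0, 2}, valences=[0.0, 0.0, 1.0])
# cold[1] settles negative (rejection); with the goal's positive input
# valence, hot[1] settles positive (acceptance): a self-deception-like bias.
```

In this toy setting the flip occurs when the valence term outweighs the evidential net input, which loosely parallels the input-valence threshold (0.06-0.07) reported in Section 7; the specific weights above are our own, chosen only to make the flip visible.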

References

Arabsheibani, G., D. de Meza et al. (2000), ‘And a Vision Appeared Unto Them of a Great Profit:

Evidence of Self-Deception Among the Self-Employed,’ Economics Letters 67, pp. 35–41.

Barnes, A. (1997), Seeing Through Self-Deception, Cambridge: Cambridge University Press.

Davidson, D. (1985), ‘Deception and Division’, in E. LePore and B. P. McLaughlin, eds., Actions

and Events, Perspectives on the Philosophy of Donald Davidson, Oxford: Blackwell.

de Sousa, R. B. (1988), ‘Emotion and Self-Deception’, in B. P. McLaughlin and A. O. Rorty, eds.,

Perspectives on Self-Deception, Berkeley, CA: University of California Press, pp. 325–341.

Demos, N. F. (1960), ‘Lying to oneself,’ Journal of Philosophy 57, pp. 588–595.

Elster, J. (1983), Sour Grapes, New York: Cambridge University Press.

Erez, A., D. E. Johnson et al. (1995), ‘Self-Deception as a Mediator of the Relationship between

Dispositions and Subjective Well-Being,’ Personality and Individual Differences 19(5), pp. 597–

Fingarette, H. (1969), Self-Deception, London: Routledge and Kegan Paul.

Gallagher, S. (2000), ‘Philosophical Conceptions of the Self: Implications for Cognitive Science,’

Trends in Cognitive Science 4(1), pp. 14–21.

Goldbeck, R. (1997), ‘Denial in Physical Illness,’ Journal of Psychosomatic Research 43(6), pp.

Goldberg, S. C. (1997), ‘The Very Idea of Computer Self-Knowledge and Self-Deception,’ Minds

and Machines 7, pp. 515–529.

Haight, M. R. (1980), A Study of Self-Deception, Sussex: Harvester Press.

Harris, K. M. (1988), Hypocrisy and Self-Deception in Hawthorne’s Fiction, Charlottesville, VA:

University Press of Virginia.

Hawthorne, N. (1850), The Scarlet Letter: A Romance, Boston: Ticknor and Fields.

Johnston, M. (1988), ‘Self-Deception and the Nature of Mind’, in B. P. McLaughlin and A. O. Rorty,

eds., Perspectives on Self-Deception, Berkeley: University of California Press, pp. 63–91.

Kahneman, D., P. Slovic et al. (1982), Judgment under uncertainty: Heuristics and biases, New York:

Cambridge University Press.

Kunda, Z. (1999), Social Cognition, Cambridge, MA: MIT Press.

Kipp, D. (1980), ‘On self-deception,’ Philosophical Quarterly

Lajunen, T., A. Corry et al. (1996), ‘Impression Management and Self-Deception in Traffic Behavior

Inventories,’ Personality and Individual Differences 22(3), pp. 341–353.

Lazar, A. (1999), ‘Deceiving Oneself Or Self-Deceived? On the Formation of Beliefs “Under the

Influence,”’ Mind 108(430), pp. 265–290.

McLaughlin, B. P. (1988), ‘Exploring the Possibility of Self-Deception in Belief’, in B. P. McLaughlin

and A. O. Rorty, eds., Perspectives on Self-Deception, Berkeley: University of California

Press, pp. 29–62.

McLaughlin, B. P. (1996), ‘On the Very Possibility of Self-Deception’, in R. T. Ames and W.

Dissanayake, eds., Self and Deception: A Cross-Cultural Philosophical Enquiry, New York:

SUNY.

Mele, A. R. (2001), Self-Deception Unmasked, Princeton: Princeton University Press.

Mitchell, J. (2000), ‘Living a Lie: Self-Deception, Habit, and Social Roles,’ Human Studies 23, pp.

Paluch, S. (1967), ‘Self-deception,’ Inquiry 10, pp. 268–278.


Paulhus and Reid (1991), ‘Enhancement and denial in social desirable responding,’ Journal of

Personality and Social Psychology 60, pp. 307–317.

Pears, D. (1982), ‘Motivated Irrationality, Freudian Theory, and Cognitive Dissonance’, in R. Wollheim

and J. Hopkins, eds., Philosophical Essays on Freud, Cambridge: Cambridge University

Press, pp. 264–288.

Pears, D. (1986), ‘The Goals and Strategies of Self-Deception’, in J. Elster, ed., The Multiple Self,

Cambridge: Cambridge University Press, pp. 59–78.

Rey, G. (1988), ‘Toward a Computational Account of Akrasia and Self-Deception,’ in B. P. McLaughlin

and A. O. Rorty, eds., Perspectives on Self-Deception, Berkeley: University of California

Press, pp. 264–296.

Rorty, A. O. (1988), ‘The Deceptive Self: Liars, Layers, and Lairs’, in B. P. McLaughlin and A. O.

Rorty, eds., Perspectives on Self-Deception, Berkeley: University of California Press, pp. 11–28.

Rorty, A. O. (1996), ‘User-Friendly Self-Deception: a Traveler’s Manual’, in R. T. Ames and W.

Dissanayake, eds., Self and Deception: A Cross-Cultural Philosophical Enquiry, New York:

SUNY.

Sackeim, H. A. and R. C. Gur (1978), ‘Self-Deception, Self-Confrontation, and Consciousness’, in

G. E. Schwartz and D. Shapiro, eds., Consciousness and Self-regulation: Advances in Research, New York:

Plenum, pp. 139–197.

Sackeim, H. A. and R. C. Gur (1979), ‘Self-Deception, Other Deception and Self-Reported

Psychopathy,’ Journal of Consulting and Clinical Psychology 47, pp. 213–215.

Sanitioso, R., Z. Kunda et al. (1990), ‘Motivated Recruitment of Autobiographical Memories,’

Journal of Personality and Social Psychology 59, pp. 229–241.

Sartre, J.-P. (1958), Being and Nothingness, London: Methuen.

Solomon, R. C. (1996), ‘Self, Deception and Self-Deception in Philosophy’, in R. T. Ames and

W. Dissanayake, eds., Self and Deception: A Cross-Cultural Philosophical Enquiry, New York:

SUNY.

Starek, J. E. and C. F. Keating (1991), ‘Self-Deception and Its Relationship To Success in

Competition,’ Basic and Applied Social Psychology 12, pp. 145–155.

Surbey, M. K. and J. J. McNally (1997), ‘Self-Deception as a Mediator of Cooperation and Defection

in Varying Social Contexts Described in the Iterated Prisoner’s Dilemma,’ Evolution and Human

Behavior 18(6), pp. 417–435.

Talbott, W. J. (1995), ‘Intentional Self-Deception in a Single Coherent Self,’ Philosophy and

Phenomenological Research LV(1), pp. 27–74.

Taylor, S. E. (1989), Positive Illusions: Creative Self-Deception and the Healthy Mind, New York:

Basic Books.

Thagard, P. (1992), Conceptual revolutions, Princeton: Princeton University Press.

Thagard, P. (2000), Coherence in thought and action, Cambridge, MA: MIT Press.

Thagard, P. (forthcoming), ‘Why Wasn’t O. J. Convicted? Emotional Coherence in Legal Inference,’
Cognition and Emotion.

Thagard, P. and K. Verbeurgt (1998), ‘Coherence as Constraint Satisfaction,’ Cognitive Science

pp. 1–24.

Tversky, A. and D. J. Koehler (1994), ‘Support Theory: A Nonextensional Representation of

Subjective Probability,’ Psychological Review 101, pp. 547–567.

Wrangham, R. (1999), ‘Is Military Incompetence Adaptive?’ Evolution and Human Behavior 20, pp.

Zerbe, W. J. and D. L. Paulhus (1987), ‘Socially Desirable Responding in Organized Behavior: A

Reconception,’ Academy of Management Review 12, pp. 250–264.

Lies, Lies, Lies! The Art and Science of Deception

Sunday 27 October 2002

repeated the following Wednesday at 2.30pm

with Natasha Mitchell

Are we humans inherently deceitful? When you get a gift that you hate, what do you do? You lie, of course. Is this a morally questionable thing to do, or are some lies necessary for the sake of social cohesion? Clearly lying can have dire consequences, and a world leader in the subtle art of lie catching argues that the popular mythology around detecting deception amongst police interrogators has frightening implications.

Transcript:

Natasha Mitchell: Hello there, Natasha Mitchell here with a cheatin’ and lyin’ edition of All in the Mind this week. Thanks for tuning in. There’s no doubt that dishonesty has a tendency to muck up the moral fibre of our society. But the sort of deceitful goings-on that we are privy to today would, I imagine, have had moral philosophers from a bygone era writhing with discomfort.

What with Enron and the Children Overboard affair as two examples, it’s all become bigger than Ben Hur, it would seem.

The philosopher Immanuel Kant took a hard line on lying, that all lying is inexcusable. But Plato had a slightly different point of view; he thought that the ‘noble lie’ had a definite place in public life. You know, the lie of the politician for the benefit of the collective, for example.

Oscar Wilde on the other hand argued that without misrepresentation there’d be no art. “Lying” he said, “the telling of beautiful untrue things is the proper aim of Art”. Writer Penny Vincenzi clearly thinks deception has its merits too.

Reading from The Lie in Adultery by Penny Vincenzi:

There’s one masterly phrase which the adulterer should keep by when all else fails. It explains away all but the most advanced sexual behaviour and although it does carry a sting in its tail, the poison is not deadly. It is, “I’m sorry darling, but I was drunk”. Try it and see; it works, but only once. Even when you are cornered, confronted and confounded, you should still not stoop to the truth. You owe your partner more than that. Say it was the very first time. Say you were doing it because they promised you a rise, or a cheap mortgage or a course of free driving lessons. Say they were a sex therapist and you were seeking to improve your technique and thence your marriage. Say you didn’t want to. Say you couldn’t help it. Say you can’t remember how you got there. Say anything at all, so long as it’s a lie.

Tony Coady: Part of the challenge is that lying is such an integral part of cultures. Broadly speaking people regard lying as, on the face of it, wrong, but nonetheless it goes on an enormous amount. And you know you’ve only got to think of the disputes that spring up in the community when politicians are caught out in a lie, tremendous indignation is expressed. Especially by people who lie a lot themselves but hate to see anyone else lying, you know.

Charles Ford: Lying is also a social lubricant, it’s a way by which we relate to other people that we make them feel good about themselves. And so flattery and other means are so common and so commonly done we don’t even think about them.

Natasha Mitchell: Charles Ford who’s Professor of Psychiatry at the University of Alabama at Birmingham and author of Lies, Lies, Lies – The Psychology of Deceit. And before him Professor Tony Coady who’s Deputy Director of the Centre for Applied Philosophy and Public Ethics at the University of Melbourne.

As you’ll hear later, lying doesn’t just present us with a huge moral challenge, we’re also pretty hopeless at detecting lies. And this has of course major implications for the integrity of our criminal justice system. But who hasn’t told the occasional white lie? Come on, what about the time you said that you loved a present when in fact you were really thinking that it’d make a good rug for the dog. Clearly, some lies are more benign than others and perhaps so trivial that you could hardly call them a lie, or could you? Is dishonesty always a vice? Or should we be able to get away with it at least sometimes? Tony Coady.

Tony Coady: The concept of corruption is one that comes to mind here because quite often widespread practices involve corruption and I don’t just mean financial corruption. And people, when you point out to them what’s happening and criticise it, say oh but look, everyone’s doing that. You know, you couldn’t get on in - to take a wild example - the insurance business unless you did blah, blah, blah, and then shortly after you find these people in court, and I mean I think it’s a very important thing to be on the watch for these systemic understandings that people develop. For instance a colleague of mine was saying only the other day that she put to some of her students would they lie in their job applications or on their curriculum vitae in order to get a job? And 90% of them said yes, which I think is absolutely chilling you know, absolutely chilling.

Natasha Mitchell: Professor Tony Coady there speaking to Maryke Steffens. There are, of course extremes when it comes to lying and you may have heard of the terrible plight of people with Munchausen’s Syndrome or factitious disorder where people lie compulsively about illnesses that they know they do not have. As a neuropsychiatrist Charles Ford knows the plight of these pathological liars all too well.

Charles Ford: They are people who have in essence developed a personal identity of being patients, and when you are a patient presenting with a dramatic symptom then you are the star. For example mimicking the symptoms of a brain tumour, or one of the most famous patients of all time learned how to cough up blood in such a dramatic way as to get everyone in the emergency room immediately involved in taking care of him as an emergency. One of the patients that I personally saw was a nurse who would take a syringe, draw out about 10ccs of blood from her arm, then put a catheter up into her bladder, inject the blood into her bladder, take the catheter out and then present herself at hospital emergency rooms with gross haematuria – bloody urine. It’s very much an attention seeking behaviour and that’s part of it. It’s also a mechanism by which these people feel very superior in that they are fooling doctors.

Natasha Mitchell: And the devastating consequences of this sort of pathological lying are a complex matter that we can explore further in another program. But of course most of us lie in more menial ways, and Charles Ford would argue that it starts with our uniquely human, very sophisticated ability to lie to our very selves. Freud and his followers of course had a field day with this, with the consideration that the contents of our unconscious are a seething mass of repressed thoughts and denial about ourselves and our personal history. And to repress is to self deceive.

Charles Ford: Each of us has what has often been called the personal myth, and by the personal myth it’s what we kind of think of ourselves as being. How we’ve constructed our own self image, our own self esteem. And in the process of constructing our personal myth we often use some selective memory. We tend to not remember some of the more unpleasant things about ourselves and we tend to perhaps exaggerate or add a little bit to those things that are more positive or we would like to believe about ourselves. Each of us tends to see ourselves perhaps a little bit more positively than other people tend to view us. I think each of us engages, on a daily and almost constant basis, in distorting information to ourselves in such a way that we protect ourselves from either a loss of self esteem or the creation of anxiety within ourselves. So we tend to rationalise away our failures, we tend to forget that which is more unpleasant, we tend to displace things, we tend to in a variety of internal ways distort facts so that they fit into our personal myths. And there’s been a fair amount of research that showed that people who effectively use self deception are happier and less depressed than those people who don’t. Self deception can be detrimental in your social relationships with other people: if you continue to believe that you are more important than you really are, more loved than you really are, more powerful than you really are, and don’t get feedback from others, then you may be in a variety of ways ostracised or kept out of social interactions. So we continuously need to read the messages of other people, and this is done largely through non-verbal communication.

Natasha Mitchell: Charles Ford from the University of Alabama at Birmingham speaking there to Maryke Steffens.

Some argue that we’re lying a whole lot more than we used to and it’s the fragmentation of modern industrial society that’s to blame. There are simply more opportunities to lie and to get away with it: more closed doors, life is more transient, communities more dispersed, families not as close. If we alienate ourselves in one context because we’ve lied, then we can move on to the next one.

But it’s in the police interrogation rooms of the world where detecting deception on the spot can make all the difference, and this is where Mark Frank is leading the charge to make practices more scientific. He’s a social psychologist at Rutgers University in New Jersey and has spent some time in Australia too. And along with his rather famous collaborator Paul Ekman, he’s interested in the subtle body language behind fibbing. Mark Frank says that on the whole we tend not to be very good at spotting a liar. We’re better liars than lie catchers.

Mark Frank: And if you look at people’s behaviour when they are lying and telling the truth, one thing all reputable researchers have concluded is that there is no Pinocchio response, no human behaviour that in all people, in all situations, means that somebody is lying.

Natasha Mitchell: You’ve been especially interested in how professional lie catchers, so here we are talking about police, judges etc go about detecting deception. And I take it, as you pointed out earlier, that there are some very wild mythologies and misconceptions in place when it comes to police training and the signs they look for in spotting a liar.

Mark Frank: Oh yes, it seems like police are always getting various manuals, various pamphlets and they’re always giving them tips on this is what happens when somebody lies, and they touch their nose, and they cross their arms, and they bite their lip and they don’t look you in the eye and so on and if you actually look at the research literature you find out that most of those hints or signs of lying are not good signs of lying. And in fact the classic police training manuals on it talk about biting the lip, touching the nose, you know not making eye contact, crossing the arms, putting the hand over the mouth and yet those signs tend not to be very good.

Natasha Mitchell: Why aren’t they good signs?

Mark Frank: It’s just when you do it empirically. You put people in laboratory situations and you actually do look at them in controlled situations, those things don’t necessarily accompany lying any more than they accompany truthfulness, and the thing that’s really interesting is that there was a study where they took police and students, this was published about three years ago, and they trained them on these signs that were in this police manual about what liars looked like and then they gave them videos of people lying and telling the truth. And the people who were trained on those things actually became worse lie catchers than the people who were not given any training whatsoever. Now the problem of course is these things have come up anecdotally, they weren’t derived scientifically, but what becomes important to a scientist is you have to say, well, do these behaviours also occur in truthful people under similar circumstances? And the research seems to suggest that for a lot of these things that were taught to police that was in fact the case. It turns out police tend not to get a lot of training in interviewing. It’s interesting, in the United States for example, in law enforcement college we get hundreds and hundreds of hours on how to shoot and yet we get less than an hour on how to interview.
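Frank’s question - do these behaviours also occur in truthful people? - is the standard diagnosticity test. A minimal sketch, with purely hypothetical numbers (the transcript gives none): a cue only helps catch liars if liars show it more often than truth-tellers, i.e. its likelihood ratio exceeds 1.

```python
# Sketch of cue diagnosticity. All rates below are invented for illustration,
# not taken from the study Frank describes.

def likelihood_ratio(p_cue_given_lie: float, p_cue_given_truth: float) -> float:
    """How much more likely the cue is when someone lies than when they tell the truth."""
    return p_cue_given_lie / p_cue_given_truth

# Suppose "gaze aversion" appears in 30% of liars AND 30% of truth-tellers:
print(likelihood_ratio(0.30, 0.30))  # 1.0 -> the cue carries no information at all

# A cue that is barely more common in liars is barely better than chance:
print(likelihood_ratio(0.30, 0.25))  # 1.2
```

A ratio of exactly 1.0 is why training people on such cues can make them worse: it adds confidence without adding information.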

Natasha Mitchell: This really frankly sounds to me like a crisis for our justice system I mean when you’ve got, and clearly this happens, you have disbelieved truth tellers being jailed alongside disbelieved liars. I mean detecting deception really underpins our entire justice system.

Mark Frank: Well certainly in the United States it has been a problem recently and for example in the State of Illinois they’ve gone through their death penalty cases and they’ve exonerated - not just thrown out on technicalities - but actually exonerated half of the people on death row. And it’s a frightening number to think about, obviously people believe that these folks were lying about what they were saying in the course of the interrogation. And many of them had confessed by the way but then later DNA evidence showed they didn’t do it.

Natasha Mitchell: What about the American Polygraph Association’s slogan, which is ‘Dedicated to Truth’. How confident can we be in the technologies of deception? For example the modern polygraph?

Mark Frank: Well that runs into the same problem that behavioural work does, that problem being there is no guaranteed 100% sign that somebody is in fact lying. So the polygraph machines, for example, they are reading blood pressure, heart rate, skin conductance and breathing rates. Now typically when somebody lies they do tend to become fearful or nervous, or they show what’s called an orienting response where they recognise something and they get a little jolt to their body system, and the machine is very good at reading that stuff and quite accurate reading that stuff. But that can be caused by something else besides a lie. So for example the fact that you’ve been accused of something - there are a number of ways to do a polygraph exam, and one of the ways that tends to produce a lot of false positives is one where we’ll ask you things like, you know, is your name so and so, do you work here, a variety of those questions, and then the question: and did you steal a camera from this particular shop? And of course everyone knows that’s the critical question, and even if you didn’t do it, if you get a little nervous about it, your physiology will go up, the machine will read that and that will be classified as a lie. And even though you may be telling the truth, if you get nervous about it, it will show up as a lie.
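The false-positive problem Frank describes can be made concrete with Bayes’ theorem. The numbers below are illustrative assumptions only - the transcript quotes no rates - but they show how a machine that is “quite accurate” can still flag mostly truthful people.

```python
# Hypothetical polygraph scenario (all three rates are assumptions for illustration).

def p_truthful_given_flag(base_rate_lie: float,
                          sensitivity: float,
                          false_positive_rate: float) -> float:
    """Probability that a person flagged by the machine is actually truthful."""
    # Total probability of being flagged: flagged liars plus flagged truth-tellers.
    p_flag = base_rate_lie * sensitivity + (1 - base_rate_lie) * false_positive_rate
    # Bayes' theorem: share of flags that come from nervous truth-tellers.
    return (1 - base_rate_lie) * false_positive_rate / p_flag

# Suppose 20% of examinees lie, the machine catches 85% of liars,
# and 15% of truth-tellers get nervous enough to trip it anyway:
print(round(p_truthful_given_flag(0.20, 0.85, 0.15), 2))  # 0.41
```

Under these assumed rates, roughly four in ten people the machine flags are telling the truth, which is Frank’s point about the critical-question format producing false positives.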

Natasha Mitchell: Which is what interests me so much about the work of you and your close colleague, another big thinker about deception, Paul Ekman. I mean despite all the attempts that we have collectively made to apply grand technologies to detecting lies, you guys are actually looking at much more subtle signs: the ways in which we reveal our emotions through the most subtle sorts of non-verbal communication. So very subtle facial expressions and body language. Are you going back to basics in a sense?

Mark Frank: Yes, well Darwin said that look humans are born and part of our wiring are these physiological responses that help us to survive, so when a big scary thing comes leaping out at us suddenly, before we can consciously process the idea our brain has already put us into action. It causes our heart rate to go up, our blood pressure to go up, the blood leaves the periphery, goes to the large muscles of the leg that’s why you see particularly with Caucasians they might go white with fear cause the blood is leaving the outside part of your body going to your legs and your brain is sending a signal to get away. And all this happens in a quarter of a second, the adrenalin is flowing and that reaction has helped us to survive. And so what will happen in deception situations and somebody obviously doesn’t want to get caught that fear reaction will happen, emotions will happen to people, you don’t pick and choose your emotions. If we did we would not need psychologists because then you’d say oh, I’m depressed let me be happy. Bing – OK, I’m happy.

Natasha Mitchell: Although we do apply our rational mind to tempering and monitoring our emotions in a way. There’s certainly an argument to be had about the rationality of emotions.

Mark Frank: We try, that’s right and the way we interpret an event and we learn from interpreted events can take the edge off but the bottom line is you have this physiological reaction, it’s part of it, and in a lie situation that will happen to people. And they’ll be motivated to try to hide it, it’s part of this primitive wiring that came through our evolutionary systems and so in deception situations often people will try to hide it, they’re trying to be cool but these emotions happen and despite their efforts to control their emotions they will leak out. And the principle way in which they leak out is through the face.

Natasha Mitchell: Social psychologist Mark Frank from Rutgers University who’s an expert when it comes to detecting lies.

And you’re tuned to a deceitful edition of All in the Mind with me Natasha Mitchell, you’re on ABC Radio National and internationally too on Radio Australia and the world wide web.

So you’ve been zeroing in on what you’ve described as micro expressions of emotions, almost momentary expressions, can you give me some examples?

Mark Frank: Well yes, for example if we take this situation where somebody’s feeling fear: so they are being interviewed by a police officer about what they’ve done and they claim no, I was just you know having lunch with a friend when this particular crime occurred, but they know that’s a lie and they know if they get caught in that lie they are going to go to jail and so the fear comes up, and they don’t want it to. Sometimes part of the fear expression - and there’s a specific pattern of facial expressions that comes with fear - will leak out, sometimes for as fast as one tenth of a second. So for example when somebody’s afraid typically their eyebrows pull up and they pull together and their upper eyelids pull up and then their mouth actually stretches back a little bit. That’s an uninhibited ah, horror, fear face if you will. But when they’re trying not to show it and they’re trying to be cool, occasionally when the emotion is strong enough that signal from the brain to the face will still come out and leak it out and you’ll see those eyebrows pull up and together very subtly. And the thing that makes that really interesting as well is that the average person can’t make that expression on purpose if they tried to, and couldn’t make it that fast if they tried to do it on purpose.

Natasha Mitchell: Actually as you’ve been talking I’ve been trying to do just what you’ve been describing and it ain’t working.

Mark Frank: Well that’s right, the average person couldn’t do that. If you get a group of people together in this room and said try to raise just the inner corners of your eyebrows, not the whole eyebrow, just the inner corners the average person can’t but when people are distressed that happens.

Natasha Mitchell: If we’re talking about micro expressions being sometimes a tenth of a second long though can we surely be confident that they are you know sure fire markers that someone is deceiving us?

Mark Frank: Well that’s the thing, it’s how do you interpret it. You can see these micro expressions, and one of the things that we would teach people - for example law enforcement, which is typically the only group we teach - is that we say, well, how do you interpret that? And the way you interpret it is: this person just felt fear. They were telling you about lunch yesterday with their friend, that’s what they said they were doing, and all of a sudden they showed you some fear. Now unless they ate at a fairly shocking place for lunch, typically most of us don’t show fear when talking about lunchtime. That doesn’t fit what the person is saying, and so we would tell people when you see that don’t go aha, they’re lying; instead say aha, it’s a hot spot. OK, there’s something going on, you’re getting an unusual emotional reaction on that particular topic. Identify that and ask some more questions, because again there isn’t a guaranteed lie sign.

Natasha Mitchell: This doesn’t really sound though like the traditional approach to police suspect questioning which is, if television is anything to go by is much more hostile in its approach. What you’re suggesting is something a little bit more like rapport building.

Mark Frank: Yes, I think that would be clearly a much more effective way to do it. You know, for example you wanted to get some information from somebody, well let’s say in the course of the interview you say listen punk, I’m going to, you know, beat the living stuffing out of you unless you tell me what I want to know right now, and you grab them by the lapels and you shake them, kind of, you know, like Dennis Franz in NYPD Blue. Now if that person is showing you signs of fear - these are typically thought of as signs of lying - are the signs of fear that you see in that person a sign that they’re lying? Well they’re nervous. Well of course they’re nervous, they’re nervous that they are going to get beaten up, and so you need to distinguish: is this the fear of a guilty person who’s afraid of getting caught in a lie, or is this the fear of a truthful person? And so if you come in very hostile with your interview you won’t be able to tell those apart. In fact some of the earlier police stuff, the previous edition of these police manuals teaching on interrogation techniques, had a statement in there that said truthful people are not nervous in a police interrogation, and so they taught them techniques to try and turn up the anxiety, how to get in somebody’s face, how to start the interview off - listen, we’ve got a problem - and put somebody on the back foot right out of the box. But if you have more rapport, a gentler interviewing style, now if you see these signs they can be useful to you. Otherwise they are absolutely useless to you if you’re threatening somebody.

Natasha Mitchell: Yes, in a sense you do need to slow it down, calm it down to even be able to stop and detect micro expressions of a sort.

Mark Frank: Yes, well it takes a while to train your eye to see them, and one of the things that we found is that most people don’t see them. But in our research looking at law enforcement and looking at who our good lie catchers are, one of the things we know is that the good lie catchers are able to spot these micro expressions. They sometimes can’t tell you what they’re doing but they’ll know it in their gut; they say there was something there that was unusual, which is why I thought this person was lying. And, as it turns out, when we look at the tapes what was unusual was this very quick micro expression of fear, or distress, or a happiness that wasn’t a genuine happiness - there’s a variety of signs out there. People who are good at spotting these micro expressions are better at spotting deception.

Natasha Mitchell: Associate Professor Mark Frank from Rutgers University in New Jersey.

And that’s it for our lyin’ and cheatin’ edition of All in the Mind this week. Don’t forget you can hear us twice in the week on Sundays and Wednesdays and more details along with transcripts and audio on our website at abc.net.au/rn just look for us under the program list.

Thanks today go to Maryke Steffens, producer David Rutledge and to studio producer Jenny Parsonage. I’m Natasha Mitchell, ta da until next week.

Guests:

Professor Tony Coady

Deputy Director, Melbourne Division

Centre for Applied Philosophy and Public Ethics

University of Melbourne

https://www.philosophy.unimelb.edu.au/cappe/

Professor Charles V. Ford

Professor of Psychiatry

Director of the Neuropsychiatry Clinic

University of Alabama at Birmingham

https://www.uab.edu/psychiatry/Adult.htm

Associate Professor Mark Frank

Department of Communication

Rutgers University

https://www.scils.rutgers.edu/~ptran/frank/

Publications:

Lies! Lies! Lies! The Psychology of Deceit

Author: Charles V. Ford

Publisher: Amer Psychiatric Press (1999)

ISBN: 0880489979

The Penguin Book of Lies

Author: edited by Philip Kerr

Publisher: Viking Press (London, 1991)

ASIN: 0670825603

The Compleat Liar (in the The Penguin Book of Lies)

Author: Penny Vincenzi

Publisher: Cassell

ASIN: 0304298859

Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage

Author: Paul Ekman

Publisher: W.W. Norton & Company (1992)

ISBN: 0393321886

A Pack of Lies: Towards a Sociology of Lying

Author: J.A. Barnes

Publisher: Cambridge University Press (1994)

ISBN: 0521459788
