Watch this video to understand why people who are orderly or meticulous are probably not suffering from obsessive-compulsive disorder.
Apply your understanding of Kohlberg’s stages of moral development and consider your response to the following dilemma.
Test your ability to identify the types of treatments for mental disorders as they are described in the following PsychSim Tutorial from the Worth Publishers’ Student Center for Discovering Psychology.
Watch this video to ensure you can differentiate between negative reinforcement and punishment.
Learn more about the aging process through the following PsychSim Tutorial from the Worth Publishers’ Student Center for Discovering Psychology. Take the time to watch the video examples of adults explaining their own experiences with aging.
View the PsychSim Tutorial “Catching Liars” on the Worth Publishers Student Center for Discovering Psychology to learn more about the psychology behind lie-detector tests and the visual cues associated with lying. The television program Lie to Me was based on the idea that people can learn to read facial microexpressions and detect when another person is telling a lie. Although many are skeptical of the human ability to detect lies through visual cues, psychologist Paul Ekman has done extensive research on the human face and how to better read emotions through even the slightest facial movements.
Another way to spot lies is through language. Watch this video to learn more:
Learn more about descriptive, correlational, and experimental approaches to psychological research through the following PsychSim Tutorial from the Worth Publishers’ Student Center for Discovering Psychology.
Psychologists today do not believe there is one “right” way to study the way people think or behave. There are, however, various schools of thought that evolved throughout the development of psychology and that continue to shape the way psychologists investigate human behavior. For example, one psychologist might attribute a certain behavior to biological factors such as genetics, while another might consider early childhood experiences a more likely explanation. Because psychologists may emphasize different factors in their research and analysis of behavior, different viewpoints have emerged within the field. These schools of thought are known as approaches, or perspectives.
Psychodynamic theory is an approach to psychology that studies the psychological forces underlying human behavior, feelings, and emotions, and how they may relate to early childhood experience. This theory is especially interested in the dynamic relations between conscious and unconscious motivation, and asserts that behavior is the product of underlying conflicts over which people often have little awareness.
Psychodynamic theory was born in 1874 with the works of German scientist Ernst Wilhelm von Brücke, who supposed that all living organisms are energy systems governed by the principle of the conservation of energy. During the same year, medical student Sigmund Freud adopted this new “dynamic” physiology and expanded it to create the original concept of “psychodynamics,” in which he suggested that psychological processes are flows of psychosexual energy (libido) in a complex brain. Freud also coined the term “psychoanalysis.” Later, these theories were developed further by Carl Jung, Alfred Adler, Melanie Klein, and others. By the mid-1940s and into the 1950s, the general application of the “psychodynamic theory” had been well established.
Freud’s theory of psychoanalysis holds two major assumptions: (1) that much of mental life is unconscious (i.e., outside of awareness), and (2) that past experiences, especially in early childhood, shape how a person feels and behaves throughout life. The concept of the unconscious was central: Freud postulated a cycle in which ideas are repressed but continue to operate unconsciously in the mind, and then reappear in consciousness under certain circumstances. Much of Freud’s theory was based on his investigations of patients suffering from “hysteria” and neurosis. Hysteria was an ancient diagnosis that was primarily used for women with a wide variety of symptoms, including physical symptoms and emotional disturbances with no apparent physical cause. The history of the term can be traced to ancient Greece, where the idea emerged that a woman’s uterus could float around her body and cause a variety of disturbances. Freud theorized instead that many of his patients’ problems arose from the unconscious mind. In Freud’s view, the unconscious mind was a repository of feelings and urges of which we have no awareness.
The treatment of a patient referred to as Anna O. is regarded as marking the beginning of psychoanalysis. Freud worked together with Austrian physician Josef Breuer to treat Anna O.’s “hysteria,” which Freud attributed to the resentment she felt over her father’s very real physical illness, which later led to his death. Today many researchers believe that her illness was not psychological, as Freud suggested, but either neurological or organic.
Freud’s structural model of personality divides the personality into three parts—the id, the ego, and the superego. The id is the unconscious part that is the cauldron of raw drives, such as for sex or aggression. The ego, which has conscious and unconscious elements, is the rational and reasonable part of personality. Its role is to maintain contact with the outside world to keep the individual in touch with society, and to do this it mediates between the conflicting tendencies of the id and the superego. The superego is a person’s conscience, which develops early in life and is learned from parents, teachers, and others. Like the ego, the superego has conscious and unconscious elements. When all three parts of the personality are in dynamic equilibrium, the individual is thought to be mentally healthy. However, if the ego is unable to mediate between the id and the superego, an imbalance is believed to occur in the form of psychological distress.
Freud’s theories also placed a great deal of emphasis on sexual development. Freud believed that each of us must pass through a series of stages during childhood, and that if we lack proper nurturing during a particular stage, we may become stuck or fixated in that stage. Freud’s psychosexual model of development includes five stages: oral, anal, phallic, latency, and genital. According to Freud, children’s pleasure-seeking urges are focused on a different area of the body, called an erogenous zone, at each of these five stages. Psychologists today dispute that Freud’s psychosexual stages provide a legitimate explanation for how personality develops, but what we can take away from Freud’s theory is that personality is shaped, in some part, by experiences we have in childhood.
Carl Jung was a Swiss psychotherapist who expanded upon Freud’s theories at the turn of the 20th century. A central concept of Jung’s analytical psychology is individuation: the psychological process of integrating opposites, including the conscious with the unconscious, while still maintaining their relative autonomy. Jung focused less on infantile development and conflict between the id and superego and instead focused more on integration between different parts of the person. Jung created some of the best-known psychological concepts, including the archetype, the collective unconscious, the complex, and synchronicity.
At present, psychodynamics is an evolving multidisciplinary field that analyzes and studies human thought processes, response patterns, and influences.
Psychodynamic therapy, in which patients become increasingly aware of dynamic conflicts and tensions that manifest as symptoms or challenges in their lives, is an approach that is still commonly used today.
Behaviorism is an approach to psychology that emerged in the early 20th century as a reaction to the psychoanalytic theory of the time. Psychoanalytic theory often had difficulty making predictions that could be tested using rigorous experimental methods. The behaviorist school of thought maintains that behaviors can be described scientifically without recourse either to internal physiological events or to hypothetical constructs such as thoughts and beliefs. Rather than focusing on underlying conflicts, behaviorism focuses on observable, overt behaviors that are learned from the environment.
Its application to the treatment of mental problems is known as behavior modification. Learning is seen as behavior change molded by experience; it is accomplished largely through either classical or operant conditioning (described below).
The primary developments in behaviorism came from the work of Ivan Pavlov, John B. Watson, Edward Lee Thorndike, and B. F. Skinner.
The Russian physiologist Ivan Pavlov was widely known for describing the phenomenon now known as classical conditioning. In his famous 1890s experiments, he trained his dogs to salivate at the sound of a bell by repeatedly pairing the bell with the delivery of food. As Pavlov’s work became known in the West, particularly through the writings of John B. Watson, the idea of conditioning as an automatic form of learning became a key concept in the development of behaviorism.
John B. Watson was an American psychologist who is best known for his controversial “Little Albert” experiment. In this experiment, he used classical conditioning to teach a nine-month-old boy to fear a white rat by pairing the rat with a sudden loud noise. This study demonstrated how emotions could become conditioned responses.
Edward Lee Thorndike was an American psychologist whose work on animal behavior and the learning process led to the “law of effect.” The law of effect states that responses that create a satisfying effect are more likely to occur again, while responses that produce a discomforting effect become less likely to occur.
“Operant conditioning,” a term coined by psychologist B. F. Skinner, describes a form of learning in which a voluntary response is strengthened or weakened depending on its association with either positive or negative consequences. The strengthening of a response occurs through reinforcement. Skinner described two types of reinforcement: positive reinforcement, which is the introduction of a positive consequence such as food, pleasurable activities, or attention from others, and negative reinforcement, which is the removal of a negative consequence such as pain or a loud noise. Skinner saw human behavior as shaped by trial and error through reinforcement and punishment, without any reference to inner conflicts or perceptions. In his theory, mental disorders represented maladaptive behaviors that were learned and could be unlearned through behavior modification.
In the second half of the 20th century, behaviorism was expanded through advances in cognitive theories. While behaviorism and cognitive schools of psychological thought may not agree theoretically, they have complemented each other in practical therapeutic applications like cognitive-behavioral therapy (CBT), which has been used widely in the treatment of many different mental disorders, such as phobias, PTSD, and addiction.
Some behavior therapies employ Skinner’s theories of operant conditioning: by not reinforcing certain behaviors, these behaviors can be extinguished. Skinner’s radical behaviorism advanced a “three-term contingency” model, which explored the links between antecedent stimuli in the environment, behavior, and its consequences. This later gave rise to applied behavior analysis (ABA), in which operant conditioning techniques are used to reinforce positive behaviors and punish unwanted behaviors. This approach to treatment has been an effective tool to help children on the autism spectrum; however, it is considered controversial by many who see it as attempting to change or “normalize” autistic behaviors (Lovaas, 1987, 2003; Sallows & Graupner, 2005; Wolf & Risley, 1967).
Cognitive psychology is the school of psychology that examines internal mental processes such as problem solving, memory, and language. “Cognition” refers to thinking and memory processes, and “cognitive development” refers to long-term changes in these processes. Much of the work derived from cognitive psychology has been integrated into various other modern disciplines of psychological study, including social psychology, personality psychology, abnormal psychology, developmental psychology, educational psychology, and behavioral economics.
Cognitive psychology is radically different from previous psychological approaches in that it is characterized by both of the following: (1) it accepts the use of the scientific method and generally rejects introspection as a valid method of investigation, unlike phenomenological approaches such as Freudian psychoanalysis; and (2) it explicitly acknowledges the existence of internal mental states, such as beliefs, desires, and motivations, unlike behaviorist psychology.
Cognitive theory contends that solutions to problems take the form of algorithms, heuristics, or insights. Major areas of research in cognitive psychology include perception, memory, categorization, knowledge representation, numerical cognition, language, and thinking.
Cognitive psychology is one of the more recent additions to psychological research. Though there are examples of cognitive approaches from earlier researchers, cognitive psychology really developed as a subfield within psychology in the late 1950s and early 1960s. The development of the field was heavily influenced by contemporary advancements in technology and computer science.
In 1958, Donald Broadbent integrated concepts from human-performance research and the recently developed information theory in his book Perception and Communication, which paved the way for the information-processing model of cognition. Ulric Neisser is credited with formally coining the term “cognitive psychology” in his book of the same name, published in 1967. The perspective had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who studied intellectual development in children.
Although no one person is entirely responsible for starting the cognitive revolution, Noam Chomsky was very influential in the early days of this movement. Chomsky (1928–), an American linguist, was dissatisfied with the influence that behaviorism had had on psychology. He believed that psychology’s focus on behavior was short-sighted and that the field had to reincorporate mental functioning into its purview if it were to offer any meaningful contributions to understanding behavior (Miller, 2003).
Instead of approaching development from a psychoanalytic or psychosocial perspective, Piaget focused on children’s cognitive growth. He is most widely known for his stage theory of cognitive development, which outlines how children become able to think logically and scientifically over time. As they progress to a new stage, there is a distinct shift in how they think and reason.
Humanistic psychology is a psychological perspective that rose to prominence in the mid-20th century, drawing on the philosophies of existentialism and phenomenology, as well as Eastern philosophy. It adopts a holistic approach to human existence through investigations of concepts such as meaning, values, freedom, tragedy, personal responsibility, human potential, spirituality, and self-actualization.
The humanistic perspective is a holistic psychological perspective that attributes human characteristics and actions to free will and an innate drive for self-actualization. This approach focuses on maximum human potential and achievement rather than psychoses and symptoms of disorder. It emphasizes that people are inherently good and pays special attention to personal experiences and creativity. This perspective has led to advances in positive, educational, and industrial psychology, and has been applauded for its successful application to psychotherapy and social issues. Despite its great influence, humanistic psychology has also been criticized for its subjectivity and lack of evidence.
In the late 1950s, a group of psychologists convened in Detroit, Michigan, to discuss their interest in a psychology that focused on uniquely human issues, such as the self, self-actualization, health, hope, love, creativity, nature, being, becoming, individuality, and meaning. These preliminary meetings eventually culminated in the description of humanistic psychology as a recognizable “third force” in psychology, along with behaviorism and psychoanalysis. Humanism’s major theorists were Abraham Maslow, Carl Rogers, Rollo May, and Clark Moustakas; it was also influenced by psychoanalytic theorists, including Wilhelm Reich, who discussed an essentially good, healthy core self, and Carl Gustav Jung, who emphasized the concept of archetypes.
Abraham Maslow (1908–1970) is considered the founder of humanistic psychology, and is noted for his conceptualization of a hierarchy of human needs. He believed that every person has a strong desire to realize his or her full potential—or to reach what he called “self-actualization.” Unlike many of his predecessors, Maslow studied mentally healthy individuals instead of people with serious psychological issues. Through his research he coined the term “peak experiences,” which he defined as “high points” in which people feel at harmony with themselves and their surroundings. Self-actualized people, he believed, have more of these peak experiences throughout a given day than others.
To explain his theories, Maslow created a visual, which he termed the “hierarchy of needs.” This pyramid depicts various levels of physical and psychological needs that a person progresses through during their lifetime. At the bottom of the pyramid are the basic physiological needs of a human being, such as food and water. The next level is safety, which includes shelter and other needs related to physical security. The third level, love and belonging, is the psychological need to share oneself with others. The fourth level, esteem, focuses on success, status, and accomplishments. The top of the pyramid is self-actualization, in which a person is believed to have reached a state of harmony and understanding. Individuals progress from lower to higher stages throughout their lives, and cannot reach higher stages without first meeting the lower needs that come before them.
Carl Rogers (1902–1987) is best known for his person-centered approach, in which the relationship between therapist and client is used to help the patient reach a state of realization, so that they can then help themselves. His non-directive approach focuses more on the present than the past and centers on clients’ capacity for self-direction and understanding of their own development. The therapist encourages the patient to express their feelings and does not suggest how the person might wish to change. Instead, the therapist uses the skills of active listening and mirroring to help patients explore and understand their feelings for themselves.
Rogers is also known for practicing “unconditional positive regard,” which is defined as accepting a person in their entirety with no negative judgment of their essential worth. He believed that those raised in an environment of unconditional positive regard have the opportunity to fully actualize themselves, while those raised in an environment of conditional positive regard only feel worthy if they match conditions that have been laid down by others.
Rollo May (1909–1994) was the best-known American existential psychologist; he differed from other humanistic psychologists by showing a sharper awareness of the tragic dimensions of human existence. May was influenced by American humanism and emphasized the importance of human choice.
Humanistic psychology is holistic in nature: it takes whole persons into account rather than their separate traits or processes. In this way, people are not reduced to one particular attribute or set of characteristics, but instead are appreciated for the complex beings that they are. Humanistic psychology allows for a personality concept that is dynamic and fluid and accounts for much of the change a person experiences over a lifetime. It stresses the importance of free will and personal responsibility for decision-making; this view gives the conscious human being some necessary autonomy and frees them from deterministic principles. Perhaps most importantly, the humanistic perspective emphasizes the need to strive for positive goals and explains human potential in a way that other theories cannot.
However, critics have taken issue with many of the early tenets of humanism, such as its lack of empirical evidence (as was the case with most early psychological approaches). Because of the inherently subjective nature of the humanistic approach, psychologists worry that this perspective does not identify enough constant variables to be researched with consistency and accuracy. Psychologists also worry that such an extreme focus on the subjective experience of the individual does little to explain or appreciate the impact of external societal factors on personality development. In addition, the major tenet of humanistic personality psychology—namely, that people are innately good and intuitively seek positive goals—does not account for the presence of deviance within normal, functioning personalities.
Sociocultural factors are the larger-scale forces within cultures and societies that affect the thoughts, feelings, and behaviors of individuals. These include forces such as attitudes, child-rearing practices, discrimination and prejudice, ethnic and racial identity, gender roles and norms, family and kinship structures, power dynamics, regional differences, religious beliefs and practices, rituals, and taboos. Several subfields within psychology seek to examine these sociocultural factors that influence human mental states and behavior; among these are social psychology, cultural psychology, and cultural-historical psychology.
Cultural psychology is the study of how psychological and behavioral tendencies are rooted and embedded within culture. The main tenet of cultural psychology is that mind and culture are inseparable and mutually constitutive, meaning that people are shaped by their culture and their culture is also shaped by them.
A major goal of cultural psychology is to expand the number and variation of cultures that contribute to basic psychological theories, so that these theories become more relevant to the predictions, descriptions, and explanations of all human behaviors—not just Western ones. Populations that are Western, educated, and industrialized tend to be overrepresented in psychological research, yet findings from this research tend to be labeled “universal” and inaccurately applied to other cultures. The evidence that social values, logical reasoning, and basic cognitive and motivational processes vary across populations has become increasingly difficult to ignore. By studying only a narrow range of culture within human populations, psychologists fail to account for a substantial amount of diversity.
Cultural psychology is often confused with cross-cultural psychology; however, it is distinct in that cross-cultural psychologists generally use culture as a means of testing the universality of psychological processes, rather than determining how local cultural practices shape psychological processes. So while a cross-cultural psychologist might ask whether Jean Piaget’s stages of development are universal across a variety of cultures, a cultural psychologist would be interested in how the social practices of a particular set of cultures shape the development of cognitive processes in different ways.
Cultural-historical psychology is a psychological theory formed by Lev Vygotsky in the late 1920s and further developed by his students and followers in Eastern Europe and worldwide. This theory focuses on how aspects of culture, such as values, beliefs, customs, and skills, are transmitted from one generation to the next. According to Vygotsky, social interaction—especially involvement with knowledgeable community or family members—helps children to acquire the thought processes and behaviors specific to their culture and/or society. The growth that children experience as a result of these interactions differs greatly between cultures; this variance allows children to become competent in tasks that are considered important or necessary in their particular society.
Social psychology is the scientific study of how people’s thoughts, feelings, and behaviors are influenced by the actual, imagined, or implied presence of others. This subfield of psychology is concerned with the way such feelings, thoughts, beliefs, intentions, and goals are constructed, and how these psychological factors, in turn, influence our interactions with others.
Social psychology typically explains human behavior as a result of the interaction of mental states and immediate social situations. Social psychologists, therefore, examine the factors that lead us to behave in a given way in the presence of others, as well as the conditions under which certain behaviors, actions, and feelings occur. They focus on how people construe or interpret situations and how these interpretations influence their thoughts, feelings, and behaviors (Ross & Nisbett, 1991). Thus, social psychology studies individuals in a social context and how situational variables interact to influence behavior.
Social psychologists assert that an individual’s thoughts, feelings, and behaviors are very much influenced by social situations. Essentially, people will change their behavior to align with the social situation at hand. If we are in a new situation or are unsure how to behave, we will take our cues from other individuals.
The field of social psychology studies topics at both the intrapersonal level (pertaining to the individual), such as emotions and attitudes, and the interpersonal level (pertaining to groups), such as aggression and attraction. The field is also concerned with common cognitive biases—such as the fundamental attribution error, the actor-observer bias, the self-serving bias, and the just-world hypothesis—that influence our behavior and our perceptions of events.
The discipline of social psychology began in the United States in the early 20th century. The first published study in this area was an experiment in 1898 by Norman Triplett on the phenomenon of social facilitation. During the 1930s, Gestalt psychologists such as Kurt Lewin were instrumental in developing the field as something separate from the behavioral and psychoanalytic schools that were dominant during that time.
During World War II, social psychologists studied the concepts of persuasion and propaganda for the U.S. military. After the war, researchers became interested in a variety of social problems including gender issues, racial prejudice, cognitive dissonance, bystander intervention, aggression, and obedience to authority. During the years immediately following World War II there was frequent collaboration between psychologists and sociologists; however, the two disciplines have become increasingly specialized and isolated from each other in recent years, with sociologists focusing more on macro-level variables (such as social structure).
Biopsychology—also known as biological psychology or psychobiology—is the application of the principles of biology to the study of mental processes and behavior. The fields of behavioral neuroscience, cognitive neuroscience, and neuropsychology are all subfields of biological psychology.
Biopsychologists are interested in measuring biological, physiological, and/or genetic variables and attempting to relate them to psychological or behavioral variables. Because all behavior is controlled by the central nervous system, biopsychologists seek to understand how the brain functions in order to understand behavior. Key areas of focus include sensation and perception, motivated behavior (such as hunger, thirst, and sex), control of movement, learning and memory, sleep and biological rhythms, and emotion. As technical sophistication leads to advancements in research methods, more advanced topics, such as language, reasoning, decision-making, and consciousness, are now being studied.
Behavioral neuroscience has a strong history of contributing to the understanding of medical disorders, including those that fall into the realm of clinical psychology. Neuropsychologists are often employed as scientists to advance scientific or medical knowledge, and neuropsychology is particularly concerned with understanding brain injuries in an attempt to learn about normal psychological functioning. Neuroimaging tools, such as functional magnetic resonance imaging (fMRI) scans, are often used to observe which areas of the brain are active during particular tasks in order to help psychologists understand the link between brain and behavior.
Biopsychology as a scientific discipline emerged from a variety of scientific and philosophical traditions in the 18th and 19th centuries. Philosophers like René Descartes proposed physical models to explain animal and human behavior. Descartes suggested, for example, that the pineal gland, a midline unpaired structure in the brain of many organisms, was the point of contact between mind and body. In The Principles of Psychology (1890), William James argued that the scientific study of psychology should be grounded in an understanding of biology. The emergence of both psychology and behavioral neuroscience as legitimate sciences can be traced to the emergence of physiology during the 18th and 19th centuries; however, it was not until 1914 that the term “psychobiology” was first used in its modern sense by Knight Dunlap in An Outline of Psychobiology.
Stress. It makes your heart pound, your breathing quicken, and your forehead sweat. But while stress has been made into a public health enemy, new research suggests that stress may only be bad for you if you believe that to be the case. Psychologist Kelly McGonigal urges us to see stress as a positive, and introduces us to an unsung mechanism for stress reduction: reaching out to others.
When people describe what they most want out of life, happiness is almost always on the list, and very frequently it is at the top of the list. When people describe what they want in life for their children, they frequently mention health and wealth, occasionally they mention fame or success—but they almost always mention happiness. People will claim that whether their kids are wealthy and work in some prestigious occupation or not, “I just want my kids to be happy.” Happiness appears to be one of the most important goals for people, if not the most important. But what is it, and how do people get it?
In this module I describe “happiness” or subjective well-being (SWB) as a process—it results from certain internal and external causes, and in turn it influences the way people behave, as well as their physiological states. Thus, high SWB is not just a pleasant outcome but is an important factor in our future success. Because scientists have developed valid ways of measuring “happiness,” they have come in the past decades to know much about its causes and consequences.
Philosophers debated the nature of happiness for thousands of years, but scientists have recently discovered that “happiness” means several different things. Three major types of happiness are high life satisfaction, frequent positive feelings, and infrequent negative feelings (Diener, 1984). “Subjective well-being” is the label given by scientists to the various forms of happiness taken together. Although there are additional forms of SWB, the three in the table below have been studied extensively. The table also shows that the causes of the different types of happiness can be somewhat different.
You can see in the table that there are different causes of happiness, and that these causes are not identical for the various types of SWB. Therefore, there is no single key, no magic wand—high SWB is achieved by combining several different important elements (Diener & Biswas-Diener, 2008). Thus, people who promise to know the key to happiness are oversimplifying.
Some people experience all three elements of happiness—they are very satisfied, enjoy life, and have only a few worries or other unpleasant emotions. Other unfortunate people are missing all three. Most of us also know individuals who have one type of happiness but not another. For example, imagine an elderly person who is completely satisfied with her life—she has done almost everything she ever wanted—but is not currently enjoying life that much because of the infirmities of age. There are others who show a different pattern, for example, who really enjoy life but also experience a lot of stress, anger, and worry. And there are those who are having fun, but who are dissatisfied and believe they are wasting their lives. Because there are several components to happiness, each with somewhat different causes, there is no magic single cure-all that creates all forms of SWB. This means that to be happy, individuals must acquire each of the different elements that cause it.
There are external influences on people’s happiness—the circumstances in which they live. It is possible for some to be happy living in poverty with ill health, or with a child who has a serious disease, but this is difficult. In contrast, it is easier to be happy if one has supportive family and friends, ample resources to meet one’s needs, and good health. But even here there are exceptions—people who are depressed and unhappy while living in excellent circumstances. Thus, people can be happy or unhappy because of their personalities and the way they think about the world or because of the external circumstances in which they live. People vary in their propensity to happiness—in their personalities and outlook—and this means that knowing their living conditions is not enough to predict happiness.
The table below shows internal and external circumstances that influence happiness. There are individual differences in what makes people happy, but the causes in the table are important for most people (Diener, Suh, Lucas, & Smith, 1999; Lyubomirsky, 2013; Myers, 1992).
When people consider their own happiness, they tend to think of their relationships, successes and failures, and other personal factors. But a very important influence on how happy people are is the society in which they live. It is easy to forget how important societies and neighborhoods are to people’s happiness or unhappiness. In Figure 1, I present life satisfaction around the world. You can see that some nations, those with the darkest shading on the map, are high in life satisfaction. Others, the lightest shaded areas, are very low. The grey areas on the map are places where we could not collect happiness data—they were just too dangerous or inaccessible.
Can you guess what might make some societies happier than others? Much of North America and Europe have relatively high life satisfaction, and much of Africa is low in life satisfaction. For life satisfaction, living in an economically developed nation is helpful because when people must struggle to obtain food, shelter, and other basic necessities, they tend to be dissatisfied with their lives. However, other factors, such as trusting and being able to count on others, are also crucial to the happiness within nations. Indeed, for enjoying life our relationships with others seem more important than living in a wealthy society. One factor that predicts unhappiness is conflict—individuals in nations with high internal conflict or conflict with neighboring nations tend to experience low SWB.
Will money make you happy? A certain level of income is needed to meet our needs, and very poor people are frequently dissatisfied with life (Diener & Seligman, 2004). However, having more and more money has diminishing returns—higher and higher incomes make less and less difference to happiness. Wealthy nations tend to have higher average life satisfaction than poor nations, but the United States has not experienced a rise in life satisfaction over the past decades, even as income has doubled. The goal is to find a level of income that you can both earn and be content with. Don’t let your aspirations continue to rise so that you always feel poor, no matter how much money you have. Research shows that materialistic people often tend to be less happy, and putting your emphasis on relationships and other areas of life besides just money is a wise strategy. Money can help life satisfaction, but when too many other valuable things are sacrificed to earn a lot of money—such as relationships or taking a less enjoyable job—the pursuit of money can harm happiness.
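One simple way to picture these diminishing returns (a common modeling assumption, offered here purely as an illustration rather than a finding reported in this module) is to let satisfaction grow with the logarithm of income rather than with income itself:

\[ \text{satisfaction} \approx a + b \ln(\text{income}) \]

Under this assumption, each doubling of income adds the same fixed increment, \( b \ln 2 \), to satisfaction: a raise from $20,000 to $40,000 would matter about as much as a raise from $100,000 to $200,000, even though the second raise is five times larger in dollars.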
There are stories of wealthy people who are unhappy and of janitors who are very happy. For instance, a number of extremely wealthy people in South Korea have committed suicide recently, apparently brought down by stress and other negative feelings. On the other hand, there is the hospital janitor who loved her life because she felt that her work in keeping the hospital clean was so important for the patients and nurses. Some millionaires are dissatisfied because they want to be billionaires. Conversely, some people with ordinary incomes are quite happy because they have learned to live within their means and enjoy the less expensive things in life.
It is important to always keep in mind that high materialism seems to lower life satisfaction—valuing money over other things such as relationships can make us dissatisfied. When people think money is more important than everything else, they seem to have a harder time being happy. And unless they make a great deal of money, they are not on average as happy as others. Perhaps in seeking money they sacrifice other important things too much, such as relationships, spirituality, or following their interests. Or it may be that materialists just can never get enough money to fulfill their dreams—they always want more.
To sum up what makes for a happy life, let’s take the example of Monoj, a rickshaw driver in Calcutta. He enjoys life, despite the hardships, and is reasonably satisfied with life. How could he be relatively happy despite his very low income, sometimes even insufficient to buy enough food for his family? The things that make Monoj happy are his family and friends, his religion, and his work, which he finds meaningful. His low income does lower his life satisfaction to some degree, but he finds his children to be very rewarding, and he gets along well with his neighbors. I also suspect that Monoj’s positive temperament and his enjoyment of social relationships help to some degree to overcome his poverty and earn him a place among the happy. However, Monoj would also likely be even more satisfied with life if he had a higher income that allowed more food, better housing, and better medical care for his family.
Besides the internal and external factors that influence happiness, there are psychological influences as well—such as our aspirations, social comparisons, and adaptation. People’s aspirations are what they want in life, including income, occupation, marriage, and so forth. If people’s aspirations are high, they will often strive harder, but there is also a risk of them falling short of their aspirations and being dissatisfied. The goal is to have challenging aspirations but also to be able to adapt to what actually happens in life.
One’s outlook and resilience are also always very important to happiness. Every person will have disappointments in life, fail at times, and have problems. Thus, happiness comes not to people who never have problems—there are no such individuals—but to people who are able to bounce back from failures and adapt to disappointments. This is why happiness is never caused just by what happens to us but always includes our outlook on life.
The process of adaptation is important in understanding happiness. When good and bad events occur, people often react strongly at first, but then their reactions adapt over time and they return to their former levels of happiness. For instance, many people are euphoric when they first marry, but over time they grow accustomed to the marriage and are no longer ecstatic. The marriage becomes commonplace and they return to their former level of happiness. Few of us think this will happen to us, but the truth is that it usually does. Some people will be a bit happier even years after marriage, but nobody carries that initial “high” through the years.
People also adapt over time to bad events. However, people take a long time to adapt to certain negative events such as unemployment. People become unhappy when they lose their work, but over time they recover to some extent. But even after a number of years, unemployed individuals sometimes have lower life satisfaction, indicating that they have not completely habituated to the experience. However, there are strong individual differences in adaptation, too. Some people are resilient and bounce back quickly after a bad event, and others are fragile and do not ever fully adapt to the bad event. Do you adapt quickly to bad events and bounce back, or do you continue to dwell on a bad event and let it keep you down?
An example of adaptation to circumstances is shown in Figure 3, which shows the daily moods of “Harry,” a college student who had Hodgkin’s lymphoma (a form of cancer). As can be seen, over the 6-week period when I studied Harry’s moods, they went up and down. A few times his moods dropped into the negative zone below the horizontal blue line. Most of the time Harry’s moods were in the positive zone above the line. But about halfway through the study Harry was told that his cancer was in remission—effectively cured—and his moods on that day spiked way up. But notice that he quickly adapted—the effects of the good news wore off, and Harry adapted back toward where he was before. So even the very best news one can imagine—recovering from cancer—was not enough to give Harry a permanent “high.” Notice too, however, that Harry’s moods averaged a bit higher after cancer remission. Thus, the typical pattern is a strong response to the event, and then a dampening of this joy over time. However, even in the long run, the person might be a bit happier or unhappier than before.
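The pattern in Harry’s moods, a sharp spike followed by a gradual return toward a (possibly slightly shifted) baseline, can be sketched with a simple decay curve. This is offered only as an illustrative description of adaptation, not a model fitted to Harry’s actual data:

\[ \text{mood}(t) \approx b_{\text{new}} + \left( m_{\text{peak}} - b_{\text{new}} \right) e^{-t/\tau} \]

Here \( m_{\text{peak}} \) is the mood immediately after the event, \( b_{\text{new}} \) is the new baseline (a bit above the old one in Harry’s case), and \( \tau \) reflects how quickly the reaction fades; resilient people correspond to a small \( \tau \), while those who dwell on events correspond to a large one.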
Is the state of happiness truly a good thing? Is happiness simply a feel-good state that leaves us unmotivated and ignorant of the world’s problems? Should people strive to be happy, or are they better off to be grumpy but “realistic”? Some have argued that happiness is actually a bad thing, leaving us superficial and uncaring. Most of the evidence so far suggests that happy people are healthier, more sociable, more productive, and better citizens (Diener & Tay, 2012; Lyubomirsky, King, & Diener, 2005). Research shows that the happiest individuals are usually very sociable. The table below summarizes some of the major findings.
Although it is beneficial generally to be happy, this does not mean that people should be constantly euphoric. In fact, it is appropriate and helpful sometimes to be sad or to worry. At times a bit of worry mixed with positive feelings makes people more creative. Most successful people in the workplace seem to be those who are mostly positive but sometimes a bit negative. Thus, people need not be a superstar in happiness to be a superstar in life. What is not helpful is to be chronically unhappy. The important question is whether people are satisfied with how happy they are. If you feel mostly positive and satisfied, and yet occasionally worry and feel stressed, this is probably fine as long as you feel comfortable with this level of happiness. If you are a person who is chronically unhappy much of the time, changes are needed, and perhaps professional intervention would help as well.
SWB researchers have relied primarily on self-report scales to assess happiness—how people rate their own happiness levels on self-report surveys. People respond to numbered scales to indicate their levels of satisfaction, positive feelings, and lack of negative feelings. You can see where you stand on these scales by going to http://internal.psychology.illinois.edu/~ediener/scales.html or by filling out the Flourishing Scale below. These measures will give you an idea of what popular scales of happiness are like.
The self-report scales have proved to be relatively valid (Diener, Inglehart, & Tay, 2012), although people can lie, or fool themselves, or be influenced by their current moods or situational factors. Because the scales are imperfect, well-being scientists also sometimes use biological measures of happiness (e.g., the strength of a person’s immune system, or measuring various brain areas that are associated with greater happiness). Scientists also use reports by family, coworkers, and friends—these people reporting how happy they believe the target person is. Other measures are used as well to help overcome some of the shortcomings of the self-report scales, but most of the field is based on people telling us how happy they are using numbered scales.
There are scales to measure life satisfaction (Pavot & Diener, 2008), positive and negative feelings, and whether a person is psychologically flourishing (Diener et al., 2009). Flourishing has to do with whether a person feels meaning in life, has close relationships, and feels a sense of mastery over important life activities. You can take the well-being scales created in the Diener laboratory, and let others take them too, because they are free and open for use.
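As a concrete example of how these scales are scored, the Flourishing Scale consists of eight statements, each rated on a 1–7 agreement scale, and the total is a simple sum of the item ratings:

\[ \text{Flourishing} = \sum_{i=1}^{8} x_i, \qquad x_i \in \{1, \dots, 7\} \]

This gives a possible range of 8 (strong disagreement with every item) to 56 (strong agreement with every item), with higher scores indicating that a person views important areas of his or her life in positive terms.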
Most people are fairly happy, but many of them also wish they could be a bit more satisfied and enjoy life more. Prescriptions about how to achieve more happiness are often oversimplified because happiness has different components and prescriptions need to be aimed at where each individual needs improvement—one size does not fit all. A person might be strong in one area and deficient in other areas. People with prolonged serious unhappiness might need help from a professional. Thus, recommendations for how to achieve happiness are often appropriate for one person but not for others. With this in mind, I list in Table 4 below some general recommendations for you to be happier (see also Lyubomirsky, 2013):
Watch this lecture from MIT’s John Gabrieli on stress. Pay close attention to the biological consequences of stress.
Social psychologists began trying to answer this question following the unfortunate murder of Kitty Genovese in 1964 (Dovidio, Piliavin, Schroeder, & Penner, 2006; Penner, Dovidio, Piliavin, & Schroeder, 2005). A knife-wielding assailant attacked Kitty repeatedly as she was returning to her apartment early one morning. At least 38 people may have been aware of the attack, but no one came to save her. More recently, in 2010, Hugo Alfredo Tale-Yax was stabbed when he apparently tried to intervene in an argument between a man and woman. As he lay dying in the street, only one man checked his status, but many others simply glanced at the scene and continued on their way. (One passerby did stop to take a cellphone photo, however.) Unfortunately, failures to come to the aid of someone in need are not unique, as the segments on “What Would You Do?” show. Help is not always forthcoming for those who may need it the most. Trying to understand why people do not always help became the focus of bystander intervention research (e.g., Latané & Darley, 1970).
To answer the question regarding when people help, researchers have focused on 1) how bystanders come to define ambiguous situations and recognize that help is needed, 2) how the presence of others shapes a bystander’s sense of personal responsibility, and 3) the costs and rewards a potential helper weighs before getting involved.
The decision to help is not a simple yes/no proposition. In fact, a series of questions must be addressed before help is given—even in emergencies in which time may be of the essence. Sometimes help comes quickly; an onlooker recently jumped from a Philadelphia subway platform to help a stranger who had fallen on the track. Help was clearly needed and was quickly given. But some situations are ambiguous, and potential helpers may have to decide whether a situation is one in which help, in fact, needs to be given.
To define ambiguous situations (including many emergencies), potential helpers may look to the actions of others to decide what should be done. But those others are looking around too, also trying to figure out what to do. Everyone is looking, but no one is acting! Relying on others to define the situation and then erroneously concluding that no intervention is necessary when help is actually needed is called pluralistic ignorance (Latané & Darley, 1970). When people use the inactions of others to define their own course of action, the resulting pluralistic ignorance leads to less help being given.
Simply being with others may facilitate or inhibit whether we get involved in other ways as well. In situations in which help is needed, the presence or absence of others may affect whether a bystander will assume personal responsibility to give the assistance. If the bystander is alone, personal responsibility to help falls solely on the shoulders of that person. But what if others are present? Although it might seem that having more potential helpers around would increase the chances of the victim getting help, the opposite is often the case. Knowing that someone else could help seems to relieve bystanders of personal responsibility, so bystanders do not intervene. This phenomenon is known as diffusion of responsibility (Darley & Latané, 1968).
On the other hand, watch the video of the race officials following the 2013 Boston Marathon after two bombs exploded as runners crossed the finish line. Despite the presence of many spectators, the yellow-jacketed race officials immediately rushed to give aid and comfort to the victims of the blast. Each one no doubt felt a personal responsibility to help by virtue of their official capacity in the event; fulfilling the obligations of their roles overrode the influence of the diffusion of responsibility effect.
There is an extensive body of research showing the negative impact of pluralistic ignorance and diffusion of responsibility on helping (Fisher et al., 2011), in both emergencies and everyday need situations. These studies show the tremendous importance potential helpers place on the social situation in which unfortunate events occur, especially when it is not clear what should be done and who should do it. Other people provide important social information about how we should act and what our personal obligations might be. But does knowing a person needs help and accepting responsibility to provide that help mean the person will get assistance? Not necessarily.
The nature of the help needed plays a crucial role in determining what happens next. Specifically, potential helpers engage in a cost–benefit analysis before getting involved (Dovidio et al., 2006). If the needed help is of relatively low cost in terms of time, money, resources, or risk, then help is more likely to be given. Lending a classmate a pencil is easy; confronting the knife-wielding assailant who attacked Kitty Genovese is an entirely different matter. As the unfortunate case of Hugo Alfredo Tale-Yax demonstrates, intervening may cost the life of the helper.
The potential rewards of helping someone will also enter into the equation, perhaps offsetting the cost of helping. Thanks from the recipient of help may be a sufficient reward. If helpful acts are recognized by others, helpers may receive social rewards of praise or monetary rewards. Even avoiding feelings of guilt if one does not help may be considered a benefit. Potential helpers consider how much helping will cost and compare those costs to the rewards that might be realized; it is the economics of helping. If costs outweigh the rewards, helping is less likely. If rewards are greater than cost, helping is more likely.
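Put schematically (as a summary of the logic just described, not a formal equation from the cited research), the decision weighs the anticipated rewards of helping, such as thanks, praise, or avoided guilt, against its anticipated costs in time, money, resources, and risk:

\[ \text{help when} \quad R_{\text{thanks}} + R_{\text{praise}} + R_{\text{avoided guilt}} \; > \; C_{\text{time}} + C_{\text{money}} + C_{\text{risk}} \]

Lending a pencil falls easily on the helping side of this inequality; confronting an armed assailant usually does not.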
Do you know someone who always seems to be ready, willing, and able to help? Do you know someone who never helps out? It seems there are personality and individual differences in the helpfulness of others. To answer the question of who chooses to help, researchers have examined 1) the role that sex and gender play in helping, 2) what personality traits are associated with helping, and 3) the characteristics of the “prosocial personality.”
In terms of individual differences that might matter, one obvious question is whether men or women are more likely to help. In one of the “What Would You Do?” segments, a man takes a woman’s purse from the back of her chair and then leaves the restaurant. Initially, no one responds, but as soon as the woman asks about her missing purse, a group of men immediately rush out the door to catch the thief. So, are men more helpful than women? The quick answer is “not necessarily.” It all depends on the type of help needed. To be very clear, the general level of helpfulness may be pretty much equivalent between the sexes, but men and women help in different ways (Becker & Eagly, 2004; Eagly & Crowley, 1986). What accounts for these differences?
Two factors help to explain sex and gender differences in helping. The first is related to the cost–benefit analysis process discussed previously. Physical differences between men and women may come into play (e.g., Wood & Eagly, 2002); the fact that men tend to have greater upper body strength than women makes the cost of intervening in some situations less for a man. Confronting a thief is a risky proposition, and some strength may be needed in case the perpetrator decides to fight. A bigger, stronger bystander is less likely to be injured and more likely to be successful.
The second explanation is simple socialization. Men and women have traditionally been raised to play different social roles that prepare them to respond differently to the needs of others, and people tend to help in ways that are most consistent with their gender roles. Female gender roles encourage women to be compassionate, caring, and nurturing; male gender roles encourage men to take physical risks, to be heroic and chivalrous, and to be protective of those less powerful. As a consequence of social training and the gender roles that people have assumed, men may be more likely to jump onto subway tracks to save a fallen passenger, but women are more likely to give comfort to a friend with personal problems (Diekman & Eagly, 2000; Eagly & Crowley, 1986). There may be some specialization in the types of help given by the two sexes, but it is nice to know that there is someone out there—man or woman—who is able to give you the help that you need, regardless of what kind of help it might be.
Graziano and his colleagues (e.g., Graziano & Tobin, 2009; Graziano, Habashi, Sheese, & Tobin, 2007) have explored how agreeableness—one of the Big Five personality dimensions (e.g., Costa & McCrae, 1988)—plays an important role in prosocial behavior. Agreeableness is a core trait that includes such dispositional characteristics as being sympathetic, generous, forgiving, and helpful, and behavioral tendencies toward harmonious social relations and likeability. At the conceptual level, a positive relationship between agreeableness and helping may be expected, and research by Graziano et al. (2007) has found that those higher on the agreeableness dimension are, in fact, more likely than those low on agreeableness to help siblings, friends, strangers, or members of some other group. Agreeable people seem to expect that others will be similarly cooperative and generous in interpersonal relations, and they, therefore, act in helpful ways that are likely to elicit positive social interactions.
Rather than focusing on a single trait, Penner and his colleagues (Penner, Fritzsche, Craiger, & Freifeld, 1995; Penner & Orom, 2010) have taken a somewhat broader perspective and identified what they call the prosocial personality orientation. Their research indicates that two major characteristics are related to the prosocial personality and prosocial behavior. The first characteristic is called other-oriented empathy: People high on this dimension have a strong sense of social responsibility, empathize with and feel emotionally tied to those in need, understand the problems the victim is experiencing, and have a heightened sense of moral obligation to be helpful. This factor has been shown to be highly correlated with the trait of agreeableness discussed previously. The second characteristic, helpfulness, is more behaviorally oriented. Those high on the helpfulness factor have been helpful in the past, and because they believe they can be effective with the help they give, they are more likely to be helpful in the future.
Finally, the question of why a person would help needs to be asked. What motivation is there for that behavior? Psychologists have suggested that 1) evolutionary forces may serve to predispose humans to help others, 2) egoistic concerns may determine if and when help will be given, and 3) selfless, altruistic motives may also promote helping in some cases.
Our evolutionary past may provide keys about why we help (Buss, 2004). Our very survival was no doubt promoted by the prosocial relations with clan and family members, and, as a hereditary consequence, we may now be especially likely to help those closest to us—blood-related relatives with whom we share a genetic heritage. According to evolutionary psychology, we are helpful in ways that increase the chances that our DNA will be passed along to future generations (Burnstein, Crandall, & Kitayama, 1994)—the goal of the “selfish gene” (Dawkins, 1976). Our personal DNA may not always move on, but we can still be successful in getting some portion of our DNA transmitted if our daughters, sons, nephews, nieces, and cousins survive to produce offspring. The favoritism shown for helping our blood relatives is called kin selection (Hamilton, 1964).
But, we do not restrict our relationships just to our own family members. We live in groups that include individuals who are unrelated to us, and we often help them too. Why? Reciprocal altruism (Trivers, 1971) provides the answer. Because of reciprocal altruism, we are all better off in the long run if we help one another. If helping someone now increases the chances that you will be helped later, then your overall chances of survival are increased. There is the chance that someone will take advantage of your help and not return your favors. But people seem predisposed to identify those who fail to reciprocate, and punishments including social exclusion may result (Buss, 2004). Cheaters will not enjoy the benefit of help from others, reducing the likelihood of the survival of themselves and their kin.
Evolutionary forces may provide a general inclination for being helpful, but they may not be as good an explanation for why we help in the here and now. What factors serve as proximal influences for decisions to help?
Most people would like to think that they help others because they are concerned about the other person’s plight. In truth, the reasons why we help may be more about ourselves than others: Egoistic or selfish motivations may make us help. Implicitly, we may ask, “What’s in it for me?” There are two major theories that explain what types of reinforcement helpers may be seeking. The negative state relief model (e.g., Cialdini, Darby, & Vincent, 1973; Cialdini, Kenrick, & Baumann, 1982) suggests that people sometimes help in order to make themselves feel better. Whenever we are feeling sad, we can use helping someone else as a positive mood boost to feel happier. Through socialization, we have learned that helping can serve as a secondary reinforcement that will relieve negative moods (Cialdini & Kenrick, 1976).
The arousal: cost–reward model provides an additional way to understand why people help (e.g., Piliavin, Dovidio, Gaertner, & Clark, 1981). This model focuses on the aversive feelings aroused by seeing another in need. If you have ever heard an injured puppy yelping in pain, you know that feeling, and you know that the best way to relieve that feeling is to help and to comfort the puppy. Similarly, when we see someone who is suffering in some way (e.g., injured, homeless, hungry), we vicariously experience a sympathetic arousal that is unpleasant, and we are motivated to eliminate that aversive state. One way to do that is to help the person in need. By eliminating the victim’s pain, we eliminate our own aversive arousal. Helping is an effective way to alleviate our own discomfort.
As an egoistic model, the arousal: cost–reward model explicitly includes the cost/reward considerations that come into play. Potential helpers will find ways to cope with the aversive arousal that will minimize their costs—maybe by means other than direct involvement. For example, the costs of directly confronting a knife-wielding assailant might stop a bystander from getting involved, but the cost of some indirect help (e.g., calling the police) may be acceptable. In either case, the victim’s need is addressed. Unfortunately, if the costs of helping are too high, bystanders may reinterpret the situation to justify not helping at all. We now know that the attack on Kitty Genovese was a murderous assault, but it may have been misperceived as a lover’s spat by someone who just wanted to go back to sleep. For some, fleeing the situation causing their distress may do the trick (Piliavin et al., 1981).
The egoistically based negative state relief model and the arousal: cost–reward model see the primary motivation for helping as being the helper’s own outcome. Recognize that the victim’s outcome is of relatively little concern to the helper—benefits to the victim are incidental byproducts of the exchange (Dovidio et al., 2006). The victim may be helped, but the helper’s real motivation according to these two explanations is egoistic: Helpers help to the extent that it makes them feel better.
Although many researchers believe that egoism is the only motivation for helping, others suggest that altruism—helping that has as its ultimate goal the improvement of another’s welfare—may also be a motivation for helping under the right circumstances. Batson (2011) has offered the empathy–altruism model to explain altruistically motivated helping for which the helper expects no benefits. According to this model, the key for altruism is empathizing with the victim, that is, putting oneself in the shoes of the victim and imagining how the victim must feel. When taking this perspective and having empathic concern, potential helpers become primarily interested in increasing the well-being of the victim, even if the helper must incur some costs that might otherwise be easily avoided. The empathy–altruism model does not dismiss egoistic motivations; helpers not empathizing with a victim may experience personal distress and have an egoistic motivation, not unlike the feelings and motivations explained by the arousal: cost–reward model. Because egoistically motivated individuals are primarily concerned with their own cost–benefit outcomes, they are less likely to help if they think they can escape the situation with no costs to themselves. In contrast, altruistically motivated helpers are willing to accept the cost of helping to benefit a person with whom they have empathized—this “self-sacrificial” approach to helping is the hallmark of altruism (Batson, 2011).
Although there is still some controversy about whether people can ever act for purely altruistic motives, it is important to recognize that, while helpers may derive some personal rewards by helping another, the help that has been given is also benefitting someone who was in need. The residents who offered food, blankets, and shelter to stranded runners who were unable to get back to their hotel rooms because of the Boston Marathon bombing undoubtedly received positive rewards because of the help they gave, but those stranded runners who were helped got what they badly needed as well. “In fact, it is quite remarkable how the fates of people who have never met can be so intertwined and complementary. Your benefit is mine; and mine is yours” (Dovidio et al., 2006, p. 143).
We started this module by asking the question, “Who helps when and why?” As we have shown, the question of when help will be given is not quite as simple as the viewers of “What Would You Do?” believe. The power of the situation that operates on potential helpers in real time is not fully considered. What might appear to be a split-second decision to help is actually the result of consideration of multiple situational factors (e.g., the helper’s interpretation of the situation, the presence and ability of others to provide the help, the results of a cost–benefit analysis) (Dovidio et al., 2006). We have found that men and women tend to help in different ways—men are more impulsive and physically active, while women are more nurturing and supportive. Personality characteristics such as agreeableness and the prosocial personality orientation also affect people’s likelihood of giving assistance to others. And, why would people help in the first place? In addition to evolutionary forces (e.g., kin selection, reciprocal altruism), there is extensive evidence to show that helping and prosocial acts may be motivated by selfish, egoistic desires; by selfless, altruistic goals; or by some combination of egoistic and altruistic motives. (For a fuller consideration of the field of prosocial behavior, we refer you to Dovidio et al., 2006.)
By the end of this section, you will be able to:
There are many theories regarding how babies and children grow and develop into happy, healthy adults. We explore several of these theories in this section.
Sigmund Freud (1856–1939) believed that personality develops during early childhood. For Freud, childhood experiences shape our personalities and behavior as adults. Freud viewed development as discontinuous; he believed that each of us must pass through a series of stages during childhood, and that if we lack proper nurturance and parenting during a stage, we may become stuck, or fixated, in that stage. Freud’s stages are called the stages of psychosexual development. According to Freud, children’s pleasure-seeking urges are focused on a different area of the body, called an erogenous zone, at each of the five stages of development: oral, anal, phallic, latency, and genital.
While most of Freud’s ideas have not found support in modern research, we cannot discount the contributions that Freud has made to the field of psychology. Psychologists today dispute Freud’s psychosexual stages as a legitimate explanation for how one’s personality develops, but what we can take away from Freud’s theory is that personality is shaped, in some part, by experiences we have in childhood. These stages are discussed in detail in the chapter on personality.
Erik Erikson (1902–1994), another stage theorist, took Freud’s theory and modified it as psychosocial theory. Erikson’s psychosocial development theory emphasizes the social nature of our development rather than its sexual nature. While Freud believed that personality is shaped only in childhood, Erikson proposed that personality development takes place all through the lifespan. Erikson suggested that how we interact with others is what affects our sense of self, or what he called the ego identity.
Erikson proposed that we are motivated by a need to achieve competence in certain areas of our lives. According to psychosocial theory, we experience eight stages of development over our lifespan, from infancy through late adulthood. At each stage there is a conflict, or task, that we need to resolve. Successful completion of each developmental task results in a sense of competence and a healthy personality. Failure to master these tasks leads to feelings of inadequacy.
According to Erikson (1963), trust is the basis of our development during infancy (birth to 12 months). Therefore, the primary task of this stage is trust versus mistrust. Infants are dependent upon their caregivers, so caregivers who are responsive and sensitive to their infant’s needs help their baby to develop a sense of trust; their baby will see the world as a safe, predictable place. Unresponsive caregivers who do not meet their baby’s needs can engender feelings of anxiety, fear, and mistrust; their baby may see the world as unpredictable.
As toddlers (ages 1–3 years) begin to explore their world, they learn that they can control their actions and act on the environment to get results. They begin to show clear preferences for certain elements of the environment, such as food, toys, and clothing. A toddler’s main task is to resolve the issue of autonomy versus shame and doubt, by working to establish independence. This is the “me do it” stage. For example, we might observe a budding sense of autonomy in a 2-year-old child who wants to choose her clothes and dress herself. Although her outfits might not be appropriate for the situation, her input in such basic decisions has an effect on her sense of independence. If denied the opportunity to act on her environment, she may begin to doubt her abilities, which could lead to low self-esteem and feelings of shame.
Once children reach the preschool stage (ages 3–6 years), they are capable of initiating activities and asserting control over their world through social interactions and play. According to Erikson, preschool children must resolve the task of initiative versus guilt. By learning to plan and achieve goals while interacting with others, preschool children can master this task. Those who do will develop self-confidence and feel a sense of purpose. Those who are unsuccessful at this stage—with their initiative misfiring or stifled—may develop feelings of guilt. How might over-controlling parents stifle a child’s initiative?
During the elementary school stage (ages 6–12), children face the task of industry versus inferiority. Children begin to compare themselves to their peers to see how they measure up. They either develop a sense of pride and accomplishment in their schoolwork, sports, social activities, and family life, or they feel inferior and inadequate when they don’t measure up. What are some things parents and teachers can do to help children develop a sense of competence and a belief in themselves and their abilities?
In adolescence (ages 12–18), children face the task of identity versus role confusion. According to Erikson, an adolescent’s main task is developing a sense of self. Adolescents struggle with questions such as “Who am I?” and “What do I want to do with my life?” Along the way, most adolescents try on many different selves to see which ones fit. Adolescents who are successful at this stage have a strong sense of identity and are able to remain true to their beliefs and values in the face of problems and other people’s perspectives. What happens to apathetic adolescents, who do not make a conscious search for identity, or those who are pressured to conform to their parents’ ideas for the future? These teens will have a weak sense of self and experience role confusion. They are unsure of their identity and confused about the future.
People in early adulthood (i.e., 20s through early 40s) are concerned with intimacy versus isolation. After we have developed a sense of self in adolescence, we are ready to share our life with others. Erikson said that we must have a strong sense of self before developing intimate relationships with others. Adults who do not develop a positive self-concept in adolescence may experience feelings of loneliness and emotional isolation.
When people reach their 40s, they enter the time known as middle adulthood, which extends to the mid-60s. The social task of middle adulthood is generativity versus stagnation. Generativity involves finding your life’s work and contributing to the development of others, through activities such as volunteering, mentoring, and raising children. Those who do not master this task may experience stagnation, having little connection with others and little interest in productivity and self-improvement.
From the mid-60s to the end of life, we are in the period of development known as late adulthood. Erikson’s task at this stage is called integrity versus despair. He said that people in late adulthood reflect on their lives and feel either a sense of satisfaction or a sense of failure. People who feel proud of their accomplishments feel a sense of integrity, and they can look back on their lives with few regrets. However, people who are not successful at this stage may feel as if their life has been wasted. They focus on what “would have,” “should have,” and “could have” been. They face the end of their lives with feelings of bitterness, depression, and despair. The table below summarizes the stages of Erikson’s theory.
| Stage | Age (years) | Developmental Task | Description |
| --- | --- | --- | --- |
| 1 | 0–1 | Trust vs. mistrust | Trust (or mistrust) that basic needs, such as nourishment and affection, will be met |
| 2 | 1–3 | Autonomy vs. shame/doubt | Develop a sense of independence in many tasks |
| 3 | 3–6 | Initiative vs. guilt | Take initiative on some activities—may develop guilt when unsuccessful or boundaries overstepped |
| 4 | 7–11 | Industry vs. inferiority | Develop self-confidence in abilities when competent or sense of inferiority when not |
| 5 | 12–18 | Identity vs. confusion | Experiment with and develop identity and roles |
| 6 | 19–29 | Intimacy vs. isolation | Establish intimacy and relationships with others |
| 7 | 30–64 | Generativity vs. stagnation | Contribute to society and be part of a family |
| 8 | 65– | Integrity vs. despair | Assess and make sense of life and meaning of contributions |
Jean Piaget (1896–1980) is another stage theorist who studied childhood development. Instead of approaching development from a psychoanalytical or psychosocial perspective, Piaget focused on children’s cognitive growth. He believed that thinking is a central aspect of development and that children are naturally inquisitive. However, he said that children do not think and reason like adults (Piaget, 1930, 1932). His theory of cognitive development holds that our cognitive abilities develop through specific stages, which exemplifies the discontinuity approach to development. As we progress to a new stage, there is a distinct shift in how we think and reason.
Piaget said that children develop schemata to help them understand the world. Schemata are concepts (mental models) that are used to help us categorize and interpret information. By the time children have reached adulthood, they have created schemata for almost everything. When children learn new information, they adjust their schemata through two processes: assimilation and accommodation. First, they assimilate new information or experiences in terms of their current schemata: assimilation is when they take in information that is comparable to what they already know. Accommodation describes when they change their schemata based on new information. This process continues as children interact with their environment.
For example, 2-year-old Blake learned the schema for dogs because his family has a Labrador retriever. When Blake sees other dogs in his picture books, he says, “Look mommy, dog!” Thus, he has assimilated them into his schema for dogs. One day, Blake sees a sheep for the first time and says, “Look mommy, dog!” Having a basic schema that a dog is an animal with four legs and fur, Blake thinks all furry, four-legged creatures are dogs. When Blake’s mom tells him that the animal he sees is a sheep, not a dog, Blake must accommodate his schema for dogs to include more information based on his new experiences. Blake’s schema for dog was too broad, since not all furry, four-legged creatures are dogs. He now modifies his schema for dogs and forms a new one for sheep.
Like Freud and Erikson, Piaget thought development unfolds in a series of stages associated with approximate age ranges. He proposed a theory of cognitive development that unfolds in four stages: sensorimotor, preoperational, concrete operational, and formal operational (see the table below).
| Age (years) | Stage | Description | Developmental issues |
| --- | --- | --- | --- |
| 0–2 | Sensorimotor | World experienced through senses and actions | Object permanence |
| 2–6 | Preoperational | Use words and images to represent things, but lack logical reasoning | Pretend play |
| 7–11 | Concrete operational | Understand concrete events and analogies logically; perform arithmetical operations | Conservation |
| 12– | Formal operational | Utilize abstract reasoning | Formal operations |
The first stage is the sensorimotor stage, which lasts from birth to about 2 years old. During this stage, children learn about the world through their senses and motor behavior. Young children put objects in their mouths to see if the items are edible, and once they can grasp objects, they may shake or bang them to see if they make sounds. Between 5 and 8 months old, the child develops object permanence, which is the understanding that even if something is out of sight, it still exists (Bogartz, Shinskey, & Schilling, 2000). According to Piaget, young infants do not remember an object after it has been removed from sight. Piaget studied infants’ reactions when a toy was first shown to an infant and then hidden under a blanket. Infants who had already developed object permanence would reach for the hidden toy, indicating that they knew it still existed, whereas infants who had not developed object permanence would appear confused.
In Piaget’s view, around the same time children develop object permanence, they also begin to exhibit stranger anxiety, which is a fear of unfamiliar people. Babies may demonstrate this by crying and turning away from a stranger, by clinging to a caregiver, or by attempting to reach their arms toward familiar faces such as parents. Stranger anxiety results when a child is unable to assimilate the stranger into an existing schema; therefore, she can’t predict what her experience with that stranger will be like, which results in a fear response.
Piaget’s second stage is the preoperational stage, which is from approximately 2 to 7 years old. In this stage, children can use symbols to represent words, images, and ideas, which is why children in this stage engage in pretend play. A child’s arms might become airplane wings as he zooms around the room, or a child with a stick might become a brave knight with a sword. Children also begin to use language in the preoperational stage, but they cannot understand adult logic or mentally manipulate information (the term operational refers to logical manipulation of information, so children at this stage are considered to be pre-operational). Children’s logic is based on their own personal knowledge of the world so far, rather than on conventional knowledge. For example, their dad gave a slice of pizza to 10-year-old Keiko and another slice to her 3-year-old brother, Kenny. Kenny’s pizza slice was cut into five pieces, so Kenny told his sister that he got more pizza than she did. Children in this stage cannot perform mental operations because they have not developed an understanding of conservation, which is the idea that even if you change the appearance of something, it is still equal in size as long as nothing has been removed or added.
During this stage, we also expect children to display egocentrism, which means that the child is not able to take the perspective of others. A child at this stage thinks that everyone sees, thinks, and feels just as they do. Let’s look at Kenny and Keiko again. Keiko’s birthday is coming up, so their mom takes Kenny to the toy store to choose a present for his sister. He selects an Iron Man action figure for her, thinking that if he likes the toy, his sister will too. An egocentric child is not able to infer the perspective of other people and instead attributes his own perspective.
Piaget’s third stage is the concrete operational stage, which occurs from about 7 to 11 years old. In this stage, children can think logically about real (concrete) events; they have a firm grasp on the use of numbers and start to employ memory strategies. They can perform mathematical operations and understand transformations, such as addition is the opposite of subtraction, and multiplication is the opposite of division. In this stage, children also master the concept of conservation: Even if something changes shape, its mass, volume, and number stay the same. For example, if you pour water from a tall, thin glass to a short, fat glass, you still have the same amount of water. Remember Keiko and Kenny and the pizza? How did Keiko know that Kenny was wrong when he said that he had more pizza?
Children in the concrete operational stage also understand the principle of reversibility, which means that objects can be changed and then returned to their original form or condition. Take, for example, water that you poured into the short, fat glass: You can pour water from the fat glass back to the thin glass and still have the same amount (minus a couple of drops).
The fourth, and last, stage in Piaget’s theory is the formal operational stage, which is from about age 11 to adulthood. Whereas children in the concrete operational stage are able to think logically only about concrete events, children in the formal operational stage can also deal with abstract ideas and hypothetical situations. Children in this stage can use abstract thinking to problem solve, look at alternative solutions, and test these solutions. In adolescence, a renewed egocentrism occurs. For example, a 15-year-old with a very small pimple on her face might think it is huge and incredibly visible, under the mistaken impression that others must share her perceptions.
As with other major contributors of theories of development, several of Piaget’s ideas have come under criticism based on the results of further research. For example, several contemporary studies support a model of development that is more continuous than Piaget’s discrete stages (Courage & Howe, 2002; Siegler, 2005, 2006). Many others suggest that children reach cognitive milestones earlier than Piaget describes (Baillargeon, 2004; de Hevia & Spelke, 2010).
According to Piaget, the highest level of cognitive development is formal operational thought, which develops between 11 and 20 years old. However, many developmental psychologists disagree with Piaget, suggesting a fifth stage of cognitive development, known as the postformal stage (Basseches, 1984; Commons & Bresette, 2006; Sinnott, 1998). In postformal thinking, decisions are made based on situations and circumstances, and logic is integrated with emotion as adults develop principles that depend on contexts. One way that we can see the difference between an adult in postformal thought and an adolescent in formal operations is in terms of how they handle emotionally charged issues.
It seems that once we reach adulthood our problem-solving abilities change: As we attempt to solve problems, we tend to think more deeply about many areas of our lives, such as relationships, work, and politics (Labouvie-Vief & Diehl, 1999). Because of this, postformal thinkers are able to draw on past experiences to help them solve new problems. Problem-solving strategies using postformal thought vary, depending on the situation. What does this mean? Adults can recognize, for example, that what seems to be an ideal solution to a problem at work involving a disagreement with a colleague may not be the best solution to a disagreement with a significant other.
Watch this lecture on Child Development from MIT’s John Gabrieli.
Watch this TED talk by Alison Gopnik to learn about how recent discoveries show us that babies are probably smarter than we think.
Click on these links to access some common personality tests (note that some of the tests were already found within the readings in this module).
Watch this video from Dan Pink’s TED talk on “The surprising truth about what motivates us.” Think about what things motivate you, and how you anticipate that you might respond to the types of incentives explained in the talk.
Go to the Faces Memory Challenge found here:
Go to . Play the memory solitaire game. Then play game #2: Tell Yourself a Story.
Watch this lecture from MIT’s John Gabrieli on memory. Listen for key vocabulary terms from this module, particularly:
To see a modern application of various theories on intelligence testing, watch “The Battle of the Brains” from the BBC:
By the end of this section, you will be able to:
Imagine two thirty-something men, Adam and Ben, walking down a corridor. Judging from their clothing, they are young businessmen taking a break from work. They then have this exchange.
Adam: “You know, Gary bought a ring.”
Ben: “Oh yeah? For Mary, isn’t it?” (Adam nods.)
If you are watching this scene and hearing their conversation, what can you guess from this? First of all, you’d guess that Gary bought a ring for Mary, whoever Gary and Mary might be. Perhaps you would infer that Gary is getting married to Mary. What else can you guess? Perhaps that Adam and Ben are fairly close colleagues, and both of them know Gary and Mary reasonably well. In other words, you can guess the social relationships surrounding the people who are engaging in the conversation and the people whom they are talking about.
Language is used in our everyday lives. If psychology is a science of behavior, scientific investigation of language use must be one of its most central topics, because language use is ubiquitous. Every human group has a language; human infants (except those who have unfortunate disabilities) learn at least one language without being taught explicitly. Even children who don’t have much language to begin with can, when brought together, develop and use a language of their own. In at least one known instance, children developed their own language spontaneously, with minimal input from adults. In Nicaragua in the 1980s, deaf children who were separately raised in various locations were brought together to schools for the first time. Teachers tried to teach them Spanish with little success. However, they began to notice that the children were using their hands and gestures, apparently to communicate with each other. Linguists were brought in to find out what was happening—it turned out the children had developed their own sign language by themselves. That was the birth of a new language, Nicaraguan Sign Language (Kegl, Senghas, & Coppola, 1999). Language is ubiquitous, and we humans are born to use it.
If language is so ubiquitous, how do we actually use it? To be sure, some of us use it to write diaries and poetry, but the primary form of language use is interpersonal. That’s how we learn language, and that’s how we use it. Just like Adam and Ben, we exchange words and utterances to communicate with each other. Let’s consider the simplest case of two people, Adam and Ben, talking with each other. According to Clark (1996), in order for them to carry out a conversation, they must keep track of common ground. Common ground is a set of knowledge that the speaker and listener share and that they think, assume, or otherwise take for granted that they share. So, when Adam says, “Gary bought a ring,” he takes for granted that Ben knows the meaning of the words he is using, who Gary is, and what buying a ring means. When Ben says, “For Mary, isn’t it?” he takes for granted that Adam knows the meaning of these words, who Mary is, and what buying a ring for someone means. All these are part of their common ground.
Note that, when Adam presents the information about Gary’s purchase of a ring, Ben responds by presenting his inference about who the recipient of the ring might be, namely, Mary. In conversational terms, Ben’s utterance acts as evidence for his comprehension of Adam’s utterance—“Yes, I understood that Gary bought a ring”—and Adam’s nod acts as evidence that he now has understood what Ben has said too—“Yes, I understood that you understood that Gary has bought a ring for Mary.” This new information is now added to the initial common ground. Thus, the pair of utterances by Adam and Ben (called an adjacency pair) together with Adam’s affirmative nod jointly completes one proposition, “Gary bought a ring for Mary,” and adds this information to their common ground. This way, common ground changes as we talk, gathering new information that we agree on and have evidence that we share. It evolves as people take turns to assume the roles of speaker and listener, and actively engage in the exchange of meaning.
Common ground helps people coordinate their language use. For instance, when a speaker says something to a listener, he or she takes into account their common ground, that is, what the speaker thinks the listener knows. Adam said what he did because he knew Ben would know who Gary was. He’d have said, “A friend of mine is getting married,” to another colleague who wouldn’t know Gary. This is called audience design (Fussell & Krauss, 1992); speakers design their utterances for their audiences by taking into account the audiences’ knowledge. If their audiences are seen to be knowledgeable about an object (such as Ben about Gary), they tend to use a brief label of the object (i.e., Gary); for a less knowledgeable audience, they use more descriptive words (e.g., “a friend of mine”) to help the audience understand their utterances (Box 1).
So, language use is a cooperative activity, but how do we coordinate our language use in a conversational setting? To be sure, we have a conversation in small groups. The number of people engaging in a conversation at a time is rarely more than four. By some counts (e.g., Dunbar, Duncan, & Nettle, 1995; James, 1953), more than 90 percent of conversations happen in a group of four individuals or fewer. Certainly, coordinating conversation among four is not as difficult as coordinating conversation among 10. But, even among only four people, if you think about it, everyday conversation is an almost miraculous achievement. We typically have a conversation by rapidly exchanging words and utterances in real time in a noisy environment. Think about your conversation at home in the morning, at a bus stop, in a shopping mall. How can we keep track of our common ground under such circumstances?
Pickering and Garrod (2004) argue that we achieve our conversational coordination by virtue of our ability to interactively align each other’s actions at different levels of language use: lexicon (i.e., words and expressions), syntax (i.e., grammatical rules for arranging words and expressions together), as well as speech rate and accent. For instance, when one person uses a certain expression to refer to an object in a conversation, others tend to use the same expression (e.g., Clark & Wilkes-Gibbs, 1986). Furthermore, if someone says “the cowboy offered a banana to the robber,” rather than “the cowboy offered the robber a banana,” others are more likely to use the same syntactic structure (e.g., “the girl gave a book to the boy” rather than “the girl gave the boy a book”) even if different words are involved (Branigan, Pickering, & Cleland, 2000). Finally, people in conversation tend to exhibit similar accents and rates of speech, and these are often associated with people’s social identity (Giles, Coupland, & Coupland, 1991). So, if you have lived in different places where people have somewhat different accents (e.g., United States and United Kingdom), you might have noticed that you speak with Americans with an American accent, but speak with Britons with a British accent.
Pickering and Garrod (2004) suggest that these interpersonal alignments at different levels of language use can activate similar situation models in the minds of those who are engaged in a conversation. Situation models are representations about the topic of a conversation. So, if you are talking about Gary and Mary with your friends, you might have a situation model of Gary giving Mary a ring in your mind. Pickering and Garrod’s theory is that as you describe this situation using language, others in the conversation begin to use similar words and grammar, and many other aspects of language use converge. As you all do so, similar situation models begin to be built in everyone’s mind through the mechanism known as priming. Priming occurs when your thinking about one concept (e.g., “ring”) reminds you about other related concepts (e.g., “marriage”, “wedding ceremony”). So, if everyone in the conversation knows about Gary, Mary, and the usual course of events associated with a ring—engagement, wedding, marriage, etc.— everyone is likely to construct a shared situation model about Gary and Mary. Thus, making use of our highly developed interpersonal ability to imitate (i.e., executing the same action as another person) and cognitive ability to infer (i.e., one idea leading to other ideas), we humans coordinate our common ground, share situation models, and communicate with each other.
What are humans doing when we are talking? Surely, we can communicate about mundane things such as what to have for dinner, but also more complex and abstract things such as the meaning of life and death, liberty, equality, and fraternity, and many other philosophical thoughts. Well, when naturally occurring conversations were actually observed (Dunbar, Marriott, & Duncan, 1997), a staggering 60%–70% of everyday conversation, for both men and women, turned out to be gossip—people talk about themselves and others whom they know. Just like Adam and Ben, more often than not, people use language to communicate about their social world.
Gossip may sound trivial and seem to belittle our noble ability for language—surely one of the most remarkable human abilities of all that distinguish us from other animals. Au contraire, some have argued that gossip—activities to think and communicate about our social world—is one of the most critical uses to which language has been put. Dunbar (1996) conjectured that gossiping is the human equivalent of grooming, monkeys and primates attending and tending to each other by cleaning each other’s fur. He argues that it is an act of socializing, signaling the importance of one’s partner. Furthermore, by gossiping, humans can communicate and share their representations about their social world—who their friends and enemies are, what the right thing to do is under what circumstances, and so on. In so doing, they can regulate their social world—making more friends and enlarging one’s own group (often called the ingroup, the group to which one belongs) against other groups (outgroups) that are more likely to be one’s enemies. Dunbar has argued that it is these social effects that have given humans an evolutionary advantage and larger brains, which, in turn, help humans to think more complex and abstract thoughts and, more important, maintain larger ingroups. Dunbar (1993) estimated an equation that predicts average group size of nonhuman primate genera from their average neocortex size (the part of the brain that supports higher order cognition). In line with his social brain hypothesis, Dunbar showed that those primate genera that have larger brains tend to live in larger groups. Furthermore, using the same equation, he was able to estimate the group size that human brains can support, which turned out to be about 150—approximately the size of modern hunter-gatherer communities. Dunbar’s argument is that language, brain, and human group living have co-evolved—language and human sociality are inseparable.
Dunbar’s hypothesis is controversial. Nonetheless, whether or not he is right, our everyday language use often ends up maintaining the existing structure of intergroup relationships. Language use can have implications for how we construe our social world. For one thing, there are subtle cues that people use to convey the extent to which someone’s action is just a special case in a particular context or a pattern that occurs across many contexts and more like a character trait of the person. According to Semin and Fiedler (1988), someone’s action can be described by an action verb that describes a concrete action (e.g., he runs), a state verb that describes the actor’s psychological state (e.g., he likes running), an adjective that describes the actor’s personality (e.g., he is athletic), or a noun that describes the actor’s role (e.g., he is an athlete). Depending on whether a verb or an adjective (or noun) is used, speakers can convey the permanency and stability of an actor’s tendency to act in a certain way—verbs convey particularity, whereas adjectives convey permanency. Intriguingly, people tend to describe positive actions of their ingroup members using adjectives (e.g., he is generous) rather than verbs (e.g., he gave a blind man some change), and negative actions of outgroup members using adjectives (e.g., he is cruel) rather than verbs (e.g., he kicked a dog). Maass, Salvi, Arcuri, and Semin (1989) called this a linguistic intergroup bias, which can produce and reproduce the representation of intergroup relationships by painting a picture favoring the ingroup. That is, ingroup members are typically good, and if they do anything bad, that’s more an exception in special circumstances; in contrast, outgroup members are typically bad, and if they do anything good, that’s more an exception.
In addition, when people exchange their gossip, it can spread through broader social networks. If gossip is transmitted from one person to another, the second person can transmit it to a third person, who then in turn transmits it to a fourth, and so on through a chain of communication. This often happens for emotive stories (Box 2). If gossip is repeatedly transmitted and spread, it can reach a large number of people. When stories travel through communication chains, they tend to become conventionalized (Bartlett, 1932). A Native American tale of the “War of the Ghosts” recounts a warrior’s encounter with ghosts traveling in canoes and his involvement with their ghostly battle. He is shot by an arrow but doesn’t die, returning home to tell the tale. After his narration, however, he becomes still, a black thing comes out of his mouth, and he eventually dies. When it was told to a student in England in the 1920s and retold from memory to another person, who, in turn, retold it to another and so on in a communication chain, the mythic tale became a story of a young warrior going to a battlefield, in which canoes became boats, and the black thing that came out of his mouth became simply his spirit (Bartlett, 1932). In other words, information transmitted multiple times was transformed to something that was easily understood by many, that is, information was assimilated into the common ground shared by most people in the linguistic community. More recently, Kashima (2000) conducted a similar experiment using a story that contained a sequence of events describing a young couple’s interaction, including both stereotypical and counter-stereotypical actions (e.g., a man watching sports on TV on Sunday vs. a man vacuuming the house). After the retelling of this story, much of the counter-stereotypical information was dropped, and stereotypical information was more likely to be retained. Because stereotypes are part of the common ground shared by the community, this finding too suggests that conversational retellings are likely to reproduce conventional content.
What are the psychological consequences of language use? When people use language to describe an experience, their thoughts and feelings are profoundly shaped by the linguistic representation that they have produced rather than the original experience per se (Holtgraves & Kashima, 2008). For example, Halberstadt (2003) showed a picture of a person displaying an ambiguous emotion and examined how people evaluated the displayed emotion. When people verbally explained why the target person was expressing a particular emotion, they tended to remember the person as feeling that emotion more intensely than when they simply labeled the emotion.
Thus, constructing a linguistic representation of another person’s emotion apparently biased the speaker’s memory of that person’s emotion. Furthermore, linguistically labeling one’s own emotional experience appears to alter the speaker’s neural processes. When people linguistically labeled negative images, the amygdala—a brain structure that is critically involved in the processing of negative emotions such as fear—was activated less than when they were not given a chance to label them (Lieberman et al., 2007). Potentially because of these effects of verbalizing emotional experiences, linguistic reconstructions of negative life events can have some therapeutic effects on those who suffer from the traumatic experiences (Pennebaker & Seagal, 1999). Lyubomirsky, Sousa, and Dickerhoof (2006) found that writing and talking about negative past life events improved people’s psychological well-being, but just thinking about them worsened it. There are many other examples of effects of language use on memory and decision making (Holtgraves & Kashima, 2008).
Furthermore, if a certain type of language use (linguistic practice) (Holtgraves & Kashima, 2008) is repeated by a large number of people in a community, it can potentially have a significant effect on their thoughts and actions. This notion is often called the Sapir-Whorf hypothesis (Sapir, 1921; Whorf, 1956; Box 3). For instance, if you are given a description of a man, Steven, as having greater than average experience of the world (e.g., well-traveled, varied job experience), a strong family orientation, and well-developed social skills, how do you describe Steven? Do you think you can remember Steven’s personality five days later? It will probably be difficult. But if you know Chinese and are reading about Steven in Chinese, as Hoffman, Lau, and Johnson (1986) showed, the chances are that you can remember him well. This is because English does not have a word to describe this kind of personality, whereas Chinese does (shì gù). This way, the language you use can influence your cognition. In its strong form, it has been argued that language determines thought, but this is probably wrong. Language does not completely determine our thoughts—our thoughts are far too flexible for that—but habitual uses of language can influence our habits of thought and action. For instance, some linguistic practices seem to be associated even with cultural values and social institutions. Pronoun drop is a case in point. Pronouns such as “I” and “you” are used to represent the speaker and listener of a speech in English. In an English sentence, these pronouns cannot be dropped if they are used as the subject of a sentence. So, for instance, “I went to the movie last night” is fine, but “Went to the movie last night” is not in standard English. However, in other languages such as Japanese, pronouns can be, and in fact often are, dropped from sentences. It turns out that people living in countries where pronoun drop languages are spoken tend to have more collectivistic values (e.g., employees having greater loyalty toward their employers) than those who use non–pronoun drop languages such as English (Kashima & Kashima, 1998). It has been argued that the explicit reference to “you” and “I” may remind speakers of the distinction between the self and other, and of the differentiation between individuals. Such a linguistic practice may act as a constant reminder of the cultural value, which, in turn, may encourage people to perform the linguistic practice.
Language and language use constitute a central ingredient of human psychology. Language is an essential tool that enables us to live the kind of life we do. Can you imagine a world in which machines are built, farms are cultivated, and goods and services are transported to our household without language? Is it possible for us to make laws and regulations, negotiate contracts, and enforce agreements and settle disputes without talking? Much of contemporary human civilization wouldn’t have been possible without the human ability to develop and use language. Like the Tower of Babel, language can divide humanity, and yet, the core of humanity includes the innate ability for language use. Whether we can use it wisely is a task before us in this globalized world.
Watch this Khan Academy video to understand schedules of reinforcement as they apply to operant conditioning.
Watch this video to review the basic principles of classical and operant conditioning.
Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit?
Modern society has become awash in studies such as this; you can read about several such studies in the news every day. Moreover, data abound everywhere in modern life. Conducting such a study well, and interpreting the results of such studies well for making informed decisions or setting policies, requires understanding basic ideas of statistics, the science of gaining insight from data. Rather than relying on anecdote and intuition, statistics allows us to systematically study phenomena of interest.
Key components to a statistical investigation are:
- Planning the study: Start by asking a testable research question and deciding how to collect data.
- Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal?
- Inferring from the data: What are valid statistical methods for drawing inferences "beyond" the data you collected?
- Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? Can you claim that one variable causes the other?
Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of overall statistical investigation. In this module, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.
When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. The most fundamental principle of statistics is that data vary. The pattern of that variation is crucial to capture and to understand. Often, careful presentation of the data will address many of the research questions without requiring more sophisticated analyses. It may, however, point to additional questions that need to be examined in more detail.
Example 1: Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Table 1.
Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 1.
Figure 1 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
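To see concretely why matching medians can hide a mismatch, here is a minimal Python sketch. The grade-level numbers are hypothetical stand-ins chosen for illustration, not the study's data:

```python
from statistics import median

# Hypothetical grade-level data: two groups whose medians match even though
# the distributions do not line up.
patient_reading_levels = [3, 4, 5, 6, 8, 9, 9, 10, 11, 12, 12, 13]
pamphlet_readability = [8, 8, 9, 9, 9, 9, 10, 10, 11, 12, 13, 14]

print(median(patient_reading_levels), median(pamphlet_readability))  # 9 9

# Count the patients who read below the most readable pamphlet, the kind of
# mismatch that comparing only the centers would hide.
easiest_pamphlet = min(pamphlet_readability)
print(sum(level < easiest_pamphlet for level in patient_reading_levels))  # 4
```

Both medians are 9, yet a third of these hypothetical patients read below the easiest pamphlet, which is exactly the pattern the full-distribution comparison in Figure 1 reveals.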
Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1 °F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population?
Example 2: In a study reported in the November 2007 issue of Nature, researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with. The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy.
One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged it so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand? Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control, as much as possible, for any variables that might affect the responses.
It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process. Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.
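One way to make the chance model concrete is to simulate it many times and see how often randomness alone produces a result as extreme as the one observed. The sketch below is ours, not part of the study, and the trial count is arbitrary:

```python
import random

# Simulate the chance-only model: each of 16 infants independently picks the
# helper toy with probability 0.5. Count how often 14 or more do so.
def simulate_chance_model(n_infants=16, n_trials=100_000):
    extreme_results = 0
    for _ in range(n_trials):
        helper_picks = sum(random.random() < 0.5 for _ in range(n_infants))
        if helper_picks >= 14:
            extreme_results += 1
    return extreme_results / n_trials

print(simulate_chance_model())  # hovers around 0.002 from run to run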
If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value tells you how often a random process would give a result at least as extreme as what was found in the actual study, assuming there was nothing other than random chance at play. So, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy. We often compare the p-value to some cut-off value (called the level of significance, typically around 0.05). If the p-value is smaller than that cut-off value, then we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.
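Because the chance model is just a binomial distribution, the same p-value can also be computed exactly rather than by simulation. A quick check, assuming nothing beyond the numbers reported above:

```python
from math import comb

# P(X >= 14) for X ~ Binomial(n=16, p=0.5): sum the probabilities of
# 14, 15, and 16 helper-toy choices out of the 2**16 equally likely patterns.
p_value = sum(comb(16, k) for k in range(14, 17)) / 2**16
print(round(p_value, 4))  # 0.0021, matching the value reported above
```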
One limitation to the previous study is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample) from a much larger group of individuals (the population) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.
Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.
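In code, the “simplest form” of random sampling described above amounts to a few lines. The population size and sample size below are hypothetical, chosen only for illustration:

```python
import random

# Number every member of a (hypothetical) population, then let the computer
# draw the sample so that each member has an equal chance of selection.
population_ids = list(range(1, 10_001))        # IDs 1 through 10,000
sample = random.sample(population_ids, k=200)  # 200 members, no repeats

print(len(sample), sorted(sample)[:5])
```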
In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size, the margin of error). A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
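As a quick illustration (a sketch of the arithmetic, not code from the GSS itself), the margin-of-error calculation can be reproduced in a few lines of Python:

    # Sketch: 95% margin of error approximated as 1 / sqrt(sample size).
    import math

    n = 977
    p_hat = 817 / n            # sample proportion, about 0.836
    margin = 1 / math.sqrt(n)  # about 0.032; the text rounds this to 3 points
    print(f"{p_hat - margin:.3f} to {p_hat + margin:.3f}")  # ~0.804 to 0.868

Rounding the margin to 3 percentage points gives the 80.6% to 86.6% interval reported above.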
The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.
In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how the groups were formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?
Example 4: A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic or extrinsic motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 2, where higher scores indicate more creativity.
In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?
Figure 2 reveals considerable variability in creativity scores within both motivation groups, and considerable overlap in scores between the groups. In other words, it’s certainly not always the case that those thinking about intrinsic motivations score higher in creativity than those thinking about extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)
The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores within the groups. We can measure variability with statistics such as the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We also see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.
We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.
But does this always work? No. Just by the “luck of the draw,” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?
We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on an index card, shuffling up the index cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14. Figure 3 shows the results from 1,000 such hypothetical random assignments for these scores.
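The card-shuffling simulation just described can be sketched in a few lines of Python. The function below is a generic illustration; the actual 47 creativity scores appear in Figure 2 and are not reproduced here:

    # Sketch of the random re-assignment simulation: shuffle all 47 scores,
    # deal them into groups of 24 (intrinsic) and 23 (extrinsic), and count
    # how often the difference in group means is at least as large as the
    # observed difference (4.14 points for the real data).
    import random

    def randomization_p_value(intrinsic, extrinsic, reps=1000):
        observed = sum(intrinsic) / len(intrinsic) - sum(extrinsic) / len(extrinsic)
        combined = intrinsic + extrinsic
        count = 0
        for _ in range(reps):
            random.shuffle(combined)           # "shuffle the index cards"
            new_i = combined[:len(intrinsic)]  # deal out one group
            new_e = combined[len(intrinsic):]  # the rest form the other group
            diff = sum(new_i) / len(new_i) - sum(new_e) / len(new_e)
            if diff >= observed:               # at least as large as observed
                count += 1
        return count / reps

    # Usage, with the real scores from Figure 2 (not shown here):
    # p = randomization_p_value(intrinsic_scores, extrinsic_scores)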
Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.
Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.
Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.
So where does this leave us with regard to the coffee study mentioned at the beginning of this module? We can now answer many of those questions.
This study needs to be reviewed in the larger context of similar studies and the consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.
Most of the time, we perceive the world as a unified bundle of sensations from multiple sensory modalities. In other words, our perception is multimodal. This module provides an overview of multimodal perception, including information about its neurobiology and its psychological effects.
Although it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline. However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone were to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not.
For the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world.
Several theoretical problems are raised by multimodal perception. After all, the world is a “blooming, buzzing confusion” that constantly bombards our perceptual system with light, sound, heat, pressure, and so forth. To make matters more complicated, these stimuli come from multiple events spread out over both space and time. To return to our example: Let’s say the car crash you observed happened on Main Street in your town. Your perception during the car crash might include a lot of stimulation that was not relevant to the car crash. For example, you might also overhear the conversation of a nearby couple, see a bird flying into a tree, or smell the delicious scent of freshly baked bread from a nearby bakery (or all three!). However, you would most likely not make the mistake of associating any of these stimuli with the car crash. In fact, we rarely combine the auditory stimuli associated with one event with the visual stimuli associated with another (although, under some unique circumstances—such as ventriloquism—we do). How is the brain able to take the information from separate sensory modalities and match it appropriately, so that stimuli that belong together stay together, while stimuli that do not belong together get treated separately? In other words, how does the perceptual system determine which unimodal stimuli must be integrated, and which must not?
Once unimodal stimuli have been appropriately integrated, we can further ask about the consequences of this integration: What are the effects of multimodal perception that would not be present if perceptual processing were only unimodal? Perhaps the most robust finding in the study of multimodal perception concerns this last question. No matter whether you are looking at the actions of neurons or the behavior of individuals, it has been found that responses to multimodal stimuli are typically greater than the sum of the responses to each modality presented independently. In other words, if you presented the stimulus in one modality at a time and measured the response to each of these unimodal stimuli, you would find that adding them together would still not equal the response to the multimodal stimulus. This superadditive effect of multisensory integration indicates that there are consequences resulting from the integrated processing of multimodal stimuli.
The extent of the superadditive effect (sometimes referred to as multisensory enhancement) is determined by the strength of the response to the single stimulus modality with the biggest effect. To understand this concept, imagine someone speaking to you in a noisy environment (such as a crowded party). When discussing this type of multimodal stimulus, it is often useful to describe it in terms of its unimodal components: In this case, there is an auditory component (the sounds generated by the speech of the person speaking to you) and a visual component (the visual form of the face movements as the person speaks to you). In the crowded party, the auditory component of the person’s speech might be difficult to process (because of the surrounding party noise). The potential for visual information about speech—lipreading—to help in understanding the speaker’s message is, in this situation, quite large. However, if you were listening to that same person speak in a quiet library, the auditory portion would probably be sufficient for receiving the message, and the visual portion would help very little, if at all (Sumby & Pollack, 1954). In general, for a stimulus with multimodal components, if the response to each component (on its own) is weak, then the opportunity for multisensory enhancement is very large. However, if one component—by itself—is sufficient to evoke a strong response, then the opportunity for multisensory enhancement is relatively small. This finding is called the Principle of Inverse Effectiveness (Stein & Meredith, 1993) because the effectiveness of multisensory enhancement is inversely related to the unimodal response with the greatest effect.
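One way researchers in this literature often quantify enhancement is to compare the multimodal response with the largest unimodal response (an enhancement index in the spirit of Stein & Meredith, 1993). The Python sketch below uses hypothetical firing rates purely for illustration:

    # Sketch: multisensory enhancement expressed as the percentage increase of
    # the multimodal response over the strongest unimodal response.
    # All values are hypothetical, chosen only to illustrate the principle.
    def enhancement_index(multimodal_response, unimodal_responses):
        best = max(unimodal_responses)
        return 100 * (multimodal_response - best) / best

    # Weak unimodal responses leave lots of room for enhancement
    # (inverse effectiveness)...
    print(enhancement_index(12, [3, 4]))    # 200.0 percent
    # ...while a strong unimodal response leaves little room.
    print(enhancement_index(55, [50, 10]))  # 10.0 percent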
Another important theoretical question about multimodal perception concerns the neurobiology that supports it. After all, at some point, the information from each sensory modality is definitely separated (e.g., light comes in through the eyes, and sound comes in through the ears). How does the brain take information from different neural systems (optic, auditory, etc.) and combine it? If our experience of the world is multimodal, then it must be the case that at some point during perceptual processing, the unimodal information coming from separate sensory organs—such as the eyes, ears, skin—is combined. A related question asks where in the brain this integration takes place. We turn to these questions in the next section.
A surprisingly large number of brain regions in the midbrain and cerebral cortex are related to multimodal perception. These regions contain neurons that respond to stimuli from not just one, but multiple sensory modalities. For example, a region called the superior temporal sulcus contains single neurons that respond to both the visual and auditory components of speech (Calvert, 2001; Calvert, Hansen, Iversen, & Brammer, 2001). These multisensory convergence zones are interesting, because they are a kind of neural intersection of information coming from the different senses. That is, neurons that are devoted to the processing of one sense at a time—say vision or touch—send their information to the convergence zones, where it is processed together.
One of the most closely studied multisensory convergence zones is the superior colliculus (Stein & Meredith, 1993), which receives inputs from many different areas of the brain, including regions involved in the unimodal processing of visual and auditory stimuli (Edwards, Ginsburgh, Henkel, & Stein, 1979). Interestingly, the superior colliculus is involved in the “orienting response,” which is the behavior associated with moving one’s eye gaze toward the location of a seen or heard stimulus. Given this function for the superior colliculus, it is hardly surprising that there are multisensory neurons found there (Stein & Stanford, 2008).
The details of the anatomy and function of multisensory neurons help to answer the question of how the brain integrates stimuli appropriately. In order to understand the details, we need to discuss a neuron’s receptive field. All over the brain, neurons can be found that respond only to stimuli presented in a very specific region of the space immediately surrounding the perceiver. That region is called the neuron’s receptive field. If a stimulus is presented in a neuron’s receptive field, then that neuron responds by increasing or decreasing its firing rate. If a stimulus is presented outside of a neuron’s receptive field, then there is no effect on the neuron’s firing rate. Importantly, when two neurons send their information to a third neuron, the third neuron’s receptive field is the combination of the receptive fields of the two input neurons. This is called neural convergence, because the information from multiple neurons converges on a single neuron. In the case of multisensory neurons, the convergence arrives from different sensory modalities. Thus, the receptive fields of multisensory neurons are the combination of the receptive fields of neurons located in different sensory pathways.
Now, it could be the case that the neural convergence that results in multisensory neurons is set up in a way that ignores the locations of the input neurons’ receptive fields. Amazingly, however, these crossmodal receptive fields overlap. For example, a multisensory neuron in the superior colliculus might receive input from two unimodal neurons: one with a visual receptive field and one with an auditory receptive field. It has been found that the unimodal receptive fields refer to the same locations in space—that is, the two unimodal neurons respond to stimuli in the same region of space. Crucially, the overlap in the crossmodal receptive fields plays a vital role in the integration of crossmodal stimuli. When the information from the separate modalities is coming from within these overlapping receptive fields, then it is treated as having come from the same location—and the neuron responds with a superadditive (enhanced) response. So, part of the information that is used by the brain to combine multimodal inputs is the location in space from which the stimuli came.
This pattern is common across many multisensory neurons in multiple regions of the brain. Because of this, researchers have defined the spatial principle of multisensory integration: Multisensory enhancement is observed when the sources of stimulation are spatially related to one another. A related phenomenon concerns the timing of crossmodal stimuli. Enhancement effects are observed in multisensory neurons only when the inputs from different senses arrive within a short time of one another (e.g., Recanzone, 2003).
Multisensory neurons have also been observed outside of multisensory convergence zones, in areas of the brain that were once thought to be dedicated to the processing of a single modality (unimodal cortex). For example, the primary visual cortex was long thought to be devoted to the processing of exclusively visual information. The primary visual cortex is the first stop in the cortex for information arriving from the eyes, so it processes very low-level information like edges. Interestingly, neurons have been found in the primary visual cortex that receive information from the primary auditory cortex (where sound information from the auditory pathway is processed) and from the superior temporal sulcus (a multisensory convergence zone mentioned above). This is remarkable because it indicates that the processing of visual information is, from a very early stage, influenced by auditory information.
There may be two ways for these multimodal interactions to occur. First, it could be that the processing of auditory information in relatively late stages of processing feeds back to influence low-level processing of visual information in unimodal cortex (McDonald, Teder-Sälejärvi, Russo, & Hillyard, 2003). Alternatively, it may be that areas of unimodal cortex contact each other directly (Driver & Noesselt, 2008; Macaluso & Driver, 2005), such that multimodal integration is a fundamental component of all sensory processing.
In fact, the large numbers of multisensory neurons distributed all around the cortex—in multisensory convergence areas and in primary cortices—have led some researchers to propose that a drastic reconceptualization of the brain is necessary (Ghazanfar & Schroeder, 2006). They argue that the cortex should not be considered as being divided into isolated regions that process only one kind of sensory information. Rather, they propose that these areas only prefer to process information from specific modalities but engage in low-level multisensory processing whenever it is beneficial to the perceiver (Vasconcelos et al., 2011).
Although neuroscientists tend to study very simple interactions between neurons, the fact that they’ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. As discussed above, our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it.
It will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. The first class—multimodal phenomena—concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class—crossmodal phenomena—concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, & Roder, 2009).
Multimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to visual patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is. Even so, the visual speech pattern alone is sufficient for very robust speech perception. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called “speechreading”). In fact, there is a wide range of speechreading ability in both normal hearing and deaf populations (Andersson, Lyxell, Rönnberg, & Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer & Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, & Tucker, 2001; Mohammed et al., 2005).
How does visual information about speech interact with auditory information about speech? One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (“white noise”—which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants’ task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the “auditory-alone” condition), and one in which both the auditory and visual components were presented (the “audiovisual” condition). The noise levels were also varied, so that on some trials, the noise was very loud relative to the loudness of the words, and on other trials, the noise was very soft relative to the words. Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher in the audiovisual condition than in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: The advantage gained by audiovisual presentation was highest when the auditory-alone condition performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: It was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. Clearly, the audiovisual advantage can have dramatic effects on behavior.
Another phenomenon using audiovisual speech is a very famous illusion called the “McGurk effect” (named after one of its discoverers). In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables “gaga.” Another movie is made of the same speaker saying the syllables “baba.” Then, the auditory portion of the “baba” movie is dubbed onto the visual portion of the “gaga” movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) reported that 98 percent of their participants reported hearing the syllable “dada”—which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception.
Not all multisensory integration phenomena concern speech, however. One particularly compelling multisensory illusion involves the integration of tactile and visual information in the perception of body ownership. In the “rubber hand illusion” (Botvinick & Cohen, 1998), an observer is situated so that one of his hands is not visible. A fake rubber hand is placed near the obscured hand, but in a visible location. The experimenter then uses a light paintbrush to simultaneously stroke the obscured hand and the rubber hand in the same locations. For example, if the middle finger of the obscured hand is being brushed, then the middle finger of the rubber hand will also be brushed. This sets up a correspondence between the tactile sensations (coming from the obscured hand) and the visual sensations (of the rubber hand). After a short time (around 10 minutes), participants report feeling as though the rubber hand “belongs” to them; that is, that the rubber hand is a part of their body. This feeling can be so strong that surprising the participant by hitting the rubber hand with a hammer often leads to a reflexive withdrawing of the obscured hand—even though it is in no danger at all. It appears, then, that our awareness of our own bodies may be the result of multisensory integration.
Crossmodal phenomena are distinguished from multimodal phenomena in that they concern the influence one sensory modality has on the perception of another.
A famous (and commonly experienced) crossmodal illusion is referred to as “the ventriloquism effect.” When a ventriloquist appears to make a puppet speak, she fools the listener into thinking that the location of the origin of the speech sounds is at the puppet’s mouth. In other words, instead of localizing the auditory signal (coming from the mouth of a ventriloquist) to the correct place, our perceptual system localizes it incorrectly (to the mouth of the puppet).
Why might this happen? Consider the information available to the observer about the location of the two components of the stimulus: the sounds from the ventriloquist’s mouth and the visual movement of the puppet’s mouth. Whereas it is very obvious where the visual stimulus is coming from (because you can see it), it is much more difficult to pinpoint the location of the sounds. In other words, the very precise visual location of mouth movement apparently overrides the less well-specified location of the auditory information. More generally, it has been found that the location of a wide variety of auditory stimuli can be affected by the simultaneous presentation of a visual stimulus (Vroomen & De Gelder, 2004). In addition, the ventriloquism effect has been demonstrated for objects in motion: The motion of a visual object can influence the perceived direction of motion of a moving sound source (Soto-Faraco, Kingstone, & Spence, 2003).
A related illusion demonstrates the opposite effect: where sounds have an effect on visual perception. In the double-flash illusion, a participant is asked to stare at a central point on a computer monitor. On the extreme edge of the participant’s vision, a white circle is briefly flashed one time. There is also a simultaneous auditory event: either one beep or two beeps in rapid succession. Remarkably, participants report seeing two visual flashes when the flash is accompanied by two beeps; the same stimulus is seen as a single flash in the context of a single beep or no beep (Shams, Kamitani, & Shimojo, 2000). In other words, the number of heard beeps influences the number of seen flashes!
Another illusion involves the perception of collisions between two circles (called “balls”) moving toward each other and continuing through each other. Such stimuli can be perceived as either two balls moving through each other or as a collision between the two balls that then bounce off each other in opposite directions. Sekuler, Sekuler, and Lau (1997) showed that the presentation of an auditory stimulus at the time of contact between the two balls strongly influenced the perception of a collision event. In this case, the perceived sound influences the interpretation of the ambiguous visual stimulus.
Several crossmodal phenomena have also been discovered for speech stimuli. These crossmodal speech effects usually show altered perceptual processing of unimodal stimuli (e.g., acoustic patterns) by virtue of prior experience with the alternate unimodal stimulus (e.g., optical patterns). For example, Rosenblum, Miller, and Sanchez (2007) conducted an experiment examining the ability to become familiar with a person’s voice. Their first interesting finding was unimodal: Much like what happens when someone repeatedly hears a person speak, perceivers can become familiar with the “visual voice” of a speaker. That is, they can become familiar with the person’s speaking style simply by seeing that person speak. Even more astounding was their crossmodal finding: Familiarity with this visual information also led to increased recognition of the speaker’s auditory speech, to which participants had never had exposure.
Similarly, it has been shown that when perceivers see a speaking face, they can identify the (auditory-alone) voice of that speaker, and vice versa (Kamachi, Hill, Lander, & Vatikiotis-Bateson, 2003; Lachs & Pisoni, 2004a, 2004b, 2004c; Rosenblum, Smith, Nichols, Lee, & Hale, 2006). In other words, the visual form of a speaker engaged in the act of speaking appears to contain information about what that speaker should sound like. Perhaps more surprisingly, the auditory form of speech seems to contain information about what the speaker should look like.
In this module, we have reviewed some of the main evidence and findings concerning the role of multimodal perception in our experience of the world. It appears that our nervous system (and the cortex in particular) contains considerable architecture for the processing of information arriving from multiple senses. Given this neurobiological setup, and the diversity of behavioral phenomena associated with multimodal stimuli, it is likely that the investigation of multimodal perception will continue to be a topic of interest in the field of experimental perception for many years to come.
The sensory systems of touch and pain provide us with information about our environment and our bodies that is often crucial for survival and well-being. Moreover, touch is a source of pleasure. In this module, we review how information about our environment and our bodies is coded in the periphery and interpreted by the brain as touch and pain sensations. We discuss how these experiences are often dramatically shaped by top-down factors like motivation, expectation, mood, fear, stress, and context. When well-functioning, these circuits promote survival and prepare us to make adaptive decisions. Pathological loss of touch can result in perceived disconnection from the body, and insensitivity to pain can be very dangerous, leading to maladaptive hazardous behavior. On the other hand, chronic pain conditions, in which these systems start signaling pain in response to innocuous touch or even in the absence of any observable sensory stimuli, have tremendous negative impact on the lives of the affected. Understanding how our sensory-processing mechanisms can be modulated psychologically and physiologically promises to help researchers and clinicians find new ways to alleviate the suffering of chronic-pain patients.
Imagine a life free of pain. How would it be—calm, fearless, serene? Would you feel invulnerable, invincible? Getting rid of pain is a popular quest—a quick search for “pain-free life” on Google returns well over 4 million hits—including links to various bestselling self-help guides promising a pain-free life in only 7 steps, 6 weeks, or 3 minutes. Pain management is a billion-dollar market, and involves much more than just pharmaceuticals. Surely a life with no pain would be a better one?
Well, consider one of the “lucky few”: 12-year-old “Thomas” has never felt deep pain. Not even when a fracture made him walk around with one leg shorter than the other, so that the bones of his healthy leg were slowly crushed to destruction underneath the knee joint (see Figure 1A). For Thomas and other members of a large Swedish family, life without pain is a harsh reality because of a mutated gene that affects the growth of the nerves conducting deep pain. Most of those affected suffer from joint damage and frequent fractures to bones in their feet and hands; some end up in wheelchairs even before they reach puberty (Minde et al., 2004). It turns out pain—generally—serves us well.
Living without a sense of touch sounds less attractive than being free of pain—touch is a source of pleasure and essential to how we feel. Losing the sense of touch has severe implications—something patient G. L. experienced when an antibiotic treatment damaged the type of nerves that signal touch from her skin and the position of her joints and muscles. She reported feeling like she’d lost her physical self from her nose down, making her “disembodied”—like she no longer had any connection to the body attached to her head. If she didn’t look at her arms and legs they could just “wander off” without her knowing—initially she was unable to walk, and even after she relearned this skill she was so dependent on her visual attention that closing her eyes would cause her to land in a hopeless heap on the floor. Only light caresses like those from her children’s hands can make her feel she has a body, but even these sensations remain vague and elusive (Olausson et al., 2002; Sacks, 1985).
Touch and pain are aspects of the somatosensory system, which provides our brain with information about our own body (interoception) and properties of the immediate external world (exteroception) (Craig, 2002). We have somatosensory receptors located all over the body, from the surface of our skin to the depth of our joints. The information they send to the central nervous system is generally divided into four modalities: cutaneous senses (senses of the skin), proprioception (body position), kinesthesis (body movement), and nociception (pain, discomfort). We are going to focus on the cutaneous senses, which respond to tactile, thermal, and pruritic (itchy) stimuli, and events that cause tissue damage (and hence pain). In addition, there is growing evidence for a fifth modality specifically channeling pleasant touch (McGlone & Reilly, 2010).
The skin can convey many sensations, such as the biting cold of a wind, the comfortable pressure of a hand holding yours, or the irritating itch from a woolen scarf. The different types of information activate specific receptors that convert the stimulation of the skin to electrical nerve impulses, a process called transduction. There are three main groups of receptors in our skin: mechanoreceptors, responding to mechanical stimuli, such as stroking, stretching, or vibration of the skin; thermoreceptors, responding to cold or hot temperatures; and chemoreceptors, responding to certain types of chemicals either applied externally or released within the skin (such as histamine from an inflammation). For an overview of the different receptor types and their properties, see Box 1. The experience of pain usually starts with activation of nociceptors—receptors that fire specifically to potentially tissue-damaging stimuli. Most of the nociceptors are subtypes of either chemoreceptors or mechanoreceptors. When tissue is damaged or inflamed, certain chemical substances are released from the cells, and these substances activate the chemosensitive nociceptors. Mechanoreceptive nociceptors have a high threshold for activation—they respond to mechanical stimulation that is so intense it might damage the tissue.
When you step on a pin, this activates a host of mechanoreceptors, many of which are nociceptors. You may have noticed that the sensation changes over time. First you feel a sharp stab that propels you to remove your foot, and only then do you feel a wave of more aching pain. The sharp stab is signaled via fast-conducting A-fibers, which project to the somatosensory cortex. This part of the cortex is somatotopically organized—that is, the sensory signals are represented according to where in the body they stem from (see homunculus illustration, Figure 2). The unpleasant ache you feel after the sharp pin stab is a separate, simultaneous signal sent from the nociceptors in your foot via thin C-pain or Aδ-fibers to the insular cortex and other brain regions involved in processing of emotion and interoception (see Figure 3a for a schematic representation of this pathway). The experience of stepping on a pin is, in other words, composed of two separate signals: one discriminatory signal allowing us to localize the touch stimulus and distinguish whether it’s a blunt or a sharp stab; and one affective signal that lets us know that stepping on the pin is bad. It is common to divide pain into sensory–discriminatory and affective–motivational aspects (Auvray, Myin, & Spence, 2010). This distinction corresponds, at least partly, to how this information travels from the peripheral to the central nervous system and how it is processed in the brain (Price, 2000).
Touch senses are not just there for discrimination or detection of potentially painful events, as Harlow and Suomi (1970) demonstrated in a series of heartbreaking experiments where baby monkeys were taken from their mothers. The infant monkeys could choose between two artificial surrogate mothers—one “warm” mother without food but with a furry, soft cover; and one cold, steel mother with food. The monkey babies spent most of their time clinging to the soft mother, and only briefly moved over to the hard, steel mother to feed, indicating that touch is of “overpowering importance” to the infant (Harlow & Suomi, 1970, p. 161). Gentle touch is central for creating and maintaining social relationships in primates; they groom each other by stroking the fur and removing parasites—an activity important not only for their individual well-being but also for group cohesion (Dunbar, 2010; Keverne, Martensz, & Tuite, 1989). Although people don’t groom each other in the same way, gentle touch is important for us, too.
The sense of touch is the first to develop while one is in the womb, and human infants crave touch from the moment they’re born. From studies of human orphans, we know that touch is also crucial for human development. In Romanian orphanages where the babies were fed but not given regular attention or physical contact, the children suffered cognitive and neurodevelopmental delay (Simons & Land, 1987). Physical contact helps a crying baby calm down, and the soothing touch a mother gives to her child is thought to reduce the levels of stress hormones such as cortisol. High levels of cortisol have negative effects on neural development, and they can even lead to cell loss (Feldman, Singer, & Zagoory, 2010; Fleming, O’Day, & Kraemer, 1999; Pechtel & Pizzagalli, 2011). Thus, stress reduction through hugs and caresses might be important not only for children’s well-being, but also for the development of the infant brain.
The skin senses are similar across species, likely reflecting the evolutionary advantage of being able to tell what is touching you, where it’s happening, and whether or not it’s likely to cause tissue damage. An intriguing line of touch research suggests that humans, cats, and other animals have a special, evolutionarily preserved system that promotes gentle touch because it carries social and emotional significance. On a peripheral level, this system consists of a subtype of C-fibers that responds not to painful stimuli, but rather to gentle stroking touch—called C-tactile fibers. The firing rate of the C-tactile fibers correlates closely with how pleasant the stroking feels—suggesting they are coding specifically for the gentle caresses typical of social affiliative touch (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). This finding has led to the social touch hypothesis, which proposes that C-tactile fibers form a system for touch perception that supports social bonding (Morrison, Löken, & Olausson, 2010; Olausson, Wessberg, Morrison, McGlone, & Vallbo, 2010). The discovery of the C-tactile system suggests that touch is organized in a similar way to pain; fast-conducting A-fibers contribute to sensory–discriminatory aspects, while thin C-fibers contribute to affective–motivational aspects (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). However, while these “hard-wired” afferent systems often provide us with accurate information about our environment and our bodies, how we experience touch or pain depends very much on top-down sources like motivation, expectation, mood, fear, and stress.
In April 2003, the climber Aron Ralston found himself at the floor of Blue John Canyon in Utah, forced to make an appalling choice: face a slow but certain death—or amputate his right arm. Five days earlier he had fallen into the canyon—since then he had been stuck with his right arm trapped between an 800-lb boulder and the steep sandstone wall. Weak from lack of food and water and close to giving up, it occurred to him like an epiphany that if he broke the two bones in his forearm he could manage to cut off the rest with his pocket knife. The thought of freeing himself and surviving made him so excited that he spent the next 40 minutes completely engrossed in the task: first snapping his bones using his body as a lever, then sticking his fingers into the arm, pinching bundles of muscle fibers and severing them one by one, before cutting the blue arteries and the pale “noodle-like” nerves. The pain was unimportant. Only cutting through the thick white main nerve made him stop for a minute—the flood of pain, he describes, was like thrusting his entire arm “into a cauldron of magma.” Finally free, he rappelled down a cliff and walked another 7 miles until he was rescued by some hikers (Ralston, 2010). How is it possible to do something so excruciatingly painful to yourself, and still manage to walk, talk, and think rationally afterwards? The answer lies within the brain, where signals from the body are interpreted. When we perceive somatosensory and nociceptive signals from the body, the experience is highly subjective and malleable by motivation, attention, emotion, and context.
According to the motivation–decision model, the brain automatically and continuously evaluates the pros and cons of any situation—weighing impending threats and available rewards (Fields, 2004, 2006). Anything more important for survival than avoiding the pain activates the brain’s descending pain modulatory system—a top-down system involving several parts of the brain and brainstem, which inhibits nociceptive signaling so that the more important actions can be attended to (Figure 3b). In Aron’s extreme case, his actions were likely based on such an unconscious decision process—taking into account his homeostatic state (his hunger, thirst, the inflammation and decay of his crushed hand slowly affecting the rest of his body), the sensory input available (the sweet smell of his dissolving skin, the silence around him indicating his solitude), and his knowledge about the threats facing him (death, or excruciating pain that won’t kill him) versus the potential rewards (survival, seeing his family again). Aron’s story illustrates the evolutionary advantage to being able to shut off pain: The descending pain modulatory system allows us to go through with potentially life-saving actions. However, when one has reached safety or obtained the reward, healing is more important. The very same descending system can then “crank up” nociception from the body to promote healing and motivate us to avoid potentially painful actions. To facilitate or inhibit nociceptive signals from the body, the descending pain modulatory system uses a set of ON- or OFF-cells in the brainstem, which regulate how much of the nociceptive signal reaches the brain. The descending system is dependent on opioid signaling, and analgesics like morphine relieve pain via this circuit (Petrovic, Kalso, Petersson, & Ingvar, 2002).
Thinking about the good things, like his loved ones and the life ahead of him, was probably pivotal to Aron’s survival. The promise of a reward can be enough to relieve pain. Expecting pain relief (getting less pain is often the best possible outcome if you’re in pain, i.e., it is a reward) from a medical treatment contributes to the placebo effect—where pain relief is due at least partly to your brain’s descending modulation circuit, and such relief depends on the brain’s own opioid system (Eippert et al., 2009; Eippert, Finsterbusch, Bingel, & Buchel, 2009; Levine, Gordon, & Fields, 1978). Eating tasty food, listening to good music, or feeling pleasant touch on your skin also decreases pain in both animals and humans, presumably through the same mechanism in the brain (Leknes & Tracey, 2008). In a now classic experiment, Dum and Herz (1984) either fed rats normal rat food or let them feast on highly rewarding chocolate-covered candy (rats love sweets) while standing on a metal plate until they learned exactly what to expect when placed there. When the plate was heated up to a noxious/painful level, the rats that expected candy endured the temperature for twice as long as the rats expecting normal chow. Moreover, this effect was completely abolished when the rats’ opioid (endorphin) system was blocked with a drug, indicating that the analgesic effect of reward anticipation was caused by endorphin release.
For Aron the climber, both the stress from knowing that death was impending and the anticipation of the reward it would be to survive probably flooded his brain with endorphins, contributing to the wave of excitement and euphoria he experienced while he carried out the amputation “like a five-year-old unleashed on his Christmas presents” (Ralston, 2010). This altered his experience of the pain from the extreme tissue damage he was causing and enabled him to focus on freeing himself. Our brain, it turns out, can modulate the perception of how unpleasant pain is, while still retaining the ability to experience the intensity of the sensation (Rainville, Duncan, Price, Carrier, & Bushnell, 1997; Rainville, Feine, Bushnell, & Duncan, 1992). Social rewards, like holding the hand of your boyfriend or girlfriend, have pain-reducing effects. Even looking at a picture of him/her can have similar effects—in fact, seeing a picture of a person we feel close to not only reduces subjective pain ratings, but also the activity in pain-related brain areas (Eisenberger et al., 2011). The most common things to do when wanting to help someone through a painful experience—being present and holding the person’s hand—thus seems to have a measurably positive effect.
Chances are you’ve been sunburned a few times in your life and have experienced how even the lightest pat on the back or the softest clothes can feel painful on your over-sensitive skin. This condition, where innocuous touch gives a burning, tender sensation, is similar to a chronic condition called allodynia—where neuronal disease or injury makes touch that is normally pleasant feel unpleasantly painful. In allodynia, neuronal injury in the spinal dorsal horn causes Aβ-afferents, which are activated by non-nociceptive touch, to access nociceptive pathways (Liljencrantz et al., 2013). The result is that even gentle touch is interpreted by the brain as painful. While an acute pain response to noxious stimuli has a vital protective function, allodynia and other chronic pain conditions constitute a tremendous source of unnecessary suffering that affects millions of people. Approximately 100 million Americans suffer from chronic pain, and the associated annual economic cost is estimated at $560–$635 billion (Committee on Advancing Pain Research, Care, & Institute of Medicine, 2011). Chronic pain conditions are highly diverse, and they can involve changes on peripheral, spinal, central, and psychological levels. The mechanisms are far from fully understood, and developing appropriate treatment remains a huge challenge for pain researchers.
Chronic pain conditions often begin with an injury to a peripheral nerve or the tissue surrounding it, releasing hormones and inflammatory molecules that sensitize nociceptors. This makes the nerve and neighboring afferents more excitable, so that even uninjured nerves become hyperexcitable and contribute to the persistence of pain. An injury might also make neurons fire nonstop regardless of external stimuli, providing near-constant input to the pain system. Sensitization can also happen in the brain and in the descending modulatory system of the brainstem (Zambreanu, Wise, Brooks, Iannetti, & Tracey, 2005). Exactly on which levels the pain perception is altered in chronic pain patients can be extremely difficult to pinpoint, making treatment an often exhausting process of trial and error. Suffering from chronic pain has dramatic impacts on the lives of the afflicted. Being in pain over a longer time can lead to depression, anxiety (fear or anticipation of future pain), and immobilization, all of which may in turn exacerbate pain (Wiech & Tracey, 2009). Negative emotion and attention to pain can increase sensitization to pain, possibly by keeping the descending pain modulatory system in facilitation mode. Distraction is therefore a commonly used technique in hospitals where patients have to undergo painful treatments like changing bandages on large burns. For chronic pain patients, however, diverting attention is not a long-term solution. Positive factors like social support can reduce the risk of chronic pain after an injury and can help people adjust to bodily changes that result from injury. We have already talked about how having a hand to hold might alleviate suffering. Chronic pain treatment should target these emotional and social factors as well as the physiological ones.
The context of pain and touch has a great impact on how we interpret it. Just imagine how different it would feel to Aron if someone amputated his hand against his will and for no discernible reason. Prolonged pain from injuries can be easier to bear if the incident causing them provides a positive context—like a war wound that testifies to a soldier’s courage and commitment—or phantom pain from a hand that was cut off to enable life to carry on. The relative meaning of pain is illustrated by a recent experiment, where the same moderately painful heat was administered to participants in two different contexts—one control context where the alternative was a nonpainful heat; and another where the alternative was an intensely painful heat. In the control context, where the moderate heat was the least preferable outcome, it was (unsurprisingly) rated as painful. In the other context it was the best possible outcome, and here the exact same moderately painful heat was actually rated as pleasant—because it meant the intensely painful heat had been avoided. This somewhat surprising change in perception—where pain becomes pleasant because it represents relief from something worse—highlights the importance of the meaning individuals ascribe to their pain, which can have decisive effects in pain treatment (Leknes et al., 2013). In the case of touch, knowing who or what is stroking your skin can make all the difference—try thinking about slugs the next time someone strokes your skin if you want an illustration of this point. In a recent study, a group of heterosexual males were told that they were about to receive sensual caresses on the leg by either a male experimenter or by an attractive female experimenter (Gazzola et al., 2012). The study participants could not see who was touching them. Although it was always the female experimenter who performed the caress, the heterosexual males rated the otherwise pleasant sensual caresses as clearly unpleasant when they believed the male experimenter did it. Moreover, brain responses to the “male touch” in somatosensory cortex were reduced, exemplifying how top-down regulation of touch resembles top-down pain inhibition.
Pain and pleasure not only share modulatory systems—another common attribute is that we don’t need to be on the receiving end of it ourselves in order to experience it. How did you feel when you read about Aron cutting through his own tissue, or “Thomas” destroying his own bones unknowingly? Did you cringe? It’s quite likely that some of your brain areas processing affective aspects of pain were active even though the nociceptors in your skin and deep tissue were not firing. Pain can be experienced vicariously, as can itch, pleasurable touch, and other sensations. Tania Singer and her colleagues found in an fMRI study that some of the same brain areas that were active when participants felt pain on their own skin (anterior cingulate and insula) were also active when they were given a signal that a loved one was feeling the pain. Those who were most “empathetic” also showed the largest brain responses (Singer et al., 2004). A similar effect has been found for pleasurable touch: The posterior insula of participants watching videos of someone else’s arm being gently stroked shows the same activation as if they were receiving the touch themselves (Morrison, Bjornsdotter, & Olausson, 2011).
Sensory experiences connect us to the people around us, to the rest of the world, and to our own bodies. Pleasant or unpleasant, they’re part of being human. In this module, we have seen how being able to inhibit pain responses is central to our survival—and in cases like that of climber Aron Ralston, that ability can allow us to do extreme things. We have also seen how important the ability to feel pain is to our health—illustrated by young “Thomas,” who keeps injuring himself because he simply doesn’t notice pain. While “Thomas” has to learn to avoid harmful activities without the sensory input that normally guides us, G. L. has had to learn how to keep moving about in a world she can hardly feel at all, with a body that is practically disconnected from her awareness. Neither too little sensation nor too much of it does us any good, no matter how pleasant or unpleasant the sensation usually feels. As long as we have nervous systems that function normally, we are able to adjust the volume of the sensory signals and our behavioral reactions according to the context we’re in. When it comes to sensory signals like touch and pain, we are interpreters, not measuring instruments. The quest to understand how our sensory-processing mechanisms can be modulated, psychologically and physiologically, promises to help researchers and clinicians find new ways to alleviate distress from chronic pain.
Watch this lecture about sensation and perception, with a focus on vision, given by John Gabrieli at MIT. Only the first 30 minutes are required viewing.
Memories are recollections of actual events stored within our brains. But how is our brain able to form and store these memories? Epigenetic mechanisms influence genomic activities in the brain to produce long-term changes in synaptic signaling, organization, and morphology, which in turn support learning and memory (Day & Sweatt, 2011).
Neuronal activity in the hippocampus of mice is associated with changes in DNA methylation (Guo et al., 2011), and disruption of the genes encoding the DNA methylation machinery causes learning and memory impairments (Feng et al., 2010). DNA methylation has also been implicated in the maintenance of long-term memories, as pharmacological inhibition of DNA methylation impairs memory (Day & Sweatt, 2011; Miller et al., 2010). These findings indicate the importance of DNA methylation in mediating synaptic plasticity and cognitive functions, both of which are disturbed in psychological illness.
Changes in histone modifications can also influence long-term memory formation by altering chromatin accessibility and the expression of genes relevant to learning and memory. Memory formation and the associated enhancements in synaptic transmission are accompanied by increases in histone acetylation (Guan et al., 2002) and alterations in histone methylation (Schaefer et al., 2009), which promote gene expression. Conversely, a neuronal increase in histone deacetylase activity, which promotes gene silencing, reduces synaptic plasticity and impairs memory (Guan et al., 2009). Pharmacological inhibition of histone deacetylases augments memory formation (Guan et al., 2009; Levenson et al., 2004), further suggesting that histone (de)acetylation regulates this process.
In humans, genetic defects in the genes encoding the DNA methylation and chromatin machinery have profound effects on cognitive function and mental health (Jiang, Bressler, & Beaudet, 2004). The two best-characterized examples are Rett syndrome (Amir et al., 1999) and Rubinstein-Taybi syndrome (RTS) (Alarcon et al., 2004), profound intellectual disability disorders caused by mutations in the genes MECP2 and CBP, respectively. Both MECP2 and CBP are highly expressed in neurons and are involved in regulating neural gene expression (Chen et al., 2003; Martinowich et al., 2003).
Rett syndrome patients have a mutation in their DNA sequence in a gene called MECP2. MECP2 plays many important roles within the cell: one of these roles is to read the DNA sequence, checking for DNA methylation, and to bind to methylated regions, thereby preventing the production of proteins the cell does not need. Other roles for MECP2 include promoting the production of particular necessary proteins, ensuring that DNA is packaged properly within the cell, and assisting with the production of proteins. MECP2 function also influences gene expression that supports dendritic and synaptic development and hippocampus-dependent memory (Li, Zhong, Chau, Williams, & Chang, 2011; Skene et al., 2010). Mice with altered MECP2 expression exhibit genome-wide increases in histone acetylation, neuron cell death, increased anxiety, cognitive deficits, and social withdrawal (Shahbazian et al., 2002). These findings support a model in which DNA methylation and MECP2 constitute a cell-specific epigenetic mechanism for the regulation of histone modification and gene expression, which may be disrupted in Rett syndrome.
RTS patients have a mutation in their DNA sequence in a gene called CBP. Like MECP2, CBP plays many roles within the cell: one of these is to bind to specific histones and promote histone acetylation, thereby promoting gene expression. Consistent with this function, RTS patients exhibit a genome-wide decrease in histone acetylation and cognitive dysfunction in adulthood (Kalkhoven et al., 2003). The learning and memory deficits are attributed to disrupted neural plasticity (Korzus, Rosenfeld, & Mayford, 2004). Similar to RTS in humans, mice with a mutation of CBP perform poorly in cognitive tasks and show decreased genome-wide histone acetylation (for review, see Josselyn, 2005). In the mouse brain, CBP was found to act as an epigenetic switch that promotes the birth of new neurons. Interestingly, this epigenetic mechanism is disrupted in the fetal brains of mice with a mutation of CBP, which, as pups, exhibit early behavioral deficits following removal and separation from their mother (Wang et al., 2010). These findings provide a novel mechanism whereby environmental cues, acting through histone-modifying enzymes, can regulate epigenetic status and thereby directly promote neurogenesis, which regulates neurobehavioral development.
Together, these studies demonstrate that misregulation of epigenetic modifications and their regulatory enzymes can produce prominent deficits in neuronal plasticity and cognitive function. Knowledge from these studies may provide greater insight into other mental disorders such as depression and suicidal behaviors.
Epigenome-wide studies have identified several dozen sites with DNA methylation alterations in genes involved in brain development and neurotransmitter pathways, which had previously been associated with mental illness (Mill et al., 2008). These disorders are complex, typically start at a young age, and cause lifelong disability. Often, limited benefits from treatment make these diseases some of the most burdensome disorders for individuals, families, and society. It has become evident that efforts to identify the primary causes of complex psychiatric disorders may significantly benefit from studies linking environmental effects with changes observed within individual cells.
Epigenetic events that alter chromatin structure to regulate programs of gene expression have been associated with depression-related behavior and the action of antidepressant medications, with increasing evidence for similar mechanisms occurring in post-mortem brains of depressed individuals. In mice, social avoidance resulted in decreased expression of hippocampal genes important in mediating depressive responses (Tsankova et al., 2006). Similarly, chronic social defeat stress was found to decrease expression of genes implicated in normal emotion processing (Lutter et al., 2008). Consistent with these findings, levels of histone markers of increased gene expression were downregulated in human post-mortem brain samples from individuals with a history of clinical depression (Covington et al., 2009).
Administration of antidepressants increased histone markers of increased gene expression and reversed the gene repression induced by defeat stress (Lee, Wynder, Schmidt, McCafferty, & Shiekhattar, 2006; Tsankova et al., 2006; Wilkinson et al., 2009). These results provide support for the use of HDAC inhibitors against depression. Accordingly, several HDAC inhibitors have been found to exert antidepressant effects, each modifying distinct cellular targets (Cassel et al., 2006; Schroeder, Lin, Crusio, & Akbarian, 2007).
There is also increasing evidence that aberrant gene expression resulting from altered epigenetic regulation is associated with the pathophysiology of suicide (McGowan et al., 2008; Poulter et al., 2008). Thus, it is tempting to speculate that there is an epigenetically determined reduced capacity for gene expression, which is required for learning and memory, in the brains of suicide victims.
While the cellular and molecular mechanisms that influence physical and mental health have long been a central focus of neuroscience, only in recent years has attention turned to the epigenetic mechanisms behind the dynamic changes in gene expression responsible for normal cognitive function and increased risk for mental illness. The links between early environment and epigenetic modifications suggest a mechanism underlying gene-environment interactions. Early environmental adversity alone is not a sufficient cause of mental illness, because many individuals with a history of severe childhood maltreatment or trauma remain healthy. It is increasingly evident that inherited differences in specific gene segments may moderate the effects of adversity and determine who is sensitive and who is resilient through a gene-environment interplay. Genes such as the glucocorticoid receptor gene appear to moderate the effects of childhood adversity on mental illness. Remarkably, epigenetic DNA modifications have been identified that may underlie the long-lasting effects of environment on biological functions. This new epigenetic research is pointing to a new strategy for understanding gene-environment interactions.
The next decade of research will show whether this potential can be exploited in the development of new therapeutic options that may alter the traces that early environment leaves on the genome. However, as discussed in this module, the epigenome is not static and can be molded by developmental signals, environmental perturbations, and disease states, which presents an experimental challenge in the search for epigenetic risk factors in psychological disorders (Rakyan, Down, Balding, & Beck, 2011). The sample size and epigenomic assay required depend on the number of tissues affected, as well as the type and distribution of epigenetic modifications. Combining genetic association maps with epigenome-wide developmental studies may help identify novel molecular mechanisms to explain features of inheritance of personality traits and transform our understanding of the biological basis of psychology. Importantly, these epigenetic studies may lead to the identification of novel therapeutic targets and enable the development of improved strategies for early diagnosis, prevention, and better treatment of psychological and behavioral disorders.
The development of an individual is an active process of adaptation that occurs within a social and economic context. For example, the closeness or degree of positive attachment of the parent (typically mother)–infant bond and parental investment (including nutrient supply provided by the parent) that define early childhood experience also program the development of individual differences in stress responses in the brain, which then affect memory, attention, and emotion. In terms of evolution, this process provides the offspring with the ability to physiologically adjust gene expression profiles contributing to the organization and function of neural circuits and molecular pathways that support (1) biological defensive systems for survival (e.g., stress resilience), (2) reproductive success to promote establishment and persistence in the present environment, and (3) adequate parenting in the next generation (Bradshaw, 1965).
The most comprehensive study to date of variations in parental investment and epigenetic inheritance in mammals is that of the maternally transmitted responses to stress in rats. In rat pups, maternal nurturing (licking and grooming) during the first week of life is associated with long-term programming of individual differences in stress responsiveness, emotionality, cognitive performance, and reproductive behavior (Caldji et al., 1998; Francis, Diorio, Liu, & Meaney, 1999; Liu et al., 1997; Myers, Brunelli, Shair, Squire, & Hofer, 1989; Stern, 1997). In adulthood, the offspring of mothers that exhibit increased levels of pup licking and grooming over the first week of life show increased expression of the glucocorticoid receptor in the hippocampus (a brain structure associated with stress responsivity as well as learning and memory) and a lower hormonal response to stress compared with adult animals reared by low licking and grooming mothers (Francis et al., 1999; Liu et al., 1997). Moreover, rat pups that received low levels of maternal licking and grooming during the first week of life showed decreased histone acetylation and increased DNA methylation of a neuron-specific promoter of the glucocorticoid receptor gene (Weaver et al., 2004). The expression of this gene is then reduced, the number of glucocorticoid receptors in the brain is decreased, and the animals show a higher hormonal response to stress throughout their life. The effects of maternal care on stress hormone responses and behavior in the offspring can be eliminated in adulthood by pharmacological treatment (HDAC inhibitor trichostatin A, TSA) or dietary amino acid supplementation (methyl donor L-methionine), treatments that influence histone acetylation, DNA methylation, and expression of the glucocorticoid receptor gene (Weaver et al., 2004; Weaver et al., 2005). This series of experiments shows that histone acetylation and DNA methylation of the glucocorticoid receptor gene promoter form a necessary link in the process leading to the long-term physiological and behavioral sequelae of poor maternal care. This points to a possible molecular target for treatments that may reverse or ameliorate the traces of childhood maltreatment.
Several studies have attempted to determine to what extent the findings from model animals are transferable to humans. Examination of post-mortem brain tissue from healthy human subjects found that the human equivalent of the glucocorticoid receptor gene promoter (NR3C1 exon 1F promoter) is also unique to the individual (Turner, Pelascini, Macedo, & Muller, 2008). A similar study examining newborns showed that methylation of the glucocorticoid receptor gene promoter may be an early epigenetic marker of maternal mood and risk of increased hormonal responses to stress in infants 3 months of age (Oberlander et al., 2008). Although further studies are required to examine the functional consequence of this DNA methylation, these findings are consistent with our studies in the neonate and adult offspring of low licking and grooming mothers that show increased DNA methylation of the promoter of the glucocorticoid receptor gene, decreased glucocorticoid receptor gene expression, and increased hormonal responses to stress (Weaver et al., 2004). Examination of brain tissue from suicide victims found that the human glucocorticoid receptor gene promoter is also more methylated in the brains of individuals who had experienced maltreatment during childhood (McGowan et al., 2009). These findings suggest that DNA methylation mediates the effects of early environment in both rodents and humans and points to the possibility of new therapeutic approaches stemming from translational epigenetic research. Indeed, similar processes at comparable epigenetically labile regions could explain why the adult offspring of high and low licking/grooming mothers exhibit widespread differences in hippocampal gene expression and cognitive function (Weaver, Meaney, & Szyf, 2006).
However, this type of research is limited by the inaccessibility of human brain samples. The translational potential of this finding would be greatly enhanced if the relevant epigenetic modification could be measured in an accessible tissue. Examination of blood samples from adult patients with bipolar disorder, who also retrospectively reported on their experiences of childhood abuse and neglect, found that the degree of DNA methylation of the human glucocorticoid receptor gene promoter was strongly positively related to the reported experience of childhood maltreatment decades earlier. For a relationship between a molecular measure and reported historical exposure, the effect size is extraordinarily large. This opens a range of new possibilities: given the large effect size and consistency of this association, measurement of GR promoter methylation may effectively become a blood test measuring the physiological traces left on the genome by early experiences. Although this blood test cannot replace current methods of diagnosis, this unique additional information adds to our knowledge of how disease may arise and be manifested throughout life. Near-future research will examine whether this measure adds value over and above simple reporting of early adversities when it comes to predicting important outcomes, such as response to treatment or suicide.
The old adage “you are what you eat” might be true on more than just a physical level: The food you choose (and even what your parents and grandparents chose) is reflected in your own personal development and risk for disease in adult life (Wells, 2003). Nutrients can reverse or change DNA methylation and histone modifications, thereby modifying the expression of critical genes associated with physiologic and pathologic processes, including embryonic development, aging, and carcinogenesis. It appears that nutrients can influence the epigenome either by directly inhibiting enzymes that catalyze DNA methylation or histone modifications, or by altering the availability of substrates necessary for those enzymatic reactions. For example, rat mothers fed a diet low in methyl group donors during pregnancy produce offspring with reduced DNMT-1 expression, decreased DNA methylation, and increased histone acetylation at promoter regions of specific genes, including the glucocorticoid receptor, and increased gene expression in the liver of juvenile offspring (Lillycrop, Phillips, Jackson, Hanson, & Burdge, 2005) and adult offspring (Lillycrop et al., 2007). These data suggest that early life nutrition has the potential to influence epigenetic programming in the brain not only during early development but also in adult life, thereby modulating health throughout life. In this regard, nutritional epigenetics has been viewed as an attractive tool to prevent pediatric developmental diseases and cancer, as well as to delay aging-associated processes.
The best evidence relating to the impact of adverse environmental conditions on development and health comes from studies of the children of women who were pregnant during two civilian famines of World War II: the Siege of Leningrad (1941–1944) (Bateson, 2001) and the Dutch Hunger Winter (1944–1945) (Stanner et al., 1997). In the Dutch famine, women who were previously well nourished were subjected to low caloric intake and associated environmental stressors. Women who endured the famine in the late stages of pregnancy gave birth to smaller babies (Lumey & Stein, 1997), and these children had an increased risk of insulin resistance later in life (Painter, Roseboom, & Bleker, 2005). In addition, offspring who were starved prenatally later experienced impaired glucose tolerance in adulthood, even when food was more abundant (Stanner et al., 1997). Famine exposure at various stages of gestation was associated with a wide range of risks such as increased obesity, higher rates of coronary heart disease, and lower birth weight (Lumey & Stein, 1997). Interestingly, when examined 60 years later, people exposed to famine prenatally showed reduced DNA methylation compared with their unexposed same-sex siblings (Heijmans et al., 2008).
Almost all the cells in our body are genetically identical, yet our body generates many different cell types, organized into different tissues and organs, each expressing different proteins. Within each type of mammalian cell, about 2 meters of genomic DNA is divided into nuclear chromosomes. Yet the nucleus of a human cell, which contains the chromosomes, is only about 2 μm in diameter. To achieve this 1,000,000-fold compaction, DNA is wrapped around a group of 8 proteins called histones. This combination of DNA and histone proteins forms a special structure called a “nucleosome,” the basic unit of chromatin, which represents a structural solution for maintaining and accessing the tightly compacted genome. How tightly a gene is packaged into chromatin alters the likelihood that it will be expressed or silenced. Cellular functions such as gene expression, DNA replication, and the generation of specific cell types are therefore influenced by distinct patterns of chromatin structure, involving covalent modification of both histones (Kadonaga, 1998) and DNA (Razin, 1998).
Importantly, epigenetic variation also emerges across the lifespan. For example, although identical twins share a common genotype and are genetically identical and epigenetically similar when they are young, as they age they become more dissimilar in their epigenetic patterns and often display behavioral, personality, or even physical differences, and have different risk levels for serious illness. Thus, understanding the structure of the nucleosome is key to understanding the precise and stable control of gene expression and regulation, providing a molecular interface between genes and environmentally induced changes in cellular activity.
DNA methylation is the best-understood epigenetic modification influencing gene expression. DNA is composed of four types of naturally occurring nitrogenous bases: adenine (A), thymine (T), guanine (G), and cytosine (C). In mammalian genomes, DNA methylation occurs primarily at cytosine residues that are followed by guanines (CpG dinucleotides), to form 5-methylcytosine in a cell-specific pattern (Goll & Bestor, 2005; Law & Jacobsen, 2010; Suzuki & Bird, 2008). The enzymes that perform DNA methylation are called DNA methyltransferases (DNMTs), which catalyze the transfer of a methyl group to the cytosine (Adams, McKay, Craig, & Burdon, 1979). These enzymes are all expressed in the central nervous system and are dynamically regulated during development (Feng, Chang, Li, & Fan, 2005; Goto et al., 1994). The effect of DNA methylation on gene function varies depending on the period of development during which the methylation occurs and the location of the methylated cytosine. Methylation of DNA in gene regulatory regions (promoter and enhancer regions) usually results in gene silencing and reduced gene expression (Ooi, O’Donnell, & Bestor, 2009; Suzuki & Bird, 2008; Sutter & Doerfler, 1980; Vardimon et al., 1982). This is a powerful regulatory mechanism that ensures that genes are expressed only when needed. Thus, DNA methylation may broadly impact human brain development, and age-related misregulation of DNA methylation is associated with the molecular pathogenesis of neurodevelopmental disorders.
The modification of histone proteins constitutes another important epigenetic mark related to gene expression. One of the most thoroughly studied modifications is histone acetylation, which is associated with gene activation and increased gene expression (Wade, Pruss, & Wolffe, 1997). Acetylation on histone tails is mediated by the opposing enzymatic activities of histone acetyltransferases (HATs) and histone deacetylases (HDACs) (Kuo & Allis, 1998). For example, acetylation of histone in gene regulatory regions by HAT enzymes is generally associated with DNA demethylation, gene activation, and increased gene expression (Hong, Schroth, Matthews, Yau, & Bradbury, 1993; Sealy & Chalkley, 1978). On the other hand, removal of the acetyl group (deacetylation) by HDAC enzymes is generally associated with DNA methylation, gene silencing, and decreased gene expression (Davie & Chadee, 1998). The relationship between patterns of histone modifications and gene activity provides evidence for the existence of a “histone code” for determining cell-specific gene expression programs (Jenuwein & Allis, 2001). Interestingly, recent research using animal models has demonstrated that histone modifications and DNA methylation of certain genes mediate the long-term behavioral effects of the level of care experienced during infancy.
Early life experiences exert a profound and long-lasting influence on physical and mental health throughout life. The efforts to identify the primary causes of this have significantly benefited from studies of the epigenome—a dynamic layer of information associated with DNA that differs between individuals and can be altered through various experiences and environments. The epigenome has been heralded as a key “missing piece” of the etiological puzzle for understanding how development of psychological disorders may be influenced by the surrounding environment, in concordance with the genome. Understanding the mechanisms involved in the initiation, maintenance, and heritability of epigenetic states is thus an important aspect of research in current biology, particularly in the study of learning and memory, emotion, and social behavior in humans. Moreover, epigenetics in psychology provides a framework for understanding how the expression of genes is influenced by experiences and the environment to produce individual differences in behavior, cognition, personality, and mental health. In this module, we survey recent developments revealing epigenetic aspects of mental health and review some of the challenges of epigenetic approaches in psychology to help explain how nurture shapes nature.
Early childhood is not only a period of physical growth; it is also a time of mental development related to changes in the anatomy, physiology, and chemistry of the nervous system that influence mental health throughout life. Cognitive abilities associated with learning and memory, reasoning, problem solving, and developing relationships continue to emerge during childhood. Brain development is more rapid during this critical or sensitive period than at any other, with more than 700 neural connections created each second. Herein, complex gene–environment interactions (or genotype–environment interactions, G×E) serve to increase the number of possible contacts between neurons, as they hone their adult synaptic properties and excitability. Many weak connections form to different neuronal targets; subsequently, they undergo remodeling in which most connections vanish and a few stable connections remain. These structural changes (or plasticity) may be crucial for the development of mature neural networks that support emotional, cognitive, and social behavior. The generation of different morphology, physiology, and behavioral outcomes from a single genome in response to changes in the environment forms the basis for “phenotypic plasticity,” which is fundamental to the way organisms cope with environmental variation, navigate the present world, and solve future problems.
The challenge for psychology has been to integrate findings from genetics and environmental (social, biological, chemical) factors, including the quality of infant–mother attachments, into the study of personality and our understanding of the emergence of mental illness. These studies have demonstrated that common DNA sequence variation and rare mutations account for only a small fraction (1%–2%) of the total risk for the inheritance of personality traits and mental disorders (Dick, Riley, & Kendler, 2010; Gershon, Alliey-Rodriguez, & Liu, 2011). Additionally, studies that have attempted to examine the mechanisms and conditions under which DNA sequence variation influences brain development and function have been confounded by complex cause-and-effect relationships (Petronis, 2010). The large unaccounted-for heritability of personality traits and mental health suggests that additional molecular and cellular mechanisms are involved.
Epigenetics has the potential to provide answers to these important questions and refers to the transmission of phenotype in terms of gene expression in the absence of changes in DNA sequence—hence the name epi- (Greek: επί- over, above) genetics (Waddington, 1942; Wolffe & Matzke, 1999). The advent of high-throughput techniques such as sequencing-based approaches to study the distributions of regulators of gene expression throughout the genome led to the collective description of the “epigenome.” In contrast to the genome sequence, which is static and the same in almost all cells, the epigenome is highly dynamic, differing among cell types, tissues, and brain regions (Gregg et al., 2010). Recent studies have provided insights into epigenetic regulation of developmental pathways in response to a range of external environmental factors (Dolinoy, Weidman, & Jirtle, 2007). These environmental factors during early childhood and adolescence can cause changes in the expression of genes, conferring risk for mental illness and chronic physical conditions. Thus, the examination of genetic–epigenetic–environment interactions from a developmental perspective may determine the nature of gene misregulation in psychological disorders.
This module will provide an overview of the main components of the epigenome and review themes in recent epigenetic research that have relevance for psychology, to form the biological basis for the interplay between environmental signals and the genome in the regulation of individual differences in physiology, emotion, cognition, and behavior.
It may seem surprising, but genetic influence on behavior is a relatively recent discovery. In the middle of the 20th century, psychology was dominated by the doctrine of behaviorism, which held that behavior could only be explained in terms of environmental factors. Psychiatry concentrated on psychoanalysis, which probed for roots of behavior in individuals’ early life-histories. The truth is, neither behaviorism nor psychoanalysis is incompatible with genetic influences on behavior, and neither Freud nor Skinner was naive about the importance of organic processes in behavior. Nevertheless, in their day it was widely thought that children’s personalities were shaped entirely by imitating their parents’ behavior, and that schizophrenia was caused by certain kinds of “pathological mothering.” Whatever the outcome of our broader discussion of nature–nurture, the basic fact that the best predictors of an adopted child’s personality or mental health are found in the biological parents he or she has never met, rather than in the adoptive parents who raised him or her, presents a significant challenge to purely environmental explanations of personality or psychopathology. The message is clear: You can’t leave genes out of the equation. But keep in mind, no behavioral traits are completely inherited, so you can’t leave the environment out altogether, either.
Trying to untangle the various ways nature and nurture influence human behavior can be messy, and often common-sense notions can get in the way of good science. One very significant contribution of behavioral genetics that has changed psychology for good can be very helpful to keep in mind: When your subjects are biologically related, no matter how clearly a situation may seem to point to environmental influence, it is never safe to interpret a behavior as wholly the result of nurture without further evidence. For example, when presented with data showing that children whose mothers read to them often are likely to have better reading scores in third grade, it is tempting to conclude that reading to your kids out loud is important to success in school; this may well be true, but the study as described is inconclusive, because there are genetic as well as environmental pathways between the parenting practices of mothers and the abilities of their children. This is a case where “correlation does not imply causation,” as they say. To establish that reading aloud causes success, a scientist can either study the problem in adoptive families (in which the genetic pathway is absent) or find a way to randomly assign children to oral reading conditions.
The outcomes of nature–nurture studies have fallen short of our expectations (of establishing clear-cut bases for traits) in many ways. The most disappointing outcome has been the inability to organize traits from more- to less-genetic. As noted earlier, everything has turned out to be at least somewhat heritable (passed down), yet nothing has turned out to be absolutely heritable, and there hasn’t been much consistency as to which traits are more heritable and which are less heritable once other considerations (such as how accurately the trait can be measured) are taken into account (Turkheimer, 2000). The problem is conceptual: The heritability coefficient, and, in fact, the whole quantitative structure that underlies it, does not match up with our nature–nurture intuitions. We want to know how “important” the roles of genes and environment are to the development of a trait, but in focusing on “important” maybe we’re emphasizing the wrong thing. First of all, genes and environment are both crucial to every trait; without genes the environment would have nothing to work on, and genes cannot develop in a vacuum. Even more important, because nature–nurture questions look at the differences among people, the cause of a given trait depends not only on the trait itself, but also on the differences in that trait between members of the group being studied.
The classic example of the heritability coefficient defying intuition is the trait of having two arms. No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for “two-armedness” as identical twins, resulting in a heritability coefficient of zero for the trait of having two arms. Normally, according to the heritability model, this result (a coefficient of zero) would suggest all nurture, no nature, but we know that’s not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is that people do not vary in the genes related to arm development—which essentially upends the heritability formula (see the sketch after the next paragraph). In fact, in this instance, the opposite is likely true: to the extent that people differ in arm number, it is likely the result of accidents and is, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors.
Another issue with the heritability coefficient is that it divides traits’ determinants into two portions—genes and environment—which are then calculated together for the total variability. This is a little like asking how much of the experience of a symphony comes from the horns and how much from the strings; the ways instruments, or genes, integrate are more complex than that. It turns out that, for many traits, genetic differences affect behavior under some environmental circumstances but not others—a phenomenon called gene-environment interaction, or G×E. In one well-known example, Caspi et al. (2002) showed that among maltreated children, those who carried a particular allele of the MAOA gene showed a predisposition to violence and antisocial behavior, while those with other alleles did not; in children who had not been maltreated, the gene had no effect. Making matters even more complicated are very recent studies of what is known as epigenetics (see module, “Epigenetics” http://noba.to/37p5cb8v), a process in which the DNA itself is modified by environmental events, and those changes can then be transmitted to children.
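To see why the coefficient can defy intuition in both of these ways, it helps to write out the simplified variance model that underlies it. This is a sketch of the standard quantitative-genetics bookkeeping, not a computation taken from any study cited here:

\[ \mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E), \qquad h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(G) + \mathrm{Var}(E)} \]

Here \(\mathrm{Var}(P)\) is the total variability of the trait in the population studied, split into a portion associated with genetic differences, \(\mathrm{Var}(G)\), and a portion associated with environmental differences, \(\mathrm{Var}(E)\). For two-armedness, virtually no one differs in the relevant genes, so \(\mathrm{Var}(G) \approx 0\) and \(h^2 \approx 0\) no matter how thoroughly genetic arm development is. A G×E interaction, in turn, contributes variability that belongs to neither portion alone, which is one reason the simple two-way split misleads.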
Some common questions about nature–nurture are, how susceptible is a trait to change, how malleable is it, and do we “have a choice” about it? These questions are much more complex than they may seem at first glance. For example, phenylketonuria is an inborn error of metabolism caused by a single gene; it prevents the body from metabolizing phenylalanine. Untreated, it causes mental retardation and death. But it can be treated effectively by a straightforward environmental intervention: avoiding foods containing phenylalanine. Height seems like a trait firmly rooted in our nature and unchangeable, but the average height of many populations in Asia and Europe has increased significantly in the past 100 years, due to changes in diet and the alleviation of poverty. Even the most modern genetics has not provided definitive answers to nature–nurture questions. When it was first becoming possible to measure the DNA sequences of individual people, it was widely thought that we would quickly progress to finding the specific genes that account for behavioral characteristics, but that hasn’t happened. There are a few rare genes that have been found to have significant (almost always negative) effects, such as the single gene that causes Huntington’s disease, or the Apolipoprotein gene that causes early-onset dementia in a small percentage of Alzheimer’s cases. Aside from these rare genes of great effect, however, the genetic impact on behavior is broken up over many genes, each with very small effects. For most behavioral traits, the effects are so small and distributed across so many genes that we have not been able to catalog them in a meaningful way. In fact, the same is true of environmental effects. We know that extreme environmental hardship causes catastrophic effects for many behavioral outcomes, but fortunately extreme environmental hardship is very rare. Within the normal range of environmental events, those responsible for differences (e.g., why some children in a suburban third-grade classroom perform better than others) are much more difficult to grasp.
The difficulties with finding clear-cut solutions to nature–nurture problems bring us back to the other great questions about our relationship with the natural world: the mind-body problem and free will. Investigations into what we mean when we say we are aware of something reveal that consciousness is not simply the product of a particular area of the brain, nor does choice turn out to be an orderly activity that we can apply to some behaviors but not others. So it is with nature and nurture: What at first may seem to be a straightforward matter, able to be indexed with a single number, becomes more and more complicated the closer we look. The many questions we can ask about the intersection among genes, environments, and human traits—how sensitive are traits to environmental change, and how common are those influential environments; are parents or culture more relevant; how sensitive are traits to differences in genes, and how much do the relevant genes vary in a particular population; does the trait involve a single gene or a great many genes; is the trait more easily described in genetic or more-complex behavioral terms?—may have different answers, and the answer to one tells us little about the answers to the others.
It is tempting to predict that as we come to understand more about the wide-ranging effects of genetic differences on all human characteristics—especially behavioral ones—our cultural, ethical, legal, and personal ways of thinking about ourselves will have to undergo profound changes in response. Perhaps criminal proceedings will consider genetic background. Parents, presented with the genetic sequence of their children, will be faced with difficult decisions about reproduction. These hopes or fears are often exaggerated. In some ways, our thinking may need to change—for example, when we consider the meaning behind the fundamental American principle that all men are created equal. Human beings differ, and like all evolved organisms they differ genetically. The Declaration of Independence predates Darwin and Mendel, but it is hard to imagine that Jefferson—whose genius encompassed botany as well as moral philosophy—would have been alarmed to learn about the genetic diversity of organisms. One of the most important things modern genetics has taught us is that almost all human behavior is too complex to be nailed down, even from the most complete genetic information, unless we’re looking at identical twins. The science of nature and nurture has demonstrated that genetic differences among people are vital to human moral equality, freedom, and self-determination, not opposed to them. As Mordecai Kaplan said about the role of the past in Jewish theology, genetics gets a vote, not a veto, in the determination of human behavior. We should indulge our fascination with nature–nurture while resisting the temptation to oversimplify it.
There are three related problems at the intersection of philosophy and science that are fundamental to our understanding of our relationship to the natural world: the mind–body problem, the free will problem, and the nature–nurture problem. These great questions have a lot in common. Everyone, even those without much knowledge of science or philosophy, has opinions about the answers to these questions that come simply from observing the world we live in. Our feelings about our relationship with the physical and biological world often seem incomplete. We are in control of our actions in some ways, but at the mercy of our bodies in others; it feels obvious that our consciousness is some kind of creation of our physical brains, at the same time we sense that our awareness must go beyond just the physical. This incomplete knowledge of our relationship with nature leaves us fascinated and a little obsessed, like a cat that climbs into a paper bag and then out again, over and over, mystified every time by a relationship between inner and outer that it can see but can’t quite understand.
It may seem obvious that we are born with certain characteristics while others are acquired, and yet of the three great questions about humans’ relationship with the natural world, only nature–nurture gets referred to as a “debate.” In the history of psychology, no other question has caused so much controversy and offense: We are so concerned with nature–nurture because our very sense of moral character seems to depend on it. While we may admire the athletic skills of a great basketball player, we think of his height as simply a gift, a payoff in the “genetic lottery.” For the same reason, no one blames a short person for his height, or attributes someone’s congenital disability to poor decisions: To state the obvious, it’s “not their fault.” But we do praise the concert violinist (and perhaps her parents and teachers as well) for her dedication, just as we condemn cheaters, slackers, and bullies for their bad behavior.
The problem is, most human characteristics aren’t as clear-cut as height or instrument mastery, and so don’t affirm our nature–nurture expectations strongly one way or the other. In fact, even the great violinist might have some inborn qualities—perfect pitch, or long, nimble fingers—that support and reward her hard work. And the basketball player might have eaten a diet while growing up that promoted his genetic tendency for being tall. When we think about our own qualities, they seem under our control in some respects, yet beyond our control in others. And often the traits that don’t seem to have an obvious cause are the ones that concern us the most and are far more personally significant. What about how much we drink or worry? What about our honesty, or religiosity, or sexual orientation? They all come from that uncertain zone, neither fixed by nature nor totally under our own control.
One major problem with answering nature–nurture questions about people is, how do you set up an experiment? In nonhuman animals, there are relatively straightforward experiments for tackling nature–nurture questions. Say, for example, you are interested in aggressiveness in dogs. You want to test for the more important determinant of aggression: being born to aggressive dogs or being raised by them. You could mate two aggressive dogs—angry Chihuahuas—together, and mate two nonaggressive dogs—happy beagles—together, then switch half the puppies from each litter between the different sets of parents to raise. You would then have puppies born to aggressive parents (the Chihuahuas) but being raised by nonaggressive parents (the beagles), and vice versa, in litters that mirror each other in puppy distribution. The big questions are: Would the Chihuahua parents raise aggressive beagle puppies? Would the beagle parents raise nonaggressive Chihuahua puppies? Would the puppies’ nature win out, regardless of who raised them? Or… would the result be a combination of nature and nurture? Much of the most significant nature–nurture research has been done in this way (Scott & Fuller, 1998), and animal breeders have been doing it successfully for thousands of years. In fact, it is fairly easy to breed animals for behavioral traits.
With people, however, we can’t assign babies to parents at random, or select parents with certain behavioral characteristics to mate, merely in the interest of science (though history does include horrific examples of such practices, in misguided attempts at “eugenics,” the shaping of human characteristics through intentional breeding). In typical human families, children’s biological parents raise them, so it is very difficult to know whether children act like their parents due to genetic (nature) or environmental (nurture) reasons. Nevertheless, despite our restrictions on setting up human-based experiments, we do see real-world examples of nature-nurture at work in the human sphere—though they only provide partial answers to our many questions.
The science of how genes and environments work together to influence behavior is called behavioral genetics. The easiest opportunity we have to observe this is the adoption study. When children are put up for adoption, the parents who give birth to them are no longer the parents who raise them. This setup isn’t quite the same as the experiments with dogs (children aren’t assigned to random adoptive parents in order to suit the particular interests of a scientist) but adoption still tells us some interesting things, or at least confirms some basic expectations. For instance, if the biological child of tall parents were adopted into a family of short people, do you suppose the child’s growth would be affected? What about the biological child of a Spanish-speaking family adopted at birth into an English-speaking family? What language would you expect the child to speak? And what might these outcomes tell you about the difference between height and language in terms of nature-nurture?
Another option for observing nature-nurture in humans involves twin studies. There are two types of twins: monozygotic (MZ) and dizygotic (DZ). Monozygotic twins, also called “identical” twins, result from a single zygote (fertilized egg) and have the same DNA. They are essentially clones. Dizygotic twins, also known as “fraternal” twins, develop from two zygotes and share 50% of their DNA. Fraternal twins are ordinary siblings who happen to have been born at the same time. To analyze nature–nurture using twins, we compare the similarity of MZ and DZ pairs. Sticking with the features of height and spoken language, let’s take a look at how nature and nurture apply: Identical twins, unsurprisingly, are almost perfectly similar for height. The heights of fraternal twins, however, are like any other sibling pairs: more similar to each other than to people from other families, but hardly identical. This contrast between twin types gives us a clue about the role genetics plays in determining height. Now consider spoken language. If one identical twin speaks Spanish at home, the co-twin with whom she is raised almost certainly does too. But the same would be true for a pair of fraternal twins raised together. In terms of spoken language, fraternal twins are just as similar as identical twins, so it appears that the genetic match of identical twins doesn’t make much difference.
Twin and adoption studies are two instances of a much broader class of methods for observing nature-nurture called quantitative genetics, the scientific discipline in which similarities among individuals are analyzed based on how biologically related they are. We can do these studies with siblings and half-siblings, cousins, twins who have been separated at birth and raised separately (Bouchard, Lykken, McGue, & Segal, 1990; such twins are very rare and play a smaller role than is commonly believed in the science of nature–nurture), or with entire extended families (see Plomin, DeFries, Knopik, & Neiderhiser, 2012, for a complete introduction to research methods relevant to nature–nurture).
For better or for worse, contentions about nature–nurture have intensified because quantitative genetics produces a number called a heritability coefficient, varying from 0 to 1, that is meant to provide a single measure of genetics’ influence on a trait. In a general way, a heritability coefficient measures how strongly differences among individuals are related to differences among their genes. But beware: Heritability coefficients, although simple to compute, are deceptively difficult to interpret. Nevertheless, numbers that provide simple answers to complicated questions tend to have a strong influence on the human imagination, and a great deal of time has been spent discussing whether the heritability of intelligence or personality or depression is equal to one number or another.
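To make the computation concrete, one classical estimator from twin data is Falconer’s formula, which doubles the gap between the identical (MZ) and fraternal (DZ) twin correlations. The numbers below are illustrative values in the range often reported for height, not estimates from any study cited here:

\[ h^2 \approx 2\,(r_{MZ} - r_{DZ}) \]

With \(r_{MZ} = 0.90\) and \(r_{DZ} = 0.50\), the estimate is \(h^2 \approx 2(0.90 - 0.50) = 0.80\). If the two correlations were equal, as in the spoken-language example above, the formula would return 0. The arithmetic really is this simple, which is precisely why the coefficient is easy to compute and, as noted above, deceptively difficult to interpret.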
One reason nature–nurture continues to fascinate us so much is that we live in an era of great scientific discovery in genetics, comparable to the times of Copernicus, Galileo, and Newton, with regard to astronomy and physics. Every day, it seems, new discoveries are made, new possibilities proposed. When Francis Galton first started thinking about nature–nurture in the late 19th century, he was strongly influenced by his cousin, Charles Darwin, but genetics per se was unknown. Mendel’s famous work with peas, conducted at about the same time, went unnoticed for decades; quantitative genetics was developed in the 1920s; the structure of DNA was discovered by Watson and Crick in the 1950s; the human genome was completely sequenced at the turn of the 21st century; and we are now on the verge of being able to obtain the specific DNA sequence of anyone at a relatively low cost. No one knows what this new genetic knowledge will mean for the study of nature–nurture, but as we will see in the next section, answers to nature–nurture questions have turned out to be far more difficult and mysterious than anyone imagined.
Read through this fascinating comic created by Stuart McMillen about Bruce Alexander’s Rat Park study, found HERE.
For more information on Bruce Alexander’s study and a better understanding of addiction, listen to Johann Hari’s TED Talk, “Everything you think you know about addiction is wrong.”
Click on the link HERE to watch MIT Professor John Gabrieli’s lecture from Introduction to Psychology. Start at the 30:45 minute mark to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the video, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.