The video discusses the ideas of aggression and altruism. These two things are difficult to understand and explain so sit tight and get ready to run the gauntlet of human emotions.
This video discusses dealing with prejudice, stereotyping, and discrimination.
Why do people sometimes do bad things just because someone else told them to? And what does the term Groupthink mean? In this episode of Crash Course Psychology, Hank talks about the ideas of Social Influence and how it can affect our decisions to act or to not act.
Why do people do bad things? Is it because of the situation or who they are at their core? In this week’s episode of Crash Course Psychology, Hank works to shed a little light on the ideas of Situation vs. Personality. Oh, and we’ll have a look at the Stanford Prison Experiment…it’s alarming.
Do you know how Prozac works? Or lithium? Did you know that electroshock therapy is still a thing? There’s a lot to know about biomedical treatments and how they work in tandem with psychotherapy, or talk therapy. In this episode of Crash Course Psychology, Hank talks about how biomedical treatments have evolved and how they work with other therapies.
So, you know you’d like to get help with some problematic behavior (like fear of flying). What do you do? Who can you go to for help? Once you’ve gone, what can you expect? In this episode of Crash Course Psychology, Hank talks about what “Getting Help” can look like.
What exactly are personality disorders? How can they be diagnosed? Can we prevent some of them? In this episode of Crash Course Psychology, Hank gives us the down low on things like ego-dystonic and ego-syntonic disorders, borderline and antisocial personality disorders, and potential biological, psychological, and social roots of these disorders.
In this episode of Crash Course Psychology, Hank walks us through the troubling world of eating and body dysmorphic disorders. There’s a lot going on here and, even though we still have a lot of dots to connect, a lot we can learn to help ourselves and each other.
Did you know that Schizophrenia and Multiple Personality Disorder aren’t the same thing? Did you know that we don’t call it multiple personality disorder anymore? In this episode of Crash Course Psychology, Hank takes us down the road of some very misunderstood psychological disorders.
So, what do Batman and J.R.R. Tolkien have in common? Post-Traumatic Stress Disorder. It used to be called “shell shock,” and it can be really, really destructive. In this episode of Crash Course Psychology, Hank lays out the lowdown on PTSD and how trauma can affect the brain. Plus, a look at how addiction can play into trauma and the different types of treatments used to help those afflicted.
Not sleeping for days on end. Long periods of euphoria. Racing thoughts. Grandiose ideas. Mania. Depression. All of these are symptoms of Bipolar Disorder. In this episode of Crash Course Psychology, Hank talks about mood disorders and their causes as well as how these disorders can impact people’s lives.
Ever call someone OCD because they like to have a clean apartment? Ever tell someone you have a phobia of spiders when, in fact, they just creep you out a little? In this episode of Crash Course psychology, Hank talks about OCD and Anxiety Disorders in the hope we’ll understand what people with actual OCD have to deal with as well as how torturous Anxiety Disorders and Panic Attacks can actually be.
In this episode of Crash Course Psychology, Hank takes a look at how the treatment of psychological disorders has changed over the last hundred years and who is responsible for putting us on the path that got us here.
Sex is complicated for different reasons in different cultures. But, it’s the entire purpose of life, so there’s no reason to blush. In this episode of Crash Course Psychology, Hank talks about Kinsey, Masters and Johnson, sexuality, gender identity, hormones, and even looks into the idea of why we have sex. There’s a lot to go through here.
So, it turns out we have an easy time reading emotions in facial expressions, but emotions can straight up kill us! In this episode of Crash Course Psychology, Hank discusses stress, emotions, and their overall impact on our health.
Even if you’re Mel Gibson or Kanye, it’s probably best to not wear all of your emotions on your sleeve. In this episode of Crash Course Psychology, Hank talks about these things called “Emotions”. What are they? And why do we need them?
In this episode of Crash Course Psychology, Hank takes a look at WAIS and WISC intelligence tests and how bias can really skew both results and the usefulness of those results.
So, how many different kinds of intelligence are there? And what is the G-Factor? Eugenics? Have you ever taken an IQ Test? All of these things play into the fascinating and sometimes icky history of Intelligence Testing. In this episode of Crash Course Psychology, Hank talks us through some of the important aspects of that history… as well as Nazis. Hey, I said some of it was icky.
How would you measure a personality? What, exactly, is the self? Well, as you’ve come to expect, it’s not that easy to nail down an answer for those questions. Whether you’re into blood, bile, earth, wind, fire, or those Buzzfeed questionnaires, there are LOTS of ways to get at who we are and why.
Herman Rorschach (no, not the guy from Watchmen) came up with the eponymous tests, but what do they mean? Why are we so fascinated with them despite the division in the world of Psychology? Hank tackles these topics as we take a closer look at personality in this episode of Crash Course Psychology.
In this episode of Crash Course Psychology, Hank has a look at that oh so troublesome time in everyone’s life: adolescence! He talks about identity, individuality, and The Breakfast Club.
In this episode of Crash Course Psychology, Hank takes a look at a few experiments that helped us understand how we develop as human beings. Things like attachment, separation anxiety, stranger anxiety, and morality are all discussed… also, a seriously unpleasant study with monkeys and fake mothers.
How does our knowledge grow? It turns out there are some different ideas about that. Schemas, Four-Stage Theory of Cognitive Development, and Vygotsky’s Theory of Scaffolding all play different roles but the basic idea is that children think about things very differently than adults.
Feeling motivated? Even if you are, do you know why? The story of Aron Ralston can tell us a lot about motivation. In this episode of Crash Course Psychology, Hank tells us Ralston’s story, as well as four theories of motivation and some evolutionary perspectives on motivation.
You know what’s amazing? That we can talk to people, they can make meaning out of it, and then talk back to us. In this episode of Crash Course Psychology, Hank talks to us and tries to make meaning out of how our brains do this thing called Language. Plus, monkeys!
We used to think that the human brain was a lot like a computer—using logic to figure out complicated problems. It turns out, it’s a lot more complex and, well, weird than that. In this episode of Crash Course Psychology, Hank discusses thinking & communication, solving problems, creating problems, and a few ideas about what our brains are doing up there.
In this REALLY IMPORTANT EPISODE of Crash Course Psychology, Hank talks about how we remember and forget things, why our memories are fallible, and the dangers that can pose.
Remember that guy from 300? What was his name? ARG!!! It turns out our brains make and recall memories in different ways. In this episode of Crash Course Psychology, Hank talks about the way we do it, what damaging that process can do to us, and that guy… with the face and six pack…
In this episode of Crash Course Psychology, Hank talks about how we learn by observation… and how that can mean beating the tar out of an inanimate clown named Bobo.
I’m sure you’ve heard of Pavlov’s Bell (and I’m not talking about the Aimee Mann song), but what was Ivan Pavlov up to, exactly? And how are our brains trained? And what is a “Skinner Box”? All those questions and more are answered in today’s Crash Course Psychology, in which Hank talks about some of the aspects of learning.
You may think you know all about hypnosis from the movies…Zoolander, The Manchurian Candidate, etc… but there’s a whole lot more going on. In this episode of Crash Course Psychology, Hank tells us about some of the many altered states of consciousness, including hypnosis.
Why do we sleep? Well… that’s a tricky question. More easily answered is the question, “How do we sleep?” In this episode of Crash Course Psychology, Hank discusses some of the ways our brain functions when sleeping and how it can malfunction as well.
What exactly is consciousness? Well… that’s kind of a gray area. In this episode of Crash Course Psychology, Hank gives you the basic ideas of what consciousness is, how our attention works, and why we shouldn’t text and drive… ever… no, really, NEVER!
So what does perception even mean? What’s the difference between seeing something and making sense of it? In today’s episode of Crash Course Psychology, Hank gives us some insight into the differences between sensing and perceiving.
HOMUNCULUS! It’s a big and weird word that you may or may not have heard before, but do you know what it means? In this episode of Crash Course Psychology, Hank gives us a deeper understanding of this weird model of human sensation.
Just what is the difference between sensing and perceiving? And how does vision actually work? And what does this have to do with a Corgi? In this episode of Crash Course Psychology, Hank takes us on a journey through the brain to better explain these and other concepts. Plus, you know, CORGI!
In this episode of Crash Course Psychology, we get to meet the brain. Hank talks us through the Central Nervous System, the ancestral structures of the brain, the limbic system, and new structures of the brain. Plus, what does Phineas Gage have to do with all of this?
What exactly happens when we get scared? How does our brain make our body react? Just what are Neurotransmitters? The video takes us to the simplest part of the complex system of our brains and nervous systems, the neuron.
So how do we apply the scientific method to psychological research? The video discusses case studies, naturalistic observation, surveys and interviews, and experimentation. Different kinds of bias in experimentation and how research practices help us avoid them are also included.
What does Psychology mean? Where does it come from? Here’s a 10-minute introduction to psychology, its history, and some of the big names in the development of the field.
Social psychology is the scientific study of how people’s thoughts, feelings, and behaviors are influenced by the actual, imagined, or implied presence of others. This subfield of psychology is concerned with the way such feelings, thoughts, beliefs, intentions, and goals are constructed, and how these psychological factors, in turn, influence our interactions with others.
Focus of Social Psychology
Social psychology typically explains human behavior as a result of the interaction of mental states and immediate social situations. Social psychologists, therefore, examine the factors that lead us to behave in a given way in the presence of others, as well as the conditions under which certain behaviors, actions, and feelings occur. They focus on how people construe or interpret situations and how these interpretations influence their thoughts, feelings, and behaviors (Ross & Nisbett, 1991). Thus, social psychology studies individuals in a social context and how situational variables interact to influence behavior.
Social psychologists assert that an individual’s thoughts, feelings, and behaviors are very much influenced by social situations. Essentially, people will change their behavior to align with the social situation at hand. If we are in a new situation or are unsure how to behave, we will take our cues from other individuals.
The field of social psychology studies topics at both the intrapersonal level (pertaining to the individual), such as emotions and attitudes, and the interpersonal level (pertaining to groups), such as aggression and attraction. The field is also concerned with common cognitive biases—such as the fundamental attribution error, the actor-observer bias, the self-serving bias, and the just-world hypothesis—that influence our behavior and our perceptions of events.
![Protesters holding signs, demanding justice for Trayvon Martin.](https://courses.candelalearning.com/principlesofpsychology2x19x1/wp-content/uploads/sites/1293/2015/12/tmartin.jpe "Trayvon Martin, a 17-year-old African American youth, was shot to death at the hands of George Zimmerman, a white volunteer neighborhood watchman, in 2012. His death sparked a heated debate around the country about the effects of racism in the United States. Social psychologists theorize about how different cognitive biases influence people's perspectives on the event. (credit 'signs': modification of work by David Shankbone; credit 'walk': modification of work by 'Fibonacci Blue'/Flickr)")
History of Social Psychology
The discipline of social psychology began in the United States in the early 20th century. The first published study in this area was an experiment in 1898 by Norman Triplett on the phenomenon of social facilitation. During the 1930s, Gestalt psychologists such as Kurt Lewin were instrumental in developing the field as something separate from the behavioral and psychoanalytic schools that were dominant during that time.
During World War II, social psychologists studied the concepts of persuasion and propaganda for the U.S. military. After the war, researchers became interested in a variety of social problems including gender issues, racial prejudice, cognitive dissonance, bystander intervention, aggression, and obedience to authority. During the years immediately following World War II there was frequent collaboration between psychologists and sociologists; however, the two disciplines have become increasingly specialized and isolated from each other in recent years, with sociologists focusing more on macro-level variables (such as social structure).
This introduction to psychology is largely derived from the OpenStax open textbook, as ported by Lumen Learning.
It’s meant to be an example of a textbook as a living, forkable, remixable entity, something that captures the early promise of the Connexions platform, and the recent promise of Ward Cunningham’s experiment with wiki.
To read along, click the links on the side menu, or follow the previous and next prompts at the bottom of each page.
References in the text can be found in the Candela Introductory Psychology Bibliography.
If you’ve come here from another location, you can start at the beginning of the path. Clicking the path will reload this page and load a sidebar that suggests a reading sequence to you.
By the end of this section, you will be able to:
- Explain what a correlation coefficient tells us about the relationship between variables
- Recognize that correlation does not indicate a cause-and-effect relationship between variables
- Discuss our tendency to look for relationships between variables that do not really exist
- Explain random sampling and assignment of participants into experimental and control groups
- Discuss how experimenter or participant bias could affect the results of an experiment
- Identify independent and dependent variables
Did you know that as sales in ice cream increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing crime do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.
It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?
Correlation means that there is a relationship between two or more variables (such as ice cream consumption and crime), but this relationship does not necessarily imply cause and effect. When two variables are correlated, it simply means that as one variable changes, so does the other. We can measure correlation by calculating a statistic known as a correlation coefficient. A correlation coefficient is a number from -1 to +1 that indicates the strength and direction of the relationship between variables. The correlation coefficient is usually represented by the letter r.
The number portion of the correlation coefficient indicates the strength of the relationship. The closer the number is to 1 (be it negative or positive), the more strongly related the variables are, and the more predictable changes in one variable will be as the other variable changes. The closer the number is to zero, the weaker the relationship, and the less predictable the relationship between the variables becomes. For instance, a correlation coefficient of 0.9 indicates a far stronger relationship than a correlation coefficient of 0.3. If the variables are not related to one another at all, the correlation coefficient is 0. Two variables with no relationship to each other, such as a person’s shoe size and the number of hours they slept last night, would have a correlation coefficient of 0.
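The coefficient described above can be computed directly from paired observations. Here is a minimal Python sketch of Pearson's r; the function name and the sample numbers are illustrative, not drawn from the text:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term (numerator) and the two spread terms (denominator)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Two variables that move together in lockstep give r = 1;
# reversing one of them gives r = -1 (both up to floating-point rounding).
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))
```

Real psychological data almost never reach these extremes; values like the r = -0.29 sleep study mentioned below are far more typical.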
The sign—positive or negative—of the correlation coefficient indicates the direction of the relationship. A positive correlation means that the variables move in the same direction. Put another way, it means that as one variable increases so does the other, and conversely, when one variable decreases so does the other. A negative correlation means that the variables move in opposite directions. If two variables are negatively correlated, a decrease in one variable is associated with an increase in the other and vice versa.
The example of ice cream and crime rates is a positive correlation because both variables increase when temperatures are warmer. Other examples of positive correlations are the relationship between an individual’s height and weight or the relationship between a person’s age and number of wrinkles. One might expect a negative correlation to exist between someone’s tiredness during the day and the number of hours they slept the previous night: the amount of sleep decreases as the feelings of tiredness increase. In a real-world example of negative correlation, student researchers at the University of Minnesota found a weak negative correlation (r = -0.29) between the average number of days per week that students got fewer than 5 hours of sleep and their GPA (Lowry, Dean, & Manders, 2010). Keep in mind that a negative correlation is not the same as no correlation. For example, we would probably find no correlation between hours of sleep and shoe size.
As mentioned earlier, correlations have predictive value. Imagine that you are on the admissions committee of a major university. You are faced with a huge number of applications, but you are able to accommodate only a small percentage of the applicant pool. How might you decide who should be admitted? You might try to correlate your current students’ college GPA with their scores on standardized tests like the SAT or ACT. By observing which correlations were strongest for your current students, you could use this information to predict relative success of those students who have applied for admission into the university.
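The admissions scenario above can be made concrete with a least-squares regression line, whose slope is built from the same quantities as r. This is a hedged sketch only: the SAT scores and GPAs below are invented for illustration, and a real admissions model would use far more data.

```python
def predict(x, xs, ys):
    """Least-squares prediction of y from x, using the relationship in (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = cov(x, y) / var(x); equivalently r * (sd_y / sd_x)
    var_x = sum((v - mean_x) ** 2 for v in xs)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
    slope = cov / var_x
    intercept = mean_y - slope * mean_x
    return intercept + slope * x

# Hypothetical SAT scores and later college GPAs for five current students
sat = [1000, 1100, 1200, 1300, 1400]
gpa = [2.8, 3.0, 3.1, 3.4, 3.6]
print(round(predict(1250, sat, gpa), 2))  # predicted GPA for an applicant scoring 1250
```

The stronger the correlation in the historical data, the tighter the scatter around this line, and the more useful the prediction.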
Correlation Does Not Indicate Causation
Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.
Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations? For example, someone at a healthy weight may be more likely to eat a healthy breakfast regularly than someone who is obese or someone who avoids meals in an attempt to diet. While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.
The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full.
There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.
Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).
CAUSALITY: CONDUCTING EXPERIMENTS AND USING THE DATA
As you’ve learned, the only way to establish that there is a cause-and-effect relationship between two variables is to conduct a scientific experiment. Experiment has a different meaning in the scientific context than in everyday life. In everyday conversation, we often use it to describe trying something for the first time, such as experimenting with a new hair style or a new food. However, in the scientific context, an experiment has precise requirements for design and implementation.
The Experimental Hypothesis
In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon.
These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.
Designing an Experiment
The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference: the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be sure that any differences between the two are due to the experimental manipulation rather than chance.
In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. We measure the violent behavior in our control group after they watch nonviolent television programming for the same amount of time. It is important for the control group to be treated similarly to the experimental group in every other respect; the only difference is that the control group does not receive the experimental manipulation.
We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.
Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.
Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By keeping the observers blind to which child is in which group, we protect against those biases. This situation is a single-blind study, meaning that one of the groups (the participants) is unaware of which group they are in (experimental or control) while the researcher who developed the experiment knows which participants are in each group.
In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.
The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.
Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.
To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations.
Independent and Dependent Variables
In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study. A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.
We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?
Selecting and Assigning Experimental Participants
Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants, so we need to determine whom to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.
Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment. If possible, we should use a random sample (there are other types of samples, but for the purposes of this chapter, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.
In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.
In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
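The sampling step above can be sketched with a few lines of code. This is a minimal illustration using an invented roster of 5,000 hypothetical fourth graders; `random.sample` draws without replacement, giving every student on the roster an equal chance of selection.

```python
import random

# Hypothetical roster: every fourth grader in the city (names invented)
population = [f"student_{i}" for i in range(5000)]

random.seed(1)  # fixed seed so this sketch is reproducible
sample = random.sample(population, 200)  # each student has an equal chance
```

In a real study the roster would come from school enrollment records rather than a generated list, but the principle — equal selection probability for every member of the population — is the same.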
Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment, all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.
Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
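Random assignment itself is a simple procedure, which is one reason statistical software handles it so easily. A minimal sketch (participant labels invented): shuffle the sample, then split it in half, so every participant has an equal chance of landing in either group.

```python
import random

def random_assignment(sample, seed=None):
    """Shuffle the sample and split it in half, giving every participant
    an equal chance of ending up in the experimental or control group."""
    rng = random.Random(seed)
    shuffled = list(sample)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # experimental, control

participants = [f"student_{i}" for i in range(200)]
experimental, control = random_assignment(participants, seed=7)
```

With 100 children per group, any preexisting differences (temperament, home environment, prior media exposure) should be spread roughly evenly across the two groups.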
Issues to Consider
While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.
Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.
Interpreting Experimental Findings
Once data is collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if there were actually no difference between the groups, we would expect to see a difference this large by chance fewer than 5 times out of 100.
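One intuitive way to see this logic is a permutation test — just one of many possible analyses, shown here with invented counts of aggressive acts per child. If the group labels were irrelevant, shuffling them at random should produce a difference as large as the observed one fairly often; if shuffled labels almost never do, the observed difference is unlikely to be due to chance.

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.
    Returns the proportion of label shuffles that produce a difference
    at least as large as the one actually observed (the p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # pretend the labels don't matter
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Invented counts of aggressive acts per child in each condition
violent_group    = [8, 9, 10, 11, 12]
nonviolent_group = [1, 2, 3, 2, 1]
p = permutation_test(violent_group, nonviolent_group)
```

Here `p` comes out well below 0.05, so by the convention described above the difference between the groups would be called significant.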
The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.
When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.
A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.
Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.
Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies
Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, research making these claims appeared in several peer-reviewed journals. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.
A published piece of work can be retracted when data is called into question because of falsification, fabrication, or serious research design problems. Once a paper is retracted, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest: the lead researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated. For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.
RELIABILITY AND VALIDITY
Reliability and validity are two important considerations that must be made with any type of data collection. Reliability refers to the ability to consistently produce a given result. In the context of psychological research, this would mean that any instruments or tools used to collect data do so in consistent, reproducible ways.
Unfortunately, being consistent in measurement does not necessarily mean that you have measured something correctly. To illustrate this concept, consider a kitchen scale that would be used to measure the weight of cereal that you eat in the morning. If the scale is not properly calibrated, it may consistently under- or overestimate the amount of cereal that’s being measured. While the scale is highly reliable in producing consistent results (e.g., the same amount of cereal poured onto the scale produces the same reading each time), those results are incorrect. This is where validity comes into play. Validity refers to the extent to which a given instrument or tool accurately measures what it’s supposed to measure. While any valid measure is by necessity reliable, the reverse is not necessarily true. Researchers strive to use instruments that are both highly reliable and valid.
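The kitchen-scale example can be made concrete with a tiny simulation (the weights and the 10-gram offset are invented for illustration). A scale that always reads exactly 10 grams heavy is perfectly consistent — maximally reliable — yet every reading it produces is wrong, so it is not valid.

```python
import statistics

def miscalibrated_scale(true_weight_g, offset_g=10.0):
    """A perfectly consistent scale that always reads 10 g too heavy:
    highly reliable, but not valid."""
    return true_weight_g + offset_g

true_weight = 30.0  # grams of cereal (invented value)
readings = [miscalibrated_scale(true_weight) for _ in range(5)]

consistency = statistics.pstdev(readings)       # 0.0 -> perfectly reliable
bias = statistics.mean(readings) - true_weight  # 10.0 -> not valid
```

Zero spread across repeated measurements demonstrates reliability; the constant 10-gram error demonstrates the lack of validity. A valid scale would need both zero spread and zero bias.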
Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.
However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the predictive validity of the SAT is grossly exaggerated in how well it is able to predict the GPA of first-year college students. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).
In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).
A correlation is described with a correlation coefficient, r, which ranges from -1 to 1. The correlation coefficient tells us about the nature (positive or negative) and the strength of the relationship between two or more variables. Correlations do not tell us anything about causation—regardless of how strong the relationship is between variables. In fact, the only way to demonstrate causation is by conducting an experiment. People often make the mistake of claiming that correlations exist when they really do not.
Researchers can test cause-and-effect hypotheses by conducting experiments. Ideally, experimental participants are randomly selected from the population of interest. Then, the participants are randomly assigned to their respective groups. Sometimes, the researcher and the participants are blind to group membership to prevent their expectations from influencing the results.
In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data is collected from both groups, it is analyzed statistically to determine if there are meaningful differences between the groups.
Psychologists report their research findings in peer-reviewed journal articles. Research published in this format is checked by several other psychologists who serve as a filter separating ideas that are supported by evidence from ideas that are not. Replication has an important role in ensuring the legitimacy of published research. In the long run, only those findings that are capable of being replicated consistently will achieve consensus in the scientific community.
Self Check Questions
Critical Thinking Questions
1. Earlier in this section, we read about research suggesting that there is a correlation between eating cereal and weight. Cereal companies that present this information in their advertisements could lead someone to believe that eating more cereal causes healthy weight. Why would they make such a claim and what arguments could you make to counter this cause-and-effect claim?
2. Recently a study published in the journal Nutrition and Cancer established a negative correlation between coffee consumption and breast cancer. Specifically, it was found that women consuming more than 5 cups of coffee a day were less likely to develop breast cancer than women who never consumed coffee (Lowcock, Cotterchio, Anderson, Boucher, & El-Sohemy, 2013). Imagine you see a newspaper story about this research that says, “Coffee Protects Against Cancer.” Why is this headline misleading and why would a more accurate headline draw less interest?
3. Sometimes, true random sampling can be very difficult to obtain. Many researchers make use of convenience samples as an alternative. For example, one popular convenience sample would involve students enrolled in Introduction to Psychology courses. What are the implications of using this sampling technique?
4. Peer review is an important part of publishing research findings in many scientific disciplines. This process is normally conducted anonymously; in other words, the author of the article being reviewed does not know who is reviewing the article, and the reviewers are unaware of the author’s identity. Why would this be an important part of this process?
Personal Application Questions
5. We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about and what can be done in the future to combat it?
6. Are there any questions about human or animal behavior that you would really like to answer? Generate a hypothesis and briefly describe how you would conduct an experiment to answer your question.
1. The cereal companies are trying to make a profit, so framing the research findings in this way would improve their bottom line. However, it could be that people who forgo more fatty options for breakfast are health conscious and engage in a variety of other behaviors that help them maintain a healthy weight.
2. Using the word protects seems to suggest causation as a function of correlation. If the headline were more accurate, it would be less interesting because indicating that two things are associated is less powerful than indicating that doing one thing causes a change in the other.
3. If research is limited to students enrolled in Introduction to Psychology courses, then our ability to generalize to the larger population would be dramatically reduced. One could also argue that students enrolled in Introduction to Psychology courses may not be representative of the larger population of college students at their school, much less the larger general population.
4. Anonymity protects against personal biases interfering with the reviewer’s opinion of the research. Allowing the reviewer to remain anonymous would mean that they can be honest in their appraisal of the manuscript without fear of reprisal.
By the end of this section, you will be able to:
- Describe the different research methods used by psychologists
- Discuss the strengths and weaknesses of case studies, naturalistic observation, surveys, and archival research
- Compare longitudinal and cross-sectional approaches to research
There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Other approaches involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions, to extensive in-depth interviews, to well-controlled experiments.
Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data was collected. All of the methods described thus far are correlational in nature. This means that researchers can speak to important relationships that might exist between two or more variables of interest. However, correlational data cannot be used to make claims about cause-and-effect relationships.
Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in this chapter, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.
CLINICAL OR CASE STUDIES
In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.
The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.
These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).
In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.
If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.
If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?
This is very similar to the phenomenon mentioned earlier in this chapter: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.
Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation: observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).
It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway.
It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The primatologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa. As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).
The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.
The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.
Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation procedure developed by Mary Ainsworth (you will read more about this in the chapter on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.
Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
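The inter-rater reliability check described above can be made concrete with a small calculation. The sketch below uses hypothetical codings and category labels (not data from the text) to compute simple percent agreement between two observers, along with Cohen's kappa, a commonly used statistic that corrects agreement for chance:

```python
# Hypothetical codings of the same 10 hand-washing events by two
# observers; the labels and data are illustrative, not from a study.
obs_a = ["wash", "wash", "no_wash", "wash", "no_wash",
         "wash", "wash", "no_wash", "wash", "wash"]
obs_b = ["wash", "wash", "no_wash", "no_wash", "no_wash",
         "wash", "wash", "wash", "wash", "wash"]

def percent_agreement(a, b):
    """Proportion of events that both observers coded the same way."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance (Cohen's kappa) for two raters."""
    n = len(a)
    categories = set(a) | set(b)
    p_o = percent_agreement(a, b)                  # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n)  # agreement expected
              for c in categories)                 # by chance alone
    return (p_o - p_e) / (1 - p_e)

print(round(percent_agreement(obs_a, obs_b), 2))  # 0.8
print(round(cohens_kappa(obs_a, obs_b), 2))       # 0.52
```

Percent agreement is easy to interpret but can be inflated when one category is very common; kappa adjusts for the agreement two raters would reach by guessing, which is why many observational studies report it alongside raw agreement.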
Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally ([(Link)]). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.
Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.
There are both strengths and weaknesses of the survey in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.
Another potential weakness of surveys is something we touched on earlier in this chapter: People don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.
Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).
Some researchers gain access to large amounts of data without interacting with a single research participant. Instead, they use existing records to answer various research questions. This type of research approach is known as archival research. Archival research relies on examining past records or data sets to look for interesting patterns or relationships.
For example, a researcher might access the academic records of all individuals who enrolled in college within the past ten years and calculate how long it took them to complete their degrees, as well as course loads, grades, and extracurricular involvement. Archival research could provide important information about who is most likely to complete their education, and it could help identify important risk factors for struggling students ([(Link)]).
In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.
LONGITUDINAL AND CROSS-SECTIONAL RESEARCH
Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.
Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of following a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences of different generations of individuals that make them different from one another.
To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.
Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.
Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) ([(Link)]).
As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.
Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, the attrition rates, or reduction in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.
The clinical or case study involves studying just a few individuals for an extended period of time. While this approach provides an incredible depth of information, the ability to generalize these observations to the larger population is problematic. Naturalistic observation involves observing behavior in a natural setting and allows for the collection of valid, true-to-life information from realistic situations. However, naturalistic observation does not allow for much control and often requires quite a bit of time and money to perform. Researchers strive to ensure that their tools for collecting data are both reliable (consistent and replicable) and valid (accurate).
Surveys can be administered in a number of ways and make it possible to collect large amounts of data quickly. However, the depth of information that can be collected through surveys is somewhat limited compared to a clinical or case study.
Archival research involves studying existing data sets to answer research questions.
Longitudinal research has been incredibly helpful to researchers who need to collect data on how people change over time. Cross-sectional research compares multiple segments of a population at a single time.
Self Check Questions
Critical Thinking Questions
1. In this section, conjoined twins, Krista and Tatiana, were described as being potential participants in a case study. In what other circumstances would you think that this particular research approach would be especially helpful and why?
2. Presumably, reality television programs aim to provide a realistic portrayal of the behavior displayed by the characters featured in such programs. This section pointed out why this is not really the case. What changes could be made in the way that these programs are produced that would result in more honest portrayals of realistic behavior?
3. Which of the research methods discussed in this section would be best suited to research the effectiveness of the D.A.R.E. program in preventing the use of alcohol and other drugs? Why?
4. Aside from biomedical research, what other areas of research could greatly benefit by both longitudinal and archival research?
Personal Application Questions
5. A friend of yours is working part-time in a local pet store. Your friend has become increasingly interested in how dogs normally communicate and interact with each other, and is thinking of visiting a local veterinary clinic to see how dogs interact in the waiting room. After reading this section, do you think this is the best way to better understand such interactions? Do you have any suggestions that might result in more valid data?
6. As a college student, you are no doubt concerned about the grades that you earn while completing your coursework. If you wanted to know how overall GPA is related to success in life after college, how would you choose to approach this question and what kind of resources would you need to conduct this research?
1. Case studies might prove especially helpful when studying individuals who have rare conditions. For instance, if one wanted to study multiple personality disorder, then the case study approach with individuals diagnosed with multiple personality disorder would be helpful.
2. The behavior displayed on these programs would be more realistic if the cameras were mounted in hidden locations, or if the people who appear on these programs did not know when they were being recorded.
3. Longitudinal research would be an excellent approach in studying the effectiveness of this program because it would follow students as they aged to determine if their choices regarding alcohol and drugs were affected by their participation in the program.
4. Answers will vary. Possibilities include research on hiring practices based on human resource records, and research that follows former prisoners to determine if the time that they were incarcerated provided any sort of positive influence on their likelihood of engaging in criminal behavior in the future.
By the end of this section, you will be able to:
- Explain how scientific research addresses questions about behavior
- Discuss how scientific research guides public policy
- Appreciate how scientific research can be important in making personal decisions
Scientific research is a critical tool for successfully navigating our complex world. Without it, we would be forced to rely solely on intuition, other people’s authority, and blind luck. While many of us feel confident in our abilities to decipher and interact with the world around us, history is filled with examples of how very wrong we can be when we fail to recognize the need for evidence in supporting claims. At various times in history, we would have been certain that the sun revolved around a flat earth, that the earth’s continents did not move, and that mental illness was caused by possession ([(Link)]). It is through systematic scientific research that we divest ourselves of our preconceived notions and superstitions and gain an objective understanding of ourselves and our world.
The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical: It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.
While behavior is observable, the mind is not. If someone is crying, we can see behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This chapter explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.
USE OF RESEARCH INFORMATION
Trying to determine which theories are and are not accepted by the scientific community can be difficult, especially in an area of research as broad as psychology. More than ever before, we have an incredible amount of information at our fingertips, and a simple internet search on any given research topic might result in a number of contradictory studies. In these cases, we are witnessing the scientific community going through the process of reaching a consensus, and it could be quite some time before a consensus emerges. For example, the hypothesized link between exposure to media violence and subsequent aggression has been debated in the scientific community for roughly 60 years. Even today, we will find detractors, but a consensus is building. Several professional organizations view media violence exposure as a risk factor for actual violence, including the American Medical Association, the American Psychiatric Association, and the American Psychological Association (American Academy of Pediatrics, American Academy of Child & Adolescent Psychiatry, American Psychological Association, American Medical Association, American Academy of Family Physicians, American Psychiatric Association, 2000).
In the meantime, we should strive to think critically about the information we encounter by exercising a degree of healthy skepticism. When someone makes a claim, we should examine the claim from a number of different perspectives: what is the expertise of the person making the claim, what might they gain if the claim is valid, does the claim seem justified given the evidence, and what do other researchers think of the claim? This is especially important when we consider how much information in advertising campaigns and on the internet claims to be based on “scientific evidence” when in actuality it is a belief or perspective of just a few individuals trying to sell a product or draw attention to their perspectives.
We should be informed consumers of the information made available to us because decisions based on this information have significant consequences. One such consequence can be seen in politics and public policy. Imagine that you have been elected as the governor of your state. One of your responsibilities is to manage the state budget and determine how to best spend your constituents’ tax dollars. As the new governor, you need to decide whether to continue funding the D.A.R.E. (Drug Abuse Resistance Education) program in public schools ([(Link)]). This program typically involves police officers coming into the classroom to educate students about the dangers of becoming involved with alcohol and other drugs. According to the D.A.R.E. website (www.dare.org), this program has been very popular since its inception in 1983, and it is currently operating in 75% of school districts in the United States and in more than 40 countries worldwide. Sounds like an easy decision, right? However, on closer review, you discover that the vast majority of research into this program consistently suggests that participation has little, if any, effect on whether or not someone uses alcohol or other drugs (Clayton, Cattarello, & Johnstone, 1996; Ennett, Tobler, Ringwalt, & Flewelling, 1994; Lynam et al., 1999; Ringwalt, Ennett, & Holt, 1991). If you are committed to being a good steward of taxpayer money, will you fund this particular program, or will you try to find other programs that research has consistently demonstrated to be effective?
Ultimately, it is not just politicians who can benefit from using research in guiding their decisions. We all might look to research from time to time when making decisions in our lives. Imagine you just found out that a close friend has breast cancer or that one of your young relatives has recently been diagnosed with autism. In either case, you want to know which treatment options are most successful with the fewest side effects. How would you find that out? You would probably talk with your doctor and personally review the research that has been done on various treatment options—always with a critical eye to ensure that you are as informed as possible.
In the end, research is what makes the difference between facts and opinions. Facts are observable realities, and opinions are personal judgments, conclusions, or attitudes that may or may not be accurate. In the scientific community, facts can be established only using evidence collected through empirical research.
THE PROCESS OF SCIENTIFIC RESEARCH
Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on. In this sense, the scientific process is circular. The types of reasoning within the circle are called deductive and inductive. In deductive reasoning, ideas are tested against the empirical world; in inductive reasoning, empirical observations lead to new ideas ([(Link)]). These processes are inseparable, like inhaling and exhaling, but different research approaches place different emphasis on the deductive and inductive aspects.
In the scientific context, deductive reasoning begins with a generalization—one hypothesis—that is then used to reach logical conclusions about the real world. If the hypothesis is correct, then the logical conclusions reached through deductive reasoning should also be correct. A deductive reasoning argument might go something like this: All living things require energy to survive (this would be your hypothesis). Ducks are living things. Therefore, ducks require energy to survive (logical conclusion). In this example, the hypothesis is correct; therefore, the conclusion is correct as well. Sometimes, however, an incorrect hypothesis may lead to a logical but incorrect conclusion. Consider this argument: all ducks are born with the ability to see. Quackers is a duck. Therefore, Quackers was born with the ability to see. If the initial hypothesis (that all ducks are born able to see) turns out to be false, then the conclusion about Quackers, although logically valid, may be false as well. Scientists use deductive reasoning to empirically test their hypotheses. Returning to the example of the ducks, researchers might design a study to test the hypothesis that if all living things require energy to survive, then ducks will be found to require energy to survive.
Deductive reasoning starts with a generalization that is tested against real-world observations; however, inductive reasoning moves in the opposite direction. Inductive reasoning uses empirical observations to construct broad generalizations. Unlike deductive reasoning, conclusions drawn from inductive reasoning may or may not be correct, regardless of the observations on which they are based. For instance, you may notice that your favorite fruits—apples, bananas, and oranges—all grow on trees; therefore, you assume that all fruit must grow on trees. This would be an example of inductive reasoning, and, clearly, the existence of strawberries, blueberries, and kiwi demonstrate that this generalization is not correct despite it being based on a number of direct observations. Scientists use inductive reasoning to formulate theories, which in turn generate hypotheses that are tested with deductive reasoning. In the end, science involves both deductive and inductive processes.
For example, case studies, which you will read about in the next section, are heavily weighted on the side of empirical observations. Thus, case studies are closely associated with inductive processes as researchers gather massive amounts of observations and seek interesting patterns (new ideas) in the data. Experimental research, on the other hand, puts great emphasis on deductive reasoning.
We’ve stated that theories and hypotheses are ideas, but what sort of ideas are they, exactly? A theory is a well-developed set of ideas that propose an explanation for observed phenomena. Theories are repeatedly checked against the world, but they tend to be too complex to be tested all at once; instead, researchers create hypotheses to test specific aspects of a theory.
A hypothesis is a testable prediction about how the world will behave if our idea is correct, and it is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests [(Link)].
To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later chapter, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.
A scientific hypothesis is also falsifiable, or capable of being shown to be incorrect. Recall from the introductory chapter that Sigmund Freud had lots of interesting ideas to explain various human behaviors ([(Link)]). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and his ideas continue to influence some modern forms of therapy.
In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).
Scientific research’s dependence on falsifiability allows for great confidence in the information that it produces. Typically, by the time information is accepted by the scientific community, it has been tested repeatedly.
Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives.
Self Check Questions
Critical Thinking Questions
1. In this section, the D.A.R.E. program was described as an incredibly popular program in schools across the United States despite the fact that research consistently suggests that this program is largely ineffective. How might one explain this discrepancy?
2. The scientific method is often described as self-correcting and cyclical. Briefly describe your understanding of the scientific method with regard to these concepts.
Personal Application Questions
3. Healthcare professionals cite an enormous number of health problems related to obesity, and many people have an understandable desire to attain a healthy weight. There are many diet programs, services, and products on the market to aid those who wish to lose weight. If a close friend was considering purchasing or participating in one of these products, programs, or services, how would you make sure your friend was fully aware of the potential consequences of this decision? What sort of information would you want to review before making such an investment or lifestyle change yourself?
1. There is probably tremendous political pressure to appear to be hard on drugs. Therefore, even though D.A.R.E. might be ineffective, it is a well-known program with which voters are familiar.
2. This cyclical, self-correcting process is primarily a function of the empirical nature of science. Theories are generated as explanations of real-world phenomena. From theories, specific hypotheses are developed and tested. As a function of this testing, theories will be revisited and modified or refined to generate new hypotheses that are again tested. This cyclical process ultimately allows for more and more precise (and presumably accurate) information to be collected.
Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.
The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Many of us find ourselves with a strong opinion on these issues, only to find the person next to us bristling with the opposite view.
How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.
By the end of this section, you will be able to:
- Understand educational requirements for careers in academic settings
- Understand the demands of a career in an academic setting
- Understand career options outside of academic settings
Psychologists can work in many different places doing many different things. In general, anyone wishing to pursue a career in psychology at a 4-year institution of higher education will have to earn a doctoral degree in psychology for some specialties and at least a master’s degree for others. In most areas of psychology, this means earning a PhD in a relevant area of psychology. Literally, PhD refers to a doctor of philosophy degree, but here, philosophy does not refer to the field of philosophy per se. Rather, philosophy in this context refers to many different disciplinary perspectives that would be housed in a traditional college of liberal arts and sciences.
The requirements to earn a PhD vary from country to country and even from school to school, but usually, individuals earning this degree must complete a dissertation. A dissertation is essentially a long research paper or bundled published articles describing research that was conducted as a part of the candidate’s doctoral training. In the United States, a dissertation generally has to be defended before a committee of expert reviewers before the degree is conferred ([(link)]).
Doctoral degrees are generally conferred in formal ceremonies involving special attire and rites. (credit: Public Affairs Office Fort Wainwright)
Once someone earns her PhD, she may seek a faculty appointment at a college or university. Being on the faculty of a college or university often involves dividing time between teaching, research, and service to the institution and profession. The amount of time spent on each of these primary responsibilities varies dramatically from school to school, and it is not uncommon for faculty to move from place to place in search of the best personal fit among various academic environments. The previous section detailed some of the major areas that are commonly represented in psychology departments around the country; thus, depending on the training received, an individual could be anything from a biological psychologist to a clinical psychologist in an academic setting ([(link)]).
Individuals earning a PhD in psychology have a range of employment options.
By the end of this section, you will be able to:
- Appreciate the diversity of interests and foci within psychology
- Understand basic interests and applications in each of the described areas of psychology
- Demonstrate familiarity with some of the major concepts or important figures in each of the described areas of psychology
Contemporary psychology is a diverse field that is influenced by all of the historical perspectives described in the preceding section. Reflective of the discipline’s diversity is the diversity seen within the American Psychological Association (APA). The APA is a professional organization representing psychologists in the United States. The APA is the largest organization of psychologists in the world, and its mission is to advance and disseminate psychological knowledge for the betterment of people. There are 56 divisions within the APA, representing a wide variety of specialties that range from Societies for the Psychology of Religion and Spirituality to Exercise and Sport Psychology to Behavioral Neuroscience and Comparative Psychology. Reflecting the diversity of the field of psychology itself, members, affiliate members, and associate members span the spectrum from students to doctoral-level psychologists, and come from a variety of places including educational settings, criminal justice, hospitals, the armed forces, and industry (American Psychological Association, 2014).

The Association for Psychological Science (APS) was founded in 1988 and seeks to advance the scientific orientation of psychology. Its founding resulted from disagreements between members of the scientific and clinical branches of psychology within the APA. The APS publishes five research journals and engages in education and advocacy with funding agencies. A significant proportion of its members are international, although the majority is located in the United States.

Other organizations provide networking and collaboration opportunities for professionals of several ethnic or racial groups working in psychology, such as the National Latina/o Psychological Association (NLPA), the Asian American Psychological Association (AAPA), the Association of Black Psychologists (ABPsi), and the Society of Indian Psychologists (SIP). Most of these groups are also dedicated to studying psychological and social issues within their specific communities.
This section will provide an overview of the major subdivisions within psychology today in the order in which they are introduced throughout the remainder of this textbook. This is not meant to be an exhaustive listing, but it will provide insight into the major areas of research and practice of modern-day psychologists.
BIOPSYCHOLOGY AND EVOLUTIONARY PSYCHOLOGY
As the name suggests, biopsychology explores how our biology influences our behavior. While biological psychology is a broad field, many biological psychologists want to understand how the structure and function of the nervous system are related to behavior ([(Link)]). As such, they often combine the research strategies of both psychologists and physiologists to accomplish this goal (as discussed in Carlson, 2013).
The research interests of biological psychologists span a number of domains, including but not limited to, sensory and motor systems, sleep, drug use and abuse, ingestive behavior, reproductive behavior, neurodevelopment, plasticity of the nervous system, and biological correlates of psychological disorders. Given the broad areas of interest falling under the purview of biological psychology, it will probably come as no surprise that individuals from all sorts of backgrounds are involved in this research, including biologists, medical professionals, physiologists, and chemists. This interdisciplinary approach is often referred to as neuroscience, of which biological psychology is a component (Carlson, 2013).
While biopsychology typically focuses on the immediate causes of behavior based in the physiology of a human or other animal, evolutionary psychology seeks to study the ultimate biological causes of behavior. To the extent that a behavior is influenced by genetics, it, like any anatomical characteristic of a human or animal, will demonstrate adaptation to its surroundings. These surroundings include the physical environment and, since interactions between organisms can be important to survival and reproduction, the social environment. The study of behavior in the context of evolution has its origins with Charles Darwin, the co-discoverer of the theory of evolution by natural selection. Darwin was well aware that behaviors should be adaptive and wrote the books The Descent of Man (1871) and The Expression of the Emotions in Man and Animals (1872) to explore this field.
Evolutionary psychology, and specifically, the evolutionary psychology of humans, has enjoyed a resurgence in recent decades. To be subject to evolution by natural selection, a behavior must have a significant genetic cause. In general, we expect all human cultures to express a behavior if it is caused genetically, since the genetic differences among human groups are small. The approach taken by most evolutionary psychologists is to predict the outcome of a behavior in a particular situation based on evolutionary theory and then to make observations, or conduct experiments, to determine whether the results match the theory. It is important to recognize that these types of studies are not strong evidence that a behavior is adaptive, since they lack information that the behavior is in some part genetic and not entirely cultural (Endler, 1986). Demonstrating that a trait, especially in humans, is naturally selected is extraordinarily difficult; perhaps for this reason, some evolutionary psychologists are content to assume the behaviors they study have genetic determinants (Confer et al., 2010).
One other drawback of evolutionary psychology is that the traits that we possess now evolved under environmental and social conditions far back in human history, and we have a poor understanding of what these conditions were. This makes predictions about what is adaptive for a behavior difficult. Behavioral traits need not be adaptive under current conditions, only under the conditions of the past when they evolved, about which we can only hypothesize.
There are many areas of human behavior for which evolution can make predictions. Examples include memory, mate choice, relationships between kin, friendship and cooperation, parenting, social organization, and status (Confer et al., 2010).
Evolutionary psychologists have had success in finding experimental correspondence between observations and expectations. In one example, a study of mate preference differences between men and women spanning 37 cultures, Buss (1989) found that women valued potential earning capacity in prospective mates more than men did, while men valued potential reproductive factors (youth and attractiveness) more than women did. In general, the observations were in line with evolutionary predictions, although there were deviations in some cultures.
SENSATION AND PERCEPTION
Scientists interested in both physiological aspects of sensory systems as well as in the psychological experience of sensory information work within the area of sensation and perception ([(Link)]). As such, sensation and perception research is also quite interdisciplinary. Imagine walking between buildings as you move from one class to another. You are inundated with sights, sounds, touch sensations, and smells. You also experience the temperature of the air around you and maintain your balance as you make your way. These are all factors of interest to someone working in the domain of sensation and perception.
As described in a later chapter that focuses on the results of studies in sensation and perception, our experience of our world is not as simple as the sum total of all of the sensory information (or sensations) together. Rather, our experience (or perception) is complex and is influenced by where we focus our attention, our previous experiences, and even our cultural backgrounds.
COGNITIVE PSYCHOLOGY

As mentioned in the previous section, the cognitive revolution created an impetus for psychologists to focus their attention on better understanding the mind and mental processes that underlie behavior. Thus, cognitive psychology is the area of psychology that focuses on studying cognitions, or thoughts, and their relationship to our experiences and our actions. Like biological psychology, cognitive psychology is broad in its scope and often involves collaborations among people from a diverse range of disciplinary backgrounds. This has led some to coin the term cognitive science to describe the interdisciplinary nature of this area of research (Miller, 2003).
Cognitive psychologists have research interests that span a spectrum of topics, ranging from attention to problem solving to language to memory. The approaches used in studying these topics are equally diverse. Given such diversity, cognitive psychology is not captured in one chapter of this text per se; rather, various concepts related to cognitive psychology will be covered in relevant portions of the chapters in this text on sensation and perception, thinking and intelligence, memory, lifespan development, social psychology, and therapy.
DEVELOPMENTAL PSYCHOLOGY

Developmental psychology is the scientific study of development across a lifespan. Developmental psychologists are interested in processes related to physical maturation. However, their focus is not limited to the physical changes associated with aging, as they also focus on changes in cognitive skills, moral reasoning, social behavior, and other psychological attributes.
Early developmental psychologists focused primarily on changes that occurred through reaching adulthood, providing enormous insight into the differences in physical, cognitive, and social capacities that exist between very young children and adults. For instance, research by Jean Piaget ([(Link)]) demonstrated that very young children do not demonstrate object permanence. Object permanence refers to the understanding that physical things continue to exist, even if they are hidden from us. If you were to show an adult a toy, and then hide it behind a curtain, the adult knows that the toy still exists. However, very young infants act as if a hidden object no longer exists. The age at which object permanence is achieved is somewhat controversial (Munakata, McClelland, Johnson, and Siegler, 1997).
While Piaget was focused on cognitive changes during infancy and childhood as we move to adulthood, there is an increasing interest in extending research into the changes that occur much later in life. This may be reflective of changing population demographics of developed nations as a whole. As more and more people live longer lives, the number of people of advanced age will continue to increase. Indeed, it is estimated that there were just over 40 million people aged 65 or older living in the United States in 2010. However, by 2020, this number is expected to increase to about 55 million. By the year 2050, it is estimated that nearly 90 million people in this country will be 65 or older (Department of Health and Human Services, n.d.).
PERSONALITY PSYCHOLOGY

Personality psychology focuses on patterns of thoughts and behaviors that make each individual unique. Several individuals (e.g., Freud and Maslow) that we have already discussed in our historical overview of psychology, and the American psychologist Gordon Allport, contributed to early theories of personality. These early theorists attempted to explain how an individual’s personality develops from his or her given perspective. For example, Freud proposed that personality arose as conflicts between the conscious and unconscious parts of the mind were carried out over the lifespan. Specifically, Freud theorized that an individual went through various psychosexual stages of development. According to Freud, adult personality would result from the resolution of various conflicts that centered on the migration of erogenous (or sexual pleasure-producing) zones from the oral (mouth) to the anus to the phallus to the genitals. Like many of Freud’s theories, this particular idea was controversial and did not lend itself to experimental tests (Person, 1980).
More recently, the study of personality has taken on a more quantitative approach. Rather than explaining how personality arises, research is focused on identifying personality traits, measuring these traits, and determining how these traits interact in a particular context to determine how a person will behave in any given situation. Personality traits are relatively consistent patterns of thought and behavior, and many have proposed that five trait dimensions are sufficient to capture the variations in personality seen across individuals. These five dimensions are known as the “Big Five” or the Five Factor model, and include dimensions of conscientiousness, agreeableness, neuroticism, openness, and extraversion ([(Link)]). Each of these traits has been demonstrated to be relatively stable over the lifespan (e.g., Rantanen, Metsäpelto, Feldt, Pulkkinen, and Kokko, 2007; Soldz & Vaillant, 1999; McCrae & Costa, 2008) and is influenced by genetics (e.g., Jang, Livesley, and Vernon, 1996).
SOCIAL PSYCHOLOGY

Social psychology focuses on how we interact with and relate to others. Social psychologists conduct research on a wide variety of topics, including differences in how we explain our own behavior versus how we explain the behaviors of others, prejudice, attraction, and how we resolve interpersonal conflicts. Social psychologists have also sought to determine how being among other people changes our own behavior and patterns of thinking.
There are many interesting examples of social psychological research, and you will read about many of these in a later chapter of this textbook. Until then, you will be introduced to one of the most controversial psychological studies ever conducted. Stanley Milgram was an American social psychologist who is most famous for research that he conducted on obedience. In 1961, Adolf Eichmann, a Nazi war criminal accused of committing mass atrocities during the Holocaust, was put on trial. Many people wondered how German soldiers were capable of torturing prisoners in concentration camps, and they were unsatisfied with the excuses given by soldiers that they were simply following orders. At the time, most psychologists agreed that few people would be willing to inflict such extraordinary pain and suffering simply because they were obeying orders. Milgram decided to conduct research to determine whether or not this was true ([(Link)]). As you will read later in the text, Milgram found that nearly two-thirds of his participants were willing to deliver what they believed to be lethal shocks to another person, simply because they were instructed to do so by an authority figure (in this case, a man dressed in a lab coat). This was in spite of the fact that participants received payment simply for showing up for the research study and could have chosen to withdraw rather than inflict pain, or more serious consequences, on another person. No one was actually hurt or harmed in any way; Milgram’s experiment was a clever ruse that relied on research confederates, people who pretend to be participants in a research study but are actually working for the researcher and have clear, specific directions on how to behave during the study (Hock, 2009).
Milgram’s and others’ studies that involved deception and potential emotional harm to participants catalyzed the development of ethical guidelines for conducting psychological research. These guidelines discourage the use of deception of research subjects unless it can be argued that no harm will result, and they generally require the informed consent of participants.
INDUSTRIAL-ORGANIZATIONAL PSYCHOLOGY

Industrial-Organizational psychology (I-O psychology) is a subfield of psychology that applies psychological theories, principles, and research findings in industrial and organizational settings. I-O psychologists are often involved in issues related to personnel management, organizational structure, and workplace environment. Businesses often seek the aid of I-O psychologists to make the best hiring decisions as well as to create an environment that results in high levels of employee productivity and efficiency. In addition to its applied nature, I-O psychology also involves conducting scientific research on behavior within I-O settings (Riggio, 2013).
HEALTH PSYCHOLOGY

Health psychology focuses on how health is affected by the interaction of biological, psychological, and sociocultural factors. This particular approach is known as the biopsychosocial model ([(Link)]). Health psychologists are interested in helping individuals achieve better health through public policy, education, intervention, and research. Health psychologists might conduct research that explores the relationship between one’s genetic makeup, patterns of behavior, relationships, psychological stress, and health. They may research effective ways to motivate people to address patterns of behavior that contribute to poorer health (MacDonald, 2013).
SPORT AND EXERCISE PSYCHOLOGY
Researchers in sport and exercise psychology study the psychological aspects of sport performance, including motivation and performance anxiety, and the effects of sport on mental and emotional wellbeing. Research is also conducted on similar topics as they relate to physical exercise in general. The discipline also includes topics that are broader than sport and exercise but that are related to interactions between mental and physical performance under demanding conditions, such as fire fighting, military operations, artistic performance, and surgery.
CLINICAL PSYCHOLOGY

Clinical psychology is the area of psychology that focuses on the diagnosis and treatment of psychological disorders and other problematic patterns of behavior. As such, it is generally considered to be a more applied area within psychology; however, some clinicians are also actively engaged in scientific research. Counseling psychology is a similar discipline that focuses on emotional, social, vocational, and health-related outcomes in individuals who are considered psychologically healthy.
As mentioned earlier, both Freud and Rogers provided perspectives that have been influential in shaping how clinicians interact with people seeking psychotherapy. While aspects of psychoanalytic theory are still found among some of today’s therapists who are trained from a psychodynamic perspective, Rogers’s ideas about client-centered therapy have been especially influential in shaping how many clinicians operate. Furthermore, both behaviorism and the cognitive revolution have shaped clinical practice in the forms of behavioral therapy, cognitive therapy, and cognitive-behavioral therapy ([(Link)]). Issues related to the diagnosis and treatment of psychological disorders and problematic patterns of behavior will be discussed in detail in later chapters of this textbook.
By far, this is the area of psychology that receives the most attention in popular media, and many people mistakenly assume that all psychology is clinical psychology.
FORENSIC PSYCHOLOGY

Forensic psychology is a branch of psychology that deals with questions of psychology as they arise in the context of the justice system. For example, forensic psychologists (and forensic psychiatrists) will assess a person’s competency to stand trial, assess the state of mind of a defendant, act as consultants on child custody cases, consult on sentencing and treatment recommendations, and advise on issues such as eyewitness testimony and children’s testimony (American Board of Forensic Psychology, 2014). In these capacities, they will typically act as expert witnesses, called by either side in a court case to provide their research- or experience-based opinions. As expert witnesses, forensic psychologists must have a good understanding of the law and provide information in the context of the legal system rather than just within the realm of psychology. Forensic psychologists are also used in the jury selection process and witness preparation. They may also be involved in providing psychological treatment within the criminal justice system. Criminal profilers are a relatively small proportion of psychologists who act as consultants to law enforcement.
Psychology is a diverse discipline that is made up of several major subdivisions with unique perspectives. Biological psychology involves the study of the biological bases of behavior. Sensation and perception refer to the area of psychology that is focused on how information from our sensory modalities is received, and how this information is transformed into our perceptual experiences of the world around us. Cognitive psychology is concerned with the relationship that exists between thought and behavior, and developmental psychologists study the physical and cognitive changes that occur throughout one’s lifespan. Personality psychology focuses on individuals’ unique patterns of behavior, thought, and emotion. Industrial and organizational psychology, health psychology, sport and exercise psychology, forensic psychology, and clinical psychology are all considered applied areas of psychology. Industrial and organizational psychologists apply psychological concepts to I-O settings. Health psychologists look for ways to help people live healthier lives, and clinical psychology involves the diagnosis and treatment of psychological disorders and other problematic behavioral patterns. Sport and exercise psychologists study the interactions between thoughts, emotions, and physical performance in sports, exercise, and other activities. Forensic psychologists carry out activities related to psychology in association with the justice system.
Self Check Questions
Critical Thinking Questions
1. Given the incredible diversity among the various areas of psychology that were described in this section, how do they all fit together?
2. What are the potential ethical concerns associated with Milgram’s research on obedience?
Personal Application Question
3. Now that you’ve been briefly introduced to some of the major areas within psychology, which are you most interested in learning more about? Why?
1. Although the different perspectives all operate on different levels of analyses, have different foci of interests, and different methodological approaches, all of these areas share a focus on understanding and/or correcting patterns of thought and/or behavior.
2. Many people have questioned how ethical this particular research was. Although no one was actually harmed in Milgram’s study, many people have questioned how the knowledge that you would be willing to inflict incredible pain or even death on another person, simply because someone in authority told you to do so, would affect someone’s self-concept and psychological health. Furthermore, the degree to which deception was used in this particular study raises a few eyebrows.
By the end of this section, you will be able to:
- Understand the importance of Wundt and James in the development of psychology
- Appreciate Freud’s influence on psychology
- Understand the basic tenets of Gestalt psychology
- Appreciate the important role that behaviorism played in psychology’s history
- Understand basic tenets of humanism
- Understand how the cognitive revolution shifted psychology’s focus back to the mind
Psychology is a relatively young science with its experimental roots in the 19th century, compared, for example, to human physiology, which dates much earlier. As mentioned, anyone interested in exploring issues related to the mind generally did so in a philosophical context prior to the 19th century. Two men, working in the 19th century, are generally credited as being the founders of psychology as a science and academic discipline that was distinct from philosophy. Their names were Wilhelm Wundt and William James. This section will provide an overview of the shifts in paradigms that have influenced psychology from Wundt and James through today.
By the end of this section, you will be able to:
- Understand the etymology of the word “psychology”
- Define psychology
- Understand the merits of an education in psychology
In Greek mythology, Psyche was a mortal woman whose beauty was so great that it rivaled that of the goddess Aphrodite. Aphrodite became so jealous of Psyche that she sent her son, Eros, to make Psyche fall in love with the ugliest man in the world. However, Eros accidentally pricked himself with the tip of his arrow and fell madly in love with Psyche himself. He took Psyche to his palace and showered her with gifts, yet she could never see his face. While visiting Psyche, her sisters roused suspicion in Psyche about her mysterious lover, and eventually, Psyche betrayed Eros’ wishes to remain unseen to her. Because of this betrayal, Eros abandoned Psyche. When Psyche appealed to Aphrodite to reunite her with Eros, Aphrodite gave her a series of impossible tasks to complete. Psyche managed to complete all of these trials; ultimately, her perseverance paid off as she was reunited with Eros and was ultimately transformed into a goddess herself (Ashliman, 2001; Greek Myths & Greek Mythology, 2014).
Psyche comes to represent the human soul’s triumph over the misfortunes of life in the pursuit of true happiness (Bulfinch, 1855); in fact, the Greek word psyche means soul, and it is often represented as a butterfly. The word psychology was coined at a time when the concepts of soul and mind were not as clearly distinguished (Green, 2001). The suffix -ology denotes scientific study of, and psychology refers to the scientific study of the mind. Since science studies only observable phenomena and the mind is not directly observable, we expand this definition to the scientific study of mind and behavior.
The scientific study of any aspect of the world uses the scientific method to acquire knowledge. To apply the scientific method, a researcher with a question about how or why something happens will propose a tentative explanation, called a hypothesis, to explain the phenomenon. A hypothesis is not just any explanation; it should fit into the context of a scientific theory. A scientific theory is a broad explanation or group of explanations for some aspect of the natural world that is consistently supported by evidence over time. A theory is the best understanding that we have of that part of the natural world. Armed with the hypothesis, the researcher then makes observations or, better still, carries out an experiment to test the validity of the hypothesis. That test and its results are then published so that others can check the results or build on them. It is necessary that any explanation in science be testable, which means that the phenomenon must be perceivable and measurable. For example, that a bird sings because it is happy is not a testable hypothesis, since we have no way to measure the happiness of a bird. We must ask a different question, perhaps about the brain state of the bird, since this can be measured. In general, science deals only with matter and energy, that is, those things that can be measured, and it cannot arrive at knowledge about values and morality. This is one reason why our scientific understanding of the mind is so limited, since thoughts, at least as we experience them, are neither matter nor energy. The scientific method is also a form of empiricism. An empirical method for acquiring knowledge is one based on observation, including experimentation, rather than a method based only on forms of logical argument or previous authorities.
It was not until the late 1800s that psychology became accepted as its own academic discipline. Before this time, the workings of the mind were considered under the auspices of philosophy. Given that any behavior is, at its roots, biological, some areas of psychology take on aspects of a natural science like biology. No biological organism exists in isolation, and our behavior is influenced by our interactions with others. Therefore, psychology is also a social science.
Clive Wearing is an accomplished musician who lost his ability to form new memories when he became sick at the age of 46. While he can remember how to play the piano perfectly, he cannot remember what he ate for breakfast just an hour ago (Sacks, 2007). James Wannerton experiences a taste sensation that is associated with the sound of words. His former girlfriend’s name tastes like rhubarb (Mundasad, 2013). John Nash is a brilliant mathematician and Nobel Prize winner. However, while he was a professor at MIT, he would tell people that the New York Times contained coded messages from extraterrestrial beings that were intended for him. He also began to hear voices and became suspicious of the people around him. Soon thereafter, Nash was diagnosed with schizophrenia and admitted to a state-run mental institution (O’Connor & Robertson, 2002). Nash was the subject of the 2001 movie A Beautiful Mind. Why did these people have these experiences? How does the human brain work? And what is the connection between the brain’s internal processes and people’s external behaviors? This textbook will introduce you to various ways that the field of psychology has explored these questions.
This chapter will introduce you to what psychology is and what psychologists do. A brief history of the discipline will be followed by a consideration of major subdivisions that exist within modern psychology. The chapter will close by exploring many of the career options available for students of psychology.
American Board of Forensic Psychology. (2014). Brochure. Retrieved from http://www.abfp.com/brochure.asp
American Psychological Association. (2014). Retrieved from www.apa.org
American Psychological Association. (2014). Graduate training and career possibilities in exercise and sport psychology. Retrieved from http://www.apadivisions.org/division-47/about/resources/training.aspx?item=1
American Psychological Association. (2011). Psychology as a career. Retrieved from http://www.apa.org/education/undergrad/psych-career.aspx
Ashliman, D. L. (2001). Cupid and Psyche. In Folktexts: A library of folktales, folklore, fairy tales, and mythology. Retrieved from http://www.pitt.edu/~dash/cupid.html
Betancourt, H., & López, S. R. (1993). The study of culture, ethnicity, and race in American psychology. American Psychologist, 48, 629–637.
Black, S. R., Spence, S. A., & Omari, S. R. (2004). Contributions of African Americans to the field of psychology. Journal of Black Studies, 35, 40–64.
Bulfinch, T. (1855). The age of fable: Or, stories of gods and heroes. Boston, MA: Chase, Nichols and Hill.
Buss, D. M. (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences, 12, 1–49.
Carlson, N. R. (2013). Physiology of Behavior (11th ed.). Boston, MA: Pearson.
Confer, J. C., Easton, J. A., Fleischman, D. S., Goetz, C. D., Lewis, D. M. G., Perilloux, C., & Buss, D. M. (2010). Evolutionary psychology: Controversies, questions, prospects, and limitations. American Psychologist, 65, 100–126.
Crawford, M., & Marecek, J. (1989). Psychology reconstructs the female: 1968–1988. Psychology of Women Quarterly, 13, 147–165.
Danziger, K. (1980). The history of introspection reconsidered. Journal of the History of the Behavioral Sciences, 16, 241–262.
Darwin, C. (1871). The descent of man and selection in relation to sex. London: John Murray.
Darwin, C. (1872). The expression of the emotions in man and animals. London: John Murray.
DeAngelis, T. (2010). Fear not. gradPSYCH Magazine, 8, 38.
Department of Health and Human Services. (n.d.). Projected future growth of the older population. Retrieved from http://www.aoa.gov/Aging_Statistics/future_growth/future_growth.aspx#age
Endler, J. A. (1986). Natural Selection in the Wild. Princeton, NJ: Princeton University Press.
Fogg, N. P., Harrington, P. E., Harrington, T. F., & Shatkin, L. (2012). College majors handbook with real career paths and payoffs (3rd ed.). St. Paul, MN: JIST Publishing.
Franko, D. L., et al. (2012). Racial/ethnic differences in adults in randomized clinical trials of binge eating disorder. Journal of Consulting and Clinical Psychology, 80, 186–195.
Friedman, H. (2008). Humanistic and positive psychology: The methodological and epistemological divide. The Humanistic Psychologist, 36, 113–126.
Gordon, O. E. (1995). A brief history of psychology. Retrieved from http://www.psych.utah.edu/gordon/Classes/Psy4905Docs/PsychHistory/index.html#maptop
Greek Myths & Greek Mythology. (2014). The myth of Psyche and Eros. Retrieved from http://www.greekmyths-greekmythology.com/psyche-and-eros-myth/
Green, C. D. (2001). Classics in the history of psychology. Retrieved from http://psychclassics.yorku.ca/Krstic/marulic.htm
Greengrass, M. (2004). 100 years of B.F. Skinner. Monitor on Psychology, 35, 80.
Halonen, J. S. (2011). White paper: Are there too many psychology majors? Prepared for the Staff of the State University System of Florida Board of Governors. Retrieved from http://www.cogdop.org/page_attachments/0000/0200/FLA_White_Paper_for_cogop_posting.pdf
Hock, R. R. (2009). Social psychology. Forty studies that changed psychology: Explorations into the history of psychological research (pp. 308–317). Upper Saddle River, NJ: Pearson.
Hoffman, C. (2012). Careers in clinical, counseling, or school psychology; mental health counseling; clinical social work; marriage & family therapy and related professions. Retrieved from http://www.indiana.edu/~psyugrad/advising/docs/Careers%20in%20Mental%20Health%20Counseling.pdf
Jang, K. L., Livesley, W. J., & Vernon, P. A. (1996). Heritability of the Big Five personality dimensions and their facets: A twin study. Journal of Personality, 64, 577–591.
Johnson, R., & Lubin, G. (2011). College exposed: What majors are most popular, highest paying and most likely to get you a job. Business Insider.com. Retrieved from http://www.businessinsider.com/best-college-majors-highest-income-most-employed-georgetwon-study-2011-6?op=1
Knekt, P. P., et al. (2008). Randomized trial on the effectiveness of long- and short-term psychodynamic psychotherapy and solution-focused therapy on psychiatric symptoms during a 3-year follow-up. Psychological Medicine: A Journal of Research In Psychiatry And The Allied Sciences, 38, 689–703.
Landers, R. N. (2011, June 14). Grad school: Should I get a PhD or Master’s in I/O psychology? [Web log post]. Retrieved from http://neoacademic.com/2011/06/14/grad-school-should-i-get-a-ph-d-or-masters-in-io-psychology/#.UuKKLftOnGg
Macdonald, C. (2013). Health psychology center presents: What is health psychology? Retrieved from http://healthpsychology.org/what-is-health-psychology/
McCrae, R. R. & Costa, P. T. (2008). Empirical and theoretical status of the five-factor model of personality traits. In G. J. Boyle, G. Matthews, & D. H. Saklofske (Eds.), The Sage handbook of personality theory and assessment. Vol. 1 Personality theories and models. London: Sage.
Michalski, D., Kohout, J., Wicherski, M., & Hart, B. (2011). 2009 Doctorate Employment Survey. APA Center for Workforce Studies. Retrieved from http://www.apa.org/workforce/publications/09-doc-empl/index.aspx
Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive Sciences, 7, 141–144.
Munakata, Y., McClelland, J. L., Johnson, M. H., & Siegler, R. S. (1997). Rethinking infant knowledge: Toward an adaptive process account of successes and failures in object permanence tasks. Psychological Review, 104, 689–713.
Mundasad, S. (2013). Word-taste synaesthesia: Tasting names, places, and Anne Boleyn. Retrieved from http://www.bbc.co.uk/news/health-21060207
Munsey, C. (2009). More states forgo a postdoc requirement. Monitor on Psychology, 40, 10.
National Association of School Psychologists. (n.d.). Becoming a nationally certified school psychologist (NCSP). Retrieved from http://www.nasponline.org/CERTIFICATION/becomeNCSP.aspx
Nicolas, S., & Ferrand, L. (1999). Wundt’s laboratory at Leipzig in 1891. History of Psychology, 2, 194–203.
Norcross, J. C. (n.d.). Clinical versus counseling psychology: What’s the diff? Retrieved from http://www.csun.edu/~hcpsy002/Clinical%20Versus%20Counseling%20Psychology.pdf
Norcross, J. C., & Castle, P. H. (2002). Appreciating the PsyD: The facts. Eye on Psi Chi, 7, 22–26.
O’Connor, J. J., & Robertson, E. F. (2002). John Forbes Nash. Retrieved from http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Nash.html
O’Hara, M. (n.d.). Historic review of humanistic psychology. Retrieved from http://www.ahpweb.org/index.php?option=com_k2&view=item&layout=item&id=14&Itemid=24
Person, E. S. (1980). Sexuality as the mainstay of identity: Psychoanalytic perspectives. Signs, 5, 605–630.
Rantanen, J., Metsäpelto, R. L., Feldt, T., Pulkkinen, L., & Kokko, K. (2007). Long-term stability in the Big Five personality traits in adulthood. Scandinavian Journal of Psychology, 48, 511–518.
Riggio, R. E. (2013). What is industrial/organizational psychology? Psychology Today. Retrieved from http://www.psychologytoday.com/blog/cutting-edge-leadership/201303/what-is-industrialorganizational-psychology
Sacks, O. (2007). A neurologist’s notebook: The abyss, music and amnesia. The New Yorker. Retrieved from http://www.newyorker.com/reporting/2007/09/24/070924fa_fact_sacks?currentPage=all
Shedler, J. (2010). The efficacy of psychodynamic psychotherapy. American Psychologist, 65(2), 98–109.
Soldz, S., & Vaillant, G. E. (1999). The Big Five personality traits and the life course: A 45-year longitudinal study. Journal of Research in Personality, 33, 208–232.
Thorne, B. M., & Henley, T. B. (2005). Connections in the history and systems of psychology (3rd ed.). Boston, MA: Houghton Mifflin Company.
Tolman, E. C. (1938). The determiners of behavior at a choice point. Psychological Review, 45, 1–41.
U.S. Department of Education, National Center for Education Statistics. (2013). Digest of Education Statistics, 2012 (NCES 2014-015).
Weisstein, N. (1993). Psychology constructs the female: Or, the fantasy life of the male psychologist (with some attention to the fantasies of his friends, the male biologist and the male anthropologist). Feminism and Psychology, 3, 195–210.
Westen, D. (1998). The scientific legacy of Sigmund Freud: Toward a psychodynamically informed psychological science. Psychological Bulletin, 124, 333–371.
By the end of this section, you will be able to:
- Explain how intelligence tests are developed
- Describe the history of the use of IQ tests
- Describe the purposes and benefits of intelligence testing
While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence (or more aptly, intelligences). Similarly, IQ tests—the tools designed to measure intelligence—have been the subject of debate throughout their development and use.
When might an IQ test be used? What do we learn from the results, and how might people use this information? IQ tests are expensive to administer and must be given by a licensed psychologist. Intelligence testing has been considered both a bane and a boon for education and social policy. In this section, we will explore what intelligence tests measure, how they are scored, and how they were developed.
It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or academic-type intelligence. How, then, can intelligence be measured? And when we measure intelligence, how do we ensure that we capture what we’re really trying to measure (in other words, that IQ tests function as valid measures of intelligence)? In the following paragraphs, we will explore how intelligence tests were developed and the history of their use.
The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis Galton developed the first broad test of intelligence (Flanagan & Kaufman, 2004). Although he was not a psychologist, his contributions to the concepts of intelligence testing are still felt today (Gordon, 1995). Reliable intelligence testing (you may recall from earlier chapters that reliability refers to a test’s ability to produce consistent results) began in earnest during the early 1900s with a researcher named Alfred Binet ([(Link)]). Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school; it included many verbally based tasks. American researchers soon realized the value of such testing. Lewis Terman, a Stanford professor, modified Binet’s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age. As a result, the test was normed and standardized, which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve (bell curves will be discussed later). Standardization means that the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving a test to a large population so data can be collected comparing groups, such as age groups. The resulting data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations of what a given group should know but a demonstration of what that group does know. Norming and standardizing the test ensures that new scores are reliable. This new version of the test was called the Stanford-Binet Intelligence Scale (Terman, 1916). Remarkably, an updated version of this test is still widely used today.
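The norming-and-standardization procedure described above can be sketched in a few lines of code: administer the test to a large age-stratified sample, record the distribution of raw scores for each age group, and then interpret a new test-taker’s raw score relative to the norms for his or her own age. The scores and function below are entirely hypothetical, a minimal illustration of the idea rather than any actual test’s scoring procedure:

```python
from statistics import mean, stdev

# Hypothetical raw scores from a norming sample, grouped by age.
# A real norming study would use thousands of test-takers per group.
norming_sample = {
    8:  [21, 25, 19, 24, 22, 26, 20, 23, 24, 22],
    9:  [26, 29, 24, 31, 27, 28, 30, 25, 29, 27],
    10: [33, 30, 36, 32, 34, 31, 35, 33, 30, 34],
}

# Norms: the mean and standard deviation of raw scores per age group.
norms = {age: (mean(scores), stdev(scores))
         for age, scores in norming_sample.items()}

def standardized_score(raw, age, target_mean=100, target_sd=15):
    """Re-express a raw score on the familiar IQ scale (mean 100,
    SD 15), relative to the norms for the test-taker's age group."""
    group_mean, group_sd = norms[age]
    z = (raw - group_mean) / group_sd
    return target_mean + target_sd * z
```

Note that a 9-year-old whose raw score equals the 9-year-olds’ average lands at exactly 100, regardless of how 8- or 10-year-olds perform; this is what it means for norms to describe what a given group does know rather than what it should know.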
In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests became one of the most extensively used intelligence tests in the history of psychology. Although its name was later changed to the Wechsler Adult Intelligence Scale (WAIS) and has been revised several times, the aims of the test remain virtually unchanged since its inception (Boake, 2002). Today, there are three intelligence tests credited to Wechsler, the Wechsler Adult Intelligence Scale-fourth edition (WAIS-IV), the Wechsler Intelligence Scale for Children (WISC-V), and the Wechsler Preschool and Primary Scale of Intelligence—Revised (WPPSI-III) (Wechsler, 2002). These tests are used widely in schools and communities throughout the United States, and they are periodically normed and standardized as a means of recalibration. Interestingly, the periodic recalibrations have led to an interesting observation known as the Flynn effect. Named after James Flynn, who was among the first to describe this trend, the Flynn effect refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself argues, however, that increased IQ scores do not necessarily mean that younger generations are more intelligent per se (Flynn, Shaughnessy, & Fulgham, 2012). 
As a part of the recalibration process, the WISC-V (which is scheduled to be released in 2014) was given to thousands of children across the country, and children taking the test today are compared with their same-age peers ([(Link)]).
The WISC-V is composed of 10 subtests, which are grouped into four indices that together yield an IQ score. The four indices are Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed. When the test is complete, individuals receive a score for each of the four indices and a Full Scale IQ score (Heaton, 2004). The method of scoring reflects the understanding that intelligence comprises multiple abilities in several cognitive realms and focuses on the mental processes that the child used to arrive at his or her answers to each test item (Heaton, 2004).
Ultimately, we are still left with the question of how valid intelligence tests are. Certainly, the most modern versions of these tests tap into more than verbal competencies, yet the specific skills that should be assessed in IQ testing, the degree to which any test can truly measure an individual’s intelligence, and the use of the results of IQ tests are still issues of debate (Gresham & Witt, 1997; Flynn, Shaughnessy, & Fulgham, 2012; Richardson, 2002; Schlinger, 2003).
The case of Atkins v. Virginia was a landmark case in the United States Supreme Court. On August 16, 1996, two men, Daryl Atkins and William Jones, robbed, kidnapped, and then shot and killed Eric Nesbitt, a local airman from the U.S. Air Force. A clinical psychologist evaluated Atkins and testified at the trial that Atkins had an IQ of 59. The mean IQ score is 100. The psychologist concluded that Atkins was mildly mentally retarded.
The jury found Atkins guilty, and he was sentenced to death. Atkins and his attorneys appealed to the Supreme Court. In June 2002, the Supreme Court reversed a previous decision and ruled that executions of mentally retarded criminals are ‘cruel and unusual punishments’ prohibited by the Eighth Amendment. The court wrote in their decision:
Clinical definitions of mental retardation require not only subaverage intellectual functioning, but also significant limitations in adaptive skills. Mentally retarded persons frequently know the difference between right and wrong and are competent to stand trial. Because of their impairments, however, by definition they have diminished capacities to understand and process information, to communicate, to abstract from mistakes and learn from experience, to engage in logical reasoning, to control impulses, and to understand others’ reactions. Their deficiencies do not warrant an exemption from criminal sanctions, but diminish their personal culpability (Atkins v. Virginia, 2002, par. 5).
The court also decided that there was a consensus among state legislatures against the execution of the mentally retarded and that this consensus should stand for all of the states. The Supreme Court ruling left it up to the states to determine their own definitions of mental retardation and intellectual disability. The definitions vary among states as to who can be executed. In the Atkins case, a jury decided that because he had many contacts with his lawyers and thus was provided with intellectual stimulation, his IQ had reportedly increased, and he was now smart enough to be executed. He was given an execution date and then received a stay of execution after it was revealed that lawyers for co-defendant William Jones coached Jones to “produce a testimony against Mr. Atkins that did match the evidence” (Liptak, 2008). After the revelation of this misconduct, Atkins was re-sentenced to life imprisonment.
Atkins v. Virginia (2002) highlights several issues regarding society’s beliefs around intelligence. In the Atkins case, the Supreme Court decided that intellectual disability does affect decision making and therefore should affect the nature of the punishment such criminals receive. Where, however, should the lines of intellectual disability be drawn? In May 2014, the Supreme Court ruled in a related case (Hall v. Florida) that IQ scores cannot be used as a final determination of a prisoner’s eligibility for the death penalty (Roberts, 2014).
THE BELL CURVE
The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case, intelligence, in the human population. Many human traits naturally follow the bell curve. For example, if you lined up all your female schoolmates according to height, it is likely that a large cluster of them would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of the bell curve, representing the average height for American women ([(Link)]). There would be fewer women who stand closer to 4’11”. The same would be true for women of above-average height: those who stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a large sample size, it is less likely that the bell curve will represent the wider population. A representative sample is a subset of the population that accurately represents the general population. If, for example, you measured the height of the women in your classroom only, you might not actually have a representative sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your class. Because basketball players tend to be taller than average, the women in your class may not be a good representative sample of the population of American women. But if your sample included all the women at your school, it is likely that their heights would form a natural bell curve.
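The role of sample size and representativeness in the height example can be illustrated with a quick simulation. The numbers here are hypothetical (a roughly normal population of heights centered at 65 inches with a standard deviation of 2.5 inches), chosen only to show that a large random sample tracks the population while a small, selective one does not:

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: 100,000 women's heights in inches,
# roughly normal around 65 (5'5") with SD 2.5.
population = [random.gauss(65, 2.5) for _ in range(100_000)]

# A large random sample closely matches the population's
# mean and spread, so its histogram forms a bell curve.
large_sample = random.sample(population, 5_000)
print(round(mean(large_sample), 1), round(stdev(large_sample), 1))

# A small, non-random sample (say, only the tallest members of
# the population, like a basketball team) is badly unrepresentative:
# its average sits well above the population mean.
tallest_15 = sorted(population)[-15:]
print(round(mean(tallest_15), 1))
```

The design point is the one the text makes: the bell curve emerges from the population, so any sample used to estimate it must be both large and drawn without a selection bias.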
The same principles apply to intelligence test scores. Individuals earn a score called an intelligence quotient (IQ). Over the years, different types of IQ tests have evolved, but the way scores are interpreted remains the same. The average IQ score on an IQ test is 100. Standard deviations describe how data are dispersed in a population and give context to large data sets. The bell curve uses the standard deviation to show how all scores are dispersed from the average score ([(Link)]). In modern IQ testing, one standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below the mean.” How would you describe a score of 115 and a score of 70? Any IQ score that falls within one standard deviation above and below the mean (between 85 and 115) is considered average, and approximately 68% of the population has IQ scores in this range. An IQ score of 130 or above is considered a superior level.
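The arithmetic behind these statements can be checked directly. A score’s distance from the mean in standard-deviation units is its z-score, and the share of the population inside a score range follows from the normal distribution’s cumulative distribution function. A minimal sketch using Python’s standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# IQ scores are normed to a mean of 100 with a standard deviation of 15.
iq = NormalDist(mu=100, sigma=15)

def z_score(score, mean=100, sd=15):
    """How many standard deviations a score lies from the mean."""
    return (score - mean) / sd

print(z_score(115))  # 1.0  -> one standard deviation above the mean
print(z_score(70))   # -2.0 -> two standard deviations below the mean

# Share of the population scoring between 85 and 115
# (within one standard deviation of the mean): about 0.68.
within_one_sd = iq.cdf(115) - iq.cdf(85)
print(round(within_one_sd, 3))

# Share of the population scoring below 70
# (two standard deviations below the mean): about 0.023.
below_70 = iq.cdf(70)
print(round(below_70, 3))
```

This matches the figures in the text: roughly 68% of people fall in the average 85–115 band, and only about 2.2% score below 70.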
Only 2.2% of the population has an IQ score below 70 (American Psychological Association [APA], 2013). A score of 70 or below indicates significant cognitive delays, major deficits in adaptive functioning, and difficulty meeting “community standards of personal independence and social responsibility” when compared to same-aged peers (APA, 2013, p. 37). An individual in this IQ range would be considered to have an intellectual disability and exhibit deficits in intellectual functioning and adaptive behavior (American Association on Intellectual and Developmental Disabilities, 2013). Formerly known as mental retardation, the accepted term now is intellectual disability, and it has four subtypes: mild, moderate, severe, and profound ([(Link)]). The Diagnostic and Statistical Manual of Mental Disorders lists criteria for each subgroup (APA, 2013).
| Intellectual Disability Subtype | Percentage of Intellectually Disabled Population | Description |
| --- | --- | --- |
| Mild | 85% | 3rd- to 6th-grade skill level in reading, writing, and math; may be employed and live independently |
| Moderate | 10% | Basic reading and writing skills; functional self-care skills; requires some oversight |
| Severe | 5% | Functional self-care skills; requires oversight of daily environment and activities |
| Profound | <1% | May be able to communicate verbally or nonverbally; requires intensive oversight |
On the other end of the intelligence spectrum are those individuals whose IQs fall into the highest ranges. Consistent with the bell curve, about 2% of the population falls into this category. People are considered gifted if they have an IQ score of 130 or higher, or superior intelligence in a particular area. Long ago, popular belief suggested that people of high intelligence were maladjusted. This idea was disproven through a groundbreaking study of gifted children. In 1921, Lewis Terman began a longitudinal study of over 1500 children with IQs over 135 (Terman, 1925). His findings showed that these children became well-educated, successful adults who were, in fact, well-adjusted (Terman & Oden, 1947). Additionally, Terman’s study showed that the subjects were above average in physical build and attractiveness, dispelling an earlier popular notion that highly intelligent people were “weaklings.” Some people with very high IQs elect to join Mensa, an organization dedicated to identifying, researching, and fostering intelligence. Members must have an IQ score in the top 2% of the population, and they may be required to pass other exams in their application to join the group.
Dig Deeper: What’s in a Name? Mental Retardation
In the past, individuals with IQ scores below 70 and significant adaptive and social functioning delays were diagnosed with mental retardation. When this diagnosis was first named, the title held no social stigma. In time, however, the degrading word “retard” sprang from this diagnostic term and was frequently used as a taunt, especially among young people, until “mentally retarded” itself came to be heard as an insult. As such, the DSM-5 now labels this diagnosis as “intellectual disability.” Many states once had a Department of Mental Retardation to serve those diagnosed with such cognitive delays, but most have changed their names to Department of Developmental Disabilities or something similar. The Social Security Administration still uses the term “mental retardation” but is considering eliminating it from its programming (Goad, 2013). Earlier in the chapter, we discussed how language affects how we think. Do you think changing the names of these departments has any impact on how people regard those with developmental disabilities? Does a different name give people more dignity, and if so, how? Does it change the expectations for those with developmental or cognitive disabilities? Why or why not?
WHY MEASURE INTELLIGENCE?
The value of IQ testing is most evident in educational or clinical settings. Children who seem to be experiencing learning difficulties or severe behavioral problems can be tested to ascertain whether the child’s difficulties can be partly attributed to an IQ score that is significantly different from the mean for her age group. Without IQ testing—or another measure of intelligence—children and adults needing extra support might not be identified effectively. In addition, IQ testing is used in courts to determine whether a defendant has special or extenuating circumstances that preclude him from participating in some way in a trial. People also use IQ testing results to seek disability benefits from the Social Security Administration. While IQ tests have sometimes been used as arguments in support of insidious purposes, such as the eugenics movement (Severson, 2011), the following case study demonstrates the usefulness and benefits of IQ testing.
Candace, a 14-year-old girl experiencing problems at school, was referred for a court-ordered psychological evaluation. She was in regular education classes in ninth grade and was failing every subject. Candace had never been a stellar student but had always been passed to the next grade. Frequently, she would curse at any of her teachers who called on her in class. She also got into fights with other students and occasionally shoplifted. When she arrived for the evaluation, Candace immediately said that she hated everything about school, including the teachers, the rest of the staff, the building, and the homework. Her parents stated that they felt their daughter was picked on, because she was of a different race than the teachers and most of the other students. When asked why she cursed at her teachers, Candace replied, “They only call on me when I don’t know the answer. I don’t want to say, ‘I don’t know’ all of the time and look like an idiot in front of my friends. The teachers embarrass me.” She was given a battery of tests, including an IQ test. Her score on the IQ test was 68. What does Candace’s score say about her ability to excel or even succeed in regular education classes without assistance?
In this section, we learned about the history of intelligence testing and some of the challenges regarding intelligence testing. Intelligence tests began in earnest with Binet; Wechsler later developed intelligence tests that are still in use today: the WAIS-IV and WISC-V. The bell curve shows the range of scores that encompasses average intelligence as well as standard deviations.
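For readers curious about the arithmetic behind the bell curve, IQ tests are conventionally normed to a mean of 100 and a standard deviation of 15 (an assumption of this sketch, stated here rather than in the text above). Python's standard library can then estimate how a given score sits in the distribution:

```python
from statistics import NormalDist

# Conventional IQ norming: mean 100, standard deviation 15
# (an assumption of this sketch).
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population scoring below 70, the historical
# diagnostic cutoff (two standard deviations below the mean).
below_cutoff = iq.cdf(70)

# Fraction scoring within one standard deviation of the mean
# (85 to 115), often described as the average range.
average_range = iq.cdf(115) - iq.cdf(85)

print(f"Below 70: {below_cutoff:.1%}")             # roughly 2.3%
print(f"Between 85 and 115: {average_range:.1%}")  # roughly 68%
```

This is only a numerical illustration of the normal curve, not a substitute for the clinical judgment that accompanies real IQ testing.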
Self Check Questions
Critical Thinking Questions
1. Why do you think different theorists have defined intelligence in different ways?
2. Compare and contrast the benefits of the Stanford-Binet IQ test and Wechsler’s IQ tests.
Personal Application Question
3. In thinking about the case of Candace described earlier, do you think that Candace benefitted or suffered as a result of consistently being passed on to the next grade?
1. Since cognitive processes are complex, ascertaining them in a measurable way is challenging. Researchers have taken different approaches to define intelligence in an attempt to comprehensively describe and measure it.
2. The Wechsler-Bellevue IQ test combined a series of subtests that tested verbal and nonverbal skills into a single IQ test in order to get a reliable, descriptive score of intelligence. While the Stanford-Binet test was normed and standardized, it focused more on verbal skills than variations in other cognitive processes.
By the end of this section, you will be able to:
- Define intelligence
- Explain the triarchic theory of intelligence
- Identify the difference between intelligence theories
- Explain emotional intelligence
A four-and-a-half-year-old boy sits at the kitchen table with his father, who is reading a new story aloud to him. He turns the page to continue reading, but before he can begin, the boy says, “Wait, Daddy!” He points to the words on the new page and reads aloud, “Go, Pig! Go!” The father stops and looks at his son. “Can you read that?” he asks. “Yes, Daddy!” And he points to the words and reads again, “Go, Pig! Go!”
This father was not actively teaching his son to read, even though the child constantly asked questions about letters, words, and symbols that they saw everywhere: in the car, in the store, on the television. The dad wondered about what else his son might understand and decided to try an experiment. Grabbing a sheet of blank paper, he wrote several simple words in a list: mom, dad, dog, bird, bed, truck, car, tree. He put the list down in front of the boy and asked him to read the words. “Mom, dad, dog, bird, bed, truck, car, tree,” he read, slowing down to carefully pronounce bird and truck. Then, “Did I do it, Daddy?” “You sure did! That is very good.” The father gave his little boy a warm hug and continued reading the story about the pig, all the while wondering if his son’s abilities were an indication of exceptional intelligence or simply a normal pattern of linguistic development. Like the father in this example, psychologists have wondered what constitutes intelligence and how it can be measured.
What exactly is intelligence? The way that researchers have defined the concept of intelligence has been modified many times since the birth of psychology. British psychologist Charles Spearman believed intelligence consisted of one general factor, called g, which could be measured and compared among individuals. Spearman focused on the commonalities among various intellectual abilities and deemphasized what made each unique. Long before modern psychology developed, however, ancient philosophers, such as Aristotle, held a similar view (Cianciolo & Sternberg, 2004).
Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities. In the 1940s, Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components: crystallized intelligence and fluid intelligence (Cattell, 1963). Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it. When you learn, remember, and recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course. Fluid intelligence encompasses the ability to see complex relationships and solve problems. Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life, whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963).
Other theorists and psychologists believe that intelligence should be defined in more practical terms. For example, what types of behaviors help you get ahead in life? Which skills promote success? Think about this for a moment. Being able to recite all 44 presidents of the United States in order is an excellent party trick, but will knowing this make you a better person?
Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of intelligence because it sees intelligence as comprised of three parts (Sternberg, 1988): practical, creative, and analytical intelligence ([(Link)]).
Practical intelligence, as proposed by Sternberg, is sometimes compared to “street smarts.” Being practical means you find solutions that work in your everyday life by applying knowledge based on your experiences. This type of intelligence appears to be separate from traditional understanding of IQ; individuals who score high in practical intelligence may or may not have comparable scores in creative and analytical intelligence (Sternberg, 1988).
This story about the 2007 Virginia Tech shootings illustrates both high and low practical intelligence. During the incident, one student left her class to go get a soda in an adjacent building. She planned to return to class, but when she returned to her building after getting her soda, she saw that the door she used to leave was now chained shut from the inside. Instead of thinking about why there was a chain around the door handles, she went to her class’s window and crawled back into the room. She thus potentially exposed herself to the gunman. Thankfully, she was not shot. On the other hand, a pair of students was walking on campus when they heard gunshots nearby. One friend said, “Let’s go check it out and see what is going on.” The other student said, “No way, we need to run away from the gunshots.” They did just that. As a result, both avoided harm. The student who crawled through the window demonstrated some creative intelligence but did not use common sense. She would have low practical intelligence. The student who encouraged his friend to run away from the sound of gunshots would have much higher practical intelligence.
Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and contrast. When reading a classic novel for literature class, for example, it is usually necessary to compare the motives of the main characters of the book or analyze the historical context of the story. In a science course such as anatomy, you must study the processes by which the body uses various minerals in different human systems. In developing an understanding of this topic, you are using analytical intelligence. When solving a challenging math problem, you would apply analytical intelligence to analyze different aspects of the problem and then solve it section by section.
Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work of art or a well-developed short story. Imagine for a moment that you are camping in the woods with some friends and realize that you’ve forgotten your camp coffee pot. The person in your group who figures out a way to successfully brew coffee for everyone would be credited as having higher creative intelligence.
Multiple Intelligences Theory was developed by Howard Gardner, a Harvard psychologist and former student of Erik Erikson. Gardner’s theory, which has been refined for more than 30 years, is a more recent development among theories of intelligence. In Gardner’s theory, each person possesses at least eight intelligences. Among these eight intelligences, a person typically excels in some and falters in others (Gardner, 1983). [(Link)] describes each type of intelligence.
|Intelligence Type||Characteristics||Representative Career|
|Linguistic intelligence||Perceives different functions of language, different sounds and meanings of words, may easily learn multiple languages||Journalist, novelist, poet, teacher|
|Logical-mathematical intelligence||Capable of seeing numerical patterns, strong ability to use reason and logic||Scientist, mathematician|
|Musical intelligence||Understands and appreciates rhythm, pitch, and tone; may play multiple instruments or perform as a vocalist||Composer, performer|
|Bodily kinesthetic intelligence||High ability to control the movements of the body and use the body to perform various physical tasks||Dancer, athlete, athletic coach, yoga instructor|
|Spatial intelligence||Ability to perceive the relationship between objects and how they move in space||Choreographer, sculptor, architect, aviator, sailor|
|Interpersonal intelligence||Ability to understand and be sensitive to the various emotional states of others||Counselor, social worker, salesperson|
|Intrapersonal intelligence||Ability to access personal feelings and motivations, and use them to direct behavior and reach personal goals||Key component of personal success over time|
|Naturalist intelligence||High capacity to appreciate the natural world and interact with the species within it||Biologist, ecologist, environmentalist|
Gardner’s theory is relatively new and needs additional research to better establish empirical support. At the same time, his ideas challenge the traditional idea of intelligence to include a wider variety of abilities, although it has been suggested that Gardner simply relabeled what other theorists called “cognitive styles” as “intelligences” (Morgan, 1996). Furthermore, developing traditional measures of Gardner’s intelligences is extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).
Gardner’s inter- and intrapersonal intelligences are often combined into a single type: emotional intelligence. Emotional intelligence encompasses the ability to understand the emotions of yourself and others, show empathy, understand social relationships and cues, and regulate your own emotions and respond in culturally appropriate ways (Parker, Saklofske, & Stough, 2009). People with high emotional intelligence typically have well-developed social skills. Some researchers, including Daniel Goleman, the author of Emotional Intelligence: Why It Can Matter More than IQ, argue that emotional intelligence is a better predictor of success than traditional intelligence (Goleman, 1995). However, emotional intelligence has been widely debated, with researchers pointing out inconsistencies in how it is defined and described, as well as questioning results of studies on a subject that is difficult to measure and study empirically (Locke, 2005; Mayer, Salovey, & Caruso, 2004).
Intelligence can also have different meanings and values in different cultures. If you live on a small island, where most people get their food by fishing from boats, it would be important to know how to fish and how to repair a boat. If you were an exceptional angler, your peers would probably consider you intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole island. Think about your own family’s culture. What values are important for Latino families? Italian families? In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are a skilled storyteller, other members of Irish culture are likely to consider you intelligent.
Some cultures place a high value on working together as a collective. In these cultures, the importance of the group supersedes the importance of individual achievement. When you visit such a culture, how well you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as cultural competence.
Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative people often have intense knowledge about something, work on it for years, look at novel solutions, seek out the advice and help of other experts, and take risks. Although creativity is often associated with the arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something new. Creativity can be found in every area of life, from the way you decorate your residence to a new way of understanding how a cell works.
Creativity is often assessed as a function of one’s ability to engage in divergent thinking. Divergent thinking can be described as thinking “outside the box”; it allows an individual to arrive at unique, multiple solutions to a given problem. In contrast, convergent thinking describes the ability to provide a correct or well-established answer or solution to a problem (Cropley, 2006; Guilford, 1967).
Everyday Connection: Creativity
Dr. Tom Steitz, the Sterling Professor of Biochemistry and Biophysics at Yale University, has spent his career looking at the structure and specific aspects of RNA molecules and how their interactions could help produce antibiotics and ward off diseases. As a result of his lifetime of work, he won the Nobel Prize in Chemistry in 2009. He wrote, “Looking back over the development and progress of my career in science, I am reminded how vitally important good mentorship is in the early stages of one’s career development and constant face-to-face conversations, debate and discussions with colleagues at all stages of research. Outstanding discoveries, insights and developments do not happen in a vacuum” (Steitz, 2010, para. 39). Based on Steitz’s comment, it becomes clear that someone’s creativity, although an individual strength, benefits from interactions with others. Think of a time when your creativity was sparked by a conversation with a friend or classmate. How did that person influence you and what problem did you solve using creativity?
Intelligence is a complex characteristic of cognition. Many theories have been developed to explain what intelligence is and how it works. Sternberg generated his triarchic theory of intelligence, whereas Gardner posits that intelligence is comprised of many factors. Still others focus on the importance of emotional intelligence. Finally, creativity seems to be a facet of intelligence, but it is extremely difficult to measure objectively.
Self Check Questions
Critical Thinking Questions
1. Describe a situation in which you would need to use practical intelligence.
2. Describe a situation in which cultural intelligence would help you communicate better.
Personal Application Question
3. What influence do you think emotional intelligence plays in your personal life?
1. You are out with friends and it is getting late. You need to make it home before your curfew, but you don’t have a ride home. You need to get in touch with your parents, but your cell phone is dead. So, you enter a nearby convenience store and explain your situation to the clerk. He allows you to use the store’s phone to call your parents, who come to pick you and your friends up and take everyone home.
By the end of this section, you will be able to:
- Describe problem solving strategies
- Define algorithm and heuristic
- Explain some common roadblocks to effective problem solving
People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.
When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem solving strategies can be applied, hopefully resulting in a solution.
A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ([(Link)]). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
|Trial and error||Continue trying different solutions until problem is solved||Restarting phone, turning off WiFi, turning off bluetooth in order to determine why your phone is malfunctioning|
|Algorithm||Step-by-step problem-solving formula||Instruction manual for installing new software on your computer|
|Heuristic||General problem-solving framework||Working backwards; breaking a task into steps|
Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
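The recipe analogy can be made literal in a few lines of code. The sketch below (the function name and the ingredient list are hypothetical illustrations) follows the same fixed steps every time it runs, which is what makes it an algorithm rather than a heuristic:

```python
def scale_recipe(ingredients, factor):
    """Apply the same fixed step to every ingredient:
    multiply each amount by the scaling factor."""
    return {name: amount * factor for name, amount in ingredients.items()}

# Doubling a (hypothetical) pizza dough recipe, as in the earlier
# example: every amount is doubled, and no judgment is required.
dough = {"flour_cups": 2.0, "water_cups": 0.75, "yeast_tsp": 1.0}
doubled = scale_recipe(dough, 2)
print(doubled)  # {'flour_cups': 4.0, 'water_cups': 1.5, 'yeast_tsp': 2.0}
```

Run with the same inputs, an algorithm like this always produces the same output, which is exactly the guarantee a heuristic does not offer.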
A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):
- When one is faced with too much information
- When the time to make a decision is limited
- When the decision to be made is unimportant
- When there is access to very little information to use in making the decision
- When an appropriate heuristic happens to come to mind in the same moment
Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
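The working-backwards plan above amounts to starting from the goal and subtracting each time requirement. A minimal sketch of the wedding example (the calendar date and the 45-minute traffic buffer are illustrative assumptions, not details from the scenario):

```python
from datetime import datetime, timedelta

# Start from the goal state and subtract each requirement in turn.
arrival = datetime(2024, 6, 1, 15, 30)       # want to be seated by 3:30 PM
drive_time = timedelta(hours=2, minutes=30)  # trip time without traffic
traffic_buffer = timedelta(minutes=45)       # assumed pad for I-95 backups

departure = arrival - drive_time - traffic_buffer
print(departure.strftime("%I:%M %p"))  # 12:15 PM
```

Without the traffic buffer, the same subtraction gives a 1:00 PM departure; the buffer simply shifts the whole plan earlier, which is the judgment the heuristic leaves to you.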
Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.
Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ([(Link)]) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
Here is another popular type of puzzle ([(Link)]) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:
Take a look at the “Puzzling Scales” logic puzzle below ([(Link)]). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).
PITFALLS TO PROBLEM SOLVING
Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now.
Functional fixedness is a type of mental set in which you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.
Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).
In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.
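A quick calculation shows what the realtor’s framing actually asks of you; the even four-way split below is an assumption, since the scenario does not say how the rent would be divided:

```python
budget = 1600   # the group's combined target rent per month
asking = 2000   # rent on the nicer house
roommates = 4

# Extra cost per person if the group stretches to the nicer house,
# assuming an even split.
extra_per_person = (asking - budget) / roommates
print(extra_per_person)  # 100.0 dollars more per person per month
```

Anchoring works partly because $100 more per person sounds small next to the $1,600 figure you walked in with, even though it is a 25% increase in the group’s total rent.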
The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.
Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in [(Link)].
|Anchoring||Tendency to focus on one particular piece of information when making decisions or problem-solving|
|Confirmation||Focuses on information that confirms existing beliefs|
|Hindsight||Belief that the event just experienced was predictable|
|Representative||Unintentional stereotyping of someone or something|
|Availability||Decision is based upon either an available precedent or an example that may be faulty|
Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.
Self Check Questions
Critical Thinking Questions
1. What is functional fixedness and how can overcoming it help you solve problems?
2. How does an algorithm save you time and energy when solving a problem?
Personal Application Question
3. Which type of bias do you recognize in your own decision making processes? How has this bias affected how you’ve made decisions in the past and how can you use your awareness of it to improve your decisions making skills in the future?
1. Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.
2. An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.
By the end of this section, you will be able to:
- Define language and demonstrate familiarity with the components of language
- Understand how the use of language develops
- Explain the relationship between language and thinking
Language is a communication system that involves using words and systematic rules to organize those words to transmit information from one individual to another. While language is a form of communication, not all communication is language. Many species communicate with one another through their postures, movements, odors, or vocalizations. This communication is crucial for species that need to interact and develop social relationships with their conspecifics. However, many people have asserted that it is language that makes humans unique among all of the animal species (Corballis & Suddendorf, 2007; Tomasello & Rakoczy, 2003). This section will focus on what distinguishes language as a special form of communication, how the use of language develops, and how language affects the way we think.
COMPONENTS OF LANGUAGE
Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set of rules that are used to convey meaning through the use of the lexicon (Fernández & Cairns, 2011). For instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense.
Words are formed by combining the various phonemes that make up the language. A phoneme (e.g., the sounds “ah” vs. “eh”) is a basic sound unit of a given language, and different languages have different sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language that convey some type of meaning (e.g., “I” is both a phoneme and a morpheme). We use semantics and syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to the process by which we derive meaning from morphemes and words. Syntax refers to the way words are organized into sentences (Chomsky, 1965; Fernández & Cairns, 2011).
We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to communicate information about both concrete and abstract concepts. We can talk about our immediate and observable surroundings as well as the surface of unseen planets. We can share our innermost thoughts, our plans for the future, and debate the value of a college education. We can provide detailed instructions for cooking a meal, fixing a car, or building a fire. The flexibility that language provides to relay vastly different types of information is a property that makes language so distinct as a mode of communication among humans.
Given the remarkable complexity of a language, one might expect that mastering a language would be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this might seem to be true. However, young children master language very quickly with relative ease. B. F. Skinner (1957) proposed that language is learned through reinforcement. Noam Chomsky (1965) criticized this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are biologically determined. The use of language develops in the absence of formal instruction and appears to follow a very similar pattern in children from vastly different cultures and backgrounds. It would seem, therefore, that we are born with a biological predisposition to acquire a language (Chomsky, 1965; Fernández & Cairns, 2011). Moreover, it appears that there is a critical period for language acquisition, such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease with which they acquire and master new languages diminishes (Johnson & Newport, 1989; Lenneberg, 1967; Singleton, 1995).
Children begin to learn about language from a very early age. In fact, it appears that this is occurring even before we are born. Newborns show preference for their mother’s voice and appear to be able to discriminate between the language spoken by their mother and other languages. Babies are also attuned to the languages being used around them and show preferences for videos of faces that are moving in synchrony with the audio of spoken language versus videos that do not synchronize with the audio (Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981).
| Stage | Age | Developmental Language and Communication |
| --- | --- | --- |
| 1 | 0–3 months | Reflexive communication |
| 2 | 3–8 months | Reflexive communication; interest in others |
| 3 | 8–13 months | Intentional communication; sociability |
| 4 | 12–18 months | First words |
| 5 | 18–24 months | Simple sentences of two words |
| 6 | 2–3 years | Sentences of three or more words |
| 7 | 3–5 years | Complex sentences; has conversations |
In the fall of 1970, a social worker in the Los Angeles area found a 13-year-old girl who was being raised in extremely neglectful and abusive conditions. The girl, who came to be known as Genie, had lived most of her life tied to a potty chair or confined to a crib in a small room that was kept closed with the curtains drawn. For a little over a decade, Genie had virtually no social interaction and no access to the outside world. As a result of these conditions, Genie was unable to stand up, chew solid food, or speak (Fromkin, Krashen, Curtiss, Rigler, & Rigler, 1974; Rymer, 1993). The police took Genie into protective custody.
Genie’s abilities improved dramatically following her removal from her abusive environment, and early on, it appeared she was acquiring language—much later than would be predicted by critical period hypotheses that had been posited at the time (Fromkin et al., 1974). Genie managed to amass an impressive vocabulary in a relatively short amount of time. However, she never developed a mastery of the grammatical aspects of language (Curtiss, 1981). Perhaps being deprived of the opportunity to learn language during a critical period impeded Genie’s ability to fully acquire and use language.
You may recall that each language has its own set of phonemes that are used to generate morphemes, words, and so on. Babies can discriminate among the sounds that make up a language (for example, they can tell the difference between the “s” in vision and the “ss” in fission); early on, they can differentiate between the sounds of all human languages, even those that do not occur in the languages that are used in their environments. However, by the time that they are about 1 year old, they can only discriminate among those phonemes that are used in the language or languages in their environments (Jensen, 2011; Werker & Lalonde, 1988; Werker & Tees, 1984).
After the first few months of life, babies enter what is known as the babbling stage, during which time they tend to produce single syllables that are repeated over and over. As time passes, more variations appear in the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate; they are just as likely to babble when they are alone as when they are with their caregivers (Fernández & Cairns, 2011). Interestingly, babies who are raised in environments in which sign language is used will also begin to show babbling in the gestures of their hands during this stage (Petitto, Holowka, Sergio, Levy, & Ostry, 2004).
Generally, a child’s first word is uttered sometime between 1 year and 18 months of age, and for the next few months, the child will remain in the “one word” stage of language development. During this time, children know a number of words, but they only produce one-word utterances. The child’s early vocabulary is limited to familiar objects or events, often nouns. Although children in this stage only make one-word utterances, these words often carry larger meaning (Fernández & Cairns, 2011). So, for example, a child saying “cookie” could be identifying a cookie or asking for a cookie.
As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that apply to their language(s). Even the mistakes that children sometimes make provide evidence of just how much they understand about those rules. This is sometimes seen in the form of overgeneralization. In this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality. For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or “three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are still being learned (Moskowitz, 1978).
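The overgeneralization pattern described above can be sketched as a toy rule system: a regular “add an s” rule plus a table of exceptions. A child who has learned the rule but not yet the exceptions behaves like the version of the model that ignores the table. (This is an illustrative sketch in Python; the word lists are invented for the example, not data from the text.)

```python
# Toy model of English pluralization: one regular rule plus exceptions.
# A learner who knows the rule but not the exceptions "overgeneralizes".

IRREGULAR_PLURALS = {"goose": "geese", "mouse": "mice", "child": "children"}

def pluralize(word, knows_exceptions=True):
    """Apply the regular "add -s" rule, consulting the exception
    table only if the speaker has learned it."""
    if knows_exceptions and word in IRREGULAR_PLURALS:
        return IRREGULAR_PLURALS[word]
    return word + "s"

print(pluralize("dog"))                            # regular rule: "dogs"
print(pluralize("goose"))                          # adult form: "geese"
print(pluralize("goose", knows_exceptions=False))  # child's form: "gooses"
```

The point of the sketch is that the “error” requires knowing the rule: a learner with no rule at all could not produce “gooses” in the first place.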
LANGUAGE AND THOUGHT
When we speak one language, we agree that words are representations of ideas, people, places, and events. The given language that children learn is connected to their culture and surroundings. But can words themselves shape the way we think about things? Psychologists have long investigated the question of whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language. Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They wanted to understand how the language habits of a community encourage members of that community to interpret language in a particular manner (Sapir, 1941/1964). Sapir and Whorf proposed that language determines thought, suggesting, for example, that a person whose community language did not have past-tense verbs would be challenged to think about the past (Whorf, 1956). Researchers have since identified this view as too absolute, pointing out a lack of empiricism behind what Sapir and Whorf proposed (Abler, 2013; Boroditsky, 2011; van Troyer, 1994). Today, psychologists continue to study and debate the relationship between language and thought.
What Do You Think: The Meaning of Language
Think about what you know of other languages; perhaps you even speak multiple languages. Imagine for a moment that your closest friend fluently speaks more than one language. Do you think that friend thinks differently, depending on which language is being spoken? You may know a few words that are not translatable from their original language into English. For example, the Portuguese word saudade originated during the 15th century, when Portuguese sailors left home to explore the seas and travel to Africa or Asia. Those left behind described the emptiness and fondness they felt as saudade. The word came to express many meanings, including loss, nostalgia, yearning, warm memories, and hope. There is no single word in English that includes all of those emotions in a single description. Do words such as saudade indicate that different languages produce different patterns of thought in people? What do you think?
Language may indeed influence the way that we think, an idea known as linguistic determinism. One recent demonstration of this phenomenon involved differences in the way that English and Mandarin Chinese speakers talk and think about time. English speakers tend to talk about time using terms that describe changes along a horizontal dimension, for example, saying something like “I’m running behind schedule” or “Don’t get ahead of yourself.” While Mandarin Chinese speakers also describe time in horizontal terms, it is not uncommon to also use terms associated with a vertical arrangement. For example, the past might be described as being “up” and the future as being “down.” It turns out that these differences in language translate into differences in performance on cognitive tests designed to measure how quickly an individual can recognize temporal relationships. Specifically, when given a series of tasks with vertical priming, Mandarin Chinese speakers were faster at recognizing temporal relationships between months. Indeed, Boroditsky (2001) sees these results as suggesting that “habits in language encourage habits in thought” (p. 12).
One group of researchers who wanted to investigate how language influences thought compared how English speakers and the Dani people of Papua New Guinea think and speak about color. The Dani have two words for color: one word for light and one word for dark. In contrast, the English language has 11 color words. Researchers hypothesized that the number of color terms could limit the ways that the Dani people conceptualized color. However, the Dani were able to distinguish colors with the same ability as English speakers, despite having fewer words at their disposal (Berlin & Kay, 1969). A recent review of research aimed at determining how language might affect something like color perception suggests that language can influence perceptual phenomena, especially in the left hemisphere of the brain. You may recall from earlier chapters that the left hemisphere is associated with language for most people. However, the right (less linguistic) hemisphere of the brain is less affected by linguistic influences on perception (Regier & Kay, 2009).
Language is a communication system that has both a lexicon and a system of grammar. Language acquisition occurs naturally and effortlessly during the early stages of life, and this acquisition occurs in a predictable sequence for individuals around the world. Language has a strong influence on thought, and the concept of how language may influence cognition remains an area of study and debate in psychology.
Self Check Questions
Critical Thinking Questions
1. How do words not only represent our thoughts but also represent our values?
2. How could grammatical errors actually be indicative of language acquisition in children?
Personal Application Question
Can you think of examples of how language affects cognition?
1. People tend to talk about the things that are important to them or the things they think about the most. What we talk about, therefore, is a reflection of our values.
2. Grammatical errors that involve overgeneralization of specific rules of a given language indicate that the child recognizes the rule, even if he or she doesn’t recognize all of the subtleties or exceptions involved in the rule’s application.
By the end of this section, you will be able to:
- Describe cognition
- Distinguish concepts and prototypes
- Explain the difference between natural and artificial concepts
Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing (for example, Kahneman, 2011).
Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced.
Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.
CONCEPTS AND PROTOTYPES
The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating it into nervous impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the brain also pulls information from emotions and memories. Emotion and memory are powerful influences on both our thoughts and behaviors.
In order to organize this staggering amount of information, the brain has developed a file cabinet of sorts in the mind. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.
Concepts are informed by our semantic memory (you learned about this concept when you studied memory) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.
Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people, because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.
Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, for the category of civil disobedience, your prototype could be Rosa Parks. Her peaceful resistance to segregation on a city bus in Montgomery, Alabama, is a recognizable example of civil disobedience. Or your prototype could be Mohandas Gandhi, sometimes called Mahatma Gandhi (“Mahatma” is an honorific title).
Mohandas Gandhi served as a nonviolent force for independence for India while simultaneously demanding that Buddhist, Hindu, Muslim, and Christian leaders—both Indian and British—collaborate peacefully. Although he was not always successful in preventing violence around him, his life provides a steadfast example of the civil disobedience prototype (Constitutional Rights Foundation, 2013). Just as concepts can be abstract or concrete, we can make a distinction between concepts that are functions of our direct experience with the world and those that are more artificial in nature.
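In cognitive models, prototype-based categorization is often formalized as assigning a new item to the concept whose prototype it most resembles. Here is a minimal numeric sketch of that idea; the features and values are invented purely for illustration:

```python
import math

# Each concept is summarized by a prototype: a point in a small
# feature space. Features (invented): [has_wings, lays_eggs, can_fly]
prototypes = {
    "bird": (1.0, 1.0, 0.9),
    "mammal": (0.0, 0.05, 0.1),
}

def categorize(item):
    """Assign an item to the concept whose prototype is nearest
    (Euclidean distance)."""
    return min(prototypes, key=lambda c: math.dist(prototypes[c], item))

penguin = (1.0, 1.0, 0.0)   # wings and eggs, but flightless
print(categorize(penguin))  # still closer to the "bird" prototype
```

Note how the penguin is categorized as a bird even though it lacks one prototypical feature: resemblance to the prototype, not a strict checklist, drives the decision.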
NATURAL AND ARTIFICIAL CONCEPTS
In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations or experiences of snow.
An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width) are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
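Because an artificial concept is fixed by a specific set of defining characteristics, it can be written down as an explicit membership test, and further concepts (like area) can be built on top of it. The following sketch is illustrative; the particular checks are one possible encoding of the definitions in the text:

```python
def is_square(sides, angles):
    """An artificial concept: membership is decided entirely by the
    defining characteristics (four equal sides, four right angles)."""
    return (len(sides) == 4 and len(set(sides)) == 1
            and len(angles) == 4 and all(a == 90 for a in angles))

def square_area(side):
    """The "area of a square" concept builds on the square concept:
    area = length x width, and for a square length equals width."""
    return side * side

print(is_square([3, 3, 3, 3], [90, 90, 90, 90]))  # True
print(is_square([3, 3, 3, 4], [90, 90, 90, 90]))  # False: unequal sides
print(square_area(3))                              # 9
```

Contrast this with a natural concept like snow, for which no such exhaustive checklist exists: membership is graded by experience rather than decided by definition.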
A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.
There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already unknowingly made judgments about him. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: Perhaps this particular firefighter is not brave, he just works as a firefighter to pay the bills while studying to become a children’s librarian.
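The gap-filling behavior of a role schema can be modeled as a set of default assumptions that observed facts override. This is a hypothetical sketch; the trait names are invented for the firefighter example in the text:

```python
# A role schema as default assumptions; anything actually observed
# about the individual overrides the defaults.
FIREFIGHTER_SCHEMA = {"brave": True, "selfless": True,
                      "community_oriented": True}

def impression(schema, observed):
    """Merge schema defaults with observed facts; observations win
    because later keys override earlier ones in the merge."""
    return {**schema, **observed}

# We know nothing about him yet, so the schema fills every gap:
print(impression(FIREFIGHTER_SCHEMA, {}))

# Later we learn he is not especially brave; the observed fact
# overrides the schema's default:
print(impression(FIREFIGHTER_SCHEMA, {"brave": False}))
```

The sketch captures both properties the text describes: efficiency (a full impression from almost no data) and risk (the defaults may simply be wrong for this individual).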
An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator. First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.).
Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013).
Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.
In this section, you were introduced to cognitive psychology, which is the study of cognition, or the brain’s ability to think, perceive, plan, analyze, and remember. Concepts and their corresponding prototypes help us quickly organize our thinking by creating categories into which we can sort new information. We also develop schemata, which are clusters of related concepts. Some schemata involve routines of thought and behavior, and these help us function properly in various situations without having to “think twice” about them. Schemata show up in social situations and routines of daily behavior.
Self Check Questions
Critical Thinking Questions
1. Describe a social schema that you would notice at a sporting event.
2. Explain why event schemata have so much power over human behavior.
Personal Application Question
3. Describe a natural concept that you know fully but that would be difficult for someone else to understand and explain why it would be difficult.
1. Answers will vary. When attending a basketball game, it is typical to support your team by wearing the team colors and sitting behind their bench.
2. Event schemata are rooted in the social fabric of our communities. We expect people to behave in certain ways in certain types of situations, and we hold ourselves to the same social standards. It is uncomfortable to go against an event schema—it feels almost like we are breaking the rules.
Why is it so difficult to break habits—like reaching for your ringing phone even when you shouldn’t, such as when you’re driving? How does a person who has never seen or touched snow in real life develop an understanding of the concept of snow? How do young children acquire the ability to learn language with no formal instruction? Psychologists who study thinking explore questions like these.
Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person to person? Are “street smarts” a kind of intelligence, and if so, how do they relate to other types of intelligence? What does an IQ test really measure? These questions and more will be explored in this chapter as you study thinking and intelligence.
In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider thinking and briefly explore the development and use of language. We will also discuss problem solving and creativity before ending with a discussion of how intelligence is measured and how our biology and environments interact to affect intelligence. After finishing this chapter, you will have a greater appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.
Abler, W. (2013). Sapir, Harris, and Chomsky in the twentieth century. Cognitive Critique, 7, 29–48.
American Association on Intellectual and Developmental Disabilities. (2013). Definition of intellectual disability. Retrieved from http://aaidd.org/intellectual-disability/definition#.UmkR2xD2Bh4
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed., pp. 34–36). Washington, D.C.: American Psychiatric Association.
Aronson, E. (Ed.). (1995). Social cognition. In The social animal (p. 151). New York: W.H. Freeman and Company.
Atkins v. Virginia, 00-8452 (2002).
Bartels, M., Rietveld, M., Van Baal, G., & Boomsma, D. I. (2002). Genetic and environmental influences on the development of intelligence. Behavior Genetics, 32(4), 237–238.
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge, England: Cambridge University Press.
Bayer, J. B., & Campbell, S. W. (2012). Texting while driving on automatic: Considering the frequency-independent side of habit. Computers in Human Behavior, 28, 2083–2090.
Barton, S. M. (2003). Classroom accommodations for students with dyslexia. Learning Disabilities Journal, 13, 10–14.
Berlin, B., & Kay, P. (1969). Basic color terms: Their universality and evolution. Berkeley: University of California Press.
Berninger, V. W. (2008). Defining and differentiating dysgraphia, dyslexia, and language learning disability within a working memory model. In M. Mody & E. R. Silliman (Eds.), Brain, behavior, and learning in language and reading disorders (pp. 103–134). New York: The Guilford Press.
Blossom, M., & Morgan, J. L. (2006). Does the face say what the mouth says? A study of infants’ sensitivity to visual prosody. In the 30th annual Boston University Conference on Language Development, Somerville, MA.
Boake, C. (2002, May 24). From the Binet-Simon to the Wechsler-Bellevue: Tracing the history of intelligence testing. Journal of Clinical and Experimental Neuropsychology, 24(3), 383–405.
Boroditsky, L. (2001). Does language shape thought? Mandarin and English speakers’ conceptions of time. Cognitive Psychology, 43, 1–22.
Boroditsky, L. (2011, February). How language shapes thought. Scientific American, 63–65.
Bouchard, T. J., Lykken, D. T., McGue, M., Segal, N. L., & Tellegen, A. (1990). Sources of human psychological differences: The Minnesota Study of Twins Reared Apart. Science, 250, 223–228.
Cairns Regional Council. (n.d.). Cultural greetings. Retrieved from http://www.cairns.qld.gov.au/__data/assets/pdf_file/0007/8953/CulturalGreetingExercise.pdf
Callero, P. L. (1994). From role-playing to role-using: Understanding role as resource. Social Psychology Quarterly, 57, 228–243.
Cattell, R. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22.
Cianciolo, A. T., & Sternberg, R. J. (2004). Intelligence: A brief history. Malden, MA: Blackwell Publishing.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Corballis, M. C., & Suddendorf, T. (2007). Memory, time, and language. In C. Pasternak (Ed.), What makes us human (pp. 17–36). Oxford, UK: Oneworld Publications.
Constitutional Rights Foundation. (2013). Gandhi and civil disobedience. Retrieved from http://www.crf-usa.org/black-history-month/gandhi-and-civil-disobedience
Cropley, A. (2006). In praise of convergent thinking. Creativity Research Journal, 18(3), 391–404.
Csikszentmihalyi, M., & Csikszentmihalyi, I. (1993). Family influences on the development of giftedness. Ciba Foundation Symposium, 178, 187–206.
Curtiss, S. (1981). Dissociations between language and cognition: Cases and implications. Journal of Autism and Developmental Disorders, 11(1), 15–30.
Cyclopedia of Puzzles. (n.d.). Retrieved from http://www.mathpuzzle.com/loyd/
Dates and Events. (n.d.). Oprah Winfrey timeline. Retrieved from http://www.datesandevents.org/people-timelines/05-oprah-winfrey-timeline.htm
Fernández, E. M., & Cairns, H. S. (2011). Fundamentals of psycholinguistics. West Sussex, UK: Wiley-Blackwell.
Flanagan, D., & Kaufman, A. (2004). Essentials of WISC-IV assessment. Hoboken: John Wiley and Sons, Inc.
Flynn, J., Shaughnessy, M. F., & Fulgham, S. W. (2012) Interview with Jim Flynn about the Flynn effect. North American Journal of Psychology, 14(1), 25–38.
Fox, M. (2012, November 1). Arthur R. Jensen dies at 89; Set off debate about I.Q. New York Times, p. B15.
Fromkin, V., Krashen, S., Curtiss, S., Rigler, D., & Rigler, M. (1974). The development of language in Genie: A case of language acquisition beyond the critical period. Brain and Language, 1, 81–107.
Furnham, A. (2009). The validity of a new, self-report measure of multiple intelligence. Current Psychology: A Journal for Diverse Perspectives on Diverse Psychological Issues, 28, 225–239.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gardner, H., & Moran, S. (2006). The science of multiple intelligences theory: A response to Lynn Waterhouse. Educational Psychologist, 41, 227–232.
German, T. P., & Barrett, H. C. (2005). Functional fixedness in a technologically sparse culture. Psychological Science, 16, 1–5.
Goad, B. (2013, January 25). SSA wants to stop calling people ‘mentally retarded.’ Retrieved from http://thehill.com/blogs/regwatch/pending-regs/279399-ssa-wants-to-stop-calling-people-mentally-retarded
Goldstone, R. L., & Kersten, A. (2003). Concepts and categorization. In A. F. Healy, R. W. Proctor, & I.B. Weiner (Eds.), Handbook of psychology (Volume IV, pp. 599–622). Hoboken, New Jersey: John Wiley & Sons, Inc.
Goleman, D. (1995). Emotional intelligence; Why it can matter more than IQ. New York: Bantam Books.
Gordon, O. E. (1995). Francis Galton (1822–1911). Retrieved from http://www.psych.utah.edu/gordon/Classes/Psy4905Docs/PsychHistory/Cards/Galton.html
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions. School Psychology Quarterly, 12(3), 249–267.
Guilford, J. P. (1967). The nature of human intelligence. New York, NY: McGraw Hill.
Heaton, S. (2004). Making the switch: Unlocking the mystery of the WISC-IV. Case Conference. University of Florida.
Jensen, J. (2011). Phoneme acquisition: Infants and second language learners. The Language Teacher, 35(6), 24–28.
Johnson, J. S., & Newport, E. L. (1989). Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21, 60–99.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus, and Giroux.
Kishyama, M. M., Boyce, W. T., Jimenez, A. M., Perry, L. M., & Knight, R. T. (2009). Socioeconomic disparities affect prefrontal function in children. Journal of Cognitive Neuroscience, 21(6), 1106–1115.
Klein, P. D. (1997). Multiplying the problems of intelligence by eight: A critique of Gardner’s theory. Canadian Journal of Education, 22, 377-94.
Larry P v. Riles, C-71-2270 RFP. (1979).
Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.
Liptak, A. (2008, January 19). Lawyer reveals secret, toppling death sentence. New York Times. Retrieved from http://www.nytimes.com/2008/01/19/us/19death.html?_r=0
Locke, E. A. (2005, April 14). Why emotional intelligence is an invalid concept. Journal of Organizational Behavior, 26, 425–431.
Mayer, J. D., Salovey, P., & Caruso, D. (2004). Emotional intelligence: Theory, findings, and implications, Psychological Inquiry, 15(3), 197–215.
Modgil, S., & Routledge, C. M. (Eds.). (1987). Arthur Jensen: Consensus and controversy. New York: Falmer Press.
Morgan, H. (1996). An analysis of Gardner’s theory of multiple intelligence. Roeper Review: A Journal on Gifted Education, 18, 263–269.
Moskowitz, B. A. (1978). The acquisition of language. Scientific American, 239, 92–108. Petitto, L. A., Holowka, S., Sergio, L. E., Levy, B., & Ostry, D. J. (2004). Baby hands that move to the rhythm of language: Hearing babies acquiring sign languages babble silently on the hands. Cognition, 93, 43–73.
Neyfakh, L. (2013, October 7). “Why you can’t stop checking your phone.” Retrieved from http://www.bostonglobe.com/ideas/2013/10/06/why-you-can-stop-checking-your-phone/rrBJzyBGDAr1YlEH5JQDcM/story.html
Parker, J. D., Saklofske, D. H., & Stough, C. (Eds.). (2009). Assessing emotional intelligence: Theory, research, and applications. New York: Springer.
Petitto, L. A., Holowka, S., Sergio, L. E., Levy, B., & Ostry, D. J. (2004). Baby hands that move to the rhythm of language: Hearing babies acquiring sign languages babble silently on the hands. Cognition, 93, 43–73.
Pickens, J. (1994). Full-term and preterm infants’ perception of face-voice synchrony. Infant Behavior and Development, 17, 447–455.
Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.
Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right. Trends in Cognitive Sciences, 13(10), 439–446.
Riccio, C. A., Gonzales, J. J., & Hynd, G. W. (1994). Attention-deficit Hyperactivity Disorder (ADHD) and learning disabilities. Learning Disability Quarterly, 17, 311–322.
Richardson, K. (2002). What IQ tests test. Theory & Psychology, 12(3), 283–314.
Roberts, D. (2014, May 27). U.S. Supreme Court bars Florida from using IQ score cutoff for executions. The Guardian. Retrieved from http://www.theguardian.com/world/2014/may/27/us-supreme-court-iq-score-cutoff-florida-execution
Rushton, J. P., & Jensen, A. R. (2005). Thirty years of research on race differences in cognitive ability. Psychology, public policy, and law, 11(2), 235–294.
Rymer, R. (1993). Genie: A Scientific Tragedy. New York: Harper Collins.
Sapir, E. (1964). Culture, language, and personality. Berkley: University of California Press. (Original work published 1941)
Schlinger, H. D. (2003). The myth of intelligence. The Psychological Record, 53(1), 15–32.
Severson, K. (2011, December 9). Thousands sterilized, a state weighs restitution. The New York Times. Retrieved from http://www.nytimes.com/2011/12/10/us/redress-weighed-for-forced-sterilizations-in-north-carolina.html?pagewanted=all&_r=0
Singleton, D. M. (1995). Introduction: A critical look at the critical period hypothesis in second language acquisition research. In D.M. Singleton & Z. Lengyel (Eds.), The age factor in second language acquisition: A critical look at the critical period hypothesis in second language acquisition research (pp. 1–29). Avon, UK: Multilingual Matters Ltd.
Skinner, B. F. (1957). Verbal behavior. Acton, MA: Copley Publishing Group.
Smits-Engelsman, B. C. M., & Van Galen, G. P. (1997). Dysgraphia in children: Lasting psychomotor deficiency or transient developmental delay? Journal of Experimental Child Psychology, 67, 164–184.
Spelke, E. S., & Cortelyou, A. (1981). Perceptual aspects of social knowing: Looking and listening in infancy. In M.E. Lamb & L.R. Sherrod (Eds.), Infant social cognition: Empirical and theoretical considerations (pp. 61–83). Hillsdale, NJ: Erlbaum.
Steitz, T. (2010). Thomas A. Steitz – Biographical. (K. Grandin, Ed.) Retrieved from http://www.nobelprize.org/nobel_prizes/chemistry/laureates/2009/steitz-bio.html
Sternberg, R. J. (1988). The triarchic mind: A new theory of intelligence. New York: Viking-Penguin.
Terman, L. M. (1925). Mental and physical traits of a thousand gifted children (I). Stanford, CA: Stanford University Press.
Terman, L. M., & Oden, M. H. (1947). The gifted child grows up: 25 years’ follow-up of a superior group: Genetic studies of genius (Vol. 4). Standord, CA: Stanford University Press.
Terman, L. M. (1916). The measurement of intelligence. Boston: Houghton-Mifflin.
Tomasello, M., & Rakoczy, H. (2003). What makes human cognition unique? From individual to shared to collective intentionality. Mind & Language, 18(2), 121–147.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
van Troyer, G. (1994). Linguistic determinism and mutability: The Sapir-Whorf “hypothesis” and intercultural communication. JALT Journal, 2, 163–178.
Wechsler, D. (1958). The measurement of adult intelligence. Baltimore: Williams & Wilkins.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale—revised. New York: Psychological Corporation.
Wechsler, D. (2002 ). WPPSI-R manual. New York: Psychological Corporation.
Werker, J. F., & Lalonde, C. E. (1988). Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology, 24, 672–683.
Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49–63.
Whorf, B. L. (1956). Language, thought and relativity. Cambridge, MA: MIT Press.
Williams, R. L., (1970). Danger: Testing and dehumanizing black children. Clinical Child Psychology Newsletter, 9(1), 5–6.
Zwicker, J. G. (2005). Effectiveness of occupational therapy in remediating handwriting difficulties in primary students: Cognitive versus multisensory interventions. Unpublished master’s thesis, University of Victoria, Victoria, British Columbia, Canada). Retrieved from http://dspace.library.uvic.ca:8080/bitstream/handle/1828/49/Zwicker%20thesis.pdf?sequence=1
By the end of this section, you will be able to:
- Define observational learning
- Discuss the steps in the modeling process
- Explain the prosocial and antisocial effects of observational learning
Previous sections of this chapter focused on classical and operant conditioning, which are forms of associative learning. In observational learning, we learn by watching others and then imitating, or modeling, what they do or say. The individuals performing the imitated behavior are called models. Research suggests that this imitative learning involves a specific type of neuron, called a mirror neuron (Hickock, 2010; Rizzolatti, Fadiga, Fogassi, & Gallese, 2002; Rizzolatti, Fogassi, & Gallese, 2006).
Humans and other animals are capable of observational learning. As you will see, the phrase “monkey see, monkey do” really is accurate ([(Link)]). The same could be said about other animals. For example, in a study of social learning in chimpanzees, researchers gave juice boxes with straws to two groups of captive chimpanzees. The first group dipped the straw into the juice box, and then sucked on the small amount of juice at the end of the straw. The second group sucked through the straw directly, getting much more juice. When the first group, the “dippers,” observed the second group, “the suckers,” what do you think happened? All of the “dippers” in the first group switched to sucking through the straws directly. By simply observing the other chimps and modeling their behavior, they learned that this was a more efficient method of getting juice (Yamamoto, Humle, and Tanaka, 2013).
Imitation is much more obvious in humans, but is imitation really the sincerest form of flattery? Consider Claire’s experience with observational learning. Claire’s nine-year-old son, Jay, was getting into trouble at school and was defiant at home. Claire feared that Jay would end up like her brothers, two of whom were in prison. One day, after yet another bad day at school and another negative note from the teacher, Claire, at her wit’s end, beat her son with a belt to get him to behave. Later that night, as she put her children to bed, Claire witnessed her four-year-old daughter, Anna, take a belt to her teddy bear and whip it. Claire was horrified, realizing that Anna was imitating her mother. It was then that Claire knew she wanted to discipline her children in a different manner.
Like Tolman, whose experiments with rats suggested a cognitive component to learning, psychologist Albert Bandura’s ideas about learning were different from those of strict behaviorists. Bandura and other researchers proposed a brand of behaviorism called social learning theory, which took cognitive processes into account. According to Bandura, pure behaviorism could not explain why learning can take place in the absence of external reinforcement. He felt that internal mental states must also have a role in learning and that observational learning involves much more than imitation. In imitation, a person simply copies what the model does. Observational learning is much more complex. According to Lefrançois (2012), there are several ways that observational learning can occur:
- You learn a new response. After watching your coworker get chewed out by your boss for coming in late, you start leaving home 10 minutes earlier so that you won’t be late.
- You choose whether or not to imitate the model depending on what you saw happen to the model. Remember Julian and his father? When learning to surf, Julian might watch how his father pops up successfully on his surfboard and then attempt to do the same thing. On the other hand, Julian might learn not to touch a hot stove after watching his father get burned on a stove.
- You learn a general rule that you can apply to other situations.
Bandura identified three kinds of models: live, verbal, and symbolic. A live model demonstrates a behavior in person, as when Ben stood up on his surfboard so that Julian could see how he did it. A verbal instructional model does not perform the behavior, but instead explains or describes the behavior, as when a soccer coach tells his young players to kick the ball with the side of the foot, not with the toe. A symbolic model can be fictional characters or real people who demonstrate behaviors in books, movies, television shows, video games, or Internet sources ([(Link)]).
STEPS IN THE MODELING PROCESS
Of course, we don’t learn a behavior simply by observing a model. Bandura described specific steps in the process of modeling that must be followed if learning is to be successful: attention, retention, reproduction, and motivation. First, you must be focused on what the model is doing—you have to pay attention. Next, you must be able to retain, or remember, what you observed; this is retention. Then, you must be able to perform the behavior that you observed and committed to memory; this is reproduction. Finally, you must have motivation. You need to want to copy the behavior, and whether or not you are motivated depends on what happened to the model. If you saw that the model was reinforced for her behavior, you will be more motivated to copy her. This is known as vicarious reinforcement. On the other hand, if you observed the model being punished, you would be less motivated to copy her. This is called vicarious punishment. For example, imagine that four-year-old Allison watched her older sister Kaitlyn playing in their mother’s makeup, and then saw Kaitlyn get a time-out when their mother came in. After their mother left the room, Allison was tempted to play in the makeup, but she did not want to get a time-out from her mother. What do you think she did? Once you actually demonstrate the new behavior, the reinforcement you receive plays a part in whether or not you will repeat the behavior.
Bandura researched modeling behavior, particularly children’s modeling of adults’ aggressive and violent behaviors (Bandura, Ross, & Ross, 1961). He conducted an experiment with a five-foot inflatable doll that he called a Bobo doll. In the experiment, children’s aggressive behavior was influenced by whether the teacher was punished for her behavior. In one scenario, a teacher acted aggressively with the doll, hitting, throwing, and even punching the doll, while a child watched. There were two types of responses by the children to the teacher’s behavior. When the teacher was punished for her bad behavior, the children decreased their tendency to act as she had. When the teacher was praised or ignored (and not punished for her behavior), the children imitated what she did, and even what she said. They punched, kicked, and yelled at the doll.
What are the implications of this study? Bandura concluded that we watch and learn, and that this learning can have both prosocial and antisocial effects. Prosocial (positive) models can be used to encourage socially acceptable behavior. Parents in particular should take note of this finding. If you want your children to read, then read to them. Let them see you reading. Keep books in your home. Talk about your favorite books. If you want your children to be healthy, then let them see you eat right and exercise, and spend time engaging in physical fitness activities together. The same holds true for qualities like kindness, courtesy, and honesty. The main idea is that children observe and learn from their parents, even their parents’ morals, so be consistent and toss out the old adage “Do as I say, not as I do,” because children tend to copy what you do instead of what you say. Besides parents, many public figures, such as Martin Luther King, Jr. and Mahatma Gandhi, are viewed as prosocial models who are able to inspire global social change. Can you think of someone who has been a prosocial model in your life?
The antisocial effects of observational learning are also worth mentioning. As you saw from the example of Claire at the beginning of this section, her daughter viewed Claire’s aggressive behavior and copied it. Research suggests that this may help to explain why abused children often grow up to be abusers themselves (Murrell, Christoff, & Henning, 2007). In fact, about 30% of abused children become abusive parents (U.S. Department of Health & Human Services, 2013). We tend to do what we know. Abused children, who grow up witnessing their parents deal with anger and frustration through violent and aggressive acts, often learn to behave in that manner themselves. Sadly, it’s a vicious cycle that’s difficult to break.
Some studies suggest that violent television shows, movies, and video games may also have antisocial effects ([(Link)]), although further research needs to be done to understand the correlational and causal aspects of media violence and behavior. Some studies have found a link between viewing media violence and aggressive behavior in children (Anderson & Gentile, 2008; Kirsch, 2010; Miller, Grabell, Thomas, Bermann, & Graham-Bermann, 2012). These findings may not be surprising, given that a child graduating from high school has been exposed to around 200,000 violent acts, including murder, robbery, torture, bombings, beatings, and rape, through various forms of media (Huston et al., 1992). Not only might viewing media violence affect aggressive behavior by teaching people to act that way in real-life situations, but it has also been suggested that repeated exposure to violent acts desensitizes people to violence. Psychologists are working to understand this dynamic.
According to Bandura, learning can occur by watching others and then modeling what they do or say. This is known as observational learning. There are specific steps in the process of modeling that must be followed if learning is to be successful. These steps include attention, retention, reproduction, and motivation. Through modeling, Bandura has shown that children learn many things, both good and bad, simply by watching their parents, siblings, and others.
Self-Check Questions
Critical Thinking Questions
1. What is the effect of prosocial modeling and antisocial modeling?
2. Cara is 17 years old. Cara’s mother and father both drink alcohol every night. They tell Cara that drinking is bad and she shouldn’t do it. Cara goes to a party where beer is being served. What do you think Cara will do? Why?
Personal Application Question
3. What is something you have learned how to do after watching someone else?
Answer to Critical Thinking Question 1: Prosocial modeling can prompt others to engage in helpful and healthy behaviors, while antisocial modeling can prompt others to engage in violent, aggressive, and unhealthy behaviors.
By the end of this section, you will be able to:
- Define operant conditioning
- Explain the difference between reinforcement and punishment
- Distinguish between reinforcement schedules
The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, operant conditioning. In operant conditioning, organisms learn to associate a behavior and its consequence ([(Link)]). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.
| | Classical Conditioning | Operant Conditioning |
| --- | --- | --- |
| Conditioning approach | An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation). | The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future. |
| Stimulus timing | The stimulus occurs immediately before the response. | The stimulus (either reinforcement or punishment) occurs soon after the response. |
Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.
Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box” ([(Link)]). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.
In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement, and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment ([(Link)]).
| | Reinforcement | Punishment |
| --- | --- | --- |
| Positive | Something is added to increase the likelihood of a behavior. | Something is added to decrease the likelihood of a behavior. |
| Negative | Something is removed to increase the likelihood of a behavior. | Something is removed to decrease the likelihood of a behavior. |
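Because the four terms combine two independent questions—was a stimulus added or removed, and did the behavior become more or less likely?—a tiny helper function can make the terminology explicit. This is an illustrative sketch, not part of the text; the function name and its string arguments are my own assumptions.

```python
def classify(stimulus: str, effect_on_behavior: str) -> str:
    """Name the operant-conditioning procedure from its two defining features.

    stimulus: "added" or "removed" -- was something added or taken away?
    effect_on_behavior: "increases" or "decreases" -- does the target
    behavior become more or less likely?
    """
    # Reinforcement always increases a behavior; punishment always decreases it.
    kind = "reinforcement" if effect_on_behavior == "increases" else "punishment"
    # "Positive" means adding a stimulus; "negative" means removing one.
    sign = "positive" if stimulus == "added" else "negative"
    return f"{sign} {kind}"
```

For instance, a seatbelt chime that stops when you buckle up removes an annoying stimulus and increases buckling, so `classify("removed", "increases")` names it negative reinforcement.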
The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior.
For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let’s pause for a moment. Some people might say, “Why should I reward my child for doing what is expected?” But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward. Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students’ behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).
In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.
Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease a behavior. For example, when a child misbehaves, a parent might take away a favorite toy; here, a stimulus (the toy) is removed in order to decrease the behavior.
Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, runs into the busy street to get his ball. You give him a time-out (negative punishment, since it removes his opportunity to play) and tell him never to go into the street again. Chances are he won’t repeat this behavior. While strategies like time-outs are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. Brandon may become fearful of the street, but he also may become fearful of the person who delivered the punishment—you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won’t share their toys.
While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.
In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:
1. Reinforce any response that resembles the desired behavior.
2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
4. Continue to reinforce closer and closer approximations of the desired behavior.
5. Finally, only reinforce the desired behavior.
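The successive-approximation procedure above can be sketched as a toy numerical simulation. This is not a method from the text: it reduces "behavior" to a single number, and the ±0.5 response variability and the halfway update toward each reinforced response are arbitrary modeling assumptions chosen only to illustrate the idea of a tightening criterion.

```python
import random

def shape(target, n_trials=200, seed=0):
    """Toy shaping simulation via successive approximation.

    The learner emits variable responses around its current habit.
    We reinforce only responses that land closer to the target than
    the habit itself, and the habit shifts halfway toward each
    reinforced response, so the criterion tightens automatically,
    mirroring steps 1-5 above.
    """
    rng = random.Random(seed)
    habit = 0.0  # the learner's current typical response
    for _ in range(n_trials):
        response = habit + rng.uniform(-0.5, 0.5)  # natural variability
        if abs(response - target) < abs(habit - target):  # closer approximation?
            habit += 0.5 * (response - habit)  # reinforcement strengthens it
    return habit
```

Because only closer-than-before responses are ever reinforced, the habit can never drift away from the target; over many trials it homes in on the goal behavior, much as the pigeon's pecks are gradually narrowed down to the disk.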
Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.
It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.
PRIMARY AND SECONDARY REINFORCERS
Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.
What would be a good reinforcer for humans? For your daughter Sydney, it was the promise of a toy if she cleaned her room. How about Joaquin, the soccer player? If you gave Joaquin a piece of candy every time he made a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure.
A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Joaquin made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.
Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.
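A token economy like the one in the Cangi and Daly study can be sketched as a simple ledger: tokens are earned for appropriate behavior, removed for inappropriate behavior (a response cost), and exchanged for backup reinforcers. The exchange rate below is a hypothetical value for illustration; the study's actual rates are not given in the text:

```python
class TokenEconomy:
    """Minimal token ledger for a behavior management system.

    The 2-tokens-per-minute exchange rate is an invented example.
    """
    def __init__(self, tokens_per_minute=2):
        self.tokens = 0
        self.tokens_per_minute = tokens_per_minute

    def reward(self):
        """Appropriate behavior observed: award one token (e.g., 'quiet hands')."""
        self.tokens += 1

    def fine(self):
        """Inappropriate behavior observed: remove one token (response cost)."""
        self.tokens = max(0, self.tokens - 1)

    def exchange(self):
        """Trade tokens in for minutes of playtime; return the minutes earned."""
        minutes = self.tokens // self.tokens_per_minute
        self.tokens -= minutes * self.tokens_per_minute
        return minutes
```

For example, a child who earns five tokens and loses one would be able to exchange the remaining four for two minutes of playtime at this rate.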
Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed ([(Link)]). Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.
Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand ([(Link)]). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn’t throw blocks.
There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.
Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).
Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule—partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules ([(Link)]). These schedules are described as either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.
|Reinforcement schedule|Description|Result|Real-world example|
|---|---|---|---|
|Fixed interval|Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes).|Moderate response rate with significant pauses after reinforcement|Hospital patient uses patient-controlled, doctor-timed pain relief|
|Variable interval|Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes).|Moderate yet steady response rate|Checking Facebook|
|Fixed ratio|Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses).|High response rate with pauses after reinforcement|Piecework: a factory worker is paid for every x number of items manufactured|
|Variable ratio|Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).|High and steady response rate|Gambling|
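The four schedules differ only in what triggers the next reinforcer: a count of responses (ratio) or elapsed time (interval), either fixed or drawn at random (variable). A minimal sketch in Python, where each object decides whether a given response earns reinforcement (the interval lengths and ratios are illustrative assumptions):

```python
import random

class FixedRatio:
    """Reinforce every n-th response (e.g., commission per pair of glasses sold)."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self, t):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforce after a random number of responses averaging n (e.g., a slot machine)."""
    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or random.Random(0)
        self.left = self.rng.randint(1, 2 * n - 1)

    def respond(self, t):
        self.left -= 1
        if self.left <= 0:
            self.left = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed time has elapsed (e.g., hourly medication)."""
    def __init__(self, interval):
        self.interval = interval
        self.next_time = interval

    def respond(self, t):
        if t >= self.next_time:
            self.next_time = t + self.interval
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable delay (e.g., a surprise inspection)."""
    def __init__(self, mean_interval, rng=None):
        self.mean = mean_interval
        self.rng = rng or random.Random(0)
        self.next_time = self.rng.uniform(0, 2 * mean_interval)

    def respond(self, t):
        if t >= self.next_time:
            self.next_time = t + self.rng.uniform(0, 2 * self.mean)
            return True
        return False
```

Calling `respond(t)` once per response (with `t` as the response time) and counting the `True` returns reproduces the response patterns in the table: ratio schedules pay out in proportion to how much the organism does, while interval schedules pay out at most once per elapsed interval no matter how fast it responds.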
Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.
With a variable interval reinforcement schedule, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His performance in providing prompt service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.
With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really needs the prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells does not matter because her commission is not based on quality; it’s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.
In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.
In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish ([(Link)]).
Connect the Concepts: Gambling and the Brain
Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron’s money on a variable-ratio schedule” (p. 397).
Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). Beyond the power of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs. The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction ([(Link)]). Specifically, gambling may activate the reward centers of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy, et al., 1988). According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter. Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the brain. Breiter (as cited in Franzen, 2001) reports that “Monetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine” (para. 1). Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a gambling addiction.
It may be that pathological gamblers’ brains are different than those of other people, and perhaps this difference may somehow have led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’ brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.
COGNITION AND LATENT LEARNING
Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. Tolman, had a different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.
In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze ([(Link)]). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.
Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.
Everyday Connection: This Place Is Like a Maze
Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating, you’re not alone. At one time or another we’ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location, as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it’s often difficult to predict what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.
Operant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in which the motivation for a behavior happens after the behavior is demonstrated. An animal or a human receives a consequence after performing a specific behavior. The consequence is either a reinforcer or a punisher. All reinforcement (positive or negative) increases the likelihood of a behavioral response. All punishment (positive or negative) decreases the likelihood of a behavioral response. Several types of reinforcement schedules are used to reward behavior; each delivers reinforcement after either a set or variable period of time, or after a set or variable number of responses.
Self Check Questions
Critical Thinking Questions
1. What is a Skinner box and what is its purpose?
2. What is the difference between negative reinforcement and punishment?
3. What is shaping and how would you use shaping to teach a dog to roll over?
4. Explain the difference between negative reinforcement and punishment, and provide several examples of each based on your own experiences.
5. Think of a behavior that you have that you would like to change. How could you use behavior modification, specifically positive reinforcement, to change your behavior? What is your positive reinforcer?
1. A Skinner box is an operant conditioning chamber used to train animals such as rats and pigeons to perform certain behaviors, like pressing a lever. When the animals perform the desired behavior, they receive a reward: food or water.
2. In negative reinforcement you are taking away an undesirable stimulus in order to increase the frequency of a certain behavior (e.g., buckling your seat belt stops the annoying beeping sound in your car and increases the likelihood that you will wear your seat belt). Punishment is designed to reduce a behavior (e.g., you scold your child for running into the street in order to decrease the unsafe behavior).
3. Shaping is an operant conditioning method in which you reward closer and closer approximations of the desired behavior. If you want to teach your dog to roll over, you might reward him first when he sits, then when he lies down, and then when he lies down and rolls onto his back. Finally, you would reward him only when he completes the entire sequence: lying down, rolling onto his back, and then continuing to roll over to his other side.
By the end of this section, you will be able to:
- Explain how classical conditioning occurs
- Summarize the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination
Does the name Ivan Pavlov ring a bell? Even if you are new to the study of psychology, chances are that you have heard of Pavlov and his famous dogs.
Pavlov (1849–1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning ([(Link)]). As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events.
Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov’s area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov surgically implanted tubes inside dogs’ cheeks to collect saliva. He then measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps.
These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs’ “psychic secretions” (Pavlov, 1927). To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses.
In Pavlov’s experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:
In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder ([(Link)]). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs. Quite simply this pairing means:
When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food.
REAL WORLD APPLICATION OF CLASSICAL CONDITIONING
How does classical conditioning work in the real world? Let’s say you have a cat named Tiger, who is quite spoiled. You keep her food in a separate cabinet, and you also have a special electric can opener that you use only to open cans of cat food. For every meal, Tiger hears the distinctive sound of the electric can opener (“zzhzhz”) and then gets her food. Tiger quickly learns that when she hears “zzhzhz” she is about to get fed. What do you think Tiger does when she hears the electric can opener? She will likely get excited and run to where you are preparing her food. This is an example of classical conditioning. In this case, what are the UCS, CS, UCR, and CR?
What if the cabinet holding Tiger’s food becomes squeaky? In that case, Tiger hears “squeak” (the cabinet), “zzhzhz” (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the “squeak” of the cabinet. Pairing a new neutral stimulus (“squeak”) with the conditioned stimulus (“zzhzhz”) is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet ([(Link)]). It is hard to achieve anything above second-order conditioning. For example, if you ring a bell, open the cabinet (“squeak”), use the can opener (“zzhzhz”), and then feed Tiger, Tiger will likely never get excited when hearing the bell alone.
Everyday Connection: Classical Conditioning at Stingray City
Kate and her husband Scott recently vacationed in the Cayman Islands, and booked a boat tour to Stingray City, where they could feed and swim with the southern stingrays. The boat captain explained how the normally solitary stingrays have become accustomed to interacting with humans. About 40 years ago, fishermen began to clean fish and conch (unconditioned stimulus) at a particular sandbar near a barrier reef, and large numbers of stingrays would swim in to eat (unconditioned response) what the fishermen threw into the water; this continued for years. By the late 1980s, word of the large group of stingrays spread among scuba divers, who then started feeding them by hand. Over time, the southern stingrays in the area were classically conditioned much like Pavlov’s dogs. When they hear the sound of a boat engine (neutral stimulus that becomes a conditioned stimulus), they know that they will get to eat (conditioned response).
As soon as Kate and Scott reached Stingray City, over two dozen stingrays surrounded their tour boat. The couple slipped into the water with bags of squid, the stingrays’ favorite treat. The swarm of stingrays bumped and rubbed up against their legs like hungry cats ([(Link)]). Kate and Scott were able to feed, pet, and even kiss (for luck) these amazing creatures. Then all the squid was gone, and so were the stingrays.
Classical conditioning also applies to humans, even babies. For example, Sara buys formula in blue canisters for her six-month-old daughter, Angelina. Whenever Sara takes out a formula container, Angelina gets excited, tries to reach toward the food, and most likely salivates. Why does Angelina get excited when she sees the formula canister? What are the UCS, CS, UCR, and CR here?
So far, all of the examples have involved food, but classical conditioning extends beyond the basic need to be fed. Consider our earlier example of a dog whose owners install an invisible electric dog fence. A small electrical shock (unconditioned stimulus) elicits discomfort (unconditioned response). When the unconditioned stimulus (shock) is paired with a neutral stimulus (the edge of a yard), the dog associates the discomfort (unconditioned response) with the edge of the yard (conditioned stimulus) and stays within the set boundaries.
GENERAL PROCESSES IN CLASSICAL CONDITIONING
Now that you know how classical conditioning works and have seen several examples, let’s take a look at some of the general processes involved. In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.
Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here’s how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you’ve been conditioned to be averse to a food after a single, negative experience.
How does this occur—conditioning based on a single instance and involving an extended time lapse between the event and the negative stimulus? Research into taste aversion suggests that this response may be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia & Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural selection, but it may also help us develop strategies for challenges such as helping cancer patients through the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton, Baracos, & Wismer, 2007; Skolin et al., 2006).
Once we have established the connection between the unconditioned stimulus and the conditioned stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s case, imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus. Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.
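The acquisition and extinction curves described here are usually formalized with the Rescorla-Wagner model, a standard model from the conditioning literature (not introduced in this text): on every trial, the associative strength V of the conditioned stimulus moves a fraction alpha toward the maximum supported by the unconditioned stimulus (lambda when the UCS is present, 0 when it is absent). A sketch, with illustrative parameter values:

```python
def rescorla_wagner(trials, v0=0.0, alpha=0.3, lam=1.0):
    """Return the trial-by-trial associative strength V of a CS.

    trials: sequence of booleans; True means the CS was paired with the UCS
    (an acquisition trial), False means the CS appeared alone (an extinction
    trial). alpha (learning rate) and lam (UCS strength) are assumed values.
    """
    v = v0
    history = []
    for paired in trials:
        target = lam if paired else 0.0
        v += alpha * (target - v)  # delta rule: move a fraction alpha toward target
        history.append(v)
    return history

# Ten tone-plus-meat-powder pairings, then ten tone-alone (extinction) trials
curve = rescorla_wagner([True] * 10 + [False] * 10)
```

V climbs toward lambda during pairing and decays back toward zero during extinction, mirroring the rise and disappearance of salivation in Pavlov's dogs. Note that this simple rule does not capture spontaneous recovery; modeling that requires a separate mechanism, which is one reason extinction is understood as new learning rather than as erasing the original association.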
What happens when learning is not used for a while—when what was learned lies dormant? As we just discussed, Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger’s behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger’s food again, Tiger would remember the association between the can opener and her food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously extinguished conditioned response following a rest period ([(Link)]).
Of course, these processes also apply in humans. For example, let’s say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar.

Then one day you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to get an ice cream bar because you’re running late for class. You begin to salivate less and less when you hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This illustrates extinction. The conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth).

Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why?
After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.
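The rise and fall of the conditioned response described above can be sketched with the Rescorla-Wagner learning rule, a standard computational model of classical conditioning that is not covered in this chapter. The learning rate of 0.3 and the trial counts here are illustrative assumptions, not values from the text.

```python
# A minimal sketch of acquisition and extinction using the Rescorla-Wagner
# rule: associative strength V moves toward 1 when the US follows the CS
# and back toward 0 when the CS occurs alone. All values are illustrative.

def rescorla_wagner_trial(strength, us_present, learning_rate=0.3):
    """One conditioning trial: V <- V + alpha * (lambda - V)."""
    target = 1.0 if us_present else 0.0  # lambda: 1 if the US is delivered
    return strength + learning_rate * (target - strength)

v = 0.0
for _ in range(10):                      # acquisition: tone paired with food
    v = rescorla_wagner_trial(v, us_present=True)
acquired = v                             # near 1: strong conditioned response

for _ in range(10):                      # extinction: tone alone
    v = rescorla_wagner_trial(v, us_present=False)
extinguished = v                         # near 0: the response has faded

print(f"after acquisition: {acquired:.2f}; after extinction: {extinguished:.2f}")
```

Note that this basic rule does not by itself capture spontaneous recovery, which is one reason researchers view extinction as new learning that inhibits the old association rather than simple erasure of it.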
Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes—stimulus discrimination and stimulus generalization—are involved in distinguishing which stimuli will trigger the learned association. Animals (including humans) need to distinguish between stimuli—for example, between sounds that predict a threatening event and sounds that do not—so that they can respond appropriately (such as running away if the sound is threatening). When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food.
On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).
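A generalization gradient of this kind can be sketched numerically: response strength falls off smoothly as a test stimulus becomes less similar to the trained one. The Gaussian shape, the 1000 Hz training tone, and the width parameter below are illustrative assumptions, not values from the text.

```python
import math

# Response strength falls off as a test stimulus becomes less similar to
# the trained stimulus (here, similarity = closeness in pitch).
def conditioned_response(test_hz, trained_hz=1000.0, width=200.0):
    """Strength of the conditioned response, from 1 (identical) toward 0."""
    return math.exp(-((test_hz - trained_hz) ** 2) / (2 * width ** 2))

for freq in (1000, 1100, 1500, 2000):
    print(f"{freq} Hz -> response {conditioned_response(freq):.2f}")
```

A tone close to the trained one (1100 Hz) still evokes a strong response (generalization), while distant tones evoke little or none (discrimination).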
Sometimes, classical conditioning can lead to habituation. Habituation occurs when we learn not to respond to a stimulus that is presented repeatedly without change. As the stimulus occurs over and over, we learn not to focus our attention on it. For example, imagine that your neighbor or roommate constantly has the television blaring. This background noise is distracting and makes it difficult for you to focus when you’re studying. However, over time, you become accustomed to the stimulus of the television noise, and eventually you hardly notice it any longer.
John B. Watson, shown in [(Link)], is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.
Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.
In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his position at the university he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear certain things. Initially he was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR? Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask ([(Link)]). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or situation— through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. 
Little Albert’s mother moved away, ending the experiment, and Little Albert himself died a few years later of unrelated causes. While Watson’s research provided new insight into conditioning, it would be considered unethical by today’s standards.
Everyday Connection: Advertising and Associative Learning
Advertising executives are pros at applying the principles of associative learning. Think about the car commercials you have seen on television. Many of them feature an attractive model. By associating the model with the car being advertised, you come to see the car as being desirable (Cialdini, 2008). You may be asking yourself, does this advertising technique actually work? According to Cialdini (2008), men who viewed a car commercial that included an attractive model later rated the car as being faster, more appealing, and better designed than did men who viewed an advertisement for the same car minus the model.
Have you ever noticed how quickly advertisers cancel contracts with a famous athlete following a scandal? As far as the advertiser is concerned, that athlete is no longer associated with positive feelings; therefore, the athlete cannot be used as an unconditioned stimulus to condition the public to associate positive feelings (the unconditioned response) with their product (the conditioned stimulus).
Now that you are aware of how associative learning works, see if you can find examples of these types of advertisements on television, in magazines, or on the Internet.
Pavlov’s pioneering work with dogs contributed greatly to what we know about learning. His experiments explored the type of associative learning we now call classical conditioning. In classical conditioning, organisms learn to associate events that repeatedly happen together, and researchers study how a reflexive response to a stimulus can be mapped to a different stimulus—by training an association between the two stimuli. Pavlov’s experiments show how stimulus-response bonds are formed. Watson, the founder of behaviorism, was greatly influenced by Pavlov’s work. He tested humans by conditioning fear in an infant known as Little Albert. His findings suggest that classical conditioning can explain how some fears develop.
Self Check Questions
Critical Thinking Questions
1. If the sound of your toaster popping up toast causes your mouth to water, what are the UCS, CS, and CR?
2. Explain how the processes of stimulus generalization and stimulus discrimination are considered opposites.
3. How does a neutral stimulus become a conditioned stimulus?
Personal Application Question
4. Can you think of an example in your life of how classical conditioning has produced a positive emotional response, such as happiness or excitement? How about a negative emotional response, such as fear, anxiety, or anger?
1. The food being toasted is the UCS; the sound of the toaster popping up is the CS; salivating to the sound of the toaster is the CR.
2. In stimulus generalization, an organism responds to new stimuli that are similar to the original conditioned stimulus. For example, a dog barks when the doorbell rings. He then barks when the oven timer dings because it sounds very similar to the doorbell. On the other hand, stimulus discrimination occurs when an organism learns a response to a specific stimulus, but does not respond the same way to new stimuli that are similar. In this case, the dog would bark when he hears the doorbell, but he would not bark when he hears the oven timer ding because they sound different; the dog is able to distinguish between the two sounds.
3. This occurs through the process of acquisition. A human or an animal learns to connect a neutral stimulus and an unconditioned stimulus. During the acquisition phase, the neutral stimulus begins to elicit the conditioned response. The neutral stimulus is becoming the conditioned stimulus. At the end of the acquisition phase, learning has occurred and the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself.
By the end of this section, you will be able to:
- Explain how learned behaviors are different from instincts and reflexes
- Define learning
- Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning
Birds build nests and migrate as winter approaches. Infants suckle at their mother’s breast. Dogs shake water off wet fur. Salmon swim upstream to spawn, and spiders spin intricate webs. What do these seemingly unrelated behaviors have in common? They all are unlearned behaviors. Both instincts and reflexes are innate behaviors that organisms are born with. Reflexes are a motor or neural reaction to a specific stimulus in the environment. They tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla). In contrast, instincts are innate behaviors that are triggered by a broader range of events, such as aging and the change of seasons. They are more complex patterns of behavior, involve movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers.
Both reflexes and instincts help an organism adapt to its environment and do not have to be learned. For example, every healthy human baby has a sucking reflex, present at birth. Babies are born knowing how to suck on a nipple, whether artificial (from a bottle) or human. Nobody teaches the baby to suck, just as no one teaches a sea turtle hatchling to move toward the ocean.
Learning, like reflexes and instincts, allows an organism to adapt to its environment. But unlike instincts and reflexes, learned behaviors involve change and experience: learning is a relatively permanent change in behavior or knowledge that results from experience. In contrast to the innate behaviors discussed above, learning involves acquiring knowledge and skills through experience. Looking back at our surfing scenario, Julian will have to spend much more time training with his surfboard before he learns how to ride the waves like his father.
Learning to surf, as well as any complex learning process (e.g., learning about the discipline of psychology), involves a complex interaction of conscious and unconscious processes. Learning has traditionally been studied in terms of its simplest components—the associations our minds automatically make between events. Our minds have a natural tendency to connect events that occur closely together or in sequence. Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment. You will see that associative learning is central to all three basic learning processes discussed in this chapter; classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious. These learning processes will be discussed in detail later in the chapter, but it is helpful to have a brief overview of each as you begin to explore how learning is understood from a psychological perspective.
In classical conditioning, also known as Pavlovian conditioning, organisms learn to associate events—or stimuli—that repeatedly happen together. We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured—behaviors. Researchers ask if one stimulus triggers a reflex, can we train a different stimulus to trigger that same reflex?
In operant conditioning, organisms learn, again, to associate events—a behavior and its consequence (reinforcement or punishment). A pleasant consequence encourages more of that behavior in the future, whereas a punishment deters the behavior. Imagine you are teaching your dog, Hodor, to sit. You tell Hodor to sit, and give him a treat when he does. After repeated experiences, Hodor begins to associate the act of sitting with receiving a treat. He learns that the consequence of sitting is that he gets a doggie biscuit ([(Link)]). Conversely, if the dog is punished when exhibiting a behavior, it becomes conditioned to avoid that behavior (e.g., receiving a small shock when crossing the boundary of an invisible electric fence).
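The Hodor example can be caricatured as a probability update: each reinforced trial nudges the chance of the behavior upward, and punishment would nudge it downward. The starting probability and step size below are arbitrary illustrative choices, not part of any formal model presented in this chapter.

```python
# Reinforcement moves the behavior's probability toward 1; punishment
# would move it toward 0. The step size controls how fast learning occurs.

def update_behavior_probability(p, reinforced, step=0.1):
    """Nudge probability p toward 1 after reinforcement, toward 0 after punishment."""
    target = 1.0 if reinforced else 0.0
    return p + step * (target - p)

p_sit = 0.1                        # Hodor rarely sits on command at first
for _ in range(20):                # every successful sit earns a treat
    p_sit = update_behavior_probability(p_sit, reinforced=True)
print(round(p_sit, 2))             # sitting is now much more likely
```

Running the same loop with `reinforced=False` would drive the probability back down, mirroring how punishment deters a behavior.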
Observational learning extends the effective range of both classical and operant conditioning. In contrast to classical and operant conditioning, in which learning occurs only through direct experience, observational learning is the process of watching others and then imitating what they do. A lot of learning among humans and other animals comes from observational learning. To get an idea of the extra effective range that observational learning brings, consider Ben and his son Julian from the introduction. How might observation help Julian learn to surf, as opposed to learning by trial and error alone? By watching his father, he can imitate the moves that bring success and avoid the moves that lead to failure. Can you think of something you have learned how to do after watching someone else?
All of the approaches covered in this chapter are part of a particular tradition in psychology, called behaviorism, which we discuss in the next section. However, these approaches do not represent the entire study of learning. Separate traditions of learning have taken shape within different fields of psychology, such as memory and cognition, so you will find that other chapters will round out your understanding of the topic. Over time these traditions tend to converge. For example, in this chapter you will see how cognition has come to play a larger role in behaviorism, whose more extreme adherents once insisted that behaviors are triggered by the environment with no intervening thought.
Instincts and reflexes are innate behaviors—they occur naturally and do not involve learning. In contrast, learning is a change in behavior or knowledge that results from experience. There are three main types of learning: classical conditioning, operant conditioning, and observational learning. Both classical and operant conditioning are forms of associative learning where associations are made between events that occur together. Observational learning is just as it sounds: learning by observing others.
Self Check Questions
Critical Thinking Questions
1. Compare and contrast classical and operant conditioning. How are they alike? How do they differ?
2. What is the difference between a reflex and a learned behavior?
Personal Application Questions
3. What is your personal definition of learning? How do your ideas about learning compare with the definition of learning presented in this text?
4. What kinds of things have you learned through the process of classical conditioning? Operant conditioning? Observational learning? How did you learn them?
1. Both classical and operant conditioning involve learning by association. In classical conditioning, responses are involuntary and automatic; however, responses are voluntary and learned in operant conditioning. In classical conditioning, the event that drives the behavior (the stimulus) comes before the behavior; in operant conditioning, the event that drives the behavior (the consequence) comes after the behavior. Also, whereas classical conditioning involves an organism forming an association between an involuntary (reflexive) response and a stimulus, operant conditioning involves an organism forming an association between a voluntary behavior and a consequence.
2. A reflex is a behavior that humans are born knowing how to do, such as sucking or blushing; these behaviors happen automatically in response to stimuli in the environment. Learned behaviors are things that humans are not born knowing how to do, such as swimming and surfing. Learned behaviors are not automatic; they occur as a result of practice or repeated experience in a situation.
The summer sun shines brightly on a deserted stretch of beach. Suddenly, a tiny grey head emerges from the sand, then another and another. Soon the beach is teeming with loggerhead sea turtle hatchlings ([(Link)]). Although only minutes old, the hatchlings know exactly what to do. Their flippers are not very efficient for moving across the hot sand, yet they continue onward, instinctively. Some are quickly snapped up by gulls circling overhead and others become lunch for hungry ghost crabs that dart out of their holes. Despite these dangers, the hatchlings are driven to leave the safety of their nest and find the ocean.
Not far down this same beach, Ben and his son, Julian, paddle out into the ocean on surfboards. A wave approaches. Julian crouches on his board, then jumps up and rides the wave for a few seconds before losing his balance. He emerges from the water in time to watch his father ride the face of the wave.
Unlike baby sea turtles, which know how to find the ocean and swim with no help from their parents, we are not born knowing how to swim (or surf). Yet we humans pride ourselves on our ability to learn. In fact, over thousands of years and across cultures, we have created institutions devoted entirely to learning. But have you ever asked yourself how exactly it is that we learn? What processes are at work as we come to know what we know? This chapter focuses on the primary ways in which learning occurs.
Anderson, C. A., & Gentile, D. A. (2008). Media violence, aggression, and public policy. In E. Borgida & S. Fiske (Eds.), Beyond common sense: Psychological science in the courtroom (p. 322). Malden, MA: Blackwell.
Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63, 575–582.
Cangi, K., & Daly, M. (2013). The effects of token economies on the occurrence of appropriate and inappropriate behaviors by children with autism in a social skills setting. West Chester University: Journal of Undergraduate Research. Retrieved from http://www.wcupa.edu/UndergraduateResearch/journal/documents/cangi_S2012.pdf
Carlson, L., Holscher, C., Shipley, T., & Conroy Dalton, R. (2010). Getting lost in buildings. Current Directions in Psychological Science, 19(5), 284–289.
Cialdini, R. B. (2008). Influence: Science and practice (5th ed.). Boston, MA: Pearson Education.
Chance, P. (2009). Learning and behavior (6th ed.). Belmont, CA: Wadsworth, Cengage Learning.
DeAngelis, T. (2010). ‘Little Albert’ regains his identity. Monitor on Psychology, 41(1), 10.
Franzen, H. (2001, May 24). Gambling, like food and drugs, produces feelings of reward in the brain. Scientific American [online]. Retrieved from http://www.scientificamerican.com/article.cfm?id=gamblinglike-food-and-dru
Fryer, R. G., Jr. (2010, April). Financial incentives and student achievement: Evidence from randomized trials. National Bureau of Economic Research [NBER] Working Paper, No. 15898. Retrieved from http://www.nber.org/papers/w15898
Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123–124.
Garcia, J., & Rusiniak, K. W. (1980). What the nose learns from the mouth. In D. Müller-Schwarze & R. M. Silverstein (Eds.), Chemical signals: Vertebrates and aquatic invertebrates (pp. 141–156). New York, NY: Plenum Press.
Gershoff, E. T. (2002). Corporal punishment by parents and associated child behaviors and experiences: A meta-analytic and theoretical review. Psychological Bulletin, 128(4), 539–579. doi:10.1037//0033-2909.128.4.539
Gershoff, E.T., Grogan-Kaylor, A., Lansford, J. E., Chang, L., Zelli, A., Deater-Deckard, K., & Dodge, K. A. (2010). Parent discipline practices in an international sample: Associations with child behaviors and moderation by perceived normativeness. Child Development, 81(2), 487–502.
Hickock, G. (2010). The role of mirror neurons in speech and language processing. Brain and Language, 112, 1–2.
Holmes, S. (1993). Food avoidance in patients undergoing cancer chemotherapy. Support Care Cancer, 1(6), 326–330.
Hunt, M. (2007). The story of psychology. New York, NY: Doubleday.
Huston, A. C., Donnerstein, E., Fairchild, H., Feshbach, N. D., Katz, P. A., Murray, J. P., . . . Zuckerman, D. (1992). Big world, small screen: The role of television in American society. Lincoln, NE: University of Nebraska Press.
Hutton, J. L., Baracos, V. E., & Wismer, W. V. (2007). Chemosensory dysfunction is a primary factor in the evolution of declining nutritional status and quality of life in patients with advanced cancer. Journal of Pain Symptom Management, 33(2), 156–165.
Illinois Institute for Addiction Recovery. (n.d.). WTVP on gambling. Retrieved from http://www.addictionrecov.org/InTheNews/Gambling/
Jacobsen, P. B., Bovbjerg, D. H., Schwartz, M. D., Andrykowski, M. A., Futterman, A. D., Gilewski, T., . . . Redd, W. H. (1993). Formation of food aversions in cancer patients receiving repeated infusions of chemotherapy. Behaviour Research and Therapy, 31(8), 739–748.
Kirsch, S. J. (2010). Media and youth: A developmental perspective. Malden, MA: Wiley Blackwell.
Lefrançois, G. R. (2012). Theories of human learning: What the professors said (6th ed.). Belmont, CA: Wadsworth, Cengage Learning.
Miller, L. E., Grabell, A., Thomas, A., Bermann, E., & Graham-Bermann, S. A. (2012). The associations between community violence, television violence, intimate partner violence, parent-child aggression, and aggression in sibling relationships of a sample of preschoolers. Psychology of Violence, 2(2), 165–178. doi:10.1037/a0027254
Murrell, A., Christoff, K., & Henning, K. (2007). Characteristics of domestic violence offenders: Associations with childhood exposure to violence. Journal of Family Violence, 22(7), 523–532.
Pavlov, I. P. (1927). Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex (G. V. Anrep, Ed. & Trans.). London, UK: Oxford University Press.
Rizzolatti, G., Fadiga, L., Fogassi, L., & Gallese, V. (2002). From mirror neurons to imitation: Facts and speculations. In A. N. Meltzoff & W. Prinz (Eds.), The imitative mind: Development, evolution, and brain bases (pp. 247–266). Cambridge, United Kingdom: Cambridge University Press.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2006, November). Mirrors in the mind. Scientific American [online], pp. 54–61.
Roy, A., Adinoff, B., Roehrich, L., Lamparski, D., Custer, R., Lorenz, V., . . . Linnoila, M. (1988). Pathological gambling: A psychobiological study. Archives of General Psychiatry, 45(4), 369–373. doi:10.1001/archpsyc.1988.01800280085011
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York, NY: Appleton-Century-Crofts.
Skinner, B. F. (1953). Science and human behavior. New York, NY: Macmillan.
Skinner, B. F. (1961). Cumulative record: A selection of papers. New York, NY: Appleton-Century-Crofts.
Skinner’s utopia: Panacea, or path to hell? (1971, September 20). Time [online]. Retrieved from http://www.wou.edu/~girodm/611/Skinner%27s_utopia.pdf
Skolin, I., Wahlin, Y. B., Broman, D. A., Hursti, U-K. K., Larsson, M. V., & Hernell, O. (2006). Altered food intake and taste perception in children with cancer after start of chemotherapy: Perspectives of children, parents and nurses. Supportive Care in Cancer, 14, 369–378.
Thorndike, E. L. (1911). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs, 8.
Tolman, E. C., & Honzik, C. H. (1930). Degrees of hunger, reward, and non-reward, and maze performance in rats. University of California Publications in Psychology, 4, 241–256.
Tolman, E. C., Ritchie, B. F., & Kalish, D. (1946). Studies in spatial learning: II. Place learning versus response learning. Journal of Experimental Psychology, 36, 221–229. doi:10.1037/h0060262
Watson, J. B. & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1–14.
Watson, J. B. (1919). Psychology from the standpoint of a behaviorist. Philadelphia, PA: J. B. Lippincott.
Yamamoto, S., Humle, T., & Tanaka, M. (2013). Basis for cumulative cultural evolution in chimpanzees: Social learning of a more efficient tool-use technique. PLoS ONE, 8(1): e55768. doi:10.1371/journal.pone.0055768
By the end of this section, you will be able to:
- Explain the figure-ground relationship
- Define Gestalt principles of grouping
- Describe how perceptual set is influenced by an individual’s characteristics and mental state
In the early part of the 20th century, Max Wertheimer published a paper demonstrating that individuals perceived motion in rapidly flickering static images—an insight that came to him as he used a child’s toy stroboscope. Wertheimer, and his assistants Wolfgang Köhler and Kurt Koffka, who later became his partners, believed that perception involved more than simply combining sensory stimuli. This belief led to a new movement within the field of psychology known as Gestalt psychology. The word gestalt literally means form or pattern, but its use reflects the idea that the whole is different from the sum of its parts. In other words, the brain creates a perception that is more than simply the sum of available sensory inputs, and it does so in predictable ways. Gestalt psychologists translated these predictable ways into principles by which we organize sensory information. As a result, Gestalt psychology has been extremely influential in the area of sensation and perception (Rock & Palmer, 1990).
One Gestalt principle is the figure-ground relationship. According to this principle, we tend to segment our visual world into figure and ground. Figure is the object or person that is the focus of the visual field, while the ground is the background. As [(Link)] shows, our perception can vary tremendously, depending on what is perceived as figure and what is perceived as ground. Presumably, our ability to interpret sensory information depends on what we label as figure and what we label as ground in any particular case, although this assumption has been called into question (Peterson & Gibson, 1994; Vecera & O’Reilly, 1998).
Another Gestalt principle for organizing sensory stimuli into meaningful perception is proximity. This principle asserts that things that are close to one another tend to be grouped together, as [(Link)] illustrates.
How we read something provides another illustration of the proximity concept. For example, we read this sentence like this, notl iket hiso rt hat. We group the letters of a given word together because there are no spaces between the letters, and we perceive words because there are spaces between each word. Here are some more examples: Cany oum akes enseo ft hiss entence? What doth es e wor dsmea n?
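The spacing examples above can be mimicked with a toy grouping rule, assuming a simple threshold on the gap between neighboring items; the positions and threshold below are made up for illustration.

```python
# Group sorted positions by proximity: a gap larger than max_gap starts
# a new group, just as a space between letters starts a new "word".

def group_by_proximity(positions, max_gap=2.0):
    """Split a sorted list of positions into groups of nearby items."""
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev <= max_gap:
            groups[-1].append(cur)   # close together: same group
        else:
            groups.append([cur])     # big gap: start a new group
    return groups

# Items at x = 0, 1, 2 read as one unit; 6 and 7 as another.
print(group_by_proximity([0, 1, 2, 6, 7]))
```

Shrinking the gaps (or raising `max_gap`) merges the groups, just as `notl iket hiso rt hat` becomes readable once the spaces fall between words instead of within them.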
We might also use the principle of similarity to group things in our visual fields. According to this principle, things that are alike tend to be grouped together ([(Link)]). For example, when watching a football game, we tend to group individuals based on the colors of their uniforms. When watching an offensive drive, we can get a sense of the two teams simply by grouping along this dimension.
Two additional Gestalt principles are the law of continuity (or good continuation) and closure. The law of continuity suggests that we are more likely to perceive continuous, smooth flowing lines rather than jagged, broken lines ([(Link)]). The principle of closure states that we organize our perceptions into complete objects rather than as a series of parts ([(Link)]).
According to Gestalt theorists, pattern perception, or our ability to discriminate among different figures and shapes, occurs by following the principles described above. You probably feel fairly certain that your perception accurately matches the real world, but this is not always the case. Our perceptions are based on perceptual hypotheses: educated guesses that we make while interpreting sensory information. These hypotheses are informed by a number of factors, including our personalities, experiences, and expectations. We use these hypotheses to generate our perceptual set. For instance, research has demonstrated that those who are given verbal priming produce a biased interpretation of complex ambiguous figures (Goolkasian & Woodbury, 2010).
In this chapter, you have learned that perception is a complex process. Built from sensations, but influenced by our own experiences, biases, prejudices, and cultures, perceptions can be very different from person to person. Research suggests that implicit racial prejudice and stereotypes affect perception. For instance, several studies have demonstrated that non-Black participants identify weapons faster and are more likely to identify non-weapons as weapons when the image of the weapon is paired with the image of a Black person (Payne, 2001; Payne, Shimizu, & Jacoby, 2005). Furthermore, White individuals’ decision to shoot an armed target in a video game is made more quickly when the target is Black (Correll, Park, Judd, & Wittenbrink, 2002; Correll, Urland, & Ito, 2006). This research is important, considering the number of very high-profile cases in the last few decades in which young Black people were killed by individuals who claimed to believe that the unarmed victims were armed and/or represented some threat to their personal safety.
Gestalt theorists have been incredibly influential in the areas of sensation and perception. Gestalt principles such as figure-ground relationship, grouping by proximity or similarity, the law of good continuation, and closure are all used to help explain how we organize sensory information. Our perceptions are not infallible, and they can be influenced by bias, prejudice, and other factors.
Self Check Questions
Critical Thinking Question
1. The central tenet of Gestalt psychology is that the whole is different from the sum of its parts. What does this mean in the context of perception?
2. Take a look at the following figure. How might you influence whether people see a duck or a rabbit?
Personal Application Question
3. Have you ever listened to a song on the radio and sung along only to find out later that you have been singing the wrong lyrics? Once you found the correct lyrics, did your perception of the song change?
1. This means that perception cannot be understood completely simply by combining the parts. Rather, the relationship that exists among those parts (which would be established according to the principles described in this chapter) is important in organizing and interpreting sensory information into a perceptual set.
2. Playing on their expectations could be used to influence what they were most likely to see. For instance, telling a story about Peter Rabbit and then presenting this image would bias perception along rabbit lines.
By the end of this section, you will be able to:
- Describe the basic functions of the chemical senses
- Explain the basic functions of the somatosensory, nociceptive, and thermoceptive sensory systems
- Describe the basic functions of the vestibular, proprioceptive, and kinesthetic sensory systems
Vision and hearing have received an incredible amount of attention from researchers over the years. While there is still much to be learned about how these sensory systems work, we have a much better understanding of them than of our other sensory modalities. In this section, we will explore our chemical senses (taste and smell) and our body senses (touch, temperature, pain, balance, and body position).
THE CHEMICAL SENSES
Taste (gustation) and smell (olfaction) are called chemical senses because both have sensory receptors that respond to molecules in the food we eat or in the air we breathe. There is a pronounced interaction between our chemical senses. For example, when we describe the flavor of a given food, we are really referring to both gustatory and olfactory properties of the food working in combination.
You have learned since elementary school that there are four basic groupings of taste: sweet, salty, sour, and bitter. Research demonstrates, however, that we have at least six taste groupings. Umami is our fifth taste. Umami is actually a Japanese word that roughly translates to yummy, and it is associated with a taste for monosodium glutamate (Kinnamon & Vandenbeuch, 2009). There is also a growing body of experimental evidence suggesting that we possess a taste for the fatty content of a given food (Mizushige, Inoue, & Fushiki, 2007).
Molecules from the food and beverages we consume dissolve in our saliva and interact with taste receptors on our tongue and in our mouth and throat. Taste buds are formed by groupings of taste receptor cells with hair-like extensions that protrude into the central pore of the taste bud ([(Link)]). Taste buds have a life cycle of ten days to two weeks, so even destroying some by burning your tongue won’t have any long-term effect; they just grow right back. Taste molecules bind to receptors on this extension and cause chemical changes within the sensory cell that result in neural impulses being transmitted to the brain via different nerves, depending on where the receptor is located. Taste information is transmitted to the medulla, thalamus, and limbic system, and to the gustatory cortex, which is tucked underneath the overlap between the frontal and temporal lobes (Maffei, Haley, & Fontanini, 2012; Roper, 2013).
Olfactory receptor cells are located in a mucous membrane at the top of the nose. Small hair-like extensions from these receptors serve as the sites for odor molecules dissolved in the mucus to interact with chemical receptors located on these extensions ([(Link)]). Once an odor molecule has bound a given receptor, chemical changes within the cell result in signals being sent to the olfactory bulb: a bulb-like structure at the tip of the frontal lobe where the olfactory nerves begin. From the olfactory bulb, information is sent to regions of the limbic system and to the primary olfactory cortex, which is located very near the gustatory cortex (Lodovichi & Belluscio, 2012; Spors et al., 2013).
There is tremendous variation in the sensitivity of the olfactory systems of different species. We often think of dogs as having olfactory systems far superior to our own, and indeed, dogs can do some remarkable things with their noses. There is some evidence to suggest that dogs can “smell” dangerous drops in blood glucose levels as well as cancerous tumors (Wells, 2010). Dogs’ extraordinary olfactory abilities may be due to the increased number of functional genes for olfactory receptors (between 800 and 1200), compared to the fewer than 400 observed in humans and other primates (Niimura & Nei, 2007).
Many species respond to chemical messages, known as pheromones, sent by another individual (Wysocki & Preti, 2004). Pheromonal communication often involves providing information about the reproductive status of a potential mate. So, for example, when a female rat is ready to mate, she secretes pheromonal signals that draw attention from nearby male rats. Pheromonal activation is actually an important component in eliciting sexual behavior in the male rat (Furlow, 1996, 2012; Purvis & Haynes, 1972; Sachs, 1997). There has also been a good deal of research (and controversy) about pheromones in humans (Comfort, 1971; Russell, 1976; Wolfgang-Kimball, 1992; Weller, 1998).
TOUCH, THERMOCEPTION, AND NOCICEPTION
A number of receptors are distributed throughout the skin to respond to various touch-related stimuli ([(Link)]). These receptors include Meissner’s corpuscles, Pacinian corpuscles, Merkel’s disks, and Ruffini corpuscles. Meissner’s corpuscles respond to pressure and lower frequency vibrations, and Pacinian corpuscles detect transient pressure and higher frequency vibrations. Merkel’s disks respond to light pressure, while Ruffini corpuscles detect stretch (Abraira & Ginty, 2013).
In addition to the receptors located in the skin, there are also a number of free nerve endings that serve sensory functions. These nerve endings respond to a variety of different types of touch-related stimuli and serve as sensory receptors for both thermoception (temperature perception) and nociception (a signal indicating potential harm and maybe pain) (Garland, 2012; Petho & Reeh, 2012; Spray, 1986). Sensory information collected from the receptors and free nerve endings travels up the spinal cord and is transmitted to regions of the medulla, thalamus, and ultimately to somatosensory cortex, which is located in the postcentral gyrus of the parietal lobe.
Pain is an unpleasant experience that involves both physical and psychological components. Feeling pain is quite adaptive because it makes us aware of an injury, and it motivates us to remove ourselves from the cause of that injury. In addition, pain also makes us less likely to suffer additional injury because we will be gentler with our injured body parts.
Generally speaking, pain can be considered to be neuropathic or inflammatory in nature. Pain that signals some type of tissue damage is known as inflammatory pain. In some situations, pain results from damage to neurons of either the peripheral or central nervous system. As a result, pain signals that are sent to the brain get exaggerated. This type of pain is known as neuropathic pain. Multiple treatment options for pain relief range from relaxation therapy to the use of analgesic medications to deep brain stimulation. The most effective treatment option for a given individual will depend on a number of considerations, including the severity and persistence of the pain and any medical/psychological conditions.
Some individuals are born without the ability to feel pain. This very rare genetic disorder is known as congenital insensitivity to pain (or congenital analgesia). While those with congenital analgesia can detect differences in temperature and pressure, they cannot experience pain. As a result, they often suffer significant injuries. Young children with this disorder often have serious mouth and tongue injuries because they have bitten themselves repeatedly. Not surprisingly, individuals suffering from this disorder have much shorter life expectancies due to their injuries and secondary infections of injured sites (U.S. National Library of Medicine, 2013).
THE VESTIBULAR SENSE, PROPRIOCEPTION, AND KINESTHESIA
The vestibular sense contributes to our ability to maintain balance and body posture. As [(Link)] shows, the major sensory organs (utricle, saccule, and the three semicircular canals) of this system are located next to the cochlea in the inner ear. The vestibular organs are fluid-filled and have hair cells, similar to the ones found in the auditory system, which respond to movement of the head and gravitational forces. When these hair cells are stimulated, they send signals to the brain via the vestibular nerve. Although we may not be consciously aware of our vestibular system’s sensory information under normal circumstances, its importance is apparent when we experience motion sickness and/or dizziness related to infections of the inner ear (Khan & Chang, 2013).
In addition to maintaining balance, the vestibular system collects information critical for controlling movement and the reflexes that move various parts of our bodies to compensate for changes in body position. Therefore, both proprioception (perception of body position) and kinesthesia (perception of the body’s movement through space) interact with information provided by the vestibular system.
These sensory systems also gather information from receptors that respond to stretch and tension in muscles, joints, skin, and tendons (Lackner & DiZio, 2005; Proske, 2006; Proske & Gandevia, 2012). Proprioceptive and kinesthetic information travels to the brain via the spinal column. Several cortical regions in addition to the cerebellum receive information from and send information to the sensory organs of the proprioceptive and kinesthetic systems.
Taste (gustation) and smell (olfaction) are chemical senses that employ receptors on the tongue and in the nose that bind directly with taste and odor molecules in order to transmit information to the brain for processing. Our ability to perceive touch, temperature, and pain is mediated by a number of receptors and free nerve endings that are distributed throughout the skin and various tissues of the body. The vestibular sense helps us maintain a sense of balance through the response of hair cells in the utricle, saccule, and semi-circular canals that respond to changes in head position and gravity. Our proprioceptive and kinesthetic systems provide information about body position and body movement through receptors that detect stretch and tension in the muscles, joints, tendons, and skin of the body.
Self Check Questions
Critical Thinking Question
1. Many people experience nausea while traveling in a car, plane, or boat. How might you explain this as a function of sensory interaction?
2. If you heard someone say that they would do anything not to feel the pain associated with significant injury, how would you respond given what you’ve just read?
3. Do you think women experience pain differently than men? Why do you think this is?
Personal Application Question
4. As mentioned earlier, a food’s flavor represents an interaction of both gustatory and olfactory information. Think about the last time you were seriously congested due to a cold or the flu. What changes did you notice in the flavors of the foods that you ate during this time?
1. When traveling by car, we often have visual information that suggests that we are in motion while our vestibular sense indicates that we’re not moving (assuming we’re traveling at a relatively constant speed). Normally, these two sensory modalities provide congruent information, but the discrepancy might lead to confusion and nausea. The converse would be true when traveling by plane or boat.
2. Pain serves important functions that are critical to our survival. As noxious as pain stimuli may be, the experiences of individuals who suffer from congenital insensitivity to pain makes the consequences of a lack of pain all too apparent.
3. Research has shown that women and men do differ in their experience of and tolerance for pain: Women tend to handle pain better than men. Perhaps this is due to women’s labor and childbirth experience. Men tend to be stoic about their pain and do not seek help. Research also shows that gender differences in pain tolerance can vary across cultures.
By the end of this section, you will be able to:
- Describe the basic anatomy and function of the auditory system
- Explain how we encode and perceive pitch
- Discuss how we localize sound
Our auditory system converts pressure waves into meaningful sounds. This translates into our ability to hear the sounds of nature, to appreciate the beauty of music, and to communicate with one another through spoken language. This section will provide an overview of the basic anatomy and function of the auditory system. It will include a discussion of how the sensory stimulus is translated into neural impulses, where in the brain that information is processed, how we perceive pitch, and how we know where sound is coming from.
ANATOMY OF THE AUDITORY SYSTEM
The ear can be separated into multiple sections. The outer ear includes the pinna, which is the visible part of the ear that protrudes from our heads, the auditory canal, and the tympanic membrane, or eardrum. The middle ear contains three tiny bones known as the ossicles, which are named the malleus (or hammer), incus (or anvil), and the stapes (or stirrup). The inner ear contains the semi-circular canals, which are involved in balance and movement (the vestibular sense), and the cochlea. The cochlea is a fluid-filled, snail-shaped structure that contains the sensory receptor cells (hair cells) of the auditory system ([(Link)]).
Sound waves travel along the auditory canal and strike the tympanic membrane, causing it to vibrate. This vibration results in movement of the three ossicles. As the ossicles move, the stapes presses into a thin membrane of the cochlea known as the oval window. As the stapes presses into the oval window, the fluid inside the cochlea begins to move, which in turn stimulates hair cells, which are auditory receptor cells of the inner ear embedded in the basilar membrane. The basilar membrane is a thin strip of tissue within the cochlea.
The activation of hair cells is a mechanical process: the stimulation of the hair cell ultimately leads to activation of the cell. As hair cells become activated, they generate neural impulses that travel along the auditory nerve to the brain. Auditory information is shuttled to the inferior colliculus, the medial geniculate nucleus of the thalamus, and finally to the auditory cortex in the temporal lobe of the brain for processing. As in the visual system, there is also evidence suggesting that information about auditory recognition and localization is processed in parallel streams (Rauschecker & Tian, 2000; Renier et al., 2009).
Different frequencies of sound waves are associated with differences in our perception of the pitch of those sounds. Low-frequency sounds are lower pitched, and high-frequency sounds are higher pitched. How does the auditory system differentiate among various pitches?
Several theories have been proposed to account for pitch perception. We’ll discuss two of them here: temporal theory and place theory. The temporal theory of pitch perception asserts that frequency is coded by the activity level of a sensory neuron. This would mean that a given hair cell would fire action potentials related to the frequency of the sound wave. While this is a very intuitive explanation, we detect such a broad range of frequencies (20–20,000 Hz) that the frequency of action potentials fired by hair cells cannot account for the entire range. Because of properties related to sodium channels on the neuronal membrane that are involved in action potentials, there is a point at which a cell cannot fire any faster (Shamma, 2001).
The place theory of pitch perception suggests that different portions of the basilar membrane are sensitive to sounds of different frequencies. More specifically, the base of the basilar membrane responds best to high frequencies and the tip of the basilar membrane responds best to low frequencies. Therefore, hair cells that are in the base portion would be labeled as high-pitch receptors, while those in the tip of basilar membrane would be labeled as low-pitch receptors (Shamma, 2001).
In reality, both theories explain different aspects of pitch perception. At frequencies up to about 4000 Hz, it is clear that both the rate of action potentials and place contribute to our perception of pitch. However, much higher frequency sounds can only be encoded using place cues (Shamma, 2001).
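This division of labor can be summarized in a tiny sketch. The ~4000 Hz boundary is the approximate figure cited above (Shamma, 2001); the function itself is an illustrative simplification, not a model from the text.

```python
def pitch_cues(frequency_hz):
    """Which cues plausibly encode a tone's pitch, per the rough
    ~4000 Hz boundary discussed in the text (Shamma, 2001)."""
    cues = ["place"]              # place coding spans the full audible range
    if frequency_hz <= 4000:
        cues.append("temporal")   # firing-rate coding only works at lower frequencies
    return cues

print(pitch_cues(440))    # a concert A: ['place', 'temporal']
print(pitch_cues(12000))  # a high whistle: ['place'] only
```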
The ability to locate sound in our environments is an important part of hearing. Localizing sound could be considered similar to the way that we perceive depth in our visual fields. Like the monocular and binocular cues that provided information about depth, the auditory system uses both monaural (one-eared) and binaural (two-eared) cues to localize sound.
Each pinna interacts with incoming sound waves differently, depending on the sound’s source relative to our bodies. This interaction provides a monaural cue that is helpful in locating sounds that occur above or below and in front or behind us. The sound waves received by your two ears from sounds that come from directly above, below, in front, or behind you would be identical; therefore, monaural cues are essential (Grothe, Pecka, & McAlpine, 2010).
Binaural cues, on the other hand, provide information on the location of a sound along a horizontal axis by relying on differences in patterns of vibration of the eardrum between our two ears. If a sound comes from an off-center location, it creates two types of binaural cues: interaural level differences and interaural timing differences. Interaural level difference refers to the fact that a sound coming from the right side of your body is more intense at your right ear than at your left ear because of the attenuation of the sound wave as it passes through your head. Interaural timing difference refers to the small difference in the time at which a given sound wave arrives at each ear ([(Link)]). Certain brain areas monitor these differences to construct where along a horizontal axis a sound originates (Grothe et al., 2010).
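The path-length reasoning behind interaural timing differences can be made concrete with a short numerical sketch. The head width and speed of sound below are illustrative assumptions (they do not appear in the text), and the single-line formula is a deliberate simplification of real head geometry.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
HEAD_WIDTH = 0.21        # m; an illustrative ear-to-ear distance

def interaural_time_difference(angle_deg):
    """Approximate ITD (in seconds) for a sound source at the given
    azimuth: 0 degrees = straight ahead, 90 degrees = directly to one side.
    Uses the simple path-length-difference model d * sin(theta) / c."""
    return HEAD_WIDTH * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# A source directly ahead reaches both ears simultaneously...
print(interaural_time_difference(0))    # 0.0
# ...while a source directly to one side arrives well under a
# millisecond sooner at the nearer ear.
print(interaural_time_difference(90))
```

Even that sub-millisecond difference is large enough for the brainstem circuits mentioned above to localize the source along the horizontal axis.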
Deafness is the partial or complete inability to hear. Some people are born deaf, which is known as congenital deafness. Many others begin to suffer from hearing loss because of age, genetic predisposition, or environmental effects, including exposure to extreme noise (noise-induced hearing loss, as shown in [(Link)]), certain illnesses (such as measles or mumps), or damage due to toxins (such as those found in certain solvents and metals).
Given the mechanical nature by which the sound wave stimulus is transmitted from the eardrum through the ossicles to the oval window of the cochlea, some degree of hearing loss is inevitable. With conductive hearing loss, hearing problems are associated with a failure in the vibration of the eardrum and/or movement of the ossicles. These problems are often dealt with through devices like hearing aids that amplify incoming sound waves to make vibration of the eardrum and movement of the ossicles more likely to occur.
When the hearing problem is associated with a failure to transmit neural signals from the cochlea to the brain, it is called sensorineural hearing loss. One disease that results in sensorineural hearing loss is Ménière’s disease. Although not well understood, Ménière’s disease results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus (constant ringing or buzzing), vertigo (a sense of spinning), and an increase in pressure within the inner ear (Semaan & Megerian, 2011). This kind of loss cannot be treated with hearing aids, but some individuals might be candidates for a cochlear implant as a treatment option. Cochlear implants are electronic devices that consist of a microphone, a speech processor, and an electrode array. The device receives incoming sound information and directly stimulates the auditory nerve to transmit information to the brain.
In the United States and other places around the world, deaf people have their own language, schools, and customs. This is called deaf culture. In the United States, deaf individuals often communicate using American Sign Language (ASL); ASL has no verbal component and is based entirely on visual signs and gestures. One of the values of deaf culture is to continue traditions like using sign language rather than teaching deaf children to try to speak, read lips, or have cochlear implant surgery.
When a child is diagnosed as deaf, parents have difficult decisions to make. Should the child be enrolled in mainstream schools and taught to verbalize and read lips? Or should the child be sent to a school for deaf children to learn ASL and have significant exposure to deaf culture? Do you think there might be differences in the way that parents approach these decisions depending on whether or not they are also deaf?
Sound waves are funneled into the auditory canal and cause vibrations of the eardrum; these vibrations move the ossicles. As the ossicles move, the stapes presses against the oval window of the cochlea, which causes fluid inside the cochlea to move. As a result, hair cells embedded in the basilar membrane bend, which sends neural impulses to the brain via the auditory nerve.
Pitch perception and sound localization are important aspects of hearing. Our ability to perceive pitch relies on both the firing rate of the hair cells in the basilar membrane as well as their location within the membrane. In terms of sound localization, both monaural and binaural cues are used to locate where sounds originate in our environment.
Individuals can be born deaf, or they can develop deafness as a result of age, genetic predisposition, and/or environmental causes. Hearing loss that results from a failure of the vibration of the eardrum or the resultant movement of the ossicles is called conductive hearing loss. Hearing loss that involves a failure of the transmission of auditory nerve impulses to the brain is called sensorineural hearing loss.
Self Check Questions
Critical Thinking Question
1. Given what you’ve read about sound localization, from an evolutionary perspective, how does sound localization facilitate survival?
2. How can temporal and place theories both be used to explain our ability to perceive the pitch of sound waves with frequencies up to 4000 Hz?
3. If you had to choose to lose either your vision or your hearing, which would you choose and why?
1. Sound localization would have allowed early humans to locate prey and protect themselves from predators.
2. Pitch of sounds below this threshold could be encoded by the combination of the place and firing rate of stimulated hair cells. So, in general, hair cells located near the tip of the basilar membrane would signal that we’re dealing with a lower-pitched sound. However, differences in firing rates of hair cells within this location could allow for fine discrimination between low-, medium-, and high-pitch sounds within the larger low-pitch context.
By the end of this section, you will be able to:
- Describe the basic anatomy of the visual system
- Discuss how rods and cones contribute to different aspects of vision
- Describe how monocular and binocular cues are used in the perception of depth
The visual system constructs a mental representation of the world around us ([(Link)]). This contributes to our ability to successfully navigate through physical space and interact with important individuals and objects in our environments. This section will provide an overview of the basic anatomy and function of the visual system. In addition, we will explore our ability to perceive color and depth.
ANATOMY OF THE VISUAL SYSTEM
The eye is the major sensory organ involved in vision ([(Link)]). Light waves are transmitted across the cornea and enter the eye through the pupil. The cornea is the transparent covering over the eye. It serves as a barrier between the inner eye and the outside world, and it is involved in focusing light waves that enter the eye. The pupil is the small opening in the eye through which light passes, and the size of the pupil can change as a function of light levels as well as emotional arousal. When light levels are low, the pupil will become dilated, or expanded, to allow more light to enter the eye. When light levels are high, the pupil will constrict, or become smaller, to reduce the amount of light that enters the eye. The pupil’s size is controlled by muscles that are connected to the iris, which is the colored portion of the eye.
After passing through the pupil, light crosses the lens, a curved, transparent structure that serves to provide additional focus. The lens is attached to muscles that can change its shape to aid in focusing light that is reflected from near or far objects. In a normal-sighted individual, the lens will focus images perfectly on a small indentation in the back of the eye known as the fovea, which is part of the retina, the light-sensitive lining of the eye. The fovea contains densely packed specialized photoreceptor cells ([(Link)]). These photoreceptor cells, known as cones, are light-detecting cells. The cones are specialized types of photoreceptors that work best in bright light conditions. Cones are very sensitive to acute detail and provide tremendous spatial resolution. They also are directly involved in our ability to perceive color.
While cones are concentrated in the fovea, where images tend to be focused, rods, another type of photoreceptor, are located throughout the remainder of the retina. Rods are specialized photoreceptors that work well in low light conditions, and while they lack the spatial resolution and color function of the cones, they are involved in our vision in dimly lit environments as well as in our perception of movement on the periphery of our visual field.
We have all experienced the different sensitivities of rods and cones when making the transition from a brightly lit environment to a dimly lit environment. Imagine going to see a blockbuster movie on a clear summer day. As you walk from the brightly lit lobby into the dark theater, you notice that you immediately have difficulty seeing much of anything. After a few minutes, you begin to adjust to the darkness and can see the interior of the theater. In the bright environment, your vision was dominated primarily by cone activity. As you move to the dark environment, rod activity dominates, but there is a delay in transitioning between the phases. If your rods do not transform light into nerve impulses as easily and efficiently as they should, you will have difficulty seeing in dim light, a condition known as night blindness.
Rods and cones are connected (via several interneurons) to retinal ganglion cells. Axons from the retinal ganglion cells converge and exit through the back of the eye to form the optic nerve. The optic nerve carries visual information from the retina to the brain. There is a point in the visual field called the blind spot: Even when light from a small object is focused on the blind spot, we do not see it. We are not consciously aware of our blind spots for two reasons: First, each eye gets a slightly different view of the visual field; therefore, the blind spots do not overlap. Second, our visual system fills in the blind spot so that although we cannot respond to visual information that occurs in that portion of the visual field, we are also not aware that information is missing.
The optic nerve from each eye merges just below the brain at a point called the optic chiasm. As [(Link)] shows, the optic chiasm is an X-shaped structure that sits just below the cerebral cortex at the front of the brain. At the point of the optic chiasm, information from the right visual field (which comes from both eyes) is sent to the left side of the brain, and information from the left visual field is sent to the right side of the brain.
Once inside the brain, visual information is sent via a number of structures to the occipital lobe at the back of the brain for processing. Visual information might be processed in parallel pathways which can generally be described as the “what pathway” and the “where/how” pathway. The “what pathway” is involved in object recognition and identification, while the “where/how pathway” is involved with location in space and how one might interact with a particular visual stimulus (Milner & Goodale, 2008; Ungerleider & Haxby, 1994). For example, when you see a ball rolling down the street, the “what pathway” identifies what the object is, and the “where/how pathway” identifies its location or movement in space.
COLOR AND DEPTH PERCEPTION
We do not see the world in black and white; neither do we see it as two-dimensional (2-D) or flat (just height and width, no depth). Let’s look at how color vision works and how we perceive three dimensions (height, width, and depth).
Normal-sighted individuals have three different types of cones that mediate color vision. Each of these cone types is maximally sensitive to a slightly different wavelength of light. According to the trichromatic theory of color vision, shown in [(Link)], all colors in the spectrum can be produced by combining red, green, and blue. The three types of cones are each receptive to one of the colors.
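The additive-mixing claim of trichromatic theory can be illustrated with a small sketch. The 0–255 per-channel convention is an assumption borrowed from digital imaging, not something stated in the text; the point is simply that summing red, green, and blue light yields other colors.

```python
def mix_additive(*lights):
    """Channel-wise additive mixing of colored lights, clipped to 0-255."""
    return tuple(min(255, sum(light[i] for light in lights)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_additive(RED, GREEN))         # (255, 255, 0): yellow light
print(mix_additive(RED, GREEN, BLUE))   # (255, 255, 255): white light
```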
The trichromatic theory of color vision is not the only theory—another major theory of color vision is known as the opponent-process theory. According to this theory, color is coded in opponent pairs: black-white, yellow-blue, and green-red. The basic idea is that some cells of the visual system are excited by one of the opponent colors and inhibited by the other. So, a cell that was excited by wavelengths associated with green would be inhibited by wavelengths associated with red, and vice versa. One of the implications of opponent processing is that we do not experience greenish-reds or yellowish-blues as colors. Another implication is that this leads to the experience of negative afterimages. An afterimage describes the continuation of a visual sensation after removal of the stimulus. For example, when you stare briefly at the sun and then look away from it, you may still perceive a spot of light although the stimulus (the sun) has been removed. When color is involved in the stimulus, the color pairings identified in the opponent-process theory lead to a negative afterimage. You can test this concept using the flag in [(Link)].
But these two theories—the trichromatic theory of color vision and the opponent-process theory—are not mutually exclusive. Research has shown that they just apply to different levels of the nervous system. For visual processing on the retina, trichromatic theory applies: the cones are responsive to three different wavelengths that represent red, blue, and green. But once the signal moves past the retina on its way to the brain, the cells respond in a way consistent with opponent-process theory (Land, 1959; Kaiser, 1997).
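The opponent pairings can be made concrete with a small sketch. The following is an illustrative computation, not a physiological model from the text: it maps hypothetical cone activations (L, M, S for long-, medium-, and short-wavelength cones) onto the opponent channels the theory describes, showing why a cell cannot signal "red" and "green" at the same time.

```python
# Illustrative sketch (not a physiological model): converting hypothetical
# cone activations into the opponent channels of opponent-process theory.

def opponent_channels(L, M, S):
    """Return (red_green, blue_yellow, luminance) opponent signals.

    Positive red_green means 'red' excitation; negative means 'green'.
    Positive blue_yellow means 'blue'; negative means 'yellow'.
    """
    red_green = L - M               # red excites, green inhibits
    blue_yellow = S - (L + M) / 2   # blue excites, yellow (L+M) inhibits
    luminance = L + M + S           # black-white (achromatic) channel
    return red_green, blue_yellow, luminance

# A 'pure green' stimulus drives M cones, pushing the red-green channel negative.
rg, by, lum = opponent_channels(L=0.2, M=0.9, S=0.1)
print(rg < 0)  # True: the channel signals 'green', so it cannot also signal 'red'
```

Because red and green sit at opposite ends of a single channel, a "greenish-red" would require the channel to be positive and negative at once, which is exactly what the theory rules out.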
Our ability to perceive spatial relationships in three-dimensional (3-D) space is known as depth perception. With depth perception, we can describe things as being in front, behind, above, below, or to the side of other things.
Our world is three-dimensional, so it makes sense that our mental representation of the world has three-dimensional properties. We use a variety of cues in a visual scene to establish our sense of depth. Some of these are binocular cues, which means that they rely on the use of both eyes. One example of a binocular depth cue is binocular disparity, the slightly different view of the world that each of our eyes receives. To experience this slightly different view, do this simple exercise: extend your arm fully and extend one of your fingers and focus on that finger. Now, close your left eye without moving your head, then open your left eye and close your right eye without moving your head. You will notice that your finger seems to shift as you alternate between the two eyes because of the slightly different view each eye has of your finger.
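The relationship between disparity and distance can be quantified. The formula below is the standard stereo-geometry relationship borrowed from machine vision (it is not given in this chapter), and the focal length and interpupillary numbers are rough illustrative values:

```python
# Hedged sketch: the standard stereo-vision relationship between binocular
# disparity and distance: depth = focal_length * baseline / disparity.
# Nearer objects produce larger disparity between the two eyes' images.

def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    return focal_length_mm * baseline_mm / disparity_mm

# Illustrative numbers: ~17 mm eye focal length, ~65 mm interpupillary distance.
near = depth_from_disparity(17, 65, disparity_mm=1.0)
far = depth_from_disparity(17, 65, disparity_mm=0.1)
print(near < far)  # True: smaller disparity means a farther object
```

This is why your finger appears to jump in the exercise above: being close to your eyes, it produces a large disparity, while the distant background barely shifts.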
A 3-D movie works on the same principle: the special glasses you wear allow the two slightly different images projected onto the screen to be seen separately by your left and your right eye. As your brain processes these images, you have the illusion that the leaping animal or running person is coming right toward you.
Although we rely on binocular cues to experience depth in our 3-D world, we can also perceive depth in 2-D arrays. Think about all the paintings and photographs you have seen. Generally, you pick up on depth in these images even though the visual stimulus is 2-D. When we do this, we are relying on a number of monocular cues, or cues that require only one eye. If you think you can’t see depth with one eye, note that you don’t bump into things when using only one eye while walking—and, in fact, we have more monocular cues than binocular cues.
An example of a monocular cue would be what is known as linear perspective. Linear perspective refers to the fact that we perceive depth when we see two parallel lines that seem to converge in an image ([(Link)]). Some other monocular depth cues are interposition, the partial overlap of objects, and the relative size and closeness of images to the horizon.
Bruce Bridgeman was born with an extreme case of lazy eye that resulted in him being stereoblind, or unable to respond to binocular cues of depth. He relied heavily on monocular depth cues, but he never had a true appreciation of the 3-D nature of the world around him. This all changed one night in 2012 while Bruce was seeing a movie with his wife.
The movie the couple was going to see was shot in 3-D, and even though he thought it was a waste of money, Bruce paid for the 3-D glasses when he purchased his ticket. As soon as the film began, Bruce put on the glasses and experienced something completely new. For the first time in his life he appreciated the true depth of the world around him. Remarkably, his ability to perceive depth persisted outside of the movie theater.
There are cells in the nervous system that respond to binocular depth cues. Normally, these cells require activation during early development in order to persist, so experts familiar with Bruce’s case (and others like his) assume that at some point in his development, Bruce must have experienced at least a fleeting moment of binocular vision. It was enough to ensure the survival of the cells in the visual system tuned to binocular cues. The mystery now is why it took Bruce nearly 70 years to have these cells activated (Peck, 2012).
Integration with Other Modalities
Vision is not an encapsulated system. It interacts with and depends on other sensory modalities. For example, when you move your head in one direction, your eyes reflexively move in the opposite direction to compensate, allowing you to maintain your gaze on the object that you are looking at. This reflex is called the vestibulo-ocular reflex. It is achieved by integrating information from both the visual system and the vestibular system (which senses body motion and position). You can experience this compensation quite simply. First, while keeping your head still and your gaze looking straight ahead, wave your finger in front of you from side to side. Notice how the image of the finger appears blurry. Now, keep your finger steady and look at it while you move your head from side to side. Notice how your eyes reflexively move to compensate for the movement of your head and how the image of the finger stays sharp and stable. Vision also interacts with your proprioceptive system, to help you find where all your body parts are, and with your auditory system, to help you understand the sounds people make when they speak. You can learn more about this in the multimodal module.

Finally, vision is also often implicated in a blending-of-sensations phenomenon known as synesthesia. Synesthesia occurs when one sensory signal gives rise to two or more sensations. The most common type is grapheme-color synesthesia. About 1 in 200 individuals experience a sensation of color associated with specific letters, numbers, or words: the number 1 might always be seen as red, the number 2 as orange, etc. But the more fascinating forms of synesthesia blend sensations from entirely different sensory modalities, like taste and color or music and color: the taste of chicken might elicit a sensation of green, for example, and the timbre of violin a deep purple.
Light waves cross the cornea and enter the eye at the pupil. The eye’s lens focuses this light so that the image is focused on a region of the retina known as the fovea. The fovea contains cones that possess high levels of visual acuity and operate best in bright light conditions. Rods are located throughout the retina and operate best under dim light conditions. Visual information leaves the eye via the optic nerve. Information from each visual field is sent to the opposite side of the brain at the optic chiasm. Visual information then moves through a number of brain sites before reaching the occipital lobe, where it is processed.
Two theories explain color perception. The trichromatic theory asserts that three distinct cone groups are tuned to slightly different wavelengths of light, and it is the combination of activity across these cone types that results in our perception of all the colors we see. The opponent-process theory of color vision asserts that color is processed in opponent pairs and accounts for the interesting phenomenon of a negative afterimage. We perceive depth through a combination of monocular and binocular depth cues.
Self Check Questions
Critical Thinking Questions
1. Compare the two theories of color perception. Are they completely different?
2. Color is not a physical property of our environment. What function (if any) do you think color vision serves?
3. Take a look at a few of your photos or personal works of art. Can you find examples of linear perspective as a potential depth cue?
1. The trichromatic theory of color vision and the opponent-process theory are not mutually exclusive. Research has shown they apply to different levels of the nervous system. For visual processing on the retina, trichromatic theory applies: the cones are responsive to three different wavelengths that represent red, blue, and green. But once the signal moves past the retina on its way to the brain, the cells respond in a way consistent with opponent-process theory.
2. Color vision probably serves multiple adaptive purposes. One popular hypothesis suggests that seeing in color allowed our ancestors to differentiate ripened fruits and vegetables more easily.
By the end of this section, you will be able to:
- Describe important physical features of wave forms
- Show how physical properties of light waves are associated with perceptual experience
- Show how physical properties of sound waves are associated with perceptual experience
Visual and auditory stimuli both occur in the form of waves. Although the two stimuli are very different in terms of composition, wave forms share similar characteristics that are especially important to our visual and auditory perceptions. In this section, we describe the physical properties of the waves as well as the perceptual experiences associated with them.
AMPLITUDE AND WAVELENGTH
Two physical characteristics of a wave are amplitude and wavelength ([(Link)]). The amplitude of a wave is the height of a wave as measured from the highest point on the wave (peak or crest) to the lowest point on the wave (trough). Wavelength refers to the length of a wave from one peak to the next.
Wavelength is directly related to the frequency of a given wave form. Frequency refers to the number of waves that pass a given point in a given time period and is often expressed in terms of hertz (Hz), or cycles per second. Longer wavelengths will have lower frequencies, and shorter wavelengths will have higher frequencies ([(Link)]).
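The inverse relationship between wavelength and frequency follows from the basic wave equation, frequency = wave speed / wavelength. A minimal sketch (the wave speeds below are standard approximate physical constants, not values from the text):

```python
# frequency = wave speed / wavelength: shorter waves -> higher frequencies.

C_LIGHT = 3.0e8   # speed of light in a vacuum, m/s (approximate)
V_SOUND = 343.0   # speed of sound in air at 20 degrees C, m/s (approximate)

def frequency(wave_speed, wavelength_m):
    return wave_speed / wavelength_m

# Red light (~700 nm) vs. violet light (~400 nm):
# the shorter wavelength has the higher frequency.
red = frequency(C_LIGHT, 700e-9)
violet = frequency(C_LIGHT, 400e-9)
print(red < violet)  # True

# A 440 Hz tone (concert A) has a wavelength of about 0.78 m in air.
print(round(V_SOUND / 440, 2))  # 0.78
```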
The visible spectrum is the portion of the larger electromagnetic spectrum that we can see. As [(Link)] shows, the electromagnetic spectrum encompasses all of the electromagnetic radiation that occurs in our environment and includes gamma rays, x-rays, ultraviolet light, visible light, infrared light, microwaves, and radio waves. The visible spectrum in humans is associated with wavelengths that range from 380 to 740 nm—a very small distance, since a nanometer (nm) is one billionth of a meter. Other species can detect other portions of the electromagnetic spectrum. For instance, honeybees can see light in the ultraviolet range (Wakakuwa, Stavenga, & Arikawa, 2007), and some snakes can detect infrared radiation in addition to more traditional visual light cues (Chen, Deng, Brauth, Ding, & Tang, 2012; Hartline, Kass, & Loop, 1978).
In humans, light wavelength is associated with perception of color ([(Link)]). Within the visible spectrum, our experience of red is associated with longer wavelengths, greens are intermediate, and blues and violets are shorter in wavelength. (An easy way to remember this is the mnemonic ROYGBIV: red, orange, yellow, green, blue, indigo, violet.) The amplitude of light waves is associated with our experience of brightness or intensity of color, with larger amplitudes appearing brighter.
Like light waves, the physical properties of sound waves are associated with various aspects of our perception of sound. The frequency of a sound wave is associated with our perception of that sound’s pitch. High-frequency sound waves are perceived as high-pitched sounds, while low-frequency sound waves are perceived as low-pitched sounds. The audible range of sound frequencies is between 20 and 20,000 Hz, with greatest sensitivity to those frequencies that fall in the middle of this range.
As was the case with the visible spectrum, other species show differences in their audible ranges. For instance, chickens have a very limited audible range, from 125 to 2,000 Hz. Mice have an audible range from 1,000 to 91,000 Hz, and the beluga whale’s audible range is from 1,000 to 123,000 Hz. Our pet dogs and cats have audible ranges of about 70–45,000 Hz and 45–64,000 Hz, respectively (Strain, 2003).
The loudness of a given sound is closely associated with the amplitude of the sound wave. Higher amplitudes are associated with louder sounds. Loudness is measured in terms of decibels (dB), a logarithmic unit of sound intensity. A typical conversation registers at about 60 dB; a rock concert might check in at 120 dB ([(Link)]). A whisper 5 feet away or rustling leaves are at the low end of our hearing range; sounds like a window air conditioner, a normal conversation, and even heavy traffic or a vacuum cleaner are within a tolerable range. However, there is the potential for hearing damage from about 80 dB to 130 dB: These are sounds of a food processor, power lawnmower, heavy truck (25 feet away), subway train (20 feet away), live rock music, and a jackhammer. The threshold for pain is about 130 dB, a jet plane taking off or a revolver firing at close range (Dunkle, 1982).
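Because the decibel scale is logarithmic, equal steps in dB correspond to multiplicative jumps in intensity. A short sketch using the standard acoustics formula dB = 10 · log10(I / I0), which is assumed here rather than stated in the text:

```python
# Decibel arithmetic sketch: dB = 10 * log10(I / I0), where I0 is the
# reference intensity at the threshold of hearing.

def intensity_ratio(db_a, db_b):
    """How many times more intense is sound A than sound B?"""
    return 10 ** ((db_a - db_b) / 10)

# A 120 dB rock concert is not twice as intense as a 60 dB conversation;
# the 60 dB gap corresponds to a factor of one million in intensity.
print(intensity_ratio(120, 60))  # 1000000.0
```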
Although wave amplitude is generally associated with loudness, there is some interaction between frequency and amplitude in our perception of loudness within the audible range. For example, a 10 Hz sound wave is inaudible no matter the amplitude of the wave. A 1000 Hz sound wave, on the other hand, would vary dramatically in terms of perceived loudness as the amplitude of the wave increased.
Of course, different musical instruments can play the same musical note at the same level of loudness, yet they still sound quite different. This is known as the timbre of a sound. Timbre refers to a sound’s purity, and it is affected by the complex interplay of frequency, amplitude, and timing of sound waves.
Both light and sound can be described in terms of wave forms with physical characteristics like amplitude and wavelength; sounds additionally vary in timbre. Wavelength and frequency are inversely related so that longer waves have lower frequencies, and shorter waves have higher frequencies. In the visual system, a light wave’s wavelength is generally associated with color, and its amplitude is associated with brightness. In the auditory system, a sound’s frequency is associated with pitch, and its amplitude is associated with loudness.
Self Check Questions
Critical Thinking Questions
1. Why do you think other species have such different ranges of sensitivity for both visual and auditory stimuli compared to humans?
2. Why do you think humans are especially sensitive to sounds with frequencies that fall in the middle portion of the audible range?
Personal Application Question
3. If you grew up with a family pet, then you have surely noticed that they often seem to hear things that you don’t hear. Now that you’ve read this section, you probably have some insight as to why this may be. How would you explain this to a friend who never had the opportunity to take a class like this?
1. Other species have evolved to best suit their particular environmental niches. For example, the honeybee relies on flowering plants for survival. Seeing in the ultraviolet light might prove especially helpful when locating flowers. Once a flower is found, the ultraviolet rays point to the center of the flower where the pollen and nectar are contained. Similar arguments could be made for infrared detection in snakes as well as for the differences in audible ranges of the species described in this section.
2. Once again, one could make an evolutionary argument here. Given that the human voice falls in this middle range and the importance of communication among humans, one could argue that it is quite adaptive to have an audible range that centers on this particular type of stimulus.
By the end of this section, you will be able to:
- Distinguish between sensation and perception
- Describe the concepts of absolute threshold and difference threshold
- Discuss the roles attention, motivation, and sensory adaptation play in perception
What does it mean to sense something? Sensory receptors are specialized neurons that respond to specific types of stimuli. When sensory information is detected by a sensory receptor, sensation has occurred. For example, light that enters the eye causes chemical changes in cells that line the back of the eye. These cells relay messages, in the form of action potentials (as you learned when studying biopsychology), to the central nervous system. The conversion from sensory stimulus energy to action potential is known as transduction.
You have probably known since elementary school that we have five senses: vision, hearing (audition), smell (olfaction), taste (gustation), and touch (somatosensation). It turns out that this notion of five senses is oversimplified. We also have sensory systems that provide information about balance (the vestibular sense), body position and movement (proprioception and kinesthesia), pain (nociception), and temperature (thermoception).
The sensitivity of a given sensory system to the relevant stimuli can be expressed as an absolute threshold. Absolute threshold refers to the minimum amount of stimulus energy that must be present for the stimulus to be detected 50% of the time. Another way to think about this is to ask how dim a light can be, or how soft a sound, and still be detected half of the time. The sensitivity of our sensory receptors can be quite amazing. It has been estimated that on a clear night, the most sensitive sensory cells in the back of the eye can detect a candle flame 30 miles away (Okawa & Sampath, 2007). Under quiet conditions, the hair cells (the receptor cells of the inner ear) can detect the tick of a clock 20 feet away (Galanter, 1962).
It is also possible for us to get messages that are presented below the threshold for conscious awareness—these are called subliminal messages. A stimulus reaches a physiological threshold when it is strong enough to excite sensory receptors and send nerve impulses to the brain: This is an absolute threshold. A message below that threshold is said to be subliminal: We receive it, but we are not consciously aware of it. Over the years there has been a great deal of speculation about the use of subliminal messages in advertising, rock music, and self-help audio programs. Research evidence shows that in laboratory settings, people can process and respond to information outside of awareness. But this does not mean that we obey these messages like zombies; in fact, hidden messages have little effect on behavior outside the laboratory (Kunst-Wilson & Zajonc, 1980; Rensink, 2004; Nelson, 2008; Radel, Sarrazin, Legrain, & Gobancé, 2009; Loersch, Durso, & Petty, 2013).
Absolute thresholds are generally measured under incredibly controlled conditions in situations that are optimal for sensitivity. Sometimes, we are more interested in how much difference in stimuli is required to detect a difference between them. This is known as the just noticeable difference (jnd) or difference threshold. Unlike the absolute threshold, the difference threshold changes depending on the stimulus intensity. As an example, imagine yourself in a very dark movie theater. If an audience member were to receive a text message on her cell phone which caused her screen to light up, chances are that many people would notice the change in illumination in the theater. However, if the same thing happened in a brightly lit arena during a basketball game, very few people would notice. The cell phone brightness does not change, but its ability to be detected as a change in illumination varies dramatically between the two contexts. Ernst Weber proposed this theory of change in difference threshold in the 1830s, and it has become known as Weber’s law: The difference threshold is a constant fraction of the original stimulus, as the example illustrates.
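Weber’s law can be sketched in a few lines. The Weber fraction k = 0.02 below is purely illustrative (real fractions vary by sense and stimulus), and the luminance values are invented to mirror the theater/arena example:

```python
# Weber's law sketch: the just noticeable difference (jnd) is a constant
# fraction k of the original stimulus intensity. k = 0.02 is illustrative.

def jnd(stimulus_intensity, weber_fraction=0.02):
    return stimulus_intensity * weber_fraction

# The same absolute change (a phone screen lighting up) is noticeable
# against a dim background but lost against a bright one.
dark_theater = jnd(10)      # need a change of 0.2 units to notice
bright_arena = jnd(10000)   # need a change of 200 units to notice
phone_screen_change = 50
print(phone_screen_change > dark_theater)   # True: noticed in the theater
print(phone_screen_change > bright_arena)   # False: lost in the arena
```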
While our sensory receptors are constantly collecting information from the environment, it is ultimately how we interpret that information that affects how we interact with the world. Perception refers to the way sensory information is organized, interpreted, and consciously experienced. Perception involves both bottom-up and top-down processing. Bottom-up processing refers to the fact that perceptions are built from sensory input. On the other hand, how we interpret those sensations is influenced by our available knowledge, our experiences, and our thoughts. This is called top-down processing.
One way to think of this concept is that sensation is a physical process, whereas perception is psychological. For example, upon walking into a kitchen and smelling the scent of baking cinnamon rolls, the sensation is the scent receptors detecting the odor of cinnamon, but the perception may be “Mmm, this smells like the bread Grandma used to bake when the family gathered for holidays.”
Although our perceptions are built from sensations, not all sensations result in perception. In fact, we often don’t perceive stimuli that remain relatively constant over prolonged periods of time. This is known as sensory adaptation. Imagine entering a classroom with an old analog clock. Upon first entering the room, you can hear the ticking of the clock; as you begin to engage in conversation with classmates or listen to your professor greet the class, you are no longer aware of the ticking. The clock is still ticking, and that information is still affecting sensory receptors of the auditory system. The fact that you no longer perceive the sound demonstrates sensory adaptation and shows that while closely associated, sensation and perception are different.
There is another factor that affects sensation and perception: attention. Attention plays a significant role in determining what is sensed versus what is perceived. Imagine you are at a party full of music, chatter, and laughter. You get involved in an interesting conversation with a friend, and you tune out all the background noise. If someone interrupted you to ask what song had just finished playing, you would probably be unable to answer that question.
See for yourself how inattentional blindness works by checking out this selective attention test from Simons and Chabris (1999) below:
In a similar experiment, researchers tested inattentional blindness by asking participants to observe images moving across a computer screen. They were instructed to focus on either white or black objects, disregarding the other color. When a red cross passed across the screen, about one third of subjects did not notice it ([(Link)]) (Most, Simons, Scholl, & Chabris, 2000). Read more on inattentional blindness through this link to the Noba Project website.
Motivation can also affect perception. Have you ever been expecting a really important phone call and, while taking a shower, you think you hear the phone ringing, only to discover that it is not ringing? If so, then you have experienced how motivation to detect a meaningful stimulus can shift our ability to discriminate between a true sensory stimulus and background noise. The ability to identify a stimulus when it is embedded in a distracting background is addressed by signal detection theory. This might also explain why a mother is awakened by a quiet murmur from her baby but not by other sounds that occur while she is asleep. Signal detection theory has practical applications, such as increasing air traffic controller accuracy. Controllers need to be able to detect planes among many signals (blips) that appear on the radar screen and follow those planes as they move through the sky. In fact, the original work of the researcher who developed signal detection theory was focused on improving the sensitivity of air traffic controllers to plane blips (Swets, 1964).
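A common way to quantify an observer’s sensitivity in signal detection work is the d′ statistic. The formula below is the standard one from psychophysics (assumed here, not derived in this chapter), and the hit and false-alarm rates are made-up illustrative values:

```python
# Signal detection sketch: sensitivity d' = z(hit rate) - z(false alarm rate),
# where z is the inverse of the standard normal CDF.

from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A sensitive controller: many hits, few false alarms -> high d'.
print(round(d_prime(0.90, 0.10), 2))  # 2.56
# A guessing observer: hits equal false alarms -> d' of 0 (no sensitivity).
print(round(d_prime(0.50, 0.50), 2))  # 0.0
```

A higher d′ means the observer separates true signals (planes) from noise (stray blips) more reliably, independent of how willing they are to say “signal.”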
Our perceptions can also be affected by our beliefs, values, prejudices, expectations, and life experiences. As you will see later in this chapter, individuals who are deprived of the experience of binocular vision during critical periods of development have trouble perceiving depth (Fawcett, Wang, & Birch, 2005). The shared experiences of people within a given cultural context can have pronounced effects on perception. For example, Marshall Segall, Donald Campbell, and Melville Herskovits (1963) published the results of a multinational study in which they demonstrated that individuals from Western cultures were more prone to experience certain types of visual illusions than individuals from non-Western cultures, and vice versa. One such illusion that Westerners were more likely to experience was the Müller-Lyer illusion ([(Link)]): The lines appear to be different lengths, but they are actually the same length.
These perceptual differences were consistent with differences in the types of environmental features experienced on a regular basis by people in a given cultural context. People in Western cultures, for example, have a perceptual context of buildings with straight lines, what Segall’s study called a carpentered world (Segall et al., 1966). In contrast, people from certain non-Western cultures with an uncarpentered view, such as the Zulu of South Africa, whose villages are made up of round huts arranged in circles, are less susceptible to this illusion (Segall et al., 1999). It is not just vision that is affected by cultural factors. Indeed, research has demonstrated that the ability to identify an odor, and rate its pleasantness and its intensity, varies cross-culturally (Ayabe-Kanamura, Saito, Distel, Martínez-Gómez, & Hudson, 1998).
Children described as thrill seekers are more likely to show taste preferences for intense sour flavors (Liem, Westerbeek, Wolterink, Kok, & de Graaf, 2004), which suggests that basic aspects of personality might affect perception. Furthermore, individuals who hold positive attitudes toward reduced-fat foods are more likely to rate foods labeled as reduced fat as tasting better than people who have less positive attitudes about these products (Aaron, Mela, & Evans, 1994).
Sensation occurs when sensory receptors detect sensory stimuli. Perception involves the organization, interpretation, and conscious experience of those sensations. All sensory systems have both absolute and difference thresholds, which refer to the minimum amount of stimulus energy or the minimum amount of difference in stimulus energy required to be detected about 50% of the time, respectively. Sensory adaptation, selective attention, and signal detection theory can help explain what is perceived and what is not. In addition, our perceptions are affected by a number of factors, including beliefs, values, prejudices, culture, and life experiences.
Self Check Questions
Critical Thinking Questions
1. Not everything that is sensed is perceived. Do you think there could ever be a case where something could be perceived without being sensed?
2. Please generate a novel example of how just noticeable difference can change as a function of stimulus intensity.
3. Think about a time when you failed to notice something around you because your attention was focused elsewhere. If someone pointed it out, were you surprised that you hadn’t noticed it right away?
1. This would be a good time for students to think about claims of extrasensory perception. Another interesting topic would be the phantom limb phenomenon experienced by amputees.
2. There are many potential examples. One example involves the detection of weight differences. If two people are holding standard envelopes and one contains a quarter while the other is empty, the difference in weight between the two is easy to detect. However, if those envelopes are placed inside two textbooks of equal weight, the ability to discriminate which is heavier is much more difficult.
Imagine standing on a city street corner. You might be struck by movement everywhere as cars and people go about their business, by the sound of a street musician’s melody or a horn honking in the distance, by the smell of exhaust fumes or of food being sold by a nearby vendor, and by the sensation of hard pavement under your feet.
We rely on our sensory systems to provide important information about our surroundings. We use this information to successfully navigate and interact with our environment so that we can find nourishment, seek shelter, maintain social relationships, and avoid potentially dangerous situations. But while sensory information is critical to our survival, there is so much information available at any given time that we would be overwhelmed if we were forced to attend to all of it. In fact, we are aware of only a fraction of the sensory information taken in by our sensory systems at any given time.
This module will provide an overview of how sensory information is received and processed by the nervous system and how that affects our conscious experience of the world. We begin by learning the distinction between sensation and perception. Then we consider the physical properties of light and sound stimuli, along with an overview of the basic structure and function of the major sensory systems. The module will close with a discussion of a historically important theory of perception called the Gestalt theory. This theory attempts to explain some underlying principles of perception.
Aaron, J. I., Mela, D. J., & Evans, R. E. (1994). The influences of attitudes, beliefs, and label information on perceptions of reduced-fat spread. Appetite, 22, 25–37.
Abraira, V. E., & Ginty, D. D. (2013). The sensory neurons of touch. Neuron, 79, 618–639.
Ayabe-Kanamura, S., Saito, S., Distel, H., Martínez-Gómez, M., & Hudson, R. (1998). Differences and similarities in the perception of everyday odors: A Japanese-German cross-cultural study. Annals of the New York Academy of Sciences, 855, 694–700.
Chen, Q., Deng, H., Brauth, S. E., Ding, L., & Tang, Y. (2012). Reduced performance of prey targeting in pit vipers with contralaterally occluded infrared and visual senses. PloS ONE, 7(5), e34989. doi:10.1371/journal.pone.0034989
Comfort, A. (1971). Likelihood of human pheromones. Nature, 230, 432–479.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. (2002). The police officer’s dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83, 1314–1329.
Correll, J., Urland, G. R., & Ito, T. A. (2006). Event-related potentials and the decision to shoot: The role of threat perception and cognitive control. The Journal of Experimental Social Psychology, 42, 120–128.
Dunkle, T. (1982). The sound of silence. Science, 82, 30–33.
Fawcett, S. L., Wang, Y., & Birch, E. E. (2005). The critical period for susceptibility of human stereopsis. Investigative Ophthalmology and Visual Science, 46, 521–525.
Furlow, F. B. (1996, 2012). The smell of love. Retrieved from http://www.psychologytoday.com/articles/200910/the-smell-love
Galanter, E. (1962). Contemporary psychophysics. In R. Brown, E. Galanter, E. H. Hess, & G. Mandler (Eds.), New directions in psychology. New York, NY: Holt, Rinehart & Winston.
Garland, E. L. (2012). Pain processing in the human nervous system: A selective review of nociceptive and biobehavioral pathways. Primary Care, 39, 561–571.
Goolkasian, P. & Woodbury, C. (2010). Priming effects with ambiguous figures. Attention, Perception & Psychophysics, 72, 168–178.
Grothe, B., Pecka, M., & McAlpine, D. (2010). Mechanisms of sound localization in mammals. Physiological Reviews, 90, 983–1012.
Hartline, P. H., Kass, L., & Loop, M. S. (1978). Merging of modalities in the optic tectum: Infrared and visual integration in rattlesnakes. Science, 199, 1225–1229.
Kaiser, P. K. (1997). The joy of visual perception: A web book. Retrieved from http://www.yorku.ca/eye/noframes.htm
Khan, S., & Chang, R. (2013). Anatomy of the vestibular system: A review. NeuroRehabilitation, 32, 437–443.
Kinnamon, S. C., & Vandenbeuch, A. (2009). Receptors and transduction of umami taste stimuli. Annals of the New York Academy of Sciences, 1170, 55–59.
Kunst-Wilson, W. R., & Zajonc, R. B. (1980). Affective discrimination of stimuli that cannot be recognized. Science, 207, 557–558.
Lackner, J. R., & DiZio, P. (2005). Vestibular, proprioceptive, and haptic contributions to spatial orientation. Annual Review of Psychology, 56, 115–147.
Land, E. H. (1959). Color vision and the natural image. Part 1. Proceedings of the National Academy of Science, 45(1), 115–129.
Liem, D. G., Westerbeek, A., Wolterink, S., Kok, F. J., & de Graaf, C. (2004). Sour taste preferences of children relate to preference for novel and intense stimuli. Chemical Senses, 29, 713–720.
Lodovichi, C., & Belluscio, L. (2012). Odorant receptors in the formation of olfactory bulb circuitry. Physiology, 27, 200–212.
Loersch, C., Durso, G. R. O., & Petty, R. E. (2013). Vicissitudes of desire: A matching mechanism for subliminal persuasion. Social Psychological and Personality Science, 4(5), 624–631.
Maffei, A., Haley, M., & Fontanini, A. (2012). Neural processing of gustatory information in insular circuits. Current Opinion in Neurobiology, 22, 709–716.
Milner, A. D., & Goodale, M. A. (2008). Two visual systems re-viewed. Neuropsychological, 46, 774–785.
Mizushige, T., Inoue, K., Fushiki, T. (2007). Why is fat so tasty? Chemical reception of fatty acid on the tongue. Journal of Nutritional Science and Vitaminology, 53, 1–4.
Most, S. B., Simons, D. J., Scholl, B. J., & Chabris, C. F. (2000). Sustained inattentional blindness: The role of location in the detection of unexpected dynamic events. PSYCHE, 6(14).
Nelson, M. R. (2008). The hidden persuaders: Then and now. Journal of Advertising, 37(1), 113–126.
Niimura, Y., & Nei, M. (2007). Extensive gains and losses of olfactory receptor genes in mammalian evolution. PLoS ONE, 2, e708.
Okawa, H., & Sampath, A. P. (2007). Optimization of single-photon response transmission at the rod-to-rod bipolar synapse. Physiology, 22, 279–286.
Payne, B. K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81, 181–192.
Payne, B. K., Shimizu, Y., & Jacoby, L. L. (2005). Mental control and visual illusions: Toward explaining race-biased weapon misidentifications. Journal of Experimental Social Psychology, 41, 36–47.
Peck, M. (2012, July 19). How a movie changed one man’s vision forever. Retrieved from http://www.bbc.com/future/story/20120719-awoken-from-a-2d-world
Peterson, M. A., & Gibson, B. S. (1994). Must figure-ground organization precede object recognition? An assumption in peril. Psychological Science, 5, 253–259.
Petho, G., & Reeh, P. W. (2012). Sensory and signaling mechanisms of bradykinin, eicosanoids, platelet-activating factor, and nitric oxide in peripheral nociceptors. Physiological Reviews, 92, 1699–1775.
Proske, U. (2006). Kinesthesia: The role of muscle receptors. Muscle & Nerve, 34, 545–558.
Proske, U., & Gandevia, S. C. (2012). The proprioceptive senses: Their roles in signaling body shape, body position and movement, and muscle force. Physiological Reviews, 92, 1651–1697.
Purvis, K., & Haynes, N. B. (1972). The effect of female rat proximity on the reproductive system of male rats. Physiology & Behavior, 9, 401–407.
Radel, R., Sarrazin, P., Legrain, P., & Gobancé, L. (2009). Subliminal priming of motivational orientation in educational settings: Effect on academic performance moderated by mindfulness. Journal of Research in Personality, 43(4), 1–18.
Rauschecker, J. P., & Tian, B. (2000). Mechanisms and streams for processing “what” and “where” in auditory cortex. Proceedings of the National Academy of Sciences, USA, 97, 11800–11806.
Renier, L. A., Anurova, I., De Volder, A. G., Carlson, S., VanMeter, J., & Rauschecker, J. P. (2009). Multisensory integration of sounds and vibrotactile stimuli in processing streams for “what” and “where.” Journal of Neuroscience, 29, 10950–10960.
Rensink, R. A. (2004). Visual sensing without seeing. Psychological Science, 15, 27–32.
Rock, I., & Palmer, S. (1990). The legacy of Gestalt psychology. Scientific American, 262, 84–90.
Roper, S. D. (2013). Taste buds as peripheral chemosensory receptors. Seminars in Cell & Developmental Biology, 24, 71–79.
Russell, M. J. (1976). Human olfactory communication. Nature, 260, 520–522.
Sachs, B. D. (1997). Erection evoked in male rats by airborne scent from estrous females. Physiology & Behavior, 62, 921–924.
Segall, M. H., Campbell, D. T., & Herskovits, M. J. (1963). Cultural differences in the perception of geometric illusions. Science, 139, 769–771.
Segall, M. H., Campbell, D. T., & Herskovits, M. J. (1966). The influence of culture on visual perception. Indianapolis: Bobbs-Merrill.
Segall, M. H., Dasen, P. P., Berry, J. W., & Poortinga, Y. H. (1999). Human behavior in global perspective (2nd ed.). Boston: Allyn & Bacon.
Semaan, M. T., & Megerian, C. A. (2010). Contemporary perspectives on the pathophysiology of Meniere’s disease: implications for treatment. Current opinion in Otolaryngology & Head and Neck Surgery, 18(5), 392–398.
Shamma, S. (2001). On the role of space and time in auditory processing. Trends in Cognitive Sciences, 5, 340–348.
Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.
Spors, H., Albeanu, D. F., Murthy, V. N., Rinberg, D., Uchida, N., Wachowiak, M., & Friedrich, R. W. (2013). Illuminating vertebrate olfactory processing. Journal of Neuroscience, 32, 14102–14108.
Spray, D. C. (1986). Cutaneous temperature receptors. Annual Review of Physiology, 48, 625–638.
Strain, G. M. (2003). How well do dogs and other animals hear? Retrieved from http://www.lsu.edu/deafness/HearingRange.html
Swets, J. A. (1964). Signal detection and recognition by human observers. Psychological Bulletin, 60, 429–441.
Ungerleider, L. G., & Haxby, J. V. (1994). ‘What’ and ‘where’ in the human brain. Current Opinion in Neurobiology, 4, 157–165.
U.S. National Library of Medicine. (2013). Genetics home reference: Congenital insensitivity to pain. Retrieved from http://ghr.nlm.nih.gov/condition/congenital-insensitivity-to-pain
Vecera, S. P., & O’Reilly, R. C. (1998). Figure-ground organization and object recognition processes: An interactive account. Journal of Experimental Psychology-Human Perception and Performance, 24, 441–462.
Wakakuwa, M., Stavenga, D. G., & Arikawa, K. (2007). Spectral organization of ommatidia in flower-visiting insects. Photochemistry and Photobiology, 83, 27–34.
Weller, A. (1998). Human pheromones: Communication through body odour. Nature, 392, 126–127.
Wells, D. L. (2010). Domestic dogs and human health: An overview. British Journal of Health Psychology, 12, 145–156.
Wolfgang-Kimball, D. (1992). Pheromones in humans: myth or reality?. Retrieved from http://www.anapsid.org/pheromones.html
Wysocki, C. J., & Preti, G. (2004). Facts, fallacies, fears, and frustrations with human pheromones. The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology, 281, 1201–1211.
By the end of this section, you will be able to:
- Define hypnosis and meditation
- Understand the similarities and differences between hypnosis and meditation
Our states of consciousness change as we move from wakefulness to sleep. We also alter our consciousness through the use of various psychoactive drugs. This final section will consider hypnotic and meditative states as additional examples of altered states of consciousness experienced by some individuals.
Hypnosis is a state of extreme self-focus and attention in which minimal attention is given to external stimuli. In the therapeutic setting, a clinician often will use relaxation and suggestion in an attempt to alter the thoughts and perceptions of a patient. Hypnosis has also been used to draw out information believed to be buried deeply in someone’s memory. For individuals who are especially open to the power of suggestion, this can prove to be a very effective technique, and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning (Del Casale et al., 2012; Guldenmund, Vanhaudenhuyse, Boly, Laureys, & Soddu, 2012).
Historically, hypnosis has been viewed with some suspicion because of its portrayal in popular media and entertainment ([(Link)]). Therefore, it is important to distinguish between hypnosis as an empirically based therapeutic approach and hypnosis as a form of entertainment. Contrary to popular belief, individuals undergoing hypnosis usually have clear memories of the hypnotic experience and remain in control of their own behaviors. While hypnosis may be useful in enhancing memory or a skill, such enhancements are very modest in nature (Raz, 2011).
How exactly does a hypnotist bring a participant to a state of hypnosis? While there are variations, there are four parts that appear consistent in bringing people into the state of suggestibility associated with hypnosis (National Research Council, 1994). These components include:
- The participant is guided to focus on one thing, such as the hypnotist’s words or a ticking watch.
- The participant is made comfortable and is directed to be relaxed and sleepy.
- The participant is told to be open to the process of hypnosis, to trust the hypnotist, and to let go.
- The participant is encouraged to use his or her imagination.
These steps are conducive to being open to the heightened suggestibility of hypnosis.
People vary in terms of their ability to be hypnotized, but a review of available research suggests that most people are at least moderately hypnotizable (Kihlstrom, 2013). Hypnosis in conjunction with other techniques is used for a variety of therapeutic purposes and has shown to be at least somewhat effective for pain management, treatment of depression and anxiety, smoking cessation, and weight loss (Alladin, 2012; Elkins, Johnson, & Fisher, 2012; Golden, 2012; Montgomery, Schnur, & Kravits, 2012).
Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning, with a view to using hypnosis in educational settings (Wark, 2011). Furthermore, there is some evidence that hypnosis can alter processes that were once thought to be automatic and outside the purview of voluntary control, such as reading (Lifshitz, Aubert Bonn, Fischer, Kashem, & Raz, 2013; Raz, Shapiro, Fan, & Posner, 2002). However, it should be noted that others have suggested that the automaticity of these processes remains intact (Augustinova & Ferrand, 2012).
How does hypnosis work? Two theories attempt to answer this question: one views hypnosis as dissociation, and the other views it as the performance of a social role. According to the dissociation view, hypnosis is effectively a dissociated state of consciousness, much like our earlier example in which you may drive to work but be only minimally aware of the process of driving because your attention is focused elsewhere. This theory is supported by Ernest Hilgard’s research into hypnosis and pain. In Hilgard’s experiments, he induced participants into a state of hypnosis and placed their arms into ice water. Participants were told they would not feel pain, but that they could press a button if they did; although they reported not feeling pain, they did, in fact, press the button, suggesting a dissociation of consciousness while in the hypnotic state (Hilgard & Hilgard, 1994).
Taking a different approach to explain hypnosis, the social-cognitive theory of hypnosis sees people in hypnotic states as performing the social role of a hypnotized person. As you will learn when you study social roles, people’s behavior can be shaped by their expectations of how they should act in a given situation. Some view a hypnotized person’s behavior not as an altered or dissociated state of consciousness, but as their fulfillment of the social expectations for that role.
Meditation is the act of focusing on a single target (such as the breath or a repeated sound) to increase awareness of the moment. While hypnosis is generally achieved through the interaction of a therapist and the person being treated, an individual can perform meditation alone. Often, however, people wishing to learn to meditate receive some training in techniques to achieve a meditative state. A meditative state, as shown by EEG recordings of newly-practicing meditators, is not an altered state of consciousness per se; however, patterns of brain waves exhibited by expert meditators may represent a unique state of consciousness (Fell, Axmacher, & Haupt, 2010).
Although a number of different techniques are in use, the central feature of all meditation is clearing the mind in order to achieve a state of relaxed awareness and focus (Chen et al., 2013; Lang et al., 2012). Mindfulness meditation has recently become popular. In this variation of meditation, the meditator’s attention is focused on some internal process or an external object (Zeidan, Grant, Brown, McHaffie, & Coghill, 2012).
Meditative techniques have their roots in religious practices ([(Link)]), but their use has grown in popularity among practitioners of alternative medicine. Research indicates that meditation may help reduce blood pressure, and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension, although there is not sufficient data for a recommendation to be made (Brook et al., 2013). Like hypnosis, meditation also shows promise in stress management, sleep quality (Caldwell, Harrison, Adams, Quin, & Greeson, 2010), treatment of mood and anxiety disorders (Chen et al., 2013; Freeman et al., 2010; Vøllestad, Nielsen, & Nielsen, 2012), and pain management (Reiner, Tibi, & Lipsitz, 2013).
Hypnosis is a focus on the self that involves suggested changes of behavior and experience. Meditation involves relaxed, yet focused, awareness. Both hypnotic and meditative states may involve altered states of consciousness that have potential application for the treatment of a variety of physical and psychological disorders.
Self Check Questions
Critical Thinking Questions
1. What advantages exist for researching the potential health benefits of hypnosis?
2. What types of studies would be most convincing regarding the effectiveness of meditation in the treatment for some type of physical or mental disorder?
Personal Application Question
3. Under what circumstances would you be willing to consider hypnosis and/or meditation as a treatment option? What kind of information would you need before you made a decision to use these techniques?
1. Healthcare and pharmaceutical costs continue to skyrocket. If alternative approaches to dealing with these problems could be developed that would be relatively inexpensive, then the potential benefits are many.
2. Ideally, double-blind experimental trials would be best suited to speak to the effectiveness of meditation. At the very least, some sort of randomized control trial would be very informative.
By the end of this section, you will be able to:
- Describe the diagnostic criteria for substance use disorders
- Identify the neurotransmitter systems affected by various categories of drugs
- Describe how different categories of drugs affect behavior and experience
While we all experience altered states of consciousness in the form of sleep on a regular basis, some people use drugs and other substances that result in altered states of consciousness as well. This section will present information relating to the use of various psychoactive drugs and problems associated with such use. This will be followed by brief descriptions of the effects of some of the more well-known drugs commonly used today.
SUBSTANCE USE DISORDERS
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) is used by clinicians to diagnose individuals suffering from various psychological disorders. Drug use disorders are addictive disorders, and the criteria for specific substance (drug) use disorders are described in DSM-5. A person who has a substance use disorder often uses more of the substance than originally intended and continues to use it despite experiencing significant adverse consequences. In individuals diagnosed with a substance use disorder, there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence.
Physical dependence involves changes in normal bodily functions—the user will experience withdrawal from the drug upon cessation of use. In contrast, a person who has psychological dependence has an emotional, rather than physical, need for the drug and may use the drug to relieve psychological distress. Tolerance is linked to physiological dependence, and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses. Tolerance can cause the user to increase the amount of drug used to a dangerous level—even to the point of overdose and death.
Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued. These symptoms usually are opposite of the effects of the drug. For example, withdrawal from sedative drugs often produces unpleasant arousal and agitation. In addition to withdrawal, many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances. Psychological dependence, or drug craving, is a recent addition to the diagnostic criteria for substance use disorder in DSM-5. This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse. In other words, physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder.
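Pharmacologists often describe tolerance as a rightward shift of the dose-response curve: the dose needed to produce a given effect (summarized by the ED50, the dose producing half the maximal effect) grows with repeated use. A minimal sketch of this idea, using the standard Hill equation with entirely hypothetical parameter values:

```python
# Illustrative only: tolerance modeled as a rightward shift of a
# dose-response (Hill) curve. All parameter values are hypothetical.

def effect(dose, ed50, emax=100.0, n=2.0):
    """Percent of maximal effect produced by a dose (Hill equation)."""
    return emax * dose**n / (ed50**n + dose**n)

# Before tolerance: a dose of 10 units yields half the maximal effect.
naive = effect(10, ed50=10)      # 50.0

# After tolerance, the ED50 has shifted right: the same dose does far less.
tolerant = effect(10, ed50=30)   # 10.0

# Recovering the original effect now requires triple the dose.
recovered = effect(30, ed50=30)  # 50.0
```

In this toy model, a dose that once produced half the maximal effect produces only a tenth of it after the ED50 triples, so the user must triple the dose to recover the original effect, mirroring the dose escalation described above.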
The effects of all psychoactive drugs occur through their interactions with our endogenous neurotransmitter systems. Many of these drugs, and their relationships, are shown in [(Link)]. As you have learned, drugs can act as agonists or antagonists of a given neurotransmitter system: an agonist facilitates the activity of the system, while an antagonist impedes it.
Alcohol and Other Depressants
Ethanol, which we commonly refer to as alcohol, is in a class of psychoactive drugs known as depressants ([(Link)]). A depressant is a drug that tends to suppress central nervous system activity. Other depressants include barbiturates and benzodiazepines. These drugs share the ability to serve as agonists of the gamma-aminobutyric acid (GABA) neurotransmitter system. Because GABA has a quieting effect on the brain, GABA agonists also have a quieting effect; these types of drugs are often prescribed to treat both anxiety and insomnia.
Acute alcohol administration results in a variety of changes to consciousness. At rather low doses, alcohol use is associated with feelings of euphoria. As the dose increases, people report feeling sedated. Generally, alcohol is associated with slowed reaction time, reduced visual acuity, lowered levels of alertness, and reduced behavioral control. With excessive alcohol use, a person might experience a complete loss of consciousness and/or difficulty remembering events that occurred during a period of intoxication (McKim & Hancock, 2013). In addition, if a pregnant woman consumes alcohol, her infant may be born with a cluster of birth defects and symptoms collectively called fetal alcohol spectrum disorder (FASD) or fetal alcohol syndrome (FAS).
With repeated use of many central nervous system depressants, such as alcohol, a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal. Psychological dependence on these drugs is also possible. Therefore, the abuse potential of central nervous system depressants is relatively high.
Drug withdrawal is usually an aversive experience, and it can be a life-threatening process in individuals who have a long history of very high doses of alcohol and/or barbiturates. This is of such concern that people who are trying to overcome addiction to these substances should only do so under medical supervision.
Stimulants
Stimulants are drugs that tend to increase overall levels of neural activity. Many of these drugs act as agonists of the dopamine neurotransmitter system. Dopamine activity is often associated with reward and craving; therefore, drugs that affect dopamine neurotransmission often have abuse liability. Drugs in this category include cocaine, amphetamines (including methamphetamine), cathinones (i.e., bath salts), MDMA (ecstasy), nicotine, and caffeine.
Cocaine can be taken in multiple ways. While many users snort cocaine, intravenous injection and ingestion are also common. The freebase version of cocaine, known as crack, is a potent, smokable version of the drug. Like many other stimulants, cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse.
Crack ([(Link)]) is often considered to be more addictive than cocaine itself because it is smokable and reaches the brain very quickly. Crack is often less expensive than other forms of cocaine; therefore, it tends to be a more accessible drug for individuals from impoverished segments of society. During the 1980s, many drug laws were rewritten to punish crack users more severely than cocaine users. This led to discriminatory sentencing with low-income, inner-city minority populations receiving the harshest punishments. The wisdom of these laws has recently been called into question, especially given research that suggests crack may not be more addictive than other forms of cocaine, as previously thought (Haasen & Krausz, 2001; Reinerman, 2007).
Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release ([(Link)]). While amphetamines are often abused, they are also commonly prescribed to children diagnosed with attention deficit hyperactivity disorder (ADHD). It may seem counterintuitive that stimulant medications are prescribed to treat a disorder that involves hyperactivity, but the therapeutic effect comes from increases in neurotransmitter activity within certain areas of the brain associated with impulse control.
In recent years, methamphetamine (meth) use has become increasingly widespread. Methamphetamine is a type of amphetamine that can be made from ingredients that are readily available (e.g., medications containing pseudoephedrine, a compound found in many over-the-counter cold and flu remedies). Despite recent changes in laws designed to make obtaining pseudoephedrine more difficult, methamphetamine continues to be an easily accessible and relatively inexpensive drug option (Shukla, Crump, & Chrisco, 2012).
Users of cocaine, amphetamines, cathinones, and MDMA seek a euphoric high: feelings of intense elation and pleasure, especially among those who take the drug via intravenous injection or smoking. Repeated use of these stimulants can have significant adverse consequences. Users can experience physical symptoms that include nausea, elevated blood pressure, and increased heart rate. In addition, these drugs can cause feelings of anxiety, hallucinations, and paranoia (Fiorentini et al., 2011). Normal brain functioning is altered after repeated use of these drugs; for example, repeated use can deplete the monoamine neurotransmitters (dopamine, norepinephrine, and serotonin). People may engage in compulsive use of these stimulant substances in part to try to reestablish normal levels of these neurotransmitters (Jayanthi & Ramamoorthy, 2005; Rothman, Blough, & Baumann, 2007).
Caffeine is another stimulant drug. While it is probably the most commonly used drug in the world, the potency of this particular drug pales in comparison to the other stimulant drugs described in this section. Generally, people use caffeine to maintain increased levels of alertness and arousal. Caffeine is found in many common medicines (such as weight loss drugs), beverages, foods, and even cosmetics (Herman & Herman, 2013). While caffeine may have some indirect effects on dopamine neurotransmission, its primary mechanism of action involves antagonizing adenosine activity (Porkka-Heiskanen, 2011).
While caffeine is generally considered a relatively safe drug, high blood levels of caffeine can result in insomnia, agitation, muscle twitching, nausea, irregular heartbeat, and even death (Reissig, Strain, & Griffiths, 2009; Wolt, Ganetsky, & Babu, 2012). In 2012, Kromann and Nielsen reported on a case study of a 40-year-old woman who suffered significant ill effects from her use of caffeine. The woman used caffeine in the past to boost her mood and to provide energy, but over the course of several years, she increased her caffeine consumption to the point that she was consuming three liters of soda each day. Although she had been taking a prescription antidepressant, her symptoms of depression continued to worsen and she began to suffer physically, displaying significant warning signs of cardiovascular disease and diabetes. Upon admission to an outpatient clinic for treatment of mood disorders, she met all of the diagnostic criteria for substance dependence and was advised to dramatically limit her caffeine intake. Once she was able to limit her use to less than 12 ounces of soda a day, both her mental and physical health gradually improved. Despite the prevalence of caffeine use and the large number of people who confess to suffering from caffeine addiction, this was the first description of soda dependence to appear in the scientific literature.
Nicotine is highly addictive, and the use of tobacco products is associated with increased risks of heart disease, stroke, and a variety of cancers. Nicotine exerts its effects through its interaction with acetylcholine receptors. Acetylcholine functions as a neurotransmitter in motor neurons. In the central nervous system, it plays a role in arousal and reward mechanisms. Nicotine is most commonly used in the form of tobacco products like cigarettes or chewing tobacco; therefore, there is a tremendous interest in developing effective smoking cessation techniques. To date, people have used a variety of nicotine replacement therapies in addition to various psychotherapeutic options in an attempt to discontinue their use of tobacco products. In general, smoking cessation programs may be effective in the short term, but it is unclear whether these effects persist (Cropley, Theadom, Pravettoni, & Webb, 2008; Levitt, Shaw, Wong, & Kaczorowski, 2007; Smedslund, Fisher, Boles, & Lichtenstein, 2004).
Opioids
An opioid is one of a category of drugs that includes heroin, morphine, methadone, and codeine. Opioids have analgesic properties; that is, they decrease pain. Humans have an endogenous opioid neurotransmitter system: the body makes small quantities of opioid compounds that bind to opioid receptors, reducing pain and producing euphoria. Thus, opioid drugs, which mimic this endogenous painkilling mechanism, have an extremely high potential for abuse. Natural opioids, called opiates, are derivatives of opium, a naturally occurring compound found in the poppy plant. There are now several synthetic versions of opiate drugs (correctly called opioids) that have very potent painkilling effects and are often abused. For example, the National Institute on Drug Abuse has sponsored research suggesting that the misuse and abuse of the prescription painkillers hydrocodone and oxycodone are significant public health concerns (Maxwell, 2006). In 2013, the U.S. Food and Drug Administration recommended tighter controls on their medical use.
Historically, heroin has been a major opioid drug of abuse ([(Link)]). Heroin can be snorted, smoked, or injected intravenously. Like the stimulants described earlier, the use of heroin is associated with an initial feeling of euphoria followed by periods of agitation. Because heroin is often administered via intravenous injection, users often bear needle track marks on their arms and, like all abusers of intravenous drugs, have an increased risk for contraction of both tuberculosis and HIV.
Aside from their utility as analgesic drugs, opioid-like compounds are often found in cough suppressants, anti-nausea, and anti-diarrhea medications. Given that withdrawal from a drug often involves an experience opposite to the effect of the drug, it should be no surprise that opioid withdrawal resembles a severe case of the flu. While opioid withdrawal can be extremely unpleasant, it is not life-threatening (Julien, 2005). Still, people experiencing opioid withdrawal may be given methadone to make withdrawal from the drug less difficult. Methadone is a synthetic opioid that is less euphorigenic than heroin and similar drugs. Methadone clinics help people who previously struggled with opioid addiction manage withdrawal symptoms through the use of methadone. Other drugs, including the opioid buprenorphine, have also been used to alleviate symptoms of opiate withdrawal.
Codeine is an opioid with relatively low potency. It is often prescribed for minor pain, and it is available over-the-counter in some other countries. Like all opioids, codeine does have abuse potential. In fact, abuse of prescription opioid medications is becoming a major concern worldwide (Aquina, Marques-Baptista, Bridgeman, & Merlin, 2009; Casati, Sedefov, & Pfeiffer-Gerschel, 2012).
Hallucinogens
A hallucinogen is one of a class of drugs that results in profound alterations in sensory and perceptual experiences ([(Link)]). In some cases, users experience vivid visual hallucinations. It is also common for these types of drugs to cause hallucinations of body sensations (e.g., feeling as if you are a giant) and a skewed perception of the passage of time.
As a group, hallucinogens are incredibly varied in terms of the neurotransmitter systems they affect. Mescaline and LSD are serotonin agonists, and PCP (angel dust) and ketamine (an animal anesthetic) act as antagonists of the NMDA glutamate receptor. In general, these drugs are not thought to possess the same sort of abuse potential as other classes of drugs discussed in this section.
While the possession and use of marijuana is illegal in most states, it is now legal in Washington and Colorado to possess limited quantities of marijuana for recreational use ([(Link)]). In contrast, medical marijuana use is now legal in nearly half of the United States and in the District of Columbia. Medical marijuana is marijuana that is prescribed by a doctor for the treatment of a health condition. For example, people who undergo chemotherapy will often be prescribed marijuana to stimulate their appetites and prevent excessive weight loss resulting from the side effects of chemotherapy treatment. Marijuana may also have some promise in the treatment of a variety of medical conditions (Mather, Rauwendaal, Moxham-Hall, & Wodak, 2013; Robson, 2014; Schicho & Storr, 2014).
While medical marijuana laws have been passed on a state-by-state basis, federal laws still classify this as an illicit substance, making conducting research on the potentially beneficial medicinal uses of marijuana problematic. There is quite a bit of controversy within the scientific community as to the extent to which marijuana might have medicinal benefits due to a lack of large-scale, controlled research (Bostwick, 2012). As a result, many scientists have urged the federal government to allow for relaxation of current marijuana laws and classifications in order to facilitate a more widespread study of the drug’s effects (Aggarwal et al., 2009; Bostwick, 2012; Kogan & Mechoulam, 2007).
Until recently, the United States Department of Justice routinely arrested people involved and seized marijuana used in medicinal settings. In the latter part of 2013, however, the United States Department of Justice issued statements indicating that they would not continue to challenge state medical marijuana laws. This shift in policy may be in response to the scientific community’s recommendations and/or reflect changing public opinion regarding marijuana.
Substance use disorder is defined in DSM-5 as a compulsive pattern of drug use despite negative consequences. Both physical and psychological dependence are important parts of this disorder. Alcohol, barbiturates, and benzodiazepines are central nervous system depressants that affect GABA neurotransmission. Cocaine, amphetamines, cathinones, and MDMA are all central nervous system stimulants that agonize dopamine neurotransmission, while nicotine and caffeine affect acetylcholine and adenosine, respectively. Opiate drugs serve as powerful analgesics through their effects on the endogenous opioid neurotransmitter system, and hallucinogenic drugs cause pronounced changes in sensory and perceptual experiences. The hallucinogens are variable with regard to the specific neurotransmitter systems they affect.
Self Check Questions
Critical Thinking Questions
1. The negative health consequences of both alcohol and tobacco products are well-documented. A drug like marijuana, on the other hand, is generally considered to be as safe, if not safer than these legal drugs. Why do you think marijuana use continues to be illegal in many parts of the United States?
2. Why are programs designed to educate people about the dangers of using tobacco products just as important as developing tobacco cessation programs?
3. Many people experiment with some sort of psychoactive substance at some point in their lives. Why do you think people are motivated to use substances that alter consciousness?
1. One possibility involves the cultural acceptance and long history of alcohol and tobacco use in our society. No doubt, money comes into play as well. Growing tobacco and producing alcohol on a large scale is a well-regulated and taxed process. Given that marijuana is essentially a weed that requires little care to grow, it would be much more difficult to regulate its production. Recent events suggest that cultural attitudes regarding marijuana are changing, and it is quite likely that its legal status will change accordingly.
2. Given that currently available programs designed to help people quit using tobacco products are not necessarily effective in the long term, programs designed to prevent people from using these products in the first place may be the best hope for dealing with the enormous public health concerns associated with tobacco use.
By the end of this section, you will be able to:
- Describe the symptoms and treatments of insomnia
- Recognize the symptoms of several parasomnias
- Describe the symptoms and treatments for sleep apnea
- Recognize risk factors associated with sudden infant death syndrome (SIDS) and steps to prevent it
- Describe the symptoms and treatments for narcolepsy
Many people experience disturbances in their sleep at some point in their lives. Depending on the population and sleep disorder being studied, between 30% and 50% of the population suffers from a sleep disorder at some point in their lives (Bixler, Kales, Soldatos, Kales, & Healey, 1979; Hossain & Shapiro, 2002; Ohayon, 1997, 2002; Ohayon & Roth, 2002). This section will describe several sleep disorders as well as some of their treatment options.
Insomnia, a consistent difficulty in falling or staying asleep, is the most common of the sleep disorders. Individuals with insomnia often experience long delays between the times that they go to bed and actually fall asleep. In addition, these individuals may wake up several times during the night only to find that they have difficulty getting back to sleep. As mentioned earlier, one of the criteria for insomnia involves experiencing these symptoms for at least three nights a week for at least one month’s time (Roth, 2007).
It is not uncommon for people suffering from insomnia to experience increased levels of anxiety about their inability to fall asleep. This becomes a self-perpetuating cycle because increased anxiety leads to increased arousal, and higher levels of arousal make the prospect of falling asleep even more unlikely. Chronic insomnia is almost always associated with feeling overtired and may be associated with symptoms of depression.
There may be many factors that contribute to insomnia, including age, drug use, exercise, mental status, and bedtime routines. Not surprisingly, insomnia treatment may take one of several different approaches. People who suffer from insomnia might limit their use of stimulant drugs (such as caffeine) or increase their amount of physical exercise during the day. Some people might turn to over-the-counter (OTC) or prescribed sleep medications to help them sleep, but this should be done sparingly because many sleep medications result in dependence and alter the nature of the sleep cycle, and they can increase insomnia over time. Those who continue to have insomnia, particularly if it affects their quality of life, should seek professional treatment.
Some forms of psychotherapy, such as cognitive-behavioral therapy, can help sufferers of insomnia. Cognitive-behavioral therapy is a type of psychotherapy that focuses on cognitive processes and problem behaviors. The treatment of insomnia likely would include stress management techniques and changes in problematic behaviors that could contribute to insomnia (e.g., spending more waking time in bed). Cognitive-behavioral therapy has been demonstrated to be quite effective in treating insomnia (Savard, Simard, Ivers, & Morin, 2005; Williams, Roth, Vatthauer, & McCrae, 2013).
A parasomnia is one of a group of sleep disorders in which unwanted, disruptive motor activity and/or experiences during sleep play a role. Parasomnias can occur in either REM or NREM phases of sleep. Sleepwalking, restless leg syndrome, and night terrors are all examples of parasomnias (Mahowald & Schenck, 2000).
In sleepwalking, or somnambulism, the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile. During periods of sleepwalking, sleepers often have their eyes open, but they are not responsive to attempts to communicate with them. Sleepwalking most often occurs during slow-wave sleep, but it can occur at any time during a sleep period in some affected individuals (Mahowald & Schenck, 2000).
Historically, somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants. However, the success rate of such treatments is questionable. Guilleminault et al. (2005) found that sleepwalking was not alleviated with the use of benzodiazepines. However, all of their somnambulistic patients who also suffered from sleep-related breathing problems showed a marked decrease in sleepwalking when their breathing problems were effectively treated.
On January 16, 1997, Scott Falater sat down to dinner with his wife and children and told them about difficulties he was experiencing on a project at work. After dinner, he prepared some materials to use in leading a church youth group the following morning, and then he attempted to repair the family’s swimming pool pump before retiring to bed. The following morning, he awoke to barking dogs and unfamiliar voices from downstairs. As he went to investigate what was going on, he was met by a group of police officers who arrested him for the murder of his wife (Cartwright, 2004; CNN, 1999).
Yarmila Falater’s body was found in the family’s pool with 44 stab wounds. A neighbor called the police after witnessing Falater standing over his wife’s body before dragging her into the pool. Upon a search of the premises, police found blood-stained clothes and a bloody knife in the trunk of Falater’s car, and he had blood stains on his neck.
Remarkably, Falater insisted that he had no recollection of hurting his wife in any way. His children and his wife’s parents all agreed that Falater had an excellent relationship with his wife and they couldn’t think of a reason that would provide any sort of motive to murder her (Cartwright, 2004).
Scott Falater had a history of regular episodes of sleepwalking as a child, and he had even behaved violently toward his sister once when she tried to prevent him from leaving their home in his pajamas during a sleepwalking episode. He suffered from no apparent anatomical brain anomalies or psychological disorders. It appeared that Scott Falater had killed his wife in his sleep, or at least, that is the defense he used when he was tried for his wife’s murder (Cartwright, 2004; CNN, 1999). In Falater’s case, a jury found him guilty of first degree murder in June of 1999 (CNN, 1999); however, there are other murder cases where the sleepwalking defense has been used successfully. As scary as it sounds, many sleep researchers believe that homicidal sleepwalking is possible in individuals suffering from the types of sleep disorders described below (Broughton et al., 1994; Cartwright, 2004; Mahowald, Schenck, & Cramer Bornemann, 2005; Pressman, 2007).
REM Sleep Behavior Disorder (RBD)
REM sleep behavior disorder (RBD) occurs when the muscle paralysis associated with the REM sleep phase does not occur. Individuals who suffer from RBD have high levels of physical activity during REM sleep, especially during disturbing dreams. These behaviors vary widely, but they can include kicking, punching, scratching, yelling, and behaving like an animal that has been frightened or attacked. People who suffer from this disorder can injure themselves or their sleeping partners when engaging in these behaviors. Furthermore, these types of behaviors ultimately disrupt sleep, although affected individuals have no memories that these behaviors have occurred (Arnulf, 2012).
This disorder is associated with a number of neurodegenerative diseases such as Parkinson’s disease. In fact, this relationship is so robust that some view the presence of RBD as a potential aid in the diagnosis and treatment of a number of neurodegenerative diseases (Ferini-Strambi, 2011). Clonazepam, an anti-anxiety medication with sedative properties, is most often used to treat RBD. It is administered alone or in conjunction with doses of melatonin (the hormone secreted by the pineal gland). As part of treatment, the sleeping environment is often modified to make it a safer place for those suffering from RBD (Zanigni, Calandra-Buonaura, Grimaldi, & Cortelli, 2011).
A person with restless leg syndrome has uncomfortable sensations in the legs during periods of inactivity or when trying to fall asleep. This discomfort is relieved by deliberately moving the legs, which, not surprisingly, contributes to difficulty in falling or staying asleep. Restless leg syndrome is quite common and has been associated with a number of other medical diagnoses, such as chronic kidney disease and diabetes (Mahowald & Schenck, 2000). There are a variety of drugs that treat restless leg syndrome: benzodiazepines, opiates, and anticonvulsants (Restless Legs Syndrome Foundation, n.d.).
Night terrors result in a sense of panic in the sufferer and are often accompanied by screams and attempts to escape from the immediate environment (Mahowald & Schenck, 2000). Although individuals suffering from night terrors appear to be awake, they generally have no memories of the events that occurred, and attempts to console them are ineffective. Typically, individuals suffering from night terrors will fall back asleep again within a short time. Night terrors apparently occur during the NREM phase of sleep (Provini, Tinuper, Bisulli, & Lugaresi, 2011). Generally, treatment for night terrors is unnecessary unless there is some underlying medical or psychological condition that is contributing to the night terrors (Mayo Clinic, n.d.).
Sleep apnea is defined by episodes during which a sleeper’s breathing stops. These episodes can last 10–20 seconds or longer and often are associated with brief periods of arousal. While individuals suffering from sleep apnea may not be aware of these repeated disruptions in sleep, they do experience increased levels of fatigue. Many individuals diagnosed with sleep apnea first seek treatment because their sleeping partners indicate that they snore loudly and/or stop breathing for extended periods of time while sleeping (Henry & Rosenthal, 2013). Sleep apnea is much more common in overweight people and is often associated with loud snoring. Surprisingly, sleep apnea may exacerbate cardiovascular disease (Sánchez-de-la-Torre, Campos-Rodriguez, & Barbé, 2012). While sleep apnea is less common in thin people, anyone, regardless of their weight, who snores loudly or gasps for air while sleeping, should be checked for sleep apnea.
While people are often unaware of their sleep apnea, they are keenly aware of some of the adverse consequences of insufficient sleep. Consider a patient who believed that as a result of his sleep apnea he “had three car accidents in six weeks. They were ALL my fault. Two of them I didn’t even know I was involved in until afterwards” (Henry & Rosenthal, 2013, p. 52). It is not uncommon for people suffering from undiagnosed or untreated sleep apnea to fear that their careers will be affected by the lack of sleep, illustrated by this statement from another patient, “I’m in a job where there’s a premium on being mentally alert. I was really sleepy… and having trouble concentrating…. It was getting to the point where it was kind of scary” (Henry & Rosenthal, 2013, p. 52).
There are two types of sleep apnea: obstructive sleep apnea and central sleep apnea. Obstructive sleep apnea occurs when an individual’s airway becomes blocked during sleep, and air is prevented from entering the lungs. In central sleep apnea, a disruption in the signals sent from the brain that regulate breathing causes periods of interrupted breathing (White, 2005).
One of the most common treatments for sleep apnea involves the use of a special device during sleep. A continuous positive airway pressure (CPAP) device includes a mask that fits over the sleeper’s nose and mouth, which is connected to a pump that delivers air into the person’s airways, forcing them to remain open, as shown in [(Link)]. Some newer CPAP masks are smaller and cover only the nose. This treatment option has proven to be effective for people suffering from mild to severe cases of sleep apnea (McDaid et al., 2009). However, alternative treatment options are being explored because consistent compliance by users of CPAP devices is a problem. Recently, a new EPAP (expiratory positive airway pressure) device has shown promise in double-blind trials as one such alternative (Berry, Kryger, & Massie, 2011).
In sudden infant death syndrome (SIDS) an infant stops breathing during sleep and dies. Infants younger than 12 months appear to be at the highest risk for SIDS, and boys have a greater risk than girls. A number of risk factors have been associated with SIDS including premature birth, smoking within the home, and hyperthermia. There may also be differences in both brain structure and function in infants that die from SIDS (Berkowitz, 2012; Mage & Donner, 2006; Thach, 2005).
The substantial amount of research on SIDS has led to a number of recommendations to parents to protect their children ([(Link)]). For one, research suggests that infants should be placed on their backs when put down to sleep, and their cribs should not contain any items which pose suffocation threats, such as blankets, pillows or padded crib bumpers (cushions that cover the bars of a crib). Infants should not have caps placed on their heads when put down to sleep in order to prevent overheating, and people in the child’s household should abstain from smoking in the home. Recommendations like these have helped to decrease the number of infant deaths from SIDS in recent years (Mitchell, 2009; Task Force on Sudden Infant Death Syndrome, 2011).
Unlike the other sleep disorders described in this section, a person with narcolepsy cannot resist falling asleep at inopportune times. These sleep episodes are often associated with cataplexy, which is a lack of muscle tone or muscle weakness, and in some cases involves complete paralysis of the voluntary muscles. This is similar to the kind of paralysis experienced by healthy individuals during REM sleep (Burgess & Scammell, 2012; Hishikawa & Shimizu, 1995; Luppi et al., 2011). Narcoleptic episodes take on other features of REM sleep. For example, around one third of individuals diagnosed with narcolepsy experience vivid, dream-like hallucinations during narcoleptic attacks (Chokroverty, 2010).
Surprisingly, narcoleptic episodes are often triggered by states of heightened arousal or stress. The typical episode can last from a minute or two to half an hour. Once awakened from a narcoleptic attack, people report that they feel refreshed (Chokroverty, 2010). Obviously, regular narcoleptic episodes could interfere with the ability to perform one’s job or complete schoolwork, and in some situations, narcolepsy can result in significant harm and injury (e.g., driving a car or operating machinery or other potentially dangerous equipment).
Generally, narcolepsy is treated using psychomotor stimulant drugs, such as amphetamines (Mignot, 2012). These drugs promote increased levels of neural activity. Narcolepsy is associated with reduced levels of the signaling molecule hypocretin in some areas of the brain (De la Herrán-Arita & Drucker-Colín, 2012; Han, 2012), and the traditional stimulant drugs do not have direct effects on this system. Therefore, it is quite likely that new medications that are developed to treat narcolepsy will be designed to target the hypocretin system.
There is a tremendous amount of variability among sufferers, both in terms of how symptoms of narcolepsy manifest and the effectiveness of currently available treatment options. This is illustrated by McCarty’s (2010) case study of a 50-year-old woman who sought help for the excessive sleepiness during normal waking hours that she had experienced for several years. She indicated that she had fallen asleep at inappropriate or dangerous times, including while eating, while socializing with friends, and while driving her car. During periods of emotional arousal, the woman complained that she felt some weakness in the right side of her body. Although she did not experience any dream-like hallucinations, she was diagnosed with narcolepsy as a result of sleep testing. In her case, the fact that her cataplexy was confined to the right side of her body was quite unusual. Early attempts to treat her condition with a stimulant drug alone were unsuccessful. However, when a stimulant drug was used in conjunction with a popular antidepressant, her condition improved dramatically.
Many individuals suffer from some type of sleep disorder or disturbance at some point in their lives. Insomnia is a common experience in which people have difficulty falling or staying asleep. Parasomnias involve unwanted motor behavior or experiences throughout the sleep cycle and include RBD, sleepwalking, restless leg syndrome, and night terrors. Sleep apnea occurs when individuals stop breathing during their sleep, and in the case of sudden infant death syndrome, infants will stop breathing during sleep and die. Narcolepsy involves an irresistible urge to fall asleep during waking hours and is often associated with cataplexy and hallucinations.
Self Check Questions
Critical Thinking Questions
1. One of the recommendations that therapists will make to people who suffer from insomnia is to spend less waking time in bed. Why do you think spending waking time in bed might interfere with the ability to fall asleep later?
2. How is narcolepsy with cataplexy similar to and different from REM sleep?
Personal Application Question
3. What factors might contribute to your own experiences with insomnia?
1. Answers will vary. One possible explanation might invoke principles of associative learning. If the bed represents a place for socializing, studying, eating, and so on, then it is possible that it will become a place that elicits higher levels of arousal, which would make falling asleep at the appropriate time more difficult. Answers could also consider the self-perpetuating cycle referred to when describing insomnia. If an individual is having trouble falling asleep and that generates anxiety, it might make sense to remove him from the context where sleep would normally take place to try to avoid anxiety being associated with that context.
2. Similarities include muscle atonia and the hypnagogic hallucinations associated with narcoleptic episodes. The differences involve the uncontrollable nature of narcoleptic attacks and the fact that these come on in situations that would normally not be associated with sleep of any kind (e.g., instances of heightened arousal or emotionality).
By the end of this section, you will be able to:
- Differentiate between REM and non-REM sleep
- Describe the differences between the four stages of non-REM sleep
- Understand the role that REM and non-REM sleep play in learning and memory
Sleep is not a uniform state of being. Instead, sleep is composed of several different stages that can be differentiated from one another by the patterns of brain wave activity that occur during each stage. These changes in brain wave activity can be visualized using EEG and are distinguished from one another by both the frequency and amplitude of brain waves ([(Link)]). Sleep can be divided into two different general phases: REM sleep and non-REM (NREM) sleep. Rapid eye movement (REM) sleep is characterized by darting movements of the eyes under closed eyelids. Brain waves during REM sleep appear very similar to brain waves during wakefulness. In contrast, non-REM (NREM) sleep is subdivided into four stages distinguished from each other and from wakefulness by characteristic patterns of brain waves. The first four stages of sleep are NREM sleep, while the fifth and final stage of sleep is REM sleep. In this section, we will discuss each of these stages of sleep and their associated patterns of brain wave activity.
NREM STAGES OF SLEEP
The first stage of NREM sleep is known as stage 1 sleep. Stage 1 sleep is a transitional phase that occurs between wakefulness and sleep, the period during which we drift off to sleep. During this time, there is a slowdown in both the rates of respiration and heartbeat. In addition, stage 1 sleep involves a marked decrease in both overall muscle tension and core body temperature.
In terms of brain wave activity, stage 1 sleep is associated with both alpha and theta waves. The early portion of stage 1 sleep produces alpha waves, which are relatively low frequency (8–13 Hz), high amplitude patterns of electrical activity (waves) that become synchronized ([(Link)]). This pattern of brain wave activity resembles that of someone who is very relaxed, yet awake. As an individual continues through stage 1 sleep, there is an increase in theta wave activity. Theta waves are even lower frequency (4–7 Hz), higher amplitude brain waves than alpha waves. It is relatively easy to wake someone from stage 1 sleep; in fact, people often report that they have not been asleep if they are awoken during stage 1 sleep.
As we move into stage 2 sleep, the body goes into a state of deep relaxation. Theta waves still dominate the activity of the brain, but they are interrupted by brief bursts of activity known as sleep spindles ([(Link)]). A sleep spindle is a rapid burst of higher frequency brain waves that may be important for learning and memory (Fogel & Smith, 2011; Poe, Walsh, & Bjorness, 2010). In addition, the appearance of K-complexes is often associated with stage 2 sleep. A K-complex is a very high amplitude pattern of brain activity that may in some cases occur in response to environmental stimuli. Thus, K-complexes might serve as a bridge to higher levels of arousal in response to what is going on in our environments (Halász, 1993; Steriade & Amzica, 1998).
Stage 3 and stage 4 of sleep are often referred to as deep sleep or slow-wave sleep because these stages are characterized by low frequency (up to 4 Hz), high amplitude delta waves ([(Link)]). During this time, an individual’s heart rate and respiration slow dramatically. It is much more difficult to awaken someone from sleep during stage 3 and stage 4 than during earlier stages. Interestingly, individuals who have increased levels of alpha brain wave activity (more often associated with wakefulness and transition into stage 1 sleep) during stage 3 and stage 4 often report that they do not feel refreshed upon waking, regardless of how long they slept (Stone, Taylor, McCrae, Kalsekar, & Lichstein, 2008).
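The frequency bands described above can be summarized in a short sketch. The boundary values below are illustrative assumptions drawn from the ranges in the text (delta: up to 4 Hz; theta: 4–7 Hz; alpha: 8–13 Hz); real sleep scoring also considers amplitude and waveform shape, not frequency alone.

```python
# Illustrative sketch: mapping an EEG frequency (in Hz) to the brain-wave
# bands described in the text. Exact cutoffs between bands are assumptions.

def classify_band(freq_hz: float) -> str:
    """Return the brain-wave band name for a frequency in Hz."""
    if freq_hz < 4:
        return "delta"  # slow-wave sleep (stages 3 and 4)
    elif freq_hz < 8:
        return "theta"  # dominant in stages 1 and 2
    elif freq_hz <= 13:
        return "alpha"  # relaxed wakefulness and early stage 1
    else:
        return "beta/other"  # faster activity typical of alert wakefulness

for f in (2, 6, 10, 20):
    print(f, "Hz ->", classify_band(f))
```

Running the loop prints one band per sample frequency, showing how each stage of sleep is dominated by progressively slower waves as sleep deepens.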
As mentioned earlier, REM sleep is marked by rapid movements of the eyes. The brain waves associated with this stage of sleep are very similar to those observed when a person is awake, as shown in [(Link)], and this is the period of sleep in which dreaming occurs. It is also associated with paralysis of muscle systems in the body with the exception of those that make circulation and respiration possible. Therefore, no movement of voluntary muscles occurs during REM sleep in a normal individual; REM sleep is often referred to as paradoxical sleep because of this combination of high brain activity and lack of muscle tone. Like NREM sleep, REM has been implicated in various aspects of learning and memory (Wagner, Gais, & Born, 2001), although there is disagreement within the scientific community about how important both NREM and REM sleep are for normal learning and memory (Siegel, 2001).
If people are deprived of REM sleep and then allowed to sleep without disturbance, they will spend more time in REM sleep in what would appear to be an effort to recoup the lost time in REM. This is known as the REM rebound, and it suggests that REM sleep is also homeostatically regulated. Aside from the role that REM sleep may play in processes related to learning and memory, REM sleep may also be involved in emotional processing and regulation. In such instances, REM rebound may actually represent an adaptive response to stress in nondepressed individuals by suppressing the emotional salience of aversive events that occurred in wakefulness (Suchecki, Tiba, & Machado, 2012).
While sleep deprivation in general is associated with a number of negative consequences (Brown, 2012), the consequences of REM deprivation appear to be less profound (as discussed in Siegel, 2001). In fact, some have suggested that REM deprivation can actually be beneficial in some circumstances. For instance, REM sleep deprivation has been demonstrated to improve symptoms of people suffering from major depression, and many effective antidepressant medications suppress REM sleep (Riemann, Berger, & Voderholzer, 2001; Vogel, 1975).
It should be pointed out that some reviews of the literature challenge this finding, suggesting that sleep deprivation that is not limited to REM sleep is just as effective or more effective at alleviating depressive symptoms among some patients suffering from depression. In either case, why sleep deprivation improves the mood of some patients is not entirely understood (Giedke & Schwärzler, 2002). Recently, however, some have suggested that sleep deprivation might change emotional processing so that various stimuli are more likely to be perceived as positive in nature (Gujar, Yoo, Hu, & Walker, 2011). The hypnogram below ([(Link)]) shows a person’s passage through the stages of sleep.
The meaning of dreams varies across different cultures and periods of time. By the late 19th century, German psychiatrist Sigmund Freud had become convinced that dreams represented an opportunity to gain access to the unconscious. By analyzing dreams, Freud thought people could increase self-awareness and gain valuable insight to help them deal with the problems they faced in their lives. Freud made distinctions between the manifest content and the latent content of dreams. Manifest content is the actual content, or storyline, of a dream. Latent content, on the other hand, refers to the hidden meaning of a dream. For instance, if a woman dreams about being chased by a snake, Freud might have argued that this represents the woman’s fear of sexual intimacy, with the snake serving as a symbol of a man’s penis.
Freud was not the only theorist to focus on the content of dreams. The 20th century Swiss psychiatrist Carl Jung believed that dreams allowed us to tap into the collective unconscious. The collective unconscious, as described by Jung, is a theoretical repository of information he believed to be shared by everyone. According to Jung, certain symbols in dreams reflected universal archetypes with meanings that are similar for all people regardless of culture or location.
The sleep and dreaming researcher Rosalind Cartwright, however, believes that dreams simply reflect life events that are important to the dreamer. Unlike Freud and Jung, Cartwright’s ideas about dreaming have found empirical support. For example, she and her colleagues published a study in which women going through divorce were asked several times over a five-month period to report the degree to which their former spouses were on their minds. These same women were awakened during REM sleep in order to provide a detailed account of their dream content. There was a significant positive correlation between the degree to which women thought about their former spouses during waking hours and the number of times their former spouses appeared as characters in their dreams (Cartwright, Agargun, Kirkby, & Friedman, 2006). Recent research (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013) has uncovered new techniques by which researchers may effectively detect and classify the visual images that occur during dreaming by using fMRI for neural measurement of brain activity patterns, opening the way for additional research in this area.
Recently, neuroscientists have also become interested in understanding why we dream. For example, Hobson (2009) suggests that dreaming may represent a state of protoconsciousness. In other words, dreaming involves constructing a virtual reality in our heads that we might use to help us during wakefulness. Among a variety of neurobiological evidence, John Hobson cites research on lucid dreams as an opportunity to better understand dreaming in general. Lucid dreams are dreams in which certain aspects of wakefulness are maintained during a dream state. In a lucid dream, a person becomes aware of the fact that they are dreaming, and as such, they can control the dream’s content (LaBerge, 1990).
The different stages of sleep are characterized by the patterns of brain waves associated with each stage. As a person transitions from being awake to falling asleep, alpha waves are replaced by theta waves. Sleep spindles and K-complexes emerge in stage 2 sleep. Stage 3 and stage 4 are described as slow-wave sleep that is marked by a predominance of delta waves. REM sleep involves rapid movements of the eyes, paralysis of voluntary muscles, and dreaming. Both NREM and REM sleep appear to play important roles in learning and memory. Dreams may represent life events that are important to the dreamer. Alternatively, dreaming may represent a state of protoconsciousness, or a virtual reality, in the mind that helps a person during wakefulness.
Self Check Questions
Critical Thinking Questions
1. Freud believed that dreams provide important insight into the unconscious mind. He maintained that a dream’s manifest content could provide clues into an individual’s unconscious. What potential criticisms exist for this particular perspective?
2. Some people claim that sleepwalking and talking in your sleep involve individuals acting out their dreams. Why is this particular explanation unlikely?
3. Researchers believe that one important function of sleep is to facilitate learning and memory. How does knowing this help you in your college studies? What changes could you make to your study and sleep habits to maximize your mastery of the material covered in class?
1. The subjective nature of dream analysis is one criticism. Psychoanalysts are charged with helping their clients interpret the true meaning of a dream. There is no way to refute or confirm whether or not these interpretations are accurate. The notion that “sometimes a cigar is just a cigar” (sometimes attributed to Freud but not definitively shown to be his) makes it clear that there is no systematic, objective system in place for dream analysis.
2. Dreaming occurs during REM sleep. One of the hallmarks of this particular stage of sleep is the paralysis of the voluntary musculature, which would make acting out dreams unlikely.
By the end of this section, you will be able to:
- Describe areas of the brain involved in sleep
- Understand hormone secretions associated with sleep
- Describe several theories aimed at explaining the function of sleep
We spend approximately one-third of our lives sleeping. Given that the average life expectancy for U.S. citizens falls between 73 and 79 years (Singh & Siahpush, 2006), we can expect to spend approximately 25 years of our lives sleeping. Some animals never sleep (e.g., several fish and amphibian species); other animals can go extended periods of time without sleep and without apparent negative consequences (e.g., dolphins); yet some animals (e.g., rats) die after two weeks of sleep deprivation (Siegel, 2008). Why do we devote so much time to sleeping? Is it absolutely essential that we sleep? This section will consider these questions and explore various explanations for why we sleep.
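The "25 years" figure above follows directly from the arithmetic. As a back-of-the-envelope check (not part of the original text; the 8-hour nightly figure is an assumption standing in for "one-third of each day"):

```python
# Rough check of time spent asleep over a lifetime,
# assuming ~8 hours of sleep per 24-hour day.
life_expectancy_years = (73 + 79) / 2   # midpoint of the 73-79 range cited above
fraction_asleep = 8 / 24                # one-third of each day
years_asleep = life_expectancy_years * fraction_asleep
print(round(years_asleep, 1))           # -> 25.3
```

The result, about 25 years, matches the approximation given in the text.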
WHAT IS SLEEP?
You have read that sleep is distinguished by low levels of physical activity and reduced sensory awareness. As discussed by Siegel (2008), a definition of sleep must also include mention of the interplay of the circadian and homeostatic mechanisms that regulate sleep. Homeostatic regulation of sleep is evidenced by sleep rebound following sleep deprivation. Sleep rebound refers to the fact that a sleep-deprived individual will fall asleep more quickly during subsequent opportunities for sleep. Sleep is characterized by certain patterns of activity of the brain that can be visualized using electroencephalography (EEG), and different phases of sleep can be differentiated using EEG as well ([(Link)]).
Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another. Some of these areas include the thalamus, the hypothalamus, and the pons. As already mentioned, the hypothalamus contains the SCN—the biological clock of the body—in addition to other nuclei that, in conjunction with the thalamus, regulate slow-wave sleep. The pons is important for regulating rapid eye movement (REM) sleep (National Institutes of Health, n.d.).
Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands including: melatonin, follicle stimulating hormone (FSH), luteinizing hormone (LH), and growth hormone (National Institutes of Health, n.d.). You have read that the pineal gland releases melatonin during sleep ([(Link)]). Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system (Hardeland et al., 2006). During sleep, the pituitary gland secretes both FSH and LH which are important in regulating the reproductive system (Christensen et al., 2012; Sofikitis et al., 2008). The pituitary gland also secretes growth hormone, during sleep, which plays a role in physical growth and maturation as well as other metabolic processes (Bartke, Sun, & Longo, 2013).
WHY DO WE SLEEP?
Given the central role that sleep plays in our lives and the number of adverse consequences that have been associated with sleep deprivation, one would think that we would have a clear understanding of why it is that we sleep. Unfortunately, this is not the case; however, several hypotheses have been proposed to explain the function of sleep.
Adaptive Function of Sleep
One popular hypothesis of sleep incorporates the perspective of evolutionary psychology. Evolutionary psychology is a discipline that studies how universal patterns of behavior and cognitive processes have evolved over time as a result of natural selection. Variations and adaptations in cognition and behavior make individuals more or less successful in reproducing and passing their genes to their offspring. One hypothesis from this perspective might argue that sleep is essential to restore resources that are expended during the day. Just as bears hibernate in the winter when resources are scarce, perhaps people sleep at night to reduce their energy expenditures. While this is an intuitive explanation of sleep, there is little research that supports this explanation. In fact, it has been suggested that there is no reason to think that energetic demands could not be addressed with periods of rest and inactivity (Frank, 2006; Rial et al., 2007), and some research has actually found a negative correlation between energetic demands and the amount of time spent sleeping (Capellini, Barton, McNamara, Preston, & Nunn, 2008).
Another evolutionary hypothesis of sleep holds that our sleep patterns evolved as an adaptive response to predatory risks, which increase in darkness. Thus we sleep in safe areas to reduce the chance of harm. Again, this is an intuitive and appealing explanation for why we sleep. Perhaps our ancestors spent extended periods of time asleep to avoid drawing the attention of potential predators. Comparative research indicates, however, that the relationship that exists between predatory risk and sleep is very complex and equivocal. Some research suggests that species that face higher predatory risks sleep fewer hours than other species (Capellini et al., 2008), while other researchers suggest there is no relationship between the amount of time a given species spends in deep sleep and its predation risk (Lesku, Roth, Amlaner, & Lima, 2006).
It is quite possible that sleep serves no single universally adaptive function, and different species have evolved different patterns of sleep in response to their unique evolutionary pressures. While we have discussed the negative outcomes associated with sleep deprivation, it should be pointed out that there are many benefits that are associated with adequate amounts of sleep. A few such benefits listed by the National Sleep Foundation (n.d.) include maintaining healthy weight, lowering stress levels, improving mood, and increasing motor coordination, as well as a number of benefits related to cognition and memory formation.
Cognitive Function of Sleep
Another theory regarding why we sleep involves sleep’s importance for cognitive function and memory formation (Rattenborg, Lesku, Martinez-Gonzalez, & Lima, 2007). Indeed, we know sleep deprivation results in disruptions in cognition and memory deficits (Brown, 2012), leading to impairments in our abilities to maintain attention, make decisions, and recall long-term memories. Moreover, these impairments become more severe as the amount of sleep deprivation increases (Alhola & Polo-Kantola, 2007). Furthermore, slow-wave sleep after learning a new task can improve resultant performance on that task (Huber, Ghilardi, Massimini, & Tononi, 2004) and seems essential for effective memory formation (Stickgold, 2005). Understanding the impact of sleep on cognitive function should help you understand that cramming all night for a test may not be effective and can even prove counterproductive.
Sleep has also been associated with other cognitive benefits. Research indicates that included among these possible benefits are increased capacities for creative thinking (Cai, Mednick, Harrison, Kanady, & Mednick, 2009; Wagner, Gais, Haider, Verleger, & Born, 2004), language learning (Fenn, Nusbaum, & Margoliash, 2003; Gómez, Bootzin, & Nadel, 2006), and inferential judgments (Ellenbogen, Hu, Payne, Titone, & Walker, 2007). It is possible that even the processing of emotional information is influenced by certain aspects of sleep (Walker, 2009).
We devote a very large portion of time to sleep, and our brains have complex systems that control various aspects of sleep. Several hormones important for physical growth and maturation are secreted during sleep. While the reason we sleep remains something of a mystery, there is some evidence to suggest that sleep is very important to learning and memory.
Self Check Questions
Critical Thinking Questions
1. If theories that assert sleep is necessary for restoration and recovery from daily energetic demands are correct, what do you predict about the relationship that would exist between individuals’ total sleep duration and their level of activity?
2. How could researchers determine if given areas of the brain are involved in the regulation of sleep?
3. Differentiate the evolutionary theories of sleep and make a case for the one with the most compelling evidence.
Personal Application Question
4. Have you (or someone you know) ever experienced significant periods of sleep deprivation because of simple insomnia, high levels of stress, or as a side effect from a medication? What were the consequences of missing out on sleep?
1. Those individuals (or species) that expend the greatest amounts of energy would require the longest periods of sleep.
2. Researchers could use lesion or brain stimulation techniques to determine how deactivation or activation of a given brain region affects behavior. Furthermore, researchers could use any number of brain imaging techniques like fMRI or CT scans to come to these conclusions.
3. One evolutionary theory of sleep holds that sleep is essential for restoration of resources that are expended during the demands of day-to-day life. A second theory proposes that our sleep patterns evolved as an adaptive response to predatory risks, which increase in darkness. The first theory has little or no empirical support, and the second theory is supported by some, though not all, research.
By the end of this section, you will be able to:
- Understand what is meant by consciousness
- Explain how circadian rhythms are involved in regulating the sleep-wake cycle, and how circadian cycles can be disrupted
- Discuss the concept of sleep debt
Consciousness describes our awareness of internal and external stimuli. Awareness of internal stimuli includes feeling pain, hunger, thirst, sleepiness, and being aware of our thoughts and emotions. Awareness of external stimuli includes seeing the light from the sun, feeling the warmth of a room, and hearing the voice of a friend.
We experience different states of consciousness and different levels of awareness on a regular basis. We might even describe consciousness as a continuum that ranges from full awareness to a deep sleep. Sleep is a state marked by relatively low levels of physical activity and reduced sensory awareness that is distinct from periods of rest that occur during wakefulness. Wakefulness is characterized by high levels of sensory awareness, thought, and behavior. In between these extremes are states of consciousness related to daydreaming, intoxication as a result of alcohol or other drug use, meditative states, hypnotic states, and altered states of consciousness following sleep deprivation. We might also experience unconscious states of being via drug-induced anesthesia for medical purposes. Often, we are not completely aware of our surroundings, even when we are fully awake. For instance, have you ever daydreamed while driving home from work or school without really thinking about the drive itself? You were capable of engaging in all of the complex tasks involved with operating a motor vehicle even though you were not aware of doing so. Many of these processes, like much of psychological behavior, are rooted in our biology.
Biological rhythms are internal rhythms of biological activity. A woman’s menstrual cycle is an example of a biological rhythm—a recurring, cyclical pattern of bodily changes. One complete menstrual cycle takes about 28 days—a lunar month—but many biological cycles are much shorter. For example, body temperature fluctuates cyclically over a 24-hour period ([(Link)]). Alertness is associated with higher body temperatures, and sleepiness with lower body temperatures.
This pattern of temperature fluctuation, which repeats every day, is one example of a circadian rhythm. A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours. Our sleep-wake cycle, which is linked to our environment’s natural light-dark cycle, is perhaps the most obvious example of a circadian rhythm, but we also have daily fluctuations in heart rate, blood pressure, blood sugar, and body temperature. Some circadian rhythms play a role in changes in our state of consciousness.
If we have biological rhythms, then is there some sort of biological clock? In the brain, the hypothalamus, which lies above the pituitary gland, is a main center of homeostasis. Homeostasis is the tendency to maintain a balance, or optimal level, within a biological system.
The brain’s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus (SCN). The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present, allowing this internal clock to be synchronized with the outside world (Klein, Moore, & Reppert, 1991; Welsh, Takahashi, & Kay, 2010) ([(Link)]).
PROBLEMS WITH CIRCADIAN RHYTHMS
Generally, and for most people, our circadian cycles are aligned with the outside world. For example, most people sleep during the night and are awake during the day. One important regulator of sleep-wake cycles is the hormone melatonin. The pineal gland, an endocrine structure located inside the brain that releases melatonin, is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep (Hardeland, Pandi-Perumal, & Cardinali, 2006). Melatonin release is stimulated by darkness and inhibited by light.
There are individual differences with regards to our sleep-wake cycle. For instance, some people would say they are morning people, while others would consider themselves to be night owls. These individual differences in circadian patterns of activity are known as a person’s chronotype, and research demonstrates that morning larks and night owls differ with regard to sleep regulation (Taillard, Philip, Coste, Sagaspe, & Bioulac, 2003). Sleep regulation refers to the brain’s control of switching between sleep and wakefulness as well as coordinating this cycle with the outside world.
Disruptions of Normal Sleep
Whether lark, owl, or somewhere in between, there are situations in which a person’s circadian clock gets out of synchrony with the external environment. One way that this happens involves traveling across multiple time zones. When we do this, we often experience jet lag. Jet lag is a collection of symptoms that results from the mismatch between our internal circadian cycles and our environment. These symptoms include fatigue, sluggishness, irritability, and insomnia (i.e., a consistent difficulty in falling or staying asleep for at least three nights a week over a month’s time) (Roth, 2007).
Individuals who do rotating shift work are also likely to experience disruptions in circadian cycles. Rotating shift work refers to a work schedule that changes from early to late on a daily or weekly basis. For example, a person may work from 7:00 a.m. to 3:00 p.m. on Monday, 3:00 p.m. to 11:00 p.m. on Tuesday, and 11:00 p.m. to 7:00 a.m. on Wednesday. In such instances, the individual’s schedule changes so frequently that it becomes difficult for a normal circadian rhythm to be maintained. This often results in sleeping problems, and it can lead to signs of depression and anxiety. These kinds of schedules are common for individuals working in health care professions and service industries, and they are associated with persistent feelings of exhaustion and agitation that can make someone more prone to making mistakes on the job (Gold et al., 1992; Presser, 1995).
Rotating shift work has pervasive effects on the lives and experiences of individuals engaged in that kind of work, which is clearly illustrated in stories reported in a qualitative study that researched the experiences of middle-aged nurses who worked rotating shifts (West, Boughton & Byrnes, 2009). Several of the nurses interviewed commented that their work schedules affected their relationships with their family. One of the nurses said,
If you’ve had a partner who does work regular job 9 to 5 office hours . . . the ability to spend time, good time with them when you’re not feeling absolutely exhausted . . . that would be one of the problems that I’ve encountered. (West et al., 2009, p. 114)
While disruptions in circadian rhythms can have negative consequences, there are things we can do to help us realign our biological clocks with the external environment. Some of these approaches, such as using a bright light as shown in [(Link)], have been shown to alleviate some of the problems experienced by individuals suffering from jet lag or from the consequences of rotating shift work. Because the biological clock is driven by light, exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression (Huang, Tsai, Chen, & Hsu, 2013).
When people have difficulty getting sleep due to their work or the demands of day-to-day life, they accumulate a sleep debt. A person with a sleep debt does not get sufficient sleep on a chronic basis. The consequences of sleep debt include decreased levels of alertness and mental efficiency. Interestingly, since the advent of electric light, the amount of sleep that people get has declined. While we certainly welcome the convenience of having the darkness lit up, we also suffer the consequences of reduced amounts of sleep because we are more active during the nighttime hours than our ancestors were. As a result, many of us sleep less than 7–8 hours a night and accrue a sleep debt. While there is tremendous variation in any given individual’s sleep needs, the National Sleep Foundation (n.d.) cites research to estimate that newborns require the most sleep (between 12 and 18 hours a night) and that this amount declines to just 7–9 hours by the time we are adults.
If you lie down to take a nap and fall asleep very easily, chances are you may have sleep debt. Given that college students are notorious for suffering from significant sleep debt (Hicks, Fernandez, & Pelligrini, 2001; Hicks, Johnson, & Pelligrini, 1992; Miller, Shattuck, & Matsangas, 2010), chances are you and your classmates deal with sleep debt-related issues on a regular basis. [(Link)] shows recommended amounts of sleep at different ages.
| Age | Nightly Sleep Needs |
|---|---|
| 0–3 months | 12–18 hours |
| 3 months–1 year | 14–15 hours |
| 1–3 years | 12–14 hours |
| 3–5 years | 11–13 hours |
| 5–10 years | 10–11 hours |
| 10–18 years | 8–10 hours |
| 18 and older | 7–9 hours |
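As a rough illustration (not part of the original text), the recommendations in the table can be turned into a simple sleep-debt estimate. The age-group labels and the choice to measure debt against the minimum recommended amount are assumptions made for this sketch:

```python
# Hypothetical sketch: weekly sleep debt relative to the minimum
# recommended nightly hours for each age group (values from the table).
RECOMMENDED = {  # age group -> (min, max) nightly hours
    "0-3 months": (12, 18),
    "3 months-1 year": (14, 15),
    "1-3 years": (12, 14),
    "3-5 years": (11, 13),
    "5-10 years": (10, 11),
    "10-18 years": (8, 10),
    "18 and older": (7, 9),
}

def weekly_sleep_debt(age_group, nightly_hours):
    """Hours of sleep debt accrued over a week, relative to the
    minimum recommended amount for the age group (0 if none)."""
    minimum, _ = RECOMMENDED[age_group]
    return max(0.0, (minimum - nightly_hours) * 7)

# An adult sleeping 6 hours a night falls 7 hours behind each week.
print(weekly_sleep_debt("18 and older", 6))  # -> 7.0
```

The point of the sketch is that even a modest nightly shortfall compounds: one "missing" hour per night is a full night's worth of lost sleep by the end of the week.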
Sleep debt and sleep deprivation have significant negative psychological and physiological consequences [(Link)]. As mentioned earlier, lack of sleep can result in decreased mental alertness and cognitive function. In addition, sleep deprivation often results in depression-like symptoms. These effects can occur as a function of accumulated sleep debt or in response to more acute periods of sleep deprivation. It may surprise you to know that sleep deprivation is associated with obesity, increased blood pressure, increased levels of stress hormones, and reduced immune functioning (Banks & Dinges, 2007). Furthermore, individuals suffering from sleep deprivation can also put themselves and others at risk when they put themselves behind the wheel of a car or work with dangerous machinery. Some research suggests that sleep deprivation affects cognitive and motor function as much as, if not more than, alcohol intoxication (Williamson & Feyer, 2000).
The amount of sleep we get varies across the lifespan. When we are very young, we spend up to 16 hours a day sleeping. As we grow older, we sleep less. In fact, a meta-analysis, which is a study that combines the results of many related studies, conducted within the last decade indicates that by the time we are 65 years old, we average fewer than 7 hours of sleep per day (Ohayon, Carskadon, Guilleminault, & Vitiello, 2004). As the amount of time we sleep varies over our lifespan, presumably the sleep debt would adjust accordingly.
States of consciousness vary over the course of the day and throughout our lives. Important factors in these changes are the biological rhythms, and, more specifically, the circadian rhythms generated by the suprachiasmatic nucleus (SCN). Typically, our biological clocks are aligned with our external environment, and light tends to be an important cue in setting this clock. When people travel across multiple time zones or work rotating shifts, they can experience disruptions of their circadian cycles that can lead to insomnia, sleepiness, and decreased alertness. Bright light therapy has shown promise in dealing with circadian disruptions. If people go extended periods of time without sleep, they will accrue a sleep debt and potentially experience a number of adverse psychological and physiological consequences.
Self Check Questions
Critical Thinking Questions
1. Healthcare professionals often work rotating shifts. Why is this problematic? What can be done to deal with potential problems?
2. Generally, humans are considered diurnal which means we are awake during the day and asleep during the night. Many rodents, on the other hand, are nocturnal. Why do you think different animals have such different sleep-wake cycles?
Personal Application Questions
3. We experience shifts in our circadian clocks in the fall and spring of each year with time changes associated with daylight saving time. Is springing ahead or falling back easier for you to adjust to, and why do you think that is?
4. What do you do to adjust to the differences in your daily schedule throughout the week? Are you running a sleep debt when daylight saving time begins or ends?
1. Given that rotating shift work can lead to exhaustion and decreased mental efficiency, individuals working under these conditions are more likely to make mistakes on the job. The implications for this in the health care professions are obvious. Those in health care professions could be educated about the benefits of light-dark exposure to help alleviate such problems.
2. Different species have different evolutionary histories, and they have adapted to their environments in different ways. There are a number of different possible explanations as to why a given species is diurnal or nocturnal. Perhaps humans would be most vulnerable to threats during the evening hours when light levels are low. Therefore, it might make sense to be in shelter during this time. Rodents, on the other hand, are faced with a number of predatory threats, so perhaps being active at night minimizes the risk from predators such as birds that use their visual senses to locate prey.
Our lives involve regular, dramatic changes in the degree to which we are aware of our surroundings and our internal states. While awake, we feel alert and aware of the many important things going on around us. Our experiences change dramatically while we are in deep sleep and once again when we are dreaming. Sometimes, we seek to alter our awareness and experience by using psychoactive drugs; that is, drugs that alter the central nervous system and produce a change of consciousness or a deep meditative state. Consciousness is an awareness of external and internal stimuli. As discussed in the chapter on the biology of psychology, the brain activity during different phases of consciousness produces characteristic brain waves, which can be observed by electroencephalography (EEG) and other types of analysis.
This chapter will discuss states of consciousness with a particular emphasis on sleep. The different stages of sleep will be identified, and sleep disorders will be described. The chapter will close with discussions of altered states of consciousness produced by psychoactive drugs, hypnosis, and meditation.
Aggarwal, S. K., Carter, G. T., Sullivan, M. D., ZumBrunnen, C., Morrill, R., & Mayer, J. D. (2009). Medicinal use of cannabis in the United States: Historical perspectives, current trends, and future directions. Journal of Opioid Management, 5, 153–168.
Alhola, P., & Polo-Kantola, P. (2007). Sleep deprivation: Impact on cognitive performance. Neuropsychiatric Disease and Treatment, 3, 553–557.
Alladin, A. (2012). Cognitive hypnotherapy for major depressive disorder. The American Journal of Clinical Hypnosis, 54, 275–293.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: Author.
Aquina, C. T., Marques-Baptista, A., Bridgeman, P., & Merlin, M. A. (2009). Oxycontin abuse and overdose. Postgraduate Medicine, 121, 163–167.
Arnulf, I. (2012). REM sleep behavior disorder: Motor manifestations and pathophysiology. Movement Disorders, 27, 677–689.
Augustinova, M., & Ferrand, L. (2012). Suggestion does not de-automatize word reading: Evidence from the semantically based Stroop task. Psychonomic Bulletin & Review, 19, 521–527.
Banks, S., & Dinges, D. F. (2007). Behavioral and physiological consequences of sleep restriction. Journal of Clinical Sleep Medicine, 3, 519–528.
Bartke, A., Sun, L. Y., & Longo, V. (2013). Somatotropic signaling: Trade-offs between growth, reproductive development, and longevity. Physiological Reviews, 93, 571–598.
Berkowitz, C. D. (2012). Sudden infant death syndrome, sudden unexpected infant death, and apparent life-threatening events. Advances in Pediatrics, 59, 183–208.
Berry, R. B., Kryger, M. H., & Massie, C. A. (2011). A novel nasal excitatory positive airway pressure (EPAP) device for the treatment of obstructive sleep apnea: A randomized controlled trial. Sleep, 34, 479–485.
Bixler, E. O., Kales, A., Soldatos, C. R., Kales, J. D., & Healey, S. (1979). Prevalence of sleep disorders in the Los Angeles metropolitan area. American Journal of Psychiatry, 136, 1257–1262.
Bostwick, J. M. (2012). Blurred boundaries: The therapeutics and politics of medical marijuana. Mayo Clinic Proceedings, 87, 172–186.
Brook, R. D., Appel, L. J., Rubenfire, M., Ogedegbe, G., Bisognano, J. D., Elliott, W. K., . . . Rajagopalan, S. (2013). Beyond medications and diet: Alternative approaches to lowering blood pressure: A scientific statement from the American Heart Association. Hypertension, 61, 1360–1383.
Broughton, R., Billings, R., Cartwright, R., Doucette, D., Edmeads, J., Edwardh, M., . . . Turrell, G. (1994). Homicidal somnambulism: A case report. Sleep, 17, 253–264.
Brown, L. K. (2012). Can sleep deprivation studies explain why human adults sleep? Current Opinion in Pulmonary Medicine, 18, 541–545.
Burgess, C. R., & Scammell, T. E. (2012). Narcolepsy: Neural mechanisms of sleepiness and cataplexy. Journal of Neuroscience, 32, 12305–12311.
Cai, D. J., Mednick, S. A., Harrison, E. M., Kanady, J. C., & Mednick, S. C. (2009). REM, not incubation, improves creativity by priming associative networks. Proceedings of the National Academy of Sciences, USA, 106, 10130–10134.
Caldwell, K., Harrison, M., Adams, M., Quin, R. H., & Greeson, J. (2010). Developing mindfulness in college students through movement based courses: Effects on self-regulatory self-efficacy, mood, stress, and sleep quality. Journal of American College Health, 58, 433–442.
Capellini, I., Barton, R. A., McNamara, P., Preston, B. T., & Nunn, C. L. (2008). Phylogenetic analysis of the ecology and evolution of mammalian sleep. Evolution, 62, 1764–1776.
Cartwright, R. (2004). Sleepwalking violence: A sleep disorder, a legal dilemma, and a psychological challenge. American Journal of Psychiatry, 161, 1149–1158.
Cartwright, R., Agargun, M. Y., Kirkby, J., & Friedman, J. K. (2006). Relation of dreams to waking concerns. Psychiatry Research, 141, 261–270.
Casati, A., Sedefov, R., & Pfeiffer-Gerschel, T. (2012). Misuse of medications in the European Union: A systematic review of the literature. European Addiction Research, 18, 228–245.
Chen, K. W., Berger, C. C., Manheimer, E., Forde, D., Magidson, J., Dachman, L., & Lejuez, C. W. (2013). Meditative therapies for reducing anxiety: A systematic review and meta-analysis of randomized controlled trials. Depression and Anxiety, 29, 545–562.
Chokroverty, S. (2010). Overview of sleep & sleep disorders. Indian Journal of Medical Research, 131, 126–140.
Christensen, A., Bentley, G. E., Cabrera, R., Ortega, H. H., Perfito, N., Wu, T. J., & Micevych, P. (2012). Hormonal regulation of female reproduction. Hormone and Metabolic Research, 44, 587–591.
CNN. (1999, June 25). ‘Sleepwalker’ convicted of murder. Retrieved from http://www.cnn.com/US/9906/25/sleepwalker.01/
Cropley, M., Theadom, A., Pravettoni, G., & Webb, G. (2008). The effectiveness of smoking cessation interventions prior to surgery: A systematic review. Nicotine and Tobacco Research, 10, 407–412.
De la Herrán-Arita, A. K., & Drucker-Colín, R. (2012). Models for narcolepsy with cataplexy drug discovery. Expert Opinion on Drug Discovery, 7, 155–164.
Del Casale, A., Ferracuti, S., Rapinesi, C., Serata, D., Sani, G., Savoja, V., . . . Girardi, P. (2012). Neurocognition under hypnosis: Findings from recent functional neuroimaging studies. International Journal of Clinical and Experimental Hypnosis, 60, 286–317.
Elkins, G., Johnson, A., & Fisher, W. (2012). Cognitive hypnotherapy for pain management. The American Journal of Clinical Hypnosis, 54, 294–310.
Ellenbogen, J. M., Hu, P. T., Payne, J. D., Titone, D., & Walker, M. P. (2007). Human relational memory requires time and sleep. Proceedings of the National Academy of Sciences, USA, 104, 7723–7728.
Fell, J., Axmacher, N., & Haupt, S. (2010). From alpha to gamma: Electrophysiological correlates of meditation-related states of consciousness. Medical Hypotheses, 75, 218–224.
Fenn, K. M., Nusbaum, H. C., & Margoliash, D. (2003). Consolidation during sleep of perceptual learning of spoken language. Nature, 425, 614–616.
Ferini-Strambi, L. (2011). Does idiopathic REM sleep behavior disorder (iRBD) really exist? What are the potential markers of neurodegeneration in iRBD [Supplemental material]? Sleep Medicine, 12(2 Suppl.), S43–S49.
Fiorentini, A., Volonteri, L.S., Dragogna, F., Rovera, C., Maffini, M., Mauri, M. C., & Altamura, C. A. (2011). Substance-induced psychoses: A critical review of the literature. Current Drug Abuse Reviews, 4, 228–240.
By the end of this section, you will be able to:
- Identify the major glands of the endocrine system
- Identify the hormones secreted by each gland
- Describe each hormone’s role in regulating bodily functions
The endocrine system consists of a series of glands that produce chemical substances known as hormones ([(Link)]). Like neurotransmitters, hormones are chemical messengers that must bind to a receptor in order to send their signal. However, unlike neurotransmitters, which are released in close proximity to cells with their receptors, hormones are secreted into the bloodstream and travel throughout the body, affecting any cells that contain receptors for them. Thus, whereas neurotransmitters’ effects are localized, the effects of hormones are widespread. Also, hormones are slower to take effect, and tend to be longer lasting.
Hormones are involved in regulating all sorts of bodily functions, and they are ultimately controlled through interactions between the hypothalamus (in the central nervous system) and the pituitary gland (in the endocrine system). Imbalances in hormones are related to a number of disorders. This section explores some of the major glands that make up the endocrine system and the hormones secreted by these glands.
The pituitary gland descends from the hypothalamus at the base of the brain, and acts in close association with it. The pituitary is often referred to as the “master gland” because its messenger hormones control all the other glands in the endocrine system, although it mostly carries out instructions from the hypothalamus. In addition to messenger hormones, the pituitary also secretes growth hormone, endorphins for pain relief, and a number of key hormones that regulate fluid levels in the body.
Located in the neck, the thyroid gland releases hormones that regulate growth, metabolism, and appetite. In hyperthyroidism, or Graves' disease, the thyroid secretes too much of the hormone thyroxine, causing agitation, bulging eyes, and weight loss. In hypothyroidism, reduced hormone levels cause sufferers to experience tiredness, and they often complain of feeling cold. Fortunately, thyroid disorders are often treatable with medications that help reestablish a balance in the hormones secreted by the thyroid.
The adrenal glands sit atop our kidneys and secrete hormones involved in the stress response, such as epinephrine (adrenaline) and norepinephrine (noradrenaline). The pancreas is an internal organ that secretes hormones that regulate blood sugar levels: insulin and glucagon. These pancreatic hormones are essential for maintaining stable levels of blood sugar throughout the day by lowering blood glucose levels (insulin) or raising them (glucagon). People who suffer from diabetes do not produce enough insulin; therefore, they must take medications that stimulate or replace insulin production, and they must closely control the amount of sugars and carbohydrates they consume.
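The insulin–glucagon pair described above is a classic negative-feedback loop, and a toy simulation can make the idea concrete. This is a minimal sketch only; the setpoint, correction rates, and step counts below are invented for illustration, not physiological values:

```python
# Toy negative-feedback model of blood-glucose regulation.
# All constants are illustrative assumptions, not physiological measurements.
SETPOINT = 90.0  # target blood glucose (mg/dL, a typical fasting level)

def pancreas_response(glucose):
    """Correction applied each step: insulin lowers glucose, glucagon raises it."""
    error = glucose - SETPOINT
    if error > 0:
        return -0.5 * error   # insulin secreted: pull high glucose down
    return -0.3 * error       # glucagon secreted: push low glucose up

def simulate(glucose, steps=25):
    """Track glucose as the feedback loop nudges it back toward the setpoint."""
    levels = [glucose]
    for _ in range(steps):
        glucose += pancreas_response(glucose)
        levels.append(glucose)
    return levels

after_meal = simulate(160.0)  # glucose spike after eating
fasting = simulate(60.0)      # low glucose after a long fast
```

In both runs the level converges back toward the setpoint, which is the essence of the homeostatic control the pancreas provides; in diabetes, the insulin branch of the loop is weak or missing, so high readings are not corrected on their own.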
The gonads secrete sexual hormones, which are important in reproduction and mediate both sexual motivation and behavior. The female gonads are the ovaries; the male gonads are the testes. The ovaries secrete estrogens and progesterone, and the testes secrete androgens, such as testosterone.
Although it is illegal to do so in most contexts, many professional athletes and bodybuilders use anabolic steroid drugs to improve their athletic performance and physique. Anabolic steroid drugs mimic the effects of the body’s own steroid hormones, like testosterone and its derivatives. These drugs have the potential to provide a competitive edge by increasing muscle mass, strength, and endurance, although not all users may experience these results. Moreover, use of performance-enhancing drugs (PEDs) does not come without risks. Anabolic steroid use has been linked with a wide variety of potentially negative outcomes, ranging in severity from largely cosmetic (acne) to life threatening (heart attack). Furthermore, use of these substances can result in profound changes in mood and can increase aggressive behavior (National Institute on Drug Abuse, 2001).
Baseball player Alex Rodriguez (A-Rod) has been at the center of a media storm regarding his use of illegal PEDs. Rodriguez’s performance on the field was unparalleled while using the drugs; his success played a large role in negotiating a contract that made him the highest paid player in professional baseball. Although Rodriguez maintains that he has not used PEDs for several years, he received a substantial suspension in 2013 that, if upheld, will cost him more than 20 million dollars in earnings (Gaines, 2013). What are your thoughts on athletes and doping? Should the use of PEDs be banned? Why or why not? What advice would you give an athlete who was considering using PEDs?
The glands of the endocrine system secrete hormones to regulate normal body functions. The hypothalamus serves as the interface between the nervous system and the endocrine system, and it controls the secretions of the pituitary. The pituitary serves as the master gland, controlling the secretions of all other glands. The thyroid secretes thyroxine, which is important for basic metabolic processes and growth; the adrenal glands secrete hormones involved in the stress response; the pancreas secretes hormones that regulate blood sugar levels; and the ovaries and testes produce sex hormones that regulate sexual motivation and behavior.
Self Check Questions
Critical Thinking Questions
1. Hormone secretion is often regulated through a negative feedback mechanism, which means that once a hormone is secreted it will cause the hypothalamus and pituitary to shut down the production of signals necessary to secrete the hormone in the first place. Most oral contraceptives are made of small doses of estrogen and/or progesterone. Why would this be an effective means of contraception?
2. Chemical messengers are used in both the nervous system and the endocrine system. What properties do these two systems share? What properties are different? Which one would be faster? Which one would result in long-lasting changes?
3. Given the negative health consequences associated with the use of anabolic steroids, what kinds of considerations might be involved in a person’s decision to use them?
1. The introduction of relatively low, yet constant, levels of gonadal hormones places the hypothalamus and pituitary under inhibition via negative feedback mechanisms. This prevents the alterations in both estrogen and progesterone concentrations that are necessary for successful ovulation and implantation.
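The inhibition logic in that answer can be sketched as a tiny simulation. Everything here — the threshold, dose, and clearance rate — is a made-up illustration of the negative-feedback principle, not pharmacology:

```python
# Hypothetical sketch of hormonal negative feedback: the hypothalamus/pituitary
# stop signaling for more hormone when circulating levels are already high.
# All numbers are arbitrary illustration, not real doses or concentrations.
INHIBITION_THRESHOLD = 1.0  # arbitrary units of circulating hormone

def pituitary_signal(circulating):
    """Negative feedback: no secretory surge if hormone is already elevated."""
    return 0.0 if circulating >= INHIBITION_THRESHOLD else 2.0

def month(exogenous_dose, days=28, clearance=0.5):
    """Return True if an endogenous surge (the ovulation trigger) ever fires."""
    circulating, surged = 0.0, False
    for _ in range(days):
        circulating = circulating * clearance + exogenous_dose  # daily pill dose
        signal = pituitary_signal(circulating)
        surged = surged or signal > 0.0
        circulating += signal
    return surged

suppressed = month(exogenous_dose=1.2)  # steady low dose keeps feedback engaged
natural = month(exogenous_dose=0.0)     # no dose: the endogenous surge can fire
```

With a constant exogenous dose, circulating hormone never drops below the inhibition threshold, so the pituitary surge needed for ovulation never occurs — the mechanism the answer above describes.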
2. Both systems involve chemical messengers that must interact with receptors in order to have an effect. The relative proximity of the release site and target tissue varies dramatically between the two systems. In neurotransmission, reuptake and enzymatic breakdown immediately clear the synapse. Metabolism of hormones must occur in the liver. Therefore, while neurotransmission is much more rapid in signaling information, hormonal signaling can persist for quite some time as the concentrations of the hormone in the bloodstream vary gradually over time.
By the end of this section, you will be able to:
- Explain the functions of the spinal cord
- Identify the hemispheres and lobes of the brain
- Describe the types of techniques available to clinicians and researchers to image or scan the brain
The brain is a remarkably complex organ comprised of billions of interconnected neurons and glia. It is a bilateral, or two-sided, structure that can be separated into distinct lobes. Each lobe is associated with certain types of functions, but, ultimately, all of the areas of the brain interact with one another to provide the foundation for our thoughts and behaviors. In this section, we discuss the overall organization of the brain and the functions associated with different brain areas, beginning with what can be seen as an extension of the brain, the spinal cord.
The Spinal Cord
It can be said that the spinal cord is what connects the brain to the outside world. Because of it, the brain can act. The spinal cord is like a relay station, but a very smart one. It not only routes messages to and from the brain, but it also has its own system of automatic processes, called reflexes.
The top of the spinal cord merges with the brain stem, where the basic processes of life are controlled, such as breathing and digestion. In the opposite direction, the spinal cord ends just below the ribs—contrary to what we might expect, it does not extend all the way to the base of the spine.
The spinal cord is functionally organized into 30 segments, corresponding with the vertebrae. Each segment is connected to a specific part of the body through the peripheral nervous system. Nerves branch out from the spine at each vertebra. Sensory nerves bring messages in; motor nerves send messages out to the muscles and organs. Messages travel to and from the brain through every segment.
Some sensory messages are immediately acted on by the spinal cord, without any input from the brain. Withdrawal from heat and the knee jerk are two examples. When a sensory message meets certain parameters, the spinal cord initiates an automatic reflex. The signal passes from the sensory nerve to a simple processing center, which initiates a motor command. Precious time is saved, because messages don’t have to travel to the brain, be processed, and then be sent back. In matters of survival, the spinal reflexes allow the body to react extraordinarily fast.
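The reflex arc just described can be sketched as a simple routing rule: a strong enough stimulus triggers a local motor command immediately, while the message is still forwarded up to the brain either way (which is why you feel the heat an instant after your hand has already pulled back). The threshold and action labels here are invented for illustration:

```python
# Illustrative sketch of a spinal reflex arc as a message-routing rule.
# The threshold value and action names are assumptions, not physiology.
def spinal_segment(stimulus_intensity, reflex_threshold=7.0):
    """Return the actions a spinal segment takes for one sensory message."""
    actions = []
    if stimulus_intensity >= reflex_threshold:
        # Reflex: sensory nerve -> local processing center -> motor command,
        # with no round trip to the brain.
        actions.append("motor command: withdraw")
    # The message still travels up to the brain for conscious processing.
    actions.append("forward to brain")
    return actions

touch_hot_stove = spinal_segment(9.5)  # above threshold: reflex plus forwarding
brush_a_feather = spinal_segment(2.0)  # below threshold: forwarding only
```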
The spinal cord is protected by bony vertebrae and cushioned in cerebrospinal fluid, but injuries still occur. When the spinal cord is damaged in a particular segment, all lower segments are cut off from the brain, causing paralysis. Therefore, the lower on the spinal cord the damage occurs, the fewer functions an injured person loses.
The Two Hemispheres
The surface of the brain, known as the cerebral cortex, is very uneven, characterized by a distinctive pattern of folds or bumps, known as gyri (singular: gyrus), and grooves, known as sulci (singular: sulcus), shown in [(Link)]. These gyri and sulci form important landmarks that allow us to separate the brain into functional centers. The most prominent sulcus, known as the longitudinal fissure, is the deep groove that separates the brain into two halves or hemispheres: the left hemisphere and the right hemisphere.
There is evidence of some specialization of function—referred to as lateralization—in each hemisphere, mainly regarding differences in language ability. Beyond that, however, the differences that have been found have been minor. What we do know is that the left hemisphere controls the right half of the body, and the right hemisphere controls the left half of the body.
The two hemispheres are connected by a thick band of neural fibers known as the corpus callosum, consisting of about 200 million axons. The corpus callosum allows the two hemispheres to communicate with each other and allows for information being processed on one side of the brain to be shared with the other side.
Normally, we are not aware of the different roles that our two hemispheres play in day-to-day functions, but there are people who come to know the capabilities and functions of their two hemispheres quite well. In some cases of severe epilepsy, doctors elect to sever the corpus ca