OLI Psychology is not your typical course. Our goal is for you to work through the course materials online on your own time and in the way that is most efficient given your prior knowledge.
While you will have more flexibility than you do in a traditional course, you will also have more responsibility for your own learning.
Each unit in this course has features designed to support you as an independent learner, including:
Explanatory content: This is the informational “meat” of every unit. It consists of short passages of text with information, images, explanations, and short videos.
Learn By Doing activities: Learn By Doing activities give you the chance to practice the concept that you are learning, with hints and feedback to guide you if you struggle.
Did I Get This? activities: Did I Get This? activities are your chance to do a quick "self-check" and assess your own understanding of the material before doing a graded activity.
When starting an online course, most people neglect planning, opting instead to jump in and begin working. While this might seem efficient (after all, who wants to spend time planning when they could be doing?), it can ultimately be inefficient. In fact, one of the characteristics that distinguishes experts from novices is that experts spend far more time planning their approach to a task and less time actually completing it, while novices do the reverse, rushing through the planning stage and spending far more time on the task overall.
In this course, we want to help you work as efficiently and effectively as possible, given what you already know. Some of you have already taken a psychology course and are familiar with many of the concepts; you may not need to work through all of the activities in the course, just enough to make sure that you've "got it." For others, this is your first exposure to psychology, and you will want to do more of the activities, since you are learning these concepts for the first time.
Improving your planning skills as you work through the material in the course will help you to become a more strategic and thoughtful learner and will enable you to more effectively plan your approach to assignments, exams and projects in other courses.
This idea of planning your approach to the course before you start is called metacognition.
Metacognition involves five distinct skills: (1) assessing the task at hand, (2) evaluating your own strengths and weaknesses in relation to it, (3) planning an appropriate approach, (4) applying strategies and monitoring your performance, and (5) reflecting on and adjusting your approach as needed.
These five skills are applied over and over again in a cycle—within the same course as well as from one course to another:
You get an assignment and ask yourself: “What exactly does this assignment involve and what have I learned in this course that is relevant to it?”
You are exercising metacognitive skills (1) and (2) by assessing the task and evaluating your strengths and weaknesses in relation to it.
If you think about what steps you need to take to complete the assignment and determine when it is reasonable to begin, you are exercising skill (3) by planning.
If you start in on your plan and realize that you are working more slowly than you anticipated, you are putting skill (4) to work by applying a strategy and monitoring your performance.
Finally, if you reflect on your performance in relation to your timeframe for the task, and discover an equally effective but more efficient way to work, you are engaged in skill (5): reflecting and adjusting your approach as needed.
Metacognition is not rocket science. In some respects, it is fairly ordinary and intuitive. Yet you’d be surprised how often people lack strong metacognitive skills, and you’d be amazed by how much weak metacognitive skills can undermine performance.
Now take the opportunity to practice the concepts you've been learning by doing these two Learn By Doing activities. Read each of the scenarios below and identify which metacognitive skill the student is struggling with. If you need help, remember that you can ask for a hint.
You've now read through the explanatory content in this unit, and you've had a chance to practice the concepts. Take a moment to reflect on your understanding. Do you feel like you are "getting it"? Use these next two activities to find out.
Strong metacognitive skills are essential for independent learning, so use the experience of monitoring your own learning in OLI Psychology as an opportunity to hone these skills for other classes and tasks.
This Introduction to Psychology course was developed as part of the Community College Open Learning Initiative. Using an open textbook from Flatworld Knowledge as a foundation, Carnegie Mellon University's Open Learning Initiative has built an online learning environment designed to enact instruction for psychology students.
The Open Learning Initiative (OLI) is a grant-funded group at Carnegie Mellon University, offering innovative online courses to anyone who wants to learn or teach. Our aim is to create high-quality courses, contribute original research to improve learning, and transform higher education.
Flatworld Knowledge is a college textbook publishing company on a mission. By using technology and innovative business models to lower costs, Flatworld is increasing access and personalizing learning for college students and faculty worldwide. Text, graphics, and video in this course are built on materials by Flatworld Knowledge, made available under a CC-BY-NC-SA license. Interested in a companion text for this course? Flatworld provides access to the original textbook online and makes digital and print copies of the original textbook available at a low cost.
Welcome to the world of psychology. This course will introduce you to some of the most important ideas, people, and research methods from the field of psychology. You probably already know about some people who contributed to our understanding of human thought and behavior, perhaps Sigmund Freud or B. F. Skinner, and you may have learned about important ideas, such as personality testing or methods of psychotherapy. This course will give you the opportunity to refine and organize the knowledge you bring to the class, and we hope that you will learn about theories, phenomena, and research results that give you new insight into the human condition.
This first module is your opportunity to explore the field of psychology for a while before moving into material that will be assessed and tracked. Let’s start with an obvious question: What is psychology?
The word psychology is based on two words from the Greek language: psyche, which means “life” or, in a more restricted sense, “mind” or “spirit,” and logia, which is the source of the current meaning, “the study of…”
Whatever the origin of the word, over the years, philosophers, scientists, and other interested people have debated about the “proper” subject matter for psychology. Should we focus on actual behavior, which we can observe and even measure, or on the mind, which includes the rich inner experience we all have of the world and of our own thoughts and motives, or on the brain, which is the centerpiece of the physical systems that make thought and behavior possible? And psychology is not just a bunch of fancy theories. Psychology is a vibrant, growing field because psychologists’ ideas and skills are used every day in thousands of ways to solve real-world problems.
To start your introduction to psychology, first survey the range of topics you will be studying in this course. The section on “What do psychologists study?” reviews the scope of topics covered in this course and allows you to see some of the general themes the various units develop. When you are finished with your survey of topics, the section on “What do psychologists do?” gives you some sense of the scope of psychological work and professional fields where psychological training is essential.
Your work in the rest of this module will also introduce you to one of the essential features of this course: Learning by Doing. Research and experience tell us that active involvement in learning is far more effective than mere passive reading or listening. Learning by Doing doesn’t need to be complicated or difficult. It simply requires that you get out of automatic mode and think a bit about the ideas you are encountering. We will get you to participate a bit in your introduction to the online materials and to the field of psychology, so you can Learn by Doing.
You will also encounter another type of activity: Did I Get This? These brief quizzes allow you to determine if you are on track in your understanding of the material. Take the Did I Get This? quiz after each section of the module. If you do well, you might decide that you have mastered the material enough to go on. However, keep in mind that you, not the quiz, should decide if you are ready. The quiz is just there to help you.
Click on one of the general topic boxes below. Choose the topic that seems to capture the general theme of the units. If you don’t choose the best answer, you will be given the opportunity to make another choice after receiving feedback.
Images in this activity courtesy of Erik Daniel Drost, hoyasmeg, Jun's World, oddstock (CC-BY-2.0), and National Institutes of Health (Public Domain).
Before you leave a particular section of a module, you will usually have the opportunity to check your knowledge. The activities called Did I Get This? are brief quizzes about the material you have just been studying. Use them to monitor your understanding of the material before moving on.
You now know that psychology is a big field and psychologists are interested in a great variety of issues. In order to introduce you to so much material, we will often have to focus on a specific issue or problem and on an experiment or theory that addresses it. Much of this research is conducted in university laboratories. This may give you the impression that psychologists only work in universities, conducting experiments with undergraduate psychology students. It is true that a lot of research takes place in university labs, but the majority of psychologists work outside the university setting.
The next Learn by Doing will give you a chance to consider different ways that people with training in psychology use their skills. In the second module of this introductory unit, we will explore the various areas of psychology more systematically, so this is just a first look at the scope of the work psychologists do.
Your task is to categorize the work of each of our seven psychologists as best fitting basic research, mental health, or applied psychology. In the real world, a single individual might work in two or all three of these areas, but, for the sake of our exercise, find the best fit for the description. Answer by clicking on the appropriate box below.
Images in this activity courtesy of Kim J, Matthews NL, Park S (CC-BY-2.5), FrozenMan (Public Domain), and Brian J. McDermott (CC-BY-2.0).
In the next module of this introductory unit you will learn a bit about the history of psychology and some of the major issues that influence psychological thinking.
In this module we review some of the important philosophical questions that psychologists attempt to answer, the evolution of psychology from ancient philosophy, and how psychology became a science. You will learn about the early schools (or approaches) of psychological inquiry and some of the important contributors to each of these early schools of psychology. You will also learn how some of these early schools influenced the newer contemporary perspectives of psychology.
The approaches that psychologists originally used to assess the issues that interested them have changed dramatically over the history of psychology. Perhaps most importantly, the field has moved steadily from speculation about the mind and behavior toward a more objective and scientific approach as the technology available to study human behavior has improved. There has also been an increasing influx of women into the field. Although most early psychologists were men, now most psychologists, including the presidents of the most important psychological organizations, are women.
Although psychology has changed dramatically over its history, several questions that psychologists address have remained constant, and we will discuss them both here and in the units and modules to come.
Directions: Read each scenario and answer the questions about how each situation might be viewed by a psychologist.
Scenario 1: Alex and Julie, his girlfriend, are having a discussion about aggression in men and women. Julie thinks that males are much more aggressive than females because males have more physical fights and get into trouble with the law than females. Alex does not agree with Julie and tells her that even though males get into more physical fights, he thinks that females are much more aggressive than males because females engage more in gossip, social exclusion, and spreading of malicious rumors than men. Is Alex or Julie’s thinking about aggression correct?
Scenario 2: Your genetics give you certain physical traits and cognitive capabilities. You are great at math but may never be an artist. Sometimes in life, we have to accept the traits and abilities that we have been given, even though we may wish we were different.
Scenario 3: Raul is having a discussion with his mother about his father, Tomas, and his brother, Hector. Tomas, the father, is an alcoholic, and Raul expresses to his mother that he is concerned about his brother, Hector, who is also beginning to drink a lot. Raul does not want his brother to turn out like his father. Raul’s mother tells him that there have been several men in the family who are alcoholics, such as his grandfather and two uncles. She says that it runs in the family and that his brother, Hector, can’t help himself and will also become an alcoholic. Raul responds that he thinks they could stop drinking if they wanted to, despite the history of alcoholism in the family. “After all, look at me. I don’t drink and my friends don’t either. I don’t think I will become an alcoholic because I have friends who know how to control themselves.”
The earliest psychologists that we know about are the Greek philosophers Plato (428–347 BC) and Aristotle (384–322 BC). These philosophers asked many of the same questions that today’s psychologists ask; for instance, they questioned the distinctions between nature and nurture and between mind and body. For example, Plato argued on the nature side, believing that certain kinds of knowledge are innate or inborn, whereas Aristotle was more on the nurture side, believing that each child is born as an “empty slate” (in Latin, a tabula rasa) and that knowledge is primarily acquired through sensory learning and experiences.
European philosophers continued to ask these fundamental questions during the Renaissance. For instance, the French philosopher René Descartes (1596–1650) advanced the belief that the mind (the mental aspects of life) and body (the physical aspects of life) are separate entities. He argued that the mind controls the body through the pineal gland in the brain (an idea that made some sense at the time but was later proved incorrect). This view of the relationship between mind and body is known as mind-body dualism: the mind is fundamentally different from the mechanical body, so much so that we have free will to choose the behaviors we engage in. Descartes also believed in the existence of innate natural abilities (nature).
Another European philosopher, the Englishman John Locke (1632–1704), is known for his viewpoint of empiricism, the belief that the newborn’s mind is a “blank slate” and that the accumulation of experiences molds the person into who he or she becomes.
The fundamental problem that these philosophers faced was that they had few methods for collecting data and testing their ideas. Most philosophers didn’t conduct any research on these questions, because they didn’t yet know how to do it and they weren’t sure it was even possible to objectively study human experience. Philosophers began to argue for the experimental study of human behavior.
Gradually in the mid-1800s, the scientific field of psychology gained its independence from philosophy when researchers developed laboratories to examine and test human sensations and perceptions using scientific methods. The first two prominent research psychologists were the German psychologist Wilhelm Wundt (1832–1920), who developed the first psychology laboratory in Leipzig, Germany in 1879, and the American psychologist William James (1842–1910), who founded an American psychology laboratory at Harvard University.
Table: The Early Schools of Psychology: No Longer Active. (From Flat World Knowledge: adapted from Introduction to Psychology, v1.0, CC-BY-NC-SA.)
Table: Early Schools of Psychology: Still Active and Advanced Beyond Early Ideas. (From Flat World Knowledge: adapted from Introduction to Psychology, v1.0, CC-BY-NC-SA.)
Wundt’s research in his laboratory in Leipzig focused on the nature of consciousness itself. Wundt and his students believed that it was possible to analyze the basic elements of the mind and to classify our conscious experiences scientifically. This focus developed into the field known as structuralism, a school of psychology whose goal was to identify the basic elements or “structures” of psychological experience. Its goal was to create a “periodic table” of the “elements of sensations,” similar to the periodic table of elements that had recently been created in chemistry.
Structuralists used the method of introspection in an attempt to create a map of the elements of consciousness. Introspection involves asking research participants to describe exactly what they experience as they work on mental tasks, such as viewing colors, reading a page in a book, or performing a math problem. A participant who is reading a book might report, for instance, that he saw some black and colored straight and curved marks on a white background. In other studies the structuralists used newly invented reaction time instruments to systematically assess not only what the participants were thinking but how long it took them to do so. Wundt discovered that it took people longer to report what sound they had just heard than to simply respond that they had heard the sound. These studies marked the first time researchers realized that there is a difference between the sensation of a stimulus and the perception of that stimulus, and the idea of using reaction times to study mental events has now become a mainstay of cognitive psychology.
Perhaps the best known of the structuralists was Edward Bradford Titchener (1867–1927). Titchener was a student of Wundt who came to the United States in the late 1800s and founded a laboratory at Cornell University. In his research using introspection, Titchener and his students claimed to have identified more than 40,000 sensations, including those relating to vision, hearing, and taste.
An important aspect of the structuralist approach was that it was rigorous and scientific. The research marked the beginning of psychology as a science, because it demonstrated that mental events could be quantified. But the structuralists also discovered the limitations of introspection. Even highly trained research participants were often unable to report on their subjective experiences. When the participants were asked to do simple math problems, they could easily do them, but they could not easily answer how they did them. Thus the structuralists were the first to realize the importance of unconscious processes—that many important aspects of human psychology occur outside our conscious awareness and that psychologists cannot expect research participants to be able to accurately report on all of their experiences. Introspection was eventually abandoned because it was not a reliable method for understanding psychological processes.
In contrast to structuralism, which attempted to understand the nature of consciousness, the goal of William James and the other members of the school of functionalism was to understand why animals and humans have developed the particular psychological aspects that they currently possess. For James, one’s thinking was relevant only to one’s behavior. As he put it in his psychology textbook, “My thinking is first and last and always for the sake of my doing.”
James and the other members of the functionalist school were influenced by Charles Darwin’s (1809–1882) theory of natural selection, which proposed that the physical characteristics of animals and humans evolved because they were useful, or functional. The functionalists believed that Darwin’s theory applied to psychological characteristics too. Just as some animals have developed strong muscles to allow them to run fast, the human brain, so functionalists thought, must have adapted to serve a particular function in human experience.
Although functionalism no longer exists as a school of psychology, its basic principles have been absorbed into psychology and continue to influence it in many ways. The work of the functionalists has developed into the field of evolutionary psychology, a contemporary perspective of psychology that applies the Darwinian theory of natural selection to human and animal behavior. You will learn more about the perspective of evolutionary psychology in the next section of this module.
Perhaps the school of psychology that is most familiar to the general public is the psychodynamic approach to understanding behavior, which was championed by Sigmund Freud (1856–1939) and his followers. Psychodynamic psychology is an approach to understanding human behavior that focuses on the role of unconscious thoughts, feelings, and memories. Freud developed his theories about behavior through extensive analysis of the patients that he treated in his private clinical practice. Freud believed that many of the problems that his patients experienced, including anxiety, depression, and sexual dysfunction, were the result of the effects of painful childhood experiences that the person could no longer remember.
Freud’s ideas were extended by other psychologists whom he influenced, including Erik Erikson (1902–1994). These psychologists and others who follow the psychodynamic approach believe that it is possible to help the patient if the unconscious drives can be remembered, particularly through a deep and thorough exploration of the person’s early sexual experiences and current sexual desires. These drives are revealed through talk therapy and dream analysis, in a process called psychoanalysis.
The founders of the school of psychodynamics were primarily practitioners who worked with individuals to help them understand and confront their psychological symptoms. Although they did not conduct much research on their ideas, and although later, more sophisticated tests of their theories have not always supported their proposals, psychodynamics has nevertheless had substantial impact on the perspective of clinical psychology and, indeed, on thinking about human behavior more generally. The importance of the unconscious in human behavior, the idea that early childhood experiences are critical, and the concept of therapy as a way of improving human lives are all ideas that are derived from the psychodynamic approach and that remain central to psychology.
Although they differed in approach, both structuralism and functionalism were essentially studies of the mind. The psychologists associated with the school of behaviorism, on the other hand, were reacting in part to the difficulties psychologists encountered when they tried to use introspection to understand behavior. Behaviorism is a school of psychology that is based on the premise that it is not possible to objectively study the mind, and therefore that psychologists should limit their attention to the study of behavior itself. Behaviorists believe that the human mind is a “black box” into which stimuli are sent and from which responses are received. They argue that there is no point in trying to determine what happens in the box because we can successfully predict behavior without knowing what happens inside the mind. Furthermore, behaviorists believe that it is possible to develop laws of learning that can explain all behaviors.
The first behaviorist was the American psychologist John B. Watson (1878–1958). Watson was influenced in large part by the work of the Russian physiologist Ivan Pavlov (1849–1936), who had discovered that dogs would salivate at the sound of a tone that had previously been associated with the presentation of food. Watson and other behaviorists began to use these ideas to explain how events that people and animals experienced in their environment (stimuli) could produce specific behaviors (responses). For instance, in Pavlov’s research the stimulus (either the food or, after learning, the tone) would produce the response of salivation in the dogs.
In his research Watson found that systematically exposing a child to fearful stimuli in the presence of objects that did not themselves elicit fear could lead the child to respond with a fearful behavior to the presence of the stimulus. In the best known of his studies, an 8-month-old boy named Little Albert was used as the subject. Here is a summary of the findings:
The baby was placed in the middle of a room; a white laboratory rat was placed near him and he was allowed to play with it. The child showed no fear of the rat. In later trials, the researchers made a loud sound behind Albert’s back by striking a steel bar with a hammer whenever the baby touched the rat. The child cried when he heard the noise. After several such pairings of the two stimuli, the child was again shown the rat. Now, however, he cried and tried to move away from the rat. In line with the behaviorist approach, Little Albert had learned to associate the white rat with the loud noise, resulting in crying.
The most famous behaviorist was Burrhus Frederic (B. F.) Skinner (1904–1990), who expanded the principles of behaviorism and also brought them to the attention of the public at large. Skinner used the ideas of stimulus and response, along with the application of rewards or reinforcements, to train pigeons and other animals. He used the general principles of behaviorism to develop theories about how best to teach children and how to create societies that were peaceful and productive. Skinner even developed a method for studying thoughts and feelings using the behaviorist approach.
The behaviorists made substantial contributions to psychology by identifying the principles of learning. Although the behaviorists were incorrect in their belief that it was not possible to measure thoughts and feelings, their work provided new ideas that helped further our understanding of the nature-nurture and mind-body debates. The ideas of behaviorism are fundamental to psychology and have been developed to help us better understand the role of prior experiences in a variety of areas of psychology.
During the first half of the twentieth century, evidence emerged that learning was not as simple as the behaviorists described it. Several psychologists studied how people think, learn, and remember, and this approach became known as cognitive psychology, a field of psychology that studies mental processes, including perception, thinking, memory, and judgment. The German psychologist Hermann Ebbinghaus (1850–1909) showed how memory could be studied and understood using basic scientific principles. The English psychologist Frederic Bartlett also studied memory but focused more on how our memories can be distorted by our beliefs and expectations.
The two individuals from this time who arguably made the strongest impact on contemporary cognitive psychology were two great students of child development: the Swiss psychologist Jean Piaget (1896–1980) and the Russian psychologist Lev Vygotsky (1896–1934).
Jean Piaget was a prolific writer, a brilliant systematizer, and a creative observer of children. Using interviews and situations he contrived, he studied the thinking and reasoning of children from their earliest days into adolescence. He is best known for his theory that tracks the development of children’s thinking through a series of four major stages, each with several substages. Within each stage, Piaget pointed to behaviors and responses to questions that revealed how the developing child understands the world. One of Piaget’s critical insights was that children are not deficient adults, so when they do something or make a judgment that, in an adult, might seem to be a mistake, we should not assume that it is a mistake from the child’s perspective. Instead, the child may be using the knowledge and reasoning that are completely appropriate at his or her particular age to make sense of the world. For example, Piaget found that children often believe that other people know or can see whatever they know or can see. So, if you show a young child a scene containing several dolls, where a particular doll is visible to the child but blocked from your view by a dollhouse, the child will simply assume that you can see the blocked doll. Why? Because he or she can see it. Piaget called this thinking egocentrism, by which he meant that the child’s thinking is centered in his or her own view of the world (not that the child is selfish). If an adult made this error, we would find it odd. But it is quite natural for the child, because prior to about 4 years of age, children do not understand that different minds (theirs and yours) can know different things. Egocentric thinking is normal and healthy for a two-year-old (though not for a 20-year-old).
During the same years that Piaget was interviewing children and trying to chart the course of development, the Russian psychologist Lev Vygotsky was struck by the rich social interactions that shaped and even guided cognitive development. Like Piaget, Vygotsky observed children playing with one another, and he saw how children guide each other to learn social rules and, through those, to improve self-regulation of behavior and thoughts.
Vygotsky’s best known contribution was his analysis of the interactions of children and parents that lead to the development of more and more sophisticated thinking. He suggested that the effective parent or teacher is one who helps the child reach beyond his or her current level of thinking by creating supports, which Vygotsky’s followers called scaffolding. For example, if a teacher wants the child to learn the difference between a square and a triangle, she might allow the child to play with cardboard cutouts of the shapes, and help the child count the number of sides and angles on each. This assisted exploration is a scaffold—a set of supports for the child who is actively doing something—that can help the child do things and explore in ways that would not be likely or even possible alone.
Both Piaget and Vygotsky emphasized the mental development of the child and gave later psychologists a rich set of theoretical ideas as well as observable phenomena to serve as a foundation for the science of the mind that blossomed in the middle and late 20th century and is the core of 21st century psychology.
By the 1950s, a clear contrast existed between psychologists who favored behaviorism, which focused exclusively on behavior shaped by the environment, and those who favored psychodynamic psychology, which focused on unconscious mental processes to explain behavior. Many psychodynamic therapists became disillusioned with the results of their therapy and began to propose a new way of thinking about behavior: unlike animal behavior, human behavior was not innately uncivilized, as Freud, James, and Skinner had believed. From this thinking a new approach to psychology emerged, called humanism, an early school of psychology which emphasized that each person is inherently good, has free will to make decisions, and is motivated to learn and improve to become a healthy, effectively functioning individual. Instead of focusing on what went wrong with people’s lives, as the psychodynamic psychologists did, humanists asked what made a person “good.” Abraham Maslow and Carl Rogers are credited with developing the humanistic approach.
Abraham Maslow (1908–1970) developed a theory of motivation in which we all have a basic, broad need to develop our unique human potential, which he called the drive for self-actualization. He proposed that, in order for us to achieve self-actualization, several basic needs must first be met, beginning with the physiological needs of hunger, thirst, and maintenance of other internal states of the body. As the lower-level needs are satisfied, we strive to meet higher-order needs such as safety, belonging and love, and self-esteem, until we ultimately achieve self-actualization. Maslow’s hierarchy of needs represents this internal motivation to strive for self-actualization. Achieving self-actualization means that a person has realized his or her unique human potential and can lead a positive and fulfilling life.
Carl Rogers (1902–1987), originally a psychodynamic therapist, developed a new therapy approach that he called client-centered therapy. This approach viewed the person not as a patient but as a client with status more equal to the therapist’s. Rogers believed that the client, like every person, should be respected and valued for his or her unique abilities and potential, and that each person has the ability to make conscious decisions and the free will to achieve his or her highest potential.
While the humanistic school of psychology has been criticized as more of a philosophical approach that lacks rigorous experimental investigation, it has influenced current thinking on personality theories and psychotherapy methods. Furthermore, the foundations of the early school of humanism evolved into the contemporary perspective of positive psychology, the scientific study of optimal human functioning.
Psychologist Abraham Maslow introduced the concept of a hierarchy of needs, which suggests that people are motivated to fulfill basic needs before moving on to other, more advanced needs. Consider how this may influence our development, motivation, and accomplishments. Choose which level of needs would best explain the scenario below.
As you may have noticed, each of the six early schools of psychology attempted to answer psychological questions with a single approach. While some attempted to build a grand theory around their approach (and some did not even try), no single school was successful. By the mid-20th century, the field of psychology was still a very young science, but it was attracting diverse attention and growing in popularity. Psychologists began to study mental processes and behavior from their own specific points of interest and views. Thus, these specific viewpoints became known as perspectives from which to investigate a given psychological topic.
Today, contemporary psychology reflects several major perspectives, such as biological/neuroscience, cognitive, behavioral, social, developmental, clinical, and individual differences/personality. This is not a complete list of perspectives, and your instructor may introduce others. What’s important to know is that today psychologists agree that there is no one specific perspective with which to study psychology; rather, any given topic can be approached from a variety of perspectives. For example, how an infant learns language can be studied from all of the different perspectives, each providing information from a different viewpoint about the child’s learning. Also, as perspectives become more specific, we see that they are interconnected, meaning that it is difficult to study any topic in human thought or behavior from just one perspective without considering the complex influence of information from other perspectives.
Table: Contemporary Perspectives of Psychology.
Behavioral neuroscience studies the links among the brain, mind, and behavior. This perspective used to be called psychobiological psychology, which studied the biological roots of behavior, such as brain structure and brain activity. But thanks to advances in our ability to view the intricate workings of the brain, called neuroimaging, the name behavioral neuroscience is now used for this broad discipline. Neuroimaging is the use of various techniques to provide pictures of the structures and functions of the living brain. As you read about the following contemporary psychological perspectives, you will see how interconnected these perspectives are, largely due to neuroimaging techniques.
For example, neuroimaging techniques are used to study brain functions in learning, emotion, social behavior, and mental illness, each of which has its own specialty perspective (see the descriptions of these perspectives below). The perspectives of behavioral neuroscience and biological psychology are also closely interconnected, in that neuroimaging techniques such as electrical brain recordings enable biological psychologists to study the structure and functions of the brain. Another example is behavioral genetics, the study of how genes influence cognition, physical development, and behavior.
Another related perspective is evolutionary psychology, which supports the idea that the brain and body are products of evolution and that inheritance plays an important role in shaping thought and behavior. This perspective developed from the functionalists’ basic assumption that many human psychological systems, including memory, emotion, and personality, serve key adaptive functions called fitness characteristics. Evolutionary psychologists theorize that fitness characteristics have helped humans to survive and reproduce throughout the centuries at a higher rate than species that lack them. Fitter organisms pass on their genes more successfully to later generations, making the characteristics that produce fitness more likely to become part of the organism’s nature than characteristics that do not. Evolutionary theory attempts to explain many different behaviors, including romantic attraction, jealousy, stereotypes and prejudice, and psychological disorders. The evolutionary perspective is important to psychology because it provides logical explanations for why we have many of our psychological characteristics.
Closely related to behavioral neuroscience, the perspective of biological psychology focuses on studying the connections between bodily systems, such as the nervous and endocrine systems, and chemicals, such as hormones, and their relationships to behavior and thought. Biological research on the chemicals produced in the body and brain has helped psychologists to better understand psychological disorders such as depression and anxiety and the effects of stress on hormones and behavior.
Cognitive psychology is the study of how we think, process information and solve problems, how we learn and remember, and how we acquire and use language. Cognitive psychology is interconnected with other perspectives that study language, problem solving, memory, intelligence, education, human development, social psychology, and clinical psychology.
Starting in the 1950s, psychologists developed a rich and technically complex set of ideas to understand human thought processes, initially inspired by the same insights and advances in information technology that produced the computer, cell phone, and internet. As technology advanced, so did cognitive psychology. We are now able to see the brain in action using neuroimaging techniques. These images are used to diagnose brain disease and injury, but they also allow researchers to view information processing as it occurs in the brain, because the processing causes the involved area of the brain to increase its metabolism and show up on scans such as functional magnetic resonance imaging (fMRI). We discuss the use of neuroimaging techniques in many areas of psychology in the units to follow.
The field of social psychology is the study of how social situations and cultures in which people live influence their thinking, feelings and behavior. Social psychologists are particularly concerned with how people perceive themselves and others, and how people influence each other’s behavior. For instance, social psychologists have found that we are attracted to others who are similar to us in terms of attitudes and interests. We develop our own beliefs and attitudes by comparing our opinions to those of others and we frequently change our beliefs and behaviors to be similar to people we care about.
Social psychologists are also interested in how our beliefs, attitudes, and behaviors are influenced by our culture. Cultures influence every aspect of our lives. For example, fundamental differences in thinking, feeling, and behaving exist between people of Western cultures (such as the United States, Canada, Western Europe, Australia, and New Zealand) and East Asian cultures (such as China, Japan, Taiwan, Korea, India, and Southeast Asia). Western cultures are primarily oriented toward individualism, which is about valuing the self and one’s independence from others, sometimes at the expense of others. East Asian cultures, on the other hand, are oriented toward interdependence, or collectivism, which focuses on developing harmonious social relationships with others, group togetherness and connectedness, and duty and responsibility to one’s family and other groups.
As our world becomes more global, sociocultural research will become more interconnected with the research of other psychological perspectives such as biological, cognitive, personality, developmental and clinical.
Developmental psychology is the study of the development of a human being from conception until death. This perspective emphasizes all of the transformations and consistencies of human life. Three major domains of human life (cognitive, physical, and socioemotional) are studied as a person ages. The cognitive domain refers to all of the mental processes that a person uses to obtain knowledge or think about the environment. The physical domain refers to all the growth and changes that occur in a person’s body and the genetic, nutritional, and health factors that affect that growth and change. The socioemotional domain includes the development of emotions, temperament, and social skills. Developmentalists study how individuals change or remain the same over time in each of these three domains. It is easy to see how developmental psychology is interconnected with all of the other major contemporary perspectives, because of its overlapping and all-encompassing scope.
Clinical psychology focuses on the diagnosis and treatment of mental, emotional and behavioral disorders and ways to promote psychological health. This field evolved from the early psychodynamic and humanistic schools of psychology. While the clinical psychology perspective emphasizes treating individuals so that they may lead fulfilling and productive lives, clinical psychologists also conduct research to discover the origins of mental and behavioral disorders and effective treatment methods. The clinical psychology perspective is closely interconnected to behavioral neuroscience and biological psychology.
Personality psychology is the study of people’s differences and uniqueness and of the influences on a person’s personality. Researchers in this field examine whether personality traits change as we age or stay the same, a question that developmental psychologists also study. Researchers interested in personality also examine how environmental influences, such as traumatic events, affect personality.
Instructions: Imagine that you are a psychologist and you want to investigate specific behaviors of a person with Alzheimer’s disease. You have a team of psychologists who represent several contemporary perspectives in psychology to help you explore information as to the origin, symptoms, prevalence, influences and causes of this brain disease and the impact on family members who care for a relative with Alzheimer’s disease. Read the following scenario about a person who had Alzheimer’s disease.
Alzheimer’s disease (AD), the most common type of dementia, is a steady and gradual progressive brain disorder that damages and destroys brain cells. Eventually Alzheimer’s disease progresses to the point where the person requires full nursing care. Ronald Reagan, who was president of the United States from 1981 to 1989, announced in 1994 that he had Alzheimer’s disease. He died 10 years later at age 93. Despite extensive research, psychologists still have many questions to be researched about this fatal disease.
Read each set of questions. While some of these sets of questions could be researched by different psychological perspectives, try to determine which psychological perspective would most likely want to provide answers for each set of questions. Your team of psychologists represents the following perspectives and only one perspective is the correct answer for each set of questions.
Challenge Questions
As you can see, psychologists from all different contemporary perspectives can contribute to the scientific knowledge of Alzheimer’s disease, and for that matter, any kind of research pertaining to humans and animals.
Psychology is not one discipline but rather a collection of many subdisciplines that share common perspectives and exchange knowledge to form a coherent field. Because the field of psychology is so broad, students may wonder which areas are most suitable for their interests and which types of careers might be available to them. The following figure will help you consider the answers to these questions. Click on any of the labeled blue circles to learn more about each discipline.
Photo courtesy of longislandwins (CC-BY-2.0).
You can learn more about these different subdisciplines of psychology and the careers associated with them by visiting the American Psychological Association (APA) website.
Step 1: Go to the APA website.
On this APA Home webpage, notice the various types of information.
Step 2: Find the box titled “Quick Links” on the APA Homepage.
Click on the link titled Divisions.
Step 3: On the APA site, search for the topic “Undergraduate Education.” Find the “Psychology as a Career” webpage to learn about what employers need from an employee, and then answer the following questions.
Now search for the topic “Careers in Psychology” on the APA website. Here you can read interesting information about the field of psychology. This section provides a long list of subfields in psychology that psychologists specialize in. Read about some of the interesting job tasks that psychologists perform in some of the subfields, and then complete the following statements by identifying the subfield that corresponds with its job tasks.
Psychologists aren’t the only people who seek to understand human behavior and solve social problems. Philosophers, religious leaders, and politicians, among others, also strive to provide explanations for human behavior. But psychologists believe that research is the best tool for understanding human beings and their relationships with others. Rather than accepting the claim of a philosopher that people do (or do not) have free will, a psychologist would collect data to empirically test whether or not people are able to actively control their own behavior. Rather than accepting a politician’s contention that creating (or abandoning) a new center for mental health will improve the lives of individuals in the inner city, a psychologist would empirically assess the effects of receiving mental health treatment on the quality of life of the recipients. The statements made by psychologists are based on empirical studies. An empirical study produces verifiable evidence through the systematic collection and analysis of data that have been objectively observed, measured, and tested through experimentation.
In this unit you will learn how psychologists develop and test their research ideas; how they measure the thoughts, feelings, and behavior of individuals; and how they analyze and interpret the data they collect. To really understand psychology, you must also understand how and why the research you are reading about was conducted and what the collected data mean. Learning about the principles and practices of psychological research will allow you to critically read, interpret, and evaluate research.
In addition to helping you learn the material in this course, the ability to interpret and conduct research is also useful in many of the careers that you might choose. For instance, advertising and marketing researchers study how to make advertising more effective, health and medical researchers study the impact of behaviors such as drug use and smoking on illness, and computer scientists study how people interact with computers. Furthermore, even if you are not planning a career as a researcher, jobs in almost any area of social, medical, or mental health science require that a worker be informed about psychological research.
Psychologists study behavior of both humans and animals, and the main purpose of this research is to help us understand people and to improve the quality of human lives. The results of psychological research are relevant to problems such as learning and memory, homelessness, psychological disorders, family instability, and aggressive behavior and violence. Psychological research is used in a range of important areas, from public policy to driver safety. It guides court rulings with respect to racism and sexism as in the 1954 case of Brown v. Board of Education, as well as court procedure, in the use of lie detectors during criminal trials, for example. Psychological research helps us understand how driver behavior affects safety such as the effects of texting while driving, which methods of educating children are most effective, how to best detect deception, and the causes of terrorism.
Some psychological research is basic research. Basic research is research that answers fundamental questions about behavior. For instance, bio-psychologists study how nerves conduct impulses from the receptors in the skin to the brain, and cognitive psychologists investigate how different types of studying influence memory for pictures and words. There is no particular reason to examine such things except to acquire a better knowledge of how these processes occur. Applied research is research that investigates issues that have implications for everyday life and provides solutions to everyday problems. Applied research has been conducted to study, among many other things, the most effective methods for reducing depression, the types of advertising campaigns that serve to reduce drug and alcohol abuse, the key predictors of managerial success in business, and the indicators of effective government programs, such as Head Start.
Basic research and applied research inform each other, and advances in science occur more rapidly when each type of research is conducted. For instance, although research concerning the role of practice on memory for lists of words is basic in orientation, the results could potentially be applied to help children learn to read. Correspondingly, psychologist-practitioners who wish to reduce the spread of AIDS or to promote volunteering frequently base their programs on the results of basic research. This basic AIDS or volunteering research is then applied to help change people’s attitudes and behaviors.
One goal of research is to organize information into meaningful statements that can be applied in many situations.
A theory is an integrated set of principles that explains and predicts many, but not all, observed relationships within a given domain of inquiry. One example of an important theory in psychology is the stage theory of cognitive development proposed by the Swiss psychologist Jean Piaget. The theory states that children pass through a series of cognitive stages as they grow, each of which must be mastered in succession before movement to the next cognitive stage can occur. This is an extremely useful theory in human development because it can be applied to many different content areas and can be tested in many different ways.
Good theories have four important characteristics. A good theory is general, parsimonious, fruitful in generating new research and applications, and falsifiable.
Piaget’s stage theory of cognitive development meets all four characteristics of a good theory. First, it is general in that it can account for developmental changes in behavior across a wide variety of domains, and second, it does so parsimoniously—by hypothesizing a simple set of cognitive stages. Third, the stage theory of cognitive development has been applied not only to learning about cognitive skills but also to the study of children’s moral and gender development. And finally, the stage theory of cognitive development is falsifiable because the stages of cognitive reasoning can be measured and because if research discovers, for instance, that children learn new tasks before they have reached the cognitive stage hypothesized to be required for that task, then the theory will be shown to be incorrect.
No single theory is able to account for all behavior in all cases. Rather, theories are each limited in that they make accurate predictions in some situations or for some people but not in other situations or for other people. As a result, there is a constant exchange between theory and data: Existing theories are modified on the basis of collected data, and the new modified theories then make new predictions that are tested by new data, and so forth. When a better theory is found, it will replace the old one. This is part of the accumulation of scientific knowledge as a result of research.
When psychologists have a question that they want to research, it usually comes from a theory based on others’ research reported in scientific journals. Recall that a theory is based on principles that are general and can be applied to many situations or relationships. Therefore, when a scientist has a research question to study, the question must be stated as a research hypothesis, which is a precise statement of the presumed relationship among specific parts of a theory. Furthermore, a research hypothesis is a specific and falsifiable prediction about the relationship between or among two or more variables, where a variable is any attribute that can assume different values among different people or across different times or places.
The research hypothesis states the existence of a relationship between the variables of interest and the specific direction of that relationship. For instance, the research hypothesis “Using marijuana will reduce learning” predicts that there is a relationship between a variable “using marijuana” and another variable called “learning.” Similarly, in the research hypothesis “Participating in psychotherapy will reduce anxiety,” the variables that are expected to be related are “participating in psychotherapy” and “level of anxiety.”
When stated in an abstract manner, the ideas that form the basis of a research hypothesis are known as conceptual variables. Sometimes conceptual variables are rather simple—for instance, age, gender, or weight. In other cases they represent more complex ideas, such as anxiety, cognitive development, learning, self-esteem, or sexism.
The first step in testing a research hypothesis involves turning the conceptual variables into measured variables, which are variables consisting of numbers that represent the conceptual variables. For instance, the conceptual variable “participating in psychotherapy” could be represented as the measured variable “number of psychotherapy hours the patient has accrued” and the conceptual variable “using marijuana” could be assessed by having the research participants rate, on a scale from 1 to 10, how often they use marijuana or by administering a blood test that measures the presence of the chemicals in marijuana.
Psychologists use the term operational definition to refer to a precise statement of how a conceptual variable is turned into a measured variable. The following table lists some potential operational definitions for conceptual variables that have been used in psychological research. As you read through this list, note that in contrast to the abstract conceptual variables, the operational definitions are measurable and very specific. This specificity is important for two reasons. First, more specific definitions mean that there is less danger that the collected data will be misunderstood by others. Second, specific definitions will enable future researchers to replicate the research.
[Table: Examples of Some Conceptual Variables Defined as Operational Definitions for Psychological Research. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
One of the keys to developing a well-designed research study is to precisely define the conceptual variables found in a hypothesis. When the conceptual variables in a hypothesis are given operational definitions, the hypothesis becomes testable. In this activity, read each of the following statements and answer its accompanying question.
Now to make sure that you can identify the characteristics of a hypothesis and distinguish its conceptual variables from operational definitions used in a research study, choose an answer that correctly completes each of the following statements.
All scientists (whether they are physicists, chemists, biologists, sociologists, or psychologists) are engaged in the basic processes of collecting data and drawing conclusions about those data. The methods used by scientists have developed over many years and provide a common framework for developing, organizing, and sharing information. The scientific method is the set of assumptions, rules, and procedures scientists use to conduct research.
In addition to requiring that science be empirical, the scientific method demands that the procedures used are objective, or free from the personal bias or emotions of the scientist. The scientific method prescribes how scientists collect and analyze data, how they draw conclusions from data, and how they share data with others. These rules increase objectivity by placing data under the scrutiny of other scientists and even the public at large. Because data are reported objectively, other scientists know exactly how the scientist collected and analyzed the data. This means that they do not have to rely only on the scientist’s own interpretation of the data; they may draw their own, potentially different, conclusions.
In the following activity, you learn about a model presenting a five-step process of scientific research in psychology. A researcher or a small group of researchers formulate a research question and state a hypothesis, conduct a study designed to answer the question, analyze the resulting data, draw conclusions about the answer to the question, and then publish the results so that they become part of the research literature found in scientific journals.
Because the research literature is one of the primary sources of new research questions, this process can be thought of as a cycle. New research leads to new questions and new hypotheses, which lead to new research, and so on. This model also indicates that research questions can originate outside of this cycle, either with informal observations or with practical problems that need to be solved. But even in these cases, the researcher begins by checking the research literature to see whether the question has already been answered and to refine it based on what previous research has found.
All scientists use the scientific method, which is a set of basic processes performed in the same order for conducting research. Using the following diagram of the scientific method, label each of the research steps in the correct order that scientists use to conduct scientific studies.
Now imagine that you are a research psychologist and you want to conduct a study to find out if there are any negative effects of talking on a cell phone while driving a car. What would you do first to begin your study? How would you know if your study might provide any new information? How would you go about conducting your study? What would you do after you have completed your study? These are questions that every researcher must answer in order to properly conduct a scientific study. As the researcher for your study on cell phone usage while driving, you will need to answer all of these questions.
The research by Mehl and his colleagues is described nicely by this model. Their question—whether women are more talkative than men—was suggested to them both by people’s stereotypes and by published claims about the relative talkativeness of women and men. When they checked the research literature, however, they found that this question had not been adequately addressed in scientific studies. They conducted a careful empirical study, analyzed the results (finding very little difference between women and men), and published their work so that it became part of the research literature. The publication of their article is not the end of the story, however, because their work suggests many new questions, such as whether the result is reliable and whether there are cultural differences, that will likely be further researched by them or by other researchers.
Most new research is designed to replicate—that is, to repeat, add to, or modify—previous research findings. The process of repeating previous research, which forms the basis of all scientific inquiry, is known as replication. The scientific method therefore results in an accumulation of scientific knowledge through the reporting of research and the addition to and modifications of previous reported findings that are then replicated by other researchers.
One of the questions that all scientists must address concerns the ethics of their research. Research in psychology may cause some stress, harm, or inconvenience for the people who participate in that research. For instance, researchers may require introductory psychology students to participate in research projects and then deceive these students, at least temporarily, about the nature of the research. Psychologists may induce stress, anxiety, or negative moods in their participants, expose them to weak electrical shocks, or convince them to behave in ways that violate their moral standards. And researchers may sometimes use animals in their research, potentially harming them in the process.
Decisions about whether research is ethical are made using established ethical codes and standards developed by scientific organizations, such as the American Psychological Association, and by the federal government, such as the U.S. Department of Health and Human Services (DHHS). Even so, there is no way to know ahead of time what the effects of a given procedure will be on every person or animal who participates, or what benefit to society the research is likely to produce. What is ethical is defined by the current state of thinking within society, and thus perceived costs and benefits change over time. The DHHS regulations require that all universities receiving funds from the department set up an Institutional Review Board (IRB) to determine whether proposed research meets department regulations. The Institutional Review Board is a committee of at least five members whose goal is to determine the cost-benefit ratio of research conducted within an institution. The IRB approves the procedures of all the research conducted at the institution before the research can begin. The board may suggest modifications to the procedures, or (in rare cases) it may inform the scientist that the research violates DHHS guidelines and thus cannot be conducted at all.
The following table presents some of the most important factors that psychologists take into consideration when designing their research using people.
[Table: Characteristics of an Ethical Research Project Using Human Participants. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
The most direct ethical concern of the scientist is to prevent harm to the research participants. One example is the well-known research, conducted in 1961 by Stanley Milgram, which investigated obedience to authority. Participants were induced by an experimenter to administer electric shocks to another person so that Milgram could study the extent to which they would obey the demands of an authority figure. Most participants evidenced high levels of stress resulting from the psychological conflict they experienced between engaging in aggressive and dangerous behavior and following the instructions of the experimenter. Studies such as those by Milgram are no longer conducted because the scientific community is now much more sensitized to the potential of such procedures to create emotional discomfort or harm.
Another goal of ethical research is to guarantee that participants have free choice regarding whether they wish to participate in research. Students in psychology classes may be allowed, or even required, to participate in research, but they are also always given an option to choose a different study to be in, or to perform other activities instead. And once an experiment begins, the research participant is always free to leave the experiment if he or she wishes to. Concerns about free choice also occur in institutional settings, such as in schools, hospitals, corporations, and prisons, when individuals are required by the institutions to take certain tests or when employees are told or asked to participate in research.
Researchers must also protect the privacy of the research participants. In some cases data can be kept anonymous by not having the respondents put any identifying information on their questionnaires. In other cases the data cannot be anonymous because the researcher needs to keep track of which respondent contributed the data. In this case one technique is to have each participant use a unique code number to identify his or her data, such as the last four digits of the student ID number. In this way the researcher can keep track of which person completed which questionnaire, but no one will be able to connect the data with the individual who contributed them.
Perhaps the most widespread ethical concern to the participants in behavioral research is the extent to which researchers employ deception. Deception occurs whenever research participants are not completely and fully informed about the nature of the research project before participating in it. Deception may occur in an active way, such as when the researcher tells the participants that he or she is studying learning when in fact the experiment really concerns obedience to authority. In other cases the deception is more passive, such as when participants are not told about the hypothesis being studied or the potential use of the data being collected.
Some researchers have argued that no deception should ever be used in any research. [2] They argue that participants should always be told the complete truth about the nature of the research they are in, and that when participants are deceived there will be negative consequences, such as the possibility that participants may arrive at other studies already expecting to be deceived. Other psychologists defend the use of deception on the grounds that it is needed to get participants to act naturally and to enable the study of psychological phenomena that might not otherwise get investigated. They argue that it would be impossible to study topics such as altruism, aggression, obedience, and stereotyping without using deception because if participants were informed ahead of time what the study involved, this knowledge would certainly change their behavior. The codes of ethics of the American Psychological Association and other organizations allow researchers to use deception, but these codes also require them to explicitly consider how their research might be conducted without the use of deception.
Nevertheless, an important tool for ensuring that research is ethical is the use of a written informed consent form. Informed consent, conducted before a participant begins a research session, is designed to explain the research procedures and inform the participant of his or her rights during the investigation. An informed consent form explains as much as possible about the true nature of the study, particularly everything that might be expected to influence willingness to participate, but it may in some cases withhold some information that allows the study to work.
Finally, participating in research has the potential for producing long-term changes in the research participants. Therefore, all participants should be fully debriefed immediately after their participation. The debriefing is a procedure designed to fully explain the purposes and procedures of the research and remove any harmful aftereffects of participation.
Instructions: View the video clip that describes the Stanley Milgram experiment on obedience and watch for any ethical violations made by the researchers. Then, using the information provided in the table above titled "Characteristics of an Ethical Research Project Using Human Participants," answer the following questions to determine which ethical violations occurred in the Milgram experiment on obedience.
Note: In the following questions, the two types of participants used in Stanley Milgram’s study are the “participant-punisher” who administers electric shocks and the “participant-learner” who repeats the paired-word combinations.
Because animals make up an important part of the natural world, and because some research cannot be conducted using humans, animals are also participants in psychological research. Most psychological research using animals is now conducted with rats, mice, and birds; the use of other animals in research is declining. As with ethical decisions involving human participants, basic principles have been developed to help researchers make informed decisions about such research. The following table summarizes the APA Guidelines on Humane Care and Use of Animals in Research.
[Table: APA Guidelines on Humane Care and Use of Animals in Research. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
Because the use of animals in research involves a personal value, people naturally disagree about this practice. Although many people accept the value of such research, a minority of people, including animal-rights activists, believes that it is ethically wrong to conduct research on animals. This argument is based on the assumption that because animals are living creatures just as humans are, no harm should ever be done to them.
[Photo: Psychologists may use animals in their research, but reasonable efforts are made to minimize the animals' discomfort. From Flat World Knowledge, Introduction to Psychology, v1.0: © Thinkstock.]
Most scientists, however, reject this view. They argue that such beliefs ignore the potential benefits that have come, and continue to come, from research with animals. For instance, drugs that can reduce the incidence of cancer or AIDS may first be tested on animals, and surgery that can save human lives may first be practiced on animals. Research on animals has also led to a better understanding of the physiological causes of depression, phobias, and stress, among other illnesses. In contrast to animal-rights activists, then, scientists believe that because there are many benefits that accrue from animal research, such research can and should continue as long as the humane treatment of the animals used in the research is guaranteed.
Determine whether each of the following scenarios is a compliance with or a violation of the ethical and humane care and use of animals as outlined by the American Psychological Association. Then select the appropriate guideline that applies to each scenario. To help you with this activity, you may want to review the APA Guidelines on Humane Care and Use of Animals in Research presented earlier.
Imagine you are on the Animal Care and Use Committee at your college. It is part of your responsibility to evaluate and either approve or reject research proposals of faculty members who want to use animals for research or instructional purposes. The two proposals below are based on real experiments and describe the studies, including their goals, benefits, and any discomfort or injury to the animals used. You must either approve or disapprove each research proposal based on the information provided. There is no need to suggest improvements or experimental design changes. Indicate why you decided upon the course of action that you did for each proposal. [4]
Case 1
Professor Smith is a psychobiologist working on the frontiers of a new and exciting research area of neuroscience, brain grafting. Research has shown that neural tissue can be removed from the brains of monkey fetuses and implanted into the brains of monkeys that have suffered brain damage. The neurons seem to make the proper connections and are sometimes effective in improving performance in brain-damaged animals. These experiments offer important animal models for human degenerative diseases such as Parkinson’s and Alzheimer’s. Dr. Smith wants to transplant tissue from fetal monkey brains into the entorhinal cortex of adult monkeys; this is the area of the human brain that is involved with Alzheimer’s disease.
The experiment will use 20 adult rhesus monkeys. First, the monkeys will be subjected to brain lesioning. The procedure will involve anesthetizing the animals, opening their skulls, and making lesions using a surgical instrument. After they recover, the monkeys will be tested on a learning task to make sure their memory is impaired. Three months later, half of the animals will be given transplant surgery. Tissue taken from the cortex of the monkey fetuses will be implanted into the area of the brain damage. Control animals will be subjected to a placebo surgery, and all animals will be allowed to recover for 2 months. They will then learn a task to test the hypothesis that the animals having brain grafts will show better memory than the control group.
Dr. Smith argues that this research is in the exploratory stages and can only be done using animals. She further states that in 10 years, over 2 million Americans will have Alzheimer’s disease and that her research could lead to a treatment for the devastating memory loss that Alzheimer’s victims suffer. [4]
Case 2
The Psychology Department is requesting permission from your committee to use 10 rats per semester for demonstration experiments in a physiological psychology course. The students will work in groups of three; each group will be given a rat. The students will first perform surgery on the rats. Each animal will be anesthetized. Following standard surgical procedures, an incision will be made in the scalp and two holes drilled in the animal’s skull. Electrodes will be lowered into the brain to create lesions on each side. The animals will then be allowed to recover. Several weeks later, the effects of destroying this part of the animal’s brain will be tested in a shuttle avoidance task in which the animals will learn when to cross over an electrified grid.
The instructor acknowledges that the procedure is a common demonstration and that no new scientific information will be gained from the experiment. He argues, however, that students taking a course in physiological psychology must have the opportunity to engage in small animal surgery and to see firsthand the effects of brain lesions. [4]
Psychologists agree that if their ideas and theories about human behavior are to be taken seriously, they must be backed up by data collected through research. A psychologist’s research goals determine which of three research approaches to use. These approaches, summarized in the table below, are known as research designs. A research design is the specific method a researcher uses to collect, analyze, and interpret data. Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation.
Each of the three research designs varies according to its strengths and limitations, and it is important to understand how each differs.
[Table: Characteristics of the Three Research Designs. From Flat World Knowledge, Introduction to Psychology, v1.0: Stangor, C. (2011). Research Methods for the Behavioral Sciences (4th ed.). Mountain View, CA: Cengage.]
Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behavior of individuals. There are three types of descriptive research: case studies, surveys, and naturalistic observation.
Sometimes the data in a descriptive research project are based on only a small set of individuals, often only one person or a single small group. These research designs are known as case studies—descriptive records of one or more individuals’ experiences and behavior. Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics or who find themselves in particularly difficult or stressful situations. The assumption is that we can learn something about human nature by carefully studying individuals who are socially marginal, experiencing unusual situations, or going through a difficult phase in their lives.
Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses the psychoanalyst interpreted in terms of repressed sexual impulses and the Oedipus complex.
The second type of descriptive research is the survey—a measure administered through either a face-to-face or telephone interview, or a written or computer-generated questionnaire—to get a picture of the beliefs or behaviors of a sample of people of interest. The people chosen to participate in the research, called a sample, are selected to be representative of all the people that the researcher wishes to know about, called the population. In election polls, for instance, a sample is taken from the population of all “likely voters” in the upcoming elections.
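If a concrete example helps, here is a minimal Python sketch (with an invented sampling frame and invented names) of drawing a simple random sample from a population. Real election polls use more elaborate sampling designs, but the core idea is the same: every member of the population has an equal chance of ending up in the sample.

```python
import random

# Hypothetical sampling frame: a list standing in for all likely voters.
population = [f"voter_{i}" for i in range(10_000)]

# A simple random sample of 500 people; each voter is equally likely to be chosen.
sample = random.sample(population, k=500)
print(len(sample))  # 500
```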
The results of surveys may sometimes be rather mundane, such as “nine out of ten doctors prefer Tymenocin,” or “the median income in Montgomery County is $36,712.” Yet other times (particularly in discussions of social behavior), the results can be shocking: “more than 40,000 people are killed by gunfire in the United States every year,” or “more than 60% of women between the ages of 50 and 60 suffer from depression.” Descriptive research is frequently used by psychologists to get an estimate of the prevalence (or incidence) of psychological disorders.
The third type of descriptive research—known as naturalistic observation—is research based on the observation of everyday events occurring in the natural environment of people or animals. For instance, a developmental psychologist who watches children on a playground and describes how they interact is conducting descriptive research. Another example of naturalistic research is a bio-psychologist who observes animals in their natural habitats.
A famous example of this type of research is the work of Dr. Jane Goodall at her primate research facility in Gombe Stream National Park, Tanzania (on the continent of Africa). Goodall and her staff have observed and recorded the social interactions and family life of the Kasakela chimpanzee community for over 50 years. Their work is considered groundbreaking and has revealed aspects of chimpanzee life that may have gone undiscovered. For instance, Goodall described human-like socializing behaviors in that members of the group would show affection and encouragement to one another. It was discovered that the chimpanzees were toolmakers and users—stripping leaves from twigs and poking the twig into termite holes to retrieve a meal. Goodall also described the carnivore side of chimpanzees, reporting that hunting groups from the chimpanzee community would stalk, isolate, and kill smaller primates for food and then divide their kill for distribution to other group members.
Two parts of Dr. Goodall’s research methods are representative of the disadvantages of naturalistic observational research. First, she decided to name the chimpanzees she studied instead of using the scientific convention of numbering subjects. The numbering technique hypothetically promotes objective observation that is devoid of attachment and bias on the part of the observer. Dr. Goodall identified members of her chimpanzee community by name, and discussed their behavior in terms of emotion, personality, intelligence, and family relationships; she was criticized by some for becoming overly involved and thus more subjective in her interpretations. This is known as observer bias, which happens when the individual observing behavior is influenced by their own experiences, expectations, or knowledge about the purpose of the observation or study.
Second, the Gombe research team utilized feeding stations to attract the animals for observation, thus potentially altering the natural feeding patterns and behaviors of the troop and promoting artificial competition and increased aggression among the chimpanzees. This is called the observer effect. Indeed, the observer effect (interference with or modification of the subject’s behaviors by the process of observation) can lead to a distorted picture of a natural phenomenon, thus defeating the point of “naturalistic” observational research. It is difficult to know how influential the presence of a stranger can be in an established social situation or for the subject being observed.
In many observational studies, particularly those conducted with children, the observers are hidden away from the subjects. Some researchers use two-way mirrors, others use hidden cameras with monitors located in a separate room. Subjects can also be recorded on video from several angles as they interact socially or within their environment; the video recordings can then be observed and data recorded at a later time. A major advantage to this method is the ability to have two or more observers observe and record the behavior, followed by calculating a score for interrater reliability. This score can estimate how much agreement there is between the two observers about what the subjects were doing. This type of test can also identify observer bias.
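To make interrater reliability concrete, here is a minimal Python sketch using invented codes from two hypothetical observers. It computes simple percent agreement; researchers often report a chance-corrected statistic such as Cohen's kappa instead, but the underlying comparison of the two observers' records is the same.

```python
# Hypothetical codes from two observers who each rated the same 10
# playground intervals as aggressive (1) or not aggressive (0).
observer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
observer_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

# Percent agreement: the proportion of intervals coded identically.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = agreements / len(observer_a)
print(f"Percent agreement: {percent_agreement:.0%}")  # prints "Percent agreement: 80%"
```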
An advantage of descriptive research is that it attempts to capture the complexity of everyday behavior. Specifically, case studies provide detailed information about a single person or a small group of people, and surveys capture the thoughts or reported behaviors of a large population of people. Naturalistic observation, meanwhile, objectively records the behavior of people or animals as it naturally occurs. In sum, descriptive research is used to provide a relatively complete understanding of what is currently happening.
Despite these advantages, descriptive research has a distinct disadvantage in that, although it allows us to get an idea of what is currently happening, it is usually limited to static pictures. Although descriptions of particular experiences may be interesting, they are not always transferable to other individuals in other situations, nor do they tell us exactly why specific behaviors or events occurred. For instance, descriptions of individuals who have suffered a stressful event, such as a war or an earthquake, can be used to understand the individuals’ reactions to the event but cannot tell us anything about the long-term effects of the stress. And because there is no comparison group that did not experience the stressful situation, we cannot know what these individuals would be like if they hadn’t had the stressful experience.
In contrast to descriptive research, which is designed primarily to provide static pictures, correlational research involves measuring the relationship between or among two or more relevant variables. For instance, the variables of height and weight are systematically related (correlated) because taller people generally weigh more than shorter people. In the same way, study time and memory errors are also related, because the more time a person is given to study a list of words, the fewer errors he or she will make. When there are two variables in the research design, one of them is called the predictor variable and the other the outcome variable.
One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatter plot. As you can see in the figure below, a scatter plot is a visual image of the relationship between two variables. A point is plotted for each individual at the intersection of his or her scores for the two variables. When the association between the variables on the scatter plot can be easily approximated with a straight line, as in parts (a) and (b), the variables are said to have a linear relationship.
When the straight line indicates that individuals who have above-average values for one variable also tend to have above-average values for the other variable, as in part (a), the relationship is said to be positive linear. Examples of positive linear relationships include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case people who score higher on one of the variables also tend to score higher on the other variable. Negative linear relationships, in contrast, as shown in part (b), occur when above-average values for one variable tend to be associated with below-average values for the other variable. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses, and between practice on and errors made on a learning task. In these cases, people who score higher on one of the variables tend to score lower on the other variable.
Relationships between variables that cannot be described with a straight line are known as nonlinear relationships. Part (c) shows a common pattern in which the distribution of the points is essentially random. In this case there is no relationship at all between the two variables, and they are said to be independent.
The most common statistical measure of the strength of linear relationships among variables is the Pearson correlation coefficient, which is symbolized by the letter r. The value of the correlation coefficient ranges from r = –1.00 to r = +1.00. The direction of the linear relationship is indicated by the sign of the correlation coefficient. Positive values of r (such as r = .54 or r = .67) indicate that the relationship is positive linear (i.e., the pattern of the dots on the scatter plot runs from the lower left to the upper right), whereas negative values of r (such as r = –.30 or r = –.72) indicate negative linear relationships (i.e., the dots run from the upper left to the lower right). The strength of the linear relationship is indexed by the distance of the correlation coefficient from zero (its absolute value). For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57.
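As an illustration of the correlation coefficient, the short Python sketch below computes r for a small set of invented scores on study time and memory errors, the negative linear relationship described earlier. The numbers are made up for illustration only.

```python
import numpy as np

# Invented data: minutes spent studying a word list, and errors on a later recall test.
study_minutes = np.array([5, 10, 15, 20, 25, 30, 35, 40])
recall_errors = np.array([12, 11, 9, 8, 6, 5, 5, 3])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(study_minutes, recall_errors)[0, 1]
print(f"r = {r:.2f}")  # negative: more study time goes with fewer errors
```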
An important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables. Consider, for instance, a researcher who has hypothesized that viewing violent behavior will cause increased aggressive play in children. He has collected, from a sample of fourth-grade children, the data on how many violent television shows each child views during the week. He has also collected the data on how aggressively each child plays on the school playground. From his collected data, the researcher discovers a positive correlation between the two measured variables.
Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behavior. Although the researcher is tempted to assume that viewing violent television causes aggressive play, there are other possibilities.
It may be possible that the causal direction is exactly opposite from what has been hypothesized. Perhaps children who behave aggressively at school develop residual excitement that leads them to want to watch violent television shows at home:
Although this possibility may seem less likely, there is no way to rule out the possibility of such reverse causation on the basis of this observed correlation. It is also possible that both causal directions are operating and that the two variables cause each other:
Another possible explanation for the observed correlation is that it has been produced by the presence of a common-causal variable (also known as a third variable). A common-causal variable is a variable that is not part of the research hypothesis but that causes both the predictor and the outcome variable and thus produces the observed correlation between them. In our example, a potential common-causal variable is the discipline style of the children’s parents. Parents who use a harsh and punitive discipline style may produce children who like to watch violent television and who behave aggressively in comparison to children whose parents use less harsh discipline:
In this case, television viewing and aggressive play would be positively correlated (as indicated by the curved arrow between them), even though neither one caused the other but they were both caused by the discipline style of the parents (the straight arrows). When the predictor and outcome variables are both caused by a common-causal variable, the observed relationship between them is said to be spurious. A spurious relationship is a relationship between two variables in which a common-causal variable produces and “explains away” the relationship. If effects of the common-causal variable were taken away, or controlled for, the relationship between the predictor and outcome variables would disappear. In the example, the relationship between aggression and television viewing might be spurious because by controlling for the effect of the parents’ disciplining style, the relationship between television viewing and aggressive behavior might go away.
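One way to see what it means to "control for" a common-causal variable is with a small simulation. In the Python sketch below, all of the data are invented: a "discipline" variable causes both television viewing and aggression, so the raw correlation between viewing and aggression is sizable, yet the partial correlation that statistically removes discipline falls to near zero, exposing the relationship as spurious.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 5_000

# Simulated world: harsh discipline (z) causes both TV viewing (x) and aggression (y).
discipline = rng.normal(size=n)
tv_viewing = discipline + rng.normal(size=n)
aggression = discipline + rng.normal(size=n)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

r_xy = r(tv_viewing, aggression)   # raw correlation: roughly .50 here
r_xz = r(tv_viewing, discipline)
r_yz = r(aggression, discipline)

# Partial correlation of x and y, controlling for z.
partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(f"raw r = {r_xy:.2f}, partial r = {partial:.2f}")  # partial r is near .00
```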
Common-causal variables in correlational research designs can sometimes be thought of as “mystery” variables. For instance, some variables have not been measured or their presence and identity are unknown to the researcher. Since it is not possible to measure every variable that could cause both the predictor and outcome variables, the existence of an unknown common-causal variable is always a possibility. For this reason, we are left with the basic limitation of correlational research: Correlation does not demonstrate causation. It is important that when you read about correlational research projects, you keep in mind the possibility of spurious relationships and be sure to interpret the findings appropriately. Although correlational research is sometimes reported as demonstrating causality without any mention being made of the possibility of reverse causation or common-causal variables, informed consumers of research, like you, are aware of these interpretational problems.
In summary, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible because the predictor variables cannot be manipulated. Correlational designs also have the advantage of allowing the researcher to study behavior as it occurs in everyday life. And we can also use correlational designs to make predictions—for instance, to predict from the scores on their battery of tests the success of job trainees during a training session. But we cannot use such correlational information to determine whether the training caused better job performance. For that, researchers rely on experiments.
Instructions: Each of the following examples describes an actual correlational study. Your task is to decide what the results look like. Was there a positive correlation, a negative correlation, or no correlation?
All of the research reported here is loosely based on actual studies. Because most published research is more complicated than the descriptions given here, we have simplified—but hopefully not seriously distorted—the results of actual research.
Research Study 1
Researchers found that the constant exposure to mass media (television, magazines, websites) depicting atypically thin, glamorous female models (the thin-ideal body) may be linked to body image disturbance in women. The general finding was that the more women were exposed to pictures and articles about thin females perceived as ideal, the lower their satisfaction was with their own bodies.
Research Study 2
This research study found that kindergarten and elementary school children who were better at rhymes and hearing the sounds of individual letters before they started to read later learned to read words more quickly than children who were not as good with making and distinguishing elementary sounds of language.
Research Study 3
Ninth-grade students and teachers were surveyed to determine the level of bullying that the students experienced. The researchers were given permission to access scores of the students on several standardized tests, with topics including algebra, earth science, and world history. The researchers found that the more bullying a student experienced, the lower the student’s grades on the standardized tests.
Research Study 4
At one time, people used to assume that poor reading abilities were caused by low intelligence. The most thoroughly studied kind of reading problem is called dyslexia, a learning disability which appears in elementary school readers as difficulty learning to recognize individual words. Dyslexia can vary in seriousness from mild forms to profound levels. In one study, researchers assessed a large number of kindergarten and first-grade children for signs of dyslexia, and they also measured the children’s IQ using a standardized IQ measure. They found no relationship between IQ and seriousness of dyslexia.
Research Study 5
Researchers from the United Kingdom analyzed results of a survey of more than 5,000 young people ages 10 to 15. They used a variety of indicators to rate how healthy a lifestyle each person led, using factors like eating and drinking habits, smoking and drug use, and participation in sports and other activities. In addition, they used responses to several questions to rate each person on his or her level of happiness. They found that healthier habits were strongly related to how happy these young adolescents reported themselves to be.
True experiments are the only reliable method scientists have for inferring causal relationships between two variables of interest: Does one thing cause another? So, the goal of an experimental research design is to provide more definitive conclusions about the causal relationships among the variables in the research hypothesis than is available from correlational designs.
In an experimental research design, the variables of interest are called the independent variable (or variables) and the dependent variable. The independent variable in an experiment is the causing variable that is created (manipulated) by the experimenter. The dependent variable in an experiment is a measured variable that is expected to be influenced by the experimental manipulation. The research hypothesis suggests that the manipulated independent variable or variables will cause changes in the measured dependent variables. We can diagram the research hypothesis by using an arrow that points in one direction. This demonstrates the expected direction of causality:
An example of the use of these independent and dependent variables in an experiment is the effect of witnessing aggression on children’s aggressive behaviors—certainly an important developmental question given the current influence of television and video games. In a classic study conducted by Albert Bandura in 1961, children who first watched an adult demonstrate violent behavior toward a Bobo doll (an inflatable clown weighted with sand at the base) in a playroom were more likely to show the same aggressive behaviors than children who, before entering the playroom, had watched either a passive adult or no adult at all. The independent variable manipulated by the experimenter was whether the child viewed violent behavior directed at the Bobo doll. The dependent variable, or the measure of behavior, was whether the child, alone in the playroom, expressed aggression by hitting the Bobo doll. The operational definition of this dependent variable was the number of hits, kicks, and other displays of aggression the child inflicted on the Bobo doll. The design of the experiment is shown in the following figure.
Consider an experiment conducted by Anderson and Dill. [1] The study was designed to test the hypothesis that viewing violent video games would increase aggressive behavior. In this research, male and female undergraduates from Iowa State University were given a chance to play either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after playing, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (aggressive behavior) was the level and duration of noise delivered to the opponent. The design of the experiment is shown in the figure below.
Anderson and Dill first randomly assigned about 100 participants to each of their two groups (Group A and Group B). Because they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet—and in fact everything else.
Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation—they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then they compared the dependent variable (the white noise blasts) between the two groups, finding that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.
Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.
Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to the measurement of the dependent variable. This eliminates the possibility of reverse causation. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs.
The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table.
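For a concrete picture of random assignment, here is a minimal Python sketch using a hypothetical list of 100 participant IDs. Shuffling the list makes every possible split into the two conditions equally likely, which is exactly what creates the initial equivalence described above.

```python
import random

random.seed(42)  # fixed seed so the assignment can be reproduced
participants = [f"P{i:03d}" for i in range(1, 101)]  # hypothetical participant IDs

random.shuffle(participants)  # put the participants in a random order
group_a = participants[:50]   # e.g., the violent video game condition
group_b = participants[50:]   # e.g., the nonviolent video game condition
```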
Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more important, is that some of the most interesting and key social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behavior, or to compare the personality characteristics of people who join suicide cults with those of people who do not join such cults, these relationships must be assessed using a quasi-experimental design, because it may simply not be possible to randomly assign subjects to groups or manipulate the variables of interest.
Considering the phenomenon of aggression, certainly some would argue that spanking is a demonstration of aggression to which some children are exposed. Does spanking increase aggression in children? To study this question with an experimental design, one would need to recruit families into the study, randomly divide them into “spanking” and “non-spanking” groups, and compare the aggression in the children from the two groups. However, it would not be ethical or even possible to compel parents to spank their children in order to manipulate the independent variable. Thus, the strategy for this type of research would be a quasi-experimental design, which compares two groups that already exist in the population—in this case, families who spank their children and those who do not. However, this nonrandom design eliminates the possibility of finding a causal relationship, because we could never be sure there wasn’t a third variable contributing to differences in aggression. For instance, perhaps the families who do not spank their children also watch a significant amount of violence on television, while the spanking families watch much less television in general. Such uncontrolled differences between the groups can influence the analysis, particularly if no difference in aggression is found. Why? If spanking increases aggression and so does violent television, the groups might be about equal in aggression but for completely different reasons. Without random assignment and control of extraneous variables, it is not possible to discover causality. Thus, quasi-experiments are used to describe relationships between existing groups. We call these pre-existing variables quasi-independent variables, and studies that use quasi-independent variables instead of randomly assigned true independent variables are called quasi-experiments.
In this activity, you will view a series of short videos about experimental designs. For each video, you will answer several questions:
Study 1: Reducing Stress
Background: Social Psychologist Mark Baldwin has studied the relationship between stressful environments and feelings of anxiety. He wondered if stressful work might lead people to expect and even search for negative messages, such as angry facial expressions, from other people. This can be a problem, because these negative messages then add to the stress which in turn increases anxiety. In this video, Dr. Baldwin shows a creative way to help people break away from this stress-induced tendency to search for negative messages.
Summary of the experiment:
Telemarketers were randomly assigned to one of two tasks. Some of them looked for happy faces among sets of non-smiling faces. This version of the independent variable was the “treatment condition” because the experimenter believed that this task would be the one that would reduce stress levels. The other task was to search for flowers with five petals. This was the control condition, because the experimenter believed that this task would have little if any beneficial effect on stress.
Because telemarketers were randomly assigned to one of the two tasks, we can call this a TRUE experiment. If they had been assigned based on personal characteristics or preferences or some other nonrandom basis, we would call this a quasi-experiment.
The primary dependent variable in this study was stress hormone level. Stress hormones are a biological indicator of the stress that the person is feeling, so level of the hormone is a valuable measure of actual stress experienced, and it does not depend on people’s personal statements about their feelings of stress, which might be distorted by various other factors. The researchers also looked at telemarketing sales, though the idea is that lower stress leads to better performance, so telemarketing sales are only indirectly affected by the task.
The research question was this: Does the specific task—looking for smiling faces versus flowers—affect stress hormone levels and telemarketing sales performance?
The results were clear. Stress hormones were 17% lower in the group that looked for smiling faces when compared to the control group. The only known difference between these groups was the task itself, so—tentatively—we can conclude that the task of looking for smiling faces caused the people in that group to have lower stress levels than similar telemarketers in the other group.
The results also showed that telemarketing sales in the treatment group (smiling faces) were 68% higher than sales for those in the control group (flowers). Here, too, we can tentatively conclude that the task of looking for smiling faces caused the people in that group to have higher telemarketing sales than similar telemarketers in the other group.
Study 2: Red for Romance?
Background: Psychologist Daniella Niesta is a researcher who is interested in factors that influence our motivation. Some factors that influence our behavior and thoughts can be unconscious and subtle. Dr. Niesta believes that colors often have meaning, and that red, particularly in a context of romantic attraction, carries a strong subconscious meaning for men. If she is correct, then colors associated with a romantic context should influence men’s feelings of attraction and interest.
Summary of the experiment:
Undergraduate males were randomly assigned to one of two conditions. The men in one group saw a picture of a woman wearing a red blouse and the men in the other condition saw the same picture, except that the blouse had been digitally altered to be blue. The color of the shirt was the independent variable.
The men then rated the woman on the question: “How attractive do you think this person is?” They answered on a 9-point scale, where 1 was labeled “not at all” and 9 was labeled “extremely.” They were also asked other related questions and answered these on a similar scale.
The results showed that the woman was rated as more attractive on the average when she was depicted in a red blouse than when she was in a blue blouse. Other related questions showed a similar higher level of attraction for the woman when the shirt was red.
Dr. Niesta also looked at other colors: green, gray, and white. The same pattern of results emerged when red was compared with these other colors. Men found the woman more attractive when she was wearing red.
Study 3
Background: Dr. Ahmed El-Sohemy is a professor of nutritional science at the University of Toronto. In this study, he was interested in differences in people’s ability to regulate their own food intake, particularly their consumption of sugar. He divided people according to which version of a particular gene they had and studied their eating behavior. Watch the video for more details.
Summary of the experiment:
Previous research had indicated to Dr. El-Sohemy that the GLUT2 gene might be involved in regulation of sugar consumption. He recruited volunteers and, using blood samples, divided his volunteers into two groups according to the variation on the GLUT2 gene each person had. The gene with its two variations (alleles) was the quasi-independent variable.
He then had each person fill out an extensive questionnaire on their eating behaviors. For the study reported here, sugar consumption based on these reports was the dependent variable.
The researcher found that people with different versions of the gene consumed substantially different amounts of sugar. The researcher speculated that this gene might be associated with sensitivity to sugar so it could influence how much we eat before we feel we have had enough.
Example 1
A researcher is interested in the effects of a famous author on the persuasiveness of a message. He recruits 50 college students to be in his study and randomly assigns 25 to be in the “high prestige” group and 25 to be in the “low prestige” group. All of the students read the same document about the importance of improving mental health services at the college. But the 25 students in the “high prestige” group read on the document that the author is the chairperson of the Psychology Department. The 25 students in the “low prestige” group read on the document that the author is a psychology undergraduate student writing as part of a class assignment. Everyone was asked after reading the article to indicate how much they agreed with the idea that psychological services should be improved at the college.
Example 2
A therapist develops a new approach to treating depression using exercise and diet along with regular counseling sessions. For all of her new clients who agree to be in her test of the therapy approach, she randomly assigns half to receive her new form of treatment and the other half to receive the traditional form of treatment that the therapist has offered for many years. She uses a respected, standardized measure of depression before the beginning of her therapy and then again after 3 months of treatment. She uses the difference between these before-and-after measures to indicate the change in level of depression.
Example 3
A researcher is interested to see if older adults (70 to 80 years old) have more trouble with multitasking than middle-age adults (40 to 50 years old). Participants were seated in a driving simulator that allowed them to drive a simulated car through a variety of situations. They were asked to carry on a hands-free cellphone conversation with an experimenter located in a different room during the driving test. The experimenters measured driving abilities, including avoidance of problems, speed, braking effectiveness, and other related behaviors.
Good research is valid research. When research is valid, the conclusions drawn by the researcher are legitimate. For instance, if a researcher concludes that participating in psychotherapy reduces anxiety, or that taller people are smarter than shorter people, the research is valid only if the therapy really works or if taller people really are smarter. Unfortunately, there are many threats to the validity of research, and these threats may sometimes lead to unwarranted conclusions. Often, and despite researchers’ best intentions, some of the research reported on websites as well as in newspapers, magazines, and even scientific journals is invalid. Validity is not an all-or-nothing proposition, which means that some research is more valid than other research. Only by understanding the potential threats to validity will you be able to make knowledgeable decisions about the conclusions that can or cannot be drawn from a research project. Here we discuss two of these major types of threats to the validity of research: internal and external validity.
Two Threats to the Validity of Research
Internal validity refers to the extent to which we can trust the conclusions that have been drawn about the causal relationship between the independent and dependent variables. Internal validity applies primarily to experimental research designs, in which the researcher hopes to conclude that the independent variable has caused the dependent variable. Internal validity is maximized when the research is free from the presence of confounding variables—variables other than the independent variable on which the participants in one experimental condition differ systematically from those in other conditions.
Consider an experiment in which a researcher tested the hypothesis that drinking alcohol makes members of the opposite sex look more attractive. Participants older than 21 years of age were randomly assigned either to drink orange juice mixed with vodka or to drink orange juice alone. To eliminate the need for deception, the participants were told whether or not their drinks contained vodka. After enough time had passed for the alcohol to take effect, the participants were asked to rate the attractiveness of pictures of members of the opposite sex. The results of the experiment showed that, as predicted, the participants who drank the vodka rated the photos as significantly more attractive.
If you think about this experiment for a minute, it may occur to you that although the researcher wanted to draw the conclusion that the alcohol caused the differences in perceived attractiveness, the expectation of having consumed alcohol is confounded with the presence of alcohol. That is, the people who drank alcohol also knew they drank alcohol, and those who did not drink alcohol knew they did not. It is possible that simply knowing that they were drinking alcohol, rather than the effect of the alcohol itself, may have caused the differences, as shown in the following figure. One solution to the problem of potential expectancy effects is to tell both groups that they are drinking orange juice and vodka but really give alcohol to only half of the participants (it is possible to do this because vodka has very little smell or taste). If differences in perceived attractiveness are found, the experimenter could then confidently attribute them to the alcohol rather than to expectancies about having consumed alcohol.
Another threat to internal validity can occur when the experimenter knows the research hypothesis and also knows which experimental condition the participants are in. The outcome is the potential for experimenter bias, a situation in which the experimenter subtly treats the research participants in the various experimental conditions differently, resulting in an invalid confirmation of the research hypothesis. In one study demonstrating experimenter bias, Rosenthal and Fode [2] sent twelve students to test a research hypothesis concerning maze learning in rats. Although it was not initially revealed to the students, they were actually the participants in an experiment. Six randomly chosen students were told that the rats they would be testing had been bred to be highly intelligent, whereas the other six students were led to believe that the rats had been bred to be unintelligent. In reality there were no differences among the rats given to the two groups of students. When the students returned with their data, a startling result emerged. The rats run by students who expected them to be intelligent showed significantly better maze learning than the rats run by students who expected them to be unintelligent. Somehow the students’ expectations influenced their data. They evidently did something different when they tested the rats, perhaps subtly changing how they timed the maze running or how they treated the rats. And this experimenter bias probably occurred entirely out of their awareness.
To avoid experimenter bias, researchers frequently run experiments in which the researchers are blind to condition. This means that although the experimenters know the research hypotheses, they do not know which conditions the participants are assigned to. Experimenter bias cannot occur if the researcher is blind to condition. In a double-blind experiment, both the researcher and the research participants are blind to condition. For instance, in a double-blind trial of a drug, the researcher does not know whether the drug being given is the real drug or the ineffective placebo, and the patients also do not know which they are getting. Double-blind experiments eliminate the potential for experimenter effects and at the same time eliminate participant expectancy effects.
While internal validity refers to conclusions drawn about events that occurred within the experiment, external validity refers to the extent to which the results of a research design can be generalized beyond the specific way the original experiment was conducted. Generalization is the extent to which relationships among conceptual variables can be demonstrated in a wide variety of people and a wide variety of manipulated or measured variables.
Psychologists who use college students as participants in their research may be concerned about generalization, wondering if their research will generalize to people who are not college students. And researchers who study the behaviors of employees in one company may wonder whether the same findings would translate to other companies. Whenever there is reason to suspect that a result found for one sample of participants would not hold up for another sample, then research may be conducted with these other populations to test for generalization.
Recently, many psychologists have been interested in testing hypotheses about the extent to which a result will replicate across people from different cultures. For instance, a researcher might test whether the effects of playing violent video games on aggression are the same for Japanese children as they are for American children by having a sample of both Japanese and American schoolchildren play violent and nonviolent video games. If the results are the same in both cultures, then we say that the results have generalized, but if they are different, then we have learned a limiting condition of the effect.
Unless the researcher has a specific reason to believe that generalization will not hold, it is appropriate to assume that a result found in one population (even if that population is college students) will generalize to other populations. Because the investigator can never demonstrate that the research results generalize to all populations, it is not expected that the researcher will attempt to do so. Rather, the burden of proof rests on those who claim that a result will not generalize.
Because any single test of a research hypothesis will always be limited in terms of what it can show, important advances in science are never the result of a single research project. Advances occur through the accumulation of knowledge that comes from many different tests of the same theory or research hypothesis. These tests are conducted by different researchers using different research designs, participants, and operationalizations of the independent and dependent variables. The process of repeating previous research, which forms the basis of all scientific inquiry, is known as replication.
Situation 1: A researcher wants to know if creativity can be taught. She designs a curriculum for teaching creative drawing to elementary school children. Then (with the permission of parents and the school) she randomly assigns 30 students to participate in several weekly sessions of creativity training. Another 30 randomly chosen students participate in a weekly session where they can draw, but they receive no creativity instruction. At the end of the six weeks of instruction, she has each child draw a picture. Five local school art teachers, who are also friends of hers, serve as judges. Each picture has a label showing whether the child was in the “creative training” group or the “no creative training” group. The art teacher-judges rate each picture on a 10-point scale, where 10 means “very high in creativity” and 1 means “very low in creativity.” The children in the “creative training” group receive an average rating of 8.5, and the children in the “no creative training” group receive an average rating of 4.0. Based on these results, the researcher claims that her creativity training curriculum succeeded in teaching students to be more creative.
Situation 2: A researcher at Harvard University is interested in how much people enjoy film documentaries. He recruits 40 students enrolled in the documentary filmmaking program at Harvard. He has each person watch 5 recently produced documentaries about poverty, pollution, and European monetary policy. The students then rate each documentary on several questions related to enjoyment (e.g., How much did you enjoy this movie? Would you take a date to see this movie?). He also has the students watch and rate 5 recently produced Hollywood action movies. The students rate the documentaries as more enjoyable than the Hollywood action movies. Based on this, the researcher states that movie producers should move from the “dying art form” of action movies to the “new wave” of important-issue documentaries because people now prefer documentaries.
Situation 3: The following descriptions are from two actual research studies. Read both studies and answer the following questions about the validity of Experiment 1.
Experiment 1: In 1950, the Pepsi Cola Corporation, now PepsiCo, Inc., conducted the “Pepsi Challenge” by randomly assigning individuals to taste either Pepsi or Coca-Cola. The researchers labeled the cups with only an “S” for Pepsi or an “L” for Coca-Cola and asked the participants to rate how much they liked the beverage. The research showed that participants overwhelmingly preferred cup S over cup L, and the researchers concluded that Pepsi was preferred to Coca-Cola.
Experiment 2: In 1983, independent researchers modified the 1950s study in which randomly assigned participants tasted cola from two cups, one marked L and the other marked S. The same product (either Pepsi or Coca-Cola) was placed in both cups. Just as in the 1950s study, the participants overwhelmingly reported that cup S contained the better-tasting product regardless of whether cup S contained Pepsi or Coca-Cola.
The researchers then extended their study by conducting another experiment in which participants were asked their preference for either Pepsi or Coca-Cola. The participants drank from a Pepsi bottle (which contained Coke) and from a Coke bottle (which contained Pepsi). The results indicated that the participants were significantly influenced by the visible label of the product they preferred and not by taste differences between the two products. The researchers concluded that a taste comparison of colas should avoid using any type of labels, even presumably neutral ones like letters of the alphabet, since such labels may have more powerful influences on product comparisons than taste differences.
In 1986 Anne Adams was working as a cell biologist at the University of Toronto in Ontario, Canada. She took a leave of absence from her work to care for a sick child, and while she was away, she completely changed her interests, dropping biology entirely and turning her attention to art. In 1994 she completed her painting Unravelling Boléro, a translation of Maurice Ravel’s famous orchestral piece, Boléro, onto canvas. As you can see in the following image, this artwork is filled with themes of repetition. Each bar of music is represented by a lacy vertical figure, with the height representing volume, the shape representing note quality, and the color representing the music’s pitch. Like Ravel’s music (see the following video), which is a hypnotic melody consisting of two melodic themes repeated eight times over 340 musical bars, the theme in the painting repeats and builds, leading to a dramatic change in color from blue to orange and pink, a representation of Boléro’s sudden and dramatic climax.
Shortly after finishing the painting, Adams began to experience behavioral problems, including increased difficulty speaking. Neuroimages of Adams’s brain taken during this time show that regions in the front part of her brain, which are normally associated with language processing, had begun to deteriorate, while at the same time, regions of the brain responsible for the integration of information from the five senses were unusually well developed. [1] The deterioration of the frontal cortex is a symptom of frontotemporal dementia, a disease associated with changes in artistic and musical tastes and skills [2] as well as with an increase in repetitive behaviors. [3]
What Adams did not know as she worked on her painting was that her brain may have been undergoing the same changes that Ravel’s had undergone 66 years earlier. In fact, it appears that Ravel may have suffered from the same neurological disorder. Ravel composed Boléro at age 53, when he himself was beginning to show behavioral symptoms that interfered with his ability to move and speak. Scientists have concluded, on the basis of an analysis of his written notes and letters, that Ravel was also experiencing the effects of frontotemporal dementia. [4] If Adams and Ravel were both affected by the same disease, it could explain why they both became fascinated with the repetitive aspects of their arts, and it would present a remarkable example of the influence of our brains on behavior.
Every behavior begins with biology. Our behaviors, as well as our thoughts and feelings, are produced by the actions of our brains, nerves, muscles, and glands. In this unit, we begin our journey into the world of psychology by considering the biological makeup of the human being, including the most remarkable of human organs—the brain. We consider the structure of the brain and the methods psychologists use to study the brain and to understand how it works. Let’s begin by looking at neurons, which are nerve cells involved with all information processing in your brain.
A neuron is a cell in the nervous system whose function it is to receive and transmit information. Amazingly, your nervous system is composed of more than 100 billion neurons!
As you can see in the following figure, neurons consist of three major parts: a cell body, or soma, which contains the nucleus of the cell and keeps the cell alive; a branching, treelike fiber known as the dendrite, which collects information from other cells and sends the information to the soma; and a long, segmented fiber known as the axon, which transmits information away from the cell body toward other neurons or to the muscles and glands.
Some neurons have hundreds or even thousands of dendrites, and these dendrites may be branched to allow the cell to receive information from thousands of other cells. The axons are also specialized, and some, such as those that send messages from the spinal cord to the muscles in the hands or feet, may be very long—even up to several feet in length. To improve the speed of their communication, and to keep their electrical charges from shorting out with other neurons, axons are often surrounded by a myelin sheath. The myelin sheath is a layer of fatty tissue surrounding the axon of a neuron that both acts as an insulator and allows faster transmission of the electrical signal. Axons branch out toward their ends, and at the tip of each branch is a terminal button.
The nervous system operates using an electrochemical process (see the following video). An electrical charge moves through the neuron, and chemicals are used to transmit information between neurons. Within the neuron, when a signal is received by the dendrites, it is transmitted to the soma in the form of an electrical signal, and if the signal is strong enough, it may then be passed to the axon and then to the terminal buttons. If the signal reaches the terminal buttons, they are signaled to emit chemicals known as neurotransmitters, which communicate with other neurons across the spaces between the cells, known as synapses. You will be learning more about synapses and why they are important later on in this module.
The electrical signal moves through the neuron as a result of changes in the electrical charge of the axon. Normally, the axon remains in the resting potential, a state in which the interior of the neuron contains a greater number of negatively charged ions than does the area outside the cell. When the segment of the axon that is closest to the cell body is stimulated by an electrical signal from the dendrites, and if this electrical signal is strong enough that it passes a certain level or threshold, the cell membrane in this first segment opens its gates, allowing positively charged sodium ions that were previously kept out to enter. This change in electrical charge that occurs in a neuron when a nerve impulse is transmitted is known as the action potential. Once the action potential occurs, the number of positive ions exceeds the number of negative ions in this segment, and the segment temporarily becomes positively charged.
As you can see in the following figure, the axon is segmented by a series of breaks between the sausage-like segments of the myelin sheath. Each of these gaps is a node of Ranvier. The electrical charge moves down the axon from segment to segment, in a set of small jumps, moving from node to node. When the action potential occurs in the first segment of the axon, it quickly creates a similar change in the next segment, which then stimulates the next segment, and so forth, as the positive electrical impulse continues all the way down to the end of the axon. As each new segment becomes positive, the membrane in the prior segment closes up again, and the segment returns to its negative resting potential. In this way, the action potential is transmitted along the axon toward the terminal buttons. The entire response along the length of the axon is very fast—it can happen up to 1,000 times each second.
An important aspect of the action potential is that it operates in an all-or-nothing manner. What this means is that the neuron either fires completely, such that the action potential moves all the way down the axon, or it does not fire at all. Thus, neurons can provide more energy to the neurons down the line by firing faster but not by firing more strongly. Furthermore, the neuron is prevented from repeated firing by the presence of a refractory period—a brief time after the firing of the axon in which the axon cannot fire again because the neuron has not yet returned to its resting potential.
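To make the all-or-nothing rule concrete, the following toy simulation may help. It is only a minimal sketch in Python; the threshold, time steps, and input values are invented for illustration and are not physiological measurements. The model fires completely whenever its input crosses the threshold and then pauses for a brief refractory period:

```python
# Toy model of all-or-nothing firing with a refractory period.
# All values are invented for illustration, not physiological data.

THRESHOLD = 1.0       # input level needed to trigger an action potential
REFRACTORY_STEPS = 2  # time steps during which the neuron cannot fire again

def simulate(inputs):
    """Return a list of 0/1 spikes, one per time step of input."""
    spikes = []
    cooldown = 0
    for signal in inputs:
        if cooldown > 0:
            spikes.append(0)        # still refractory: cannot fire yet
            cooldown -= 1
        elif signal >= THRESHOLD:
            spikes.append(1)        # fires completely, never "partially"
            cooldown = REFRACTORY_STEPS
        else:
            spikes.append(0)        # below threshold: no firing at all
    return spikes

# A stronger stimulus produces more frequent spikes, not bigger ones.
print(simulate([0.5, 0.9, 0.8, 0.7, 0.6, 0.5]))  # [0, 0, 0, 0, 0, 0]
print(simulate([1.5, 1.8, 1.9, 1.7, 1.6, 1.5]))  # [1, 0, 0, 1, 0, 0]
```

Notice that the stronger input yields more frequent spikes, never larger ones; this mirrors the rate-based signaling described above.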
Not only do neural signals travel via electrical charges within the neuron, but they also travel via chemical transmission between the neurons. As we just learned, neurons are separated by junction areas known as synapses, areas where the terminal buttons at the end of the axon of one neuron nearly, but don’t quite, touch the dendrites of another. The synapses provide a remarkable function because they allow each axon to communicate with many dendrites in neighboring cells. Because a neuron may have synaptic connections with thousands of other neurons, the communication links among the neurons in the nervous system allow for a highly sophisticated communication system.
When the electrical impulse from the action potential reaches the end of the axon, it signals the terminal buttons to release neurotransmitters into the synapse. A neurotransmitter is a chemical that relays signals across the synapses between neurons. Neurotransmitters travel across the synaptic space between the terminal button of one neuron and the dendrites of other neurons, where they bind to the dendrites in the neighboring neurons. Furthermore, different terminal buttons release different neurotransmitters, and different dendrites are particularly sensitive to different neurotransmitters. The dendrites admit the neurotransmitters only if they are the right shape to fit in the receptor sites on the receiving neuron. For this reason, the receptor sites and neurotransmitters are often compared to a lock and key, as shown in the following figure.
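If a more concrete analogy helps, the lock-and-key idea can be pictured as a simple lookup. This is only an illustrative sketch in Python; the receptor names are invented labels, not real biochemistry:

```python
# Toy "lock and key": each receptor site (lock) accepts only the
# neurotransmitter (key) whose shape matches it. Names are invented.
receptor_locks = {
    "receptor_A": "dopamine",
    "receptor_B": "serotonin",
}

def binds(receptor, transmitter):
    """A transmitter binds only to a receptor whose shape it fits."""
    return receptor_locks.get(receptor) == transmitter

print(binds("receptor_A", "dopamine"))   # True: the key fits this lock
print(binds("receptor_A", "serotonin"))  # False: wrong key for this lock
```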
To explore the process of neurotransmission, watch this animation.
When neurotransmitters are accepted by the receptors on the receiving neurons, their effect may be either excitatory (i.e., they make the cell more likely to fire) or inhibitory (i.e., they make the cell less likely to fire). Furthermore, if the receiving neuron is able to accept more than one neurotransmitter, it is influenced by the excitatory and inhibitory processes of each. If the excitatory effects of the neurotransmitters are greater than the inhibitory influences of the neurotransmitters, the neuron moves closer to its firing threshold, and if it reaches the threshold, the action potential and the process of transferring information through the neuron begins.
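The tug-of-war between excitatory and inhibitory influences can likewise be pictured as simple arithmetic. The sketch below (again Python, with invented input values and threshold) sums the competing inputs and compares the net result with the firing threshold:

```python
# Toy synaptic integration: excitatory inputs push the neuron toward
# its firing threshold, inhibitory inputs pull it away. Values invented.
THRESHOLD = 1.0

def will_fire(excitatory, inhibitory):
    net = sum(excitatory) - sum(inhibitory)
    return net >= THRESHOLD

print(will_fire(excitatory=[0.6, 0.7], inhibitory=[0.2]))  # True  (net about 1.1)
print(will_fire(excitatory=[0.6, 0.7], inhibitory=[0.5]))  # False (net about 0.8)
```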
Neurotransmitters that are not accepted by the receptor sites must be removed from the synapse in order for the next potential stimulation of the neuron to happen. This process occurs in part through the breaking down of the neurotransmitters by enzymes, and in part through reuptake, a process in which neurotransmitters that are in the synapse are reabsorbed into the transmitting terminal buttons, ready to again be released after the neuron fires.
Watch the animation and then complete the following exercises.
More than 100 chemical substances produced in the body have been identified as neurotransmitters, and these substances have wide-ranging and profound effects on emotion, cognition, and behavior. Neurotransmitters regulate our appetite, memory, and emotions, as well as our muscle action and movement. And as you can see in the following table, some neurotransmitters are also associated with psychological and physical diseases.
Drugs that we might ingest—either for medical reasons or recreationally—can act like neurotransmitters to influence our thoughts, feelings, and behavior. An agonist is a drug that has chemical properties similar to a particular neurotransmitter and thus mimics the effects of the neurotransmitter. When an agonist is ingested, it binds to the receptor sites in the dendrites to excite the neuron, acting as if more of the neurotransmitter had been present. As an example, cocaine is an agonist for the neurotransmitter dopamine. Because dopamine produces feelings of pleasure when it is released by neurons, cocaine creates similar feelings when it is ingested. An antagonist is a drug that reduces or stops the normal effects of a neurotransmitter. When an antagonist is ingested, it binds to the receptor sites in the dendrites, thereby blocking the neurotransmitter. As an example, the poison curare is an antagonist for the neurotransmitter acetylcholine. When the poison enters the brain, it binds to the dendrites, stops communication among the neurons, and usually causes death. Still other drugs work by blocking the reuptake of the neurotransmitter itself—when reuptake is reduced by the drug, more neurotransmitter remains in the synapse, increasing its action.
[Table: The Major Neurotransmitters and Their Functions. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
If you were someone who understood brain anatomy and were to look at the brain of an animal that you had never seen before, you would nevertheless be able to deduce the likely capacities of the animal because the brains of all animals are much alike in overall form. In each animal the brain is layered, and the basic structures of the brain are similar. The innermost structures of the brain—the parts nearest the spinal cord—are the oldest part of the brain, and these areas carry out the same functions they did for our distant ancestors. The “old brain” regulates basic survival functions, such as breathing, moving, resting, and feeding, and creates our experiences of emotion. Mammals, including humans, have developed further brain layers that provide more advanced functions—for instance, better memory, more sophisticated social interactions, and the ability to experience emotions. Humans have a large and highly developed outer layer known as the cerebral cortex, which makes us particularly adept at these processes.
The brain stem is the oldest and innermost region of the brain. It controls the most basic functions of life, including breathing, attention, and motor responses. The brain stem begins where the spinal cord enters the skull and forms the medulla, the area of the brain stem that controls heart rate and breathing. In many cases, the medulla alone is sufficient to maintain life—animals that have their brains severed above the medulla are still able to eat, breathe, and even move. The spherical shape above the medulla is the pons, a structure in the brain stem that helps control the movements of the body, playing a particularly important role in balance and walking. The pons is also important in sleeping, waking, dreaming, and arousal.
Running through the medulla and the pons is a long, narrow network of neurons known as the reticular formation. The job of the reticular formation is to filter out some of the stimuli that are coming into the brain from the spinal cord and to relay the remainder of the signals to other areas of the brain. The reticular formation also plays important roles in walking, eating, sexual activity, and sleeping. When electrical stimulation is applied to the reticular formation of an animal, it immediately becomes fully awake, and when the reticular formation is severed from the higher brain regions, the animal falls into a deep coma.
Two structures near the brain stem are also vital for basic survival functions. The thalamus is the egg-shaped structure sitting just above the brain stem that applies still more filtering to the sensory information coming from the spinal cord and through the reticular formation, and it relays some of these remaining signals to the higher brain levels. [1] The thalamus also receives some of the higher brain’s replies, forwarding them to the medulla and the cerebellum. The thalamus is also important in sleep because it shuts off incoming signals from the senses, allowing us to rest.
The cerebellum (literally, “little brain”) consists of two wrinkled ovals behind the brain stem. It functions to coordinate voluntary movement. People who have damage to the cerebellum have difficulty walking, keeping their balance, and holding their hands steady. Consuming alcohol influences the cerebellum, which is why people who are drunk have difficulty walking in a straight line. Also, the cerebellum contributes to emotional responses, helps us discriminate between different sounds and textures, and is important in learning. [2]
Whereas the primary function of the brain stem is to regulate the most basic aspects of life, including motor functions, the limbic system is largely responsible for memory and emotions, including our responses to reward and punishment. The limbic system is a set of distinct and important brain structures located beneath and around the thalamus. Limbic system structures interact with the rest of the brain in complex ways, and they are extremely important for memory and control of emotional responses. They include the amygdala, the hypothalamus, and the hippocampus, among other structures.
The amygdala consists of two almond-shaped clusters (amygdala comes from the Latin word for almond) and is primarily responsible for regulating our perceptions of and reactions to aggression and fear. The amygdala has connections to other bodily systems related to fear, including the sympathetic nervous system (which we will see later is important in fear responses), facial responses (which perceive and express emotions), the processing of smells, and the release of neurotransmitters related to stress and aggression. [1] In a 1939 study, Klüver and Bucy [2] damaged the amygdala of an aggressive rhesus monkey. They found that the once angry animal immediately became passive and no longer responded to fearful situations with aggressive behavior. Electrical stimulation of the amygdala in other animals also influences aggression. In addition to helping us experience fear, the amygdala helps us learn from situations that create fear. When we experience events that are dangerous, the amygdala stimulates the brain to remember the details of the situation so that we learn to avoid it in the future. [3]
Located just under the thalamus (hence its name), the hypothalamus is a brain structure that contains a number of small areas that perform a variety of functions. Through its many interactions with other parts of the brain, the hypothalamus helps regulate body temperature, hunger, thirst, and sex drive and responds to the satisfaction of these needs by creating feelings of pleasure.
The hippocampus consists of two “horns” that curve back from the amygdala. The hippocampus is important in storing information in long-term memory. If the hippocampus is seriously damaged on both sides of the brain, a person may be unable to store new long-term memories, living instead in a strange world where everything he or she experiences just fades away, even while older memories from the time before the damage are untouched.
All animals have adapted to their environments by developing abilities that help them survive. Some animals have hard shells, others run extremely fast, and some have acute hearing. Human beings do not have any of these particular characteristics, but we do have one big advantage over other animals—we are very, very smart.
You might think we should be able to determine the intelligence of an animal by looking at the ratio of the animal’s brain weight to the weight of its entire body. But this ratio is not a good measure of intelligence. The elephant’s brain is one-thousandth of its body weight, and the whale’s brain is only one-ten-thousandth of its body weight. On the other hand, although the human brain is one-sixtieth of its body weight, the mouse’s brain represents one-fortieth of its body weight. Despite these comparisons, elephants do not seem 10 times smarter than whales, and humans definitely seem smarter than mice.
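You can check the arithmetic behind these comparisons yourself. The short Python sketch below computes the brain-to-body weight ratios quoted above and ranks them:

```python
# Brain weight as a fraction of body weight, using the figures above.
ratios = {
    "elephant": 1 / 1000,
    "whale":    1 / 10000,
    "human":    1 / 60,
    "mouse":    1 / 40,
}

# Rank the animals from largest to smallest ratio.
for animal, ratio in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{animal:8s} {ratio:.5f}")

# Output order: mouse, human, elephant, whale. A ranking that places the
# mouse above the human shows why this ratio alone cannot measure intelligence.
```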
The key to the advanced intelligence of humans is not found in the size of our brains. What sets humans apart from other animals is our larger cerebral cortex—the outer barklike layer of our brain that allows us to so successfully use language, acquire complex skills, create tools, and live in social groups. [1] In humans, the cerebral cortex is wrinkled and folded rather than smooth, as it is in most other animals. This creates a much greater surface area and size, and it allows increased capacities for learning, remembering, and thinking. The folding of the cerebral cortex is called corticalization.
Although the cortex is only about one tenth of an inch thick, it makes up more than 80% of the brain’s weight. The cortex contains about 20 billion nerve cells and 300 trillion synaptic connections. [2] Supporting all these neurons are billions more glial cells (glia), cells that surround and link to the neurons, protecting them, providing them with nutrients, and absorbing unused neurotransmitters. The glia come in different forms and have different functions. For instance, the myelin sheath surrounding the axon of many neurons is a type of glial cell. The glia are essential partners of neurons, without which the neurons could not survive or function. [3]
The cerebral cortex is divided into two hemispheres, and each hemisphere is divided into four lobes, each separated by folds known as fissures. If we look at the cortex starting at the front of the brain and moving over the top, we see first the frontal lobe (behind the forehead), which is responsible primarily for thinking, planning, memory, and judgment. Following the frontal lobe is the parietal lobe, which extends from the middle to the back of the skull and is responsible primarily for processing information about touch. Then comes the occipital lobe, at the very back of the skull, which processes visual information. Finally, in front of the occipital lobe (pretty much between the ears) is the temporal lobe, responsible primarily for hearing and language.
When the German physiologists Gustav Fritsch and Eduard Hitzig (1870/2009) applied mild electric stimulation to different parts of a dog’s cortex, they discovered that they could make different parts of the dog’s body move. [4] They also discovered an important and unexpected principle of brain activity. They found that stimulating the right side of the brain produced movement in the left side of the dog’s body, and conversely, stimulating the left brain affected the right side of the body. This finding follows from a general principle about how the brain is structured, called contralateral control. The brain is wired such that in most cases the left hemisphere receives sensations from and controls the right side of the body, and vice versa.
Fritsch and Hitzig also found that the movement that followed the brain stimulation occurred only when they stimulated a specific arch-shaped region that runs across the top of the brain from ear to ear, just at the front of the parietal lobe. Fritsch and Hitzig had discovered the motor cortex, the part of the cortex that controls and executes movements of the body by sending signals to the cerebellum and the spinal cord. More recently, researchers have mapped the motor cortex even more fully by providing mild electrical stimulation to different areas of the motor cortex in fully conscious participants while observing their bodily responses (because the brain itself has no pain receptors, these participants feel no pain). As you can see in the following figure, this research has revealed that the motor cortex is specialized for providing control over the body: The parts of the body that require more precise and finer movements, such as the face and hands, also are allotted the greatest amount of cortical space.
Just as the motor cortex sends out messages to the specific parts of the body, the somatosensory cortex, an area at the front of the parietal lobe just behind and parallel to the motor cortex, receives information from the skin’s sensory receptors and from the movements of different body parts. Again, the more sensitive the body region, the more area is dedicated to it in the sensory cortex. Our sensitive lips, for example, occupy a large area in the sensory cortex, as do our fingers and genitals.
Other areas of the cortex process other types of sensory information. The visual cortex is the area located in the occipital lobe (at the very back of the brain) that processes visual information. If you were stimulated in the visual cortex, you would see flashes of light or color, and perhaps you have had the experience of “seeing stars” when you were hit in or fell on the back of your head. The temporal lobe, located on the lower side of each hemisphere, contains the auditory cortex, which is responsible for hearing and language. The temporal lobe also processes some visual information, providing us with the ability to name the objects around us. [5]
As you can see in the preceding figure, the motor and sensory areas of the cortex account for a relatively small part of the total cortex. The remainder of the cortex is made up of association areas in which sensory and motor information are combined and associated with our stored knowledge. These association areas are responsible for most of the things that make human beings seem human—the higher mental functions, such as learning, thinking, planning, judging, moral reflecting, figuring, and spatial reasoning.
The control of some bodily functions, such as movement, vision, and hearing, is performed in specific areas of the cortex, and if an area is damaged, the individual will likely lose the ability to perform the corresponding function. For instance, if an infant suffers damage to facial recognition areas in the temporal lobe, it is likely that he or she will never be able to recognize faces. [1] However, the brain is not divided in an entirely rigid way. The brain’s neurons have a remarkable capacity to reorganize and extend themselves to carry out particular functions in response to the needs of the organism and to repair damage. As a result, the brain constantly creates new neural communication routes and rewires existing ones. Neuroplasticity is the brain’s ability to change its structure and function in response to experience or damage. Neuroplasticity enables us to learn and remember new things and adjust to new experiences.
Our brains are the most “plastic” when we are young children, as it is during this time that we learn the most about our environment. And neuroplasticity continues to be observed even in adults. [2] The principles of neuroplasticity help us understand how our brains develop to reflect our experiences. For instance, accomplished musicians have a larger auditory cortex compared with the general population [3] and also require less neural activity to play their instruments than do novices. [4] These observations reflect the changes in the brain that follow our experiences.
Plasticity is also observed when damage occurs to the brain or to parts of the body that are represented in the motor and sensory cortexes. When a tumor in the left hemisphere of the brain impairs language, the right hemisphere begins to compensate to help the person recover the ability to speak. [5] And if a person loses a finger, the area of the sensory cortex that previously received information from the missing finger begins to receive input from adjacent fingers, causing the remaining digits to become more sensitive to touch. [6]
Although neurons cannot repair or regenerate themselves as skin and blood vessels can, new evidence suggests that the brain can engage in neurogenesis, the forming of new neurons. [7] These new neurons originate deep in the brain and may then migrate to other brain areas where they form new connections with other neurons. [8] This leaves open the possibility that someday scientists might be able to “rebuild” damaged brains by creating drugs that help grow neurons.
We learned that the left hemisphere of the brain primarily senses and controls the motor movements on the right side of the body, and vice versa. This fact provides an interesting way to study brain lateralization—the idea that the left and the right hemispheres of the brain are specialized to perform different functions. Gazzaniga, Bogen, and Sperry [9] studied a patient, known as W. J., who had undergone an operation to relieve severe seizures. In this surgery, the region that normally connects the two halves of the brain and supports communication between the hemispheres, known as the corpus callosum, is severed. As a result, the patient essentially becomes a person with two separate brains. Because the left and right hemispheres are separated, each hemisphere develops a mind of its own, with its own sensations, concepts, and motivations. [10]
In their research, Gazzaniga and his colleagues tested the ability of W. J. to recognize and respond to objects and written passages that were presented to only the left or to only the right brain hemispheres. The researchers had W. J. look straight ahead and then flashed, for a fraction of a second, a picture of a geometric shape to the left of where he was looking. By doing so, they ensured that—because the two hemispheres had been separated—the image of the shape was experienced only in the right brain hemisphere (remember that sensory input from the left side of the body is sent to the right side of the brain). Gazzaniga and his colleagues found that W. J. was able to identify what he had been shown when he was asked to pick the object from a series of shapes, using his left hand, but that he could not do so when the object was shown in the right visual field. Conversely, W. J. could easily read written material presented in the right visual field (and thus experienced in the left hemisphere) but not when it was presented in the left visual field.
The information presented on the left side of our field of vision is transmitted to the right brain hemisphere, and vice versa. In split-brain patients, the severed corpus callosum does not permit information to be transferred between hemispheres, which allows researchers to learn about the functions of each hemisphere.
This research, and many other studies following it, demonstrated that the two brain hemispheres specialize in different abilities. In most people, the ability to speak, write, and understand language is located in the left hemisphere. This is why W. J. could read passages that were presented on the right side and thus transmitted to the left hemisphere, but could not read passages that were only experienced in the right brain hemisphere. The left hemisphere is also better at math and at judging time and rhythm. It is also superior in coordinating the order of complex movements—for example, lip movements needed for speech. The right hemisphere has only limited verbal abilities, and yet it excels in perceptual skills. The right hemisphere is able to recognize objects, including faces, patterns, and melodies, and it can put a puzzle together or draw a picture. This is why W. J. could pick out the image when he saw it on the left, but not the right, visual field.
Although Gazzaniga’s research demonstrated that the brain is in fact lateralized, such that the two hemispheres specialize in different activities, this does not mean that when people behave in a certain way or perform a certain activity they are using only one hemisphere of their brains at a time. That would be drastically oversimplifying the concept of brain differences. We normally use both hemispheres at the same time, and the difference between the abilities of the two hemispheres is not absolute. [11]
One problem in understanding the brain is that it is difficult to get a good picture of what is going on inside it. But a variety of empirical methods allow scientists to look at brains in action, and the means by which to study the brain have improved dramatically in recent years with the development of new neuroimaging techniques. In this section, we consider the various techniques that psychologists use to learn about the brain. Each technique has some advantages, and when we put them together, we begin to get a relatively good picture of how the brain functions and which brain structures control which activities.
Perhaps the most immediate approach to visualizing and understanding the structure of the brain is to directly analyze the brains of human cadavers. When Albert Einstein died in 1955, his brain was removed and stored for later analysis. Researcher Marian Diamond [1] later analyzed a section of Einstein’s cortex to investigate its characteristics. Diamond was interested in the role of glia, and she hypothesized that the ratio of glial cells to neurons was an important determinant of intelligence. To test this hypothesis, she compared the ratio of glia to neurons in Einstein’s brain with the ratio in the preserved brains of 11 more “ordinary” men. However, Diamond was able to find support for only part of her research hypothesis. Although she found that Einstein’s brain had relatively more glia in all the areas she studied than did the control group, the difference was statistically significant in only one of the areas she tested. Diamond admits a limitation in her study is that she had only one Einstein to compare with 11 ordinary men.
An advantage of the cadaver approach is that the brains can be fully studied, but an obvious disadvantage is that the brains are no longer active. In other cases, however, we can study living brains. The brains of living human beings may be damaged, for instance, as a result of strokes, falls, automobile accidents, gunshots, or tumors. These areas of damage are called lesions. In rare circumstances, brain lesions may be created intentionally through surgery, for example, to remove brain tumors or (as in split-brain patients) to reduce the effects of epilepsy. Psychologists also sometimes intentionally create lesions in animals to study the effects on their behavior. In so doing, they hope to be able to draw inferences about the likely functions of human brains from the effects of the lesions in animals.
Lesions allow the scientist to observe any loss of brain function that may occur. For instance, when an individual suffers a stroke, a blood clot deprives part of the brain of oxygen, killing the neurons in the area and rendering that area unable to process information. In some cases, the result of the stroke is a specific lack of ability. For instance, if the stroke influences the occipital lobe, then vision may suffer, and if the stroke influences the areas associated with language or speech, these functions will suffer. In fact, our earliest understanding of the specific areas involved in speech and language were gained by studying patients who had experienced strokes.
It is now known that a good part of our social decision-making abilities are located in the frontal lobe, and at least some of this understanding comes from lesion studies. For instance, consider the well-known case of Phineas Gage, a 25-year-old railroad worker who, as a result of an explosion in 1848, had an iron rod driven into his right cheek and out through the top of his skull, causing major damage to his frontal lobe. [2] Remarkably, Gage was able to return to work after the wounds healed, but he no longer seemed to be the same person to those who knew him. The amiable, soft-spoken Gage had become irritable, rude, irresponsible, and dishonest. Although there are questions about the interpretation of this case study, [3] it did provide early evidence that the frontal lobe is involved in personality, emotion, inhibitory control, and goal-setting abilities.
Lesion studies can be complemented by a variety of neuroimaging methods that record electrical activity in the brain, visualize blood flow and areas of brain activity in real time, provide cross-sectional images, and even produce computer-generated three-dimensional composites of the brain.
The single-unit recording method, in which a thin microelectrode is surgically inserted in or near an individual neuron, is used primarily with animals. The microelectrode records electrical responses or activity of the specific neuron. Research using this method has found, for instance, that specific neurons, known as feature detectors, in the visual cortex detect movement, lines, edges, and even faces. [1]
A less invasive electrical method that is used on humans is called the electroencephalograph (EEG). The EEG is an instrument that records the electrical activity produced by the brain’s neurons through the use of electrodes placed on the surface of the research participant’s head. An EEG can show if a person is asleep, awake, or anesthetized because the brain wave patterns are known to differ during each state. EEGs can also track the waves that are produced when a person is reading, writing, and speaking and are useful for understanding brain abnormalities, such as epilepsy. A particular advantage of EEG is that the participant can move around while the recordings are being taken, which is useful when measuring brain activity in children who often have difficulty keeping still. Furthermore, by following electrical impulses across the surface of the brain, researchers can observe changes over very short time periods (microseconds).
Although the EEG can provide information about the general patterns of electrical activity within the brain, and although the EEG allows the researcher to see these changes quickly as they occur in real time, the electrodes must be placed on the surface of the skull, and each electrode measures brain waves from large areas of the brain. As a result, EEGs do not provide a very clear picture of the structure of the brain.
But other methods exist to provide more specific brain images. The positron emission tomography (PET) scan is an invasive imaging technique that provides color-coded images of brain activity by tracking a radioactively tagged compound, such as glucose, oxygen, or a drug, that has been injected into a person’s bloodstream. The person lies in a PET scanner and performs a mental task, such as recalling a list of words or solving an arithmetic problem, while the scanner tracks where the tagged compound accumulates, revealing the metabolic changes that occur in different brain regions as they are activated by the task. A computer analyzes the data, producing color-coded images of the brain’s activity. A PET scan can determine levels of activity when a person is given a task that requires hearing, seeing, speaking, or thinking.
Functional magnetic resonance imaging (fMRI) is a type of brain scan that uses a magnetic field to create images of brain activity in each brain area. The patient lies on a bed in a large cylindrical structure containing a very strong magnet. Neurons that are firing use more oxygen than neurons that are not firing, and the need for oxygen increases blood flow to the area. The fMRI detects the amount of blood flow in each brain region and thus is an indicator of neural activity.
Very clear and detailed pictures of brain structures can be produced via fMRI. Often, the images take the form of cross-sectional “slices” that are obtained as the magnetic field is passed across the brain. The images of these slices are taken repeatedly and are superimposed on images of the brain structure itself to show how activity changes in different brain structures over time. When the research participant is asked to engage in tasks (e.g., playing a game with another person), the images can show which parts of the brain are associated with which types of tasks. Another advantage of fMRI is that it is noninvasive. The research participant simply enters the machine and the scans begin.
Although the scanners are expensive, the advantages of fMRI are substantial, and the machines are now available in many university and hospital settings. fMRI is now the most commonly used method of learning about brain structure and function.
A new approach that is being implemented more frequently to understand brain function, transcranial magnetic stimulation (TMS), may turn out to be the most useful of all. TMS is a procedure in which magnetic pulses are applied to the brain of living persons with the goal of temporarily and safely deactivating a small brain region. In TMS studies, the research participant is first scanned in an fMRI machine to determine the exact location of the brain area to be tested. Then the magnetic stimulation is applied to the brain before or while the participant works on a cognitive task, and the effects of the stimulation on performance are assessed. If the participant’s ability to perform the task is influenced by the presence of the stimulation, then the researchers can conclude that this particular area of the brain is important to carrying out the task.
The primary advantage of TMS is that it allows the researcher to draw causal conclusions about the influence of brain structures on thoughts, feelings, and behaviors. When the TMS pulses are applied, the brain region becomes less active, and this deactivation is expected to influence the research participant’s responses. Current research has used TMS to study the brain areas responsible for emotion and cognition and their roles in how people perceive intention and approach moral reasoning. [1] [2] [3] TMS is also used as a treatment for a variety of conditions, including migraine, Parkinson disease, and major depressive disorder.
Imagine that you are a brain scientist. In the following scenarios, select the best method to learn more about the person’s brain and the presenting problem.
Recall what you learned about the amygdala in the previous module. The amygdala is a small, almond-shaped group of nuclei found at the base of the temporal lobe. Research has shown that the amygdala performs a primary role in the processing and memory of emotional reactions.
Using an fMRI machine, researchers assessed differences in the amygdala activity of participants when they viewed either expressions on a human face (figure on the left) or geometric shapes (figure on the right). While each participant was in the fMRI machine, he or she was told to identify either the two identical faces or the two identical shapes within a trio. This simple cognitive task was used to keep the participants’ gaze and attention on either the faces or the shapes so that the fMRI machine could record the activity of the amygdala in both conditions (faces or shapes). Then the activity level of the amygdala (its activation or lack of activation) was compared between the two conditions.
Study the fMRI image above. Using what you have learned about the amygdala and how the fMRI records activation in the brain, answer the following two questions.
Neuroimaging techniques have important implications for understanding human behavior, including people's responses to others. Naomi Eisenberger and her colleagues [4] tested the hypothesis that people who were excluded by others would report emotional distress and that images of their brains would show they experienced pain in the same part of the brain where physical pain is normally experienced. The experiment involved 13 participants. Each was placed into an fMRI brain imaging machine and told that he or she would be playing a computerized ball-tossing game (Cyberball) with two other players who were also in fMRI machines (the two opponents did not actually exist, and their responses were controlled by the computer).
Each participant was measured under three different conditions. In the first part of the experiment, the participants were told that due to technical difficulties, the link to the other two scanners could not yet be made, and until the problem was fixed, they could not engage in, but only watch, the game play. This allowed the researchers to take a baseline fMRI reading. Then, during a second inclusion scan, the participants played the game, supposedly with two other players. During this time, the other players threw the ball to the participants. In the third, exclusion, scan, however, the participants initially received seven throws from the other two players but were then excluded from the game because the two players stopped throwing the ball to the participants for the remainder of the scan (45 throws).
The results of the analyses showed that activity in two areas of the frontal lobe was significantly greater during the exclusion scan than during the inclusion scan. Because these brain regions are known from prior research to be active for individuals who are experiencing physical pain, the results suggest that the physiological brain responses associated with being socially excluded by others are similar to brain responses experienced upon physical injury.
Further research [5] [6] has documented that people react to being excluded in a variety of situations with a variety of emotions and behaviors. People who feel they are excluded, and even those who observe other people being excluded, not only experience pain but feel worse about themselves and their relationships with people more generally, and they may work harder to try to restore their connections with others.
Now that we have considered how individual neurons operate and the roles of the different brain areas, it is time to ask how the body manages to “put it all together.” How do the complex activities in the various parts of the brain, the simple all-or-nothing firings of billions of interconnected neurons, and the various chemical systems within the body, work together to allow the body to respond to the social environment and engage in everyday behaviors? In this section, we will see that the complexities of human behavior are accomplished through the joint actions of electrical and chemical processes in the nervous system and the endocrine system.
The nervous system, the electrical information highway of the body, is made up of nerves—bundles of interconnected neurons that fire in synchrony to carry messages. The central nervous system (CNS), made up of the brain and spinal cord, is the major controller of the body’s functions: it interprets information coming in from the senses, formulates an appropriate reaction, and sends responses to the appropriate systems. Everything we see, hear, smell, touch, and taste is conveyed to us from our sensory organs as neural impulses, and each of the commands that the brain sends to the body, both consciously and unconsciously, travels through this system as well.
Nerves are differentiated according to their function. A sensory neuron carries information from the sensory receptors, whereas a motor neuron transmits information to the muscles and glands. An interneuron, by far the most common type of neuron, is located primarily within the CNS and relays information between other neurons. Interneurons allow the brain to combine the multiple sources of available information to create a coherent picture of the sensory information being conveyed.
The spinal cord is the long, thin, tubular bundle of nerves and supporting cells that extends down from the brain. It is the central pathway of information for the body. Within the spinal cord, ascending tracts of sensory neurons relay sensory information from the sense organs to the brain while descending tracts of motor neurons relay motor commands back to the body. When a quicker-than-usual response is required, the spinal cord can do its own processing, bypassing the brain altogether. A reflex is an involuntary and nearly instantaneous movement in response to a stimulus. Reflexes are triggered when sensory information is powerful enough to reach a given threshold and the interneurons in the spinal cord act to send a message back through the motor neurons without relaying the information to the brain, as shown in the following figure. When you touch a hot stove and immediately pull your hand back, or when you fumble your cell phone and instinctively reach to catch it before it falls, reflexes in your spinal cord order the appropriate responses before your brain even knows what is happening.
If the central nervous system is the command center of the body, the peripheral nervous system (PNS) represents the front line. The PNS links the CNS to the body’s sense receptors, muscles, and glands. As you can see in the following figure, the PNS is divided into two subsystems, one controlling internal responses and one controlling external responses.
The autonomic nervous system (ANS) is the division of the PNS that governs the internal activities of the human body, including heart rate, breathing, digestion, salivation, perspiration, urination, and sexual arousal. Many of the actions of the ANS, such as heart rate and digestion, are automatic and out of our conscious control, but others, such as breathing and sexual activity, can be controlled and influenced by conscious processes.
The somatic nervous system (SNS) is the division of the PNS that controls the external aspects of the body, including the skeletal muscles, skin, and sense organs. The somatic nervous system consists primarily of motor nerves responsible for sending brain signals for muscle contraction.
The autonomic nervous system itself can be further subdivided into the sympathetic and parasympathetic systems (see figure below). The sympathetic division of the ANS is involved in preparing the body for rapid action in response to stress from threats or emergencies by activating the organs and glands in the endocrine system. When the sympathetic nervous system recognizes danger or a threat, the heart beats faster, breathing accelerates, and lungs and bronchial tubes expand. These physiological responses increase the amount of oxygen to the brain and muscles to prepare your body for defense. In other sympathetic nervous system responses, your pupils dilate to increase your field of vision, salivation stops and your mouth becomes dry, digestion stops in your stomach and intestines, and you begin to sweat due to your body’s use of more energy and heat. These bodily changes collectively represent the fight-or-flight response, which prepares you to either fight or flee from a perceived danger.
The parasympathetic division of the ANS tends to calm the body by slowing the heart and breathing and by allowing the body to recover from the activities that the sympathetic system causes. The parasympathetic nervous system acts more slowly than the sympathetic nervous system as it calms the activated organs and glands of the endocrine system, eventually returning your body to a normal state, called homeostasis.
Our everyday activities are also controlled by the interaction between the sympathetic and parasympathetic nervous systems. For example, when we get out of bed in the morning, we would experience a sharp drop in blood pressure if it were not for the action of the sympathetic system, which automatically increases blood flow through the body. Similarly, after we eat a big meal, the parasympathetic system automatically sends more blood to the stomach and intestines, allowing us to efficiently digest the food. And perhaps you’ve had the experience of not being at all hungry before a stressful event, such as a sports game or an exam (when the sympathetic division was primarily in action), but suddenly finding yourself starved afterward, as the parasympathetic system takes over. The two systems work together to maintain vital bodily functions, resulting in homeostasis, the natural balance in the body’s systems.
As you have seen, the nervous system is divided structurally into the central nervous system and the peripheral nervous system. The PNS is further divided into subdivisions, each having a particular function in the nervous system to help regulate the body. In the following activity, you will learn the function of each of the nervous system divisions by matching a specific descriptive function with each structure.
Instructions: Read the two scenarios below and answer the questions for each scenario.
Scenario 1: Susan, a college freshman, is taking college algebra. She never liked math and fears she will probably not do well in this first math course. She stays up all night studying for the first exam, and the next morning, she enters the classroom to take the test. As she sits down and takes out her pencils, she feels nervous; she begins to sweat, her stomach is upset, and her heart begins to race.
Scenario 2: As the exam is passed out, Susan takes several deep breaths and closes her eyes. She visualizes herself confidently taking the exam and focuses on her breathing and heart rate. She feels her heart and breathing slow down, and she feels calm and able to focus on answering the questions on the exam.
The nervous system is designed to protect us from danger through its interpretation of and reactions to stimuli. But a primary function of the sympathetic and parasympathetic nervous systems is to interact with the endocrine system, which secretes chemical messengers called hormones that influence our emotions and behaviors.
The endocrine system is made up of glands, which are groups of cells that secrete hormones into the bloodstream. When the hormones released by a gland arrive at receptor tissues or other glands, these receptors may trigger the release of still other hormones, resulting in a complex chemical chain reaction. The endocrine system works together with the nervous system to influence many aspects of human behavior, including growth, reproduction, and metabolism. The endocrine system also plays a vital role in emotions. Because men and women have different sex glands (the testes and the ovaries, respectively), the hormones these glands secrete help explain some of the observed behavioral differences between men and women. The major glands in the endocrine system are shown in the figure above.
The secretion of hormones is regulated by the hypothalamus of the brain. The hypothalamus is the main link between the nervous system and the endocrine system and directs the release of hormones by its interactions with the pituitary gland, which is next to and highly interconnected with the hypothalamus. Review the module "Neurons: The Building Blocks of the Nervous System" for more information on the hypothalamus. The pituitary gland, a pea-sized gland, is responsible for controlling the body’s growth, but it also has many other influences that make it of primary importance to regulating behavior. The pituitary secretes hormones that influence our responses to pain as well as hormones that signal the ovaries and testes to make sex hormones. The pituitary gland also controls ovulation and the menstrual cycle in women. Because the pituitary has such an important influence on other glands, it is sometimes known as the “master gland.”
Other glands in the endocrine system include the pancreas, which secretes hormones designed to keep the body supplied with fuel to produce and maintain stores of energy; and the pineal gland, located in the middle of the brain, which secretes melatonin, a hormone that helps regulate the sleep-wake cycle.
The body has two triangular adrenal glands, one on top of each kidney. The adrenal glands produce hormones that regulate salt and water balance in the body, and they are involved in metabolism, the immune system, and sexual development and function. The most important function of the adrenal glands is to secrete the hormones epinephrine (also known as adrenaline) and norepinephrine (also known as noradrenaline) when we are excited, threatened, or stressed. Epinephrine and norepinephrine stimulate the sympathetic division of the autonomic nervous system, causing increased heart and lung activity, dilation of the pupils, and increases in blood sugar, which give the body a surge of energy to respond to a threat. The activity and role of the adrenal glands in response to stress provide an excellent example of the close relationship and interdependency of the nervous and endocrine systems. A quick-acting nervous system is essential for immediate activation of the adrenal glands, while the endocrine system mobilizes the body for action.
At this point, you can begin to see the important role the hormones play in behavior. But the hormones we reviewed in this section represent only a subset of the many influences that hormones have on our behaviors. In the upcoming units, we consider the important roles that hormones play in many other behaviors, including sleeping, sexual activity, and helping and harming others.
Instructions: In the following vignette, you will apply what you have learned about how the electrical components of the nervous system and the chemical components of the endocrine system work together to influence our behavior. Read the vignette and choose the best answers to complete the sentences describing the correct interaction of the nervous system and endocrine system.
Larry and Claire are hiking on a trail in the Rocky Mountains. As they walk, the trail becomes less distinguishable and is overgrown with brush. Suddenly, a man holding an axe jumps in front of them. This scares both of them; their hearts begin to pump faster and their breathing increases. They begin running in the opposite direction to get away from the man.
As Larry and Claire begin to run, they hear the man calling them. He yells, “Wait! I didn’t mean to scare you. I am a forest ranger, trying to clear part of this trail. Please don’t run away.” Larry and Claire stop running and turn around to look at the man. They notice that he is dressed in a typical forest ranger uniform and see his identification badge. Not feeling threatened any longer, both Larry and Claire begin to feel “calmed down” and walk back toward the forest ranger to resume their hike on the trail.
On September 6, 2007, the Asia-Pacific Economic Cooperation (APEC) leaders’ summit was being held in downtown Sydney, Australia. World leaders, including then U.S. president, George W. Bush, were attending the summit. Many roads in the area were closed for security reasons, and police presence was high.
As a prank, eight members of the Australian television satire The Chaser’s War on Everything assembled a false motorcade made up of two black four-wheel-drive vehicles, a black sedan, two motorcycles, body guards, and chauffeurs (see the video below). Group member Chas Licciardello was in one of the cars disguised as Osama bin Laden. The motorcade drove through Sydney’s central business district and entered the security zone of the meeting. The motorcade was waved on by police, through two checkpoints, until the Chaser group decided it had taken the gag far enough and stopped outside the InterContinental Hotel where President Bush was staying. Licciardello stepped out onto the street and complained, in character as bin Laden, about not being invited to the APEC Summit. Only at this time did the police belatedly check the identity of the group members, finally arresting them.
Afterward, the group testified that it had made little effort to disguise the stunt as anything other than a prank. The group’s only realistic attempt to fool police was its Canadian flag–marked vehicles. Other than that, the group used obviously fake credentials, and its security passes were printed with “JOKE,” “Insecurity,” and “It’s pretty obvious this isn’t a real pass,” all clearly visible to any police officer who bothered to look closely as the motorcade passed. The required APEC 2007 Official Vehicle stickers had the name of the group’s show printed on them, and this text: “This dude likes trees and poetry and certain types of carnivorous plants excite him.” In addition, a few of the “bodyguards” were carrying camcorders, and one of the motorcyclists was dressed in jeans, both details that should have alerted police that something was amiss.
The Chaser pranksters later explained the primary reason for the stunt. They wanted to make a statement about the fact that bin Laden, a world leader, had not been invited to an APEC Summit where issues of terror were being discussed. The secondary motive was to test the event’s security. The show’s lawyers approved the stunt, under the assumption that the motorcade would be stopped at the APEC meeting.
The senses provide our brains with information about the outside world and about our own internal world. Even single-celled organisms have ways to detect facts about their environment and they typically have the ability to use this information either to find nutrients or to avoid danger. For more complex organisms, certainly for humans, many sources of information about the external and internal world are necessary to allow us to survive and thrive. The systems we have throughout our bodies that allow us to detect information and transform energy into neural impulses are called the senses or sensory systems.
Detection of food or danger is generally not enough to permit an organism to respond effectively for survival. The world is full of complex stimuli that must be responded to in different ways. Organisms generally use both genetically transmitted knowledge and knowledge derived from experience to organize and interpret incoming sensory information. This process of organization and interpretation is what we refer to as perception.
In this unit we discuss the strengths and limitations of these capacities, focusing on both sensation—awareness resulting from the stimulation of a sense organ, and perception—the organization and interpretation of sensations. Sensation and perception work seamlessly together to allow us to experience the world through our eyes, ears, nose, tongue, and skin, but also to combine what we are currently learning from the environment with what we already know about it to make judgments and to choose appropriate behaviors.
The study of sensation and perception is exceedingly important for our everyday lives because the knowledge generated by psychologists is used in so many ways to help so many people. Psychologists work closely with mechanical and electrical engineers, with experts in defense and military contracting, and with clinical, health, and sports psychologists, helping them apply this knowledge in their everyday practices. The research is used to help us understand and better prepare people to cope with such diverse events as driving cars, flying planes, creating robots, and managing pain. [1]
We begin the unit with a focus on the six senses of seeing, hearing, smelling, touching, tasting, and monitoring the body’s positions, also called proprioception. We will see that sensation is sometimes relatively direct, in the sense that the wide variety of stimuli around us inform and guide our behaviors quickly and accurately, but nevertheless is always the result of at least some interpretation. We do not directly experience stimuli, but rather we experience those stimuli as they are created by our senses. Each sense accomplishes the basic process of transduction—the conversion of stimuli detected by receptor cells to electrical impulses that are then transported to the brain—in different but related ways.
Each of your sense organs is a specialized system for detecting energy in the external environment and initiating neural messages—action potentials—to send information to the brain about the strength and other characteristics of the detected stimulus. For example, the eyes detect photons (individual units of light), and photosensitive (light-sensitive) cells in the back of the eye react to the photons by sending an action potential down a series of neurons all the way to the occipital cortex in the back of the brain. Each of the senses has a specific place in the brain where information from that particular sense is processed. Very often, the information from these sense-specific brain areas is then sent to other parts of the brain for further analysis and integration with information from the other senses. The result is your experience of a rich and constantly changing multisensory world, full of sights, sounds, smells, tastes, and textures.
After we have reviewed the basic processes of sensation, we will turn to the topic of perception, focusing on how the brain’s processing of sensory experience allows us to process and organize our experience of the outside world. However, the perceptual system does more than pass on information to the brain. Perception involves interpretation and even distortion of the sensory input. Perceptual illusions, discussed at the end of the unit, allow scientists to explore the various ways that the brain goes beyond the information that it receives.
Odd as it may seem, there is disagreement about the exact number of senses that we have. No one questions the fact that seeing uses the visual sensory system and hearing uses the auditory sensory system. There is some disagreement, however, about how to categorize the skin senses, which detect pressure and heat and pain, and the body senses, which tell our brains about body position. For our purposes, we discuss these senses:
Transduction is the process of turning energy detected around us into nerve impulses. Remember from the brain unit that a nerve impulse is called an action potential, so the result of transduction is always an action potential along a nerve going to the brain. Even though some action potentials start in the retina of the eye and other action potentials start in the cochlea in the inner ear, all action potentials are the same. At the neural level, there is no difference between an action potential coming from the eye or the ear or any other sensory system. What makes sensory experiences different from one another is not the sense organ or the action potential coming from a sense organ. Sensory experiences differ based on which brain area interprets the incoming message.
In the list below, for each of the senses, pick the type of signal that goes along the nerve from the sensory receptor on the body to the brain.
Each of our senses is specialized to detect a certain kind of energy and then to send a message to the brain in the form of action potentials in nerves that run from the sense receptor to specific parts of the brain. Let’s consider what kind of energy or information each sense receptor picks up:
Now let’s do an exercise to see if this all makes sense (no pun intended!).
There are three major types of transduction:
There is no difference between an action potential coming from one sense (e.g., the eye) and an action potential coming from a different sense (e.g., the ear). The way your brain knows if it is processing visual information or sound is by the location that receives the signal. If the action potential ends in the occipital lobe, your brain experiences it as visual information. If the action potential ends in the temporal lobe, then the brain interprets it as sound information.
In Unit 4, Brains, Bodies, and Behavior, you learned that different parts of the brain serve different functions. Let’s see where in the brain each of the senses sends its messages. First, all of the senses—except the sense of smell—send action potentials to the THALAMUS, in the middle of the brain, deep under the cortex. Then the different senses go to different parts of the brain.
For this exercise, put together the information you just learned about the pathway from the sensory system to the brain.
To explore the various senses further, go to this website of the BBC (British Broadcasting Corporation) and click on each of the senses listed.
For each sensory system, determine
Humans possess powerful sensory capacities that allow us to sense the kaleidoscope of sights, sounds, smells, and tastes that surround us. Our eyes detect light energy and our ears pick up sound waves. Our skin senses touch, pressure, hot, and cold. Our tongues react to the molecules of the foods we eat, and our noses detect scents in the air. The human perceptual system is wired for accuracy, and people are exceedingly good at making use of the wide variety of information available to them. [1]
In many ways our senses are quite remarkable. The human eye can detect the equivalent of a single candle flame burning 30 miles away and can distinguish among more than 300,000 different colors. The human ear can detect sounds as low as 20 hertz (vibrations per second) and as high as 20,000 hertz, and it can hear the tick of a clock about 20 feet away in a quiet room. We can taste a teaspoon of sugar dissolved in 2 gallons of water, and we are able to smell one drop of perfume diffused in a three-room apartment. We can feel the wing of a bee falling on our cheek from a height of 1 centimeter. [2]
Although there is much that we do sense, there is even more that we do not. Dogs, bats, whales, and some rodents all have much better hearing than we do, and many animals have a far richer sense of smell. Birds are able to see the ultraviolet light that we cannot (see the figure below) and can also sense the pull of the earth’s magnetic field. Cats have an extremely sensitive and sophisticated sense of touch, and they are able to navigate in complete darkness using their whiskers. The fact that different organisms have different sensations is part of their evolutionary adaptation. Each species is adapted to sense the things that are most important to it, while being blissfully unaware of the things that don’t matter.
Psychophysics is the branch of psychology that studies the effects of physical stimuli on sensory perceptions and mental states. The field of psychophysics was founded by the German psychologist Gustav Fechner (1801–1887), who was the first to study the relationship between the strength of a stimulus and a person’s ability to detect the stimulus.
The measurement techniques developed by Fechner and his colleagues are designed in part to help determine the limits of human sensation. One important criterion is the ability to detect very faint stimuli. The absolute threshold of a sensation is the intensity of a stimulus that allows an organism to just barely detect it.
In a typical psychophysics experiment, an individual is presented with a series of trials in which a signal is sometimes presented and sometimes not, or in which two stimuli are presented that are either the same or different. Imagine, for instance, that you were asked to take a hearing test. On each of the trials, your task is to indicate either yes if you heard a sound or no if you did not. The signals are purposefully made to be very faint, making accurate judgments difficult.
The problem for you is that the very faint signals create uncertainty. Because our ears are constantly sending background information to the brain, you will sometimes think that you heard a sound when no sound was made, and you will sometimes fail to detect a sound that was made. You must determine whether the neural activity that you are experiencing is due to the background noise alone or is a result of a signal within the noise.
The responses you give on the hearing test can be analyzed using signal detection analysis. Signal detection analysis is a technique used to determine the ability of the perceiver to separate true signals from background noise. [1] [2] As you can see in the figure below, each judgment trial creates four possible outcomes: A hit occurs when you, as the listener, correctly say yes when there was a sound. A false alarm occurs when you say yes although no signal was presented. In the other two cases, you respond no—either a miss (saying no when there was a signal) or a correct rejection (saying no when there was in fact no signal).
The analysis of the data from a psychophysics experiment creates two measures. One measure, known as sensitivity, refers to the true ability of the individual to detect the presence or absence of signals. People who have better hearing will have higher sensitivity than will those with poorer hearing. The other measure, response bias, refers to a behavioral tendency to respond “yes” to the trials, which is independent of sensitivity.
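These two measures can be made concrete with a little arithmetic. In the standard equal-variance Gaussian model of signal detection, sensitivity (usually written d' in the research literature) is the difference between the z-scores of the hit rate and the false-alarm rate, and response bias (the criterion) is the negative of their average. The short Python sketch below illustrates the computation; the trial counts are invented for illustration, and real analyses apply a small correction whenever a rate is exactly 0 or 1 (the z-score would otherwise be infinite).

from statistics import NormalDist

def signal_detection(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity (d') and response bias (criterion) from trial
    counts, using the standard equal-variance Gaussian model."""
    z = NormalDist().inv_cdf  # converts a proportion into a z-score
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)             # larger = keener hearing
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # negative = a "yes"-prone listener
    return d_prime, criterion

# Hypothetical listener: 45 hits, 5 misses, 15 false alarms, 35 correct rejections
d, c = signal_detection(45, 5, 15, 35)
print(f"d' = {d:.2f}, criterion = {c:.2f}")  # d' = 1.81, criterion = -0.38

A listener with sharper hearing would show a larger d'; the soldier on guard duty described below would show a strongly negative criterion.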
Imagine for instance that rather than taking a hearing test, you are a soldier on guard duty, and your job is to detect the very faint sound of the breaking of a branch that indicates that an enemy is nearby. You can see that in this case making a false alarm by alerting the other soldiers to the sound might not be as costly as a miss (a failure to report the sound), which could be deadly. Therefore, you might well adopt a very lenient response bias in which, whenever you are at all unsure, you send a warning signal. In this case, your responses may not be very accurate (you will sound many false alarms), and yet the extreme response bias can save lives.
Another application of signal detection occurs when medical technicians study body images for the presence of cancerous tumors. Again, a miss (in which the technician incorrectly determines that there is no tumor) can be very costly, but false alarms (referring patients who do not have tumors to further testing) also have costs. The ultimate decisions that the technicians make are based on the quality of the signal (clarity of the image), their experience and training (the ability to recognize certain shapes and textures of tumors), and their best guesses about the relative costs of misses versus false alarms.
Signal detection analysis is often used to study ABSOLUTE THRESHOLD—the minimum intensity at which a sensory system can detect a stimulus. In this demonstration, we use signal detection analysis, but not with absolute threshold, because absolute threshold is difficult to demonstrate under uncontrolled conditions. Instead, you will simply search for a target within a field of distracting objects that make it hard to find.
You will see a set of blue crosses on a white background. The stimulus will be present for only 1 second and then it disappears. You must decide if there is a single blue L among the crosses. For example, here is a screen you might see for 1 second:
In this case, there is an L-shape in the figure, so you would click on the YES button. In case you don’t see it, here is where it is:
On other trials, there will be no L-shape, so you click on the NO button. Here is an example of a screen with no L-shape:
This demonstration takes about a minute—there are 12 trials in total. After each trial, you will learn if you were correct or incorrect in your decision. Then you will see your results in a signal detection report.
When you are ready to begin, click the Start button.
Although we have focused to this point on the absolute threshold, a second important criterion concerns the ability to assess differences between stimuli. The difference threshold (or just noticeable difference [JND]) refers to the change in a stimulus that can just barely be detected by the organism. The German physiologist Ernst Weber (1795–1878) made an important discovery about the JND: the ability to detect differences depends not so much on the size of the difference but on the size of the difference in relationship to the absolute size of the stimulus. Weber’s law maintains that the just noticeable difference of a stimulus is a constant proportion of the original intensity of the stimulus. As an example, if you have a cup of coffee that has only a very little bit of sugar in it (say, 1 teaspoon), adding another teaspoon of sugar will make a big difference in taste. But if you added that same teaspoon to a cup of coffee that already had 5 teaspoons of sugar in it, then you probably wouldn’t taste the difference as much (in fact, according to Weber’s law, you would have to add 5 more teaspoons to make the same difference in taste).
One interesting application of Weber’s law is in our everyday shopping behavior. Our tendency to perceive cost differences between products is dependent not only on the amount of money we will spend or save but also on the amount of money saved relative to the price of the purchase. I would venture to say that if you were about to buy a soda or candy bar in a convenience store and the price of the items ranged from $1 to $3, you would think that the $3 item cost a lot more than the $1 item. But now imagine that you were comparing two music systems, one that cost $397 and one that cost $399. Probably you would think that the cost of the two systems was about the same even though buying the cheaper one would still save you $2.
Weber’s law states that our ability to detect the difference between two stimuli is proportional to the magnitude of the stimuli. This may sound difficult, but consider this example. Imagine that you have a 1-pound weight in one hand. I put a 2-pound weight in your other hand. Do you think you could tell the difference? Probably so. These weights are light (low magnitude) so a difference of 1 pound is very easily detected. Now I put a 50-pound weight in one hand and a 51-pound weight in the other hand. Now do you think you could tell the difference? Probably not. When the weight is heavy (high magnitude), the 1-pound difference is not so easily detected.
Weber’s law focuses on one of the oldest variables in psychology, the JND. These letters stand for just noticeable difference, which is the smallest difference between two stimuli that you can reliably detect. Using this term, Weber’s law says that the size of the JND will increase as the magnitude of the stimulus increases. In the weight example, the JND when you have a 50-pound weight in your hand is much greater (2 pounds? 5 pounds? 10 pounds?) than when you have a 1-pound weight in your hand (1 pound? ½ pound? ¼ pound?).
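Weber’s law can be written as a one-line formula: JND = k × I, where I is the intensity of the original stimulus and k is the Weber fraction for that sense. Here is a minimal Python sketch of the weight example; the value k = 0.05 is an assumed, purely illustrative fraction (real Weber fractions vary by sense and by study).

def jnd(intensity, weber_fraction):
    """Weber's law: the just noticeable difference (JND) is a constant
    proportion (the Weber fraction) of the original stimulus intensity."""
    return weber_fraction * intensity

K_WEIGHT = 0.05  # assumed Weber fraction for lifted weight, for illustration only

for baseline in (1, 50):
    print(f"{baseline}-lb weight: JND is about {jnd(baseline, K_WEIGHT):.2f} lb")
# 1-lb weight:  JND is about 0.05 lb, so an added pound is easily noticed
# 50-lb weight: JND is about 2.50 lb, so an added pound goes undetected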
In this activity, we will try to determine your JND for some visual stimuli. For example, look at the two colored circles below. Your task is to decide if they are exactly the same or if they are different from one another. If they are the same, click the SAME button. If they are different, click the DIFFERENT button. As soon as you finish a judgment, you will see the next trial. You will see a total of 24 pairs to judge.
Whereas other animals rely primarily on hearing, smell, or touch to understand the world around them, human beings rely in large part on vision. A large part of our cerebral cortex is devoted to seeing, and we have substantial visual skills. Seeing begins when light falls on the eyes, initiating the process of transduction. Once this visual information reaches the visual cortex, it is processed by a variety of neurons that detect colors, shapes, and motion, and that create meaningful perceptions out of the incoming stimuli.
The air around us is filled with a sea of electromagnetic energy—pulses of energy waves that can carry information from place to place. As you can see in the figure below, electromagnetic waves vary in their wavelength—the distance between one wave peak and the next wave peak—with the shortest gamma waves measuring mere trillionths of a meter and the longest radio waves stretching for hundreds of kilometers. Humans are blind to almost all of this energy—our eyes detect only the range from about 400 to 700 billionths of a meter, the part of the electromagnetic spectrum known as the visible spectrum.
As you can see in the above figure, light enters the eye through the cornea, a clear covering that protects the eye and begins to focus the incoming light. The light then passes through the pupil, a small opening in the center of the eye. The pupil is surrounded by the iris, the colored part of the eye that controls the size of the pupil by constricting or dilating in response to light intensity. When we enter a dark movie theater on a sunny day, for instance, muscles in the iris open the pupil and allow more light to enter. Complete adaptation to the dark may take up to 20 minutes.
Behind the pupil is the lens, a structure that focuses the incoming light on the retina, the layer of tissue at the back of the eye that contains photoreceptor cells. As our eyes move from near objects to distant objects, a process known as accommodation occurs. Accommodation is the process of changing the curvature of the lens to keep the light entering the eye focused on the retina. Rays from the top of the image strike the bottom of the retina, and vice versa, and rays from the left side of the image strike the right part of the retina, and vice versa, causing the image on the retina to be upside down and backward. Furthermore, the image projected on the retina is flat, and yet our final perception of the image will be three dimensional.
Accommodation is not always perfect, and in some cases the light hitting the retina is a bit out of focus. As you can see in the figure below, when the focus is in front of the retina, we say that the person is nearsighted, and when the focus is behind the retina we say that the person is farsighted. Eyeglasses and contact lenses correct this problem by adding another lens in front of the eye. Laser eye surgery corrects the problem by reshaping the eye's cornea, while another type of surgery involves replacing the eye's own lens.
The retina contains layers of neurons specialized to respond to light (see the figure below). As light falls on the retina, it first activates receptor cells known as rods and cones. The activation of these cells then spreads to the bipolar cells and then to the ganglion cells, which gather together and converge, like the strands of a rope, forming the optic nerve. The optic nerve is a collection of millions of ganglion neurons that sends vast amounts of visual information, via the thalamus, to the brain. Because the retina and the optic nerve are active processors and analyzers of visual information, it is not inappropriate to think of these structures as an extension of the brain itself.
Rods are visual neurons that specialize in detecting black, white, and gray colors. There are about 120 million rods in each eye. The rods do not provide a lot of detail about the images we see, but because they are highly sensitive to shorter-waved (darker) and weak light, they help us see in dim light, for instance, at night. Because the rods are located primarily around the edges of the retina, they are particularly active in peripheral vision (when you need to see something at night, try looking away from what you want to see). Cones are visual neurons that are specialized in detecting fine detail and colors. The 5 million or so cones in each eye enable us to see in color, but they operate best in bright light. The cones are located primarily in and around the fovea, which is the central point of the retina.
To demonstrate the difference between rods and cones in attention to detail, choose a word in this text and focus on it. Do you notice that the words a few inches to the side seem more blurred? This is because the word you are focusing on strikes the detail-oriented cones, while the words surrounding it strike the less-detail-oriented rods, which are located on the periphery.
As you can see in the figure below, the sensory information received by the retina is relayed through the thalamus to corresponding areas in the visual cortex, which is located in the occipital lobe at the back of the brain. (Hint: You can remember that the occipital lobe processes vision because it starts with the letter O, which is round like an eye.) Although the principle of contralateral control might lead you to expect that the left eye would send information to the right brain hemisphere and vice versa, nature is smarter than that. In fact, the left and right eyes each send information to both the left and the right hemispheres, and the visual cortex processes each of the cues separately and in parallel. This is an adaptational advantage to an organism that loses sight in one eye, because even if only one eye is functional, both hemispheres will still receive input from it.
Trace the path of visual information through the visual pathway.
The visual cortex is made up of specialized neurons that turn the sensations they receive from the optic nerve into meaningful images. Because there are no photoreceptor cells at the place where the optic nerve leaves the retina, a hole or blind spot in our vision is created (see the figure below). When both of our eyes are open, we don’t experience a problem because our eyes are constantly moving, and one eye makes up for what the other eye misses. But the visual system is also designed to deal with this problem if only one eye is open—the visual cortex simply fills in the small hole in our vision with similar patterns from the surrounding areas, and we never notice the difference. The ability of the visual system to cope with the blind spot is another example of how sensation and perception work together to create meaningful experience.
You can get an idea of the extent of your blind spot (the place where the optic nerve leaves the retina) by trying this demonstration. Close your left eye and stare with your right eye at the cross in the diagram. You should be able to see the elephant image to the right (don’t look at it, just notice that it is there). If you can’t see the elephant, move closer or farther away until you can. Now slowly move so that you are closer to the image while you keep looking at the cross. At one distance (probably a foot or so), the elephant will completely disappear from view because its image has fallen on the blind spot.
Perception is created in part through the simultaneous action of thousands of feature detector neurons—specialized neurons, located in the visual cortex, that respond to the strength, angles, shapes, edges, and movements of a visual stimulus. [1] [2] The feature detectors work in parallel, each performing a specialized function. When faced with a red square, for instance, the parallel line feature detectors, the horizontal line feature detectors, and the red color feature detectors all become activated. This activation is then passed on to other parts of the visual cortex where other neurons compare the information supplied by the feature detectors with images stored in memory. Suddenly, in a flash of recognition, the many neurons fire together, creating the single image of the red square that we experience. [3]
Some feature detectors are tuned to selectively respond to particularly important objects, for instance, faces, smiles, and other parts of the body. [4] [5] When researchers disrupted face recognition areas of the cortex using the magnetic pulses of transcranial magnetic stimulation (TMS), people were temporarily unable to recognize faces, and yet they were still able to recognize houses. [6] [7]
It has been estimated that the human visual system can detect and discriminate among 7 million color variations, [1] but these variations are all created by the combinations of the three primary colors: red, green, and blue. The shade of a color, known as hue, is conveyed by the wavelength of the light that enters the eye (we see shorter wavelengths as more blue and longer wavelengths as more red), and we detect brightness from the intensity or height of the wave (bigger or more intense waves are perceived as brighter).
In his important research on color vision, Hermann von Helmholtz (1821–1894) theorized that color is perceived because the cones in the retina come in three types. One type of cone reacts primarily to blue light (short wavelengths), another reacts primarily to green light (medium wavelengths), and a third reacts primarily to red light (long wavelengths). The visual cortex then detects and compares the strength of the signals from each of the three types of cones, creating the experience of color. According to this Young-Helmholtz trichromatic color theory, what color we see depends on the mix of the signals from the three types of cones. If the brain is receiving primarily red and blue signals, for instance, it perceives purple; if it is receiving primarily red and green signals, it perceives yellow; and if it is receiving messages from all three types of cones, it perceives white.
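The comparison rule the theory describes can be sketched as a toy lookup: which combinations of strong cone signals yield which color experiences. The Python sketch below only illustrates the logic of the paragraph above; the 0-to-1 activation values, the threshold, and the labels are invented for illustration, not a model of real cone responses.

def perceived_color(red, green, blue, threshold=0.5):
    """Toy version of the Young-Helmholtz comparison rule: the perceived
    color depends on which cone signals are strong (inputs scaled 0..1)."""
    active = {name for name, level in
              (("red", red), ("green", green), ("blue", blue))
              if level >= threshold}
    if active == {"red", "green", "blue"}:
        return "white"
    if active == {"red", "blue"}:
        return "purple"
    if active == {"red", "green"}:
        return "yellow"
    return " and ".join(sorted(active)) or "black"

print(perceived_color(0.9, 0.1, 0.8))  # strong red + blue signals: "purple"
print(perceived_color(0.9, 0.9, 0.9))  # all three signals strong:  "white"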
The different functions of the three types of cones are apparent in people who experience colorblindness—the inability to detect green or red colors (or both). About 1 in 50 people, mostly men, lack functioning in the red- or green-sensitive cones, leaving them able to experience only one or two colors.
The trichromatic color theory cannot explain all of human vision, however. For one, although the color purple does appear to us as a mixing of red and blue, yellow does not appear to be a mix of red and green. And people with colorblindness, who cannot see either green or red, nevertheless can still see yellow. An alternative approach to the Young-Helmholtz theory, known as the opponent-process color theory, proposes that we analyze sensory information not in terms of three colors but rather in three sets of “opponent colors”: red-green, yellow-blue, and white-black. Evidence for the opponent-process theory comes from the fact that some neurons in the retina and in the visual cortex are excited by one color (e.g., red) but inhibited by another color (e.g., green) as shown in the following figure.
One example of opponent processing occurs in the experience of an afterimage. If you stare at the flag on the left side of the figure below for about 30 seconds (the longer you look, the better the effect), and then move your eyes to the blank area to the right of it, you will see the afterimage. When we stare at the green stripes, our green receptors habituate and begin to process less strongly, whereas the red receptors remain at full strength. When we switch our gaze, we see primarily the red part of the opponent process. Similar processes create blue after yellow and white after black.
Watch the video below on afterimages.
The trichromatic and opponent-process mechanisms work together to produce color vision. When light rays enter the eye, the red, blue, and green cones on the retina respond in different degrees and send signals of different strengths for red, blue, and green through the optic nerve. The color signals are then processed both by the ganglion cells and by the neurons in the visual cortex. [1]
One of the important processes required in vision is the perception of form. German psychologists in the 1930s and 1940s, including Max Wertheimer (1880–1943), Kurt Koffka (1886–1941), and Wolfgang Köhler (1887–1967), argued that we create forms out of their component sensations based on the idea of the gestalt, a meaningfully organized whole. The idea of the gestalt is that the “whole is more than the sum of its parts.” Some examples of how gestalt principles lead us to see more than what is actually there are summarized in the following table.
Face vase image by user:mikkalai (Public Domain). Geese in V-formation by gillesgonthier, courtesy of Animal Photos! (CC-BY-2.0).
Depth perception is the ability to perceive three-dimensional space and to accurately judge distance. Without depth perception, we would be unable to drive a car, thread a needle, or simply navigate our way around the supermarket. [1] Research has found that depth perception is in part based on innate capacities and in part learned through experience. [2]
Psychologists Eleanor Gibson and Richard Walk [3] tested the ability to perceive depth in 6- to 14-month-old infants by placing them on a visual cliff, an apparatus that gives the perception of a dangerous drop-off while allowing infants to be tested safely (see video below). The infants were placed on one side of the “cliff” while their mothers called to them from the other side. Gibson and Walk found that most infants either crawled away from the cliff or remained on the board and cried: they wanted to go to their mothers, but they perceived a chasm that they instinctively would not cross. Further research has found that even very young children who cannot yet crawl are fearful of heights. [4] On the other hand, studies have also found that infants improve their hand-eye coordination as they learn to better grasp objects and as they gain more experience in crawling, indicating that depth perception is also learned. [5]
Here is a video showing some of the babies in the original study. Notice how the mothers helped the experimenter to learn what the babies would and would not do.
Depth perception is the result of our use of depth cues, messages from our bodies and the external environment that supply us with information about space and distance. Binocular depth cues are depth cues created by retinal image disparity, which results from the space between our eyes, and thus require the coordination of both eyes. One outcome of retinal disparity is that the images projected on each eye are slightly different from each other. The visual cortex automatically merges the two images into one, enabling us to perceive depth. Three-dimensional movies make use of retinal disparity by using 3-D glasses that the viewer wears to create a different image on each eye. The perceptual system quickly, easily, and unconsciously turns the disparity into three dimensions.
An important binocular depth cue is convergence, the inward turning of our eyes that is required to focus on objects that are less than about 50 feet away from us. The visual cortex uses the size of the convergence angle between the eyes to judge the object’s distance. You will be able to feel your eyes converging if you slowly bring a finger closer to your nose while continuing to focus on it. When you close one eye, you no longer feel the tension—convergence is a binocular depth cue that requires both eyes to work.
The visual system also uses accommodation to help determine depth. As the lens changes its curvature to focus on distant or close objects, information relayed from the muscles attached to the lens helps us determine an object’s distance. Accommodation is only effective at short viewing distances, however, so while it comes in handy when threading a needle or tying shoelaces, it is far less effective when driving or playing sports.
Although the best cues to depth occur when both eyes work together, we are able to see depth even with one eye closed. Monocular depth cues are depth cues that help us perceive depth using only one eye. [6] Some of the most important are summarized in the following table.
Photos in this activity courtesy of GlacierNPS, Alicia Nijdam, KlipschFan, scillystuff, rhondawebber (CC-BY-2.0).
Creative artists have taken advantage of the cues that the brain uses to perceive motion, starting in the early days of motion pictures and continuing to the present with modern computerized visual effects. The general phenomenon is called apparent motion. One example of apparent motion can be seen if two bright circles, one on the left of the screen and the other on the right of the screen, are flashed on and off in quick succession. At the right speed, your brain creates a blur that seems to move back and forth between the two circles. This is called the phi phenomenon. A similar but distinct phenomenon, known as the beta effect, occurs if a series of circles is flashed on and off in sequence more slowly than in the phi phenomenon. Each circle appears to move from one location to the next, though the connecting blur associated with the phi phenomenon is not present.
It is not necessary to use circles; any visual shape can produce apparent motion. Motion pictures use a sequence of still images, each similar to but slightly different from the one before, to create the experience of smooth movement. At the frame speeds of modern motion pictures, the phi phenomenon is the best explanation for our experience of smooth and natural movement. However, as visual artists discovered more than a century ago, even at slower change rates the beta effect can produce the experience of a moving image.
Like vision and all the other senses, hearing begins with transduction. Sound waves collected by our ears are converted to neural impulses, which are sent to the brain where they are integrated with past experience and interpreted as the sounds we experience. The human ear is sensitive to a wide range of sounds, ranging from the faint tick of a clock in a nearby room to the roar of a rock band at a nightclub, and we have the ability to detect very small variations in sound. But the ear is particularly sensitive to sounds in the same frequency range as the human voice. A mother can pick out her child’s voice from a host of others, and when we pick up the phone we quickly recognize a familiar voice. In a fraction of a second, our auditory system receives the sound waves, transmits them to the auditory cortex, compares them to stored knowledge of other voices, and identifies the caller.
Just as the eye detects light waves, the ear detects sound waves. Vibrating objects (such as the human vocal cords or guitar strings) cause air molecules to bump into each other and produce sound waves, which travel from their source as peaks and valleys much like the ripples that expand outward when a stone is tossed into a pond. Unlike light waves, which can travel in a vacuum, sound waves are carried within mediums such as air, water, or metal, and it is the changes in pressure associated with these mediums that the ear detects.
As with light waves, we detect both the frequency and the amplitude of sound waves. The frequency of a sound wave, measured as the number of waves that arrive per second, determines our perception of pitch, the perceived frequency of a sound. Longer sound waves have lower frequency and produce a lower pitch, whereas shorter waves have higher frequency and a higher pitch.
The amplitude, or height of the sound wave, determines how much energy it contains and is perceived as loudness (the degree of sound volume). Larger waves are perceived as louder. Loudness is measured using the unit of relative loudness known as the decibel. Zero decibels represents the absolute threshold for human hearing, below which we cannot hear a sound. Each increase in 10 decibels represents a tenfold increase in the loudness of the sound. This means that 20 decibels is 10 times louder than 10 decibels, and 30 decibels is 100 times louder (10 X 10) than 10 decibels. The sound of a typical conversation (about 60 decibels) is 1,000 times louder than the sound of a faint whisper (30 decibels), whereas the sound of a jackhammer (130 decibels) is 10 billion times louder than the whisper.
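The decibel arithmetic above follows a single rule: each 10-decibel step multiplies loudness by 10, so the loudness ratio between two sounds is 10 raised to one tenth of their decibel difference. Here is a quick Python sketch checking the chapter's own examples:

def loudness_ratio(db_a, db_b):
    """Relative loudness of sound A versus sound B, using the rule that
    every 10-decibel increase is a tenfold increase in loudness."""
    return 10 ** ((db_a - db_b) / 10)

print(loudness_ratio(60, 30))   # conversation vs. whisper: 1000.0
print(loudness_ratio(130, 30))  # jackhammer vs. whisper:   10000000000.0 (10 billion)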
Audition begins in the pinna, or auricle, the external and visible part of the ear, which is shaped like a funnel to draw in sound waves and guide them into the auditory canal. At the end of the canal, the sound waves strike the tightly stretched, highly sensitive membrane known as the tympanic membrane (or eardrum), which vibrates with the waves. The resulting vibrations are relayed into the middle ear through three tiny bones, known as the ossicles—the hammer (or malleus), anvil (or incus), and stirrup (or stapes)—to the cochlea, a snail-shaped liquid-filled tube in the inner ear. The vibrations cause the oval window, the membrane covering the opening of the cochlea, to vibrate, disturbing the fluid inside the cochlea.
The movements of the fluid in the cochlea bend the hair cells of the inner ear, much in the same way that a gust of wind bends over wheat stalks in a field. The movements of the hair cells trigger nerve impulses in the attached neurons, which are sent to the auditory nerve and then to the auditory cortex in the brain. The cochlea contains about 16,000 hair cells, each of which holds a bundle of fibers known as cilia on its tip. The cilia are so sensitive that they can detect a movement that pushes them the width of a single atom. To put things in perspective, cilia swaying at the width of an atom is equivalent to the tip of the Eiffel Tower swaying by half an inch. [1]
Watch the video below, then answer the questions that follow.
Although loudness is directly determined by the number of hair cells that are vibrating, two different mechanisms are used to detect pitch. The frequency theory of hearing proposes that whatever the pitch of a sound wave, nerve impulses of a corresponding frequency will be sent to the auditory nerve. For example, a tone measuring 600 hertz will be transduced into 600 nerve impulses a second. This theory has a problem with high-pitched sounds, however, because the neurons cannot fire fast enough. About 1,000 nerve impulses a second is the maximum firing rate for the fastest neurons. To reach the necessary speed for higher frequency (pitch) sounds, the neurons work together in a sort of volley system in which different neurons fire in sequence, allowing us to detect sounds up to about 4,000 hertz.
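The volley system amounts to simple division: if one neuron can fire at most about 1,000 impulses per second, then keeping pace with a tone of a given frequency requires at least frequency ÷ 1,000 neurons taking turns. A back-of-the-envelope sketch, using the firing-rate ceiling given above:

import math

MAX_RATE = 1000  # approximate maximum firing rate of a single neuron (impulses/sec)

def neurons_needed(frequency_hz):
    """Minimum number of neurons that must alternate ("volley") so that
    their combined impulses keep pace with a tone's frequency."""
    return math.ceil(frequency_hz / MAX_RATE)

for tone in (600, 1500, 4000):
    print(f"{tone} Hz tone: at least {neurons_needed(tone)} neuron(s) firing in volley")
# A 600 Hz tone needs only 1 neuron; a 4,000 Hz tone needs at least 4 taking turns.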
Not only is frequency important, but location is critical as well. The cochlea relays information about the specific area, or place, in the cochlea that is most activated by the incoming sound. The place theory of hearing proposes that sounds of different frequencies set off waves in the cochlea that peak at different locations along the tube that makes up the cochlea. Higher tones peak at areas closest to the opening of the cochlea (near the oval window). Lower tones peak at areas near the narrow tip of the cochlea, at the opposite end. Pitch is therefore determined in part by the area of the cochlea where the wave of energy reaches its maximum point.
Just as having two eyes in slightly different positions allows us to perceive depth, so the fact that the ears are placed on either side of the head enables us to benefit from stereophonic, or three-dimensional, hearing. If a sound occurs on your left side, the left ear will receive the sound slightly sooner than the right ear, and the sound it receives will be more intense, allowing you to quickly determine the location of the sound. Although the distance between our two ears is only about 6 inches, and sound waves travel at 750 miles an hour, the time and intensity differences are easily detected. [2] When a sound is equidistant from both ears, such as when it is directly in front, behind, beneath or overhead, we have more difficulty pinpointing its location. It is for this reason that dogs (and people, too) tend to cock their heads when trying to pinpoint a sound, so that the ears receive slightly different signals.
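The figures in this paragraph imply an upper bound on how large the arrival-time difference between the two ears can be. Here is a quick sketch using the chapter's numbers (roughly 6 inches of separation and sound traveling at about 750 miles per hour):

# Rough upper bound on the interaural time difference, from the chapter's figures.
ear_separation_ft = 6 / 12                # 6 inches expressed in feet
speed_of_sound_ft_s = 750 * 5280 / 3600   # 750 mph in feet per second (= 1100 ft/s)

max_itd_ms = ear_separation_ft / speed_of_sound_ft_s * 1000  # seconds to milliseconds
print(f"Maximum interaural time difference: about {max_itd_ms:.2f} ms")  # about 0.45 ms

Even this half-millisecond difference is easily detected by the auditory system, which is part of why localization fails only when the difference shrinks toward zero.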
More than 31 million Americans suffer from some kind of hearing impairment. [1] Conductive hearing loss is caused by physical damage to the ear (such as to the eardrums or ossicles) that reduces the ability of the ear to transfer vibrations from the outer ear to the inner ear. Sensorineural hearing loss, which is caused by damage to the cilia or to the auditory nerve, is less common overall but frequently occurs with age. [2] The cilia are extremely fragile, and by the time we are 65 years old, we will have lost 40% of them, particularly those that respond to high-pitched sounds. [3]
Prolonged exposure to loud sounds will eventually create sensorineural hearing loss as the cilia are damaged by the noise. People who constantly operate noisy machinery without using appropriate ear protection are at high risk of hearing loss, as are people who listen to loud music on their headphones or who engage in noisy hobbies, such as hunting or motorcycling. Sounds that are 85 decibels or more can cause damage to your hearing, particularly if you are exposed to them repeatedly. Sounds of more than 130 decibels are dangerous even if you are exposed to them infrequently. People who experience tinnitus (a ringing or a buzzing sensation) after being exposed to loud sounds have very likely experienced some damage to their cilia. Taking precautions when being exposed to loud sound is important, as cilia do not grow back.
While conductive hearing loss can often be improved through hearing aids that amplify the sound, hearing aids are of little help for sensorineural hearing loss. But if the auditory nerve is still intact, a cochlear implant may be used. A cochlear implant is a device made up of a series of electrodes that are placed inside the cochlea. The device serves to bypass the hair cells by stimulating the auditory nerve cells directly. The latest implants utilize place theory, enabling different spots on the implant to respond to different levels of pitch. The cochlear implant can help children who would otherwise be deaf to hear, and if the device is implanted early enough, these children can frequently learn to speak, often as well as children with normal hearing do. [4] [5]
Although vision and hearing are by far the most important senses, human sensation is rounded out by four other senses, each of which provides an essential avenue to a better understanding of and response to the world around us. These other senses are touch, taste, smell, and our sense of body position and movement (proprioception).
Taste is important not only because it allows us to enjoy the food we eat, but, even more crucially, because it leads us toward foods that provide energy (sugar, for instance) and away from foods that could be harmful. Many children are picky eaters for a reason—they are biologically predisposed to be very careful about what they eat. Together with the sense of smell, taste helps us maintain appetite, assess potential dangers (such as the odor of a gas leak or a burning house), and avoid eating poisonous or spoiled food.
Our ability to taste begins at the taste receptors on the tongue. The tongue detects six different taste sensations: sweet, salty, sour, bitter, piquancy (spicy), and umami (savory). Umami is a meaty taste associated with meats, cheeses, soy, seaweed, and mushrooms, and is particularly associated with monosodium glutamate (MSG), a popular flavor enhancer. [1] [2]
Our tongues are covered with taste buds, which are designed to sense chemicals in the mouth. Most taste buds are located on the top outer edges of the tongue, but there are also receptors at the back of the tongue as well as on the walls of the mouth and at the back of the throat. As we chew food, it dissolves and enters the taste buds, triggering nerve impulses that are transmitted to the brain. [3] Human tongues are covered with 2,000 to 10,000 taste buds, and each bud contains between 50 and 100 taste receptor cells. Taste buds are activated very quickly; a salty or sweet taste that touches a taste bud for even one tenth of a second will trigger a neural impulse. [4] On average, taste buds live for about 5 days, after which new taste buds are created to replace them. As we get older, however, the rate of creation decreases, making us less sensitive to taste. This change helps explain why some foods that seem so unpleasant in childhood are more enjoyable in adulthood.
The area of the sensory cortex that responds to taste is in a very similar location to the area that responds to smell, a fact that helps explain why the sense of smell also contributes to our experience of the things we eat. You may remember having had difficulty tasting food when you had a bad cold, and if you block your nose and taste slices of raw potato, apple, and parsnip, you will not be able to taste the differences between them. Our experience of texture in a food (the way we feel it on our tongues) also influences how we taste it.
As we breathe in air through our nostrils, we inhale airborne chemical molecules, which are detected by the 10 million to 20 million receptor cells embedded in the olfactory membrane of the upper nasal passage. The olfactory receptor cells are topped with tentacle-like protrusions that contain receptor proteins. When an odor receptor is stimulated, the membrane sends neural messages up the olfactory nerve to the brain.
We have approximately 1,000 types of odor receptor cells, [5] and it is estimated that we can detect 10,000 different odors. [6] The receptors come in many different shapes and respond selectively to different smells. Like a lock and key, different chemical molecules “fit” into different receptor cells, and odors are detected according to their influence on a combination of receptor cells. Just as the 10 digits from 0 to 9 can combine in many different ways to produce an endless array of phone numbers, odor molecules bind to different combinations of receptors, and these combinations are decoded in the olfactory cortex. Women tend to have a more acute sense of smell than men. The sense of smell peaks in early adulthood and then begins a slow decline. By ages 60 to 70, the sense of smell has become sharply diminished.
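The phone-number analogy can be made concrete with a little counting. The sketch below is a supplementary illustration: the 1,000 receptor types come from the text, while the idea of coding each odor by exactly k receptor types is an arbitrary assumption chosen to show how quickly the combinations outgrow the roughly 10,000 odors we can detect.

```python
# Combinatorial coding of odors: with ~1,000 receptor types, even small
# combinations yield far more patterns than the ~10,000 detectable odors.
# The "exactly k receptors per odor" scheme is an illustrative assumption.
from math import comb

RECEPTOR_TYPES = 1000

for k in (1, 2, 3):
    print(f"Odors coded by exactly {k} receptor type(s): "
          f"{comb(RECEPTOR_TYPES, k):,}")
# 1 -> 1,000; 2 -> 499,500; 3 -> 166,167,000 possible combinations
```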
The sense of touch is essential to human development. Infants thrive when they are cuddled and attended to, but not if they are deprived of human contact. [1] [2] [3] Touch communicates warmth, caring, and support, and is an essential part of the enjoyment we gain from our social interactions with close others. [4] [5]
The skin, the largest organ in the body, is the sensory organ for touch. The skin contains a variety of nerve endings, combinations of which respond to particular types of pressures and temperatures. When you touch different parts of the body, you will find that some areas are more ticklish, whereas other areas respond more to pain, cold, or heat.
The thousands of nerve endings in the skin respond to four basic sensations: pressure, hot, cold, and pain, but only the sensation of pressure has its own specialized receptors. Other sensations are created by combinations of these four. The feeling of wetness, for instance, is produced by the combined stimulation of cold and pressure receptors.
Once the nerve endings receive sensory information, such as pain or heat, the information travels along sensory pathways to the central nervous system. Although some sensations may stop at the spinal cord, resulting in a reflex response, most continue on toward the brain for interpretation. The brain then processes the information and directs your body to respond (e.g., your brain registers the pain of a mosquito bite and sends the message to your muscles to slap your arm).
The skin is important not only in providing information about touch and temperature but also in proprioception—the ability to sense the position and movement of our body parts. Proprioception is accomplished by specialized neurons located in the skin, joints, bones, ears, and tendons, which send messages about the compression and the contraction of muscles throughout the body. Without this feedback from our bones and muscles, we would be unable to play sports, walk, or even stand upright. In fact, learning any new motor skill involves your proprioceptive sense. Imagine trying to hit a baseball if you also had to watch your feet: how would you keep your eye on the ball coming at you? Fortunately, proprioception is so automatic that it normally operates without us even noticing it.
The ability to keep track of where the body is moving is also provided by the vestibular system, a set of liquid-filled areas in the inner ear that monitors the head’s position and movement, maintaining the body’s balance. As you can see in the figure below, the vestibular system includes the semicircular canals and the vestibular sacs. These sacs connect the canals with the cochlea. The semicircular canals sense the rotational movements of the body and the vestibular sacs sense linear accelerations. The vestibular system sends signals to the neural structures that control eye movement and to the muscles that keep the body upright.
Watch the following video on proprioception and then respond to the question below.
We do not enjoy it, but the experience of pain is how the body informs us that we are in danger. The burn when we touch a hot radiator and the sharp stab when we step on a nail lead us to change our behavior, preventing further damage to our bodies. People who cannot experience pain are in serious danger of damage from wounds that others with pain would quickly notice and attend to.
The gate-control theory of pain proposes that pain is determined by the operation of two types of nerve fibers in the spinal cord. One set of smaller nerve fibers carries pain from the body to the brain, whereas a second set of larger fibers is designed to stop or start (as a gate would) the flow of pain. [6] It is for this reason that massaging an area where you feel pain may help alleviate it—the massage activates the large nerve fibers that block the pain signals of the small nerve fibers. [7]
Experiencing pain is a lot more complicated than simply responding to neural messages, however. It is also a matter of perception. We feel pain less when we are busy focusing on a challenging activity, [8] which can help explain why sports players may feel their injuries only after the game. We also feel less pain when we are distracted by humor. [9] Our experience of pain is also affected by our expectations and moods. So, for example, if you were having a bad day and feeling frustrated, stubbing your toe may feel especially painful. Thankfully, our bodies have a way of dealing with pain, soothing it through the brain's release of endorphins, which are natural hormonal painkillers. The release of endorphins can explain the euphoria experienced while running a marathon. [10]
The eyes, ears, nose, tongue, and skin sense the world around us, and in some cases perform preliminary information processing on the incoming data. But by and large, we do not experience sensation—we experience the outcome of perception, the total package that the brain assembles from the pieces it receives through our senses and creates for us to experience. When we look out the window at a view of the countryside, or when we look at the face of a good friend, we don't just see a jumble of colors and shapes—we see, instead, an image of a countryside or an image of a friend. [1]
Our perception is also influenced by psychological factors such as our belief system and our expectations. For example, every so often we may hear of people spotting UFOs in the sky, only to discover later that they were hot air balloons or military aircraft. Similarly, some people believe that they have seen images of divinity in unexpected places, such as the 10-year-old who saw the Virgin Mary on his grilled cheese sandwich. Our beliefs, expectations, and culture therefore influence how we perceive sensory information.
This meaning-making involves the automatic operation of a variety of essential perceptual processes. One of these is sensory interaction—the working together of different senses to create experience. Sensory interaction is involved when taste, smell, and texture combine to create the flavor we experience in food. It is also involved when we enjoy a movie because of the way the images and the music work together.
Although you might think that we understand speech only through our sense of hearing, it turns out that the visual aspect of speech is also important. One example of sensory interaction is shown in the McGurk effect—an error in perception that occurs when we misperceive sounds because the audio and visual parts of the speech are mismatched.
The McGurk effect is a great example of sensory interaction, the mixing of information from more than one sensory system. The McGurk effect is an example of a phenomenon that does not occur in the natural world, but it still tells us about normal, natural information processing. In the natural world, what a person says and how that person’s lips move as he is talking are completely linked. You cannot move your lips one way and say something that doesn’t conform to the way your lips move. The closest thing to an exception is ventriloquism, where a person talks without moving his lips.
The McGurk effect is possible because we can use computer-based video editing to create an artificial experience. A psychologist can film a person saying various things, and then separate and recombine sound segments and visual segments. To keep the experience simple, the McGurk effect is usually created by having a person say simple nonsense sounds, like /GA/ or /BA/.
Watch the video below. Be sure you follow the instructions of the narrator, watching the screen when you are instructed to do that and not watching the screen when you are instructed to close your eyes. As instructed, write down what you actually hear in each condition. After you have completed this exercise, we will discuss the results.
See the video below on the McGurk effect:
To fully appreciate the McGurk effect, notice how you make the /BA/ sound. Try it. You close your lips and then open them to say /BA/. So you close and then open your air passage at the front of your mouth, at the lips. Now make the /GA/ sound. Notice that you close the air passage at the back of your mouth by touching the middle of your tongue to the back of your mouth. So /BA/ is made by closing the front of the air passage and /GA/ is made by closing the back of the air passage.
And /DA/? Notice that you make the /DA/ sound by putting the tip of your tongue on the top of your mouth near the middle of the air passage. What you hear is a compromise: Your ears HEAR a sound made by closing the front of the mouth and SEE a sound made by closing the back of your mouth, so your brain—your perceptual system—splits the difference and PERCEIVES a sound made by closing the middle of the mouth.
What does the McGurk effect tell us about normal processing? This automatic combination of visual and auditory information is not something that your brain throws together to make psychologists happy. The McGurk effect tells us that we always combine visual and auditory information when we are conversing with another person. Of course, in normal life, both senses tell us the same story—the mouth is making the movements that really do match the sounds. But this integration of sound and sight may be very useful in a noisy environment or if the person speaking doesn't speak clearly. Our use of visual information automatically fills in the gaps, letting us perceive what a person says more clearly than our auditory system could have done alone.
Other examples of sensory interaction include the experience of nausea that can occur when the sensory information being received from the eyes and the body does not match information from the vestibular system [1] and synesthesia—an experience in which one sensation (e.g., hearing a sound) creates experiences in another (e.g., vision). Most people do not experience synesthesia, but those who do link their perceptions in unusual ways, for instance, by experiencing color when they taste a particular food or by hearing sounds when they see certain objects. [2]
Synesthesia is a phenomenon that has received a lot of attention in recent years. It occurs when sensory input produces not only the normal sensory experience but also sensory experiences from a different sense or another dimension of that same sense. An example of the first type is sound-color synesthesia: Sounds lead to the perception of those sounds, but also to visual experiences of colors, almost like fireworks. An example of the second type is number-color synesthesia: Looking at numbers not only leads to the visual experience of those numbers, but the numbers are experienced in particular colors, so, for instance, 9 might be red and 5 might be yellow, and so on.
Watch the two videos below and then we will analyze synesthesia more fully.
See the video below on number-color synesthesia:
See the video below on color-sound synesthesia:
Another important perceptual process is selective attention—the ability to focus on some sensory inputs while tuning out others. View the videos in the exercise below. You will see how surprisingly little we notice about things happening right in front of our faces. Perhaps the process of selective attention can help you see why the security guards completely missed the fact that the Chaser group’s motorcade was a fake—they focused on some aspects of the situation, such as the color of the cars and the fact that they were there at all, and completely ignored others (the details of the security information).
See the video below on selective attention:
In the video below, the researcher who created the gorilla video and used it in his research, Dan Simons, explains more about his work.
Next, watch the video below with British illusionist Derren Brown demonstrating how little we actually see.
Now watch this British safe driving advertisement:
Selective attention also allows us to focus on a single talker at a party while ignoring other conversations that are occurring around us. [1] Without this automatic selective attention, we'd be unable to focus on the single conversation we want to hear. However, selective attention is not complete; at the same time, we monitor what is happening in the channels we are not focusing on. Perhaps you have had the experience of being at a party and talking to someone in one part of the room, when suddenly you hear your name being mentioned by someone in another part of the room. This cocktail party phenomenon shows us that, although selective attention is limiting what we process, we are nevertheless doing a lot of unconscious monitoring of the world around us—you did not know you were attending to the background sounds of the party, but evidently you were.
One of the major problems in perception is to ensure that we always perceive the same object in the same way, despite the fact that the sensations it creates on our receptors change dramatically. The ability to perceive a stimulus as constant despite changes in sensation is known as perceptual constancy. Consider our image of a door as it swings. When it is closed, we see it as rectangular, but when it is open, we see only its edge and it appears as a line. But we never perceive the door as changing shape as it swings—perceptual mechanisms take care of the problem for us by allowing us to see a constant shape.
The figure below illustrates this point. In natural circumstances, it never occurs to us that a door is changing shapes as it opens in front of our eyes. As the figure shows, the shape that hits the retina undergoes a significant transformation, but our perceptual system simply interprets it as a rectangular door that is changing in its orientation to us.
The visual system also maintains color constancy. Imagine that you are wearing blue jeans and a bright white t-shirt. When you are outdoors, both colors will be at their brightest, but you will still perceive the white t-shirt as bright and the blue jeans as darker. When you go indoors, the light shining on the clothes will be significantly dimmer, but you will still perceive the t-shirt as bright. This is because we put colors in context and see that, compared to its surroundings, the white t-shirt reflects the most light. [1] In the same way, a green leaf on a cloudy day may reflect the same wavelength of light as a brown tree branch does on a sunny day. Nevertheless, we still perceive the leaf as green and the branch as brown.
To see an example of color constancy, look at the picture of the hot air balloon below. If you were looking at it in a natural setting, not thinking about your psychology class, you would simply think of the main colors: green, red, white, and blue. However, our camera allows us to take a picture frozen in time to analyze. What we can easily see is that the four colors are extremely variable. Look at the color patches to the right. They are all taken from the three bands of color—green, white, and blue—in the lower half of the picture. Notice how the actual hues vary considerably, although we would simply perceive them as the same color if we were not concentrating on their variability. Color constancy is produced by your perceptual system, which takes sensory input and compensates for changes in lighting.
A third kind of constancy is size constancy. Our eyes see objects that differ radically in size, but our perceptual system compensates for this. In the classroom scene below, notice the huge difference in actual size of people in the picture as you go from front to back. This is what the retina detects. But your perceptual system compensates for these differences, using the knowledge that all of the people are approximately the same size.
Our perceptual system compensates for distance, using depth cues from the space around objects to help it make adjustments. The figure below shows how physical context influences our perceptual system. On the left, the monster in front seems smaller than the one in the back because our perceptual system adjusts the perceived size to take into account the apparent distance into the tunnel. On the right, we remove the tunnel, and the two monsters are easily seen to be identical in size.
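One common way to summarize this compensation is a size-distance rule: perceived size grows with both the size of the retinal image and the perceived distance of the object. The proportional formula and the sample numbers below are simplifying assumptions used only to mimic the monster figure, not a model taken from the course.

```python
# Simplified size-distance compensation: perceived size is taken to be
# proportional to retinal image size times perceived distance. Both the
# formula and the numbers are illustrative assumptions.

def perceived_size(retinal_image: float, perceived_distance: float) -> float:
    return retinal_image * perceived_distance

# The two monsters cast identical retinal images, but the tunnel's depth
# cues make the rear one seem farther away...
front = perceived_size(retinal_image=1.0, perceived_distance=1.0)
back = perceived_size(retinal_image=1.0, perceived_distance=1.5)
print(back > front)  # True: the rear monster is perceived as larger
```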
Instructions: The image below illustrates the results of our perceptual system’s attempt to make sense of a complicated pattern—a pattern designed to fool the perceptual system. This is an example of color constancy.
Click and drag to move the rows of the figure (marked a through f) around to help you answer these questions.
Instructions: The picture below is another illustration of our perceptual system’s attempt to compensate for a complex world. In this case, it appears that some squares are in a shadow. Rather than assuming that those squares are actually different from the rest of the checkerboard, the perceptual system compensates for the shadow.
Click and drag to move squares A and B around to help you answer these questions.
Although our perception is very accurate, it is not perfect. Illusions occur when the perceptual processes that normally help us correctly perceive the world around us are fooled by a particular situation so that we see something that does not exist or that is incorrect. We will look at some perceptual illusions in the exercises below to see what we can learn from them.
Instructions: For the figures below, adjust the length of the center bar on the top figure so it looks to you to be the same length as the middle bar in the lower figure. Simply rely on what your perceptual system tells you looks equal. Don't try to compensate for any illusions or use a straightedge to help you make the judgment. Just use your visual system.
The figures you just worked with form the Mueller-Lyer illusion (see also the figure below). The middle line on the figure with the arrows pointing in ( >--< ) looks longer to most people than the middle line on the one with the arrows pointing out (<-->) when they are both actually the same length.
Many psychologists believe that this illusion is, in part, the result of the way we interpret the angled lines at the ends of the figure. Based on our experience with rooms (see figure below), when we see lines pointing out away from the center line (building on the left), we tend to interpret that as meaning that the center line is close to us and the angled lines are going away into the distance. But when the angled lines on the end point in toward the center line (room interior on the right), we interpret that as meaning that the center line is far away and the angled lines are coming toward us. Our perceptual system then compensates for these assumptions and gives us the experience of perceiving the “more distant line” (the one on the right) as being longer than it really is.
The moon illusion refers to the fact that the moon is perceived to be about 50% larger when it is near the horizon than when it is seen overhead, despite the fact that the moon is the same size and casts the same-size retinal image in both cases. The monocular depth cues of position and aerial perspective create the illusion that things that are lower and more hazy are farther away. The skyline of the horizon (trees, clouds, outlines of buildings) also gives a cue that the moon is far away, compared to a moon at its zenith. If we look at a horizon moon through a tube of rolled-up paper, taking away the surrounding horizon cues, the moon will immediately appear smaller.
The moon always looks larger on the horizon than when it is high above, but if we take away the surrounding distance cues of the horizon, the illusion disappears. One suggestion is that the perceptual system compares the moon's curvature with the long, nearly flat curvature of the earth on the horizon, and in compensating for that contrast it exaggerates the moon's perceived size. When the moon is high in the sky, those comparison cues are absent, and the perceptual system makes no such exaggeration.
From Flat World Knowledge Introduction to Psychology, v1.0: © Thinkstock.
We can’t create the moon illusion in this course, but a similar illusion is the Ebbinghaus illusion.
Instructions: Adjust the two center circles so that they appear to you to be the same size. Don’t use any aids to measure the size or try to compensate for the illusion. Rely on your visual system.
Instructions: Adjust the horizontal black lines in the figure below until they appear to be straight. Do not try to compensate for any illusion.
The figure below is called the Wundt illusion after its creator, 19th-century German psychologist Wilhelm Wundt.
The Ponzo illusion operates on the same principle. The monocular depth cue of linear perspective leads us to believe that, given two similar objects, the distant one can only cast the same size retinal image as the closer object if it is larger. The topmost bar therefore appears longer.
The Ponzo illusion is caused by a misapplication of the monocular depth cue of linear perspective: both bars are the same length even though the top one looks longer.
Instructions: In the figure above, the bottom yellow line is fixed. Your task is to adjust the top yellow line until it LOOKS the same length as the bottom one. Please use only your eyes to guide your decision. Don’t try to compensate for the illusion. Just adjust the line until your perceptual system tells you that the lines look to be the same length.
Illusions demonstrate that our perception of the world around us may be influenced by our prior knowledge. But the fact that illusions exist in some cases does not mean the perceptual system is generally inaccurate. In fact, humans normally become so closely attuned to their environment that the physical body and the particular environment we sense and perceive become embodied—that is, built into and linked with—our cognition, such that the world around us becomes part of our brain. [1] The close relationship between people and their environments means that, although illusions can be created in the lab and under some unique situations, they may be less common with active observers in the real world. [2]
Here is a great spot to see many more illusions and to learn more about why they occur.
A Google image search will also turn up interesting illusions.
It is a continuous challenge living with post-traumatic stress disorder (PTSD), and I’ve suffered from it for most of my life. I can look back now and gently laugh at all the people who thought I had the perfect life. I was young, beautiful, and talented, but unbeknownst to them, I was terrorized by an undiagnosed debilitating mental illness.
Having been properly diagnosed with PTSD at age 35, I know that there is not one aspect of my life that has gone untouched by this mental illness. My PTSD was triggered by several traumas, most importantly a sexual attack at knifepoint that left me thinking I would die. I would never be the same after that attack. For me there was no safe place in the world, not even my home. I went to the police and filed a report. Rape counselors came to see me while I was in the hospital, but I declined their help, convinced that I didn’t need it. This would be the most damaging decision of my life.
For months after the attack, I couldn’t close my eyes without envisioning the face of my attacker. I suffered horrific flashbacks and nightmares. For four years after the attack I was unable to sleep alone in my house. I obsessively checked windows, doors, and locks. By age 17, I’d suffered my first panic attack. Soon I became unable to leave my apartment for weeks at a time, ending my modeling career abruptly. This just became a way of life. Years passed when I had few or no symptoms at all, and I led what I thought was a fairly normal life, just thinking I had a “panic problem.”
Then another traumatic event retriggered the PTSD. It was as if the past had evaporated, and I was back in the place of my attack, only now I had uncontrollable thoughts of someone entering my house and harming my daughter. I saw violent images every time I closed my eyes. I lost all ability to concentrate or even complete simple tasks. Normally social, I stopped trying to make friends or get involved in my community. I often felt disoriented, forgetting where, or who, I was. I would panic on the freeway and became unable to drive, again ending a career. I felt as if I had completely lost my mind. For a time, I managed to keep it together on the outside, but then I became unable to leave my house again.
Around this time I was diagnosed with PTSD. I cannot express to you the enormous relief I felt when I discovered my condition was real and treatable. I felt safe for the first time in 32 years. Taking medication and undergoing behavioral therapy marked the turning point in my regaining control of my life. I’m rebuilding a satisfying career as an artist, and I am enjoying my life. The world is new to me and not limited by the restrictive vision of anxiety. It amazes me to think back to what my life was like only a year ago, and just how far I’ve come. For me there is no cure, no final healing. But there are things I can do to ensure that I never have to suffer as I did before being diagnosed with PTSD. I’m no longer at the mercy of my disorder, and I would not be here today had I not had the proper diagnosis and treatment. The most important thing to know is that it’s never too late to seek help. [1]
In the early part of the 20th century, Russian physiologist Ivan Pavlov (1849–1936) was studying the digestive system of dogs when he noticed an interesting behavioral phenomenon: The dogs began to salivate when the lab technicians who normally fed them entered the room, even though the dogs had not yet received any food. Pavlov realized that the dogs were salivating because they knew that they were about to be fed; the dogs had begun to associate the arrival of the technicians with the food that soon followed their appearance in the room.
With his team of researchers, Pavlov began studying this process in more detail. He conducted a series of experiments in which, over a number of trials, dogs were exposed to a sound immediately before receiving food. He systematically controlled the onset of the sound and the timing of the delivery of the food, and recorded the amount of the dogs’ salivation. Initially the dogs salivated only when they saw or smelled the food, but after several pairings of the sound and the food, the dogs began to salivate as soon as they heard the sound. The animals had learned to associate the sound with the food that followed.
Use the next exercise to sort out the sometimes tricky terminology of classical conditioning.
In the first step, we focus on the initial conditions before conditioning has taken place. After looking at the pictures below, determine the unconditioned stimulus and the unconditioned response that results.
Now you must predict what should happen when a hungry dog is presented with food.
In this step, we find a neutral stimulus—a stimulus that produces no response. After looking at the pictures below, determine what the neutral stimulus is and the response that it causes. In this test, the dog is not hungry.
Now you must predict what should happen when a hungry dog hears the neutral stimulus: a tone.
In this step, we go through the actual conditioning process, associating a conditioned stimulus (CS) with an unconditioned stimulus (US). When we are finished, the neutral stimulus (NS) will have become the conditioned stimulus. After looking at the pictures below, determine the conditioned and unconditioned stimuli and the unconditioned response that they cause. Note that this sequence differs from the first one you did because there are always two stimuli present: first a CS and then a US.
This is the last part of this exercise. We want to see if conditioning—associative learning—has taken place. In the previous exercise, the US was always present, so it could produce the UR. Now we remove the US to see if the animal has learned to produce the same response when only the CS is present. If so, we will rename the response, when produced only by the CS, the Conditioned Response (CR). After looking at the pictures below, determine what the conditioned stimulus is and the conditioned response that it causes.
This exercise is best understood in relation to the previous exercise on learning. Imagine a series of learning trials in which the CS is followed by the US, and the UR is measured. Now, in this exercise, suppose that on the next trial we present only the CS, without the US. What happens? In Trial 1 we pair the CS and US only one time before presenting the CS alone. In Trial 2 we pair the CS and US two times before presenting the CS alone. In Trial 3 we pair the CS and US three times before presenting the CS alone.
Pavlov identified a fundamental associative learning process called classical conditioning. Classical conditioning refers to learning that occurs when a neutral stimulus (e.g., a tone) becomes associated with a stimulus (e.g., food) that naturally produces a specific behavior. After the association is learned, the previously neutral stimulus is sufficient to produce the behavior. As you can see in the following figure, psychologists use specific terms to identify the stimuli and the responses in classical conditioning. The unconditioned stimulus (US) is something (such as food) that triggers a naturally occurring response, and the unconditioned response (UR) is the naturally occurring response (such as salivation) that follows the unconditioned stimulus. The conditioned stimulus (CS) is a neutral stimulus that, after being repeatedly presented prior to the unconditioned stimulus, evokes a response similar to the response to the unconditioned stimulus. In Pavlov’s experiment, the sound of the tone served as the conditioned stimulus that, after learning, produced the conditioned response (CR), which is the acquired response to the formerly neutral stimulus. Note that the UR and the CR are the same behavior—in this case salivation—but they are given different names because they are produced by different stimuli (the US and the CS, respectively).
Conditioning is evolutionarily beneficial because it allows organisms to develop expectations that help them prepare for both good and bad events. Imagine, for instance, that an animal first smells a new food, eats it, and then gets sick. If the animal can learn to associate the smell (CS) with the food (US), then it will quickly learn that the food creates the negative outcome and will not eat it next time.
A researcher is testing young children to see if they can learn to associate a red circle with an event that they enjoy. She sets up an experiment in which a toy bear dances. The children predictably love the toy bear and stare at it when it makes noise and dances. She then trains each child by showing a big red circle on a screen in front of the child and, immediately after that, having the bear appear and dance off to the side. The bear is only visible right after the red circle appears, and the child must turn his or her head to see the bear.
After he had demonstrated that learning could occur through association, Pavlov moved on to study the variables that influenced the strength and the persistence of conditioning. In some studies, after the conditioning had taken place, Pavlov presented the sound repeatedly but without presenting the food afterward. As you can see, after the initial acquisition (learning) phase in which the conditioning occurred, when the CS was then presented alone, the behavior rapidly decreased—the dogs salivated less and less to the sound, and eventually the sound did not elicit salivation at all. Extinction is the reduction in responding that occurs when the conditioned stimulus is presented repeatedly without the unconditioned stimulus.
Although at the end of the first extinction period the CS was no longer producing salivation, the effects of conditioning had not entirely disappeared. Pavlov found that, after a pause, sounding the tone again elicited salivation, although to a lesser extent than before extinction took place. The increase in responding to the CS following a pause after extinction is known as spontaneous recovery. When Pavlov again presented the CS alone, the behavior again showed extinction.
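Acquisition and extinction curves like Pavlov's can be imitated with a toy learning model. The sketch below is a supplementary illustration, not Pavlov's analysis: it simply assumes that associative strength moves a fixed fraction of the way toward a target on every trial (1.0 when the US follows the CS, 0.0 when the CS appears alone), which is enough to reproduce a rising acquisition curve followed by a falling extinction curve. Notice that spontaneous recovery is exactly what this naive model cannot capture: real extinction does not simply erase the association.

```python
# Toy model of acquisition and extinction: on each trial, associative
# strength moves a fraction (the learning rate) of the way toward a
# target -- 1.0 when the US follows the CS, 0.0 when the CS is alone.
# The model and its parameters are illustrative assumptions.

LEARNING_RATE = 0.3
strength = 0.0

print("Acquisition (CS paired with US):")
for trial in range(1, 9):
    strength += LEARNING_RATE * (1.0 - strength)
    print(f"  trial {trial}: associative strength = {strength:.2f}")

print("Extinction (CS presented alone):")
for trial in range(1, 9):
    strength += LEARNING_RATE * (0.0 - strength)
    print(f"  trial {trial}: associative strength = {strength:.2f}")
```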
For each example, select the term that best describes it.
Although the behavior has disappeared, extinction is never complete. If conditioning is again attempted, the animal will learn the new associations much faster than it did the first time. Pavlov also experimented with presenting new stimuli that were similar, but not identical to, the original conditioned stimulus. For instance, if the dog had been conditioned to being scratched before the food arrived, the stimulus would be changed to being rubbed rather than scratched. He found that the dogs also salivated upon experiencing the similar stimulus, a process known as generalization. Generalization refers to the tendency to respond to stimuli that resemble the original conditioned stimulus. The ability to generalize has important evolutionary significance. If we eat some red berries and they make us sick, it would be a good idea to think twice before we eat some purple berries. Although the berries are not exactly the same, they nevertheless are similar and may have the same negative properties.
Lewicki [1] conducted research that demonstrated the influence of stimulus generalization and how quickly and easily it can happen. In his experiment, high school students first had a brief interaction with a female experimenter who had short hair and glasses. The study was set up so that the students had to ask the experimenter a question, and (according to random assignment) the experimenter responded either in a negative way or a neutral way toward the students. Then the students were told to go into a second room in which two experimenters were present, and to approach either one of them. However, the researchers arranged it so that one of the two experimenters looked a lot like the original experimenter, while the other one did not (she had longer hair and no glasses). The students were significantly more likely to avoid the experimenter who looked like the earlier experimenter when that experimenter had been negative to them than when she had treated them more neutrally. The participants showed stimulus generalization such that the new, similar-looking experimenter created the same negative response in the participants as had the experimenter in the prior session.
The flip side of generalization is discrimination—the tendency to respond differently to stimuli that are similar but not identical. Pavlov’s dogs quickly learned, for example, to salivate when they heard the specific tone that had preceded food, but not upon hearing similar tones that had never been associated with food. Discrimination is also useful—if we do try the purple berries, and if they do not make us sick, we will be able to make the distinction in the future. And we can learn that although the two people in our class, Courtney and Sarah, may look a lot alike, they are nevertheless different people with different personalities.
In some cases, an existing conditioned stimulus can serve as an unconditioned stimulus for a pairing with a new conditioned stimulus—a process known as second-order conditioning. In one of Pavlov’s studies, for instance, he first conditioned the dogs to salivate to a sound, and then repeatedly paired a new CS, a black square, with the sound. Eventually he found that the dogs would salivate at the sight of the black square alone, even though it had never been directly associated with the food. Secondary conditioners in everyday life include our attractions to things that stand for or remind us of something else, such as when we feel good on a Friday because it has become associated with the paycheck that we receive on that day, which itself is a conditioned stimulus for the pleasures that the paycheck buys us.
Scientists associated with the behaviorist school argued that all learning is driven by experience, and that nature plays no role. Classical conditioning, which is based on learning through experience, represents an example of the importance of the environment. But classical conditioning cannot be understood entirely in terms of experience. Nature also plays a part, as our evolutionary history has made us better able to learn some associations than others.
Clinical psychologists make use of classical conditioning to explain the learning of a phobia—a strong and irrational fear of a specific object, activity, or situation. For example, driving a car is a neutral event that would not normally elicit a fear response in most people. But if a person were to experience a panic attack in which he suddenly experienced strong negative emotions while driving, he may learn to associate driving with the panic response. The driving has become the CS that now creates the fear response.
Psychologists have also discovered that people do not develop phobias to just anything. Although people may in some cases develop a driving phobia, they are more likely to develop phobias toward objects (such as snakes, spiders, heights, and open spaces) that have been dangerous to people in the past. In modern life, it is rare for humans to be bitten by spiders or snakes, to fall from trees or buildings, or to be attacked by a predator in an open area. Being injured while riding in a car or being cut by a knife are much more likely. But in our evolutionary past, the potential of being bitten by snakes or spiders, falling out of a tree, or being trapped in an open space were important evolutionary concerns, and therefore humans are still evolutionarily prepared to learn these associations over others. [1] [2]
Another evolutionarily important type of conditioning is conditioning related to food. In important research on food conditioning, John Garcia and his colleagues [3] [4] attempted to condition rats by presenting either a taste, a sight, or a sound as a neutral stimulus before the rats were given drugs (the US) that made them nauseous. Garcia discovered that taste conditioning was extremely powerful—the rats learned to avoid the taste associated with illness, even if the illness occurred several hours later. But conditioning the behavioral response of nausea to a sight or a sound was much more difficult. These results contradicted the idea that conditioning occurs entirely as a result of environmental events, such that it would occur equally for any kind of unconditioned stimulus that followed any kind of conditioned stimulus. Rather, Garcia’s research showed that genetics matters—organisms are evolutionarily prepared to learn some associations more easily than others. You can see that the ability to associate tastes with illness is an important survival mechanism, allowing the organism to quickly learn to avoid foods that are poisonous.
Classical conditioning has also been used to help explain the experience of posttraumatic stress disorder (PTSD), as in the case of P. K. Philips described at the beginning of this module. PTSD is a severe anxiety disorder that can develop after exposure to a fearful event, such as the threat of death. [5] PTSD occurs when the individual develops a strong association between the situational factors that surrounded the traumatic event (e.g., military uniforms or the sounds or smells of war) and the US (the fearful trauma itself). As a result of the conditioning, being exposed to, or even thinking about the situation in which the trauma occurred (the CS), becomes sufficient to produce the CR of severe anxiety. [6]
PTSD develops because the emotions experienced during the event have produced neural activity in the amygdala and created strong conditioned learning. In addition to the strong conditioning that people with PTSD experience, they also show slower extinction in classical conditioning tasks. [7] In short, people with PTSD have developed very strong associations with the events surrounding the trauma and are also slow to show extinction to the conditioned stimulus.
Instructions: John Garcia’s experiment, described in the text above, was based on the idea that it is easy to condition some associations, but others are difficult to condition. He injected a rat with a chemical that made the rat nauseous.
Let’s reconstruct Garcia’s experiment, so we’re sure that its implications are clear. Here is our subject, a rat.
Task #1: Conditioning the rat to a light. Does it work?
Answer each question to fill in the boxes. The emoticon faces represent whether the rat is happy or sick. You will have one emoticon face left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
Task #2: Conditioning the rat to a distinctive taste (e.g., salt). Does it work?
Answer the questions to fill in the learning phase of the experiment. What are the conditioned and unconditioned stimuli, and what is the unconditioned response? You will have one of the emoticon faces left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
Task #3: Conditioning the rat to a tone. Does it work?
Answer the questions to put the pictures in the appropriate locations to show the learning phase of the experiment. What are the conditioned and unconditioned stimuli, and what is the unconditioned response? You will have one of the emoticon faces left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
Images in this activity courtesy of qtipd, inky2010, azieser, rgesthuizen (Public Domain), and L. Marie (CC-BY-2.0).
In this module, you learn about a different kind of conditioning called operant conditioning. First, remember how classical conditioning works. In classical conditioning, the individual learns to associate new stimuli with natural, biological responses, such as salivation or fear. The organism does not learn a new behavior but rather learns to perform an existing behavior in the presence of a new signal. For example, remember how Pavlov’s dogs learned to salivate when a bell was rung. This learning occurred because the dog learned to associate a new stimulus (e.g., the bell) with an existing stimulus (e.g., the meat), so now the bell produced the same response (the CR: salivation) originally produced only by the meat.
Operant conditioning, on the other hand, is learning that occurs on the basis of the consequences of behavior and can involve the learning of new behaviors. In operant conditioning, the process starts with doing something—behavior—and then noticing the consequences of that behavior. For example, operant conditioning occurs when a dog rolls over on command because it has been praised for doing so in the past. We go into the details of how this learning happens later in this module, but the important point is that an animal that never rolled over on command can learn this new behavior because it notices that its actions lead to rewards—treats or praise.
In summary, classical conditioning is a process in which an individual learns a new cue for an existing behavior by associating the new cue (the CS) with the existing cue (US). Operant conditioning is the process of learning a new behavior by noticing the consequences of that behavior.
Psychologist Edward L. Thorndike (1874–1949) was the first scientist to systematically study operant conditioning. In his research Thorndike [1] observed cats who had been placed in a “puzzle box” from which they tried to escape. At first the cats scratched, bit, and swatted haphazardly, without any idea how to get out. But eventually, and accidentally, they pressed the lever that opened the door and exited to their prize, a scrap of fish. The next time the cat was constrained within the box, it attempted fewer of the ineffective responses before carrying out the successful escape, and after several trials the cat learned to almost immediately make the correct response.
Observing these changes in the cats’ behavior led Thorndike to develop his law of effect, the principle that responses that create a typically pleasant outcome in a particular situation are more likely to occur again in a similar situation, whereas responses that produce a typically unpleasant outcome are less likely to occur again in the situation. [2] The essence of the law of effect is that successful responses, because they are pleasurable, are “stamped in” by experience and thus occur more frequently. Unsuccessful responses, which produce unpleasant experiences, are “stamped out” and subsequently occur less frequently.
The influential behavioral psychologist B. F. Skinner (1904–1990) expanded on Thorndike’s ideas to develop a more complete set of principles to explain operant conditioning. Skinner created specially designed environments known as operant chambers (usually called Skinner boxes) to systematically study learning. A Skinner box (operant chamber) is a structure that is big enough to fit a rodent or bird and that contains a bar or key that the organism can press or peck to release food or water. It also contains a device to record the animal’s responses.
The most basic of Skinner’s experiments was quite similar to Thorndike’s research with cats. A rat placed in the chamber reacted as one might expect, scurrying about the box and sniffing and clawing at the floor and walls. Eventually the rat chanced upon a lever, which it pressed to release pellets of food. The next time around, the rat took a little less time to press the lever, and on successive trials, the time it took to press the lever became shorter and shorter. Soon the rat was pressing the lever as fast as it could eat the food that appeared. As predicted by the law of effect, the rat had learned to repeat the action that brought about the food and cease the actions that did not.
Skinner studied, in detail, how animals changed their behavior through reinforcement and punishment, and he developed terms that explained the processes of operant learning, as shown in the table below. Skinner used the term reinforcer to refer to any event that strengthens or increases the likelihood of a behavior and the term punisher to refer to any event that weakens or decreases the likelihood of a behavior. And he used the terms positive and negative to refer to whether a stimulus was presented or removed, respectively.
How Positive and Negative Reinforcement and Punishment Influence Behavior

Term | Description | Outcome | Example
Positive reinforcement | A pleasant stimulus is presented after the behavior | The behavior is strengthened | Praising a dog after it rolls over on command
Negative reinforcement | An unpleasant stimulus is removed after the behavior | The behavior is strengthened | Smoking a cigarette to eliminate the craving for nicotine
Positive punishment | An unpleasant stimulus is presented after the behavior | The behavior is weakened | Being yelled at after fighting with a sibling
Negative punishment | A pleasant stimulus is removed after the behavior | The behavior is weakened | Losing the opportunity to go to recess after getting a poor grade

From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
Reinforcement, either positive or negative, works by increasing the likelihood of a behavior. Punishment, on the other hand, refers to any event that weakens or reduces the likelihood of a behavior. Positive punishment weakens a response by presenting something unpleasant after the response, whereas negative punishment weakens a response by reducing or removing something pleasant. A child who is yelled at after fighting with a sibling (positive punishment) or who loses out on the opportunity to go to recess after getting a poor grade (negative punishment) is less likely to repeat these behaviors.
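Skinner's four terms amount to a two-question grid: was a stimulus presented or removed, and was it pleasant or unpleasant? The helper below is a supplementary illustration that simply encodes that grid; the function name and example calls are assumptions for illustration, not Skinner's own terminology beyond the four terms themselves.

```python
# The 2x2 grid of operant terms: positive/negative marks whether a
# stimulus is presented or removed; reinforcement/punishment marks
# whether the behavior is strengthened or weakened.

def operant_term(stimulus_presented: bool, stimulus_pleasant: bool) -> str:
    sign = "positive" if stimulus_presented else "negative"
    # Presenting something pleasant or removing something unpleasant
    # strengthens behavior; the other two combinations weaken it.
    if stimulus_presented == stimulus_pleasant:
        return f"{sign} reinforcement (behavior strengthened)"
    return f"{sign} punishment (behavior weakened)"

print(operant_term(True, True))    # praising a dog for rolling over
print(operant_term(False, False))  # a painkiller removes pain
print(operant_term(True, False))   # yelled at after fighting
print(operant_term(False, True))   # recess taken away
```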
For each example below, select which terms best describe it.
Although the distinction between reinforcement (which increases behavior) and punishment (which decreases it) is usually clear, in some cases it is difficult to determine whether a reinforcer is positive or negative. On a hot day a cool breeze could be seen as a positive reinforcer (because it brings in cool air) or a negative reinforcer (because it removes hot air). In other cases, reinforcement can be both positive and negative. One may smoke a cigarette both because it brings pleasure (positive reinforcement) and because it eliminates the craving for nicotine (negative reinforcement).
It is important to note that reinforcement and punishment are not simply opposites. The use of positive reinforcement in changing behavior is almost always more effective than using punishment. This is because positive reinforcement makes the person or animal feel better, helping create a positive relationship with the person providing the reinforcement. Types of positive reinforcement that are effective in everyday life include verbal praise or approval, the awarding of status or prestige, and direct financial payment. Punishment, on the other hand, is more likely to create only temporary changes in behavior, because it is based on coercion and typically creates a negative and adversarial relationship with the person providing the punishment. When the person who provides the punishment leaves the situation, the unwanted behavior is likely to return.
Perhaps you remember watching a movie or being at a show in which an animal—maybe a dog, a horse, or a dolphin—did some pretty amazing things. The trainer gave a command and the dolphin swam to the bottom of the pool, picked up a ring on its nose, jumped out of the water through a hoop in the air, dived again to the bottom of the pool, picked up another ring, and then took both of the rings to the trainer at the edge of the pool. The animal was trained to do the trick, and the principles of operant conditioning were used to train it. But these complex behaviors are a far cry from the simple stimulus-response relationships that we have considered thus far. How can reinforcement be used to create complex behaviors such as these?
One way to expand the use of operant learning is to modify the schedule on which the reinforcement is applied. To this point we have only discussed a continuous reinforcement schedule, in which the desired response is reinforced every time it occurs; whenever the dog sits, for instance, it gets a biscuit. This type of reinforcement schedule can be depicted as follows.
Continuous reinforcement results in relatively fast learning but also rapid extinction of the desired behavior once the reinforcer disappears. The problem is that because the organism is used to receiving the reinforcement after every behavior, the responder may give up quickly when it doesn’t appear.
Most real-world reinforcers are not continuous; they occur on a partial (or intermittent) reinforcement schedule—a schedule in which the responses are sometimes reinforced, and sometimes not. In comparison to continuous reinforcement, partial reinforcement schedules lead to slower initial learning, but they also lead to greater resistance to extinction. Because the reinforcement does not appear after every behavior, it takes longer for the learner to determine that the reward is no longer coming, and thus extinction is slower.
The four types of partial reinforcement schedules are summarized in the following table.
Four Types of Partial Reinforcement Schedules

Schedule | Explanation | Example
Fixed-interval | Reinforcement occurs for the first response made after a fixed amount of time has passed | An animal is reinforced for the first response after each one-minute interval
Variable-interval | Reinforcement occurs for the first response made after an amount of time that varies around an average | Checking e-mail messages that arrive at unpredictable times
Fixed-ratio | Reinforcement occurs after a fixed number of responses | A salesperson receives a bonus after every 10 products sold
Variable-ratio | Reinforcement occurs after a number of responses that varies around an average | Winning money from a slot machine or on a lottery ticket

From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
Partial reinforcement schedules are determined by whether the reinforcement is presented on the basis of the time that elapses between reinforcement (interval) or on the basis of the number of responses that the organism engages in (ratio), and by whether the reinforcement occurs on a regular (fixed) or unpredictable (variable) schedule.
In a fixed-interval schedule, reinforcement occurs for the first response made after a specific amount of time has passed. For instance, on a one-minute fixed-interval schedule the animal receives a reinforcement every minute, assuming it engages in the behavior at least once during the minute.
In a variable-interval schedule, the reinforcers appear on an interval schedule, but the timing is varied around the average interval, making the actual appearance of the reinforcer unpredictable. An example might be checking your e-mail: you are reinforced by receiving messages that arrive, on average, every 30 minutes, but at unpredictable times. Interval reinforcement schedules tend to produce slow and steady rates of responding.
In a fixed-ratio schedule, a behavior is reinforced after a specific number of responses. For instance, a rat’s behavior may be reinforced after it has pressed a key 20 times, or a salesperson may receive a bonus after she has sold 10 products. A variable-ratio schedule provides reinforcers after an average, but unpredictable, number of responses. Winning money from slot machines or on lottery tickets is an example of reinforcement that occurs on a variable-ratio schedule. For instance, a slot machine may be programmed to provide a win every 20 times the user pulls the handle, on average.
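If you find it helpful to see the four rules side by side, the short simulation below counts the reinforcers an organism would earn under each schedule. This is a minimal sketch in Python, not anything from the conditioning literature: the parameter values (a ratio of 20 responses, a one-minute interval) are borrowed from the examples above, and all function and variable names are our own.

```python
import random

RATIO = 20        # responses per reinforcer (from the slot-machine example)
INTERVAL = 60.0   # seconds per reinforcer (from the one-minute example)

def run_session(schedule, n_responses=1000, seconds_per_response=1.0):
    """Count the reinforcers earned under one partial reinforcement
    schedule, assuming evenly spaced responses."""
    rewards = 0
    responses_since_reward = 0
    clock = 0.0
    deadline = INTERVAL
    if schedule == "variable-interval":
        deadline = random.expovariate(1.0 / INTERVAL)  # ~60 s on average
    for _ in range(n_responses):
        clock += seconds_per_response
        responses_since_reward += 1
        if schedule == "fixed-ratio" and responses_since_reward >= RATIO:
            rewards += 1                  # every 20th response pays off
            responses_since_reward = 0
        elif schedule == "variable-ratio" and random.random() < 1.0 / RATIO:
            rewards += 1                  # pays off every 20th response, on average
            responses_since_reward = 0
        elif schedule == "fixed-interval" and clock >= deadline:
            rewards += 1                  # first response after each minute
            deadline += INTERVAL
        elif schedule == "variable-interval" and clock >= deadline:
            rewards += 1                  # first response after roughly a minute
            deadline = clock + random.expovariate(1.0 / INTERVAL)
    return rewards

for s in ("fixed-ratio", "variable-ratio", "fixed-interval", "variable-interval"):
    print(s, run_session(s))
```

Running it a few times makes the ratio/interval distinction concrete: the ratio schedules pay off in proportion to how much the organism responds, while the interval schedules pay off in proportion to how much time has passed, no matter how furiously the organism responds in between.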
Complex behaviors are also created through shaping, the process of guiding an organism’s behavior to the desired outcome through the use of successive approximations to a final desired behavior. Skinner made extensive use of this procedure in his boxes, as shown in the video below. For instance, he could train a rat to press a bar two times to receive food by first providing food when the animal moved near the bar. When that behavior had been learned, he would begin to provide food only when the rat touched the bar. Further shaping limited the reinforcement to occasions when the rat pressed the bar, then to when it pressed the bar and touched it a second time, and finally to only when it pressed the bar twice. Although it can take a long time, in this way operant conditioning can create chains of behaviors that are reinforced only when they are completed.
Reinforcing animals if they correctly discriminate between similar stimuli allows scientists to test the animals’ ability to learn, and the discriminations that they can make are sometimes quite remarkable. Pigeons have been trained to distinguish between images of Charlie Brown and the other Peanuts characters, [1] and between different styles of music and art. [2] [3]
Behaviors can also be trained through the use of secondary reinforcers. Whereas a primary reinforcer includes stimuli that are naturally preferred or enjoyed by the organism, such as food, water, and relief from pain, a secondary reinforcer (sometimes called conditioned reinforcer) is a neutral event that has become associated with a primary reinforcer through classical conditioning. An example of a secondary reinforcer would be the whistle given by an animal trainer, which has been associated over time with the primary reinforcer, food. An example of an everyday secondary reinforcer is money. We enjoy having money, not so much for the stimulus itself, but rather for the primary reinforcers (the things that money can buy) with which it is associated.
You want to teach your dog to turn on the light in your living room when you command him to. Using shaping, put in order the behaviors you would reward. Remember: in shaping, you are rewarding successive approximations. Start out rewarding the most general behavior, and as the learning progresses, start rewarding behaviors that look more and more like the desired behavior.
Ivan Pavlov, John Watson, and B. F. Skinner were scientists who believed that all learning could be explained by the processes of conditioning. Two critical features of their ideas about conditioning are useful to keep in mind as you study this section. First, they thought of learning as being the same thing as behavior change; don’t confuse this with the idea that you first figure something out mentally and then change your behavior. Second, they believed that learning (i.e., behavior change) occurs only when the individual directly and personally experiences the impact of some reward or punishment.
In this section, you will encounter some kinds of learning that are difficult to explain using the ideas that learning occurs only with behavior change and that personal experience is required for learning. Although classical and operant conditioning play key roles in learning, they constitute only a part of the total picture.
In the preceding module, where you learned about operant conditioning, you read about Edward Thorndike’s work with trial-and-error learning. This kind of learning was described in the video that showed how a cat was able to learn to escape from a puzzle box. Trial-and-error leads to learning, according to Thorndike, because of the law of effect: individuals notice the consequences of their actions. They repeat actions that lead to desirable outcomes and avoid those that lead to undesirable results. Trial-and-error learning is the basis of operant conditioning.
One type of learning that is not determined by classical conditioning (learned associations) or operant conditioning (based on trial-and-error) occurs when we suddenly find the solution to a problem, as if the idea just popped into our head. This type of learning is known as insight, the sudden understanding of a solution to a problem. The German psychologist Wolfgang Köhler [1] carefully observed what happened when he presented chimpanzees with a problem that was not easy for them to solve, such as placing food in an area that was too high in the cage to be reached. He found that the chimps first engaged in trial-and-error attempts at solving the problem, but when these failed they seemed to stop and contemplate for a while. Then, after this period of contemplation, they would suddenly seem to know how to solve the problem, for instance by using a stick to knock the food down or by standing on a chair to reach it. Köhler argued that it was this flash of insight, not the prior trial-and-error attempts emphasized by conditioning theories, that allowed the animals to solve the problem.
Edward Tolman [1] was studying traditional trial-and-error learning when he realized that some of his research subjects (rats) actually knew more than their behavior initially indicated. In one of Tolman’s classic experiments, he observed the behavior of three groups of hungry rats that were learning to navigate mazes.
The first group always received a food reward at the end of the maze, so the payoff for learning the maze was real and immediate. The second group never received any food reward, so there was no incentive to learn to navigate the maze effectively. The third group was treated like the second group for the first 10 days, but on the 11th day, food was placed at the end of the maze for the first time.
As you might expect when considering the principles of conditioning, the rats in the first group quickly learned to negotiate the maze, while the rats of the second group seemed to wander aimlessly through it. The rats in the third group, however, although they wandered aimlessly for the first 10 days, quickly learned to navigate to the end of the maze as soon as they received food on day 11. By the next day, the rats in the third group had caught up in their learning to the rats that had been rewarded from the beginning. It was clear to Tolman that the rats that had been allowed to experience the maze, even without any reinforcement, had nevertheless learned something, and Tolman called this latent learning. Latent learning refers to learning that is not reinforced and is not demonstrated until there is motivation to do so. Tolman argued that the rats had formed a “cognitive map” of the maze but did not demonstrate this knowledge until they received reinforcement.
In the Learn by Doing exercise below, go through Tolman’s experiment with his three groups of rats. Keep in mind that Tolman, as a good scientist, was testing an idea that was controversial at the time: the idea that we can learn something without our behavior immediately revealing that we have learned it. It is the delay between the learning and the revealing behavior that is the basis for the name: latent (or “hidden”) learning.
Your task here is to predict what is going to happen on Trial 12 for the “no food until Trial 11” group.
Option A: Notice that this result is the same as the “no food on any trial” group. So, if you choose option A, you think that they will not act differently now than they acted on the first 11 trials and they will continue to make a lot of wrong turns.
Option B: This option suggests that they are now motivated to learn the path to the food, but that they will do so in small steps, just as we have seen for all three groups up to this point. Option B says that they are moving in the direction of the “food on every trial” group, but that it will take some extra learning to get there.
Option C: This option says that they already know the path to the food and, now that they are motivated to get there, they will show that they already know just as much as the “food on every trial” group. Their performance on Trial 12 will be the same as the low-error performance of the “food on every trial” group.
Tolman’s studies of latent learning show that animals, and people, can learn during unrewarded experience, but this learning only shows itself when rewards or punishments provide motivation to use that knowledge. However, Tolman’s rats at least had the opportunity to wander through the maze, and the rats in the critical condition did this for 10 days before food provided them with motivation to find an efficient path through the maze. So we could still hold onto the theory that learning only takes place if you actually do something, even if it is unrewarded. The direct connection between learning and behavior—even if the behavior seems aimless—had not yet been disproved. But in the 1960s, Albert Bandura conducted a series of experiments showing that learning can and does occur even when the learner is merely a passive spectator. Learning can occur when someone else is doing all the behaving and receiving all the rewards and punishments.
One of the best known and most influential experiments in the history of psychology involved some adults, some children, and a big inflatable doll called a Bobo doll. Bandura and his colleagues [1] allowed children to watch an adult—a man or a woman in different conditions of the study—“playing” with a Bobo doll, an inflatable balloon with a weight in the bottom that makes it pop back up when you knock it down. Bandura wanted to know if watching the adults would influence the way the children behaved.
The motivation for Bandura’s study was not that of solving some abstract scientific question, but a real debate that continues to this day. Many people felt that children who were “raised properly” would not be influenced very strongly by seeing someone—an unfamiliar adult, for instance—behave in a mean or hostile way. The children might be upset by what they saw, but surely they would not imitate the poor behavior. As you learned when you studied Tolman’s rats, many learning specialists believed that learning (i.e., behavior based on experience) occurs only for the individual who is actually doing the behaving. That theory supported the belief that the children would not learn by watching, because they would not be doing anything and would not receive rewards or punishments. Bandura—along with many parents and some other psychologists—suspected that learning might occur merely by watching the actions of others and the consequences of those actions.
Watch the following video as Dr. Bandura explains his study to you.
In the following activity, you will go through one of Bandura’s classic studies.
The Design of the Experiment
Bandura studied the impact of an adult’s behavior on the behavior of children who watched them. One of his independent variables was whether the adult was hostile or aggressive toward the Bobo doll: for some children the adult acted aggressively (treatment condition), for others the adult did not (control condition 1), and for yet other children there was no adult at all (control condition 2). He was also interested in whether the sex of the child and/or the sex of the adult model influenced what the child learned.
To give you a good view of how the experiment was organized or “designed,” the first thing you will do is put all the individuals involved—the adult models and the children—into the correct places in the study.
Instructions: The boxes show the labels for the three different modeling conditions: aggressive behavior, non-aggressive behavior (control condition 1), and no model (control condition 2). Organize the study by putting the adult models and children in the proper boxes. Be sure you distribute the children so that the same number of boys and girls are in all the conditions with a model. Put the rest of the children in the No Model boxes.
Phase 1 of the Experiment: The Observation Phase
The observation phase of the experiment is when the children see the behavior of the adults. Each child was shown into a room where an adult was already sitting near the Bobo doll. The child was positioned so he or she could easily see the adult.
Instructions: There are three people involved in the first phase of the experiment: an adult model, a child subject or participant, and an experimenter. Demonstrate your understanding of the first step of the experiment by moving each of the three characters (adult, experimenter, and child) to the correct location in the Experimentation Room depicted below.
Phase 2 of the Experiment: Frustration
Dr. Bandura thought that the children might be a bit more likely to show aggressive behavior if they were frustrated. The second phase of the experiment was designed to produce this frustration. After a child had watched the adult in phase 1, he or she was taken to another room, one that also contained a lot of attractive, fun toys, and was told that it was fine to play with the toys. As soon as the child started to enjoy playing with the toys, the experimenter said something.
Phase 3 of the Experiment: The Testing Phase
After the child was told to stop playing with “the very best toys,” the experimenter said that he or she could play with any of the toys in the next room. Then the child was taken to a third room. This room contained a variety of toys. Many of the toys were engaging and interactive, but not the type that encouraged aggressive play. Critically, the Bobo doll and the hammer that the model had used in the first phase were now in this new play room. The goal of this phase in the experiment was to see how the child would react without a model around.
Instructions: The figure below represents the third room. Three individuals from the study are indicated by the boxes below the diagram. Drag each of them to the proper locations to indicate your understanding of the experimental procedure. One of the three individuals does not appear in phase 3 of the study, so put this individual in the black box that says, “Not in this phase.”
The child was allowed to play freely for 20 minutes. Note that an adult did stay in the room so the child would not feel abandoned or frightened. However, this adult worked inconspicuously in a corner and interacted with the child as little as possible.
During the 20 minutes that the child played alone in the third room, the experimenters observed his or her behavior from behind a one-way mirror. Using a complex system that we won’t go into here, the experimenters counted the number of various types of behaviors that the child showed during this period. These behaviors included ones directed at the Bobo doll, as well as those involving any of the other toys. They were particularly interested in the number of behaviors the child showed that clearly imitated the actions of the adult that the child had observed earlier, in phase 1.
Below are the results for the number of imitative physically aggressive acts the children showed on average toward the Bobo doll. These acts included hitting and punching the Bobo doll. On the left, you see the two modeling conditions: aggression by the model in phase 1 or no aggression by the model in phase 1. Note: Children in the no-model conditions showed very few physically aggressive acts and their results do not change the interpretation, so we will keep the results simple by leaving them out of the table.
The story is slightly, though not completely, different when we look at imitative verbal aggression rather than physical aggression. The table below shows the number of verbally aggressive statements by the boys and girls under different conditions in the experiment. Verbally aggressive statements were ones like the models had made: for example, “Sock him” and “Kick him down!”
Note: Just as was true for the physically aggressive acts, children in the no-model conditions also showed very few verbally aggressive acts, and their results do not change the interpretation, so we will keep the results simple by leaving them out of the table.
Images in this activity courtesy of Gioppino, klaasvangend, Machovka, Gerald_G, Dug, Machovka (Public Domain); steve greer, Sacheverelle, and Horia Varlan (CC-BY-2.0).
Bandura and his colleagues did other studies in which they had children observe adults on television performing the acts rather than watching them in person. The effects of aggressive modeling were weaker when the adult was not physically present, but the same general pattern of results was found with television models. In yet another variation, Bandura had the children watch cartoon models rather than real adults. These effects were weaker still than those for the television adult models, but the pattern held and the aggressive modeling effect remained statistically significant. Children even imitated the aggressive behavior of cartoon characters.
Observational learning is useful for animals and for people because it allows us to learn without having to actually engage in what might be a risky behavior. Monkeys that see other monkeys respond with fear to the sight of a snake learn to fear the snake themselves, even if they have been raised in a laboratory and have never actually seen a snake. [2] As Bandura put it, “the prospects for [human] survival would be slim indeed if one could learn only by suffering the consequences of trial and error. For this reason, one does not teach children to swim, adolescents to drive automobiles, and novice medical students to perform surgery by having them discover the appropriate behavior through the consequences of their successes and failures. The more costly and hazardous the possible mistakes, the heavier is the reliance on observational learning from competent learners.” [3]
Although modeling is normally adaptive, it can be problematic for children who grow up in violent families. These children are not only the victims of aggression, but they also see it happening to their parents and siblings. Because children learn how to be parents in large part by modeling the actions of their own parents, it is no surprise that there is a strong correlation between family violence in childhood and violence as an adult. Children who witness their parents being violent or who are themselves abused are more likely as adults to inflict abuse on intimate partners or their children, and to be victims of intimate violence. [4] In turn, their children are more likely to interact violently with each other and to aggress against their parents. [5]
The average American child watches more than 4 hours of television every day, and two out of three programs they watch contain aggression. It has been estimated that by the age of 12, the average American child has seen more than 8,000 murders and 100,000 acts of violence. At the same time, children are also exposed to violence in movies, video games, and virtual reality games, as well as in music videos that include violent lyrics and imagery. [1] [2] [3]
It is clear that watching television violence can increase aggression, but what about violent video games? These games are more popular than ever and also more graphically violent. Youths spend countless hours playing these games, many of which involve engaging in extremely violent behaviors. The games often require the player to take the role of a violent person, to identify with the character, to select victims, and of course to kill the victims. These behaviors are reinforced by winning points and moving on to higher levels, and are repeated over and over.
In one experiment, Bushman and Anderson [4] assessed the effects of viewing violent video games on aggressive thoughts and behavior. Participants were randomly assigned to play either a violent or a nonviolent video game for 20 minutes. Each participant played one of four violent video games or one of four nonviolent video games.
Participants then read a story, such as this one about Todd, and were asked to list 20 thoughts, feelings, and actions about how they would respond if they were Todd:
Todd was on his way home from work one evening when he had to brake quickly for a yellow light. The person in the car behind him must have thought Todd was going to run the light because he crashed into the back of Todd’s car, causing a lot of damage to both vehicles. Fortunately, there were no injuries. Todd got out of his car and surveyed the damage. He then walked over to the other car.
Now it is your task to predict what will happen.
As you read in the text, Bushman and Anderson asked the participants what they would do, what they would be thinking, and how they would feel if they were in Todd’s position. The graph above is blank, so your task is to put the correct results into it. Note that the green bars show the results for people who had just played a nonviolent video game and the red bars are for people who just played a violent video game. The Y-axis shows how aggressive the response is, so a taller bar means MORE aggressive and a shorter bar means less aggressive.
It might not surprise you to hear that these exposures to violence have an effect on aggressive behavior. The evidence is impressive and clear: The more media violence people, including children, view, the more aggressive they are likely to be. [5] [6] The relationship between viewing television violence and aggressive behavior is about as strong as the relation between smoking and cancer or between studying and academic grades. People who watch more violence become more aggressive than those who watch less violence.
As you have just read, playing violent video games also leads to aggressive responses. A recent meta-analysis by Anderson and Bushman [7] reviewed 35 research studies that had tested the effects of playing violent video games on aggression. The studies included both experimental and correlational studies, with both male and female participants in both laboratory and field settings. They found that exposure to violent video games is significantly linked to increases in aggressive thoughts, aggressive feelings, psychological arousal (including blood pressure and heart rate), and aggressive behavior. Furthermore, playing more violent video games was found to relate to less altruistic behavior.
For some people, memory is truly amazing. Consider, for instance, the case of Kim Peek, who was the inspiration for the Academy Award–winning film Rain Man.
There are others who are capable of amazing feats of memory. The Russian psychologist A. R. Luria [1] has described the abilities of a man known as “S,” who seems to have unlimited memory. S remembers strings of hundreds of random letters for years at a time, and seems in fact to never forget anything.

As you watch the following video, you’ll notice that at the beginning, Kim is referred to as an idiot savant. Idiot is an old term that was used to describe profound mental retardation. Now we just say “profound mental retardation.” Kim and people like him are called savants.
The subject of this unit is memory. The term memory refers to our capacity to acquire, store, and retrieve the information and habits that guide our behavior. This capacity is largely modulated by associative learning mechanisms like those discussed in the unit on learning. Our memories allow us to do relatively simple things, such as remembering where we parked our car or the name of the current president of the United States. We can also form complex memories, such as how to ride a bicycle or write a computer program. Moreover, our memories define us as individuals—memories are the records of our experiences, our relationships, our successes, and our failures. Perhaps the coolest aspect of memory is that it provides us with the means to use mental time travel to access a lifetime of experiences and learning.
This unit is about human memory, but in our culture we commonly hear the term memory used in conjunction with descriptions of computers. Computer memory and human memory have some distinct differences and some similarities. Let's take a look at some of these.
Differences between Brains and Computers
Although we depend on computers in many aspects of our lives, and although computers eclipse human processing capacity in terms of speed and volume, human memory is far superior to a computer’s for at least some tasks [2]. Once we learn a face, we can recognize that face many years later—a task which computers have yet to master. Impressively, our memories can be acquired rapidly and retained indefinitely. Mitchell [3] contacted participants 17 years after they had been briefly exposed to some line drawings in a lab and found that they could still identify the images significantly better than participants who had never seen them.
In this unit we learn how psychologists use behavioral responses (such as memory tests and reaction time) to draw inferences about what and how people remember. And we will see that although we have very good memory for some things, our memories are far from perfect. [4] The errors we make are due to the fact that our memories are not simply recording devices that input, store, and retrieve the world around us. Rather, we actively process and interpret information as we remember and recollect it, and these cognitive processes influence what we remember and how we remember it. Because memories are constructed, not recorded, when we remember events we don’t reproduce exact replicas of those events. [5]
We also learn that our prior knowledge can influence our memory. People who read the words dream, sheets, rest, snore, blanket, tired, and bed and are then asked to remember the words often think they saw the word sleep even though that word was not in the list. [6] In other circumstances, we are influenced by the ease with which we can retrieve information from memory or by the information that we are exposed to after we first learn something.
Basic memory research has revealed profound inaccuracies in our memories and judgments. Understanding these potential errors is the first step in learning to account for them in our everyday lives.
Types of Memory

| Explicit memory (conscious) | Implicit memory (not conscious) |
|---|---|
| Episodic memory: firsthand experiences and events | Procedural memory: knowledge of how to do things |
| Semantic memory: facts and concepts about the world | Classical conditioning effects: learned associations between stimuli |
| | Priming: changes in behavior from frequent or recent experience |
Explicit memory refers to knowledge or experiences that can be consciously and intentionally remembered. For instance, recalling when you have a dentist appointment or what you wore to senior prom relies on explicit memory. As you can see in the figure below, there are two types of explicit memory: episodic and semantic. Episodic memory refers to the firsthand experiences, or episodes, that we have on a daily basis (e.g., recollections of our high school graduation day or of the fantastic show we saw in New York last summer). Semantic memory refers to our knowledge of facts and concepts about the world (e.g., that the absolute value of −90 is greater than the absolute value of 9 and that one definition of the word affect is “the experience of feeling or emotion”).
Explicit memory is assessed using measures that require an individual to consciously retrieve information. A recall test is a measure of explicit memory that involves retrieving information that has been previously learned, and it requires us to use a search strategy to perform that retrieval. We rely on our recall memory when we take an essay test, because the test requires us to generate previously remembered information. A multiple-choice test is an example of a recognition memory test, a measure of memory that involves determining whether information has been seen or learned before.
Read about and view the following historical event, and then respond to the questions below in terms of whether the information/memory in question is semantic or episodic.
On April 29, 2011, England’s Prince William married his longtime girlfriend, Kate Middleton, in Westminster Abbey. The Dean of Westminster, the Very Reverend Dr. John Hall, expressed his delight at the couple’s announcement of their choice of Westminster as the place to hold the ceremony. In attendance were various political and religious leaders as well as a number of celebrities such as Elton John and David Beckham, although most of the guests were friends and family of the couple.
Watch these highlights of the ceremony.
Your own experiences taking tests will probably lead you to agree with the scientific research finding that recall is more difficult than recognition. Recall, such as that required on essay tests, involves two steps: first generating an answer and then determining whether it seems to be the correct one. Recognition, as on multiple-choice tests, only involves determining which item from a list seems most correct. [1] Although they involve different processes, recall and recognition memory measures tend to be correlated. Students who do better on a multiple-choice exam will also, by and large, do better on an essay exam. [2]
A third way of measuring memory is known as relearning. [3] Measures of relearning (or savings) assess how much more quickly information is processed or learned when it is studied again after it has already been learned but then forgotten. If you have taken some French courses in the past, for instance, you might have forgotten most of the vocabulary you learned. But if you were to work on your French again, you’d learn the vocabulary much faster the second time around. Relearning can be a more sensitive measure of memory than either recall or recognition because it allows us to assess memory in terms of “how much” or “how fast” rather than simply “correct” versus “incorrect” responses. Relearning also allows us to measure memory for procedures like driving a car or playing a piano piece, as well as memory for facts and figures.
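Relearning is usually summarized as a “savings” score: the percentage of the original learning effort that the second pass spares. Here is a minimal sketch of that calculation in Python, using made-up numbers:

```python
def savings_percent(original_trials, relearning_trials):
    """Classic savings score: the share of the original study effort
    that relearning spares, expressed as a percentage."""
    return 100.0 * (original_trials - relearning_trials) / original_trials

# Hypothetical numbers: a vocabulary list mastered in 20 study trials
# originally is relearned to the same standard in only 8 trials.
print(savings_percent(20, 8))  # 60.0, i.e., a 60% savings
```

Because the score is graded rather than all-or-none, it can detect memory that a recall or recognition test would score simply as “forgotten.”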
While explicit memory consists of the things that we can consciously report that we know, implicit memory refers to knowledge that we cannot consciously access. However, implicit memory is nevertheless exceedingly important to us because it has a direct effect on our behavior. Implicit memory refers to the influence of experience on behavior, even if the individual is not aware of those influences. As the Types of Memory table above shows, there are three general types of implicit memory: procedural memory, classical conditioning effects, and priming.
Procedural memory refers to our often unexplainable knowledge of how to do things. When we walk from one place to another, speak to another person in English, dial a cell phone, or play a video game, we are using procedural memory. Procedural memory allows us to perform complex tasks, even though we may not be able to explain to others how we do them. It is difficult to tell someone how to ride a bicycle; a person has to learn by doing it. The idea of implicit memory helps explain how infants are able to learn. The ability to crawl, walk, and talk are procedures, and these skills are easily and efficiently developed while we are children despite the fact that as adults we have no conscious memory of having learned them.
A second type of implicit memory is classical conditioning effects, in which we learn, often without effort or awareness, to associate neutral stimuli (such as a sound or a light) with another stimulus (such as food), which creates a naturally occurring response, such as enjoyment or salivation. The memory for the association is demonstrated when the conditioned stimulus (the sound) begins to create the same response as the unconditioned stimulus (the food) did before the learning.
The final type of implicit memory is known as priming, or changes in behavior as a result of experiences that have happened frequently or recently. Priming refers both to the activation of knowledge (e.g., we can prime the concept of “kindness” by presenting people with words related to kindness) and to the influence of that activation on behavior (people who are primed with the concept of kindness may act more kindly).
One measure of the influence of priming on implicit memory is the word fragment test, in which a person is asked to fill in missing letters to make words. You can try this yourself: First, try to complete the following word fragments, but work on each one for only three or four seconds. Do any words pop into mind quickly?
_ i b _ a _ y
_ h _ s _ _ i _ n
_ o _ k
_ h _ i s _
Now read the following sentence carefully:
“He got his materials from the shelves, checked them out, and then left the building.”
Then try again to make words out of the word fragments.
You might find that it is easier to complete fragments 1 and 3 as “library” and “book,” respectively, after you read the sentence than it was before you read it. However, reading the sentence didn’t really help you to complete fragments 2 and 4 as “physician” and “chaise.” This difference in implicit memory probably occurred because as you read the sentence, the concept of library (and perhaps book) was primed, even though they were never mentioned explicitly. Once a concept is primed it influences our behaviors, for instance, on word fragment tests.
Our everyday behaviors are influenced by priming in a wide variety of situations. Seeing an advertisement for cigarettes may make us start smoking, seeing the flag of our home country may arouse our patriotism, and seeing a student from a rival school may arouse our competitive spirit. And these influences on our behaviors may occur without our being aware of them.
One of the most important characteristics of implicit memories is that they are frequently formed and used automatically, without much effort or awareness on our part. In one demonstration of the automaticity and influence of priming effects, John Bargh and his colleagues [4] conducted a study in which they showed college students lists of five scrambled words, each of which they were to make into a sentence. Furthermore, for half of the research participants, the words were related to stereotypes of the elderly. These participants saw words such as the following:
in Florida retired live people
bingo man the forgetful plays
The other half of the research participants also made sentences, but from words that had nothing to do with elderly stereotypes. The purpose of this task was to prime stereotypes of elderly people in memory for some of the participants but not for others.
The experimenters then assessed whether the priming of elderly stereotypes would have any effect on the students’ behavior—and indeed it did. When the research participant had gathered all of his or her belongings, thinking that the experiment was over, the experimenter thanked him or her for participating and gave directions to the closest elevator. Then, without the participants knowing it, the experimenters recorded the amount of time that the participant spent walking from the doorway of the experimental room toward the elevator. As you can see in the figure below, participants who had made sentences using words related to elderly stereotypes took on the behaviors of the elderly—they walked significantly more slowly as they left the experimental room.
To determine if these priming effects occurred out of the awareness of the participants, Bargh and his colleagues asked still another group of students to complete the priming task and then to indicate whether they thought the words they had used to make the sentences had any relationship to each other, or could possibly have influenced their behavior in any way. These students had no awareness of the possibility that the words might have been related to the elderly or could have influenced their behavior.
Another way of understanding memory is to think about it in terms of stages that describe the length of time information remains available to us. According to this approach, as shown in the following figure, information begins in sensory memory, moves to short-term memory, and eventually moves to long-term memory. But not all information makes it through all three stages; most of it is forgotten. Whether the information moves from shorter-duration memory into longer-duration memory or whether it is lost from memory entirely depends on how the information is attended to and processed.
Sensory memory refers to the brief storage of sensory information. Sensory memory is a memory buffer that lasts only very briefly and then, unless it is attended to and passed on for more processing, is forgotten. The purpose of sensory memory is to give the brain some time to process the incoming sensations and to allow us to see the world as an unbroken stream of events rather than as individual pieces.
Visual sensory memory is known as iconic memory. Iconic memory was first studied by the psychologist George Sperling. [1] In his research, Sperling showed participants a display of letters in rows, similar to the figure shown in the following activity. However, the display lasted only about 50 milliseconds (1/20 of a second). Then, Sperling gave his participants a recall test in which they were asked to name all the letters that they could remember. On average, the participants could remember only about one-quarter of the letters that they had seen.
Sperling [1] showed his participants displays such as this one for only 1/20 of a second. He found that when he cued the participants to report one of the three rows of letters, they could do it, even if the cue was given shortly after the display had been removed. The research demonstrated the existence of iconic memory.
Instructions: You will now be a participant in a similar experiment on the time course of iconic memory. You will see a brief grid of letters and, after the letters are hidden, you will see a green diamond signaling which row of letters to type. Press Start to begin.
Sperling reasoned that the participants had seen all the letters but could remember them only very briefly, making it impossible for them to report them all. To test this idea, in his next experiment he first showed the same letters, but then after the display had been removed, he signaled to the participants to report the letters from either the first, second, or third row. In this condition, the participants now reported almost all the letters in that row. This finding confirmed Sperling’s hunch: Participants had access to all of the letters in their iconic memories, and if the delay before the cue was short enough, they were able to report the part of the display he asked them to. The “short enough” delay is the length of iconic memory, which turns out to be about 250 milliseconds (¼ of a second).
Auditory sensory memory is known as echoic memory. In contrast to iconic memories, which decay very rapidly, echoic memories can last as long as 4 seconds. [2] This is convenient as it allows you—among other things—to remember the words that you said at the beginning of a long sentence when you get to the end of it, and to take notes on your psychology professor’s most recent statement even after he or she has finished saying it.
Instructions: You will now hear a series of sound clips containing white noise. It is hard to use words to describe different types of white noise, which makes it hard to use short-term memory to help find patterns within white noise. Some of the clips contain repeating patterns and others do not. After listening to each clip, decide whether it contained a repeating pattern or not.
In some people iconic memory seems to last longer, a phenomenon known as eidetic imagery (or “photographic memory”) in which people can report details of an image over long periods of time. These people, who often suffer from psychological disorders such as autism, claim that they can “see” an image long after it has been presented, and can often report accurately on that image. There is also some evidence for eidetic memories in hearing; some people report that their echoic memories persist for unusually long periods of time. The composer Wolfgang Amadeus Mozart may have possessed eidetic memory for music, because even when he was very young and had not yet had a great deal of musical training, he could listen to long compositions and then play them back almost perfectly. [3]
Most of the information that gets into sensory memory is forgotten, but information that we turn our attention to, with the goal of remembering it, may pass into short-term memory. Short-term memory (STM) is the place where small amounts of information can be temporarily kept for more than a few seconds but usually for less than one minute. [4] The cognitive psychologist George Miller [5] referred to “seven plus or minus two” pieces of information as the “magic number” in short-term memory. Information in short-term memory is not stored permanently but rather becomes available for us to process, and the processes that we use to make sense of, modify, interpret, and store information in STM are known as working memory.
Although it is called “memory,” working memory is not a store of memory like STM but rather a set of memory procedures or operations. Imagine, for instance, that you are asked to participate in a task such as this one, which is a measure of working memory. [6] Each of a series of questions appears individually on a computer screen and then disappears after you answer it. Each question asks you to verify a simple math problem and then remember a letter; an item might read, for example, “Is 10 × 2 − 5 = 15? (Answer YES or NO.) Now remember the letter S.”
To successfully accomplish the task, you have to answer each of the math problems correctly and at the same time remember the letter that follows the task. Then, after the six questions, you must list the letters that appeared in each of the trials in the correct order (in this case S, R, P, T, U, Q).
To accomplish this difficult task, you need to use a variety of skills. You clearly need to use STM, as you must keep the letters in storage until you are asked to list them. But you also need a way to make the best use of your available attention and processing. For instance, you might decide to use a strategy of “repeat the letters twice, then quickly solve the next problem, and then repeat the letters, including the new one, twice again.” Keeping this strategy (or others like it) going is the role of working memory’s central executive—the part of working memory that directs attention and processing. The central executive will make use of whatever strategies seem to be best for the given task. For instance, the central executive will direct the rehearsal process and at the same time direct the visual cortex to form an image of the list of letters in memory. You can see that although STM is involved, the processes that we use to operate on the material in memory are also critical.
STM is limited in both the length and the amount of information it can hold. Peterson and Peterson [7] found that when people were asked to remember a list of three-letter strings and then were immediately asked to perform a distracting task (counting backward by threes), the material was quickly forgotten, as shown in the figure below; by 18 seconds it was virtually gone.
One way to prevent the decay of information from STM is to use working memory to rehearse it. Maintenance rehearsal is the process of repeating information mentally or out loud with the goal of keeping it in memory. We engage in maintenance rehearsal to keep something that we want to remember (e.g., a person’s name, e-mail address, or phone number) in mind long enough to write it down, use it, or potentially transfer it to long-term memory.
If we continue to rehearse information, it will stay in STM until we stop rehearsing it, but there is also a capacity limit to STM. Try reading each of the following rows of numbers, one row at a time, at a rate of about one number each second. Then when you have finished each row, close your eyes and write down as many of the numbers as you can remember.
019
3586
10295
861059
1029384
75674834
657874104
6550423897
If you are like the average person, you will have found that on this test of working memory, known as a digit span test, you did pretty well up to about the fourth line, and then you started having trouble. I bet you missed some of the numbers in the last three rows, and did pretty poorly on the last one.
The digit span of most adults is between five and nine digits, with an average of about seven, as noted by Miller [5] . But if we can only hold a maximum of about nine digits in short-term memory, then how can we remember larger amounts of information than this? For instance, how can we ever remember a 10-digit phone number long enough to dial it?
One way we are able to expand our ability to remember things in STM is by using a memory technique called chunking. Chunking is the process of organizing information into smaller groupings (chunks), thereby increasing the number of items that can be held in STM. For instance, try to remember this string of 12 letters:
XOFCBANNCVTM
You probably won’t do that well because the number of letters is more than the magic number of seven.
Now try again with this one:
MTVCNNABCFOX
Would it help you if I pointed out that the material in this string could be chunked into four sets of three letters each? I think it would, because then rather than remembering 12 letters, you would only have to remember the names of four television stations. In this case, chunking changes the number of items you have to remember from 12 to only four.
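The arithmetic of chunking is simple enough to state directly. The short sketch below (illustrative Python; the function name is our own) regroups the same 12 letters into four 3-letter chunks:

```python
def chunk(letters, size=3):
    """Regroup a string into fixed-size chunks so that memory has to
    hold only the chunks, not the individual letters."""
    return [letters[i:i + size] for i in range(0, len(letters), size)]

print(chunk("MTVCNNABCFOX"))  # ['MTV', 'CNN', 'ABC', 'FOX'] -- 4 items, not 12
print(chunk("XOFCBANNCVTM"))  # ['XOF', 'CBA', 'NNC', 'VTM'] -- groups, but meaningless
```

The second line makes the key point: regrouping only lightens the load when the groups map onto something you already know, a point the chess study below makes vividly.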
Experts rely on chunking to help them process complex information. Herbert Simon and William Chase [8] showed chess masters and chess novices various positions of pieces on a chessboard for a few seconds each. The experts did a lot better than the novices in remembering the positions because they were able to see the “big picture.” They didn’t have to remember the position of each of the pieces individually, but chunked the pieces into several larger layouts. But when the researchers showed both groups random chess positions—positions that were unlikely to occur in real games—both groups did equally poorly, because in this situation the experts lost their ability to organize the layouts, as shown in the following figure. The same occurs for basketball. Basketball players recall actual basketball positions much better than do nonplayers, but only when the positions make sense in terms of what is happening on the court, or what is likely to happen in the near future, and thus can be chunked into bigger units. [9]
If information makes it past STM it may enter long-term memory (LTM), memory storage that can hold information for days, months, and years. The capacity of long-term memory is large, and there is no known limit to what we can remember. [10] Although we may forget at least some information after we learn it, other things will stay with us forever. In the next section we will discuss the principles of long-term memory.
Although it is useful to hold information in sensory and short-term memory, we also rely on our long-term memory (LTM). Long-term memory is relatively permanent storage. Explicit memories stored there will stay with us throughout our lifetime (barring a brain disease or injury) as long as we continue to use them. Most of us who teach psychology originally learned the material many years ago. But because we continue to use the information, it remains readily available to us. Once we stop using the material we have learned, it will gradually fade. Implicit memories are less subject to fading with disuse, but over time, they will fade too. If you learn to ski as a child, stop in your teens, and then take it up again in your 40s, you will still remember some of what you learned, but you’ll have to practice again to do it well.
We use long-term memory to remember the name of the new boy in the class, the name of the movie we saw last week, and the material for our upcoming psychology test. Psychological research has produced a great deal of knowledge about long-term memory, and this research can be useful as you try to learn and remember new material. In this module we consider this question in terms of the types of processing we do on the information we want to remember. To be successful, the information we want to remember must be encoded and stored and then retrieved. The rest of this module discusses these three concepts.
Encoding is the process by which we place our experiences into memory. Unless information is encoded, it cannot be remembered. I’m sure you’ve been to a party where you were introduced to someone, and then—maybe only seconds later—you realized you did not remember the person’s name. It's not surprising that you forgot the name, because you probably were distracted and never encoded the name to begin with.
Not everything we experience can or should be encoded. We tend to encode things that we need to remember and not bother to encode things that are irrelevant. Look at the figure below, which shows different images of U.S. pennies. Can you tell which one is the real one? Nickerson and Adams [1] found that very few of the U.S. participants they tested could identify the right one. We see pennies a lot, but we don’t bother to encode their features.
One way to improve our memory is to use better encoding strategies. Some ways of studying are more effective than others. Research has found that we are better able to remember information if we encode it in a meaningful way. When we engage in elaborative encoding, we process new information in ways that make it more relevant or meaningful. [2] [3]
Imagine that you are trying to remember the characteristics of the different schools of psychology we discussed in the first unit. Rather than simply trying to remember the schools and their characteristics, you might try to relate the information to things you already know. For instance, you might try to remember the fundamentals of the cognitive school of psychology by linking the characteristics to the computer model. The cognitive school focuses on how information is input, processed, and retrieved, and you might think about how computers do pretty much the same thing. You might also try to organize the information into meaningful units. For instance, you might link the cognitive school to structuralism because both are concerned with mental processes. You also might try to use visual cues to help you remember the information. You might look at the image of Freud and imagine what he looked like as a child. That image might help you remember that childhood experiences were an important part of Freudian theory. Each person has his or her unique way of elaborating on information; the important thing is to try to develop unique and meaningful associations among the materials. These suggestions are very good study hints.
We all have knowledge bases we can build upon. Some information connects in meaningful ways to previous knowledge more easily than does other information—for example, a Spanish speaker can connect knowledge of Spanish grammar to help remember the rules of French grammar. Elaborative encoding makes use of these connections.
In an important study showing the effectiveness of elaborative encoding, Rogers, Kuiper, and Kirker [4] studied how people recalled information that they had learned under different processing conditions. All the participants were presented with the same list of 40 adjectives to learn, but through the use of random assignment, the participants were given one of four different sets of instructions about how to process the adjectives.
Participants assigned to the structural task condition were asked to judge whether the word was printed in uppercase or lowercase letters. Participants in the phonemic task condition were asked whether or not the word rhymed with another given word. In the semantic task condition, the participants were asked if the word was a synonym of another word. And in the self-reference task condition, participants were asked to indicate whether or not the given adjective was or was not true of themselves. After completing the specified task, each participant was asked to recall as many adjectives as he or she could remember.
Rogers and his colleagues hypothesized that different types of processing would have different effects on memory. As you can see in the following figure, participants in the self-reference task condition recalled significantly more adjectives than did participants in any other condition. This finding, known as the self-reference effect, is powerful evidence that the self-concept helps us organize and remember information. The next time you are studying for an exam, you might try relating the material to your own experiences. The self-reference effect suggests that doing so will help you better remember the information. [5]
Hermann Ebbinghaus (1850–1909) was a pioneer of the study of memory. In this section we consider three of his most important findings, each of which can help you improve your memory. In his research, in which he was the only research participant, Ebbinghaus practiced memorizing lists of nonsense syllables, such as the following:
DIF, LAJ, LEQ, MUV, WYC, DAL, SEN, KEP, NUD
You can imagine that because the material he was trying to learn was not at all meaningful, it was not easy to do. Ebbinghaus plotted how many of the syllables he could remember against the time that had elapsed since he studied them. He discovered an important principle of memory: Memory decays rapidly at first, but the amount of decay levels off with time (see the Ebbinghaus Forgetting Curve in the following figure). Although Ebbinghaus looked at forgetting after days had elapsed, the same effect occurs on longer and shorter time scales. Bahrick [1] found that students who took a Spanish language course forgot about one half of the vocabulary they had learned within 3 years, but after that time, their memory remained pretty much constant. Forgetting also drops off quickly on a shorter time frame, which suggests that you should try to review the material you have already studied right before you take an exam; that way, you will be more likely to remember the material during the exam.
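The shape Ebbinghaus found (steep loss at first that then levels off) is often illustrated with an exponential retention function. The sketch below is a common textbook-style model, not Ebbinghaus’s actual data, and the “strength” parameter is hypothetical:

```python
import math

def retention(days_elapsed, strength=5.0):
    """Illustrative exponential model of the forgetting curve:
    retention falls steeply at first, then levels off."""
    return math.exp(-days_elapsed / strength)

for t in (0, 1, 2, 7, 30):
    print(f"day {t:2d}: about {retention(t):.0%} retained")
```

Notice that most of the loss in this toy model happens in the first few days, which is exactly why reviewing shortly before an exam pays off.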
Ebbinghaus also discovered another important principle of learning, known as the spacing effect. The spacing effect refers to the fact that learning is better when the same amount of study is spread out over periods of time than it is when it occurs closer together or at the same time. This means that even if you have only a limited amount of time to study, you’ll learn more if you study continually throughout the semester (a little bit every day is best) than if you wait to cram at the last minute before your exam. Another good strategy is to study and then wait as long as you can before you forget the material. Then review the information and again wait as long as you can before you forget it. (This probably will be a longer period of time than the first time.) Repeat and repeat again. The spacing effect is usually considered in terms of the difference between distributed practice (practice that is spread out over time) and massed practice (practice that comes in one block), with the former approach producing better memory.
Ebbinghaus also considered the role of overlearning—that is, continuing to practice and study even when we think that we have mastered the material. Ebbinghaus and other researchers have found that overlearning helps encoding. [2] Students frequently think that they have already mastered the material but then discover when they get to the exam that they have not. The point is clear: Try to keep studying and reviewing, even if you think you already know all the material.
Instructions: Answer the first two questions on the basis of what you know so far about the forgetting curve. Answer the third and fourth questions on the basis of what you know about the spacing effect and overlearning.
Even when information has been adequately encoded and stored, it does not do us any good if we cannot retrieve it. Retrieval is the process of reactivating information that has been stored in memory.
We’ve all experienced retrieval failure in the form of the frustrating tip-of-the-tongue phenomenon, in which we are certain that we know something we are trying to recall but cannot quite come up with it. You can try this out on a friend: read your friend the names of the 10 states listed below, and ask him or her to name the capital city of each state. Then, for the capital cities that your friend can’t name, provide just the first letter of the capital city. You’ll probably find that having the first letters of the cities helps with retrieval. The tip-of-the-tongue experience is a very good example of the inability to retrieve information that is actually stored in memory.
Try this demonstration of the tip-of-the-tongue phenomenon with a classmate. Follow the instructions from the paragraph above.
[Table: States and Capital Cities. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
You can get an idea of the difficulty posed by retrieval by simply reading each of the words in the activity below. After you have read all the words, you will be asked to recall them.
Instructions: On the next page you will have 2 minutes to memorize a list of words. After you read the list, you will be given time to enter all the words that you can recall. Press Start to begin.
We are more likely to be able to retrieve items from memory when conditions at retrieval are similar to the conditions under which we encoded them. Context-dependent learning refers to an increase in retrieval when the external situation in which information is learned matches the situation in which it is remembered. Godden and Baddeley [1] conducted a study to test this idea using scuba divers. They asked the divers to learn a list of words either when they were on land or when they were underwater. Then they tested the divers on their memory, either in the same or the opposite situation. As you can see in the following figure, the divers’ memory was better when they were tested in the same context in which they had learned the words than when they were tested in the other context.
You can see that context-dependent learning might also be important in improving your memory. For instance, you might want to try to study for an exam in a situation that is similar to the one in which you are going to take the exam.
Whereas context-dependent learning refers to a match in the external situation between learning and remembering, state-dependent learning refers to superior retrieval of memories when the individual is in the same physiological or psychological state as during encoding. Research has found, for instance, that animals that learn a maze while under the influence of one drug tend to remember their learning better when they are tested under the influence of the same drug than when they are tested without the drug. [2] And research with humans finds that bilinguals remember better when tested in the same language in which they learned the material. [3] Mood states may also produce state-dependent learning. People who learn information when they are in a bad (rather than a good) mood find it easier to recall these memories when they are tested while they are in a bad mood, and vice versa. It is easier to recall unpleasant memories than pleasant ones when we’re sad, and easier to recall pleasant memories than unpleasant ones when we’re happy. [4] [5]
Variations in the ability to retrieve information are also seen in the serial position curve. When we give people a list of words one at a time (e.g., on flashcards) and then ask them to recall them, the results look something like those in the figure below. People are able to retrieve more words that were presented to them at the beginning and the end of the list than words that were presented in the middle of the list. This pattern, known as the serial position curve, is caused by two retrieval phenomena: the primacy effect, a tendency to better remember stimuli that are presented early in a list, and the recency effect, the tendency to better remember stimuli that are presented later in a list.
There are a number of explanations for primacy and recency effects; one has to do with the effects of rehearsal on short-term and long-term memory. [6] Because we can keep the last words we learned in the presented list in short-term memory by rehearsing them before the memory test begins, they are relatively easily remembered. The recency effect therefore can be explained in terms of maintenance rehearsal in short-term memory. And the primacy effect may also be due to rehearsal—when we hear the first word in the list, we start to rehearse it, making it more likely that it will be moved from short-term to long-term memory. The same is true for the other words that come early in the list. But for the words in the middle of the list, this rehearsal becomes much harder, making them less likely to be moved to long-term memory.
In some cases, old and new memories interfere with each other. This interference can work either backward or forward. Retroactive interference occurs when learning something new impairs our ability to retrieve information that was learned earlier. For example, if you have learned to program in one computer language, and then you learn to program in another similar one, you may start to make mistakes in the first language that you never would have made before you learned the new one. In this case the new memories work backward (retroactively) to influence retrieval of memories that are already in place.
In contrast to retroactive interference, proactive interference works in a forward direction. Proactive interference occurs when earlier learning impairs our ability to encode information that we try to learn later. For example, if you learned French as a second language, this knowledge may make it more difficult, at least in some respects, to learn a third language (say Spanish), which involves similar but not identical vocabulary.
Memories stored in long-term memory are not isolated but rather are linked into categories—networks of associated memories that have features in common with each other. Forming categories and using them to guide behavior is a fundamental part of human nature. Associated concepts within a category are connected through spreading activation, which occurs when activating one element of a category activates other associated elements. For instance, because tools are associated in a category, reminding people of the word screwdriver will help them remember the word wrench. And when people learn lists of words that come from different categories (e.g., as in the retrieval exercise on the previous page), they do not recall the information haphazardly. If they remember the word wrench, they are more likely to remember the word screwdriver next than they are to remember the word dahlia, because the words are organized in memory by category and because screwdriver is activated by spreading activation from wrench. [1]
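One simple way to picture spreading activation is as a small network in which activating one word passes some activation to its neighbors. The sketch below is a toy illustration; the particular word links and the decay value of 0.5 are made up for the example:

```python
# A toy semantic network: the links below are illustrative, not data.
network = {
    "wrench": ["screwdriver", "hammer"],
    "screwdriver": ["wrench", "hammer"],
    "hammer": ["wrench", "screwdriver"],
    "dahlia": ["rose"],
    "rose": ["dahlia"],
}

def spread(source, decay=0.5):
    """Return activation levels after one step of spreading
    activation: the source is fully active, and each of its
    neighbors receives a decayed share."""
    activation = {word: 0.0 for word in network}
    activation[source] = 1.0
    for neighbor in network[source]:
        activation[neighbor] += decay
    return activation

print(spread("wrench"))  # screwdriver and hammer are primed; dahlia is not
```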
Some categories have defining features that must be true of all members of the category. For instance, all members of the category “triangles” have three sides, and all members of the category “birds” lay eggs. But most categories are not so well defined; the members share some common features, but it is impossible to state definitively which things are or are not members of the category. For instance, there is no clear definition of the category “tool.” Some examples of the category, such as a hammer and a wrench, are clearly and easily identified as category members, whereas other members are not so obvious. Is an ironing board a tool? What about a car?
Members of categories (even those with defining features) can be compared to the category prototype, which is the member of the category that is most average or typical of the category. Some category members are more prototypical of, or similar to, the category than others. For instance, some category members (robins and sparrows) are highly prototypical of the category “birds,” whereas other category members (penguins and ostriches) are less prototypical. We retrieve information that is prototypical of a category faster than we retrieve information that is less prototypical. [2]
Mental categories are sometimes referred to as schemas—patterns of knowledge in long-term memory that help us organize information. We have schemas about objects (that a triangle has three sides and may take on different angles), about people (that Sam is friendly, likes to golf, and always wears sandals), about events (the particular steps involved in ordering a meal at a restaurant), and about social groups (we call these group schemas stereotypes).
Schemas are important in part because they help us remember new information by providing an organizational structure for it. Read the following paragraph [3] and then try to write down everything you can remember.
The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications can easily arise. A mistake can be expensive as well. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one never can tell. After the procedure is completed, one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will then have to be repeated. However, that is part of life.
It turns out that people’s memory for this information is quite poor unless they are told before they read it that the information describes doing laundry, in which case their memory for the material is much better. This demonstration of the role of schemas in memory shows how our existing knowledge can help us organize new information and how this organization can improve encoding, storage, and retrieval.
From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
Just as information is stored on digital media such as DVDs and flash drives, the information in LTM must be stored in the brain. How do different encoding and retrieval strategies affect our brains at the neural level? We saw from previous sections on elaborative encoding, categories and schemas that we give LTM a unique internal organization. Does that mean that there must be a “memory center” of the brain where all memories are organized for quick retrieval? Additionally, how do diseases such as Alzheimer’s disease and conditions such as amnesia cause us to forget information we have already stored in the brain? To answer these questions, we must think of the brain at two different levels: at the level of neurons and at the level of brain areas.
The ability to maintain information in LTM involves a gradual strengthening of the connections among the neurons in the brain. When pathways in these neural networks are frequently and repeatedly fired, the synapses become more efficient in communicating with each other, and these changes create memory. This process, known as long-term potentiation (LTP), refers to the strengthening of the synaptic connections between neurons as a result of frequent stimulation. [1] Drugs that block LTP reduce learning, whereas drugs that enhance LTP increase learning. [2] Because the new patterns of activation in the synapses take time to develop, LTP happens gradually. The period of time in which LTP occurs and in which memories are stored is known as the period of consolidation. Consolidation of memories formed during the day often happens during sleep, and some theorize that this is one important function of sleep.
Long-term potentiation occurs as a result of changes in the synapses, which suggests that chemicals, particularly neurotransmitters and hormones, must be involved in memory. There is quite a bit of evidence that this is true. Glutamate, a neurotransmitter and a form of the amino acid glutamic acid, is perhaps the most important neurotransmitter in memory. [3] When animals, including people, are under stress, more glutamate is secreted, and this glutamate can help them remember. [4] The neurotransmitter serotonin is also secreted when animals learn, and epinephrine may also increase memory, particularly for stressful events. [5] [6] Estrogen, a female sex hormone, also seems critical, because women who are experiencing menopause, along with a reduction in estrogen, frequently report memory difficulties. [7] At the behavioral level, these synaptic changes develop through practice: rehearsal matters because each time we rehearse, the pathway is activated, and each activation strengthens the connections along that pathway.
Our knowledge of the role of biology in memory suggests that it might be possible to use drugs to improve our memories, and Americans spend several hundred million dollars per year on memory supplements with the hope of doing just that. Yet controlled studies comparing memory enhancers, including methylphenidate (Ritalin), ginkgo biloba, and amphetamines, with placebos find very little evidence for their effectiveness. [8] [9] Memory supplements are usually no more effective than drinking a sugared soft drink, which also releases glucose and thus improves memory slightly.
The following video demonstrates a metaphor for how long-term potentiation creates strong, easily accessible memories. Please answer the following questions about long-term potentiation based on the video and the reading.
Memory occurs through sophisticated interactions between new and old brain structures, shown in the following figure. One of the most important brain regions in explicit memory is the hippocampus, which serves as a preprocessor and elaborator of information. [10] The hippocampus helps us encode information about spatial relationships, the context in which events were experienced, and the associations among memories. [11] The hippocampus also serves in part as a switching point that holds the memory for a short time and then directs the information to other parts of the brain, such as the cortex, to actually do the rehearsing, elaboration, and long-term storage. [12] Without the hippocampus, which might be described as the brain’s “librarian,” our explicit memories would be inefficient and disorganized. Even so, the older we get, the more susceptible we are to proactive and retroactive interference, which suggests that the “librarian” finds it harder to retrieve the right memory from a pile of similar memories as we age. In people with Alzheimer’s disease, a neurodegenerative disease that is most common in the elderly, the hippocampus is severely atrophied. Unsurprisingly, one of the most common symptoms of this disease is the inability to form new memories, followed by a loss of the most recent memories and, finally, the loss of old memories.
While the hippocampus is handling explicit memory, the cerebellum and the amygdala are concentrating on implicit and emotional memories, respectively. Research shows that the cerebellum is more active when we are learning associations and in priming tasks, and animals and humans with damage to the cerebellum have more difficulty in classical conditioning studies. [13] [14] The cerebellum is also highly involved in the learning of procedural tasks which need fine motor control, such as writing, riding a bike, and sewing. The storage of many of our most important emotional memories, and particularly those related to fear, is initiated and controlled by the amygdala. [15] If both amygdalae are damaged, people do not lose their memories of positive or negative emotional associations, but they lose the ability to create new positive or negative associations with objects and events.
Although some brain structures are particularly important in memory, this does not mean that all memories are stored in one place. The American psychologist Karl Lashley [16] attempted to determine where memories were stored in the brain by teaching rats how to run mazes, and then lesioning different brain structures to see if they were still able to complete the maze. This idea seemed straightforward, and Lashley expected to find that memory was stored in certain parts of the brain. But he discovered that no matter where he removed brain tissue, the rats retained at least some memory of the maze, leading him to conclude that memory isn’t located in a single place in the brain, but rather is distributed around it.
Our memories are not perfect. They fail in part due to our inadequate encoding and storage, and in part due to our inability to accurately retrieve stored information. But memory is also influenced by the setting in which it occurs, by the events that occur to us after we have experienced an event, and by the cognitive processes that we use to help us remember. Although our cognition allows us to attend to, rehearse, and organize information, cognition may also lead to distortions and errors in our judgments and our behaviors.
In this section we consider some of the cognitive biases that are known to influence humans. Cognitive biases are errors in memory or judgment that are caused by the inappropriate use of cognitive processes. The study of cognitive biases is important both because it relates to the important psychological theme of accuracy versus inaccuracy in perception, and because being aware of the types of errors that we may make can help us avoid them and therefore improve our decision-making skills.
A particular problem for eyewitnesses such as Jennifer Thompson, who misidentified her rapist in court resulting in his wrongful conviction, is that our memories are often influenced by the things that occur to us after we have learned the information. [1] [2] [3] This new information can distort our original memories such that we are no longer sure what is the real information and what was provided later. The misinformation effect refers to errors in memory that occur when new information influences existing memories.
In an experiment by Loftus and Palmer, [4] participants viewed a film of a traffic accident and then, according to random assignment to experimental conditions, answered one of three questions about how fast the cars were going when they “smashed into,” “hit,” or “contacted” each other.
As you can see in the figure below, although all the participants saw the same accident, their estimates of the cars’ speed varied by condition. Participants who had been asked about the cars “smashing” each other estimated the highest average speed, and those who had been asked the “contacted” question estimated the lowest average speed.
In addition to distorting our memories for events that have actually occurred, misinformation may lead us to falsely remember information that never occurred. Loftus and her colleagues asked parents to provide them with descriptions of events that did (e.g., moving to a new house) and did not (e.g., being lost in a shopping mall) happen to their children. Then (without telling the children which events were real or made-up) the researchers asked the children to imagine both types of events. The children were instructed to “think real hard” about whether the events had occurred. [5] More than half of the children generated stories regarding at least one of the made-up events, and they remained insistent that the events did in fact occur even when told by the researcher that they could not possibly have occurred. [6] Even college students are susceptible to manipulations that make events that did not actually occur seem as if they did. [7]
The ease with which memories can be created or implanted is particularly problematic when the events to be recalled have important consequences. Therapists often argue that patients may repress memories of traumatic events they experienced as children, such as childhood sexual abuse, and then recover the events years later as the therapist leads them to recall the information—for instance, by using dream interpretation and hypnosis. [8]
But other researchers argue that painful memories such as sexual abuse are usually very well remembered, that few memories are actually repressed, and that even if they are, it is virtually impossible for patients to accurately retrieve them years later. [9] [10] These researchers have argued that the procedures used by the therapists to “retrieve” the memories are more likely to actually implant false memories, leading the patients to erroneously recall events that did not actually occur. Because hundreds of people have been accused, and even imprisoned, on the basis of claims about “recovered memory” of child sexual abuse, the accuracy of these memories has important societal implications. Many psychologists now believe that most of these claims of recovered memories are due to implanted, rather than real, memories. [11]
One potential error in memory involves mistakes in differentiating the sources of information. Source monitoring refers to the ability to accurately identify the source of a memory. Perhaps you’ve had the experience of wondering whether you really experienced an event or only dreamed or imagined it. If so, you wouldn’t be alone. Rassin, Merckelbach, and Spaan [12] found that up to 25% of college students reported being confused about real versus dreamed events. Studies suggest that people who are fantasy-prone are more likely to experience source monitoring errors, [13] and such errors also occur more often for both children and the elderly than for adolescents and younger adults. [14]
In other cases, we may be sure that we remembered the information from real life but be uncertain about exactly where we heard it. Imagine that you read a news story in a tabloid magazine such as the National Enquirer. Probably you would have discounted the information because you know that its source is unreliable. But what if later you were to remember the story but forget the source of the information? If this happens, you might become convinced that the news story is true because you forget to discount it. The sleeper effect refers to an attitude change that occurs over time when we forget the source of information. [15]
In still other cases we may forget where we learned information and mistakenly assume that we created the memory ourselves. Kaavya Viswanathan, the author of the book How Opal Mehta Got Kissed, Got Wild, and Got a Life, was accused of plagiarism when it was revealed that many parts of her book were very similar to passages from other material. Viswanathan argued that she had simply forgotten that she had read the other works, mistakenly assuming she had made up the material herself. And the musician George Harrison claimed that he was unaware that the melody of his song “My Sweet Lord” was almost identical to an earlier song by another composer. The judge in the copyright suit that followed ruled that Harrison didn’t intentionally commit the plagiarism. (Please use this knowledge to become extra vigilant about source attributions in your written work, not to try to excuse yourself if you are accused of plagiarism.)
Research reveals a pervasive cognitive bias toward overconfidence, which is the tendency for people to be too certain about their ability to accurately remember events and to make judgments. David Dunning and his colleagues [16] asked college students to predict how another student would react in various situations. Some participants made predictions about a fellow student whom they had just met and interviewed, and others made predictions about their roommates whom they knew very well. In both cases, participants reported their confidence in each prediction, and accuracy was determined by the responses of the people themselves. The results were clear: Regardless of whether they judged a stranger or a roommate, the participants consistently overestimated the accuracy of their own predictions.
Eyewitnesses to crimes are also frequently overconfident in their memories, and there is only a small correlation between how accurate and how confident an eyewitness is. A witness who claims to be absolutely certain about, for example, his or her identification of a suspect or account of events is not much more likely to be accurate than one who appears less sure, making it almost impossible to determine whether or not a particular witness is accurate. [17]
I am sure that you have a clear memory of when you first heard about the 9/11 attacks in 2001, and perhaps also when you heard that Princess Diana was killed in 1997 or when the verdict of the O. J. Simpson trial was announced in 1995. This type of memory, which we experience along with a great deal of emotion, is known as a flashbulb memory—a vivid and emotional memory of an unusual event that people believe they remember very well. [18]
People are very certain of their memories of these important events, and frequently overconfident. Talarico and Rubin [19] tested the accuracy of flashbulb memories by asking students to write down their memory of how they had heard the news about either the September 11, 2001, terrorist attacks or about an everyday event that had occurred to them during the same time frame. These initial reports were written on September 12, 2001. Then the participants were asked again, either 1, 6, or 32 weeks later, to recall their memories. The participants became less accurate in their recollections of both the emotional event and the everyday events over time. But the participants’ confidence in the accuracy of their memory of learning about the attacks did not decline over time. After 32 weeks, the participants were overconfident; they were much more certain about the accuracy of their flashbulb memories than they should have been. Schmolck, Buffalo, and Squire [20] found similar distortions in memories of news about the verdict in the O. J. Simpson trial.
If you’ve already covered the unit on how cognitive change proceeds in childhood, this paragraph and the following Learn By Doing exercise will be a quick refresher on schematic processing. If that material is yet to come, then this will serve as a brief introduction to the topic. Schemata (plural of schema) are mental representations of the world that are formed and adjusted using the processes of assimilation and accommodation as a person experiences life. Assimilation is the use of existing schema to interpret new information and accommodation is the adjustment of existing schema to fit new information. Generally, both processes are in action at the same time.
We have seen that schemas help us remember information by organizing material into coherent representations. However, although schemas can improve our memories, they may also lead to cognitive biases. Using schemas may lead us to falsely remember things that never happened to us and to distort or misremember things that did. For one, schemas lead to the confirmation bias, which is the tendency to verify and confirm our existing memories rather than to challenge and disconfirm them. The confirmation bias occurs because once we have schemas, they influence how we seek out and interpret new information. The confirmation bias leads us to remember information that fits our schemas better than we remember information that disconfirms them, [1] a process that makes our stereotypes very difficult to change. And we ask questions in ways that confirm our schemas. [2] If we think that a person is an extrovert, we might ask her about ways that she likes to have fun, thereby making it more likely that we will confirm our beliefs. In short, once we begin to believe in something—for instance, a stereotype about a group of people—it becomes very difficult to later convince us that these beliefs are not true; the beliefs become self-confirming.
Darley and Gross [3] demonstrated how schemas about social class could influence memory. In their research they gave participants a picture and some information about a fourth-grade girl named Hannah. To activate a schema about her social class, Hannah was pictured sitting in front of a nice suburban house for one-half of the participants and pictured in front of an impoverished house in an urban area for the other half. Then the participants watched a video that showed Hannah taking an intelligence test. As the test went on, Hannah got some of the questions right and some of them wrong, but the number of correct and incorrect answers was the same in both conditions. Then the participants were asked to remember how many questions Hannah got right and wrong. Demonstrating that stereotypes had influenced memory, the participants who thought that Hannah had come from an upper-class background remembered that she had gotten more correct answers than those who thought she was from a lower-class background.
Our reliance on schemas can also make it more difficult for us to “think outside the box.” Peter Wason [4] asked college students to discover the rule he had used to generate the sequence 2-4-6: students proposed sequences of their own, and Wason told them whether each sequence followed the rule. The first guess that students made was usually “consecutive ascending even numbers,” and they then asked questions designed to confirm their hypothesis (“Does 102-104-106 fit?” “What about 404-406-408?”). Upon hearing that those sequences fit the rule, the students concluded that the rule was “consecutive ascending even numbers.” But the students’ use of the confirmation bias led them to ask only about instances that confirmed their hypothesis, and not about those that would disconfirm it. They never bothered to ask whether 1-2-3 or 3-11-200 would fit, and if they had, they would have learned that the rule was not “consecutive ascending even numbers” but simply “any three ascending numbers.” Again, you can see that once we have a schema (in this case a hypothesis), we continually retrieve that schema from memory rather than other relevant ones, leading us to act in ways that tend to confirm our beliefs.
Functional fixedness occurs when people’s schemas prevent them from using an object in new and nontraditional ways. Duncker [5] gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Few of the participants realized that the box could be tacked to the wall and used as a platform to hold the candle. The problem again is that our existing memories are powerful, and they bias the way we think about new information. Because the participants were “fixated” on the box’s normal function of holding thumbtacks, they could not see its alternative use.
Still another potential for bias in memory occurs because we are more likely to attend to, and thus make use of and remember, some information more than other information. For one, we tend to attend to and remember things that are highly salient, meaning that they attract our attention. Things that are unique, colorful, bright, moving, and unexpected are more salient. [6] [7] In one relevant study, Loftus, Loftus, and Messo [8] showed people images of a customer walking up to a bank teller and pulling out either a pistol or a checkbook. By tracking eye movements, the researchers determined that people were more likely to look at the gun than at the checkbook, and that this reduced their ability to accurately identify the criminal in a lineup that was given later. The salience of the gun drew people’s attention away from the face of the criminal.
The salience of the stimuli in our social worlds has a big influence on our judgment, and in some cases may lead us to behave in ways that we might better not have. Imagine, for instance, that you want to buy a new tablet computer and have been trying to decide which brand to choose. You checked Consumer Reports online and found that, although the brands differed on many dimensions, including price, battery life, and so forth, Brand X was nevertheless rated significantly higher by owners than were the other brands. As a result, you decide to purchase Brand X the next day. That night, however, you go to a party, and a friend shows you her new Brand Y tablet. You check it out, and it seems really cool. You tell her that you were thinking of buying Brand X, and she tells you that you are crazy. She says she knows someone who had one and it had a lot of problems—it didn’t download files correctly, the battery died right after the warranty expired, and so forth—and that she would never buy one. Would you still buy Brand X, or would you switch your plans?
If you think about this question logically, the information that you just got from your friend isn’t really all that important. You now know the opinion of one more person, but that can’t change the overall rating of the brands very much. On the other hand, the information your friend gives you, and the chance to use her Brand Y tablet, are highly salient. The information is right there in front of you, in your hand, whereas the statistical information from Consumer Reports is only in the form of a table that you saw on your computer. The outcome in cases such as this is that people frequently ignore the less salient but more important information, such as the likelihood that events occur across a large population (these statistics are known as base rates), in favor of the less important but nevertheless more salient information. The situation is further complicated by the fact that people tend to selectively remember salient outcomes while disregarding mundane ones. Moreover, people’s first-person perspective leads them to overestimate the degree to which they played a role in an event or project, a consequence of cognitive accessibility, the ease with which information comes to mind.
Another way that our information processing may be biased occurs when we use heuristics, which are information-processing strategies that are useful in many cases but may lead to errors when misapplied. These stand in contrast to algorithms, which are step-by-step information-processing strategies that guarantee a correct answer every time they are applied. Two examples are using the Pythagorean theorem to find the length of the hypotenuse of a right triangle and using a formula to convert between Fahrenheit and Celsius, and there are many others. The reason people don’t always use algorithmic processing is that most problems we encounter have no algorithmic solution (or, if one exists, it may be too complicated to apply), so we resort to heuristics as the next best alternative. Let’s consider two of the most frequently applied (and misapplied) heuristics: the representativeness heuristic and the availability heuristic.
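To make the contrast concrete, here is one of the algorithms just mentioned, temperature conversion, written out in Python. Unlike a heuristic, this procedure yields the correct answer every time it is applied:

```python
def fahrenheit_to_celsius(f):
    """An algorithm: the formula C = (F - 32) * 5/9 always
    produces the correct answer, unlike a heuristic."""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c):
    """The inverse conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

print(fahrenheit_to_celsius(212))  # 100.0 (boiling point of water)
print(celsius_to_fahrenheit(37))   # 98.6 (normal body temperature)
```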
In many cases we base our judgments on information that seems to represent, or match, what we expect will happen, while ignoring other potentially more relevant statistical information. When we do so, we are using the representativeness heuristic. Consider, for instance, the puzzle presented in the following table. Let’s say that you went to a hospital, and you checked the records of the babies that were born today. Which pattern of births do you think you are most likely to find?
[Table: The Representativeness Heuristic — two possible patterns of births, List A and List B. Adapted from Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.]
Using the representativeness heuristic may lead us to incorrectly believe that some patterns of observed events are more likely to have occurred than others. In this case, list B seems more random, and thus is judged as more likely to have occurred, but statistically both lists are equally likely.
Most people think that list B is more likely, probably because list B looks more random, and thus matches (is “representative of”) our ideas about randomness. But statisticians know that any particular pattern of four girls and four boys is mathematically equally likely. The problem is that we have a schema of what randomness should be like, which doesn’t always match what is mathematically the case. Similarly, people who see a flipped coin come up “heads” five times in a row will frequently predict, and perhaps even wager money, that “tails” will be next. This behavior is known as the gambler’s fallacy. But mathematically, the gambler’s fallacy is an error: The likelihood of any single coin flip being “tails” is always 50%, regardless of how many times it has come up “heads” in the past. Probability, the likelihood of something happening, is calculated by dividing the number of favorable outcomes by the total number of possible outcomes, assuming all outcomes are equally likely (in our case, ½ for both heads and tails). The previous history of events does not affect future events. Another illustration of the gambler’s fallacy is the deceptive phenomenon of streaky basketball shooters.
Imagine you are at a Lakers game watching Dwight Howard shoot free throws. Let’s assume he generally makes 6 out of 10 shots, so his accuracy is 60%. As the game goes on, the other team continues to foul Dwight, so he takes many free throws throughout the game. You start to wonder what the chances are of him making a certain number of shots. You remember from your math class that you need to multiply the individual probabilities of independent events together to calculate their combined probability.
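Here is that multiplication rule sketched in Python, assuming for simplicity that each free throw is an independent event with a 60% chance of success:

```python
def p_all_made(p_single, n_shots):
    """Probability of making n independent shots in a row:
    multiply the individual probabilities together, i.e. p ** n."""
    return p_single ** n_shots

for n in range(1, 6):
    print(f"P(makes {n} in a row) = {p_all_made(0.6, n):.3f}")
# 1: 0.600, 2: 0.360, 3: 0.216, 4: 0.130, 5: 0.078
```

Notice how quickly the combined probability falls even for a 60% shooter, which is one reason apparent “streaks” are rarer, and less meaningful, than they feel.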
Our judgments can also be influenced by how easy it is to retrieve a memory. The tendency to make judgments of the frequency or likelihood that an event occurs on the basis of the ease with which it can be retrieved from memory is known as the availability heuristic. [1] [2] Imagine, for instance, that I asked you to indicate whether there are more words in the English language that begin with the letter “R” or that have the letter “R” as the third letter. You would probably answer this question by trying to think of words that have each of the characteristics, thinking of all the words you know that begin with “R” and all that have “R” in the third position. Because it is much easier to retrieve words by their first letter than by their third, we may incorrectly guess that there are more words that begin with “R,” even though there are in fact more words that have “R” as the third letter.
The availability heuristic may also operate on episodic memory. We may think that our friends are nice people because we see and remember them primarily when they are around us (we are their friends, to whom they are, of course, nice). And the traffic might seem worse in our own neighborhood than in other places, in part because nearby traffic jams are more easily retrieved than traffic jams that occur somewhere else.
In addition to influencing our judgments about ourselves and others, the ease with which we can retrieve potential experiences from memory can have an important effect on our own emotions. If we can easily imagine an outcome that is better than what actually happened, then we may experience sadness and disappointment; on the other hand, if we can easily imagine that a result might have been worse than what actually happened, we may be more likely to experience happiness and satisfaction. The tendency to think about and experience events according to “what might have been” is known as counterfactual thinking. [3] [4]
Imagine, for instance, that you were participating in an important contest, and you won the silver (second-place) medal. How would you feel? Certainly you would be happy that you won the silver medal, but wouldn’t you also be thinking about what might have happened if you had been just a little bit better—you might have won the gold medal! On the other hand, how might you feel if you won the bronze (third-place) medal? If you were thinking about the counterfactuals (the “what might have beens”) perhaps the idea of not getting any medal at all would have been highly accessible; you’d be happy that you got the medal that you did get, rather than coming in fourth.
Tom Gilovich and his colleagues investigated this idea by videotaping the responses of athletes who won medals in the 1992 Summer Olympic Games. [5] They videotaped the athletes both as they learned that they had won a silver or a bronze medal and again as they were awarded the medal. Then the researchers showed these videos, without any sound, to raters who did not know which medal which athlete had won. The raters were asked to indicate how they thought the athlete was feeling, using a range of feelings from “agony” to “ecstasy.” The results showed that the bronze medalists were, on average, rated as happier than were the silver medalists. In a follow-up study, raters watched interviews with many of these same athletes as they talked about their performance. The raters indicated what we would expect on the basis of counterfactual thinking—the silver medalists talked about their disappointments in having finished second rather than first, whereas the bronze medalists focused on how happy they were to have finished third rather than fourth.
You might have experienced counterfactual thinking in other situations. Once I was driving across country, and my car was having some engine trouble. I really wanted to make it home when I got near the end of my journey; I would have been extremely disappointed if the car broke down only a few miles from my home. Perhaps you have noticed that once you get close to finishing something, you feel like you really need to get it done. Counterfactual thinking has even been observed in juries. Jurors who were asked to award monetary damages to others who had been in an accident offered them substantially more in compensation if they barely avoided injury than they offered if the accident seemed inevitable. [6]
Perhaps you are thinking that the kinds of errors that we have been talking about don’t seem that important. After all, who really cares if we think there are more words that begin with the letter “R” than there actually are, or if bronze medal winners are happier than the silver medalists? These aren’t big problems in the overall scheme of things. But it turns out that what seem to be relatively small cognitive biases on the surface can have profound consequences for people.
Why would so many people continue to purchase lottery tickets, buy risky investments in the stock market, or gamble their money in casinos when the likelihood of them ever winning is so low? One possibility is that they are victims of salience; they focus their attention on the salient likelihood of a big win, forgetting that the base rate of the event occurring is very low. The belief in astrology, which all scientific evidence suggests is not accurate, is probably driven in part by the salience of the occasions when the predictions are correct. When a horoscope comes true (which will, of course, happen sometimes), the correct prediction is highly salient and may allow people to maintain the overall false belief.
People may also take more care to prepare for unlikely events than for more likely ones, because the unlikely ones are more salient. For instance, people may think that they are more likely to die from a terrorist attack or a homicide than they are from diabetes, stroke, or tuberculosis. But the odds are much greater of dying from the latter than the former.
Salience and accessibility also color how we perceive our social worlds, which may have a big influence on our behavior. For instance, people who watch a lot of violent television shows also view the world as more dangerous, [7] probably because violence becomes more cognitively accessible for them. We also unfairly overestimate our contribution to joint projects, [8] perhaps in part because our own contributions are highly accessible, whereas the contributions of others are much less so.
Even people who should know better, and who need to know better, are subject to cognitive biases. Economists, stock traders, managers, lawyers, and even doctors make the same kinds of mistakes in their professional activities that people make in their everyday lives. [9] Just like us, these people are victims of overconfidence, heuristics, and other biases.
Furthermore, every year thousands of individuals, such as Ronald Cotton, are charged with and often convicted of crimes based largely on eyewitness evidence. When eyewitnesses testify in courtrooms regarding their memories of a crime, they often are completely sure that they are identifying the right person. But the most common cause of innocent people being falsely convicted is erroneous eyewitness testimony. [10] The many people who were convicted by mistaken eyewitnesses prior to the advent of forensic DNA and who have now been exonerated by DNA tests have certainly paid for all-too-common memory errors. [11]
Although cognitive biases are common, they are not impossible to control, and psychologists and other scientists are working to help people make better decisions. One possibility is to provide people with better feedback about their judgments. Weather forecasters, for instance, learn to be quite accurate in their judgments because they have clear feedback about the accuracy of their predictions. Other research has found that accessibility biases can be reduced by leading people to consider multiple alternatives rather than focus only on the most obvious ones, and particularly by leading people to think about opposite possible outcomes than the ones they are expecting. [12] Forensic psychologists are also working to reduce the incidence of false identification by helping police develop better procedures for interviewing both suspects and eyewitnesses. [13]
Another source of errors in cognition is belief in the paranormal. A Gallup poll in 2005 showed that 3 out of 4 Americans believe in the supernatural, with over 40% responding that they believe in extra-sensory perception (ESP), the ability to sense things without being in physical proximity to the person, place, thing, or event. Much research has claimed to prove or disprove the existence of such phenomena. While the paranormal is taken for granted by much of the general public, quite the opposite is true among scientists: in the National Academy of Sciences, only 4% of members believe in the existence of such phenomena.
The paranormal is a term that most people use to refer to a whole range of unusual aspects of human perception and cognition. Parapsychologists, scientists who study anomalous phenomena like ESP, generally use the term psi, and have identified two specific forms. Psi-gamma refers to phenomena that involve anomalous information transfer, like ESP, clairvoyance, and remote viewing. Psi-kappa, on the other hand, refers to phenomena that involve anomalous transfer of matter, such as psychokinesis or telekinesis (the ability to move things with one’s mind), or even anomalous transfer of energy, such as pyrokinesis (the ability to set things aflame with one’s mind). To date, the most rigorous set of studies was conducted by the Princeton Engineering Anomalies Research (PEAR) laboratory at Princeton University. Despite three decades of positive results, this research has not been accepted as a valid avenue of empirical investigation by the mainstream scientific community. [14]
Virtually all animals have ways to communicate with other members of their own species, whether it be through sounds, gestures, odors, or other means. Some animals, like chimpanzees and dolphins, have rich and complicated communication systems. But even the most sophisticated communication system of other species does not come close to the complexity and subtlety of human language. It is not an exaggeration to claim that the human language system takes communication to a very different level than that found in any other creature on earth. Although the word language is often used broadly (e.g., “the language of the bees”), here we restrict its use to a particular part of human communication—spoken language—and consider it apart from other important aspects of human communication (e.g., body language or emotional messages conveyed by facial expressions).
Language involves both the ability to comprehend spoken and written words and to produce meaningful communication when we speak or write. Most languages first appear in their spoken form. Although speaking may seem simple, it is a remarkably complex skill that involves a variety of cognitive, social, and biological processes, including operation of the vocal cords and the coordination of breath with movements of the throat, mouth, and tongue. A number of languages that are primarily or entirely expressed in sign also exist. In sign languages, communication is expressed by movements of the hands along with facial and bodily gestures. The most common sign language is American Sign Language (ASL), currently used by more than 500,000 people in the United States alone. Except for artificial languages developed for technology and an occasional special-use language, languages do not develop in written form. Although writing is generally derivative of spoken language, it involves a complex set of processes, some of them unique to writing.
Language is often used for the transmission of factual information (“Turn right at the next light, and then go straight,” “Place tab A into slot B”), but that is only its most mundane function. Language also allows us to access existing knowledge, to draw conclusions, to set and accomplish goals, and to understand and communicate complex social relationships. Language is fundamental to our ability to think, and without it we would be nowhere near as intelligent as we are.
Spoken languages can be conceptualized in terms of sounds, meaning, and the environmental factors that help us understand them. Although we usually notice words and sentences when we think about language, some of the most important psychological research on language involves more basic elements that give form and content to words and sentences. In the next section, we discuss phonemes, which are elementary units of sound that make up words; morphemes, which are “word parts”—small but meaningful sounds that alter and refine a word’s meaning; and finally, syntax, which is the set of grammatical rules that control how words are put together into phrases and sentences. Languages are governed by rules, but contextual information, the when, where, and why of communication, is also necessary for understanding the meaning of what a person says. The importance of context is also discussed in this section.
A phoneme is the smallest unit of sound that makes a meaningful difference in a language. Phonemes correspond to the sounds associated with the letters of an alphabet, though there is not always a one-to-one correspondence between sounds and letters. The word bit has three phonemes, /b/, /i/, and /t/ (in transcription, phonemes are placed between slashes), and the word pit also has three: /p/, /i/, and /t/. These two words differ by a single phoneme: /b/ versus /p/. However, the six-letter word phrase has only four phonemes: /f/, /r/, /long-a/, and /z/. In spoken languages, phonemes are produced by movements of our lips, teeth, tongue, vocal cords, and throat (the vocal tract), whereas in sign languages phonemes are defined by the shapes and movement of the hands.
Hundreds of unique phonemes can be made by human speakers, but most languages use only a small subset of the possibilities. English uses about 45 phonemes, whereas some languages have as few as 15 and others more than 60. For instance, the Hawaiian language contains only about a dozen phonemes, including five vowels (a, e, i, o, and u) and seven consonants (h, k, l, m, n, p, and w).
The fact that different languages use different sets of phonemes is the reason people usually have accents in languages that are not their native language. It is difficult to learn to make a new speech sound and use it regularly in words if you did not learn it early in life. And accents are not the whole story. Because the phoneme is actually a category of sounds—that is, many variations on a sound—and the members of this category are treated alike by the brain, some languages group several sounds together as a single phoneme, and others separate those same sounds as different phonemes. Speakers of different languages can hear the difference only between the sounds their language marks as different phonemes, and they cannot tell the difference between two sounds that are grouped together as the same phoneme. This is known as the categorical perception of speech sounds. For example, English speakers can differentiate the /r/ phoneme from the /l/ phoneme, and thus rake and lake are heard as different words. In Japanese, however, /r/ and /l/ are the same phoneme, and thus native speakers of Japanese cannot tell the difference between rake and lake. The /r/ versus /l/ difference is obvious to native English speakers, but English speakers run into the same problem when listening to speakers of other languages. Try saying cool and keep out loud. Can you hear the difference between the two /k/ sounds? To English speakers, they both sound the same, but to speakers of Arabic they are two different phonemes.
Let’s practice identifying the various components of language. The first part of this activity focuses on phonemes, the smallest unit of sound. When you click the play button, you will hear a speech sound. Your job is to drag and drop the grapheme, or letter, of the phoneme you hear into the corresponding box.
Categorical perception is a way of perceiving different sensory inputs and mapping them to the same category. It explains why speakers of a particular language all group a variety of sounds into a single phoneme, so each phoneme is actually a set of variations on a single theme. This means that we hear different sounds as if they were the same, and often we cannot tell the difference even if we try. To demonstrate this fact, psychologists used computers to create a series of sounds, each made up of two phonemes, that gradually—in precise steps—changed from a /ba/ sound to a /pa/ sound. (Other two-phoneme sounds were tested as well, but we use /ba/ and /pa/ for our explanation.)
The experimenters wanted to know what the people (in this case, adults) would perceive when they heard the sounds. If you didn’t know about phonemes, you might expect that they would hear a clear /ba/ sound that gradually became more like a /pa/ sound until it became a clear /pa/ sound. But that is not what happened.
The following figure shows the many variations of the /ba/ and /pa/ sounds on the X-axis. The X-axis is labeled “Voice onset time (ms).” Voice onset time is a technical unit, not critical for our discussion. Simply understand that the sounds, created by a computer to guarantee precise differences, went from having strong characteristics of /ba/ on the far left to strong characteristics of /pa/ on the far right.
The two lines represent the percentage of participants who said they heard /pa/ and the percentage who said they heard /ba/. Percentages are shown on the Y-axis. Together the two lines add up to 100%, because participants always had to choose one or the other. What the graph shows is that there was only a small region of ambiguity, where the two lines cross. For most of the sounds, going left to right, just about every participant on every trial chose either /ba/ on the left or /pa/ on the right.
If the participants in the study had simply heard the sounds as they physically changed, we would predict that as the sounds became more mixed (most of all in the very center of the figure), the participants would become increasingly confused.
But that is not what happened. Instead, people perceived /ba/ unambiguously across many variations. Then, in the center of the variations, where /ba/ and /pa/ sounds were most mixed together, there was a bit of uncertainty, and then they unambiguously heard /pa/ sounds. You can see this in the following figure, where there is only a small range of sounds that led to any uncertainty about whether participants heard /ba/ or /pa/. This sharp change from perceiving the sounds to be /ba/ to perceiving the sounds to be /pa/ is called categorical perception, meaning that what we perceive is far more sharply (or “categorically”) divided than what our ears actually hear.
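The steep identification curve described above is often summarized with an S-shaped (logistic) function. The sketch below is purely illustrative; the boundary and steepness values are made up for the example, not taken from the experiment:

```python
import math

def percent_pa(vot_ms, boundary=25.0, steepness=0.8):
    """Illustrative logistic identification function: percentage of
    /pa/ responses as a function of voice onset time (ms). The
    boundary and steepness values are assumptions for illustration."""
    return 100 / (1 + math.exp(-steepness * (vot_ms - boundary)))

# Responses flip sharply near the boundary, with only a narrow
# region of ambiguity, mirroring categorical perception:
for vot in range(0, 55, 5):
    pa = percent_pa(vot)
    print(f"VOT {vot:2d} ms: {pa:5.1f}% /pa/, {100 - pa:5.1f}% /ba/")
```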
Infants are born able to understand all phonemes, but they lose their ability to do so as they get older; by 10 months of age, a child’s ability to recognize phonemes becomes similar to that of the adult speakers of the native language. Phonemes that were initially differentiated come to be treated as equivalent. [1]
Phonemes are units of sound, but sound is simply used by language to convey meaning. The basic meaningful units of words are called morphemes. A morpheme is a string of one or more phonemes that carries meaning; if a morpheme is added, eliminated, or changed, the meaning of the word changes. In some cases, an entire word is a morpheme. For instance, the word painted has seven letters, six phonemes (/p/, /long-a/, /n/, /t/, /e/, and /d/), and two morphemes (paint + ed, where ed is a morpheme meaning that the first morpheme occurred in the past). However, we can add morphemes—for instance, the prefix re to make repainted—or eliminate morphemes, taking the ed away to leave the single-morpheme word, paint. We can even add a morpheme to make up new words, such as unrepainted or depainting, even if we aren’t quite sure what they mean. However, in general, we know what the changed word means when we add a morpheme. For example, the prefix re-, as in rewrite or repay, means “to do again,” and the suffix -est, as in happiest or coolest, means “to the maximum.”
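To see how morphemes combine, here is a tiny Python sketch built around the paint example from the paragraph above (the affix lists are hand-picked samples for illustration, not a real morphological analyzer; as the text notes, some combinations produce made-up words):

```python
# Combine a stem with optional prefixes and suffixes (each a morpheme).
stem = "paint"
prefixes = ["", "re", "un"]   # empty string means no prefix
suffixes = ["", "ed", "ing"]  # empty string means no suffix

for pre in prefixes:
    for suf in suffixes:
        word = pre + stem + suf
        parts = " + ".join(m for m in (pre, stem, suf) if m)
        print(f"{word:12s} = {parts}")
```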
In this activity, we look at morphemes, which consist of one or more phonemes.
un
over
dis
touch
agree
cook
ed
s
able
Syntax is the set of rules of a language by which we construct sentences. Each language has a different syntax. The syntax of the English language requires that each sentence have a noun and a verb, each of which may be modified by adjectives and adverbs. Some syntactical rules make use of the order in which words appear, while others do not. In English, “The man bites the dog” is different from “The dog bites the man.” Because the words are the same in both sentences, the order of the words must convey the difference in meaning. In German, however, only the article endings before the noun matter. “Der Hund beisst den Mann” means “The dog bites the man” but so does “Den Mann beisst der Hund.” The German word der goes with the subject of the sentence, while den goes with the object. The order of the words in this sentence is not as important as it would be in English.
In this activity, you create grammatical sentences. For each sentence, select the word that best completes the sentence.
easily
armadillos
juicy
shelved
quickly
ordered
ripe
infants
Words, phrases, and entire sentences do not possess fixed meanings but change their interpretation as a function of the context in which they are spoken. We use contextual information—the situation in which language is being used, the topic, the things that were said previously, body language, and so on—to help us interpret a word or sentence. For example, imagine that you run into a friend who just saw a new high-tech movie and ask, “How was it?” The friend replies with an enthusiastic look on his face, “Unbelievable!” Now imagine that you run into a friend who just went to a lecture on “How to make a million dollars in two days selling bottled air.” You ask, “How was it?” and your friend rolls her eyes and groans, “Unbelievable!” In the first case, “unbelievable” means “very good,” and in the second case, “unbelievable” means “very bad.” We use context so naturally that we seldom notice how much it impacts our interpretation of language.
Examples of contextual information include our own knowledge, our assumptions about other people's knowledge, and nonverbal expressions such as facial expressions, postures, gestures, and tone of voice. Misunderstandings can easily arise if people aren’t attentive to contextual information or if some of it is missing, as it may be in newspaper headlines or in text messages.
Examples in Which Syntax Is Correct but the Interpretation Can Be Ambiguous:
Now let’s take a look at the role of contextual information in language. In English, some words have multiple meanings. To determine which meaning of a word is the most appropriate, we must consider contextual information. For each of the following sentences, select which meaning of the italicized word is most appropriate given the context.
For each of the following definitions, select the language-related term it describes.
Anyone who has tried to master a second language as an adult knows the difficulty of language learning. And yet children learn languages easily and naturally. Children who are not exposed to language early in their lives will likely never learn one. Documented case studies of such children include Victor the “Wild Child,” who was abandoned as a baby in France and not discovered until he was 12, and Genie, a child whose parents kept her locked in a closet from 18 months until 13 years of age. Both of these children made some progress in socialization after they were rescued and even learned many words and simple phrases, but neither of them ever developed language even to the level of a typical 3-year-old. [1]
The cases of Victor and Genie illustrate the importance of early experience of language with people, usually adults, who are fluent in the language. There is one group of children who are in danger of isolation even though they are born to completely normal and loving families: congenitally deaf children or children who lose their hearing very early in life. The parents of these children are seldom deaf themselves and often have no warning that their newborn will be deaf. These parents are seldom fluent or even novices in sign language, a language that, like any other language, takes years of practice to achieve competency. Deaf children who are not exposed to sign language during their early years are likely to have difficulty mastering it when they are older. [2] Deaf children have a much better chance of later acquiring other languages, even spoken languages, if they have early exposure to fluent signing.
Deaf children and children like Victor and Genie have shown us that language is a complex ability that is most likely to mature in a normal way if the child is raised in an environment rich in language experiences. There is probably a “sensitive period,” usually associated with the period of childhood, when exposure to language must occur for the brain systems associated with language to develop properly. If exposure does not occur until later, as happened to Genie and Victor and as once regularly happened to deaf children in isolated communities, then the ability to acquire true language abilities may be extremely difficult or even impossible. The brain may lose its ability to form the necessary neural networks to permit language to develop.
For most people, language processing is dominated by the left hemisphere, but many people have a reversal of this pattern, so the right hemisphere dominates language. For instance, a study that looked at the relationship between handedness and the dominant hemisphere for language found that only 4% of people who are strongly right-handed have right hemisphere dominance for language, while 15% of ambidextrous individuals and 27% of strong left-handers have right hemisphere dominance for language. [3]
These differences in hemispheric dominance can easily be seen in the results of neuroimaging studies that show that listening to and producing language creates greater activity in the left hemisphere than in the right. As shown in the following figure, the Broca area, an area toward the front of the left hemisphere near the motor cortex, is responsible for language production. This area was first localized in the 1860s by the French physician Paul Broca, who studied patients with lesions to various parts of the brain. The Wernicke area, an area of the brain next to the auditory cortex, is responsible for language comprehension.
Evidence for the importance of the Broca and Wernicke areas in language is seen in patients who experience aphasia, a condition in which language functions are severely impaired. People with Broca aphasia have difficulty producing speech. They speak haltingly, using the minimum number of words to convey an idea. Frequently, they struggle to say a word even though they know the word they are looking for. Other times, they appear to search for a word but fail to find it. People with damage to the Wernicke area can produce speech, but what they say is often confusing, and they have trouble understanding what other people are saying to them. Wernicke aphasia is sometimes called fluent aphasia because the person appears to speak in a relatively normal, fluent way, but the content of the sentences may be imprecise or even nonsensical. People with both types of aphasia have difficulty understanding what other people say to them, but this problem tends to be deeper and more serious in cases of Wernicke aphasia. People with Broca aphasia may have comprehension problems because they have difficulty understanding a particular word here and there, but people with Wernicke aphasia have trouble making sense of the meaning of entire sentences and also may have trouble keeping track of the point of the conversation.
The following video shows one man who suffers from Wernicke aphasia and another man who suffers from Broca aphasia. Watch the video and take note of the speech behaviors that are characteristic to each man’s type of aphasia.
Now that you’ve seen examples of people with damage to the Wernicke and Broca areas, let’s see if you can match the area of the brain to its appropriate role in language.
Let’s see how well you remember the parts of the brain that are important for language.
Language learning begins even before birth, because the fetus can hear muffled versions of speaking from outside the womb. Moon, Cooper, and Fifer [1] found that infants only two days old sucked harder on a pacifier when they heard their mother's native language being spoken than when they heard a foreign language, even when both the native and foreign languages were spoken by strangers. Babies are also aware of the patterns of their native language, showing surprise when they hear speech that has a different pattern of phonemes than those they are used to. [2]
During the first year or so after birth, and long before they speak their first words, infants are already learning language. One aspect of this learning is practice in producing speech. By the time they are 6 to 8 weeks old, babies start making vowel sounds, called cooing (“ooohh,” “aaahh,” “goo”) as well as a variety of cries and squeals to help them practice.
Between 5 and 7 months of age, most infants begin babbling, engaging in intentional vocalizations that lack specific meaning. Babbling sounds usually involve a combination of consonant and vowel sounds. In the early months these sounds are often simple consonant-vowel pairs that are repeated, such as guh-guh-guy or ba-ba. This is called repetitive babbling. Over the next few months, the sound combinations become more complex, with different consonants and vowels mixed together, such as ma-ba-guh or aah-ga-mee. This is called variegated babbling. Children seem to be naturally motivated to make speech sounds, because they will often vocalize when they are alone and not in distress. This natural motivation has an important function because it encourages the baby to practice making and distinguishing speech sounds, a skill that will be very important as language emerges. Babbling can also serve an important social function for the infant. Parents and infants frequently engage in "conversational exchanges" of sounds, where the adult will say something to the child, such as "You're such a sweet baby, yes you are, such a sweet baby." The infant will watch and listen and then, when it is his or her turn, make a babbling response. This can be an enjoyable interaction between adult and infant, serving a bonding function, and it also allows the infant to practice the skills of conversation prior to the appearance of words and sentences.
On the average, infants produce their first word at around 1 year of age. There is a great deal of variability in the timing of first words, so some children may start months earlier and others may not utter a distinguishable first word for another 6 months or more. The timing of first words typically has no relationship to later language abilities, though it is true that some disorders can lead to a delay in speech production.
At the same time infants are practicing their speaking skills by babbling, they are also learning to better understand sounds and eventually the words of language. One of the first words children understand is their own name, usually by about 6 months, followed by commonly used words like bottle, mama, and doggie by 10 to 12 months. [3]
At about 1 year of age, children begin to understand that words are more than sounds—they refer to particular objects and ideas. By the time children are 2 years old, they have a vocabulary of several hundred words, and by kindergarten, their vocabularies have increased to several thousand words. During the first decade of life, pronunciation of phonemes becomes increasingly precise (and understandable), and the use of morphemes and syntax becomes increasingly sophisticated.
The early utterances of children contain many errors, for instance, confusing /b/ and /d/, or /c/ and /z/. And the words that children create are often simplified, in part because they are not yet able to make the more complex sounds of the real language. [4] Children may say “keekee” for kitty, “nana” for banana, and “vesketti” for spaghetti, in part because it is easier. Often these early words are accompanied by gestures that may also be easier to produce than the words themselves.
Most of a child’s first words are nouns, and early sentences may include only the noun. “Ma” may mean “more milk please,” and “da” may mean “look, there’s Fido.” Eventually, typically by 18 months of age, the length of the utterances increases to two words (“ma ma” or “da bark”), and these primitive sentences begin to follow the appropriate syntax of the native language. By age 2, more complex sentences start to appear, and there is rapid increase in vocabulary and variations in language structure. Here, as was indicated earlier, there is a great deal of variability among perfectly normal children in the timing of language development, so a child who is ahead of the milestones discussed here is not necessarily going to remain advanced, and a child who misses them, even by months, is not likely to remain behind his or her peers linguistically or intellectually.
Because language involves the active categorization of sounds and words into higher-level units, children make some mistakes in interpreting what words mean and how to use them. In particular, they often make overextensions of concepts, which means they use a given word in a broader context than appropriate. A child might at first call all adult men “daddy” or all animals “doggie.”
Children also use contextual information, particularly the cues that parents provide, to help them learn language. Infants are frequently more attuned to the tone of voice of the person speaking than to the content of the words themselves and are aware of the target of speech. Werker, Pegg, and McLeod [5] found that infants listened longer to a woman who was speaking to a baby than to a woman who was speaking to another adult. Children learn that people are usually referring to things that they are looking at when they are speaking [6] and that the speaker’s emotional expressions are related to the content of their speech. Children also use their knowledge of syntax to help them figure out what words mean. If a child hears an adult point to a strange object and say, “This is a dirb,” they will infer that a dirb is a thing, but if they hear them say, “This is one of those dirb things,” they will infer that dirb refers to the color or other characteristic of the object. And if they hear the word “dirbing,” they will infer that dirbing is something we do. [7]
Psychological theories of language learning differ in terms of the importance they place on nature versus nurture. Yet it is clear that both matter. Children are not born knowing language; they learn to speak by hearing what happens around them. On the other hand, human brains, unlike those of any other animal, are prewired in a way that leads them, almost effortlessly, to learn language.
Perhaps the most straightforward explanation of language development is that it occurs through principles of learning, including association, reinforcement, and the observation of others. [1] There must be at least some truth to the idea that language is learned, because children learn the language that they hear spoken around them rather than some other language. Also supporting this idea is the gradual improvement of language skills with time. It seems that children modify their language through imitation, reinforcement, and shaping, as would be predicted by learning theories.
But language cannot be entirely learned. For one, children learn words too fast for them to be learned through reinforcement. Between the ages of 18 months and 5 years, children learn up to 10 new words every day. [2] More important, language is more generative than it is imitative. Generativity refers to the ability of speakers to compose sentences to represent new ideas they have never before been exposed to. Language is not a predefined set of ideas and sentences that we choose when we need them, but rather a system of rules and procedures that allows us to create an infinite number of statements, thoughts, and ideas, including those that have never previously occurred. When a child says that she “swimmed” in the pool, for instance, she is showing generativity. An adult speaker of English would not say “swimmed,” yet the word is easily generated from the normal system of producing language.
Other evidence that refutes the idea that all language is learned through experience comes from the observation that children may learn languages better than they ever hear them. Deaf children whose parents do not speak American Sign Language very well nevertheless can learn it perfectly on their own and may even make up their own language if they need to. [3] A group of deaf children in a school in Nicaragua, whose teachers could not sign, invented a way to communicate through made-up signs and through signs different individuals had used to communicate with their own families. [4] Within a few years, this made-up signing system became increasingly rule governed and consistent. The development of this new Nicaraguan Sign Language has continued and changed as new generations of students have come to the school and started using the language. Although the original system was not a real language, linguists now find that the signing system invented by these children has all the typical features and complexity of a real language.
In the middle of the 20th century, American linguist Noam Chomsky explained how some aspects of language could be innate. Prior to this time, people tended to believe that children learn language solely by imitating the adults around them. Chomsky agreed that individual words must be learned by experience, but he argued that genes could code into the brain categories and organization that form the basis of grammatical structure. We come into the world ready to distinguish different grammatical classes, like nouns and verbs and adjectives, and sensitive to the order in which words are spoken. Then, using this innate sensitivity, we quickly learn from listening to our parents how to organize our own language. [5] [6] For instance, if we grow up hearing Spanish, we learn that adjectives come after nouns (el gato amarillo, where gato means “cat” and amarillo is “yellow”), but if we grow up hearing English, we learn that adjectives come first (“the yellow cat”). Chomsky termed this innate sensitivity that allows infants and young children to organize the abstract categories of language the language acquisition device (LAD).
According to Chomsky’s approach, each of the many languages spoken around the world (there are between 6,000 and 8,000) is an individual example of the same underlying set of procedures that are hardwired into human brains. Each language, while unique, is just a set of variations on a small set of possible rule systems that the brain permits language to use. Chomsky’s account proposes that children are born with a knowledge of general rules of grammar (including phoneme, morpheme, and syntactical rules) that determine how sentences are constructed.
Although there is general agreement among psychologists that babies are genetically programmed to learn language, there is still debate about Chomsky’s idea that a universal grammar can account for all language learning. Evans and Levinson [7] surveyed the world’s languages and found that none of the presumed underlying features of the language acquisition device were entirely universal. In their search they found languages that did not have noun or verb phrases, that did not have tenses (e.g., past, present, future), and some that did not have nouns or verbs at all, even though a basic assumption of a universal grammar is that all languages should share these features. Other psychologists believe that early experience can fully explain language acquisition, and Chomsky’s language acquisition device is unnecessary. Nevertheless, Chomsky’s work clearly laid out the many problems that had to be solved in order to adequately explain how children acquire language and why languages have the structures that they do.
The two theories of language acquisition discussed in the text map to the nature versus nurture distinction. Proponents of the nurture view, such as learning theorists, maintain that language is, for the most part, acquired through principles of learning. Supporters of the nature view, such as Noam Chomsky, believe that the general foundation for grammatical parts of language is innate, though many important aspects of language are learned. For each of the following statements, select either Skinner’s learning theory or Chomsky’s LAD theory (LAD: language acquisition device).
Let’s ensure that you can identify the two theories of language acquisition discussed in the text: Skinner’s learning theory and Chomsky’s LAD theory. For each statement, select whether it is true or false.
Although it is less common in the United States than in other countries, bilingualism (the ability to speak two languages) is becoming increasingly frequent in the modern world. Nearly one-half of the world’s population grows up bilingual, as do about 18% of U.S. citizens.
In recent years, many U.S. states have passed laws outlawing bilingual education in schools. These laws are in part based on the idea that students will have a stronger identity with the school, the culture, and the government if they speak only English and in part based on the idea that speaking two languages may interfere with cognitive development.
Some early psychological research showed that, when compared with monolingual children, bilingual children performed more slowly when processing language, and their verbal scores were lower. But these tests were frequently given in English, even when this was not the child’s first language, and the children tested were often of lower socioeconomic status than the monolingual children. [1]
More current research controlled for these factors and found that although bilingual children may in some cases learn language somewhat more slowly than do monolingual children [2] , bilingual and monolingual children do not significantly differ in the final depth of language learning, nor do they generally confuse the two languages. [3] In fact, participants who speak two languages have been found to have better cognitive functioning, cognitive flexibility, and analytic skills in comparison to monolinguals. [4] Thus, rather than slowing language development, learning a second language seems to increase cognitive abilities.
Does bilingualism cause mental confusion? Is being bilingual a cognitive advantage? People have debated these questions for a long time, but the answers aren’t simple. For this exercise, please read a brief article that discusses the work of some of the leading researchers in bilingualism. Then answer a few questions based on your reading.
Nonhuman animals have a wide variety of systems of communication. Some species communicate using scents; others use visual displays, such as baring the teeth, puffing up the fur, or flapping the wings; and still others use vocal sounds. Male songbirds, such as canaries and finches, sing songs to attract mates and to protect territory, and chimpanzees use a combination of facial expressions, sounds, and actions, such as slapping the ground, to convey aggression. [1] Honeybees use a “waggle dance” to direct other bees to the location of food sources. [2] The language of vervet monkeys is relatively advanced in the sense that they use specific sounds to communicate specific meanings. Vervets make different calls to signify that they have seen either a leopard, a snake, or a hawk. [3]
As mentioned earlier, despite the variety and sophistication of animal communication systems, none comes close to human language in its ability to express a variety of ideas and subtle differences in meaning. For years, scientists have wondered if it is the communication systems that are limited or if other animals are simply unable to acquire a system as advanced as human language. Quite a few efforts have been made to learn more by attempting to teach human language to other animals, especially to chimpanzees and their cousins, bonobos.
Despite their wide abilities to communicate, efforts to teach animals to use language have had only limited success. One of the early efforts was made by Catherine and Keith Hayes, who raised a chimpanzee named Viki in their home along with their own children. But Viki learned little and could never speak. [4] Researchers speculated that Viki’s difficulties might have been in part because she could not create the words in her vocal cords, and so subsequent attempts were made to teach primates to speak using sign language or using boards on which they can point to symbols.
Allen and Beatrix Gardner worked for many years to teach a chimpanzee named Washoe to sign using ASL. Washoe, who lived to be 42 years old, could label up to 250 different objects and make simple requests and comments, such as “please tickle” and “me sorry.” [5] Washoe’s adopted daughter Loulis, who was never exposed to human signers, learned more than 70 signs simply by watching her mother sign.
The most proficient nonhuman language speaker is Kanzi, a bonobo who lives at the Language Learning Center at Georgia State University. [6] As you can see in the following video clip, Kanzi has a propensity for language that is in many ways similar to humans’. He learned faster when he was younger than when he got older, he learns by observation, and he can use symbols to comment on social interactions rather than simply for food treats. Kanzi can also create elementary syntax and understand relatively complex commands. Kanzi can make tools and can even play Pac-Man.
And yet even Kanzi does not have a true language in the same way that humans do. Human babies learn words faster and faster as they get older, but Kanzi does not. Each new word he learns is almost as difficult as the one before. Kanzi usually requires many trials to learn a new sign, whereas human babies can speak words after only one exposure. Kanzi’s language is focused primarily on food and pleasure and only rarely on social relationships. Although he can combine words, he generates few new phrases and cannot master syntactic rules beyond the level of about a 2-year-old human child. [7]
Watch the video about Kanzi, then answer the questions.
In sum, although many animals communicate, none of them have a true language. With some exceptions, the information that can be communicated in nonhuman species is limited primarily to displays of liking or disliking and related to basic motivations of aggression and mating. Humans also use this more primitive type of communication, in the form of nonverbal behaviors such as eye contact, touch, hand signs, and interpersonal distance, to communicate their like or dislike for others, but they (unlike animals) supplant this more primitive communication with language. Although other animal brains share similarities to ours, only the human brain is complex enough to create language. What is perhaps most remarkable is that although language never appears in nonhumans, language is universal in humans. All humans, unless they have a profound brain abnormality or are completely isolated from other humans, learn language.
Psychologists have long debated how to best conceptualize and measure intelligence. These questions include how many types of intelligence there are, the role of nature versus nurture in intelligence, how intelligence is represented in the brain, and the meaning of group differences in intelligence.
Psychologists have studied human intelligence since the 1880s. As you will read, there are several theories of intelligence and a variety of tests to measure intelligence. In fact, some define intelligence as whatever an intelligence test measures. And most intelligence tests measure how much knowledge one has, or in other words, “school smarts.” Today, most psychologists define intelligence as a mental ability consisting of the ability to learn from experience, solve problems, and use knowledge to adapt to new situations.
In the early 1900s, the French psychologist Alfred Binet (1857–1911) and his colleague Théodore Simon (1873–1961) began working in Paris to develop a measure that would differentiate students who were expected to be better learners from students who were expected to be slower learners. The goal was to help teachers better educate these two groups of students. Binet and Simon developed what most psychologists today regard as the first intelligence test, which consisted of a wide variety of questions that included the ability to name objects, define words, draw pictures, complete sentences, compare items, and construct sentences.
Binet and Simon believed that the questions they asked their students, even though they were on the surface dissimilar, all assessed the basic abilities to understand, reason, and make judgments. And it turned out that the correlations among these different types of measures were in fact all positive; students who got one item correct were more likely to also get other items correct, even though the questions themselves were very different.
On the basis of these results, the psychologist Charles Spearman (1863–1945) hypothesized that there must be a single underlying construct that all of these items measure. He called the construct that the different abilities and skills measured on intelligence tests have in common the general intelligence factor (g). Virtually all psychologists now believe that there is a generalized intelligence factor, g, that relates to abstract thinking and that includes the abilities to acquire knowledge, to reason abstractly, to adapt to novel situations, and to benefit from instruction and experience. People with higher general intelligence learn faster.
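To make the logic of that inference concrete, here is a minimal simulation sketch in Python (the subtest names and all numbers are invented for illustration, not real test data): if each person's score on every subtest is simply his or her underlying general ability plus independent noise, then all of the subtests end up positively correlated with each other, which is exactly the pattern Spearman observed.

```python
import random
from statistics import correlation  # requires Python 3.10+

random.seed(1)
n = 500  # hypothetical number of test takers

# Each simulated person has a single underlying general ability, g.
g = [random.gauss(0, 1) for _ in range(n)]

# Three superficially different "subtests": each is g plus its own noise.
vocab   = [ability + random.gauss(0, 1) for ability in g]
arith   = [ability + random.gauss(0, 1) for ability in g]
spatial = [ability + random.gauss(0, 1) for ability in g]

# All pairwise correlations come out positive (around 0.5 in this setup).
print(correlation(vocab, arith))
print(correlation(vocab, spatial))
print(correlation(arith, spatial))
```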
Generalized intelligence factor, referred to as g, is assessed by having a person complete a variety of tasks. Some of the tasks that psychologists will have the person do are intended to measure key skill sets often needed to be successful in traditional school settings. They include the following:
Using what you have learned about g, determine what skill is being assessed in each of the following tasks.
Soon after Binet and Simon introduced their test, the American psychologist Lewis Terman (1877–1956) developed an American version of Binet’s test that became known as the Stanford-Binet Intelligence Test. The Stanford-Binet is a measure of general intelligence made up of a wide variety of tasks including vocabulary, memory for pictures, naming of familiar objects, repeating sentences, and following commands.
Although there is general agreement among psychologists that g exists, there is also evidence for specific intelligence (s), a measure of specific skills in narrow domains. One empirical result in support of the idea of s comes from intelligence tests themselves. Although the different types of questions do correlate with each other, some items correlate more highly with each other than do other items; they form clusters or clumps of intelligences.
One distinction is between fluid intelligence, which refers to the capacity to learn new ways of solving problems and performing activities, and crystallized intelligence, which refers to the accumulated knowledge of the world we have acquired throughout our lives. These intelligences must be different because crystallized intelligence increases with age—older adults are as good as or better than young people in solving crossword puzzles—whereas fluid intelligence tends to decrease with age.
Other researchers have proposed even more types of intelligences. L. L. Thurstone [1] proposed that there were seven clusters of primary mental abilities: word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory. But even these dimensions tend to be at least somewhat correlated, showing again the importance of g.
The goal of most intelligence tests is to measure g, the general intelligence factor. Good intelligence tests are reliable, meaning that they are consistent over time, and also demonstrate construct validity, meaning that they actually measure intelligence rather than something else. Because intelligence is such an important individual difference dimension, psychologists have invested substantial effort in creating and improving measures of intelligence, and these tests are now the most accurate of all psychological tests. In fact, the ability to accurately assess intelligence is one of the most important contributions of psychology to everyday public life.
Intelligence changes with age. A 3-year-old who could accurately multiply 183 by 39 would certainly be intelligent, but a 25-year-old who could not do so might be seen as unintelligent. Thus, understanding intelligence requires that we know the norms or standards in a given population of people at a given age. The standardization of a test involves giving it to a large number of people at different ages and computing the average score on the test at each age level.
It is important that intelligence tests be standardized periodically to determine that the average scores on the test at each age level remain the same; in other words, that the median score of 100 remains the same for each age level on the test over time. James Flynn, a New Zealand researcher, discovered that average performance on IQ tests rose by about 25 points between 1918 and 1995. [1] This is called the Flynn effect, referring to the observation that scores on intelligence tests worldwide have increased substantially over the past decades. Although the increase varies somewhat from country to country, the average increase is about 3 IQ points every 10 years. It is uncertain what causes this increase in intelligence on IQ tests, but some of the explanations for the Flynn effect include better nutrition, increased access to information, and more familiarity with multiple-choice tests. Whether people are actually getting smarter is debatable.
Each year from 1945 through 1985, all children in the fifth grade in the United States were given the California Scholastic Achievement Test, which was developed in 1944 and had not undergone any revisions or standardization. Later, a New Zealand researcher analyzed the patterns of these scores. Study the three line graphs on the chart below and complete the following questions.
Once the standardization has been accomplished, we have a picture of the average abilities of people at different ages and can calculate a person’s mental age, which is the age at which a person is performing intellectually. If we compare the mental age of a person to the person’s chronological age, the result is the intelligence quotient (IQ), a measure of intelligence that is adjusted for age. A simple way to calculate IQ is by using the following formula:
IQ = mental age ÷ chronological age × 100
A 10-year-old child who does as well as the average 10-year-old child has an IQ of 100 (10 ÷ 10 × 100), whereas an 8-year-old child who does as well as the average 10-year-old child would have an IQ of 125 (10 ÷ 8 × 100). Most modern intelligence tests are based on the relative position of a person’s score among people of the same age, rather than on the basis of this formula, but the idea of an intelligence “ratio” or “quotient” provides a good description of the score’s meaning.
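For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the ratio formula above (illustrative only; as the text notes, modern tests instead compare a person's score with those of others the same age):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Historical ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(10, 10))  # 100.0: a 10-year-old performing at the 10-year-old level
print(ratio_iq(10, 8))   # 125.0: an 8-year-old performing at the 10-year-old level
```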
Using the intelligence quotient formula, compute the required information for each of the persons described in the scenarios below.
A number of scales are based on the IQ. The Wechsler Adult Intelligence Scale (WAIS) is the most widely used intelligence test for adults. The current version of the WAIS, called the WAIS-IV, was standardized on 2,200 people ranging from 16 to 90 years of age. It consists of 15 different tasks, each designed to assess intelligence, including working memory, arithmetic ability, spatial ability, and general knowledge about the world (see the figure below). The WAIS-IV yields scores on four domains: verbal, perceptual, working memory, and processing speed. The reliability of the test is high (more than 0.95), meaning that scores from testing the same person on different occasions correlate very strongly, so a person will earn approximately the same score each time.
The WAIS-IV also shows substantial validity; it is correlated highly with other IQ tests such as the Stanford-Binet, as well as with criteria of academic and life success, including college grades, measures of work performance, and occupational level. It also shows significant correlations with measures of everyday functioning among the mentally retarded.
The Wechsler scale has also been adapted for preschool children in the form of the Wechsler Primary and Preschool Scale of Intelligence (WPPSI-III) and for older children and adolescents in the form of the Wechsler Intelligence Scale for Children (WISC-IV).
Let's now look at the concepts of reliability and validity. These are concepts that are easily confused with one another and, in fact, they are related. A valid test must be reliable, but the fact that a test is reliable does not mean it is valid. So what is the difference?
Validity refers to the degree to which a test or other measure of some psychological construct actually measures that construct. A valid measure of your self-confidence is a questionnaire or other measure that accurately indicates or predicts your true level of self-confidence. There are a couple of things to notice about this definition.
First, it says “the degree to which…”—that means that validity is not an all-or-none idea. Some tests are more valid than other tests. You will seldom see a test in general use that is absolutely invalid, because such a test will be noticed and discarded by people who want to study the construct being measured (e.g., self-confidence).
Second, validity is a very difficult characteristic to prove, particularly when you are trying to measure something as complex as self-esteem or level of depression. For this reason, any test in widespread use in psychology has many studies that attempt to determine how valid it is in measuring what it is trying to measure and enumerating its limitations.
Reliability refers to the degree to which a test keeps producing the same or similar results over repeated testing. In other words, reliability is another term for consistency. There are a couple of things to notice here, as well.
First, reliability, like validity, is not all-or-none. Some tests are more reliable than other tests.
Second, reliability is easier to establish than validity, because we can easily conduct research that allows us to see if a test gives the same answer on repeated testing.
A nice metaphor for thinking about validity and reliability comes from target shooting. Imagine that you go to a target range and shoot at a target for a bull’s eye. Here is what you hope you will see when you are done:
But let’s imagine that the target below is the one you produce:
What’s wrong? The hits are all clustered together, so you are very consistent. The trouble is that you are missing the place you are aiming for.
For your review, here are four drawings of targets that illustrate the various levels of validity and reliability.
Now let’s apply the concepts of validity and reliability to a psychological test designed to measure self-confidence. A person’s true self-confidence is the “center of the target.” The self-confidence test requires people to answer five questions, such as the ones below. The person reads each statement and then rates himself or herself on a scale of 1 to 5, with 1 representing strong disagreement with the statement and 5 representing strong agreement with the statement.
This is not a real self-confidence test but just an example. As you can see, the test will result in a minimum score of 5 and a maximum score of 25. A person’s score can then be compared to his or her actual or true self-confidence level to determine how closely the test predicts the respondent’s actual self-confidence. The closer the result is to the true self-confidence level, the higher the validity.
To help you with this activity, each person’s true level of self-confidence is provided in the chart below in the column labeled TRUTH. Obviously, in real life we never know a person’s true self-confidence level; that is exactly why we need a questionnaire. For this exercise, though, treat the TRUTH column as known so that you can judge how closely the test scores match it.
Imagine that you gave the self-confidence test to 10 people, and then you retested them a week later. The chart below lists each person’s test scores for Week 1 and Week 2. The two scores are compared to determine whether they are consistent with each other. The more consistent the test scores, the higher the test reliability. Now you want to determine the validity and reliability of the self-confidence test.
Here is another example with different scores for each person for weeks 1 and 2:
Following is another example that presents more difficulty in determining the level of validity and reliability. Let’s see if you can get this one correct.
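Although the activities above present the scores in charts, the underlying computations are ordinary correlations: reliability is the correlation between the Week 1 and Week 2 scores, and validity is the correlation between the test scores and the TRUTH column. Here is a minimal Python sketch; the ten scores below are made-up numbers, not the data from the activity charts:

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical data for 10 people (invented for illustration).
truth = [12, 18, 9, 22, 15, 11, 20, 14, 17, 24]   # "true" self-confidence
week1 = [13, 17, 10, 21, 16, 12, 19, 15, 16, 23]  # test scores, first week
week2 = [12, 18, 9, 22, 14, 13, 20, 14, 17, 24]   # retest one week later

# Consistency over repeated testing: reliability.
print(f"Reliability (Week 1 vs. Week 2): {correlation(week1, week2):.2f}")

# Agreement with the construct being measured: validity.
print(f"Validity (Week 1 vs. TRUTH):     {correlation(week1, truth):.2f}")
```

With these invented scores both correlations come out high, which corresponds to the "tight cluster centered on the bull's eye" target in the shooting metaphor.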
Depending on the design, intelligence tests measure achievement (what one has already learned) and aptitude (the ability to learn). A licensed psychologist who wants to evaluate a person’s mental abilities, for example when assessing a possible mental disorder, will typically measure IQ, which reflects both achievement and aptitude, using a test such as the Stanford-Binet or one of the Wechsler scales.
More familiar intelligence tests are aptitude tests that are designed to measure one’s ability to do well in college or in postgraduate training. Most U.S. colleges and universities require students to take an aptitude test such as the Scholastic Assessment Test (SAT) or the American College Test (ACT), and postgraduate schools require the Graduate Record Examination (GRE), Medical College Admission Test (MCAT), or the Law School Admission Test (LSAT). These tests are useful as one criterion for selecting students because they predict academic success in the programs they are designed for, particularly in the first year of the program. These aptitude tests also measure, in part, intelligence. Frey and Detterman [1] found that the SAT correlated highly (between about r = .7 and r = .8) with standard measures of intelligence, particularly the WAIS.
Aptitude tests are also used by industrial and organizational (I/O) psychologists in the process of personnel selection. Personnel selection is the use of structured tests to select people who are likely to perform well at given jobs. To develop a personnel selection test, I/O psychologists begin by conducting a job analysis in which they determine what knowledge, skills, abilities, and personal characteristics (KSAPs) are required for a given job. This is normally accomplished by surveying and/or interviewing current workers and their supervisors. Based on the results of the job analysis, I/O psychologists choose selection methods that are most likely to be predictive of job performance. Measures include tests of cognitive and physical ability and job knowledge tests, as well as measures of intelligence and personality.
For students of psychology, it is important to know about some of the famous researchers and psychologists who made valuable contributions to the field of intelligence.
Before discussing different views of intelligence and controversies related to interpreting intelligence, let’s look at the typical results that researchers get when they measure intelligence using the Wechsler Adult Intelligence Scale (WAIS), which you studied in the previous module.
The WAIS was most recently updated in 2008. A sample of 2,200 adults varying in age (16 to 90 years old), sex, race, ethnicity, and other factors was tested. Although the test has many questions, the scores are standardized so that the average performance is scored as 100. People who did better than average have scores above 100, and people who did worse than average have scores below 100.
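Mechanically, that rescaling is straightforward. The sketch below (in Python, with invented raw scores; real tests also norm separately within age groups) shows how raw scores from a norming sample can be mapped onto a scale with a mean of 100 and a standard deviation of 15:

```python
from statistics import mean, stdev

raw_scores = [38, 44, 51, 55, 62]  # hypothetical raw test scores
m, s = mean(raw_scores), stdev(raw_scores)

# Express each raw score in standard-deviation units, then map onto the IQ scale.
for raw in raw_scores:
    iq = 100 + 15 * (raw - m) / s
    print(f"raw {raw} -> IQ {iq:.0f}")
```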
When we list all the scores of people taking tests like the WAIS, including other intelligence tests as well as tests of aptitudes and various skills and traits, those scores frequently fall into a pattern called the normal distribution, or the bell curve. It looks like this:
Although this section is a little technical, understanding the normal distribution is useful for interpreting results not only of psychological studies but also of studies in many other fields. We focus on an IQ test—the WAIS—but what you will learn can be applied any time you see a bell curve.
In the following activities, you will learn the results of the study when the IQ scores from the 2,200 adults were analyzed.
The recent study tested 2,200 people, but for our purposes, let’s reduce the number to 16 people, and later we will discuss the results for all 2,200 people. Imagine that the 16 people pictured below participated in the IQ study. Each photo is labeled with the individual's IQ score.
Let’s see what happens when we organize them according to their IQ scores. Your task is to drag each person to an appropriate box below. You have only 16 people, so a lot of the boxes will be empty.
Start with the people with average IQ. Drag the pictures of the people with exactly average IQ (remember, average = 100 in IQ scores) into the purple boxes, starting with the bottom box.
Notice what happens as you get further to the right. What you see here with our little sample of 16 people is very much like what would happen if you had all 2,200 scores.
Now let’s eliminate the empty boxes.
What we have here is called a frequency distribution. It shows how frequently each score (in this case, each IQ score) appears in our group. Let’s see if you can read it accurately.
The group of people, or distribution, below looks like a triangle.
However, if all 2,200 people and all the possible IQ scores were represented, the shape would look like this:
The black line that goes above and around our 16 people has a distinctive shape, which gives the graph its name: bell curve.
It is also called the normal distribution. It shows how many people have each score on the IQ scale or the scale for any other test or measure (e.g., height, weight, achievement test score, and many others).
Let’s look at the bell curve without our little people inside.
Now imagine 2,200 people squeezed beneath our bell curve. Notice that the numbers are now gone from the Y-axis on the left, so you can talk only in terms of more or fewer people with any particular score.
When working with the bell curve, researchers often want to identify locations or regions. For instance, they might want to talk about the top 10% of IQ scores or the middle 50%. To do this, they break the curve into units; for IQ those units are 15 points wide, because IQ tests are constructed so that the standard deviation of scores is 15 points. This 15-point step is called a standard deviation.
For example, here is our bell curve, but notice that we have colored in a region that goes from the mean (IQ = 100) to 15 points above the mean (100 + 15 = 115). Because this segment covers exactly 15 points, it is one standard deviation wide.
To keep track of these units, we start counting from the mean (100) and count the standard deviation steps from the mean in each direction. Here is another picture of the bell curve with the standard deviation steps marked above the IQ scores.
And here is the area from the mean (100) to one standard deviation, or 15 points below the mean (100 − 15 = 85).
As you can see, the further you get away from the mean, the smaller the area under the curve in the 15-point units.
Here is the area between one and two standard deviations below the mean.
And now we go between two and three standard deviations below the mean.
We can keep going, but now you see there aren’t many people left as we get to the area between three and four standard deviations below the mean.
Let’s look more closely at this. Here is the bell curve with the region covering the first standard deviation above the mean colored in blue. Notice the numbers above this region: they show that this region is 34% of the total area under the curve (just a little more than one-third of the total). To put that in perspective, the number below it shows how many people out of that original group of 2,200 used to standardize the IQ test are in this area: 748 out of 2,200.
If we go the same distance on the other side of the mean, we now show two colored regions: one above the mean (100 to 115) and one below the mean (100 down to 85).
Just above each segment, you can see the percentage of the area under the curve and the number of people in that unit. On top, in red, you can see the sum of these colored-in areas. This figure says that 748 people out of 2,200 had IQ scores between 100 and 115 and another 748 had IQ scores between 100 and 85, so a total of 1,496 had IQ scores between 85 and 115. That is just a little more than two-thirds of the total set of 2,200. Of course, we’re not really interested in those 2,200 people. They simply represent the larger population of adults.
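The counts in these figures follow directly from the mathematics of the normal curve, and you can reproduce them with Python's standard library. In this sketch, note that the exact one-standard-deviation figure is about 34.1% per side, which the text rounds to 34%, so the computed head count comes out slightly above 1,496:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ scale: mean 100, standard deviation 15
n = 2200                           # size of the WAIS standardization sample

# Share of scores within one standard deviation of the mean (85 to 115).
share = iq.cdf(115) - iq.cdf(85)
print(f"Between 85 and 115: {share:.1%}, about {share * n:.0f} of {n} people")

# Where does the top 1% of scores begin? (Compare the gifted cutoff of ~135
# discussed later in this unit.)
print(f"Top 1% begins at IQ {iq.inv_cdf(0.99):.0f}")
```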
Here is a figure you can manipulate. Simply click on any of the areas under the curve, and it will change colors (blue above the mean and green below the mean). The area under the curve and the number of people in the sample of 2,200, as well as the totals for all the colored areas, will be shown.
In the previous section, we explored the normal distribution in relation to the IQ scores of samples of particular people. However, IQ scores in the general population are also normally distributed. The figure below displays the distribution of IQ scores in the general population.
One end of the distribution of intelligence scores is defined by people with very low IQ. Mental retardation is a generalized disorder ascribed to people who have an IQ below 70, who have experienced deficits since childhood, and who have trouble with basic life skills, such as self-care and communicating with others. [1] About 1% of the U.S. population, most of them males, fulfill the criteria for mental retardation, but some children who are diagnosed as mentally retarded lose the classification as they get older and better learn to function in society. A particular vulnerability of people with low IQ is that they may be taken advantage of by others, and this is an important aspect of the definition of mental retardation. [2] Mental retardation is divided into four categories: mild, moderate, severe, and profound. Severe and profound mental retardation is usually caused by genetic mutations or accidents during birth, whereas mild forms have both genetic and environmental influences.
One cause of mental retardation is Down syndrome, a chromosomal disorder leading to mental retardation caused by the presence of all or part of an extra 21st chromosome. The incidence of Down syndrome is estimated at 1 per 800 to 1,000 births, although its prevalence rises sharply in those born to older mothers. People with Down syndrome typically exhibit a distinctive pattern of physical features, including a flat nose, upwardly slanted eyes, a protruding tongue, and a short neck.
Societal attitudes toward individuals with mental retardation have changed over the past decades. We no longer use terms such as “moron,” “idiot,” or “imbecile” to describe these people, although these were the official psychological terms used to describe degrees of retardation in the past. Laws such as the Americans with Disabilities Act (ADA) have made it illegal to discriminate on the basis of mental and physical disability, and there has been a trend to bring the mentally retarded out of institutions and into our workplaces and schools. In 2002 the U.S. Supreme Court ruled that the execution of people with mental retardation is “cruel and unusual punishment,” thereby ending this practice. [3]
Having an extremely high IQ is clearly less of a problem than having an extremely low IQ, but there may also be challenges to being particularly smart. It is often assumed that schoolchildren who are labeled as “gifted” may have adjustment problems that make it more difficult for them to create social relationships. To study gifted children, Lewis Terman and his colleagues [4] selected about 1,500 high school students who scored in the top 1% on the Stanford-Binet and similar IQ tests (i.e., who had IQs of about 135 or higher) and tracked them for more than seven decades (the children became known as the “Termites” and are still being studied today). This study found, first, that these students were not unhealthy or poorly adjusted but rather were above average in physical health and were taller and heavier than individuals in the general population. The students also had above average social relationships—for instance, they were less likely to divorce than the average person. [5]
Terman’s study also found that many of these students went on to achieve high levels of education and entered prestigious professions, including medicine, law, and science. Of the sample, 7% earned doctoral degrees, 4% earned medical degrees, and 6% earned law degrees. These numbers are all considerably higher than what would have been expected from a more general population. Another study of young adolescents who had even higher IQs found that these students ended up attending graduate school at a rate more than 50 times higher than that in the general population. [6]
As you might expect based on our discussion of intelligence, kids who are gifted have higher scores on general intelligence (g). But there are also different types of giftedness. Some children are particularly good at math or science, some at automobile repair or carpentry, some at music or art, some at sports or leadership, and so on. There is a lively debate among scholars about whether it is appropriate or beneficial to label some children as gifted and talented in school and to provide them with accelerated special classes and other programs that are not available to everyone. Although doing so may help the gifted kids, [7] it also may isolate them from their peers and make such provisions unavailable to those who are not classified as gifted.
One advocate of the idea of multiple intelligences is the psychologist Robert Sternberg. Sternberg has proposed a triarchic (three-part) theory of intelligence that proposes that people may display more or less analytical intelligence, creative intelligence, and practical intelligence. Sternberg [1] [2] argued that traditional intelligence tests assess analytical intelligence, the ability to answer problems with a single right answer, but that they do not well assess creativity (the ability to adapt to new situations and create new ideas) or practicality (e.g., the ability to write good memos or to effectively delegate responsibility).
As Sternberg proposed, research has found that creativity is not highly correlated with analytical intelligence, [3] and exceptionally creative scientists, artists, mathematicians, and engineers do not score higher on intelligence tests than do their less creative peers. [4] Furthermore, the brain areas associated with convergent thinking, thinking that is directed toward finding the single correct answer to a given problem, are different from those associated with divergent thinking, the ability to generate many different ideas for or solutions to a single problem. [5] On the other hand, being creative often requires some of the basic abilities measured by g, including the abilities to learn from experience, to remember information, and to think abstractly. [6]
Studies of creative people suggest at least five components that are likely to be important for creativity:
The last aspect of the triarchic model, practical intelligence, refers primarily to intelligence that cannot be gained from books or formal learning. Practical intelligence represents a type of “street smarts,” or common sense, that is learned from life experiences. Although a number of tests have been devised to measure practical intelligence, [11] [12] research has not found much evidence that practical intelligence is distinct from g or that it is predictive of success at any particular tasks. [13] Practical intelligence may include, at least in part, certain abilities that help people perform well at specific jobs, and these abilities may not always be highly correlated with general intelligence. [11] On the other hand, these abilities or skills are very specific to particular occupations and do not seem to represent the broader idea of intelligence.
Another champion of the idea of multiple intelligences is the psychologist Howard Gardner. [14] [15] Gardner argued that it would be evolutionarily functional for different people to have different talents and skills and proposed that there are eight intelligences that can be differentiated from each other (shown in the table below). Gardner noted that some evidence for multiple intelligences comes from the abilities of autistic savants, people who score low on intelligence tests overall but who nevertheless may have exceptional skills in a given domain, such as math, music, art, or in being able to recite statistics in a given sport. [16]
Howard Gardner’s Eight Specific Intelligences: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic. From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
The idea of multiple intelligences has been influential in the field of education, and teachers have used these ideas to try to teach differently to different students. For instance, to teach math problems to students who have particularly good kinesthetic intelligence, a teacher might encourage the students to move their bodies or hands according to the numbers. On the other hand, some have argued that these “intelligences” sometimes seem more like “abilities” or “talents” rather than real intelligence. And there is no clear conclusion about how many intelligences there are. Are sense of humor, artistic skills, dramatic skills, and so forth also separate intelligences? Furthermore, and again demonstrating the underlying power of a single intelligence, the many different intelligences are in fact correlated and thus represent, in part, g. [17]
Although most psychologists have considered intelligence a cognitive ability, people also use their emotions to help them solve problems and relate effectively to others. Emotional intelligence is the ability to accurately identify, assess, and understand emotions, as well as to effectively control one’s own emotions. [18] [19]
The idea of emotional intelligence is seen in Howard Gardner’s interpersonal intelligence (the capacity to understand the emotions, intentions, motivations, and desires of other people) and intrapersonal intelligence (the capacity to understand oneself, including one’s emotions). Public interest in and research on emotional intelligence became widespread following the publication of Daniel Goleman’s best-selling book, Emotional Intelligence: Why It Can Matter More Than IQ. [20]
Mayer and Salovey [19] developed a four-branch model of emotional intelligence that describes four fundamental capacities or skills. More specifically, this model defines emotional intelligence as the ability to (1) accurately identify emotions, (2) use emotions to facilitate thinking, (3) understand emotions, and (4) manage emotions.
There are a variety of measures of emotional intelligence. [21] [22] One popular measure, the Mayer-Salovey-Caruso Emotional Intelligence Test, includes items assessing the ability to understand, experience, and manage emotions.
One problem with emotional intelligence tests is that they often do not show a great deal of reliability or construct validity. [23] Although it has been found that people with higher emotional intelligence are also healthier, [24] findings are mixed about whether emotional intelligence predicts life success—for instance, job performance. [25] Furthermore, other researchers have questioned the construct validity of the measures, arguing that emotional intelligence really measures knowledge about what emotions are, not necessarily how to use those emotions, [26] and that emotional intelligence may actually be a personality trait, a part of g, or a skill that applies only in some specific situations—for instance, academic and work settings. [27]
Although measures of the ability to understand, experience, and manage emotions may not predict effective behaviors, another important aspect of emotional intelligence—emotion regulation—does. Emotion regulation is the ability to control and productively use one’s emotions. Research has found that people who are better able to override their impulses to seek immediate gratification and who are less impulsive also have higher cognitive and social intelligence. They have better SAT scores, are rated by their friends as more socially adept, and cope with frustration and stress better than those with less skill at emotion regulation. [28] [29] [30]
Because emotional intelligence seems so important, many school systems have designed programs to teach it to their students. However, the effectiveness of these programs has not been rigorously tested, and we do not yet know whether emotional intelligence can be taught or if learning it would improve the quality of people’s lives. [31]
The Mayer-Salovey-Caruso Emotional Intelligence Test includes questions about various abilities: identifying emotions, facilitating thinking, understanding emotions, and managing emotions.
Indicate which of the four branches of emotional intelligence is being assessed for each of the following items. That is, is the question assessing the ability to (1) identify emotions, (2) facilitate thinking, (3) understand emotions, or (4) manage emotions?
What do you think intelligence is? Is it something that you are born with, largely inherited from your parents, leaving you with little room for improvement? Or is it something that can be changed through hard work and by taking advantage of opportunities to grow intellectually?
Your personal answer to this question turns out to be surprisingly important. It may even affect your intelligence! Stanford University psychologist Carol Dweck has spent her career studying how people’s beliefs about their own abilities—particularly mental abilities like intelligence—influence the kinds of challenges they give themselves. In much of her research, she has studied students, from their early prekindergarten days through college.
Dweck identified two broad “theories of intelligence” that people—from young children to mature adults—hold. Some people have an “entity” theory of intelligence. They believe that their intelligence is determined by factors present at birth, particularly related to their genetic inheritance. According to this theory, intelligence is a relatively unchangeable fact about who you are and about your potential to excel. You may work hard, but intelligence will always act as a limit for some people and as a supercharged fuel for others. Other people hold an “incremental” theory of intelligence. They believe that intelligence can be changed, particularly through efforts to learn and to excel. They believe that genetic factors are only a starting point, and people’s future competencies are not determined by their initial strengths and weaknesses.
These two theories of intelligence would only be vaguely interesting if they didn’t influence people’s behavior. But they do. It turns out that students who hold the entity theory of intelligence tend to avoid academic challenges. When given the opportunity to work on a really challenging task—with opportunity for success but also a real possibility of failure—they are less likely to take the opportunity than are their classmates who hold an incremental theory of intelligence. The students who believe that intelligence is unchangeable—the entity theory holders—are more likely to choose a task that they already know will lead to success.
Attitudes toward failure can also be predicted by knowing which theory a student has about intelligence. Students who have an entity theory of intelligence tend to interpret failure—in academics and in other aspects of life—as a message about their own inherent limitations, so failure or even the anticipation of failure reduces their motivation to work on something. For them, the way to protect their self-esteem is to avoid failure. For the students who have an incremental theory of intelligence, failure is more often seen as a challenge that can actually increase motivation. These students are more likely than their entity theorist classmates to see failure as an opportunity to discover and test their potential, thus inspiring them to try to see what they can do.
Everything you have already read in this unit should have made it clear to you that intelligence is a difficult quality to define and measure. Nevertheless, we do know that IQ—the standard measure of intelligence—can change. For example, children who are unable to attend school for long periods due to war or extended illness show IQ levels 2 standard deviations (30 IQ points) below their peers who are attending school. [1] On the positive side, research has shown that children, particularly children from low socioeconomic groups, can improve in IQ if they are placed in an enriched prekindergarten program and then in an elementary school of sufficient quality to maintain these gains. [2] Sadly, if children are only placed in enriched preschools, but later attend academically poor elementary schools, their IQ gains diminish or even disappear.
Perhaps the most convincing discovery for understanding whether intelligence can change or not comes from the work of James Flynn, who analyzed IQ data from 14 nations spanning a period from the beginning of the 20th century to the late 1980s. Later studies by Flynn and others have added another 16 countries to the list, and these more recent results are consistent with the initial findings. Flynn used cross-sectional testing, meaning that different people were tested at each point in time, rather than longitudinal testing, where the same people would have been tracked across the time period of the study.
What Flynn found was that countries that were fully modern, with a thriving middle class, effective school systems, and good employment opportunities, showed gains in IQ that averaged about 3 points per decade. This may seem like a small amount, but changes of this magnitude are actually astounding. Across a 50-year period, the average IQ of people in these countries increased by approximately 15 IQ points, a full standard deviation (see the Bell Curve module if you have forgotten what a standard deviation is). These gains were not due to modifications of the tests themselves; they reflect genuine improvements in performance after controlling for any changes to the tests. Further confirmation of this progressive improvement in IQ—a change now called “the Flynn effect”—comes from data from countries that modernized in the mid-20th century (e.g., Kenya and some Caribbean nations). The people in these countries did not show substantial changes prior to modernization, but after people’s socioeconomic status changed, the same IQ gains that characterized the countries that modernized earlier—about 3 IQ points per decade—were recorded.
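To appreciate just how large a one-standard-deviation shift is, it helps to work the numbers. The sketch below is a simple illustration of the arithmetic described above (assuming the usual IQ scale with a mean of 100 and a standard deviation of 15): a person scoring at the old average of 100 would sit near the 16th percentile of the renormed distribution.

```python
# The Flynn-effect arithmetic: 3 IQ points per decade over five decades
# is 15 points, a full standard deviation on the usual IQ scale.
from scipy.stats import norm

OLD_MEAN, SD = 100, 15
GAIN = 3 * 5                       # 3 points/decade for 50 years
NEW_MEAN = OLD_MEAN + GAIN         # 115

z = (OLD_MEAN - NEW_MEAN) / SD     # -1.0: one SD below the new mean
print(f"Gain over 50 years: {GAIN} points ({GAIN / SD:.0f} SD)")
print(f"A score of {OLD_MEAN} falls at roughly the {norm.cdf(z) * 100:.0f}th "
      f"percentile of the renormed curve")
```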
The reasons for these general changes in IQ are not completely understood. It is likely that improvements to nutrition are important, along with better schooling, more stimulating jobs, and even changes in childbearing practices. For example, with fewer children per family, a phenomenon common in more economically developed countries, parents can devote more time to each child, leading to more opportunities for the child to learn effectively from adults.
These changes may not go on forever. Some countries that were already modernized in the late 19th century (e.g., Scandinavian countries) were great examples of the Flynn effect when Flynn initially reported his discovery in the 1980s, but have stopped showing IQ improvements in recent years. Other countries, including the United States and Great Britain, continue to show the 3 IQ points per decade improvement, but there is no guarantee that this change will be sustained in future decades. Nevertheless, the Flynn effect along with the effects of enriched pre-kindergarten programs mentioned earlier clearly show that intelligence—at least as measured by IQ tests—is something that can be improved, both at the level of individuals and at the level of an entire nation.
From the earliest days of IQ testing, people have wondered about group differences in intelligence. You will probably not be surprised to learn that studies of differences between groups in intelligence can easily feed stereotypes and prejudices, and raise questions about testing biases and even the integrity of the researchers.
You might think that the question of differences between men and women could easily be resolved by analyzing IQ tests for thousands of people and simply reporting the results. But it turns out that this won’t work. The most commonly used IQ test, the WAIS, is regularly adjusted to eliminate questions that produce differences between men and women. Consequently, the fact that there are no gender differences on the WAIS is not interesting; the test is designed to eliminate the possibility of differences. The goal of this adjustment is to avoid including biased questions, but it means that we need to look elsewhere to answer the question of IQ differences between men and women.
Using advanced statistical techniques, researchers have been able to extract the necessary information from tests not specifically designed to measure IQ. For instance, Arthur Jensen, whose view we will discuss further in the section on race and intelligence, studied results from tests that are strongly related to IQ tests (“loaded heavily on g”) but have not been adjusted to eliminate gender differences. Jensen [1] found minor differences between men and women on tests of specific abilities, but no overall difference in average intelligence. Using a different strategy, James Flynn [2] looked at results from the Raven’s Progressive Matrices test, a well-regarded nonverbal measure of intelligence, and found that males and females did not differ in either childhood or adult samples. The generally accepted view today is that the average IQs of men and women are the same.
Note: For the following activities, the red distributions correspond to females and the blue distributions correspond to males.
This is not the end of questions about sex differences in intelligence, however. In 2005, Lawrence Summers, then president of Harvard University, discussed reasons that far more men than women go into advanced positions in science and engineering. Citing published research, Summers suggested that while there may be no difference between the average intelligence of men and women, men may be more variable in their intelligence.
Summers’s claim is not without some foundation. Mental retardation (i.e., IQ below 70) is about 3.6 times more common in males than in females. [3] Note that the diagnosis of mental retardation is based on more than scores on a standard IQ test, but this fact is consistent with the low end of the blue curve in the figure: more males than females occupy that lower region. Summers’s statement about the high end of the distributions is based primarily on data showing more males than females among the highest scorers on tests of mathematics and quantitative reasoning. For instance, in the 1980s, twelve times more boys than girls scored above 700 (out of a possible 800) on the SAT. [4] These results are consistent with the high extreme depicted in the figure: the blue curve for males is higher (more individuals) on the right than the red curve for females.
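The “greater male variability” idea is easy to explore numerically. The sketch below is our own illustration; the 10% difference in standard deviations is an assumed value chosen for demonstration, not a figure from the research cited here. It gives two normal curves the same mean of 100 but different spreads, then compares the proportion of each group beyond several cutoffs.

```python
# Two normal curves with identical means but different spreads: compare
# the share of each group beyond several extreme cutoffs.
from scipy.stats import norm

MEAN, FEMALE_SD = 100, 15
MALE_SD = FEMALE_SD * 1.10    # assumed 10% greater male variability (illustrative)

for cutoff in (70, 130, 160):
    if cutoff < MEAN:         # lower tail: proportion scoring below the cutoff
        male = norm.cdf(cutoff, MEAN, MALE_SD)
        female = norm.cdf(cutoff, MEAN, FEMALE_SD)
    else:                     # upper tail: proportion scoring above the cutoff
        male = norm.sf(cutoff, MEAN, MALE_SD)
        female = norm.sf(cutoff, MEAN, FEMALE_SD)
    print(f"IQ {cutoff}: male-to-female ratio in this tail is about {male / female:.1f}")
```

Even though the two curves have the same average, the more variable group is overrepresented at both extremes, and the overrepresentation grows the farther out in the tails you look.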
Note: For this activity, the red distributions correspond to females and the blue distributions correspond to males.
Now we get to the controversial part of the IQ debate for gender. Professor Summers of Harvard University suggested that women’s intelligence—particularly as related to mathematical thinking—might be less likely to extend to the genius level than men’s intelligence.
Why do the spreads of scores for males and females differ on intelligence-related measures? Most researchers think that the male and female distributions would be about the same at the lower end if males were not more vulnerable to genetic and prenatal factors that can hurt the development of brain structures related to thinking. Some X-chromosome regions have been linked to mental retardation, and because males have only one X chromosome, they lack a second copy of those genes to compensate for a damaged one. Furthermore, prenatal steroidal hormones influence brain structures related to intelligence, and male fetuses are exposed to much higher levels of these steroids than are female fetuses. Estrogen, an important female hormone, may be a protective factor for girls and women, reducing the impact of biological factors that can impair intellectual functioning.
At the high extreme, genetic and other biological factors favoring males have not been found. Instead, experience appears to be the critical factor. For instance, differences between men and women in mathematics and other quantitative skills are rapidly diminishing as girls and women take more mathematics-related courses in school and are encouraged to excel in these areas. The 12:1 ratio of males to females at the top of the quantitative SAT distribution comes from data collected in the 1980s; sex differences have declined dramatically since then, with more recent studies showing about a 3:1 ratio (three times more boys than girls) at the highest levels of SAT scores. It is impossible to know whether these trends will continue, but such a dramatic change in a few decades strongly supports the hypothesis that most or all of the male-female differences at the high extreme of IQ-related tests can be explained by educational and motivational differences, and those differences are quickly disappearing.
Note: For this activity, the red distributions correspond to females and the blue distributions correspond to males.
In the text, you learned that most psychologists (as well as scholars in other fields) disagreed with Dr. Summers’s suggestion that there may be a genetic reason for higher male performance on the highest extreme of quantitative measures related to IQ. See the text to learn some of the reasons for rejecting the genetic hypothesis. The idea that there may be differences between men and women on the lower extreme of the IQ distribution is less controversial.
Intelligence tests are adjusted to eliminate gender differences, but racial and ethnic differences have been more resistant to test changes. Prior to the civil rights era of the 1960s, little attention was paid to choices of content and wording that might bias the tests in favor of one racial or ethnic group over another. During the 1960s and 1970s, the validity and fairness of IQ tests and other achievement and aptitude tests were hotly debated. Among the positive results of that debate were the development of now-standard procedures for detecting and eliminating some types of unfair items and an increased sensitivity among testers to potential biases in the content and scoring of tests. Nevertheless, differences in culture and experience are so complex that no procedure or statistical test can detect every item or practice that is unfair to some individuals or groups. [1]
In the early 20th century, it was not unusual to attribute differences in average IQ scores among racial and ethnic groups to biological causes, an interpretation sometimes called the hereditarian theory of IQ. [2] The belief was that some groups had evolved further than others, leading these so-called superior groups to function better intellectually. For instance, at the beginning of the 20th century, immigrants from Italy to America had a median tested IQ of 87, nearly a standard deviation (15 points) below the average of 100. One of the most respected figures in intelligence testing at the time, Henry Goddard, concluded that approximately 80% of Italian immigrants were “feeble-minded,” and he made similar claims about immigrants from Russia and the Austro-Hungarian empire. Goddard and many others attributed this supposed intellectual deficiency to hereditary factors, and he recommended that all immigrants to the U.S. be tested for intelligence and that those with low scores be barred from entry. What we now know is that the intelligence tests were given in English, not the immigrants’ native languages, and furthermore contained culturally specific phrases. For example, in America you may often hear the phrase “to kill two birds with one stone” and understand it to mean getting two jobs done with one action; for someone not accustomed to the phrase, it means very little. Today, Italian Americans as a group score slightly above 100 on average, and other immigrant groups that, like those of Italian descent, have integrated with the dominant white culture show similar gains. This pattern is strong evidence that the early intelligence tests were heavily culturally biased and that their results were widely misconstrued.
The hereditarian theory has not completely disappeared. Most notably, in 1994, a book called The Bell Curve by two well-known social scientists, Richard Herrnstein and Charles Murray, forcefully argued that some IQ differences between races are genetic and unlikely to disappear regardless of social and educational opportunities. [3] Not surprisingly, the book immediately led to intense and often acrimonious debate. In response to the debate, the American Psychological Association (APA) asked a group of highly regarded scientists, headed by respected psychologist Ulric Neisser of Emory University, to review the scientific literature that related to IQ differences among racial and ethnic groups, as well as sex differences.
The APA Task Force [4] started by noting that “intelligence” is a very complex quality and even experts disagree about its definition. Furthermore, all IQ tests assess only a limited number and range of abilities, inevitably leaving many other aspects of intelligence untested.
The APA Task Force report summarized racial and ethnic group average IQ scores from the mid-1990s. Both Asian Americans and Caucasian Americans had averages around 100, the standardized average for IQ scores. African Americans averaged around 90, a 5-point improvement over the 15-point gap (average IQ of 85) found in earlier decades. Hispanic Americans scored in the mid-90s on average.
After reviewing evidence related to IQ, the APA Task Force concluded that there is no credible evidence for a genetic explanation of racial and ethnic differences in IQ. This does not mean that genes have no influence on IQ; that link has been firmly established, and it is one reason that parents’ IQs are correlated with their children’s IQs. But that within-family relationship cannot be extended to explain differences between groups. To understand how genetic factors can influence the relationship between the IQs of parents and their children while not explaining differences between racial groups, work through the following activity.
People often confuse causes of individual differences (e.g., why you are a better athlete than your best friend) with group differences (e.g., why people from Africa currently dominate distance running). The mistake is to believe that the cause of some difference at the individual level must follow the same rules as the cause of a difference at the group level. But let’s see if this is necessarily true.
Imagine that you have two varieties of a plant. The orange variety has developed under excellent lighting conditions. Notice that some of them are tall, some are medium sized, and some are short. This is just normal variability that we find with almost any characteristic of a living thing that you measure. A second variety of the plant, with blue flowers, can be seen on the right. As with the orange variety, some are relatively tall, some are relatively medium in height, and some are relatively short.
Now we are going to see what happens when each of these plants has offspring. What do the plants that grow from each of these look like?
Let’s also imagine that this strong relationship between the relative heights of parents and offspring is heavily influenced by genes: tall plants have tall offspring because tall parents pass on genes that maximize growth, and short parents have short offspring because they pass on genes that minimize growth.
Now it is time to do some transplanting. Imagine that you wonder if the blue plants are getting enough sunlight. You plant some of the blue variety in the same sunny conditions that the orange variety already enjoys. Let’s see what happens.
How could this happen? Even though genes determine relative height (in our cartoon flowers anyway), the genes don’t determine the absolute height. Better or worse growing conditions can determine just how tall the plant can possibly grow, but, within any given set of conditions, genes will determine which are the tall, medium, and short plants.
This idea is not at all unrealistic. At the end of World War II, people from the United States were the tallest people in the world. This did not mean that everyone from the United States was taller than everyone from any other country. There were still tall, medium, and short U.S. citizens and tall, medium, and short citizens of other countries, and tall parents still tended to have tall children, short parents tended to have short children, and so on. At that time, because the United States was such a dominant center for agriculture, average nutrition in the United States was superior to average nutrition in most other places, even though there were then, as now, people going hungry even in the United States. Since that time, nutrition has improved in many other countries and now citizens of the U.S. are not, on average, among the top 10 in terms of height. The tallest people in the world today are from the Netherlands. People from the U.S. have not shrunk. People from these other countries have gotten taller.
To make the point even more strongly, imagine that you planted both the orange and blue varieties in some different soil and this is what happened:
Notice that the orange plants are unaffected by the new soil, so they are the same as before. But the blue variety loves the new soil and thrives in it. Now the variety that used to be shorter on average—the blue variety—is taller on average. But there is still the same genetic link: Tall plants have tall offspring, medium plants have medium offspring, and short plants have short offspring.
Now let’s apply this same idea to intelligence.
Imagine that there are two groups of people, represented by the orange and blue borders below. The orange group lives in an intellectually stimulating environment, while the blue group lives in a less varied and intellectually challenging environment. This graphic represents the IQ levels of members of the two groups.
Now imagine that the blue group experiences an improvement of circumstances, moving to a more stimulating and intellectually challenging environment. Here is what happens:
Under intellectually enriched conditions, the blue group has now improved its average IQ. But there is still a direct relationship between parent and offspring IQ. It is possible for genes to determine the relationship between parent and offspring IQ even if environmental factors determine the average level of each group’s IQ. Of course, in the real world, unlike our cartoon world, both genes and environmental factors influence IQ, so the relationship between parent and offspring IQ is not so easily predicted as it is in our cartoons.
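If you like, you can convince yourself of this with a small simulation. The sketch below is our own illustration with made-up numbers: each group’s average is set purely by an environment term, while each offspring’s deviation from its group average partly depends on its parent’s deviation. The parent-offspring correlation is strong within both groups even though the entire group difference comes from the environment.

```python
# Toy model: offspring deviation from the group mean is partly inherited from
# the parent's deviation; the group means themselves are set by environment only.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
GROUP_MEANS = {"orange": 105.0, "blue": 95.0}   # environment sets these (made-up)

for group, mean in GROUP_MEANS.items():
    parent_dev = rng.normal(0, 15, N)                     # parents' deviations from group mean
    child_dev = 0.7 * parent_dev + rng.normal(0, 10, N)   # partly inherited, partly noise
    parent_iq, child_iq = mean + parent_dev, mean + child_dev
    r = np.corrcoef(parent_iq, child_iq)[0, 1]
    print(f"{group}: mean child IQ {child_iq.mean():.1f}, parent-child r = {r:.2f}")
```

In this toy world the parent-child correlation is about .7 in both groups, yet the 10-point gap between the groups owes nothing to genes.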
Although the hereditarian hypothesis is a logically possible explanation of differences between racial and ethnic groups, research does not support it: there is no empirical evidence that racial or ethnic groups differ genetically in ways that relate to intelligence. The APA Task Force also reviewed the research on biased tests and biased testing procedures and found that these factors cannot explain the size of the group differences. Similarly, they rejected differences in socioeconomic status as an adequate explanation. In fact, although they unequivocally rejected the hereditarian explanation, they concluded that “at this point, no one knows what causes this differential,” where the “differential” is the group difference in IQ.
Even if we cannot clearly identify the reasons for racial and ethnic differences in IQ, we do know that these differences can change, just as we suggested in the Learn by Doing exercise. For example, Dickens and Flynn [5] looked at IQ scores from 1972 to 2002 and found that the difference between blacks and whites narrowed by more than 5 IQ points during this period, about one-third of the gap between the groups.
An adoption study by Moore [6] looked at black children adopted into black and white families, where the children adopted by the white families could expect to receive the benefits of greater affluence and the social and cultural support of the dominant group. On later testing, children adopted into white families had IQs that were 13 points higher than those of children adopted into black families. This is almost the entire 15-point difference reported between the two races at that time. These results and others suggest that racial differences in IQ are largely or completely the result of differences in experience. It is particularly interesting that black children measured at age 4 are only about 5 IQ points below white children of the same age, while the group difference grows to about 17 points by age 24, a finding that strongly suggests that the diverging life experiences of black and white children may produce these outcomes.
There are many ways that experience can influence intelligence. Socioeconomic status (i.e., income and social opportunities) can affect the type of neighborhood a person can afford to live in, the quality of schooling and likelihood of preschool educational opportunities, and likelihood of taking advantage of educational activities (e.g., visiting museums and historical sites). Differences in parental education levels can have a huge impact. For instance, Hart and Risley [7] studied interactions between adults and children in families differing in socioeconomic status and education. This was not a race difference study. They found that children of professional-level parents heard three times as many words and far more complex and unusual words than did children of unemployed parents, and children of working-class parents heard twice as many words as the children of unemployed parents. Although a word count like this may seem unimportant, exposure to language is very important for development of communication skills and categories of knowledge.
It is impossible to know for certain at this point why racial groups differ in measured IQ. However, it is clear that many social and environmental factors can influence intellectual development and that racial groups differ substantially in typical social experiences. Furthermore, as genetics has matured as a science, the old sharp distinction between nature and nurture has blurred. We now understand that environmental factors influence genetic expression from the moment of conception to the end of life, so differences between groups that differ in typical experiences are to be expected. Research is slowly unraveling those influences.
Although intelligence tests may not be culturally biased, the situation in which one takes a test may be. One environmental factor that may affect how individuals perform and achieve is their expectations about their ability at a task. In some cases these beliefs may be positive, and they have the effect of making us feel more confident and thus better able to perform tasks. For instance, research has found that because Asian students are aware of the cultural stereotype that “Asians are good at math,” reminding them of this fact before they take a difficult math test can improve their performance on the test. [1] On the other hand, sometimes these beliefs are negative, and they create negative self-fulfilling prophecies such that we perform more poorly just because of our knowledge about the stereotypes.
In 1995 Claude Steele and Joshua Aronson tested the hypothesis that the differences in performance on IQ tests between Blacks and Whites might be due to the activation of negative stereotypes. [2] Because Black students are aware of the stereotype that Blacks are intellectually inferior to Whites, this stereotype might create a negative expectation, which might interfere with their performance on intellectual tests through fear of confirming that stereotype.
In support of this hypothesis, the experiments revealed that Black college students performed worse (in comparison to their prior test scores) on standardized test questions when this task was described to them as being diagnostic of their verbal ability (and thus when the stereotype was relevant), but that their performance was not influenced when the same questions were described as an exercise in problem solving. And in another study, the researchers found that when Black students were asked to indicate their race before they took a math test (again activating the stereotype), they performed more poorly than they had on prior exams, whereas White students were not affected by first indicating their race.
Steele and Aronson argued that thinking about negative stereotypes that are relevant to a task that one is performing creates stereotype threat—performance decrements that are caused by the knowledge of cultural stereotypes. That is, they argued that the negative impact of race on standardized tests may be caused, at least in part, by the performance situation itself. Because the threat is “in the air,” Black students may be negatively influenced by it.
Research has found that stereotype threat effects can help explain a wide variety of performance decrements among those who are targeted by negative stereotypes. For instance, when a math task is described as diagnostic of intelligence, Latinos and Latinas perform more poorly than do Whites. [3] Similarly, when stereotypes are activated, children with low socioeconomic status perform more poorly in math than do those with high socioeconomic status, and psychology students perform more poorly than do natural science students. [4] [5]
Even groups who typically enjoy advantaged social status can be made to experience stereotype threat. White men perform more poorly on a math test when they are told that their performance will be compared with that of Asian men, [6] and Whites perform more poorly than Blacks on a sport-related task when it is described to them as measuring their natural athletic ability. [7] [8]
Research has found that stereotype threat is caused by both cognitive and emotional factors. [9] On the cognitive side, individuals who are experiencing stereotype threat show an increased vigilance toward the environment as well as increased attempts to suppress stereotypic thoughts. Engaging in these behaviors takes cognitive capacity away from the task. On the affective side, stereotype threat occurs when there is a discrepancy between our positive concept of our own skills and abilities and the negative stereotypes that suggest poor performance. These discrepancies create stress and anxiety, and these emotions make it harder to perform well on the task.
Stereotype threat is not, however, absolute; we can get past it if we try. What is important is to reduce the self-doubts that are activated when we consider the negative stereotypes. Manipulations that affirm positive characteristics about the self or one’s social group are successful at reducing stereotype threat. [10] [11] In fact, just knowing that stereotype threat exists and may influence our performance can help alleviate its negative impact. [12]
The following video clip discusses stereotype threat. Watch the video and complete the table below.
The video described a study by Jeff Stone that measured the effects of stereotype threat on Black and White athletes’ performance on a test of athletic aptitude. The study consisted of two conditions. In one condition, participants were told that the test measured natural athletic ability, and in the other condition participants were told that the test measured ability to think strategically.
During the 1970s, American millionaire Robert Klark Graham began one of the most controversial and unique sperm banks in the world. He called it the Repository for Germinal Choice. The sperm bank was part of a project that attempted to combat the “genetic decay” Graham saw all around him. He believed human reproduction was experiencing a genetic decline, making for a population of “retrograde humans,” and he was convinced that the way to save the human race was to breed the best genes of his generation. [1]
Graham began his project by collecting sperm samples from the most intelligent and highly achieving people he could find, including scientists, entrepreneurs, athletes, and even Nobel Prize winners. Then he advertised for potential mothers, who were required to be married to infertile men, educated, and financially well-off. Graham mailed out catalogs to the potential mothers, describing the donors using code names such as “Mr. Grey-White,” who was “ruggedly handsome, outgoing, and positive, a university professor, expert marksman who enjoys the classics,” and “Mr. Fuchsia,” who was an “Olympic gold medalist, tall, dark, handsome, bright, a successful businessman and author.” [1] When the mother had made her choice, the sperm sample was delivered by courier and insemination was carried out at home. Before it closed following Graham’s death in 1999, the repository claimed responsibility for the birth of 228 children.
But did Graham’s project actually create superintelligent babies? Although it is difficult to be sure, because very few interviews with the offspring have been permitted, at least some of the repository’s progeny are indeed smart. A reporter for Slate magazine, David Plotz, [1] spoke to nine families who benefited from the repository, and they proudly touted their children’s achievements. He found that most of the offspring in the families interviewed seem to resemble their genetic fathers. Three from donor Mr. Fuchsia, the Olympic gold medalist, are reportedly gifted athletes. Several who excel in math and science were fathered by professors of math and science.
And the offspring, by and large, seem to be doing well, often attending excellent schools and maintaining very high grade-point averages. One of the offspring, now 26 years old, is particularly intelligent. In infancy, he could mark the beat of classical music with his hands. In kindergarten, he could read Hamlet and was learning algebra, and at age 6, his IQ was already 180. But he refused to apply to prestigious universities, such as Harvard or Yale, opting instead to study at a smaller progressive college and to major in comparative religion with the aim of becoming an elementary school teacher. He is now an author of children’s books.
Although it is difficult to know for sure, it appears that at least some of the children of the repository are indeed outstanding. But can the talents, characteristics, and skills of this small repository sample be attributed to genetics alone? After all, consider the parents of these children: Plotz reported that the parents, particularly the mothers, were highly involved in their children’s development and took their parental roles very seriously. Most of the parents studied child care manuals, coached their children’s sports teams, practiced reading with their kids, and either home-schooled them or sent them to the best schools in their areas. And the families were financially well-off. Furthermore, the mothers approached the repository at a relatively older child-bearing age, when all other options were exhausted. These children were desperately wanted and very well loved. It is undeniable that, in addition to their genetic backgrounds, all this excellent nurturing played a significant role in the development of the repository children.
Although the existence of the repository provides interesting insight into the potential importance of genetics on child development, the results of Graham’s experiment are inconclusive. The offspring interviewed are definitely smart and talented, but only one of them was considered a true genius and child prodigy. And nurture may have played as much a role as nature in their outcomes. [2] [1]
The goal of this unit is to investigate the fundamental, complex, and essential process of human development. Developmental psychology concerns the physiological, behavioral, cognitive, and social changes that occur throughout human life, which are guided by both genetic predispositions (nature) and environmental influences (nurture).
Developmental psychologists explore three questions throughout their careers:
We begin our study of development at the moment of conception, when the father’s sperm unites with the mother’s egg, and then consider prenatal development in the womb. Next we focus on infancy, the developmental stage that begins at birth and continues to one year of age, and childhood, the period between infancy and the onset of puberty. We then consider the developmental changes that occur during adolescence—the years between the onset of puberty and the beginning of adulthood; the stages of adulthood itself, including emerging, early, middle, and older adulthood; and finally, the preparations for and eventual facing of death.
Each stage of development has its unique physical, cognitive, and emotional changes that define the stage and distinguish it from the others. Many developmental theorists discuss the changes that we, as humans, undergo throughout our lifetime. One such theorist, Erik Erikson, [3] proposed a model of lifespan development that provides a useful guideline for thinking about the changes we experience. As you can see in the following table, Erikson believed each life stage has a unique challenge that the person who reaches it must face, and how the person resolves that challenge has an impact on his or her overall development. For example, the young infant begins life in the first stage, trust versus mistrust. During that time, the infant is dependent on others to meet his basic needs: feeding him; clothing him; hugging him when he is afraid, excited, or simply needs to be reassured; and allowing him to interact with his environment in a safe and healthy manner. If the infant receives what he needs, he will leave this stage with feelings of trust for his caregivers. However, if the infant is mistreated, neglected, or not given what he needs, he may resolve this stage with feelings of distrust for the people who were supposed to care for him.
Erik Erikson’s Framework for Development: trust versus mistrust (infancy), autonomy versus shame and doubt (toddlerhood), initiative versus guilt (early childhood), industry versus inferiority (middle childhood), identity versus role confusion (adolescence), intimacy versus isolation (young adulthood), generativity versus stagnation (middle adulthood), and integrity versus despair (late adulthood). From Flat World Knowledge, Introduction to Psychology, v1.0. Adapted from Erikson, E. H. (1963). Childhood and Society. New York: Norton (p. 202).
As we progress through this module, we will see that Robert Klark Graham was in part right—nature does play a substantial role in development (it has been found, for instance, that identical twins, who share all of their genetic code, usually begin sitting up and walking on the exact same days). But nurture is also important—we begin to be influenced by our environments even while still in the womb, and these influences remain with us throughout our development. Furthermore, we will see that we play an active role in shaping our own lives. Our own behavior influences how and what we learn, how people respond to us, and how we develop as individuals. As you read the unit, you will no doubt get a broader view of how we each pass through our own lives. You will see how we learn and adapt to life’s changes, and this new knowledge may help you better understand and better guide your own personal life journey. By the end of this unit, you will see that human development is a lifelong process; we evolve and change as we age, mature, and experience our lives.
Conception occurs when an egg from the mother is fertilized by a sperm from the father. In humans, the conception process begins with ovulation, when an ovum, or egg (the largest cell in the human body), which has been stored in one of the mother’s two ovaries, matures and is released into the fallopian tube. Ovulation occurs about halfway through the woman’s menstrual cycle and is aided by the release of a complex combination of hormones. In addition to helping the egg mature, the hormones also cause the lining of the uterus to grow thicker and more suitable for implantation of a fertilized egg.
If the woman has had sexual intercourse within 1 or 2 days of the egg’s maturation, one of the up to 500 million sperm deposited by the man’s ejaculation may travel up the fallopian tube and fertilize the egg. Few sperm survive the long journey, but some of the strongest swimmers succeed in meeting the egg. As the sperm reach the egg in the fallopian tube, they release enzymes that attack its outer jelly-like protective coating, each trying to be the first to enter. When one sperm finally penetrates the coating, the egg immediately responds by blocking out all other challengers and, at the same time, pulling in the single successful sperm.
Most cells in your body have 23 pairs of chromosomes, for a total of 46. The egg and sperm are different: each has only one set of 23 chromosomes, not a pair. When fertilization occurs, the 23 chromosomes from the egg join with the 23 from the sperm to create a zygote, a fertilized egg with the full complement of 23 pairs of chromosomes. The zygote then travels down the fallopian tube to the uterus. Although the uterus is only about 4 inches away in the woman’s body, the journey is nevertheless a substantial one for a microscopic organism, and fewer than half of zygotes survive beyond this earliest stage of life. If the zygote is still viable when it completes the journey, it attaches itself to the wall of the uterus; if it is not, it is flushed out in the woman’s menstrual flow. During this time, the cells in the zygote continue to divide: The original two cells become four, those four become eight, and so on, until there are thousands (and eventually trillions) of cells. Soon the cells begin to differentiate, each taking on a separate function. The earliest differentiation is between the cells on the inside of the zygote, which begin to form the developing human being, and the cells on the outside, which form the protective environment that supports the new life throughout the pregnancy.
Once the zygote attaches to the wall of the uterus, it is known as the embryo. During the embryonic phase, which lasts for the next 6 weeks, the major internal and external organs are formed, each beginning at the microscopic level, with only a few cells. The changes in the embryo’s appearance continue rapidly from this point until birth.
While the inner layer of embryonic cells is busy forming the embryo, the outer layer is forming the surrounding protective environment that helps the embryo survive the pregnancy. This environment consists of three major structures: The amniotic sac is the fluid-filled reservoir in which the embryo (soon to be known as a fetus) lives until birth. The amniotic sac also acts as a cushion against outside pressure and as a temperature regulator. The placenta is an organ that allows the exchange of nutrients between the embryo and the mother, while at the same time filtering out harmful material. The filtering occurs through a thin membrane that separates the mother’s blood from the blood of the fetus, allowing them to share only the material that can pass through the filter. Finally, the umbilical cord links the embryo directly to the placenta and transfers all material to the fetus. Together, the placenta and the umbilical cord protect the fetus from many foreign agents in the mother’s system that might otherwise pose a threat.
Beginning in the 9th week after conception, the embryo becomes a fetus. The defining characteristic of the fetal stage is growth. All the major aspects of the growing organism were formed in the embryonic phase, and now the fetus has approximately six months to go from weighing less than an ounce to weighing an average of 6 to 8 pounds. That’s quite a growth spurt.
The fetus begins to take on many of the characteristics of a human being, including moving (by the third month, the fetus can curl and open its fingers, form fists, and wiggle its toes), sleeping, and early forms of swallowing and breathing. The fetus also begins to develop its senses, becoming able to distinguish tastes and respond to sounds. Research has found that the fetus even develops some initial preferences. A newborn prefers the mother’s voice to that of a stranger, the languages heard in the womb over other languages, [1] [2] and even the kinds of foods that the mother ate during the pregnancy. [3] By the end of the third month of pregnancy, the sexual organs are visible.
Prenatal development is a complicated process and may not always go as planned. About 45% of pregnancies result in a miscarriage, often without the mother ever being aware it has occurred. [1] Although the amniotic sac and the placenta are designed to protect the embryo, substances that can harm the fetus, known as teratogens, may nevertheless cause problems. Teratogens include general environmental factors, such as air pollution and radiation, as well as cigarettes, alcohol, and drugs that the mother may use. Teratogens do not always harm the fetus, but they are more likely to do so when they occur in larger amounts, for long time periods, and during the more sensitive phases of fetal development, such as when the fetus is growing most rapidly. The most vulnerable period for many of the fetal organs is very early in the pregnancy—before the mother even knows she is pregnant.
Harmful substances that the mother ingests may harm the child. Cigarette smoking, for example, reduces the blood oxygen available to both the mother and child and can cause a fetus to be born severely underweight. Another serious threat is fetal alcohol syndrome (FAS), a condition caused by maternal alcohol consumption that can produce numerous detrimental developmental effects, including limb and facial abnormalities, genital anomalies, and mental retardation. About 1 in every 500 babies in the United States is born with fetal alcohol syndrome, and it is considered one of the leading causes of retardation in the world today. [2] Because there is no known safe level of alcohol consumption for a pregnant woman, the U.S. Centers for Disease Control and Prevention recommends that “a pregnant woman should not drink alcohol.” [3] The best approach for expectant mothers, then, is to avoid alcohol completely. Maternal drug abuse is also of major concern and is considered one of the greatest risk factors facing unborn children.
The environment in which the mother lives also has a major impact on infant development. [4] [5] Children born into homelessness or poverty are more likely to have mothers who are malnourished; who suffer from domestic violence, stress, and other psychological problems; and who smoke or abuse drugs. Children born into poverty are also more likely to be exposed to teratogens, and poverty’s impact may amplify other issues, creating substantial problems for healthy child development. [6] [7] Mothers normally receive genetic and blood tests during the first months of pregnancy to determine the health of the embryo or fetus, and they may undergo ultrasound sonography, amniocentesis, or other testing. These screenings detect potential birth defects, including neural tube defects, chromosomal abnormalities (such as Down syndrome), genetic diseases, and other potentially dangerous conditions. Early diagnosis of prenatal problems can allow medical treatment to improve the health of the fetus.
If all has gone well, a baby is born sometime around the 38th week of pregnancy. The fetus is responsible, at least in part, for its own birth because chemicals released by the developing fetal brain trigger the muscles in the mother’s uterus to start the rhythmic contractions of childbirth. The contractions are initially spaced at about 15-minute intervals but come more rapidly with time. When the contractions reach an interval of 2 to 3 minutes, the mother is requested to assist in the labor and help push the baby out.
Newborns are relatively helpless and will need the protective and nurturing attention of adults for many years. However, the newborn does have a variety of responses to environmental stimuli that appear from the first day of life, and some start to appear before birth. These responses are called the survival reflexes. Some, like the grasp reflex, in which the baby tightly grips an object that has brushed the palm of her hand, may have real survival value. Others, like the stepping reflex, are probably not useful immediately, but they reflect a muscle control system with behavioral patterns that will mature later, when the infant is ready to use those muscles.
Doctors test for these reflexes to determine if the nervous system is developing correctly in the newborn and young child. The failure of a reflex to appear as a well-organized response can be a warning sign of trouble, and the failure of a reflex to disappear at the appropriate time can also indicate a problem.
In addition to reflexes, newborns have preferences—they like sweet-tasting foods at first, becoming more open to salty items by 4 months of age. [1] [2] Newborns also prefer the smell of their mothers. An infant only 6 days old is significantly more likely to turn toward its own mother’s breast pad than to the breast pad of another baby’s mother, [3] and a newborn also shows a preference for the face of its own mother. [4]
Although infants are born ready to engage in some activities, they also contribute to their own development through their behaviors. The child’s knowledge and abilities increase as it babbles, talks, crawls, tastes, grasps, plays, and interacts with the objects in the environment. [5] [6] [7] Parents may help in this process by providing a variety of activities and experiences for the child. Research has found that animals raised in environments with more novel objects and that engage in a variety of stimulating activities have more brain synapses and larger cerebral cortexes, and they perform better on a variety of learning tasks compared with animals raised in more impoverished environments. [8] Similar effects likely occur in children who have opportunities to play, explore, and interact with their environments. [9]
This exercise shows videos of seven basic infant reflexes. Infants have many more reflexes, but these seven nicely illustrate the diversity of reflexes. Also read the brief explanation accompanying each reflex. After you explore the seven reflexes, you are asked to recall them and answer questions about them. Fortunately, except for a couple of the reflexes, the names are very informative about the behavior that the infant shows.
Observe: When the newborn’s cheek is lightly touched, the head turns in the direction of the touch. This reflex aids breastfeeding, because it allows the infant to orient its head in the direction of the mother’s breast. The rooting reflex is present at birth. Over the next few months, the infant gains more and more voluntary control of these head movements, and the reflex disappears.
Observe: When the infant’s lips are lightly touched, sucking behavior occurs. It is easy to see that sucking behavior is vital for the infant to be able to feed from the mother’s breast, something that must be possible immediately after birth. The sucking reflex disappears as the infant increasingly gains voluntary control over feeding behaviors as she gains experience.
Observe: When the infant is touched by a light tap on top of the head, she blinks immediately. Additional blinks following the initial reaction to the tap on the head indicate that the infant is “sensitized to” stimuli around the head. This reflex is present from birth and persists throughout our lives in some form. The blink reflex is a protective response of the eyelids to potentially damaging objects around the eyes.
Observe: If you poke an infant (softly) on the bottom of the foot with a sharp object, the infant will pull both legs into a crouching position, moving them away from the source of irritation. This withdrawal reflex is present from birth, and it disappears as a simple reflex as the response becomes more complex with increased motor control.
Observe: This reflex is sometimes called the palmar reflex. When the palm of the infant’s hand is touched or brushed, the hand closes in a grip. The infant’s grip can be very strong. In fact, many infants can support their own weight for a short period of time. Stroking the back of the hand produces a release of the grip. This reflex is present at birth and disappears as a reflex at around 5 or 6 months of age. The grasp reflex can permit the infant to hold an object, but it may have originated as a way for the infant to cling to the mother as she traveled or searched for food.
Observe: The Moro reflex is also called the startle reflex. It is a reaction to a startling noise or to a loss of support that feels like falling. The head and legs extend, and the arms move up and out. Then the arms come together across the body with the fists clenched. The Moro reflex is present at birth, and it disappears by 3 or 4 months of age. This reflex may be the foundation of a body response to falling or losing hold of the mother.
Observe: When an infant is supported in a standing position on a flat surface, he will move his legs in a walking movement when his feet touch the ground. This stepping (or walking) reflex is present at birth and disappears at about 6 weeks of age. It probably reflects an innate muscular basis for the walking movement that will develop later.
Observe: If you stroke the bottom of the infant’s foot from heel to toe, he spreads his toes out except for the big toe, which moves upward, and he turns the foot inward. This response, known as the Babinski reflex, shows a complex, innate muscular reaction to the kinds of stimuli we feel when walking. It is present at birth and generally disappears by the end of the first year as the infant gains more motor control.
Childhood is a time when changes occur quickly. The child is growing physically, and cognitive abilities are also developing. During this time the child learns to actively manipulate and control the environment and is first exposed to the requirements of society, particularly the need to control the bladder and bowels. According to Erik Erikson, the challenges the child must confront in childhood relate to the development of initiative, competence, and independence. Children need to learn to explore the world, to become self-reliant, and to make their own way in the environment.
These skills do not come overnight. Neurological changes during childhood provide children the ability to do some things at certain ages and yet make it impossible for them to do other things. This fact was made apparent through the groundbreaking work of the Swiss psychologist Jean Piaget. During the 1920s, Piaget was administering intelligence tests to children in an attempt to determine the kinds of logical thinking children are capable of. In the process of testing the children, Piaget became intrigued not so much by the answers the children got right but more by the answers they got wrong. Piaget believed that the incorrect answers the children gave were not mere shots in the dark but rather represented specific ways of thinking unique to the children’s developmental stage. Just as almost all babies learn to roll over before they learn to sit up by themselves, and learn to crawl before they learn to walk, Piaget believed that children gain their cognitive ability in a developmental order. These insights—that children at different ages think in fundamentally different ways—led to Piaget’s stage model of cognitive development.
Piaget argued that children do not just passively learn but also actively try to make sense of their worlds. He argued that as they learn and mature, children develop schemas—patterns of knowledge in long-term memory—that help them remember, organize, and respond to information. Furthermore, Piaget thought that when children experience new things, they attempt to reconcile the new knowledge with existing schemas. Piaget believed that children use two distinct methods in doing so, methods he called assimilation and accommodation.
When children employ assimilation, they use already developed schemas to understand new information. If children have learned a schema for horses, then they may call the striped animal they see at the zoo a horse rather than a zebra. In this case, children fit the existing schema to the new information and label the new information with the existing knowledge. Accommodation, on the other hand, involves learning new information, and thus changing the schema. When a mother says, “No, honey, that’s a zebra, not a horse,” the child may adapt the schema to fit the new stimulus, learning that there are different types of four-legged animals, only one of which is a horse.
A schema is an organized set of related information in long-term memory. A concept (e.g., furniture or things that can be used as a hammer) is an example of a schema. Behaviors, such as movements needed to ride a bicycle, can also be schemas.
According to Piaget, we develop and use schemas throughout our lives. However, at different times in our lives, different schemas are likely to be particularly important.
Piaget’s most important contribution to understanding cognitive development, and the fundamental aspect of his theory, was the idea that development occurs in unique and distinct stages, with each stage occurring at a specific time, in a sequential manner, and in a way that allows the child to think about the world using new capacities. Piaget’s stages of cognitive development are summarized in the following table.
Piaget's Stages of Cognitive Development

| Stage | Approximate ages | Characteristics | Key attainment |
|---|---|---|---|
| Sensorimotor | Birth to about 2 years | Babies experience the world through direct physical interaction with objects, using their primary senses: staring at, listening to, reaching for, holding, shaking, and tasting things. | Object permanence |
| Preoperational | About 2 to 7 years | Children begin to use language and to think more abstractly about objects, but their understanding is intuitive, and they cannot yet mentally operate on or transform objects. | Theory of mind |
| Concrete operational | About 7 to 11 years | Children make more frequent and more accurate use of operations and abstract concepts, including those of time, space, and numbers. | Conservation |
| Formal operational | About 11 years onward | Adolescents can think in abstract terms, reason systematically and scientifically, and imagine situations that “might be” rather than only those that exist. | Abstract and deductive reasoning |

From Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
The first developmental stage for Piaget was the sensorimotor stage, the cognitive stage that begins at birth and lasts until around the age of 2. It is defined by the direct physical interactions that babies have with the objects around them. During this stage, babies form their first schemas by using their primary senses—they stare at, listen to, reach for, hold, shake, and taste the things in their environments. During the sensorimotor stage, babies’ use of their senses to perceive the world is so central to their understanding that whenever babies do not directly perceive objects, as far as they are concerned, the objects do not exist. Piaget found, for instance, that if he first interested babies in a toy and then covered the toy with a blanket, children who were younger than 6 months of age would act as if the toy had disappeared completely—they never tried to find it under the blanket but would nevertheless smile and reach for it when the blanket was removed. Piaget found that it was not until about 8 months that the children realized the object was merely covered and not gone. Piaget used the term object permanence to refer to the child’s ability to know that an object exists even when the object cannot be perceived.
At about 2 years of age, and until about 7 years of age, children move into the preoperational stage. During this stage, children begin to use language and to think more abstractly about objects, but their understanding is more intuitive and without much ability to deduce or reason. The thinking is preoperational, meaning that the child lacks the ability to operate on or transform objects mentally. In one study that showed the extent of this inability, Judy DeLoache [1] showed children a room within a small dollhouse. Inside the room, a small toy was visible behind a small couch. The researchers took the children to another lab room, which was an exact replica of the dollhouse room, but full sized. When children who were 2.5 years old were asked to find the toy, they did not know where to look—they were simply unable to make the transition across the changes in room size. Three-year-old children, on the other hand, immediately looked for the toy behind the couch, demonstrating that they were improving their operational skills.
The inability of young children to mentally represent transformations also leads them to be egocentric—unable to readily see and understand other people’s viewpoints. Developmental psychologists define the theory of mind as the ability to take another person’s viewpoint, and this ability increases rapidly during the preoperational stage. In one demonstration of the development of theory of mind, a researcher shows a child a video of another child (let’s call her Anna) putting a ball in a red box. Then Anna leaves the room, and the video shows that while she is gone, a researcher moves the ball from the red box into a blue box. As the video continues, Anna comes back into the room. The child is then asked to point to the box where Anna will probably look to find her ball. Children who are younger than 4 years of age typically are unable to understand that Anna does not know the ball has been moved, and they predict that she will look for it in the blue box. After 4 years of age, however, children have developed a theory of mind—they realize that different people can have different viewpoints, and that (although she will be wrong) Anna will nevertheless think the ball is still in the red box.
After about 7 years of age, the child moves into the concrete operational stage, which is marked by more frequent and more accurate use of transitions, operations, and abstract concepts, including those of time, space, and numbers. An important milestone during the concrete operational stage is the development of conservation—the understanding that changes in the form of an object do not necessarily mean changes in the quantity of the object. Children younger than 7 years generally think that a tall glass holds more milk than a shorter, wider glass, and they continue to believe so even when they see the same milk poured back and forth between the glasses. It appears that these children focus only on one dimension (in this case, the height of the glass) and ignore the other dimension (width). However, when children reach the concrete operational stage, their abilities to understand such transformations make them aware that, although the milk looks different in the different glasses, the amount must be the same.
Associate each of the following situations with a phenomenon or "attainment."
You have two identical tall, thin glasses with exactly the same amount of water in each. While Veronica watches, you pour the water from one of the glasses into a low, wide bowl. You then ask her, "Which one holds more, this one (the tall, thin glass) or this one (the low, wide bowl)?"
She responds that the tall, thin glass has more.
You are playing with a very young infant, dangling a doll in front of her as she grabs it or tries to grab it when you move it. Both of you are having fun. Then you wiggle the doll in front of her and, as she watches intently, you put it under a blanket. She briefly looks around and then seems to lose interest in the doll.
A child watches as you tell the following story using dolls: "Becky walks into her bedroom carrying her favorite book. She puts the book in the top drawer of her dresser, closes the drawer, and leaves the room. A few minutes later, her brother Zack comes into the room, takes the book from the drawer, and puts it in the toy box. Then Zack leaves the room. Then Becky comes back to the room to get her book." You ask the child to tell you where Becky will look first. The child says, "In the toy box!" without hesitation.
At about 11 years of age, children enter the formal operational stage, which is marked by the ability to think in abstract terms and to use scientific and philosophical lines of thought. Children in the formal operational stage are better able to systematically test alternative ideas to determine their influences on outcomes. For instance, rather than haphazardly changing different aspects of a situation that allows no clear conclusions to be drawn, they systematically make changes in one thing at a time and observe what difference that particular change makes. They learn to use deductive reasoning, such as “if this, then that,” and they become capable of imagining situations that “might be” rather than just those that actually exist.
Piaget’s theories have made a substantial and lasting contribution to developmental psychology. His contributions include the idea that children are not merely passive receptacles of information but actively engage in acquiring new knowledge and making sense of the world around them. This general idea has generated many other theories of cognitive development, each designed to help us better understand the development of the child’s information-processing skills. [1] [2] Furthermore, the extensive research that Piaget’s theory has stimulated has generally supported his beliefs about the order in which cognition develops. Piaget’s work has also been applied in many domains—for instance, many teachers use Piaget’s stages to develop educational approaches aimed at the level children are developmentally prepared for. [3] [4]
Over the years, Piagetian ideas have been refined. For instance, it is now believed that object permanence develops gradually, rather than immediately, as a true stage model would predict, and that it can sometimes develop much earlier than Piaget expected. Renée Baillargeon and her colleagues [5] [6] placed babies in a habituation setup, having them watch as an object was placed behind a screen, entirely hidden from view. The researchers then arranged for the object to reappear from behind another screen in a different place. Babies who saw this pattern of events looked longer at the display than did babies who witnessed the same object physically being moved between the screens. These data suggest the babies were aware that the object still existed even though it was hidden behind the screen and thus that they were displaying object permanence as early as 3 months of age rather than the 8 months that Piaget predicted.
Another factor that might have surprised Piaget is the extent to which a child’s social surroundings influence learning. In some cases, children progress to new ways of thinking and retreat to old ones depending on the type of task they are performing, the circumstances they find themselves in, and the nature of the language used to instruct them. [7] And children in different cultures show somewhat different patterns of cognitive development. Dasen [8] found that children in non-Western cultures moved to the next developmental stage about a year later than did children from Western cultures and that level of schooling also influenced cognitive development. In short, Piaget’s theory probably understated the contribution of environmental and social factors to cognitive development.
Piaget’s background was in biology, and he viewed the cognitive development process as something like physical maturation in which changes occur inevitably as the individual gets older. Piaget has been criticized for not giving sufficient credit to the impact of experience on the growth of the child’s skills and knowledge. Here we discuss the ideas of another important figure in early developmental psychology, Lev Vygotsky.
Lev Vygotsky (1896–1934) was a Russian (Soviet) psychologist whose emphasis on the interaction of the parent (or other caregivers) and child has had a profound influence on many developmental psychologists. According to Vygotsky, learning starts with doing something—interacting with the world. For instance, an infant tries to communicate with an adult to get some juice, and that process of interacting with the adult—whether successful or not—is a learning experience that is a little step in cognitive development.
In Vygotsky’s theory, our mental processes are actions that are internalized, which means they take place symbolically in our minds. For example, the child’s early speech has the purpose of communicating needs and desires to others, particularly to the child’s caregivers. As the child uses speech regularly to interact with adults and other children, he or she develops a parallel system of silent inner speech, which is the foundation of reasoning and problem solving. In Vygotsky’s view, this inner speech is not simply the act of talking to oneself. It is more abstract and condensed than language spoken aloud, and it would be unrecognizable if we could somehow broadcast it so we could hear it. Inner speech is used to explore ideas and to regulate and understand our own behaviors. One type of evidence Vygotsky used for this theory of the internalization of speech is the fact that children of 3 and 4 years of age often talk aloud when they are solving a problem or regulating their own behavior, even when they are alone. By the time they are 5 or 6, the amount of self-talk diminishes dramatically.
This period when the child talks aloud to himself or herself is considered a transitional phase between the early stage, when the child can speak but lacks internalized thought processes, and the later stage, when the child has abilities like those of the adult to reason and analyze ideas without making a sound. An important difference between this view and Piaget’s is that thinking does not just naturally appear and mature over time. Rather, it is the consequence of a great deal of practice that includes constant spoken interactions with other people, particularly parents and other caregivers.
Vygotsky saw the role of parents and other caregivers as essential to normal cognitive development. Parents model behaviors by doing things while the child watches, and they are constantly around as the child explores objects, witnesses events, and communicates with others. Vygotsky’s particular contribution to understanding how adults help children learn is the concept of scaffolding. A scaffold is a support structure used by construction workers to reach higher places on a building than they could by merely standing on the ground. Using this idea as an analogy, Vygotsky suggested that caregivers can support the child to achieve higher cognitive levels by providing support and guidance. They do this by helping the child to do things for herself rather than by doing things for the child. For instance, a parent can help a child learn the meaning of a new word by pointing at the object it labels or by holding the object so that the child can see it and explore it. Using a different example, a parent can provide scaffolding for a child who is trying to open a box by orienting the box at a convenient angle and ensuring that the lid is a type that can be opened by the child.
Vygotsky suggested that the child is most likely to learn and mature cognitively if the adult is sensitive to the child’s abilities and limitations. Ideally, the adult understands what kinds of tasks and activities will challenge the child without being so difficult that they lead to complete failure. Vygotsky used the daunting phrase zone of proximal development to describe the range of abilities that a child is just starting to be able to use. A task that is too difficult lies beyond the zone of proximal development, and a task that is too easy poses no challenge; neither will lead to cognitive growth. Putting this idea together with scaffolding, Vygotsky suggested that the effective caregiver knows what tasks and activities the child is just starting to be able to handle (the zone of proximal development) and provides just enough assistance (scaffolding) that the child avoids frustration yet can explore and succeed, thereby helping the child develop new abilities and greater understanding than would be possible without help.
It is through the remarkable increases in cognitive ability that children learn to interact with and understand their environments. But these cognitive skills are only part of the changes that occur during childhood. Equally crucial is the development of the child’s social skills—the ability to understand, predict, and create bonds with the other people in their environments.
One of the important milestones in a child’s social development is learning about his or her self-existence. Self-awareness is an important part of consciousness; as the infant’s cognitive skills develop, so does the awareness of others and of self. Self-awareness is the child’s realization that he or she is a distinct individual whose body, mind, and actions are separate from those of other people. At about age 5 months, infants begin to realize that they exist separately from their caregivers. Self-awareness continues to develop with the emergence of the self-concept. The self-concept is a knowledge representation or schema that contains knowledge about ourselves, including our beliefs about our personality traits, physical characteristics, abilities, values, goals, and roles, as well as the knowledge that we exist as individuals. [1]
Some animals, including chimpanzees, orangutans, and perhaps dolphins, have at least a primitive sense of self. [2] In one study, [3] researchers painted a red dot on the foreheads of anesthetized chimpanzees and then placed each animal in a cage with a mirror. When the chimps woke up and looked in the mirror, they touched the dot on their faces, not the dot on the faces in the mirror. These actions suggest that the chimps understood they were looking at themselves and not at other animals, and thus we can assume they are able to realize that they exist as individuals. On the other hand, most other animals, including, for instance, dogs, cats, and monkeys, never behave as if they realize they are seeing themselves in the mirror.
Infants who have a similar red dot painted on their foreheads recognize themselves in a mirror in the same way that the chimps do, and they do this by about 18 months of age. [4] Another sign of self-awareness is the ability to use self-referring language ("I," "me," "mine," and "my"), particularly when talking about possessions such as toys. By age 2, children begin to show emotions such as pride, shame, and embarrassment.
Children’s knowledge about the self continues to develop as their cognitive and language skills mature and as they learn about sex and gender differences from their sociocultural surroundings. By age 2, children are aware of their biological sex, the physical characteristics associated with being a boy or a girl. As children grow older, they develop a gender identity, the psychological and sociocultural characteristics of maleness or femaleness that are associated with one’s physical sex. Children express their gender identity by insisting on playing with toys that they associate with their gender, such as dolls for girls and trucks or cars for boys. Their awareness of gender differences in clothing, playmates, and future careers provides further evidence of the young child’s sense of gender identity.
By age 4, self-descriptions are likely to be based on physical features, such as hair color and possessions, and by about age 6, children can understand their basic emotions and the basic concepts of their personality traits, demonstrated by statements such as, “I am a nice person.” [5]
Soon after children enter grade school (at about age 6 or 7), they begin to evaluate themselves against their observations of other children, a process known as social comparison. For example, a child might describe himself as being faster than one boy but slower than another. [6] According to Erikson, the important component of this process is the development of competence and autonomy—the recognition of one’s own abilities relative to other children. And children increasingly show awareness of social situations—they understand that other people are looking at and judging them the same way they are looking at and judging others. [7]
One of the most important behaviors a child must learn is how to be accepted by others—the development of close and meaningful social relationships. The emotional bonds we develop with those with whom we feel closest, and particularly the bonds an infant develops with the mother or primary caregiver, are called attachment. [1]
As late as the 1930s, psychologists believed that children who were raised in institutions such as orphanages, and who received good physical care and proper nourishment, would develop normally, even if they had little interaction with their caretakers. But studies by the developmental psychologist John Bowlby [2] and others showed that these children did not develop normally—they were usually sickly, emotionally slow, and generally unmotivated. These observations helped make it clear that normal infant development requires successful attachment with a caretaker.
In one classic study showing the importance of attachment, University of Wisconsin psychologists Harry and Margaret Harlow investigated the responses of young monkeys, separated from their biological mothers, to two surrogate mothers introduced to their cages. One—the wire mother—consisted of a wooden head, a mesh of cold metal wires, and a bottle of milk from which the baby monkey could drink. The second mother was a foam-rubber form wrapped in a heated terry-cloth blanket. The Harlows found that although the infant monkeys went to the wire mother for food, they overwhelmingly preferred and spent significantly more time with the warm terry-cloth mother that provided no food but did provide comfort. [3]
The studies by the Harlows showed that young monkeys preferred the warm mother that provided a secure base to the cold mother that provided food.
The Harlows’ studies confirmed that babies have social as well as physical needs. Both monkeys and human babies need a secure base that allows them to feel safe. From this base, they can gain the confidence they need to venture out and explore their worlds. Erikson agreed with the importance of a secure base, arguing that the most important goal of infancy was the development of a basic sense of trust in one’s caregivers. Review this first stage in the table "Erik Erikson's Framework for Development."
Developmental psychologist Mary Ainsworth, a student of John Bowlby, was interested in studying the development of attachment in infants. Ainsworth created a laboratory test that measured an infant’s attachment to his or her parent. The test is called the Strange Situation Classification because it is conducted in a context that is unfamiliar to the child and therefore likely to heighten the child’s need for his or her parent. [4] During the procedure, which lasts about 20 minutes, the parent and the infant are first left alone while the infant explores the room full of toys. Then a strange adult enters the room and talks for a minute to the parent, after which the parent leaves the room. The stranger stays with the infant for a few minutes, and then the parent reenters and the stranger leaves the room. During the entire session, a video camera records the child’s behaviors, which are later coded by trained coders.
In the Strange Situation, children are observed responding to the comings and goings of parents and unfamiliar adults in their environments.
On the basis of their behaviors, the children are categorized into one of four groups, where each group reflects a different kind of attachment relationship with the caregiver. A child with a secure attachment style usually explores freely while the mother is present and engages with the stranger. The child may be upset when the mother departs but is also happy to see the mother return. A child with an ambivalent (sometimes called insecure-resistant) attachment style is wary about the situation in general, particularly the stranger, and stays close or even clings to the mother rather than exploring the toys. When the mother leaves, the child is extremely distressed and is ambivalent when she returns. The child may rush to the mother but then fail to cling to her when she picks up the child. A child with an avoidant (sometimes called insecure-avoidant) attachment style will avoid or ignore the mother, showing little emotion when the mother departs or returns. The child may run away from the mother when she approaches. The child will not explore very much, regardless of who is there, and the stranger will not be treated much differently from the mother.
Finally, a child with a disorganized attachment style seems to have no consistent way of coping with the stress of the strange situation—the child may cry during the separation but avoid the mother when she returns, or the child may approach the mother but then freeze or fall to the floor. Although some cultural differences in attachment styles have been found, [5] research has also found that the proportion of children who fall into each of the attachment categories is relatively constant across cultures.
You might wonder whether differences in attachment style are determined more by the child (nature) or by the parents (nurture). Most developmental psychologists believe that socialization is primary, arguing that a child becomes securely attached when the mother is available and able to meet the needs of the child in a responsive and appropriate manner and that the insecure styles occur when the mother is insensitive and responds inconsistently to the child’s needs. In a direct test of this idea, Dutch researcher Dymphna van den Boom [6] randomly assigned some babies’ mothers to a training session in which they learned to better respond to their children’s needs. The research found that these mothers’ babies were more likely to show a secure attachment style than were babies of mothers in a control group that did not receive training.
The attachment behavior of the child is also likely influenced, at least in part, by temperament, the innate personality characteristics of the infant. Specifically, temperament is the infant’s distinctive pattern of attention, arousal, and reactivity to new or novel situations. Some children display a warm, friendly, and responsive temperament, whereas others tend to be irritable, less manageable, and difficult to console. A child’s temperament pattern appears early, remains relatively stable from early infancy onward, and appears to form the basis of the personality displayed throughout adulthood. Researchers [1] studied differences in infants’ temperaments by interviewing mothers with 2- to 3-month-old infants and then observing these same infants repeatedly over the next 7 years. The researchers then rated each infant on nine components of temperament, including activity level, attention span, fussiness, and mood. On the basis of these ratings, they divided infants into four categories.
These differences may also play a role in attachment. [2] [3] Taken together, it seems safe to say that attachment, like most other developmental processes, is affected by an interplay of genetic and socialization influences.
One thing you may have wondered about as you grew up, and which you may start to think about again if you decide to have children yourself, concerns the skills involved in parenting. Some parents are strict, others are lax; some parents spend a lot of time with their kids, trying to resolve their problems and helping to keep them out of dangerous situations, whereas others leave their children with nannies or in day care. Some parents hug and kiss their kids and tell them they love them over and over every day, whereas others never do. Do these behaviors matter? And what makes a “good parent”?
We have already considered two answers to this question in the form of what all children require: (1) babies need a conscientious mother who does not smoke, drink, or use drugs during her pregnancy, and (2) infants need caretakers who are consistently available, loving, and supportive to help them form a secure base. One case in which these basic goals are less likely to be met is when the mother is an adolescent. Adolescent mothers are more likely to use drugs and alcohol during their pregnancies, to have poor parenting skills in general, and to provide insufficient support for the child. [1] As a result, the babies of adolescent mothers have higher rates of academic failure, delinquency, and incarceration than do children of older mothers. [2]
Normally, it is the mother who provides early attachment, but fathers are not irrelevant. Studies have found that children whose fathers are more involved with childrearing tend to be more cognitively and socially competent, more empathic, and psychologically better adjusted than children whose fathers are less involved. [3] In fact, Amato [4] found that, in some cases, the role of the father can be as important as or even more important than that of the mother in the child’s overall psychological health and well-being. Amato concluded, “Regardless of the quality of the mother–child relationship, the closer adult offspring were to their fathers, the happier, more satisfied, and less distressed they reported being” (p. 1039).
As the child grows, parents adopt one of four parenting styles, patterns of parental behavior that shape the nature of parent–child interactions. These styles depend on whether the parent is more or less demanding and more or less responsive to the child. Authoritarian parents are demanding but not responsive. They impose rules and expect obedience, tending to give orders (“Eat your food!”) and enforcing their commands with rewards and punishment, without providing any explanation of where the rules came from except “Because I said so!” Permissive parents, on the other hand, tend to make few demands and give little punishment, but they are responsive in the sense that they generally allow their children to make their own rules. Authoritative parents are demanding (“You must be home by curfew”), but they are also responsive to the needs and opinions of the child (“Let’s discuss what an appropriate curfew might be”). They set rules and enforce them, but they also explain and discuss the reasons behind the rules. Finally, rejecting-neglecting parents are undemanding and unresponsive overall.
Parenting style is based on the combination of demandingness and responsiveness. The authoritative style, characterized by both responsiveness and demandingness, is the most effective.
Many studies of children and their parents, using different methods, measures, and samples, have reached the same conclusion: that authoritative parenting, compared to the other three styles, is associated with a wide range of psychological and social advantages for children. Children whose parents use the authoritative style (demanding but responsive) demonstrate better psychological adjustment, school performance, and psychosocial maturity than do children whose parents use the other styles. These children tend to be more independent, socially adept, and self-confident. [5] [6] Although there are some cultural differences in parenting styles among racial and ethnic groups, cross-cultural research indicates that authoritative parenting leads to the most positive outcomes across all groups.
Although some parenting styles are more effective than others overall, every child is different, and parents must be adaptable. Some children have particularly difficult temperaments, and these children require more parenting. Because these difficult children demand more parenting, the behaviors of the parents matter more for their development than they do for the development of other, less demanding children who require less parenting overall. [7] These findings remind us how the behavior of the child can influence the behavior of the people in his or her environment.
Although childrearing demands a focus on the child, the parents must never forget about each other. Parenting is time consuming and emotionally taxing, and the parents must work together to create a relationship in which both mother and father contribute to the household tasks and support each other. It is also important for the parents to invest time in their own intimacy, as happy parents are more likely to stay together, and divorce has a profoundly negative impact on children, particularly during and immediately after the divorce. [8] [9]
Adolescence is defined as the years between the onset of puberty and the beginning of adulthood. In the past, when people were likely to marry in their early 20s or younger, this period might have lasted only 10 years or less—starting roughly between ages 12 and 13 and ending by age 20, at which time the child got a job or went to work on the family farm, married, and started his or her own family. Today, children mature more slowly, move away from home at later ages, and maintain ties with their parents longer. For instance, children may go away to college but still receive financial support from parents, and they may come home on weekends or even to live for extended time periods. Thus the period between puberty and adulthood may well last into the late 20s, merging into adulthood itself. In fact, it is appropriate now to consider the period of adolescence and that of emerging adulthood (the ages between 18 and the middle or late 20s) together.
During adolescence, the child continues to grow physically, cognitively, and emotionally, changing from a child into an adult. The body grows rapidly in size and the sexual and reproductive organs become fully functional. At the same time, as adolescents develop more advanced patterns of reasoning and a stronger sense of self, they seek to forge their own identities, developing important attachments with people other than their parents. Particularly in Western societies, where the need to forge a new independence is critical, [1] [2] this period can be stressful for many children, as it involves new emotions, the need to develop new social relationships, and an increasing sense of responsibility and independence.
Although adolescence can be a time of stress for many teenagers, most of them weather the trials and tribulations successfully. For example, the majority of adolescents experiment with alcohol sometime before high school graduation. Although many will have been drunk at least once, relatively few teenagers will develop long-lasting drinking problems or permit alcohol to adversely affect their school or personal relationships. Similarly, a great many teenagers break the law during adolescence, but very few young people develop criminal careers. [3] These facts do not, however, mean that using drugs or alcohol is a good idea. The use of recreational drugs can have substantial negative consequences, and the likelihood of these problems (including dependence, addiction, and even brain damage) is significantly greater for young adults who begin using drugs at an early age.
Adolescence begins with the onset of puberty, a developmental period in which hormonal changes cause rapid physical alterations in the body, culminating in sexual maturity. Although the timing varies to some degree across cultures, the average age range for reaching puberty is between 9 and 14 years for girls and between 10 and 17 years for boys. [1]
Puberty begins when the pituitary gland begins to stimulate production of the male sex hormone testosterone in boys and the female sex hormones estrogen and progesterone in girls. The release of these sex hormones triggers the development of the primary sex characteristics, the sex organs concerned with reproduction. These changes include enlargement of the testicles and penis in boys and development of the ovaries, uterus, and vagina in girls. Secondary sex characteristics (features that distinguish the two sexes from each other but are not involved in reproduction) also develop, such as an enlarged Adam’s apple, a deeper voice, and pubic and underarm hair in boys, and growing breasts, widening hips, and pubic and underarm hair in girls, as shown in the figure below. The enlargement of breasts is usually the first sign of puberty in girls and, on average, occurs between ages 10 and 12. [1] Boys typically begin to grow facial hair between ages 14 and 16, and both boys and girls experience a rapid growth spurt during this stage. The growth spurt usually occurs earlier for girls than for boys, and some boys continue to grow into their 20s.
A major milestone of puberty for girls is menarche, the first menstrual period, typically experienced at around 12 or 13 years of age. [2] The age of menarche varies substantially and is determined by genetics as well as by diet and lifestyle, since a certain amount of body fat is needed to attain menarche. Girls who are very slim, who engage in strenuous athletic activities, or who are malnourished may experience delayed menarche. Even after menstruation begins, girls whose level of body fat drops below the critical level may stop having their periods. A less obvious but equally important reproductive milestone for boys is spermarche, the beginning of sperm development in the testicles. The sequence of pubertal events is more predictable than the ages at which they occur: some girls may begin to grow pubic hair at age 10 but not attain menarche until age 15, and in boys, facial hair may not appear until 10 years after the initial onset of puberty.
The timing of puberty in both boys and girls can have significant psychological consequences. Boys who mature early attain some social advantages because they are taller and stronger and, therefore, often more popular. [3] At the same time, however, early-maturing boys are at greater risk for delinquency and are more likely than their peers to engage in antisocial behaviors, including drug and alcohol use, truancy, and precocious sexual activity. Girls who mature early may find their maturity stressful, particularly if they experience teasing or sexual harassment. [4] [5] Early-maturing girls are also more likely than their peers to have emotional problems, a lower self-image, and higher rates of depression, anxiety, and disordered eating. [6]
Although the most rapid cognitive changes occur during childhood, the brain continues to develop throughout adolescence and even into the 20s. [1] During adolescence, the brain continues to form new neural connections but also casts off unused neurons and connections. [2] As teenagers mature, the prefrontal cortex, the area of the brain responsible for reasoning, planning, and problem solving, also continues to develop. [3] Myelin, the fatty tissue that forms around axons and helps speed transmission between different regions of the brain, also continues to grow. [4]
Adolescents often seem to act impulsively rather than thoughtfully, perhaps in part because the development of the prefrontal cortex is generally slower than the development of the emotional parts of the brain, including the limbic system. [2] Furthermore, the hormonal surge associated with puberty, which primarily influences emotional responses, may create strong emotions and lead to impulsive behavior. It is hypothesized that adolescents may engage in risky behavior, such as smoking, drug use, dangerous driving, and unprotected sex, in part because they have not yet fully acquired the mental ability to curb impulsive behavior or to make entirely rational judgments. [5]
On the positive side, adolescents are gaining new cognitive abilities that differentiate them from children. Recall Piaget’s cognitive development stages. During puberty, adolescents begin to develop formal operational thinking—the ability to think systematically and use scientific reasoning. They also gain the ability to think and reason about abstract concepts—something they were unable to do as children.
The new cognitive abilities attained during adolescence may also give rise to new feelings of egocentrism, in which adolescents believe that they can do anything and that they know better than anyone else, including their parents. [6] Teenagers are likely to be highly self-conscious, often creating an imaginary audience in which they feel that everyone is constantly watching them. [7] Because teens think so much about themselves, they mistakenly believe that others must be thinking about them, too. [8] It is no wonder that everything a teen’s parents do suddenly feels embarrassing to them when they are in public.
Some of the most important changes that occur during adolescence involve the further development of self-concept and the development of new attachments. Whereas young children are most strongly attached to their parents, the important attachments of adolescents move increasingly away from parents and toward peers. [1] As a result, parents’ influence diminishes at this stage.
According to Erikson (refer to the table "Erikson’s Framework for Development"), the main social task of the adolescent is the search for a unique identity—the ability to answer the question, Who am I? In the search for identity, the adolescent may experience role confusion in which he or she is balancing or choosing among identities, taking on negative or undesirable identities, or temporarily giving up looking for an identity altogether if things are not going well. Erikson [2] believed that it was normative for adolescents to “try on” different roles to determine what their identity would become. He termed this search for identity role experimentation.
One approach to assessing identity development was proposed by James Marcia. [3] In his approach, adolescents are asked questions regarding their exploration of and commitment to issues related to occupation, politics, religion, and sexual behavior. Their responses enable researchers to classify the adolescents into one of four identity categories shown in the following table.
James Marcia’s Stages of Identity Development

| Identity status | Description |
|---|---|
| Identity diffusion | The adolescent has neither explored meaningful alternatives nor made a commitment to an identity. |
| Foreclosure | The adolescent has made a commitment to an identity, often one offered by parents or others, without exploring alternatives. |
| Moratorium | The adolescent is actively exploring different possible identities but has not yet made a commitment to any of them. |
| Identity achievement | The adolescent has explored meaningful alternatives and has made a firm commitment to an identity. |

From Flat World Knowledge, Introduction to Psychology, v1.0; adapted from Marcia, J. E. (1980). Identity in adolescence. In J. Adelson (Ed.), Handbook of Adolescent Psychology (Vol. 5, pp. 145–160). New York: Wiley.
Studies assessing how teens pass through Marcia’s stages show that although most teens eventually succeed in developing a stable identity, the path to it is not always easy and many routes can be taken. Some teens may simply adopt the beliefs of their parents or the first role that is offered to them, perhaps at the expense of searching for other, more promising possibilities (foreclosure status). Other teens may spend years trying on different possible identities (moratorium status) before finally choosing one.
To help them work through the process of developing an identity, teenagers may try out different identities in different social situations. They may maintain one identity at home and a different persona when they are with their peers. Eventually, most teenagers integrate the different possibilities into a single self-concept and a comfortable sense of identity (identity-achievement status).
For teenagers, the peer group provides valuable information about self-concept. For instance, in response to the question What were you like as a teenager? (e.g., cool, nerdy, awkward?), posed on the website Answerbag, one teenager replied in this way:
I’m still a teenager now, but from 8th–9th grade I didn’t really know what I wanted at all. I was smart, so I hung out with the nerdy kids. I still do; my friends mean the world to me. But in the middle of 8th I started hanging out with whom you may call the 'cool' kids...and I also hung out with some stoners, just for variety. I pierced various parts of my body and kept my grades up. Now, I’m just trying to find who I am. I’m even doing my sophomore year in China so I can get a better view of what I want. [4]
Responses like this one demonstrate the extent to which adolescents are developing their self-concepts and self-identities and how they rely on peers to help them do that. This is also an example of Erikson’s idea of role experimentation. The writer here is trying out several (perhaps conflicting) identities, and the identities any teen experiments with are defined by the group the person chooses to be a part of. The friendship groups (cliques, crowds, or gangs) that are such an important part of the adolescent experience allow young adults to try out different identities, and these groups provide a sense of belonging and acceptance. [5] A big part of what adolescents are learning is social identity, the part of their self-concept that is derived from group memberships. Adolescents define their social identities according to how they are similar to and how they differ from others, finding meaning in the sports, religious, school, gender, and ethnic categories they belong to.
It’s important to remember that, unlike Piaget’s and Erikson’s theories, Marcia’s theory is not a stage theory, so Marcia uses the term status to indicate which of the four options characterizes someone. Adolescents don’t progress from one status to another in a sequential manner. For example, not all adolescents explore different identities, and some do not make a commitment to an identity. The way to determine the status of an adolescent is to determine (a) whether he or she has explored meaningful alternatives to an identity question, and (b) whether he or she has made a commitment to an identity. Based on these questions, the following table shows which status best describes an adolescent's current level of identity formation:

| | Commitment made | No commitment made |
|---|---|---|
| Alternatives explored | Identity achievement | Moratorium |
| Alternatives not explored | Foreclosure | Identity diffusion |
The independence that comes with adolescence requires independent thinking as well as the development of morality—standards of behavior that are generally accepted within a culture to be right or proper. Just as Piaget believed that children’s cognitive development follows specific patterns, Lawrence Kohlberg [1] argued that children learn their moral values through active thinking and reasoning and that the development of moral reasoning progresses through a series of levels and stages and continues to develop throughout an individual’s life. To develop his theory, Kohlberg posed hypothetical moral dilemmas to children, adolescents, and adults and then matched each person’s response to a specific level and stage of moral development. One of Kohlberg’s classic hypothetical moral dilemmas is the following:
Heinz’s wife is dying of cancer, and only one drug can save her. The only place to get the drug is at the store of a pharmacist who is known to overcharge people for drugs. The man can only pay $1,000, but the pharmacist wants $2,000 and refuses to sell it to him for less or to let him pay later. Desperate, the man later breaks into the pharmacy and steals the medicine. Should he have done that? Was it right or wrong? Why? (p. 200)
A moral dilemma is a mental conflict involving choice, wherein each potential course of action breaches an individual's moral principles or cultural standards. The Heinz dilemma described above presents several such conflicts for the husband.
As you can see in the following table, Kohlberg concluded, on the basis of their responses to the moral questions, that as children develop intellectually, they pass through three levels of moral thinking: the preconventional level, the conventional level, and the postconventional level.
Lawrence Kohlberg’s Stages of Moral Reasoning

| Level | Who typically reasons this way | Basis of moral reasoning |
|---|---|---|
| Preconventional | Young children | Rules are obeyed to avoid punishment and to obtain rewards; the focus is on self-interest. |
| Conventional | Older children, adolescents, and most adults | Behavior is guided by social rules and regulations: gaining approval, being seen as a “good person,” and maintaining law and order. |
| Postconventional | Some adults | Moral reasoning is based on more general ethical principles, such as justice and the value of human life, which may at times take precedence over specific laws. |
Each of the following persons is presented with the Heinz dilemma and asked whether Heinz should have stolen the drug for his wife and why. Read each person's response, and then choose the answer that best reflects his or her level of moral development.
Although research supports Kohlberg’s idea that moral reasoning changes from an early emphasis on punishment and social rules and regulations to an emphasis on more general ethical principles, as with Piaget’s approach, Kohlberg’s stage model is probably too simple. For one, children may use higher levels of reasoning for some types of problems but revert to lower levels when doing so is more consistent with their goals or beliefs. [1] Second, it has been argued that the stage model is particularly appropriate for Western, rather than non-Western, samples in which allegiance to social norms (such as respect for authority) may be particularly important. [2] And there is frequently little correlation between how children score on the moral stages and how they behave in real life.
Perhaps the most important critique of Kohlberg’s theory is that it may describe the moral development of boys better than it describes that of girls. Carol Gilligan [3] has argued that, because of differences in their socialization, males tend to value principles of justice and rights, whereas females value caring for and helping others. Although there is little evidence that boys and girls score differently on Kohlberg’s stages of moral development, [4] it is true that girls and women tend to focus more on issues of caring, helping, and connecting with others than do boys and men. [5] If you don’t believe this, ask yourself when you last got a thank-you note from a man.
Until the 1970s, psychologists tended to treat adulthood as a single developmental stage with few or no distinctions made among the various periods we pass through between adolescence and death. Present-day psychologists realize, however, that physical, cognitive, and emotional responses continue to develop throughout life, with corresponding changes in our social needs and desires. Each of the three stages of adulthood—early, middle, and late—has its own physical, cognitive, and social challenges.
In this section, we consider the cognitive and physical development that occurs during early adulthood and middle adulthood—roughly the ages between 25 and 45 and between 45 and 65, respectively. These stages represent a long period of time—longer, in fact, than any of the other developmental stages—and the bulk of our lives is spent in them. These are also the periods in which most of us make our most substantial contributions to society, by meeting two of Erik Erikson’s life challenges: Intimacy versus isolation is the conflict we face in learning to give and receive love in a close, long-term relationship. Generativity versus stagnation is the conflict we face in developing an interest in guiding the development of the next generation, often by becoming parents.
Remember that the stage in Erikson’s framework that covers the period of adolescence is identity versus role confusion. This period involves a psychological movement away from the family as the center of one’s life and establishing one’s sense of self as well as life goals and interests. Ideally, by the time a person moves into the adult years, he or she has an increasingly stable identity and a new challenge, intimacy versus isolation, turns the individual’s focus from the self to the social world. During this period, young people become increasingly invested in intimate personal relationships and more serious about developing personal and romantic relationships that will influence the course of their adult lives. Marriage and starting a family may be put off or rejected as life goals, but mature relationships lead to the experience of intimacy, or, if such relationships fail to materialize, the person experiences isolation and the doubts and difficulties that can be produced by an inadequate social base.
In the adult years that follow—which, in Erikson’s scheme, cover the period from the mid-20s to the mid-60s, the majority of one’s life—people concern themselves with their life’s work. This is the period described by the contrast of generativity versus stagnation. Generativity, or productivity, is largely self-defined in terms of specific accomplishments, but to be generative, an individual must have goals that are motivating and must succeed in achieving some of those goals. The failure of generativity, stagnation, can be due to a failure to define motivating goals or a failure to act consistently in ways to achieve them.
Consider the following people and descriptions of where they are in their lives:
Compared with the other stages, the physical and cognitive changes that occur in the stages of early and middle adulthood are less dramatic. As individuals pass into their 30s and 40s, their recovery from muscular strain becomes more prolonged, and their sensory abilities may become somewhat diminished, at least when compared with their prime years, during the teens and early 20s. [1] Visual acuity diminishes somewhat, and many people in their late 30s and early 40s begin to notice that their eyes are changing and they need eyeglasses. Adults in their 30s and 40s may also begin to suffer some hearing loss because of damage to the hair cells (cilia) in the inner ear. [2] And it is during middle adulthood that many people first begin to suffer from ailments such as high cholesterol and high blood pressure as well as low bone density. [3] Corresponding to changes in our physical abilities, our cognitive and sensory abilities also seem to show some, but not dramatic, decline during this stage.
The stages of both early and middle adulthood bring about a gradual decline in fertility, particularly for women. Eventually, women experience menopause, the cessation of the menstrual cycle, which usually occurs at around age 50. Menopause occurs because of the gradual decrease in the production of the female sex hormones estrogen and progesterone, which slows the production and release of eggs. Women whose menstrual cycles have stopped for 12 consecutive months are considered to have entered menopause. [4] The set of symptoms (hot flashes, insomnia, mood swings) caused by hormonal changes prior to menopause is often referred to as perimenopause.
Researchers have found that women’s responses to menopause are both social and physical and that they vary substantially across both individuals and cultures. Among individuals, some women may react more negatively than others to menopause, worrying that they have lost their femininity and that their final chance to bear children is over, whereas other women may regard menopause more positively, focusing on the new freedom from menstrual discomfort and unwanted pregnancy. In Western cultures such as in the United States, women are likely to see menopause as a challenging and potentially negative event, whereas in India, where older women enjoy more social privileges than do younger ones, menopause is more positively regarded. [5]
Menopause may have evolutionary benefits. Infants have better chances of survival when their mothers are younger and have more energy to care for them, and the presence of older women who do not have children of their own to care for (but who can help out with raising grandchildren) can be beneficial to the family group. Also consistent with the idea of an evolutionary benefit of menopause is that the decline in fertility occurs primarily for women, who do most of the child care and who need the energy of youth to accomplish it. If older women were able to have children, they might not be as able to effectively care for them.
Most men never completely lose their fertility, but they do experience a gradual decrease in testosterone levels, sperm count, and speed of erection and ejaculation.
Perhaps one of the major markers of adulthood is the ability to create an effective and independent life. Whereas children and adolescents are generally supported by parents, adults must make their own living and must start their own families. Furthermore, the needs of adults are different from those of younger persons.
Although the timing of the major life events that occur in early and middle adulthood varies substantially across individuals, these events nevertheless tend to follow a general sequence, known as a social clock. The social clock is the culturally preferred “right time” for major life events, such as moving out of the childhood home, getting a job, getting married, having children, and owning your own home. People who do not appear to be following the norms of the social clock (e.g., young adults who still live with their parents, individuals who never marry, and couples who choose not to have children) may be seen as unusual or deviant, and they may be stigmatized by others. [6] [7]
Although, on average, they are doing it later than they did even 20 or 30 years ago, most people do eventually marry. Marriage is beneficial to the partners in terms of both mental health and physical health. People who are married report greater life satisfaction than those who are not married and also suffer fewer health problems. [8] [9]
Divorce is more common now than it was 50 years ago. In 2003, almost half of marriages in the United States ended in divorce, [10] although about three quarters of people who divorce will remarry. Most divorces occur for couples in their 20s, because younger people are frequently not mature enough to make good marriage choices or to make marriages last. Marriages are more successful for older adults and for those with more education. [11]
Parenthood also involves a major and long-lasting commitment, one that can cause substantial stress on the parents. The time and finances invested in children create stress, which frequently results in decreased marital satisfaction. [12] This decline is especially true for women, who usually bear the larger part of the burden of raising the children and taking care of the house, even though they increasingly also work and have careers.
Despite the challenges of early and middle adulthood, the majority of middle-aged adults are not unhappy. These years are often very satisfying, as families have been established, careers have been entered into, and some percentage of life goals has been realized. [13]
We have seen that, over the course of their lives, most individuals are able to develop secure attachments; reason cognitively, socially and morally; and create families and find appropriate careers. Eventually, however, as people enter into their 60s and beyond, the aging process leads to faster changes in our physical, cognitive, and social capabilities and needs, and life begins to come to its natural conclusion, resulting in the final life stage, beginning in the 60s, known as late adulthood.
Although the body and mind are slowing, most older adults nevertheless maintain an active lifestyle, remain as happy as, or happier than, they were when younger, and increasingly value their social connections with family and friends. [1] Kennedy, Mather, and Carstensen [2] found that people’s memories of their lives became more positive with age, and Myers and Diener [3] found that older adults tended to speak more positively about events in their lives, particularly their relationships with friends and family, than did younger adults.
The changes associated with aging do not affect everyone in the same way, and they do not necessarily interfere with a healthy life. Former Beatles drummer Ringo Starr celebrated his 70th birthday in 2010 by playing at Radio City Music Hall, and Rolling Stones singer Mick Jagger (who once supposedly said, “I’d rather be dead than singing ‘Satisfaction’ at 45”) continues to perform as he pushes 70. The golfer Tom Watson almost won the 2010 British Open golf tournament at the age of 59, playing against competitors in their 20s and 30s. And people such as the financier Warren Buffett, U.S. Senator Frank Lautenberg, and actress Betty White, each in their 80s, all enjoy highly productive and energetic lives.
Researchers are beginning to better understand the factors that allow some people to age better than others. For one, research has found that people who adjust well to changing situations early in life are also better able to adjust later in life. [4] [5] Perceptions also matter. People who believe that the elderly are sick, vulnerable, and grumpy often act according to such beliefs, [6] and Levy, Slade, Kunkel, and Kasl [7] found that the elderly who had more positive perceptions about aging also lived longer.
In one important study concerning the role of expectations on memory, Becca Levy and Ellen Langer [8] found that, although young American and Chinese students performed equally well on cognitive tasks, older Americans performed significantly more poorly on those tasks than did their Chinese counterparts. Furthermore, this difference was explained by beliefs about aging—in both cultures, the older adults who believed memory declines with age also showed more actual memory decline than did the older adults who believed memory does not decline with age. In addition, more older Americans than older Chinese believed that memory declines with age, and as you can see in the following figure, older Americans performed more poorly on the memory tasks.
Whereas it was once believed that almost all older adults suffered from a generalized memory loss, research now indicates that healthy older adults actually experience only some particular types of memory deficits, while other types of memory remain relatively intact or may even improve with age. Older adults do seem to process information more slowly—it may take them longer to evaluate information and to understand language, and it takes them longer, on average, than it does younger people, to recall a word they know, even though they are perfectly able to recognize the word once they see it. [9] Older adults also have more difficulty inhibiting and controlling their attention, [10] making them, for example, more likely to talk about topics that are not relevant to the topic at hand when conversing. [11]
But slower processing and less accurate executive control do not always mean loss of memory or intelligence. Perhaps the elderly are slower in part because they simply have more knowledge. Indeed, older adults have more crystallized intelligence—that is, general knowledge about the world, as reflected in semantic knowledge, vocabulary, and language. As a result, older adults generally outperform younger people on measures of history, geography, and even on crossword puzzles, where this information is useful. [12] This superior knowledge, combined with a slower and more complete processing style and a more sophisticated understanding of the workings of the world around them, gives the elderly the advantage of “wisdom” over the advantages of fluid intelligence—the ability to think and acquire information quickly and abstractly—which favor the young. [13] [14]
The differential changes in crystallized versus fluid intelligence help explain why the elderly do not necessarily show poorer performance on tasks that also require experience (i.e., crystallized intelligence), although they show poorer memory overall. A young chess player may think more quickly, for instance, but a more experienced chess player has more knowledge to draw on. Older adults are also more effective at understanding the nuances of social interactions than younger adults are, in part because they have more experience in relationships. [15]
Understanding the difference between crystallized and fluid intelligence can help us understand cognitive changes as we age.
Some older adults suffer from biologically based cognitive impairments in which the brain is so adversely affected by aging that it becomes very difficult for the person to continue to function effectively. Dementia is a progressive neurological disease that includes loss of cognitive abilities significant enough to interfere with everyday behaviors, and Alzheimer’s disease is a form of dementia that, over a period of years, leads to a loss of emotions, cognitions, and physical functioning, and that is ultimately fatal. Dementia and Alzheimer’s disease are most likely to be observed in individuals who are 65 and older, and the likelihood of developing Alzheimer’s disease doubles about every 5 years after age 65. After age 85, the risk reaches nearly 8% per year. [1] Dementia and Alzheimer’s disease both produce a gradual decline in functioning of the brain cells that produce the neurotransmitter acetylcholine. Without this neurotransmitter, neurons are unable to communicate, leaving the brain less and less functional.
Dementia and Alzheimer’s disease are in part heritable, but increasing evidence suggests that environment also plays a role. And current research is helping us understand what older adults can do to slow or prevent the negative cognitive outcomes of aging, including dementia and Alzheimer’s disease. [2] Older adults who continue to keep their minds active by engaging in cognitive activities, such as reading, playing musical instruments, attending lectures, or doing crossword puzzles; who maintain social interactions with others; and who keep themselves physically fit have a greater chance of maintaining their mental acuity than those who do not. [3] [4] In short, although physical illnesses may happen to anyone, the more people keep their brains active and maintain a healthy and active lifestyle, the healthier their brains will remain. [5]
Because of increased life expectancy in the 21st century, elderly people can expect to spend approximately a quarter of their lives in retirement. Leaving a career is a major life change and can be a time when people experience anxiety, depression, and other negative changes in self-concept and self-identity. At the same time, retirement may also serve as an opportunity for a positive transition from work and career roles to stronger family and community member roles, and the latter may have a variety of positive outcomes for the individual. Retirement may be a relief for people who have worked in boring or physically demanding jobs, particularly if they have other outlets for stimulation and expressing self-identity.
Psychologist Mo Wang [1] observed the well-being of 2,060 people between the ages of 51 and 61 over an 8-year period and made the following recommendations to make the retirement phase a positive one:
While these seven tips are helpful for a smooth transition to retirement, Wang also notes that people tend to be adaptable and that, no matter how they do it, retirees will eventually adjust to their new lifestyles.
Living includes dealing with our own and our loved ones’ mortality. In her book On Death and Dying, [1] Elizabeth Kübler-Ross describes five phases of grief through which people pass in grappling with the knowledge that they or someone close to them is dying:
Despite Kübler-Ross’s popularity, a growing number of critics argue that her five-stage sequence is too constraining, because attitudes toward death and dying have been found to vary greatly across cultures and religions, and these variations make the process of dying different from culture to culture. [2] As an example, Japanese Americans restrain their grief [3] so as not to burden other people with their pain. By contrast, Jews observe a 7-day, publicly announced mourning period. In some cultures, the elderly are more likely to be living and coping alone, or perhaps only with their spouse, whereas in other cultures, such as the Hispanic culture, the elderly are more likely to be living with their sons and daughters and other relatives, and this social support may create a better quality of life for them. [4]
Margaret Stroebe and her colleagues [5] found that although most people adjust to the loss of a loved one without seeking professional treatment, many have an increased risk of mortality, particularly within the early weeks and months after the loss. These researchers also found that people going through the grieving process suffer more physical and psychological symptoms and illnesses and use more medical services.
The health of survivors during the end of life is influenced by factors such as circumstances surrounding the loved one’s death, individual personalities, and ways of coping. People serving as caretakers to partners or other family members who are ill frequently experience a great deal of stress themselves, making the dying process even more stressful. Despite the trauma of the loss of a loved one, people do recover and are able to continue with effective lives. Grief intervention programs can go a long way in helping people cope during the bereavement period. [6]
Consider the following people and descriptions of where they are in Elizabeth Kübler-Ross’s stages of grief model:
He was 3,000 feet up in the air when the sudden loss of power in his airplane put his life, as well as the lives of 150 other passengers and crew members, in his hands. Both of the engines on flight 1549 had shut down, and his options for a safe landing were limited.
Sully kept flying the plane and alerted the control tower to the situation:
"This is Cactus 1539 . . . hit birds. We lost thrust in both engines. We’re turning back towards La Guardia."
When the tower gave him the compass setting and runway for a possible landing, Sullenberger’s extensive experience allowed him to give a calm response:
"I’m not sure if we can make any runway. . . . Anything in New Jersey?"
Captain Sullenberger was not just any pilot in a crisis, but a former U.S. Air Force fighter pilot with 40 years of flight experience. He had served as a flight instructor and the Airline Pilots Association safety chairman. Training had quickened his mental processes in assessing the threat, allowing him to maintain what tower operators later called an “eerie calm.” He knew the capabilities of his plane.
When the tower suggested a runway in New Jersey, Sullenberger calmly replied:
"We’re unable. We may end up in the Hudson."
The last communication from Captain Sullenberger to the tower advised of the eventual outcome:
"We’re going to be in the Hudson."
He calmly set the plane down on the water. Passengers reported that the landing was like landing on a rough runway. The crew kept the passengers calm as women, children, and then the rest of the passengers were evacuated onto the boats of the rescue personnel that had quickly arrived. Captain Sullenberger then calmly walked the aisle of the plane to be sure that everyone was out before joining the 150 other rescued survivors. [1] [2]
Some called it “grace under pressure” and others, the “miracle on the Hudson.” But psychologists see it as the ultimate in emotion regulation—the ability to control and productively use one’s emotions.
The topic of this module is affect, defined as the experience of feeling or emotion. Affect is an essential part of the study of psychology because it plays such an important role in everyday life. As we will see, affect guides behavior, helps us make decisions, and has a major impact on our mental and physical health.
The two fundamental components of affect are emotions and motivation. Both of these words have the same underlying Latin root, meaning “to move.” In contrast to cognitive processes that are calm, collected, and frequently rational, emotions and motivations involve arousal, or our experiences of the bodily responses created by the sympathetic division of the autonomic nervous system (ANS). Because they involve arousal, emotions and motivations are “hot”—they “charge,” “drive,” or “move” our behavior.
When we experience emotions or strong motivations, we feel the experiences. When we become aroused, the sympathetic nervous system provides us with energy to respond to our environment. The liver puts extra sugar into the bloodstream, the heart pumps more blood, our pupils dilate to help us see better, respiration increases, and we begin to perspire to cool the body. The stress hormones epinephrine and norepinephrine are released. We experience these responses as arousal.
An emotion is a mental and physiological feeling state that directs our attention and guides our behavior. Whether it is the thrill of a roller-coaster ride that elicits an unexpected scream, the flush of embarrassment that follows a public mistake, or the horror of a potential plane crash that creates an exceptionally brilliant response in a pilot, emotions move our actions. Emotions normally serve an adaptive role: We care for infants because of the love we feel for them, we avoid making a left turn onto a crowded highway because we fear that a speeding truck may hit us, and we are particularly nice to Mandy because we are feeling guilty that we didn’t go to her party. But emotions may also be destructive, such as when a frustrating experience leads us to lash out at others who do not deserve it.
Motivations are closely related to emotions. A motivation is a driving force that initiates and directs behavior. Some motivations are biological, such as the motivation for food, water, and sex. But there are a variety of other personal and social motivations that can influence behavior, including the motivations for social approval and acceptance, the motivation to achieve, and the motivation to take, or to avoid taking, risks. [3] In each case we follow our motivations because they are rewarding. As predicted by basic theories of operant learning, motivations lead us to engage in particular behaviors because doing so makes us feel good.
We begin this module by considering the role of affect on behavior, discussing the most important psychological theories of emotions. Then we will consider how emotions influence our mental and physical health. We will discuss how the experience of long-term stress causes illness, and then turn to research on positive thinking and what has been learned about the beneficial health effects of more positive emotions. Finally, we will review some of the most important human motivations, including the behaviors of eating and sex. The importance of this unit is not only in helping you gain an understanding of the principles of affect but also in helping you discover the important roles that affect plays in our everyday lives, particularly in our mental and physical health. The study of the interface between affect and physical health—the principle that “everything that is physiological is also psychological”—is a key focus of the branch of psychology known as health psychology. The importance of this topic has made health psychology one of the fastest growing fields in psychology.
Most of us aren’t faced with the need to make a split-second life-or-death decision as Captain Sullenberger had to do; however, we have all faced times when we felt strong emotions. Imagine you are awakened in the middle of the night by a strange noise in your family room. You have no idea of what made the noise. Immediately, you feel the signs of autonomic arousal: your heart pounds, you feel flushed and maybe sick to your stomach, you have trouble breathing. You are afraid of what you might find in your family room, so you pull the covers over your head trying to muster the courage to take a look. You are experiencing the physiological part of emotion—arousal—and the emotion—fear. I’m sure you have had similar feelings in other situations, perhaps when you were in love, angry, embarrassed, frustrated, or very sad.
If you experience the fear and the arousal at the same time—the pounding heart along with the fear—your experience is consistent with the Cannon-Bard theory of emotion, proposed by Walter Cannon and Philip Bard, which holds that emotions and arousal occur simultaneously. According to this theory, the experience of the emotion (in this case, “I’m afraid”) occurs alongside our experience of the arousal (“my heart is beating fast”). If your experiences are like mine, as you reflect on the arousal you have felt in strong emotional situations, you probably think something like, “I was afraid and my heart started beating like crazy.” Thus, according to this model, as we become aware of danger, our heart rate also increases.
But there is another way to look at the connection between fear and arousal. In the same story, you pulled the covers over your head and only then felt afraid: you react first and then experience the emotion. This alternative explanation of the connection between emotion and arousal was proposed by William James and Carl Lange and is called the James-Lange theory. Although the idea that emotion and arousal occur together seems intuitive, James and Lange assigned arousal a different role. According to the James-Lange theory of emotion, our experience of an emotion is the result of the arousal that we experience. This approach proposes that the arousal and the emotion are not independent; rather, the emotion depends on the arousal. The fear does not occur along with the racing heart but occurs because of the racing heart. As William James [1] put it, “We feel sorry because we cry, angry because we strike, afraid because we tremble” (p. 190). A fundamental aspect of the James-Lange theory is that different patterns of arousal may create different emotional experiences.
There is yet another way to look at it. You realize you are experiencing the physical signs of arousal. Then you think about the situation you are in (an unexplained noise in the middle of the night) and you explain that arousal as fear. This third explanation is consistent with Stanley Schachter and Jerome Singer’s two-factor theory of emotion. The two-factor theory of emotion argues that the arousal that we experience is basically the same in every emotion, and that all emotions (including the basic emotions) are differentiated only by our cognitive appraisal of the source of the arousal. The two-factor theory of emotion asserts that the experience of emotion is determined by the intensity of the arousal we are experiencing, but that the cognitive appraisal of the situation determines what the emotion will be. Because both arousal and appraisal are necessary, we can say that emotions have two factors, an arousal factor and a cognitive factor: [2] emotion = arousal + cognition.
To fully understand the Schachter-Singer theory, it’s helpful to look at another example. Let’s say you’re experiencing the same arousal—pounding heart, feeling flushed, having difficulty catching your breath. But this time you are engaging in one of your favorite activities—parachute jumping. Now you interpret the same feelings of arousal as exhilaration, not fear. You have a different interpretation of the situation you’re in and therefore a different label for the same arousal.
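Because the two-factor model reduces to an equation—emotion = arousal + cognition—a worked illustration may help. The short Python sketch below is our own hypothetical illustration, not anything from Schachter and Singer’s research: the arousal values, the appraisal labels, and the label_emotion function are all invented for demonstration. It shows the theory’s central claim that the same arousal receives different emotion labels depending on the cognitive appraisal of the situation.

```python
# A minimal sketch of the two-factor theory: emotion = arousal + cognition.
# All values and labels here are hypothetical illustrations.

def label_emotion(arousal: float, appraisal: str) -> str:
    """Label an emotion: the arousal supplies the intensity,
    but the cognitive appraisal supplies the label."""
    if arousal < 0.2:
        return "calm"  # too little arousal to label as an emotion
    labels = {
        "threatening": "fear",        # a strange noise in the night
        "thrilling": "exhilaration",  # a parachute jump
        "insulting": "anger",
    }
    return labels.get(appraisal, "unlabeled arousal")

# Identical physiological arousal, two different appraisals:
print(label_emotion(0.9, "threatening"))  # -> fear
print(label_emotion(0.9, "thrilling"))    # -> exhilaration
```

The point of the sketch is only that the appraisal, not the arousal itself, selects the emotion; the arousal is interchangeable across emotions, which is exactly what the two-factor theory asserts.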
Later in our discussion, we look at a physiological view of emotions, which offers some support for the Schachter-Singer two-factor theory.
You have just learned about three theories of emotion: the Cannon-Bard theory, the James-Lange theory, and the Schachter-Singer theory. In this activity, you will complete the following figure so that it represents the association between arousal and emotion according to each theory.
Adapted from Flat World Knowledge, Introduction to Psychology, v1.0, CC-BY-NC-SA.
There is research evidence to support each of these theories. The operation of the fast emotional pathway (see the figure "Slow and Fast Emotional Pathways" in the module) supports the idea that arousal and emotions occur together. The emotional circuits in the limbic system are activated when an emotional stimulus is experienced, and these circuits quickly create corresponding physical reactions. [1] The process happens so quickly that it may feel to us as if emotion is simultaneous with our physical arousal.
On the other hand, and as predicted by the James-Lange theory, our experiences of emotion are weaker without arousal. Patients who have spinal injuries that reduce their experience of arousal also report decreases in emotional responses. [2] There is also at least some support for the idea that different emotions are produced by different patterns of arousal. People who view fearful faces show more amygdala activation than those who watch angry or joyful faces, [3] [4] we experience a red face and flushing when we are embarrassed but not when we experience other emotions, [5] and different hormones are released when we experience compassion than when we experience other emotions. [6]
In some cases it may be difficult for a person who is experiencing a high level of arousal to accurately determine which emotion she is experiencing. That is, she may be certain that she is feeling arousal, but the meaning of the arousal (the cognitive factor) may be less clear. Some romantic relationships, for instance, have a very high level of arousal, and the partners alternately experience extreme highs and lows in the relationship. One day they are madly in love with each other and the next they are in a huge fight. In situations that are accompanied by high arousal, people may be unsure what emotion they are experiencing. In the high-arousal relationship, for instance, the partners may be uncertain whether the emotion they are feeling is love, hate, or both at the same time (sound familiar?). The tendency for people to incorrectly label the source of the arousal that they are experiencing is known as the misattribution of arousal.
In one interesting field study by Dutton and Aron, [7] an attractive young woman approached individual young men as they crossed a wobbly, long suspension walkway hanging more than 200 feet above a river in British Columbia, Canada. The woman asked each man to help her fill out a class questionnaire. When he had finished, she wrote her name and phone number on a piece of paper, and invited him to call if he wanted to hear more about the project. More than half of the men who had been interviewed on the bridge later called the woman. In contrast, men approached by the same woman on a low solid bridge, or who were interviewed on the suspension bridge by men, called significantly less frequently. The idea of misattribution of arousal can explain this result—the men were feeling arousal from the height of the bridge, but they misattributed it as romantic or sexual attraction to the woman, making them more likely to call her.
If you think a bit about your own experiences of different emotions, and if you consider the equation that suggests that emotions are represented by both arousal and cognition, you might start to wonder how much is determined by each. That is, do we know what emotion we are experiencing by monitoring our feelings (arousal) or by monitoring our thoughts (cognition)? The bridge study you just read about might begin to provide an answer: The men seemed to be more influenced by their perceptions of how they should be feeling (their cognition) than by how they actually were feeling (their arousal).
Stanley Schachter and Jerome Singer [8] directly tested this prediction of the two-factor theory of emotion in a well-known experiment. Schachter and Singer believed that the cognitive part of the emotion was critical—in fact, they believed that the arousal that we are experiencing could be interpreted as any emotion, provided we had the right label for it. Thus they hypothesized that if an individual is experiencing arousal for which he has no immediate explanation, he will “label” this state in terms of the cognitions that are created in his environment. On the other hand, they argued that people who already have a clear label for their arousal would have no need to search for a relevant label, and therefore should not experience an emotion.
In the research, male participants were told that they would be participating in a study on the effects of a new drug, called “suproxin,” on vision. On the basis of this cover story, the men were injected with a shot of the neurotransmitter epinephrine, a drug that normally creates feelings of tremors, flushing, and accelerated breathing in people. The idea was to give all the participants the experience of arousal.
Then, according to random assignment to conditions, the men were told that the drug would make them feel certain ways. The men in the epinephrine informed condition were told the truth about the effects of the drug—they were told that they would likely experience tremors, their hands would start to shake, their hearts would start to pound, and their faces might get warm and flushed. The participants in the epinephrine-uninformed condition, however, were told something untrue—that their feet would feel numb, that they would have an itching sensation over parts of their body, and that they might get a slight headache. The idea was to make some of the men think that the arousal they were experiencing was caused by the drug (the informed condition), whereas others would be unsure where the arousal came from (the uninformed condition).
Then the men were left alone with a confederate who they thought had received the same injection. While they were waiting for the experiment (which was supposedly about vision) to begin, the confederate behaved in a wild and crazy (Schachter and Singer called it “euphoric”) manner. He wadded up spitballs, flew paper airplanes, and played with a hula-hoop. He kept trying to get the participant to join in with his games. Then right before the vision experiment was to begin, the participants were asked to indicate their current emotional states on a number of scales. One of the emotions they were asked about was euphoria.
If you are following the story, you will realize what was expected: The men who had a label for their arousal (the informed group) would not be experiencing much emotion because they already had a label available for it. The men in the uninformed group, on the other hand, were expected to be unsure about the source of the arousal. They needed to find an explanation for their arousal, and the confederate provided one. As you can see in the left side of the figure below, this is just what they found. The participants in the uninformed condition were more likely to be experiencing euphoria (as measured by their behavioral responses with the confederate) than were those in the informed condition.
Then Schachter and Singer conducted another part of the study, using new participants. Everything was exactly the same except for the behavior of the confederate. Rather than being euphoric, he acted angry. He complained about having to complete the questionnaire he had been asked to do, indicating that the questions were stupid and too personal. He ended up tearing up the questionnaire that he was working on, yelling “I don’t have to tell them that!” Then he grabbed his books and stormed out of the room.
What do you think happened in this condition? The answer is the same thing: The uninformed participants experienced more anger (again as measured by the participants’ behaviors during the waiting period) than did the informed participants, as shown on the right side of the figure above. The idea is that because cognitions are such strong determinants of emotional states, the same state of physiological arousal can be labeled in many different ways, depending entirely on the label provided by the social situation. As Schachter and Singer [8] put it, “Given a state of physiological arousal for which an individual has no immediate explanation, he will ‘label’ this state and describe his feelings in terms of the cognitions available to him” (p. 381).
Because it assumes that arousal is constant across emotions, the two-factor theory also predicts that emotions may transfer or “spill over” from one highly arousing event to another. My university basketball team recently won the NCAA basketball championship, but after the final victory some students rioted in the streets near the campus, lighting fires and burning cars. This seems to be a very strange reaction to such a positive outcome for the university and the students, but it can be explained through the spillover of the arousal caused by happiness to destructive behaviors. The principle of excitation transfer refers to the phenomenon that occurs when people who are already experiencing arousal from one event tend to also experience unrelated emotions more strongly.
In sum, each of the three theories of emotion has something to support it. In terms of Cannon-Bard, emotions and arousal generally are subjectively experienced together, and the response is very fast. In support of the James-Lange theory, there is at least some evidence that arousal is necessary for the experience of emotion, and that the patterns of arousal are different for different emotions. And in line with the two-factor model, there is also evidence that we may interpret the same patterns of arousal differently in different situations.
The most fundamental emotions, known as the basic emotions, are those of anger, disgust, fear, happiness, sadness, and surprise (and some psychologists also include contempt). The basic emotions have a long history in human evolution, and they have developed in large part to help us make rapid judgments about stimuli and to quickly guide appropriate behavior. [1] The basic emotions are determined in large part by one of the oldest parts of our brain, the limbic system, including the amygdala, the hypothalamus, and the thalamus. Because they are primarily evolutionarily determined, the basic emotions are experienced and displayed in much the same way across cultures, [2] [3] [4] [5] and people are quite accurate at judging the facial expressions of people from different cultures. Watch the following video to see a demonstration of the basic emotions.
Not all of our emotions come from the old parts of our brain; we also interpret our experiences to create a more complex array of emotional experiences. For instance, the amygdala may signal fear when it senses that the body is falling, but that fear may be interpreted completely differently (perhaps even as “excitement”) when we are falling on a roller-coaster ride than when we are falling from the sky in an airplane that has lost power. The cognitive interpretations that accompany emotions—known as cognitive appraisal—allow us to experience a much larger and more complex set of secondary emotions, as shown in the following figure. Although they are in large part cognitive, our experiences of the secondary emotions are determined in part by arousal (on the vertical axis of the figure) and in part by their valence—that is, whether they are pleasant or unpleasant feelings (on the horizontal axis of the figure).
When you succeed in reaching an important goal, you might spend some time enjoying your secondary emotions, perhaps the experience of joy, satisfaction, and contentment. But when your close friend wins a prize that you thought you had deserved, you might also experience a variety of secondary emotions (in this case, the negative ones)—for instance, feeling angry, sad, resentful, and ashamed. You might mull over the event for weeks or even months, experiencing these negative emotions each time you think about it. [6]
The distinction between the primary and the secondary emotions is paralleled by two brain pathways: a fast pathway and a slow pathway. [1] [2] [3] The thalamus acts as the major gatekeeper in this process, as shown in the figure below. Our response to the basic emotion of fear, for instance, is primarily determined by the fast pathway through the limbic system. When a car pulls out in front of us on the highway, the thalamus activates and sends an immediate message to the amygdala. We quickly move our foot to the brake pedal. Secondary emotions are more determined by the slow pathway through the frontal lobes in the cortex. When we stew in jealousy over the loss of a partner to a rival or reflect on our win in the big tennis match, the process is more complex. Information moves from the thalamus to the frontal lobes for cognitive analysis and integration, and then from there to the amygdala. We experience the arousal of emotion, but it is accompanied by a more complex cognitive appraisal, producing more refined emotions and behavioral responses.
Let’s go back to the example we used earlier when talking about the historical theories. When you hear a strange noise in your bedroom late at night, you are likely to react fearfully before you know whether or not there is something to fear. Which pathway is involved in this reaction? If you guessed the fast pathway, you’re right. No cognition is involved, so the signal goes directly to the amygdala. Now you go into your bedroom, check, and realize it was just your cat jumping down from a shelf. You’ve appraised the situation as harmless, so the fear abates. What’s the pathway here? Right, this is the slow pathway, because you’re using your cortex to appraise the situation.
Fast pathway reactions involve no cognition and often can’t be overridden by cognitive appraisal. A person may react fearfully (screaming and jumping backward) to the mere sight of any snake, dangerous or not. This person may live in an area with no dangerous snakes and know it; they can appraise the snake as harmless, but the fear reaction occurs anyway.
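To make the routing concrete, here is a minimal sketch in Python. It is a hypothetical illustration of the two routes described above, not a neural model: the function names, the stimuli, and the appraisal rule are all invented for demonstration.

```python
# A minimal sketch of the fast and slow emotional pathways.
# The stimuli and routing here are hypothetical illustrations only.

def fast_pathway(stimulus: str) -> str:
    """Thalamus -> amygdala: an immediate reaction, no cognition involved."""
    return f"React at once to '{stimulus}' (scream, jump back, hit the brakes)."

def slow_pathway(stimulus: str, appraisal: str) -> str:
    """Thalamus -> frontal cortex -> amygdala: appraisal precedes the response."""
    return f"Appraise '{stimulus}' as {appraisal}, then respond with a refined emotion."

print(fast_pathway("a strange noise at night"))           # fear before any thought
print(slow_pathway("the noise", "just the cat jumping"))  # fear abates after appraisal
```

Note that, as the snake example shows, a real fast-pathway reaction can fire even when the slow pathway later judges the stimulus harmless; the sketch only separates the two routes and does not capture that failure to override.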
Because there are more neural connections that feed into the frontal cortex from the amygdala than the other way around, it is not uncommon for people to make decisions based on emotion rather than rational thinking. When voting, people are unlikely to vote for a candidate they don’t like even if that candidate’s political beliefs more closely match their own.
Emotionally based decisions are not always a bad thing. Although emotions might seem more frivolous or less important than our more rational cognitive processes, both emotions and cognitions can help us make effective decisions. In some cases we take action after rationally processing the costs and benefits of different choices, but in other cases we rely on our emotions. Emotions become particularly important in guiding decisions when we face many complex and conflicting alternatives that present a high degree of uncertainty and ambiguity, making a complete cognitive analysis difficult. In these cases we often rely on our emotions to make decisions, and these decisions may in many cases be more accurate than those produced by cognitive processing. [4] [5] [6] [7]
In addition to experiencing emotions internally, we also express our emotions to others, and we learn about the emotions of others by observing them. This communication process has evolved over time, and is highly adaptive. One way that we perceive the emotions of others is through their nonverbal communication, that is, communication that does not involve words. [1] [2] Nonverbal communication includes our tone of voice, gait, posture, touch, and facial expressions, and we can often accurately detect the emotions that other people are experiencing through these channels.
The most important communicator of emotion is the face. The face contains 43 muscles that allow it to make more than 10,000 unique configurations and to express a wide variety of emotions. For example, happiness is expressed by smiles, which are created by two of the major muscles surrounding the mouth and the eyes, and anger is created by lowered brows and firmly pressed lips.
In addition to helping us express our emotions, the face also helps us feel emotion. The facial feedback hypothesis proposes that the movement of our facial muscles can trigger corresponding emotions. Fritz Strack and his colleagues [3] asked their research participants to hold a pen in their teeth (mimicking the facial action of a smile) or between their lips (similar to a frown), and then had them rate the funniness of a cartoon. They found that the cartoons were rated as more amusing when the pen was held in the “smiling” position—the subjective experience of emotion was intensified by the action of the facial muscles.
When we communicate electronically, we can’t see the writer’s nonverbal communications, so many people add emoticons. The simplest emoticons are created on the keyboard to add a smiley face or a sad face. When texting on a smartphone, you can use text symbols such as the following:
1. :-D for laughing
2. :-o for surprise
3. ;-) for happy
4. :-|| for angry
5. :-P for disgust or sticking your tongue out
6. :-( for sad
Acronyms are another way to communicate emotion; for example:
1. LOL for laughing out loud
2. EWG for evil wicked grin
3. JT for just teasing
4. MEGO for my eyes glaze over
5. AML for all my love
6. SB for smiling back
The following table shows some of the important nonverbal behaviors we use to express emotion and some other information (particularly liking or disliking and dominance or submission).
[Table: Some Common Nonverbal Communicators. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
Just as there is no “universal” spoken language, there is no universal nonverbal language. For instance, in the United States and many Western cultures, we express disrespect by showing the middle finger (the “finger” or the “bird”). But in Britain, Ireland, Australia, and New Zealand, the “V” sign (made with the back of the hand facing the recipient) serves a similar purpose. In countries where Spanish, Portuguese, or French are spoken, a gesture in which a fist is raised and the arm is slapped on the bicep is equivalent to the finger, and in Russia, Indonesia, Turkey, and China, a sign in which the hand and fingers are curled and the thumb is thrust between the middle and index fingers is used for the same purpose.
These results, and others like them, show that our behaviors, including our facial expressions, are influenced by, but also influence, our affect. We may smile because we are happy, but we are also happy because we are smiling. And we may stand up straight because we are proud, but we are proud because we are standing up straight. [4]
The stress of the Monday through Friday grind can be offset by the fun that we can have on the weekend, and the concerns that we have about our upcoming chemistry exam can be offset by a positive attitude toward school, life, and other people. Put simply, the best antidote for stress is happiness: Think positively, have fun, and enjoy the company of others.
You have probably heard about the “power of positive thinking”—the idea that thinking positively helps people meet their goals and keeps them healthy, happy, and able to effectively cope with the negative events that happen to them. It turns out that positive thinking really works. People who think positively about their future, who believe that they can control their outcomes, and who are willing to open up and share with others are healthier people. [1] Positive thinkers do not ignore the negative events in their lives; rather, their positive outlook—the belief that things will get better—helps them cope.
The power of positive thinking comes in different forms, but they are all helpful. Some researchers have focused on optimism, a general tendency to expect positive outcomes, finding that optimists are happier and have less stress. [2] Others have focused on self-efficacy, the belief in our ability to carry out actions that produce desired outcomes—the belief that we will be able to succeed at the things we try. Our successes improve our self-efficacy. People with high self-efficacy respond to environmental and other threats in an active, constructive way—by getting information, talking to friends, and attempting to face and reduce the difficulties they are experiencing. Such people are also better able to ward off stress than people with lower self-efficacy. [3]
Self-efficacy helps in part because it leads us to perceive that we can control the potential stressors that may affect us. Workers who have control over their work environment (e.g., by being able to move furniture and control distractions) experience less stress, as do patients in nursing homes who are able to choose their everyday activities. [4] Glass, Reim, and Singer [5] found that participants who believed that they could stop a loud noise experienced less stress than those who did not think that they could, even though the people who had the option never actually used it. The ability to control our outcomes may help explain why animals and people who have higher status live longer. [6]
Suzanne Kobasa and her colleagues [7] have argued that the tendency to be less affected by life’s stressors can be characterized as an individual difference measure, related to both optimism and self-efficacy, known as hardiness. Hardy individuals are those who are more positive overall about potentially stressful life events, who take more direct action to understand the causes of negative events, and who attempt to learn from them what may be of value for the future. Hardy individuals use effective coping strategies, and they take better care of themselves.
Taken together, these various coping skills, including optimism, self-efficacy, and hardiness, have been shown to have a wide variety of positive effects on our health. Optimists make faster recoveries from illnesses and surgeries. [8] People with high self-efficacy have been found to be better able to quit smoking and lose weight and are more likely to exercise regularly. [9] And hardy individuals seem to cope better with stress and other negative life events. [10] The positive effects of positive thinking are particularly important when stress is high. Baker [11] found that in periods of low stress, positive thinking made little difference in responses to stress, but that during stressful periods, optimists were less likely to smoke on a day-to-day basis and more likely to respond to stress in more productive ways, such as by exercising.
It is possible to learn to think more positively, and doing so can be beneficial. Antoni and colleagues [12] found that pessimistic cancer patients who were given training in optimism reported more optimistic outlooks after the training and were less fatigued after their treatments. And Maddi, Kahn, and Maddi [13] found that a “hardiness training” program that included focusing on ways to effectively cope with stress was effective in increasing satisfaction and decreasing self-reported stress.
The benefits of taking positive approaches to stress can last a lifetime. Christopher Peterson and his colleagues [14] found that the level of optimism reported by people who had first been interviewed when they were in college during the years between 1936 and 1940 predicted their health over the next 50 years. Students who had a more positive outlook on life in college were less likely to have died up to 50 years later of all causes, and they were particularly likely to have experienced fewer accidental and violent deaths, in comparison to students who were less optimistic. Similar results were found for older adults. After controlling for loneliness, marital status, economic status, and other correlates of health, Levy and Myers found that older adults with positive attitudes and higher self-efficacy had better health and lived on average almost 8 years longer than their more negative peers. [15] [16] And Diener, Nickerson, Lucas, and Sandvik [17] found that people who had cheerier dispositions earlier in life had higher income levels and less unemployment when they were assessed 19 years later.
Happiness is determined in part by genetic factors, such that some people are naturally happier than others, [1] [2] but also in part by the situations that we create for ourselves. Psychologists have studied hundreds of variables that influence happiness, but there is one that is by far the most important. People who report that they have positive social relationships with others—the perception of social support—also report being happier than those who report having less social support. [3] [4] Married people report being happier than unmarried people, [5] and people who are connected with and accepted by others experience less depression, higher self-esteem, and less social anxiety and jealousy than those who feel more isolated and rejected. [6]
Social support also helps us better cope with stressors. Koopman, Hermanson, Diamond, Angell, and Spiegel [7] found that women who reported higher social support experienced less depression when adjusting to a diagnosis of cancer, and Ashton and colleagues [8] found a similar buffering effect of social support for AIDS patients. People with social support are less depressed overall, recover faster from negative events, and are less likely to commit suicide. [9] [10] [11] [12]
Social support buffers us against stress in several ways. For one, having people we can trust and rely on helps us directly by allowing us to exchange favors when we need them. These are the direct effects of social support. But having people around us also makes us feel good about ourselves. These are the appreciation effects of social support. Gençöz and Özlale found that students with more friends felt less stress and reported that their friends helped them, but they also reported that having friends made them feel better about themselves. [13]
Taylor, Klein, Lewis, Gruenewald, Gurung, and Updegraff propose tend-and-befriend as a female alternative to the fight-or-flight response to stress, which is more likely to be used by men. Taylor and her colleagues say that when women find themselves in stressful situations, they spend more time tending to their children and families and seek out the comfort of friends. [14] The tend-and-befriend response, so often used by women, is an important and effective way to reduce stress.
Historically, psychology has devoted much of its time to describing problems and helping people deal with them. Psychologists treat psychological disorders, so most of our efforts are aimed at trying to make them go away. We spend very little time talking about people without psychological disorders and how we might make their lives better. This realization is part of what gave birth to the positive psychology movement. In positive psychology, we are interested in how we might make people’s lives better even if, and perhaps especially if, they do not have a psychological disorder. Mihaly Csikszentmihalyi [1] said people were happiest when in a state of flow. According to Csikszentmihalyi, people are in a state of flow when they are completely absorbed in what they are doing, so much so that they become one with the task. Their energy is completely focused on what they are doing to the exclusion of what’s going on around them. [1]
One difficulty that people face when trying to improve their happiness is that they may not always know what will make them happy. As one example, many of us think that if we just had more money we would be happier. While it is true that we do need money to afford food and adequate shelter for ourselves and our families, after this minimum level of wealth is reached, more money does not generally buy more happiness. [2] For instance, as you can see in the figure below, even though income and material success have improved dramatically in many countries over the past decades, happiness has not. Despite tremendous economic growth in France, Japan, and the United States between 1946 and 1990, there was no increase in reports of well-being by the citizens of these countries. Americans today have about three times the buying power they had in the 1950s, and yet overall happiness has not increased. The problem seems to be that we never have enough money to make us “really” happy. Csikszentmihalyi [3] reported that people who earned $30,000 per year felt that they would be happier if they made $50,000 per year, but that people who earned $100,000 per year said that they would need $250,000 per year to make them happy.
These findings might lead us to conclude that we don’t always know what does or what might make us happy, and this seems to be at least partially true. For instance, Jean Twenge and her colleagues [1] found in several studies that although people with children frequently claim that having children makes them happy, couples who do not have children actually report being happier than those who do.
Psychologists have found that people’s ability to predict their future emotional states, a skill Wilson and Gilbert [2] call affective forecasting, is not very accurate. For one, people overestimate their emotional reactions to events. Although people think that positive and negative events that might occur to them will make a huge difference in their lives, and although these changes do make at least some difference in life satisfaction, they tend to be less influential than we think they are going to be. Positive events tend to make us feel good, but their effects wear off pretty quickly, and the same is true for negative events. For instance, Brickman, Coates, and Janoff-Bulman [3] interviewed people who had won more than $50,000 in a lottery and found that they were not happier than they had been in the past and were also not happier than a control group of similar people who had not won the lottery. On the other hand, the researchers found that individuals who were paralyzed as a result of accidents were not as unhappy as might be expected.
How can this possibly be? There are several reasons. First, people are resilient: they bring their coping skills into play when negative events occur, and this makes them feel better. Second, most people do not continually experience very positive, or very negative, affect over a long period of time but rather adapt to their current circumstances. Just as we enjoy the second chocolate bar we eat less than we enjoy the first, as we experience more and more positive outcomes in our daily lives, we habituate to them and our life satisfaction returns to a more moderate level. [4]
Another reason we may mispredict our happiness is that our social comparisons change when our own status changes as a result of new events. People who are wealthy compare themselves to other wealthy people, people who are poor tend to compare with other poor people, and people who are ill tend to compare with other ill people. When our comparisons change, our happiness levels are correspondingly influenced. And when people are asked to predict their future emotions, they may focus only on the positive or negative event they are asked about, and forget about all the other things that won’t change. Wilson, Wheatley, Meyers, Gilbert, and Axsom [5] found that when people were asked to focus on all the more regular things they will still be doing in the future (working, going to church, socializing with family and friends, and so forth), their predictions about how something really good or bad would influence them were less extreme.
Motivations are often considered in psychology in terms of drives, which are internal states that are activated when the physiological characteristics of the body are out of balance, and goals, which are desired end states that we strive to attain. Motivation can thus be conceptualized as a series of behavioral responses that lead us to attempt to reduce drives and to attain goals by comparing our current state with a desired end state. [1] Like a thermostat on an air conditioner, the body tries to maintain homeostasis, the natural state of the body’s systems, with goals, drives, and arousal in balance. When a drive or goal is aroused—for instance, when we are hungry—the thermostat turns on and we start to behave in a way that attempts to reduce the drive or meet the goal (in this case to seek food). As the body works toward the desired end state, the thermostat continues to check whether or not the end state has been reached. Eventually, the need or goal is satisfied (we eat), and the relevant behaviors are turned off. The body’s thermostat continues to check for homeostasis and is always ready to react to future needs.
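To make the thermostat analogy concrete, here is a minimal sketch of a drive-reduction feedback loop in Python. It is purely illustrative: the set point, the threshold, and the rate at which eating reduces the drive are made-up values, not quantities from the text.

```python
# Illustrative sketch of the homeostasis "thermostat" described above.
# All numbers and the update rule are hypothetical, chosen only to show
# the compare -> act -> re-check cycle of drive reduction.

SET_POINT = 0.0   # desired end state (e.g., "not hungry")
THRESHOLD = 1.0   # how far from the set point before the drive is aroused

def regulate(current_state: float) -> float:
    """One pass of the feedback loop: check the drive, act if needed."""
    drive = current_state - SET_POINT      # compare current vs. desired state
    if abs(drive) > THRESHOLD:             # drive aroused: behave to reduce it
        print(f"drive = {drive:.1f}: seeking food")
        current_state -= drive * 0.5       # eating moves us toward the set point
    else:
        print(f"drive = {drive:.1f}: satisfied, behavior switched off")
    return current_state

state = 4.0  # start out hungry
for _ in range(5):
    state = regulate(state)  # the "thermostat" keeps re-checking
```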
One of the ways psychologists have looked at motivation is to distinguish between intrinsic and extrinsic motivation. Someone who is intrinsically motivated works because they enjoy the task, they believe the task is important, and they desire to do a good job. Intrinsic motivation resides within the individual. Earlier, we talked about a very good example of intrinsic motivation: people in a state of flow [2] are intrinsically motivated. They enjoy the task so much that they get lost in it. On the other hand, a person who is extrinsically motivated works because they are getting a reward such as money or praise. Extrinsic motivation exists outside the person. You learned in the unit on learning that reinforcement and punishment can be used to change behavior. They are both extrinsic motivators.
When you are intrinsically motivated, you need no external reinforcers such as good grades. You study and learn because you like the subject material. You are more likely to continue a task if intrinsically motivated. One danger with extrinsic motivation is that once the reward such as praise or good grades stops, you may not continue with the behavior. Using rewards for a task that the person already enjoys doing may not have the intended effect. Oftentimes it makes the task less enjoyable. The person may lose their intrinsic motivation and do the task only for rewards. For example, your child gets good grades in school because they enjoy learning. Then you start paying your child for every A or B they receive. You are running the risk that your child will lose their intrinsic motivation and continue to get good grades only if you pay them. It’s generally not a good idea to reward a behavior that a person already enjoys doing.
Along with the need to drink fresh water, which humans can normally attain in all except the most extreme situations, the need for food is the most fundamental and important human need. More than 1 in 10 U.S. households contain people who live without enough nourishing food, and this lack of proper nourishment has profound effects on their abilities to create effective lives. [1] When people are extremely hungry, their motivation to attain food completely changes their behavior. Hungry people become listless and apathetic to save energy and then become completely obsessed with food. Ancel Keys and his colleagues [2] found that volunteers who were placed on severely reduced-calorie diets lost all interest in sex and social activities, becoming preoccupied with food. Like most interesting psychological phenomena, the simple behavior of eating has both biological and social determinants as shown in the figure below. Biologically, hunger is controlled by the interactions among complex pathways in the nervous system and a variety of hormonal and chemical systems in the brain and body. The stomach is of course important. We feel more hungry when our stomach is empty than when it is full. But we can also feel hunger even without input from the stomach. Two areas of the hypothalamus are known to be particularly important in eating. The lateral part of the hypothalamus responds primarily to cues to start eating, whereas the ventromedial part of the hypothalamus primarily responds to cues to stop eating. If the lateral part of the hypothalamus is damaged, the animal will not eat even if food is present, whereas if the ventromedial part of the hypothalamus is damaged, the animal will eat until it is obese. [3]
Hunger is also determined by hormone levels as depicted in the following figure. Glucose is the main sugar that the body uses for energy, and the brain monitors blood glucose levels to determine hunger. Glucose levels in the bloodstream are regulated by insulin, a hormone secreted by the pancreas gland. When insulin is low, glucose is not taken up by body cells, and the body begins to use fat as an energy source. Eating and appetite are also influenced by other hormones, including orexin, ghrelin, and leptin. [4] [5]
Normally the interaction of the various systems that determine hunger creates a balance, or homeostasis, in which we eat when we are hungry and stop eating when we feel full. But homeostasis varies among people; some people simply weigh more than others, and there is little they can do to change their fundamental weight. Weight is determined in large part by the basal metabolic rate, the amount of energy expended while at rest. Each person’s basal metabolic rate is different, due to his or her unique physical makeup and physical behavior. A naturally occurring low metabolic rate, which is determined largely by genetics, makes weight management a very difficult undertaking for many people.
How we eat is also influenced by our environment. When researchers rigged clocks to move faster, people got hungrier and ate more, as if they thought they must be hungry again because so much time had passed since they last ate. [6] And if we forget that we have already eaten, we are likely to eat again even if we are not actually hungry. [7]
Cultural norms about appropriate weights also influence eating behaviors. Current norms for women in Western societies are based on a very thin body ideal, emphasized by television and movie actresses, models, and even children’s dolls, such as the ever-popular Barbie. These norms for excessive thinness are very difficult for most women to attain: Barbie’s measurements, if translated to human proportions, would be about 36 in.–18 in.–33 in. at bust–waist–hips, measurements that are attained by less than 1 in 100,000 women. [8] Many women idealize being thin and yet are unable to reach the standard that they prefer.
Cultural norms for men are represented by GI Joe and other action figures as well as the emphasis on lean bodies and six-pack abs. These images are just as unrepresentative of the average male as Barbie and other media images are of the average female. Men as well as women are affected by the cultural emphasis on being thin. The changing norms for men and women play a part too. Because men are no longer always the breadwinner or at the top of the corporate ladder, they seek other ways to prove their masculinity. Well-sculpted bodies are one way to do that.
In some cases, the desire to be thin can lead to eating disorders, which are estimated to affect about 1 million males and 10 million females in the United States alone. [1] [2] Anorexia nervosa is an eating disorder characterized by extremely low body weight, distorted body image, an obsession with exercise, and an obsessive fear of gaining weight. Nine out of 10 sufferers are women. Anorexia typically begins with a severe weight loss diet and develops into a preoccupation with food and dieting.
Bulimia nervosa is an eating disorder characterized by binge eating followed by purging. Bulimia nervosa typically begins after the dieter has broken a diet and gorged. Bulimia involves repeated episodes of overeating, followed by vomiting, laxative use, fasting, or excessive exercise. It is most common in women in their late teens or early 20s, and it is often accompanied by depression and anxiety, particularly around the time of the binging. The cycle in which the person eats to feel better, but then after eating becomes concerned about weight gain and purges, repeats itself over and over again, often with major psychological and physical consequences.
Eating disorders are in part heritable, [3] and it is possible that at least some have been selected for because of their evolutionary significance in coping with food shortages. [4] Eating disorders are also related to psychological causes, including low self-esteem, perfectionism, and the perception that one’s body weight is too high, [5] as well as to cultural norms about body weight and eating. [6] Because eating disorders can create profound negative health outcomes, including death, people who suffer from them should seek treatment. This treatment is often quite effective.
These disorders are more common in men who participate in sports that require a low weight, such as running or wrestling. Runners who weigh less are more likely to win races, and for a wrestler there is an advantage in losing a few pounds to be at the top of a lower weight class rather than keeping the weight and being near the bottom of a higher one. Many men with eating disorders were overweight at one time in their lives, usually as children. Men are also less likely to come forward for help because eating disorders are seen as a female problem. Because of this, the 10% estimate is probably low.
Although thin is in for both sexes, being too thin can be a problem for men. Often beginning in childhood, boys who are too skinny are teased constantly and may become adults with an obsession to bulk up, which sometimes leads to illegal steroid use. Steroid use is one of the ways men can look like the action figures described above. At least some sports figures such as professional wrestlers and bodybuilders use steroids to achieve that overly muscled look, and several of these cases have been widely publicized in recent times. Steroid use has many side effects, some of which are potentially very serious. They include acne, testicular atrophy, reduced sperm count, increased aggression, high blood pressure, liver damage, gynecomastia (breast development in men), and prostate problems. And this is just a partial list.
Although some people eat too little, eating too much is also a major problem. Obesity is a medical condition in which so much excess body fat has accumulated in the body that it begins to have an adverse impact on health. In addition to causing people to be stereotyped and treated less positively by others, [7] uncontrolled obesity leads to health problems including cardiovascular disease, diabetes, sleep apnea, arthritis, Alzheimer’s disease, and some types of cancer. [8] Obesity also reduces life expectancy. [9]
Obesity is determined by calculating the body mass index (BMI), a measurement that compares one’s weight and height. People are defined as overweight when their BMI is greater than 25 kg/m2 and as obese when it is greater than 30 kg/m2. If you know your height and weight, you can calculate your BMI, as in the sketch below. Remember that the BMI is a proxy for excess body fat, not for well-developed muscle: people like bodybuilders can have a high BMI without being obese.
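As a quick illustration of the arithmetic, here is a small Python sketch of the BMI calculation and the cutoffs above. The function names and the example person are ours, purely for illustration.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    # Cutoffs from the text: overweight above 25 kg/m2, obese above 30 kg/m2.
    if b > 30:
        return "obese"
    if b > 25:
        return "overweight"
    return "not overweight"

# Example: a hypothetical person weighing 85 kg at 1.75 m tall.
value = bmi(85, 1.75)
print(f"BMI = {value:.1f} kg/m2 -> {classify(value)}")  # BMI = 27.8 kg/m2 -> overweight
```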
Obesity is a leading cause of death worldwide. Its prevalence is rapidly increasing, and it is one of the most serious public health problems of the 21st century. Although obesity is caused in part by genetics, it is increased by overeating and a lack of physical activity. [10] [11]
There are really only two approaches to controlling weight: eat less and exercise more. To be effective, both of these have to involve permanent lifestyle changes. Dieting is difficult for anyone, but it is particularly difficult for people with slow basal metabolic rates, who must cope with severe hunger to lose weight. Although most weight loss can be maintained for about a year, very few people are able to maintain substantial weight loss through dieting alone for more than three years. [12] Substantial weight loss of more than 50 pounds is typically seen only when weight loss surgery has been performed. [13] Weight loss surgery reduces stomach volume or bowel length, leading to earlier satiation and reduced ability to absorb nutrients from food.
Although dieting alone does not produce a great deal of weight loss over time, its effects are substantially improved when it is accompanied by more physical activity. People who exercise regularly, and particularly those who combine exercise with dieting, are less likely to be obese. [14] Exercise not only improves our waistline but also makes us healthier overall. Exercise increases cardiovascular capacity, lowers blood pressure, and helps improve diabetes, joint flexibility, and muscle strength. [15] Exercise also slows the cognitive impairments that are associated with aging. [16] Again, changing one’s eating habits (dieting) and/or increasing one’s time spent exercising for a short time is not effective for long-term weight loss. The way to maintain long-term weight loss is to permanently change your eating habits (eat fewer calories) and exercise more (burn the same amount of calories or more than you take in).
Because the costs of exercise are immediate but the benefits are long term, it may be difficult for people who do not exercise to get started. It is important to make a regular schedule, to work exercise into one’s daily activities, and to view exercise not as a cost but as an opportunity to improve oneself. [17] Exercising is more fun when it is done in groups, so team exercise is recommended. [18]
A recent report found that only about one-half of Americans perform the 30 minutes of exercise 5 times a week that the Centers for Disease Control and Prevention suggests as the minimum healthy amount. [19] The other half are most likely aware of the guidelines but unable to stick to a regimen: almost half of the people who start an exercise regimen give it up by the 6-month mark. [15] This is a problem because exercise has long-term benefits only if it is continued.
Perhaps the most important aspect of human experience is the process of reproduction. Without it, none of us would be here. Successful reproduction in humans involves the coordination of a wide variety of behaviors, including courtship, sex, household arrangements, parenting, and child care. So one might ask, what motivates us to have sex with another? Or, to put it another way, what arouses our sexual interest and attracts us to others?
Sex hormones play a role in our arousal. For men it’s clear that the male sex hormone testosterone (an androgen) is the main hormone responsible for arousal. If you're a man and your testosterone levels are normal, you’ll be motivated to have sex. Interestingly enough, testosterone operates on an inverted U in the body: there is an optimal level for normal arousal, and levels either below or above it will interfere with arousal. Earlier we talked about steroid use. Steroids are artificial androgens and thus function in the body just like testosterone. Because they artificially raise testosterone above its optimal level, sex drive is likely to be decreased.
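The inverted-U idea is easy to picture with a toy function. The sketch below is schematic only; the curve and the numbers are our own illustration, not physiological data.

```python
def arousal(t: float, optimal: float = 1.0) -> float:
    """Schematic inverted U: arousal peaks at the optimal hormone level."""
    return max(0.0, 1.0 - (t - optimal) ** 2)

# Arousal rises toward the optimal level (1.0 here) and falls off above it.
for t in (0.2, 0.6, 1.0, 1.4, 1.8):
    print(f"testosterone {t:.1f} -> arousal {arousal(t):.2f}")
```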
The picture with women is not as clear. We know that testosterone levels affect sex drive in women much as they do in men. In women it’s produced in the adrenals and the ovaries. The role of estrogens is not clear. Some postmenopausal women (estrogen levels drop dramatically with menopause) report a decrease in sex drive while others report an increase or no change.
Oxytocin is another hormone that has received a lot of attention. Sometimes called the love hormone, it promotes closeness and bonding, and levels increase when we are engaging in sexual activity. Touching the skin is particularly likely to increase oxytocin levels, as is having an orgasm. High levels of oxytocin are at least partially responsible for the closeness we feel towards our partners, particularly following orgasm.
In your brain, parts of the hypothalamus, several cortical areas, and the limbic system have been implicated in arousal. The neurotransmitter dopamine is also involved in arousal and pleasure. Your brain also generates sexual fantasies: many if not most people are aroused by fantasizing about sexual acts or partners.
Our senses are also involved, with touch and vision playing the largest roles. Touch is the number one sense in arousal. The skin is the largest sense organ in the body, and many parts of it are very sensitive to arousal. Some are biologically based, such as the genitals, the mouth (including the lips and tongue), the inner thighs, the breasts, and the nape of the neck. We refer to these as primary erogenous zones. Others are learned through classical conditioning; these are called secondary erogenous zones. Any part of the body can become a secondary erogenous zone if it is stimulated during a sexual act. For example, if you constantly stroke your partner’s eyebrows during sex, eventually you’ll be able to arouse your partner just by stroking the eyebrows.
Both men and women are visually stimulated. The ability of men to be visually stimulated has never been in doubt; most commercially available pornography is oriented towards a male audience. It hasn’t always been so clear for women. However, when women view pornography while connected to instruments that measure physiological arousal, the measurements show that they are in fact aroused even if they say they aren’t. Visual arousal is also affected by one’s culture. For example, in American culture, a glimpse of female breasts is considered arousing; that is not true in all cultures.
There are three other factors we should consider—attractiveness, proximity, and similarity. Another’s attractiveness may be the first factor that draws us to them. Psychologists define attractiveness by facial symmetry and the lack of any observable defects, and most humans, regardless of culture, respond to these. Evolution plays a role here: facial symmetry and the lack of observable defects communicate a healthy person capable of successful breeding. Youth also conveys this message. Then culture takes over. What we consider attractive in terms of dress or adornments is cultural and learned, and it changes with the times. For example, it’s fashionable to sport one or more tattoos in the early part of the 21st century. Not so long before that, it was not considered attractive, and it probably will be that way again. Take a look at the couples around you. Most people are pretty evenly matched on the attractiveness scale; they look like they go together. It’s not common to see a really attractive person with a much less attractive person.
We are also likely to be attracted to someone we come into regular contact with. That’s the proximity factor. Another way to say it is our partner choices are limited by where we live, work and play. And finally, we are more likely to be attracted to people with whom we have something in common—the similarity factor. Contrary to popular opinion, opposites do not attract. It should make sense if you think about it for a minute. If you were truly opposites how would you spend your time? Common interests are what make relationships last.
So what attracts us to another? Our brain and hormones, our senses, our attractiveness, similarity, and proximity. If these are in harmony, we are likely to be highly motivated to pursue sexual activity.
Men and women vary in their interest in sex. Men show a more consistent interest in sex, whereas the sexual desires of women are more likely to vary over time. [1] Men fantasize about sex more often than women do, and their fantasies are more physical and less intimate. [2] Men are also more willing than women to have casual sex, and their standards for sex partners are lower. [3] [4] However, women are more likely to put the brakes on sexual behavior: when men are asked by a desirable partner if they are interested in having casual sex, they are more likely to say yes than when the roles are reversed.
Sex differences in sexual interest probably occur in part as a result of the evolutionary predispositions of men and women, and this interpretation is bolstered by the finding that gender differences in sexual interest are observed cross-culturally. [5] Evolutionarily, women should be more selective than men in their choices of sex partners because they must invest more time in bearing and nurturing their children than men do (most men do help out, of course, but women simply do more). [6] Because they do not need to invest as much time in child rearing, men may be evolutionarily predisposed to be more willing to have sex with many different partners and less selective in their choice of mates.
Sex researchers have found that sexual behavior varies widely, not only between men and women but within each sex. [7] [8] About a quarter of women report having a low sexual desire, and about 1% of people report feeling no sexual attraction whatsoever. [9] [10] [11] There are also people who experience hyperactive sexual drives. For about 3% to 6% of the population (mainly men), the sex drive is so strong that it dominates life experience and may lead to hyperactive sexual desire disorder. [12]
There is also variety in sexual orientation, which is the direction of our sexual desire toward people of the opposite sex, people of the same sex, or people of both sexes. The vast majority of human beings have a heterosexual orientation—their sexual desire is focused toward members of the opposite sex. A smaller minority is primarily homosexual (i.e., they have sexual desire for members of their own sex). Between 3% and 4% of men are gay, and between 1% and 2% of women are lesbian. Another 1% of the population reports being bisexual (having desires for both sexes). The love and sexual lives of homosexuals are no different from those of heterosexuals, except that their partners are of the same sex. As with heterosexuals, some gays and lesbians are celibate and some are promiscuous, but most are in committed, long-term relationships. [13]
Although homosexuality has been practiced as long as records of human behavior have been kept, and occurs in many animals at least as frequently as it does in humans, cultures nevertheless vary substantially in their attitudes toward it. In Western societies such as the United States and Europe, attitudes are becoming progressively more tolerant of homosexuality, but it remains unacceptable in many other parts of the world. The American Psychiatric Association does not consider homosexuality to be a “mental illness,” although it did so until 1973. Because prejudice against gays and lesbians can lead to experiences of ostracism, depression, and even suicide, [14] these improved attitudes can benefit the everyday lives of gays, lesbians, and bisexuals.
Historically, sexologists treated the three orientations as mutually exclusive. At first, orientation was defined by whom you had sex with; later, it was defined by whom you fell in love with. But not all people fit neatly into one of the three categories, and even those who do can change categories over time. Sexuality is more fluid than static. People can identify themselves as belonging to one category, say heterosexual, and still have sex with members of their own sex. Others can identify as homosexual now and later identify as heterosexual or bisexual. Called sexual fluidity, this pattern is seen in both sexes but is more common in women, probably because there is a greater societal prohibition against men having sex with men than against women having sex with women; facing less of a prohibition, women are more likely to freely explore their sexuality.
The idea of sexual fluidity closely matches the Kinsey scale of sexual orientation, listed below.
0—Exclusively heterosexual, with no homosexual contact
1—Predominantly heterosexual, only incidentally homosexual
2—Predominantly heterosexual, but more than incidentally homosexual
3—Equally heterosexual and homosexual
4—Predominantly homosexual, but more than incidentally heterosexual
5—Predominantly homosexual, only incidentally heterosexual
6—Exclusively homosexual
Whether sexual orientation is driven more by nature or by nurture has received a great deal of research attention, and research has found that sexual orientation is primarily biological. [15] Areas of the hypothalamus are different in homosexual men, as well as in animals with homosexual tendencies, than they are in heterosexual members of the species, and these differences are in directions such that gay men are more similar to women than are straight men. [16] [17] [18] Twin studies also support the idea that there is a genetic component to sexual orientation. Among male identical twins, 52% of those with a gay brother also reported homosexuality, whereas the rate in fraternal twins was just 22%. [19] [20] There is also evidence that sexual orientation is influenced by exposure and responses to sex hormones. [21] [22]
Paula Bernstein and Elyse Schein were identical twins who were adopted into separate families immediately after their births in 1968. It was only at the age of 35 that the twins were reunited and discovered how similar they were to each other.
Paula Bernstein grew up in a happy home in suburban New York. She loved her adopted parents and older brother and even wrote an article titled “Why I Don’t Want to Find My Birth Mother.” Elyse’s childhood, also a happy one, was followed by college and then film school abroad.
In 2003, 35 years after she was adopted, Elyse, acting on a whim, inquired about her biological family at the adoption agency. The response came back: “You were born on October 9, 1968, at 12:51 p.m., the younger of twin girls. You’ve got a twin sister, Paula, and she’s looking for you.”
“Oh my God, I’m a twin! Can you believe this? Is this really happening?” Elyse cried.
Elyse dialed Paula’s phone number: “It’s almost like I’m hearing my own voice in a recorder back at me,” she said.
“It’s funny because I feel like in a way I was talking to an old, close friend I never knew I had. . . . We had an immediate intimacy, and yet, we didn’t know each other at all,” Paula said.
The two women met for the first time at a café for lunch and talked until the late evening.
“We had 35 years to catch up on,” said Paula. “How do you start asking somebody, ‘What have you been up to since we shared a womb together?’ Where do you start?”
With each new detail revealed, the twins learned about their remarkable similarities. They’d both gone to graduate school in film. They both loved to write, and they had both edited their high school yearbooks. They have similar taste in music.
“I think, you know, when we met it was undeniable that we were twins. Looking at this person, you are able to gaze into your own eyes and see yourself from the outside. This identical individual has the exact same DNA and is essentially your clone. We don’t have to imagine,” Paula said.
Now they finally feel like sisters.
“But it’s perhaps even closer than sisters,” Elyse said, “Because we’re also twins.”
The twins, who both now live in Brooklyn, combined their writing skills to write a book called Identical Strangers about their childhoods and their experience of discovering an identical twin in their mid-30s. [1] [2]
You can learn more about the experiences of Paula Bernstein and Elyse Schein by viewing the video below.
One of the most fundamental tendencies of human beings is to size up other people. We say that Bill is fun, that Marian is adventurous, or that Frank is dishonest. When we make these statements, we mean that we believe that these people have stable individual characteristics—their personalities. Personality is an individual’s consistent patterns of feeling, thinking, and behaving. [3]
The tendency to perceive personality is a fundamental part of human nature, and a most adaptive one. If we can draw accurate generalizations about what other people are normally like, we can predict how they will behave in the future, and this can help us determine how they are likely to respond in different situations. Understanding personality can also help us better understand psychological disorders and the negative behavioral outcomes they may produce. In short, personality matters because it guides behavior.
In this module we will consider the wide variety of personality traits found in human beings. We’ll consider how and when personality influences our behavior, and how well we perceive the personalities of others. We will also consider how psychologists measure personality, and the extent to which personality is caused by nature versus nurture. The fundamental goal of personality psychologists is to understand what makes people different from each other (the study of individual differences), but they also find that people who share genes (as do Paula Bernstein and Elyse Schein) have a remarkable similarity in personality.
Early theories assumed that personality was expressed in people’s physical appearance. One early approach, developed by the German physician Franz Joseph Gall (1758–1828) and known as phrenology, was based on the idea that we could measure personality by assessing the patterns of bumps on people’s skulls (shown in the figure below). In the Victorian age, phrenology was taken seriously, and many people promoted its use as a source of psychological insight and self-knowledge. Machines were even developed for helping people analyze skulls. [4] However, because careful scientific research did not validate the predictions of the theory, phrenology has now been discredited in contemporary psychology.
Another approach, known as somatology, championed by the psychologist William Herbert Sheldon (1898–1977), was based on the idea that we could determine personality from people’s body types (shown in the figure below). Sheldon argued that people with more body fat and a rounder physique (“endomorphs”) were more likely to be assertive and bold, whereas thinner people (“ectomorphs”) were more likely to be introverted and intellectual. [5] As with phrenology, scientific research did not validate the predictions of the theory, and somatology has now been discredited in contemporary psychology.
Another approach to detecting personality is known as physiognomy, or the idea that it is possible to assess personality from facial characteristics. In contrast to phrenology and somatology, for which no research support has been found, contemporary research has found that people are able to detect some aspects of a person’s character—for instance, whether they are gay or straight and whether they are Democrats or Republicans—at above-chance levels by looking only at his or her face. [6] [7] [8]
Despite these results, the ability to detect personality from faces is not guaranteed. Olivola and Todorov [9] recently studied the ability of thousands of people to guess the personality characteristics of hundreds of thousands of faces on the website "What's My Image?" In contrast to the predictions of physiognomy, the researchers found that these people would have made more accurate judgments about the strangers if they had just guessed, using their expectations about what people in general are like, rather than trying to use the particular facial features of individuals to help them. It seems then that the predictions of physiognomy may also, in the end, find little empirical support.
Personalities are characterized in terms of traits, which are relatively enduring characteristics that influence our behavior across many situations. Personality traits such as introversion, friendliness, conscientiousness, honesty, and helpfulness are important because they help explain consistencies in behavior.
The most popular way of measuring traits is by administering personality tests on which people self-report about their own characteristics. Psychologists have investigated hundreds of traits using the self-report approach, and this research has found many personality traits that have important implications for behavior. You can see some examples of the personality dimensions that have been studied by psychologists and their implications for behavior in the following table.
[Table: Some Personality Traits That Predict Behavior. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
As with intelligence tests, the utility of self-report measures of personality depends on their reliability and construct validity. Some popular measures of personality are not useful because they are unreliable or invalid. Perhaps you have heard of a personality test known as the Myers-Briggs Type Indicator (MBTI). If so, you are not alone, because the MBTI is the most widely administered personality test in the world, given millions of times a year to employees in thousands of companies. The MBTI categorizes people into one of two categories on each of four dimensions: introversion versus extroversion, sensing versus intuiting, thinking versus feeling, and judging versus perceiving.
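Because each of the four dimensions is a two-way split, the scheme yields 2^4 = 16 possible types. The short sketch below (ours, purely illustrative) enumerates them:

```python
from itertools import product

# The four MBTI dichotomies named in the text, by their conventional letters.
dimensions = [("I", "E"), ("S", "N"), ("T", "F"), ("J", "P")]

types = ["".join(combo) for combo in product(*dimensions)]
print(len(types))   # 2**4 = 16 possible types
print(types[:4])    # ['ISTJ', 'ISTP', 'ISFJ', 'ISFP']
```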
Although completing the MBTI can be useful for helping people think about individual differences in personality, and for “breaking the ice” at meetings, the measure itself is not psychologically useful because it is not reliable or valid. People’s classifications change over time, and scores on the MBTI do not relate to other measures of personality or to behavior. [10] Measures such as the MBTI remind us that it is important to scientifically and empirically test the effectiveness of personality tests by assessing their stability over time and their ability to predict behavior.
One of the challenges of the trait approach to personality is that there are so many possible traits: at least 18,000 English words can be used to describe people. [11] Thus a major goal of psychologists is to take this vast number of descriptors (many of which are similar to each other) and determine the underlying important or “core” traits among them. [12]
The trait approach to personality was pioneered by early psychologists, including Gordon Allport (1897–1967), Raymond Cattell (1905–1998), and Hans Eysenck (1916–1997). Each of these psychologists believed in the idea of the trait as the stable unit of personality, and each attempted to provide a list or taxonomy of the most important trait dimensions. Their approach was to provide people with a self-report measure and then to use statistical analyses to look for the underlying “factors” or “clusters” of traits, according to the frequency and the co-occurrence of traits in the respondents.
Allport [13] began his work by reducing the 18,000 traits to a set of about 4,500 traitlike words that he organized into three levels according to their importance. He called them “cardinal traits” (the most important traits), “central traits” (the basic and most useful traits), and “secondary traits” (the less obvious and less consistent ones). Cattell [14] used a statistical procedure known as factor analysis to analyze the correlations among traits and to identify the most important ones. On the basis of his research, he identified what he called “source” (more important) and “surface” (less important) traits, and he developed a measure that assesses 16 dimensions of traits based on personality adjectives taken from everyday language.
Hans Eysenck was particularly interested in the biological and genetic origins of personality and made an important contribution to understanding the nature of a fundamental personality trait: extroversion versus introversion. [15] Eysenck proposed that people who are extroverted (i.e., who enjoy socializing with others) have lower levels of naturally occurring arousal than do introverts (who are less likely to enjoy being with others). Eysenck argued that extroverts have a greater desire to socialize with others to increase their arousal level, which is naturally too low, whereas introverts, who have naturally high arousal, do not desire to engage in social activities because they are overly stimulating.
The fundamental work on trait dimensions conducted by Allport, Cattell, Eysenck, and many others has led to contemporary trait models, the most important and well-validated of which is the Five-Factor (Big Five) Model of Personality. According to this model, there are five fundamental underlying trait dimensions that are stable across time, cross-culturally shared, and explain a substantial proportion of behavior. [16] [17] As you can see in the following table, the five dimensions (sometimes known as the “Big Five”) are agreeableness, conscientiousness, extroversion, neuroticism, and openness to experience. (You can remember them using the watery acronyms CANOE or OCEAN.)
[Table: The Five Factors of the Five-Factor Model of Personality. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
A large body of research evidence has supported the five-factor model. The Big Five dimensions seem to be cross-cultural, because the same five factors have been identified in participants in China, Japan, Italy, Hungary, Turkey, and many other countries. [18] The Big Five dimensions also accurately predict behavior. For instance, a pattern of high conscientiousness, low neuroticism, and high agreeableness predicts successful job performance. [19] Scores on the Big Five dimensions also predict the performance of U.S. presidents; ratings of openness to experience are correlated positively with ratings of presidential success, whereas ratings of agreeableness are correlated negatively with success. [20] The Big Five factors are also increasingly being used in helping researchers understand the dimensions of psychological disorders such as anxiety and depression. [21] [22]
An advantage of the five-factor approach is that it is parsimonious. Rather than studying hundreds of traits, researchers can focus on only five underlying dimensions. The Big Five may also capture other dimensions that have been of interest to psychologists. For instance, the trait dimension of need for achievement relates to the Big Five variable of conscientiousness, and self-esteem relates to low neuroticism. On the other hand, the Big Five factors do not seem to capture all the important dimensions of personality. For instance, the Big Five does not capture moral behavior, although this variable is important in many theories of personality. And there is evidence that the Big Five factors are not exactly the same across all cultures. [23]
One challenge to the trait approach to personality is that traits may not be as stable as we think they are. When we say that Malik is friendly, we mean that Malik is friendly today and will be friendly tomorrow and even next week. And we mean that Malik is friendlier than average in all situations. But what if Malik were found to behave in a friendly way with his family members but to be unfriendly with his fellow classmates? This would clash with the idea that traits are stable across time and situation.
The psychologist Walter Mischel [24] reviewed the existing literature on traits and found that there was only a relatively low correlation (about r = .30) between the traits that a person expressed in one situation and those that they expressed in other situations. In one relevant study, Hartshorne, May, Maller, and Shuttleworth [25] examined the correlations among various behavioral indicators of honesty in children. They enticed children to behave either honestly or dishonestly in different situations, for instance, by making it easy or difficult for them to steal and cheat. The correlations among the children’s behaviors were low, generally less than r = .30, showing that children who steal in one situation are not always the same children who steal in a different situation. Similar low correlations were found in adults on other measures, including dependency, friendliness, and conscientiousness. [26]
Psychologists have proposed two possibilities for these low correlations. One possibility is that the natural tendency for people to see traits in others leads us to believe that people have stable personalities when they really do not. In short, perhaps traits are more in the heads of the people who are doing the judging than they are in the behaviors of the people being observed. The fact that people tend to use human personality traits, such as the Big Five, to judge animals in the same way that they use these traits to judge humans is consistent with this idea. [27] And this idea also fits with research showing that people use their knowledge representation (schemas) about people to help them interpret the world around them and that these schemas color their judgments of others’ personalities. [28]
Research has also shown that people tend to see more traits in other people than they do in themselves. You might be able to get a feeling for this by taking the following short quiz. First, think about a person you know—your mom, your roommate, or a classmate—and choose which of the three responses on each of the four lines best describes him or her. Then answer the questions again, but this time about yourself.
[Quiz table: From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
Richard Nisbett and his colleagues [29] had college students complete this same task for themselves, for their best friend, for their father, and for the (at the time well-known) newscaster Walter Cronkite. As you can see in the following figure, the participants chose one of the two trait terms more often for other people than they did for themselves, and chose “depends on the situation” more frequently for themselves than they did for the other people. These results also suggest that people may perceive more consistent traits in others than they should.
The human tendency to perceive traits is so strong that it is very easy to convince people that trait descriptions of themselves are accurate. Imagine that you had completed a personality test and the psychologist administering the measure gave you this description of your personality:
You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity, which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them. Disciplined and self-controlled outside, you tend to be worrisome and insecure inside. At times you have serious doubts as to whether you have made the right decision or done the right thing.
I would imagine that you might find that it described you. You probably do criticize yourself at least sometimes, and you probably do sometimes worry about things. The problem is that you would most likely have found some truth in a personality description that was the opposite. Could this description fit you too?
You frequently stand up for your own opinions even if it means that others may judge you negatively. You have a tendency to find the positives in your own behavior. You work to the fullest extent of your capabilities. You have few personality weaknesses, but some may show up under stress. You sometimes confide in others that you are concerned or worried, but inside you maintain discipline and self-control. You generally believe that you have made the right decision and done the right thing.
The Barnum effect refers to the observation that people tend to believe in descriptions of their personality that supposedly are descriptive of them but could in fact describe almost anyone. The Barnum effect helps us understand why many people believe in astrology, horoscopes, fortune-telling, palm reading, tarot card reading, and even some personality tests. People are likely to accept descriptions of their personality if they think that they have been written for them, even though they cannot distinguish their own tarot card or horoscope readings from those of others at better than chance levels. [30] Again, people seem to believe in traits more than they should.
A second way that psychologists responded to Mischel’s findings was by searching even more carefully for the existence of traits. One insight was that the relationship between a trait and a behavior is less than perfect because people can express their traits in different ways. [31] People high in extroversion, for instance, may become teachers, salesmen, actors, or even criminals. Although the behaviors are very different, they nevertheless all fit with the meaning of the underlying trait.
Psychologists also found that, because people do behave differently in different situations, personality will only predict behavior when the behaviors are aggregated or averaged across different situations. We might not be able to use the personality trait of openness to experience to determine what Saul will do on Friday night, but we can use it to predict what he will do over the next year in a variety of situations. When many measurements of behavior are combined, there is much clearer evidence for the stability of traits and for the effects of traits on behavior. [32] [33]
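A small simulation shows why aggregation helps. In the sketch below (our own illustration, with made-up noise levels), each observed behavior is a person's underlying trait plus situation-specific noise; averaging across many situations recovers the trait far better than any single observation does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_situations = 1000, 20

trait = rng.normal(size=n_people)                 # each person's true trait level
noise = rng.normal(scale=2.0, size=(n_people, n_situations))
behavior = trait[:, None] + noise                 # behavior = trait + situational noise

one_situation = np.corrcoef(trait, behavior[:, 0])[0, 1]
aggregated = np.corrcoef(trait, behavior.mean(axis=1))[0, 1]

print(f"trait vs. one behavior:      r = {one_situation:.2f}")  # around .45
print(f"trait vs. 20-situation mean: r = {aggregated:.2f}")     # around .91
```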
Taken together, these findings make a very important point about personality, which is that it not only comes from inside us but is also shaped by the situations that we are exposed to. Personality is derived from our interactions with and observations of others, from our interpretations of those interactions and observations, and from our choices of which social situations we prefer to enter or avoid. [34] In fact, behaviorists such as B. F. Skinner explain personality entirely in terms of the environmental influences that the person has experienced. Because we are profoundly influenced by the situations that we are exposed to, our behavior does change from situation to situation, making personality less stable than we might expect. And yet personality does matter—we can, in many cases, use personality measures to predict behavior across situations.
You can try completing a self-report measure of personality (a short form of the Five-Factor Personality Test). There are 120 questions, and it should take you about 15 to 20 minutes to complete. You will receive feedback about your personality after you have finished the test.
Ben is usually animated and talkative when he is with his girlfriend, but he is often quiet and reserved at home. He actively participates in many classroom discussions but frequently seems reluctant to talk with friends at the campus coffee shop.
Most of Janet’s friends and co-workers would describe her as being quiet and reserved.
Doug considers John to be a very honest person. In fact, in a psychology class in which the consistency of traits was being discussed, Doug cited his friend as an example of consistency.
Gina was informed by a professional palm reader: “You generally communicate openly with others, but you have certain dark secrets that even your closest friends could never guess.”
One of the most important measures of personality (which is used primarily to assess deviations from a “normal” or “average” personality) is the Minnesota Multiphasic Personality Inventory (MMPI), a test used around the world to identify personality and psychological disorders. [1] The MMPI was developed by creating a list of more than 1,000 true-false questions and choosing those that best differentiated patients with different psychological disorders from other people. The current version (the MMPI-2) has more than 500 questions, and the items can be combined into a large number of different subscales. Some of the most important of these are shown in the table below, but there are also scales that represent family problems, work attitudes, and many other dimensions. The MMPI also has questions that are designed to detect the tendency of the respondents to lie, fake, or simply not answer the questions.
[Table: Some of the Major Subscales of the MMPI. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
To interpret the results, the clinician looks at the pattern of responses across the different subscales and makes a diagnosis about the potential psychological problems facing the patient. Although clinicians prefer to interpret the patterns themselves, a variety of research has demonstrated that computers can often interpret the results as well as can clinicians. [2] [3] Extensive research has found that the MMPI-2 can accurately predict which of many different psychological disorders a person suffers from. [4]
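The construction logic described above, keeping the items that best differentiate a diagnostic group from everyone else, can be sketched in a few lines. This is a toy illustration with simulated data, not the actual MMPI procedure or its items.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 1000  # a large starting pool of true-false items

# Simulated proportion of "true" answers to each item in two groups.
patients = rng.uniform(0.1, 0.9, n_items)
controls = rng.uniform(0.1, 0.9, n_items)

# Empirical (criterion) keying: retain the items whose endorsement rates
# best separate the diagnosed group from the comparison group.
separation = np.abs(patients - controls)
scale_items = np.argsort(separation)[::-1][:10]  # the 10 most diagnostic items
print("item indices retained for the subscale:", sorted(scale_items.tolist()))
```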
One potential problem with a measure like the MMPI is that it asks people to consciously report on their inner experiences. But much of our personality is determined by unconscious processes of which we are only vaguely or not at all aware. Projective measures are measures of personality in which unstructured stimuli, such as inkblots, drawings of social situations, or incomplete sentences, are shown to participants, who are asked to freely list what comes to mind as they think about the stimuli. Experts then score the responses for clues to personality. The proposed advantage of these tests is that they are more indirect—they allow the respondent to freely express whatever comes to mind, including perhaps the contents of their unconscious experiences.
One commonly used projective test is the Rorschach Inkblot Test, developed by the Swiss psychiatrist Hermann Rorschach (1884–1922). The Rorschach Inkblot Test is a projective measure of personality in which the respondent indicates his or her thoughts about a series of 10 symmetrical inkblots (shown in the figure below). The Rorschach is administered millions of times every year. The participants are asked to respond to the inkblots, and their responses are systematically scored in terms of what, where, and why they saw what they saw. For example, people who focus on the details of the inkblots may have obsessive-compulsive tendencies, whereas those who talk about sex or aggression may have sexual or aggressive problems.
Another frequently administered projective test is the Thematic Apperception Test (TAT), developed by the psychologist Henry Murray (1893–1988). The TAT is a projective measure of personality in which the respondent is asked to create stories about sketches of ambiguous situations, most of them showing people, either alone or with others (shown in the figure below). The TAT assumes that people may be unwilling or unable to admit their true feelings when asked directly but that these feelings will show up in the stories they tell about the sketches. Trained coders read the stories and use them to develop a personality profile of the respondent.
Other popular projective tests include those that ask the respondent to draw pictures, such as the Draw-A-Person test, [5] and free association tests in which the respondent quickly responds with the first word that comes to mind when the examiner says a test word. Another approach is the use of “anatomically correct” dolls that feature representations of the male and female genitals. Investigators allow children to play with the dolls and then try to determine on the basis of the play if the children may have been sexually abused.
The advantage of projective tests is that they are less direct, allowing people to avoid using their defense mechanisms and therefore show their “true” personality. The idea is that when people view ambiguous stimuli, they will describe them according to the aspects of personality that are most important to them and therefore bypass some of the limitations of more conscious responding.
Despite their widespread use, however, the empirical evidence supporting the use of projective tests is mixed. [3] [6] The reliability of the measures is low because people often produce very different responses on different occasions. The construct validity of the measures is also suspect because there are very few consistent associations between Rorschach scores or TAT scores and most personality traits. The projective tests often fail to distinguish between people with psychological disorders and those without or to correlate with other measures of personality or with behavior.
In sum, projective tests are more useful as icebreakers to get to know a person better, to make the person feel comfortable, and to get some ideas about topics that may be of importance to that person than for accurately diagnosing personality.
Instructions: For each of the following test items, indicate whether it is an example of a self-report personality inventory or a projective test of personality.
One trait that has been studied in thousands of studies is leadership, the ability to direct or inspire others to achieve goals. Trait theories of leadership are theories based on the idea that some people are simply “natural leaders” because they possess personality characteristics that make them effective. [1] Consider Steve Jobs, the co-founder of Apple Inc., shown below. What characteristics do you think he possessed that allowed him to create such a strong company, even though many similar companies failed?
Research has found that being intelligent is an important characteristic of leaders as long as the leader communicates to others in a way that is easily understood by his or her followers. [2] [3] Other research has found that people with good social skills, such as the ability to accurately perceive the needs and goals of the group members and to communicate with others, also tend to make good leaders. [4]
Because so many characteristics seem to be related to leadership skill, some researchers have attempted to account for leadership not in terms of individual traits, but rather in terms of a package of traits that successful leaders seem to have. Some have considered this in terms of charisma. [5] [6] Charismatic leaders are leaders who are enthusiastic, committed, and self-confident; who tend to talk about the importance of group goals at a broad level; and who make personal sacrifices for the group. Charismatic leaders express views that support and validate existing group norms but that also contain a vision of what the group could or should be. Charismatic leaders use their referent power to motivate, uplift, and inspire others. And research has found a positive relationship between a leader’s charisma and effective leadership performance. [7]
Another trait-based approach to leadership is based on the idea that leaders take either transactional or transformational leadership styles with their subordinates. [8] [9] Transactional leaders are the more regular leaders, who work with their subordinates to help them understand what is required of them and to get the job done. Transformational leaders, on the other hand, are more like charismatic leaders—they have a vision of where the group is going, and attempt to stimulate and inspire their workers to move beyond their present status and to create a new and better future.
Despite the fact that there appear to be at least some personality traits that relate to leadership ability, the most important approaches to understanding leadership take into consideration both the personality characteristics of the leader and the situation in which the leader is operating. In some cases the situation itself is important. For instance, you might remember that President George W. Bush’s ratings as a leader increased dramatically after the September 11, 2001, terrorist attacks on the World Trade Center. This is a classic example of how a situation can influence the perceptions of a leader’s skill.
In still other cases, different types of leaders may perform differently in different situations. Leaders whose personalities lead them to be more focused on fostering harmonious social relationships among the members of the group, for instance, are particularly effective in situations in which the group is already functioning well and yet it is important to keep the group members engaged in the task and committed to the group outcomes. Leaders who are more task-oriented and directive, on the other hand, are more effective when the group is not functioning well and needs a firm hand to guide it. [10]
Although measures such as the Big Five and the Minnesota Multiphasic Personality Inventory (MMPI) are able to effectively assess personality, they do not say much about where personality comes from. In this section we will consider two major theories of the origin of personality: psychodynamic and humanistic approaches.
One of the most important psychological approaches to understanding personality is based on the theorizing of the Austrian physician and psychologist Sigmund Freud (1856–1939), who founded what today is known as the psychodynamic approach to understanding personality. Many people know about Freud because his work has had a huge impact on our everyday thinking about psychology, and the psychodynamic approach is one of the most important approaches to psychological therapy. [1] [2] Freud is probably the best known of all psychologists, in part because of his impressive observations and analyses of personality (there are 24 volumes of his writings). As is true of all theories, many of Freud’s ingenious ideas have turned out to be at least partially incorrect, and yet other aspects of his theories are still influencing psychology.
Freud was influenced by the work of the French neurologist Jean-Martin Charcot (1825–1893), who had been interviewing patients (almost all women) who were experiencing what was at the time known as hysteria. Although it is no longer used to describe a psychological disorder, hysteria at the time referred to a set of personality and physical symptoms that included chronic pain, fainting, seizures, and paralysis.
Charcot could find no biological reason for the symptoms. For instance, some women experienced a loss of feeling in their hands and yet not in their arms, and this seemed anatomically impossible, given that the nerves that serve the hands also run through the arms. Charcot was experimenting with the use of hypnosis, and he and Freud found that under hypnosis many of the hysterical patients reported having experienced a traumatic sexual experience, such as sexual abuse, as children. [3]
Freud and Charcot also found that during hypnosis the remembering of the trauma was often accompanied by an outpouring of emotion, known as catharsis, and that following the catharsis the patient’s symptoms were frequently reduced in severity. These observations led Freud and Charcot to conclude that these disorders were caused by psychological rather than physiological factors.
Freud used the observations that he and Charcot had made to develop his theory regarding the sources of personality and behavior, and his insights are central to the fundamental themes of psychology. In terms of free will, Freud did not believe that we were able to control our own behaviors. Rather, he believed that all behaviors are predetermined by motivations that lie outside our awareness, in the unconscious. These forces show themselves in our dreams, in neurotic symptoms such as obsessions, while we are under hypnosis, and in Freudian “slips of the tongue” in which people reveal their unconscious desires in language. Freud argued that we rarely understand why we do what we do, although we can make up explanations for our behaviors after the fact. For Freud the mind was like an iceberg: the many motivations of the unconscious, much larger but out of sight, lie beneath the small tip of consciousness of which we are aware.
Freud proposed that the mind is divided into three components: id, ego, and superego, and that the interactions and conflicts among the components create personality. [4] According to Freudian theory, the id is the component of personality that forms the basis of our most primitive impulses. The id is entirely unconscious, and it drives our most important motivations, including the sexual drive (libido) and the aggressive or destructive drive (Thanatos). According to Freud, the id is driven by the pleasure principle—the desire for immediate gratification of our sexual and aggressive urges. The id is why we smoke cigarettes, drink alcohol, view pornography, tell mean jokes about people, and engage in other fun or harmful behaviors, often at the cost of doing more productive activities.
In stark contrast to the id, the superego represents our sense of morality and our “oughts.” The superego tells us all the things that we shouldn’t do, and the duties and obligations that society expects of us. The superego strives for perfection, and when we fail to live up to its demands we feel guilty.
In contrast to the id, which is about the pleasure principle, the function of the ego is based on the reality principle—the idea that we must delay gratification of our basic motivations until the appropriate time with the appropriate outlet. The ego is the largely conscious controller or decision-maker of personality. The ego serves as the intermediary between the desires of the id and the constraints of society contained in the superego. We may wish to scream, yell, or hit, and yet our ego normally tells us to wait, reflect, and choose a more appropriate response.
Freud believed that psychological disorders, and particularly the experience of anxiety, occur when there is conflict or imbalance among the motivations of the id, ego, and superego. When the ego finds that the id is pressing too hard for immediate pleasure, it attempts to correct for this problem, often through the use of defense mechanisms—unconscious psychological strategies used to cope with anxiety and to maintain a positive self-image. Freud believed that the defense mechanisms were essential for effective coping with everyday life, but that any of them could be overused.
Table: The Major Freudian Defense Mechanisms. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.
The most controversial, and least scientifically valid, part of Freudian theory is its explanations of personality development. Freud argued that personality is developed through a series of psychosexual stages, each focusing on pleasure from a different part of the body (shown in the table below). Freud believed that sexuality begins in infancy, and that the appropriate resolution of each stage has implications for later personality development.
Table: Freud’s Stages of Psychosexual Development. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.
In the first of Freud’s proposed stages of psychosexual development, which begins at birth and lasts until about 18 months of age, the focus is on the mouth. During this oral stage, the infant obtains sexual pleasure by sucking and drinking. Infants who receive either too little or too much gratification become fixated or “locked” in the oral stage, and are likely to regress to these points of fixation under stress, even as adults. According to Freud, a child who receives too little oral gratification (e.g., who was underfed or neglected) will become orally dependent as an adult and be likely to manipulate others to fulfill his or her needs rather than becoming independent. On the other hand, the child who was overfed or overly gratified will resist growing up and try to return to the prior state of dependency by acting helpless, demanding satisfaction from others, and acting in a needy way.
The anal stage, lasting from about 18 months to 3 years of age, is when children first experience psychological conflict. During this stage children desire to experience pleasure through bowel movements, but they are also being toilet trained to delay this gratification. Freud believed that if this toilet training was either too harsh or too lenient, children would become fixated in the anal stage and become likely to regress to this stage under stress as adults. If the child received too little anal gratification (i.e., if the parents had been very harsh about toilet training), the adult personality will be anal retentive—stingy, with a compulsive seeking of order and tidiness. On the other hand, if the parents had been too lenient, the anal expulsive personality results, characterized by a lack of self-control and a tendency toward messiness and carelessness.
The phallic stage, which lasts from age 3 to age 6, is when the penis (for boys) and clitoris (for girls) become the primary erogenous zones for sexual pleasure. During this stage, Freud believed that children develop a powerful but unconscious attraction for the opposite-sex parent, as well as a desire to eliminate the same-sex parent as a rival. Freud based his theory of sexual development in boys (the “Oedipus complex”) on the Greek mythological character Oedipus, who unknowingly killed his father and married his mother, and then put his own eyes out when he learned what he had done. Freud argued that boys will normally eventually abandon their love of the mother, and instead identify with the father, also taking on the father’s personality characteristics, but that boys who do not successfully resolve the Oedipus complex will experience psychological problems later in life. Although it was not as important in Freud’s theorizing, in girls the phallic stage is often termed the “Electra complex,” after the Greek character who avenged her father’s murder by killing her mother. Freud believed that girls frequently experienced penis envy, the sense of deprivation supposedly experienced by girls because they do not have a penis.
The latency stage is a period of relative calm that lasts from about 6 years to 12 years. During this time, Freud believed that sexual impulses were repressed, leading boys and girls to have little or no interest in members of the opposite sex.
The fifth and last stage, the genital stage, begins about 12 years of age and lasts into adulthood. According to Freud, sexual impulses return during this time frame, and if development has proceeded normally to this point, the child is able to move into the development of mature romantic relationships. But if earlier problems have not been appropriately resolved, difficulties with establishing intimate love attachments are likely.
Freudian theory was so popular that it led to a number of followers, including many of Freud’s own students, who developed, modified, and expanded his theories. Taken together, these approaches are known as neo-Freudian theories. The neo-Freudian theories are theories based on Freudian principles that emphasize the role of the unconscious and early experience in shaping personality but place less emphasis on sexuality as the primary motivating force in personality and are more optimistic concerning the prospects for personality growth and change in adulthood.
Alfred Adler (1870–1937) was a follower of Freud who developed his own interpretation of Freudian theory. Adler proposed that the primary motivation in human personality was not sex or aggression, but rather the striving for superiority. According to Adler, we desire to be better than others and we accomplish this goal by creating a unique and valuable life. We may attempt to satisfy our need for superiority through our school or professional accomplishments, or by our enjoyment of music, athletics, or other activities that seem important to us.
Adler believed that psychological disorders begin in early childhood. He argued that children who are either overly nurtured or overly neglected by their parents are later likely to develop an inferiority complex—a psychological state in which people feel that they are not living up to expectations, leading them to have low self-esteem, with a tendency to try to overcompensate for the negative feelings. People with an inferiority complex often attempt to demonstrate their superiority to others at all costs, even if it means humiliating, dominating, or alienating them. According to Adler, most psychological disorders result from misguided attempts to compensate for the inferiority complex in order to meet the goal of superiority.
Carl Jung (1875–1961) was another student of Freud who developed his own theories about personality. Jung agreed with Freud about the power of the unconscious but felt that Freud overemphasized the importance of sexuality. Jung argued that in addition to the personal unconscious, there was also a collective unconscious, or a collection of shared ancestral memories. Jung believed that the collective unconscious contains a variety of archetypes, or cross-culturally universal symbols, which explain the similarities among people in their emotional reactions to many stimuli. Important archetypes include the mother, the goddess, the hero, and the mandala or circle, which Jung believed symbolized a desire for wholeness or unity. For Jung, the underlying motivation that guides successful personality is self-realization, or learning about and developing the self to the fullest possible extent.
Karen Horney (1885–1952; the final syllable of her last name rhymes with “eye”) was a German physician who applied Freudian theories to create a personality theory that she thought was more balanced between men and women. Horney believed that parts of Freudian theory, and particularly the ideas of the Oedipus complex and penis envy, were biased against women. Horney argued that women’s sense of inferiority was not due to their lack of a penis but rather to their dependency on men, a condition from which culture made it difficult for them to break. For Horney, the underlying motivation that guides personality development is the desire for security, the ability to develop appropriate and supportive relationships with others.
Another important neo-Freudian was Erich Fromm (1900–1980). Fromm’s focus was on the negative impact of technology, arguing that the increases in its use have led people to feel increasingly isolated from others. Fromm believed that the independence that technology brings us also creates the need to “escape from freedom,” that is, to become closer to others.
Fromm believed that the primary human motivation was to escape the fear of death, and contemporary research has shown how our concerns about dying can influence our behavior. In this research, people have been made to confront their death by writing about it or otherwise being reminded of it, and effects on their behavior are then observed. In one relevant study, McGregor and colleagues [1] demonstrated that people who are provoked may be particularly aggressive after they have been reminded of the possibility of their own death. The participants in the study had been preselected, on the basis of their earlier reports, as holding either politically liberal or politically conservative views. When they arrived at the lab they were asked to write a short paragraph describing their opinion of politics in the United States. In addition, half of the participants (the mortality salience condition) were asked to “briefly describe the emotions that the thought of your own death arouses in you” and to “jot down, as specifically as you can, what you think will happen to you as you physically die and once you are physically dead.” Participants in the exam control condition also thought about a negative event, but not one associated with a fear of death. They were instructed to “please briefly describe the emotions that the thought of your next important exam arouses in you” and to “jot down, as specifically as you can, what you think will happen to you as you physically take your next exam and once you are physically taking your next exam.”
Then the participants read the essay that had supposedly just been written by another person. (The other person did not exist, but the participants didn’t know this until the end of the experiment.) The essay that they read had been prepared by the experimenters to be very negative toward politically liberal views or to be very negative toward politically conservative views. Thus one-half of the participants were provoked by the other person by reading a statement that strongly conflicted with their own political beliefs, whereas the other half read an essay in which the other person’s views supported their own (liberal or conservative) beliefs.
At this point the participants moved on to what they thought was a completely separate study in which they were to be tasting and giving their impression of some foods. Furthermore, they were told that it was necessary for the participants in the research to administer the food samples to each other. At this point, the participants found out that the food they were going to be sampling was spicy hot sauce and that they were going to be administering the sauce to the very person whose essay they had just read. In addition, the participants read some information about the other person that indicated that he very much disliked eating spicy food. Participants were given a taste of the hot sauce (it was really hot!) and then instructed to place a quantity of it into a cup for the other person to sample. Furthermore, they were told that the other person would have to eat all the sauce.
As you can see in the figure below, McGregor et al. found that the participants who had not been reminded of their own death, even if they had been insulted by the partner, did not retaliate by giving him a lot of hot sauce to eat. On the other hand, the participants who were both provoked by the other person and who had also been reminded of their own death administered significantly more hot sauce than did the participants in the other three conditions. McGregor and colleagues [1] argued that thinking about one’s own death creates a strong concern with maintaining one’s own cherished worldviews (in this case our political beliefs). When we are concerned about dying we become more motivated to defend these important beliefs from the challenges made by others, in this case by aggressing against the challenger with the hot sauce.
Freud has probably exerted a greater impact on the public’s understanding of personality than any other thinker, and he has also in large part defined the field of psychology. Although Freudian psychologists no longer talk about oral, anal, or genital “fixations,” they do continue to believe that our childhood experiences and unconscious motivations shape our personalities and our attachments with others, and they still make use of psychodynamic concepts when they conduct psychological therapy.
Nevertheless, Freud’s theories, as well as those of the neo-Freudians, have in many cases failed to pass the test of empiricism, and as a result they are less influential now than they have been in the past. [2] The problems are first, that it has proved to be difficult to rigorously test Freudian theory because the predictions that it makes (particularly those regarding defense mechanisms) are often vague and unfalsifiable, and second, that the aspects of the theory that can be tested often have not received much empirical support.
As examples, although Freud claimed that children exposed to overly harsh toilet training would become fixated in the anal stage and thus be prone to excessive neatness, stinginess, and stubbornness in adulthood, research has found few reliable associations between toilet training practices and adult personality. [3] Moreover, since the time of Freud, societies have come to tolerate a much wider variety of sexual practices, so the need to repress sexual desires would seem to have become much less pressing. And yet the psychological disorders that Freud thought were caused by this repression have not decreased.
There is also little scientific support for most of the Freudian defense mechanisms. For example, studies have failed to yield evidence for the existence of repression. People who are exposed to traumatic experiences in war have been found to remember their traumas only too well. [4] Although we may attempt to push information that is anxiety-arousing into our unconscious, this often has the ironic effect of making us think about the information even more strongly than if we hadn’t tried to repress it. [5] It is true that children remember little of their childhood experiences, but this seems to be true of negative as well as positive experiences, is true for animals as well, and probably is better explained in terms of the brain’s inability to form long-term memories than in terms of repression. On the other hand, Freud’s important idea that expressing or talking through one’s difficulties can be psychologically helpful has been supported in current research [6] and has become a mainstay of psychological therapy.
A particular problem for testing Freudian theories is that almost anything that conflicts with a prediction based in Freudian theory can be explained away in terms of the use of a defense mechanism. A man who expresses a lot of anger toward his father may be seen via Freudian theory to be experiencing the Oedipus complex, which includes conflict with the father. But a man who expresses no anger at all toward the father also may be seen as experiencing the Oedipus complex by repressing the anger. Because Freud hypothesized that either was possible, but did not specify when repression would or would not occur, the theory is difficult to falsify.
In terms of the important role of the unconscious, Freud seems to have been at least in part correct. More and more research demonstrates that a large part of everyday behavior is driven by processes that are outside our conscious awareness. [7] And yet, although our unconscious motivations influence every aspect of our learning and behavior, Freud probably overestimated the extent to which these unconscious motivations are primarily sexual and aggressive.
Taken together, it is fair to say that Freudian theory, like most psychological theories, was not entirely correct and that it has had to be modified over time as the results of new studies have become available. But the fundamental ideas about personality that Freud proposed, as well as his use of talk therapy as an essential component of treatment, are nevertheless still a major part of psychology and are used by clinical psychologists every day.
Psychoanalytic models of personality were complemented during the 1950s and 1960s by the theories of humanistic psychologists. In contrast to the proponents of psychoanalysis, humanists embraced the notion of free will. Arguing that people are free to choose their own lives and make their own decisions, humanistic psychologists focused on the underlying motivations that they believed drove personality, focusing on the nature of the self-concept, the set of beliefs about who we are, and self-esteem, our positive feelings about the self.
One of the most important humanists, Abraham Maslow (1908–1970), conceptualized personality in terms of a pyramid-shaped hierarchy of motives. At the base of the pyramid are the lowest-level motivations, including physiological needs such as hunger and thirst, followed by the needs for safety and belongingness. Maslow argued that only when people are able to meet the lower-level needs are they able to move on to achieve the higher-level needs of self-esteem, and eventually self-actualization, which is the motivation to develop our innate potential to the fullest possible extent.
Maslow studied how exemplary people, including Albert Einstein, Abraham Lincoln, Martin Luther King Jr., Helen Keller, and Mahatma Gandhi, had been able to lead such successful and productive lives. Maslow [1] believed that self-actualized people are creative, spontaneous, and loving of themselves and others. They tend to have a few deep friendships rather than many superficial ones, and are generally private. He felt that these individuals do not need to conform to the opinions of others because they are very confident and thus free to express unpopular opinions. Self-actualized people are also likely to have peak experiences, or transcendent moments of tranquility accompanied by a strong sense of connection with others.
Perhaps the best-known humanistic theorist is Carl Rogers (1902–1987). Rogers was positive about human nature, viewing people as primarily moral and helpful to others, and he believed that we can achieve our full potential for emotional fulfillment if the self-concept is characterized by unconditional positive regard—a set of behaviors including being genuine, open to experience, transparent, able to listen to others, and self-disclosing and empathic. When we treat ourselves or others with unconditional positive regard, we express understanding and support, even while we may acknowledge failings. Unconditional positive regard allows us to admit our fears and failures, to drop our pretenses, and yet at the same time to feel completely accepted for what we are. The principle of unconditional positive regard has become a foundation of psychological therapy; therapists who use it in their practice are more effective than those who do not. [2] [3]
Although there are critiques of the humanistic psychologists (e.g., that Maslow focused on historically productive rather than destructive personalities in his research and thus drew overly optimistic conclusions about the capacity of people to do good), the ideas of humanism are so powerful and optimistic that they have continued to influence both everyday experiences as well as psychology. Today the positive psychology movement argues for many of these ideas, and research has documented the extent to which thinking positively and openly has important positive consequences for our relationships, our life satisfaction, and our psychological and physical health. [4]
Tory Higgins and his colleagues [5] [6] studied how different aspects of the self-concept relate to personality characteristics. These researchers focused on the types of emotional distress that we might experience as a result of how we are currently evaluating our self-concept. Higgins proposes that the emotions we experience are determined both by our perceptions of how well our own behaviors measure up to the standards and goals we have set for ourselves (our internal standards) and by our perceptions of how others think about us (our external standards). Furthermore, Higgins argues that different types of self-discrepancies lead to different types of negative emotions.
In one of Higgins’s experiments, [5] participants were first asked to describe themselves using a self-report measure. The participants listed 10 thoughts they believed described the kind of person they actually are; this is the actual self-concept. Then, participants listed 10 thoughts they believed described the type of person they would “ideally like to be” (the ideal self-concept) as well as 10 thoughts describing the way someone else—for instance, a parent—thinks they “ought to be” (the ought self-concept).
Higgins then divided his participants into two groups. Those with low self-concept discrepancies were those who listed similar traits on all three lists. Their ideal, ought, and actual self-concepts were all pretty similar and so they were not considered to be vulnerable to threats to their self-concept. The other half of the participants, those with high self-concept discrepancies, were those for whom the traits listed on the ideal and ought lists were very different from those listed on the actual self list. These participants were expected to be vulnerable to threats to the self-concept.
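To make the grouping concrete, here is a minimal sketch of how a self-concept discrepancy might be quantified as simple overlap between trait lists. The trait lists and the scoring rule are invented for illustration; Higgins’s actual coding procedure was more elaborate, matching synonyms and antonyms across the lists.

```python
# A minimal sketch of self-discrepancy scoring as set overlap.
# The trait lists are invented, and this simple rule stands in for
# Higgins's more elaborate synonym/antonym matching procedure.

def discrepancy(actual, guide):
    """Proportion of traits in a self-guide that are absent from the actual self."""
    actual, guide = set(actual), set(guide)
    return len(guide - actual) / len(guide)

actual = ["quiet", "curious", "disorganized", "kind"]
ideal = ["curious", "kind", "confident", "organized"]     # who I'd like to be
ought = ["punctual", "organized", "kind", "responsible"]  # who others say I should be

print(discrepancy(actual, ideal))  # 0.5  -> moderate actual-ideal discrepancy
print(discrepancy(actual, ought))  # 0.75 -> larger actual-ought discrepancy
```

On this toy scoring, a participant with high values on both measures would fall into the high self-concept discrepancy group just described.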
Then, at a later research session, Higgins first asked people to express their current emotions, including those related to sadness and anxiety. After obtaining this baseline measure Higgins activated either ideal or ought discrepancies for the participants. Participants in the ideal self-discrepancy priming condition were asked to think about and discuss their own and their parents’ hopes and goals for them. Participants in the ought self-priming condition listed their own and their parents’ beliefs concerning their duty and obligations. Then all participants again indicated their current emotions.
As you can see in the figure below, for low self-concept discrepancy participants, thinking about their ideal or ought selves did not much change their emotions. For high self-concept discrepancy participants, however, priming the ideal self-concept increased their sadness and dejection, whereas priming the ought self-concept increased their anxiety and agitation. These results are consistent with the idea that discrepancies between the ideal and the actual self lead us to experience sadness, dissatisfaction, and other depression-related emotions, whereas discrepancies between the actual and ought self are more likely to lead to fear, worry, tension, and other anxiety-related emotions.
One of the critical aspects of Higgins’s approach is that, like our personalities, our feelings are influenced both by our own behavior and by our expectations of how other people view us. This makes it clear that even though you might not care much about achieving in school, your failure to do well may still produce negative emotions, because you realize that your parents do think it is important.
Watch the following video of Carl Rogers discussing his beliefs and approach to the therapeutic relationship:
One question that is exceedingly important for the study of personality concerns the extent to which it is the result of nature or nurture. If nature is more important, then our personalities will form early in our lives and will be difficult to change later. If nurture is more important, however, then our experiences are likely to be particularly important, and we may be able to flexibly alter our personalities over time. In this section we will see that the personality traits of humans and animals are determined in large part by their genetic makeup, and thus it is no surprise that identical twins Paula Bernstein and Elyse Schein turned out to be very similar even though they had been raised separately. But we will also see that genetics does not determine everything.
In the nucleus of each cell in your body are 23 pairs of chromosomes. One of each pair comes from your father, and the other comes from your mother. The chromosomes are made up of strands of the molecule DNA (deoxyribonucleic acid), and the DNA is grouped into segments known as genes. A gene is the basic biological unit that transmits characteristics from one generation to the next. Human cells have about 25,000 genes.
The genes of different members of the same species are almost identical. The DNA in your genes, for instance, is about 99.9% the same as the DNA in my genes and in the DNA of every other human being. These common genetic structures lead members of the same species to be born with a variety of behaviors that come naturally to them and that define the characteristics of the species. These abilities and characteristics are known as instincts—complex inborn patterns of behaviors that help ensure survival and reproduction. [1] Different animals have different instincts. Birds naturally build nests, dogs are naturally loyal to their human caretakers, and humans instinctively learn to walk and to speak and understand language.
But the strength of different traits and behaviors also varies within species. Rabbits are naturally fearful, but some are more fearful than others; some dogs are more loyal than others to their caretakers; and some humans learn to speak and write better than others do. These differences are determined in part by the small amount (in humans, the 0.1%) of the differences in genes among the members of the species.
Personality is not determined by any single gene, but rather by the actions of many genes working together. There is no “IQ gene” that determines intelligence, and there is no “good marriage partner gene” that makes a person a particularly good marriage bet. Furthermore, even working together, genes are not so powerful that they can control or create our personality. Some genes tend to increase a given characteristic and others work to decrease that same characteristic—the complex relationship among the various genes, as well as a variety of random factors, produces the final outcome. Furthermore, genetic factors always work with environmental factors to create personality. Having a given pattern of genes doesn’t necessarily mean that a particular trait will develop, because some traits might occur only in some environments. For example, a person may have a genetic variant that is known to increase his or her risk for developing emphysema from smoking. But if that person never smokes, then emphysema most likely will not develop.
Perhaps the most direct way to study the role of genetics in personality is to selectively breed animals for the trait of interest. In this approach the scientist chooses the animals that most strongly express the personality characteristics of interest and breeds these animals with each other. If the selective breeding creates offspring with even stronger traits, then we can assume that the trait has genetic origins. In this manner, scientists have studied the role of genetics in how worms respond to stimuli, how fish develop courtship rituals, how rats differ in play, and how pigs differ in their responses to stress.
Although selective breeding studies can be informative, they are clearly not useful for studying humans. For this psychologists rely on behavioral genetics—a variety of research techniques that scientists use to learn about the genetic and environmental influences on human behavior by comparing the traits of biologically and nonbiologically related family members. [2] Behavioral genetics is based on the results of family studies, twin studies, and adoptive studies.
A family study starts with one person who has a trait of interest—for instance, a developmental disorder such as autism—and examines the individual’s family tree to determine the extent to which other members of the family also have the trait. The presence of the trait in first-degree relatives (parents, siblings, and children) is compared to the prevalence of the trait in second-degree relatives (aunts, uncles, grandchildren, grandparents, and nephews or nieces) and in more distant family members. The scientists then analyze the patterns of the trait in the family members to see the extent to which it is shared by closer and more distant relatives.
Although family studies can reveal whether a trait runs in a family, they cannot explain why. In a twin study, researchers study the personality characteristics of twins. Twin studies rely on the fact that identical (or monozygotic) twins have essentially the same set of genes, while fraternal (or dizygotic) twins share, on average, only half of theirs. The idea is that if the twins are raised in the same household, then the twins will be influenced by their environments to an equal degree, and this influence will be pretty much equal for identical and fraternal twins. In other words, if environmental factors are the same, then the only factor that can make identical twins more similar than fraternal twins is their greater genetic similarity.
In a twin study, the data from many pairs of twins are collected and the rates of similarity for identical and fraternal pairs are compared. A correlation coefficient is calculated that assesses the extent to which the trait for one twin is associated with the trait in the other twin. Twin studies divide the influence of nature and nurture into three parts:
Heritability: the extent to which differences in the trait are due to genetic differences.
Shared environment: influences, such as parenting, that children raised in the same household experience in common and that make them more similar.
Nonshared environment: influences that differ even for children raised in the same household and that make them less similar.
In the typical twin study, all three sources of influence are operating simultaneously, and it is possible to determine the relative importance of each type.
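The logic of this decomposition can be illustrated with Falconer’s classic formulas, which estimate heritability from the gap between identical-twin and fraternal-twin correlations. The sketch below uses illustrative correlations chosen for round numbers, not values from the table that follows.

```python
# Falconer's classic decomposition for twin correlations, assuming the
# simplest additive model: identical twins share all their genes,
# fraternal twins share half.
#   heritability           h2 = 2 * (r_mz - r_dz)
#   shared environment     c2 = r_mz - h2
#   nonshared environment  e2 = 1 - r_mz

def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)  # genetic influence
    c2 = r_mz - h2          # what twins reared together share beyond genes
    e2 = 1 - r_mz           # whatever makes even identical twins differ
    return h2, c2, e2

# Illustrative correlations, roughly the pattern reported for Big Five traits:
print(falconer(r_mz=0.50, r_dz=0.25))  # (0.5, 0.0, 0.5)
```

Note how, with these illustrative numbers, shared environment contributes essentially nothing, a pattern that will come up again below.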
An adoption study compares biologically related people, including twins, who have been reared either together or apart. Evidence for genetic influence on a trait is found when children who have been adopted show traits that are more similar to those of their biological parents than to those of their adoptive parents. Evidence for environmental influence is found when the adoptee is more like his or her adoptive parents than the biological parents.
The results of family, twin, and adoption studies are combined to get a better idea of the influence of genetics and environment on traits of interest. The table below presents data on the correlations and heritability estimates for a variety of traits based on the results of behavioral genetics studies. [3]
If you look in the second column, you will see the observed correlations for the traits between identical twins who have been raised together in the same house by the same parents. This column represents the pure effects of genetics, in the sense that environmental differences have been controlled to be as small as possible. You can see that these correlations are higher for some traits than for others. Fingerprint patterns are very highly determined by our genetics (r = .96), whereas the Big Five trait dimensions have a heritability of 40% to 50%.
You can also see from the table that, overall, there is more influence of nature than of parents. Identical twins, even when they are raised in separate households by different parents (column 4), turn out to be quite similar in personality, and are more similar than fraternal twins who are raised in separate households (column 5). These results show that genetics has a strong influence on personality, and help explain why Elyse and Paula were so similar when they finally met.
Despite the overall role of genetics, you can see from the table that the correlations between identical twins (column 2) and heritability estimates for most traits (column 6) are substantially less than 1.00, showing that the environment also plays an important role in personality. [4] For instance, for sexual orientation, the estimates of heritability vary from 18% to 39% of the total across studies, suggesting that 61% to 82% of the total influence is due to environment.
You might at first think that parents would have a strong influence on the personalities of their children, but this would be incorrect. As you can see by looking in column 7 of the table, research finds that the influence of shared environment (i.e., the effects of parents or other caretakers) plays little or no role in adult personality. [5] Shared environment does influence the personality and behavior of young children, but this influence decreases rapidly as the child grows older. By the time we reach adulthood, the impact of shared environment on our personalities is weak at best. [6] What this means is that, although parents must provide a nourishing and stimulating environment for children, no matter how hard they try, they are not likely to be able to turn their children into geniuses or into professional athletes, nor will they be able to turn them into criminals.
If parents are not providing the environmental influences on the child, then what is? The last column in the table above, the influence of nonshared environment, represents whatever is “left over” after removing the effects of genetics and parents. You can see that these factors—the largely unknown things that happen to us that make us different from other people—often have the largest influence on personality.
Identical twins Gerald Levey and Mark Newman were separated at birth and raised in different homes. When reunited at age 31, they discovered, among many other similarities, that they both volunteered as firefighters.
Because Gerry and Mark are identical twins and share nearly 100% of their genes, their fingerprints are almost identical, as genetic factors contribute about 96% to the development of ridges on fingertips. In comparison, Gerry’s and Mark’s scores on the Big Five personality traits should be less similar. Researchers have broken down the contributions to personality development into the following four factors: genetics, shared environment, nonshared environment, and error.
Note: Error refers to the influence on personality development that cannot as yet be identified and is attributed to errors in testing and measurement procedures. This error percentage should decrease and other factors should increase as methodology improves.
Nonshared influences can occur because individual children tend to elicit different responses from their parents for a variety of reasons—their temperament, gender, or birth order, or accidents and illnesses they may have had. This explains why, despite Gerry’s and Mark’s remarkable similarities in personality, they also display some unique differences.
In addition to the use of behavioral genetics, our understanding of the role of biology in personality recently has been dramatically increased through the use of molecular genetics, which is the study of which genes are associated with which personality traits. [1] [2] These advances have occurred as a result of new knowledge about the structure of human DNA made possible through the Human Genome Project and related work that has identified the genes in the human body. [3] Molecular genetics researchers have also developed new techniques that allow them to find the locations of genes within chromosomes and to identify the effects those genes have when activated or deactivated.
One approach that can be used in animals, usually in laboratory mice, is the knockout study. In this approach the researchers use specialized techniques to remove or modify the influence of a gene in a line of “knockout” mice. [4] The researchers harvest embryonic stem cells from mouse embryos and then modify the DNA of the cells. The DNA is created such that the action of certain genes will be eliminated or “knocked out.” The cells are then injected into the embryos of other mice that are implanted into the uteruses of living female mice. When these animals are born, they are studied to see whether their behavior differs from a control group of normal animals. Research has found that removing or changing genes in mice can affect their anxiety, aggression, learning, and socialization patterns.
In humans, a molecular genetics study normally begins with the collection of a DNA sample from the participants in the study, usually by taking some cells from the inner surface of the cheek. In the lab, the DNA is extracted from the sampled cells and is combined with a solution containing a marker for the particular genes of interest as well as a fluorescent dye. If the gene is present in the DNA of the individual, then the solution will bind to that gene and activate the dye. The more the gene is expressed, the stronger the reaction.
In one common approach, DNA is collected from people who have a particular personality characteristic and also from people who do not. The DNA of the two groups is compared to see which genes differ between them. These studies are now able to compare thousands of genes at the same time. Research using molecular genetics has found genes associated with a variety of personality traits including novelty-seeking, [5] attention-deficit/hyperactivity disorder, [6] and smoking behavior. [7]
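To give a flavor of how such a group comparison works statistically, here is a minimal single-marker sketch. The counts are hypothetical, and real studies repeat tests like this across thousands of markers, correcting for the resulting multiple comparisons.

```python
# A hypothetical single-marker case-control comparison. Real molecular
# genetics studies run tests like this across thousands of markers and
# must correct for multiple comparisons.
from scipy.stats import chi2_contingency

#           allele present, allele absent
counts = [[130, 70],    # people who have the personality characteristic
          [90, 110]]    # comparison group who do not

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # a small p hints at an association
```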
Over the past two decades scientists have made substantial progress in understanding the important role of genetics in behavior. Behavioral genetics studies have found that, for most traits, genetics is more important than parental influence. And molecular genetics studies have begun to pinpoint the particular genes that are causing these differences. The results of these studies might lead you to believe that your destiny is determined by your genes, but this would be a mistaken assumption.
For one, the results of all research must be interpreted carefully. Over time we will learn even more about the role of genetics, and our conclusions about its influence will likely change. Current research in the area of behavioral genetics is often criticized for making assumptions about how researchers categorize identical and fraternal twins, about whether twins are in fact treated in the same way by their parents, about whether twins are representative of children more generally, and about many other issues. Although these critiques may not change the overall conclusions, it must be kept in mind that these findings are relatively new and will certainly be updated with time. [8]
Furthermore, it is important to reiterate that although genetics is important, and although we are learning more every day about its role in many personality variables, genetics does not determine everything. In fact, the major influence on personality is nonshared environmental influences, which include all the things that occur to us that make us unique individuals. These differences include variability in brain structure, nutrition, education, upbringing, and even interactions among the genes themselves.
The genetic differences that exist at birth may be either amplified or diminished over time through environmental factors. The brains and bodies of identical twins are not exactly the same, and they become even more different as they grow up. As a result, even genetically identical twins have distinct personalities, resulting in large part from environmental effects.
Because these nonshared environmental differences are nonsystematic and largely accidental or random, it will be difficult to ever determine exactly what will happen to a child as he or she grows up. Although we do inherit our genes, we do not inherit personality in any fixed sense. The effect of our genes on our behavior is entirely dependent on the context of our life as it unfolds day to day. No one can say, on the basis of our genes, what kind of human being we will turn out to be or what we will do in life.
Sam Spady, a 19-year-old student at Colorado State University, had been a homecoming queen, a class president, a captain of the cheerleading team, and an honor student in high school. But despite her outstanding credentials and her hopes and plans for the future, Sam Spady died on September 5, 2004, after a night of binge drinking with her friends.
Sam had attended a number of different parties on the Saturday night that she died, celebrating the CSU football game against the University of Colorado–Boulder. When she passed out, after consuming 30 to 40 beers and shots over the evening, her friends left her alone in an empty room in a fraternity house to sleep it off. The next morning a member of the fraternity found her dead. [1]
Sam is one of an estimated 1,700 college students between the ages of 18 and 24 who die from alcohol-related injuries each year. These deaths come from motor vehicle crashes, assaults, and overdosing as a result of binge drinking. [2]
“Nobody is immune,” said Sam’s father. “She was a smart kid, and she was a good kid. And if it could happen to her, it could happen to anybody.”
Despite efforts at alcohol education, Pastor Reza Zadeh, a former CSU student, says little has changed in the drinking culture since Sam’s death: “People still feel invincible. The bars still have 25-cent shot night and two-for-ones and no cover for girls.” [1]
Sam’s parents have created a foundation in her memory, dedicated to informing people, particularly college students, about the dangers of binge drinking and to helping them resist the peer pressure that brings it on. You can learn about the signs of alcohol poisoning at the Sam Spady Foundation.
The subdiscipline of psychology discussed in this unit reflects the highest level of explanation that we will consider. This topic, known as social psychology, is defined as the scientific study of how we feel about, think about, and behave toward the other people around us, and how those people influence our thoughts, feelings, and behavior.
The subject matter of social psychology is our everyday interactions with people, including the social groups to which we belong. Social-psychological questions include why we are often helpful to other people but at other times are unfriendly or aggressive; why we sometimes conform to the behaviors of others but at other times we assert our independence; and what factors help groups work together in effective and productive ways. A fundamental principle of social psychology is that, although we may not always be aware of it, our cognitions, emotions, and behaviors are substantially influenced by the social situation, or the people with whom we are interacting.
In this module we introduce the principles of social cognition—the part of human thinking that helps us understand and predict the behavior of ourselves and others—and consider the ways that our judgments about other people guide our behaviors toward them. We explore how we form impressions of other people, and what makes us like or dislike them. We also see how our attitudes—our enduring evaluations of people or things—influence, and are influenced by, our behavior.
Then we consider the social psychology of interpersonal relationships, including the behaviors of altruism, aggression, and conformity. We will see that humans have a natural tendency to help each other but that we may also become aggressive if we feel we are being threatened. And we will see how social norms, the accepted beliefs about what we do or what we should do in particular social situations (such as the norm of binge drinking common on many college campuses) influence our behavior. Finally, we consider the social psychology of social groups, with a particular focus on the conditions that limit and potentially increase productive group performance and decision-making.
The principles of social psychology can help us understand tragic events such as the death of Sam Spady. Many people might blame the tragedy on Sam herself, asking, for instance, “Why did she drink so much?” or “Why didn’t she say no?” As we will see in this unit, research conducted by social psychologists shows that the poor decisions Sam made on the night she died may have been due less to her own personal weaknesses or deficits than to her desires to fit in with and be accepted by the others around her—desires that in her case led to a disastrous outcome.
One important aspect of social cognition involves forming impressions of other people. Making these judgments quickly and accurately helps us guide our behavior to interact appropriately with the people we know. If we can figure out why our roommate is angry at us, we can react to resolve the problem; if we can determine how to motivate the people in our group to work harder on a project, then the project might be better.
Our initial judgments of others are based in large part on what we see. The physical features of other people, particularly their gender, race, age, and physical attractiveness, are very salient, and we often focus our attention on these dimensions. [3] [4]
Although it may seem inappropriate or shallow to admit it, many people are often strongly influenced by the physical attractiveness of others, and in some situations, physical attractiveness is the most important determinant of our initial liking for others. [5] Infants who are only a year old prefer to look at faces that adults consider attractive rather than at unattractive faces. [6] Evolutionary psychologists have argued that our belief that “what is beautiful is also good” may be because we use attractiveness as a cue for health; people whom we find more attractive may also, evolutionarily, have been healthier. [7]
One indicator of health is youth. Leslie Zebrowitz and her colleagues [8] [9] extensively studied the tendency for both men and women to prefer people whose faces have characteristics similar to those of babies. These features include large, round, and widely spaced eyes, a small nose and chin, prominent cheekbones, and a large forehead. People who have baby faces (both men and women) are seen as more attractive than people who do not.
Another indicator of health is symmetry. People are more attracted to faces that are more symmetrical and this may be due in part to the perception that people with symmetrical faces are healthier. [10]
Although you might think that we would prefer faces that are unusual or unique, the opposite is true. Langlois and Roggman [11] showed college students composite faces of men and women, each created by averaging 2, 4, 8, 16, or 32 individual faces. The researchers found that the more faces that were averaged into the stimulus, the more attractive it was judged. Again, our liking for average faces may be because they appear healthier.
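For readers curious about the stimuli, a composite of this kind can be approximated by pixel-averaging photographs. The sketch below assumes grayscale face images that are already aligned and identically sized, and the filenames are hypothetical; Langlois and Roggman’s stimuli were prepared with more careful alignment than simple averaging.

```python
# A rough sketch of building a composite "average" face by pixel-averaging.
# Assumes pre-aligned, same-sized grayscale photos; filenames are hypothetical.
import numpy as np
from PIL import Image

def composite_face(paths):
    """Average aligned grayscale face images pixel by pixel."""
    stack = np.stack([np.asarray(Image.open(p).convert("L"), dtype=float)
                      for p in paths])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# composite_face(["face01.png", "face02.png", "face03.png", "face04.png"]).save("avg4.png")
```

Averaging in this way smooths out idiosyncratic features, which is why composites built from more faces look progressively more "typical."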
Although preferences for youthful, symmetrical, and average faces have been observed cross-culturally, and thus appear to be common human preferences, different cultures may also have unique beliefs about what is attractive. In modern Western cultures, “thin is in,” and people prefer those who have little excess fat. [12] The need to be thin to be attractive is particularly strong for women in contemporary society, and the desire to maintain a low body weight can lead to low self-esteem, eating disorders, and other unhealthy behaviors. However, the norm of thinness has not always been in place; the preference for women with slender, masculine, and athletic looks has become stronger over the past 50 years. In contrast to the relatively universal preferences for youth, symmetry, and averageness, other cultures do not show such a strong propensity for thinness. [13]
The “what is beautiful is good” stereotype refers to the common belief that attractive people possess other desirable characteristics such as being more friendly, likable, and fun to be around compared to less attractive individuals. Snyder, Tanke, and Berscheid [14] conducted a study on the behavioral confirmation of this stereotype involving physical attractiveness.
In the study, male subjects participated in a getting-acquainted session with females over the telephone. Prior to the conversation, the male subjects were shown a photograph of either a physically attractive or physically unattractive female—not the photograph of the actual person they talked to. This was the experimental manipulation intended to create different beliefs and expectations on the part of the male subjects about their interaction partner. The conversations were tape recorded, and each participant’s conversational behavior was analyzed by naive observer judges for evidence of behavioral confirmation. Specifically, the judges rated how friendly, likable, and sociable the male and female interaction partners were during the conversation. The results revealed evidence of behavioral confirmation, indicating that a self-fulfilling prophecy had occurred.
We frequently use people’s appearances to form our judgments about them and to determine our responses to them. The tendency to attribute personality characteristics to people on the basis of their external appearance or their social group memberships is known as stereotyping. Our stereotypes about physically attractive people lead us to see them as more dominant, sexually warm, mentally healthy, intelligent, and socially skilled than we perceive physically unattractive people. [1] And our stereotypes lead us to treat people differently—the physically attractive are given better grades on essay exams, are more successful on job interviews, and receive lighter sentences in court judgments than their less attractive counterparts. [2] [3]
In addition to physical attractiveness, we also regularly stereotype people on the basis of their gender, race, age, religion, and many other characteristics, and these stereotypes are frequently negative. [4] This is unfair because stereotypes are based on our preconceptions and negative emotional responses to members of the group. Stereotyping is closely related to prejudice, the tendency to dislike people because of their appearance or group memberships, and discrimination, negative behaviors toward others based on prejudice. Stereotyping, prejudice, and discrimination work together. For example, we may not vote for a gay person because of our negative stereotypes about gays, and we may avoid people from other religions or those with mental illness because of our prejudices.
Some stereotypes may be accurate—at least in part. Research has found, for instance, that attractive people are actually more sociable, more popular, and less lonely than less attractive individuals. [1] And, consistent with the stereotype that they are “emotional,” women are, on average, more empathic and attuned to the emotions of others than are men. [5] Group differences in personality traits may occur in part because people act toward others on the basis of their stereotypes. This creates a self-fulfilling prophecy, which is when our expectations about the personality characteristics of others lead us to behave in ways that make those beliefs come true. If I have a stereotype that attractive people are friendly, then I may act in a friendly way toward them. This friendly behavior may be reciprocated by the attractive person, and if many other people also engage in the same positive behaviors, he or she may actually become friendlier.
But even if attractive people are on average friendlier than unattractive people, not all attractive people are friendlier than all unattractive people. And even if women are, on average, more emotional than men, not all men are less emotional than all women. Social psychologists believe that it is better to treat people as individuals rather than rely on our stereotypes and prejudices, because stereotyping and prejudice are always unfair and often inaccurate. [6] [7] Furthermore, many of our stereotypes and prejudices exert influence out of our awareness, such that we do not even know that we are using them.
You might want to test your own stereotypes and prejudices by completing the Implicit Association Test, a measure of unconscious stereotyping.
We use our stereotypes and prejudices in part because they are easy: if we can quickly size up people on the basis of their physical appearance, that can save us a lot of time and effort. We may be evolutionarily disposed to stereotyping. Because our primitive ancestors needed to accurately separate members of their own kin group from those of others, categorizing people into “us” (the ingroup) and “them” (the outgroup) was useful and even necessary. [8] And the positive emotions that we experience as a result of our group memberships—known as social identity—can be an important and beneficial part of our everyday experiences. [9] We may gain social identity as members of our university, our sports teams, our religious and racial groups, and many other groups.
But the fact that we may use our stereotypes does not mean that we should use them. Stereotypes, prejudice, and discrimination, whether they are consciously or unconsciously applied, make it difficult for some people to effectively contribute to society and may create both mental and physical health problems for them. [10]
In certain situations, people are concerned that they will be evaluated based on a negative stereotype. For example, research has shown that the academic performance of women and African Americans can be affected by concerns about confirming the expectation that they will not do well relative to individuals who belong to stereotypically high-performing groups. This is known as stereotype threat. In addition, European American students may feel stereotype threat on the basketball court when competing with African Americans. For further discussion of stereotype threat, watch the video below.
In some cases getting beyond our prejudices is required by law, as detailed in the U.S. Civil Rights Act of 1964, the Equal Employment Opportunity Act of 1972, and the Fair Housing Act of 1968.
There are individual differences in the degree to which prejudice influences behavior. For example, some people are more likely to try to control and confront their stereotypes and prejudices whereas others apply them more freely. [11] [12] In addition, some people believe that some groups are naturally better than others—whereas other people are more egalitarian and hold fewer prejudices. [13] [14]
The tendency to endorse stereotypes and prejudices and to act on them can be reduced, for instance, through positive interactions and friendships with members of other groups, through practice in avoiding using them, and through education. [15]
Research has demonstrated that people can draw very accurate conclusions about others on the basis of very limited data. Ambady and Rosenthal [16] made videotapes of six female and seven male graduate students while they were teaching an undergraduate course. The courses covered diverse areas of the college curriculum, including humanities, social sciences, and natural sciences. For each teacher, three 10-second video clips were taken: 10 seconds from the first 10 minutes of the class, 10 seconds from the middle of the class, and 10 seconds from the last 10 minutes of the class.
The researchers then asked nine female undergraduates to rate the clips of the teachers on 15 dimensions including optimistic, confident, active, enthusiastic, dominant, likable, warm, competent, and supportive. Ambady and her colleagues then compared the ratings of the participants who had seen the teacher for only 30 seconds with the ratings of the same instructors that had been made by students who had spent a whole semester with the teacher, and who had rated that teacher at the end of the semester on scales such as “Rate the quality of the section overall” and “Rate section leader’s performance overall.” As you can see in the following table, the ratings of the participants and the ratings of the students were highly positively correlated.
Accurate Perceptions in 30 Seconds
This table shows the Pearson correlation coefficients between the impressions that a group of students made after they had seen a video of instructors teaching for only 30 seconds and the teaching ratings of the same instructors made by students who had spent a whole semester in the class. The correlations are all positive, and many of them are quite large. The conclusion is that people are sometimes able to draw accurate impressions about other people very quickly. From Flat World Knowledge, Introduction to Psychology, v1.0. Source: Ambady and Rosenthal. [16]
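In case you have not worked with correlation coefficients before, a Pearson correlation $r$ summarizes how strongly two sets of paired scores move together. It ranges from $-1$ (a perfect negative relationship) through $0$ (no relationship) to $+1$ (a perfect positive relationship). For paired ratings $(x_i, y_i)$ of the same $n$ instructors, it is computed as

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} $$

where $\bar{x}$ and $\bar{y}$ are the mean ratings. A large positive $r$ in this study therefore means that instructors who impressed the 30-second viewers also tended to impress the students who spent the whole semester with them.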
If the finding that judgments made about people in 30 seconds correlate highly with judgments made about the same people after a whole semester surprises you, then you may be even more surprised to hear that we do not even need that much time. Indeed, Willis and Todorov [17] found that even a tenth of a second was enough to make judgments that correlated highly with those same judgments made by other people who were given several minutes to make the judgments. Other research has found that we can make accurate judgments, for instance, about salespersons [18] and about the sexual orientation of other people, [19] in just a few seconds. Todorov, Mandisodza, Goren, and Hall [20] found that people voted for political candidates in large part on the basis of whether or not their faces, seen only for one second, looked like faces of competent people. Taken together, this research shows that we are well able to form initial impressions of others quickly and often quite accurately.
One of the most important tasks faced by humans is to develop successful relationships with others. These include acquaintanceships and friendships but also the more important close relationships, which are the long-term intimate and romantic relationships that we develop with another person—for instance, in a marriage. [1] Because most of us will want to enter into a close relationship at some point, and because close relationships are evolutionarily important as they form the basis for effective child rearing, it is useful to know what psychologists have learned about the principles of liking and loving within them.
A major interest of social psychologists is the study of interpersonal attraction, or what makes people like, and even love, each other. One important factor is a perceived similarity in values and beliefs between the partners. [2] Similarity is important for relationships because it is more convenient (it’s easier if both partners like to ski or go to the movies), but also because similarity supports our values—I can feel better about myself and my choice of activities if I see that you also enjoy doing the same things that I do.
Liking is also enhanced by self-disclosure, the tendency to communicate frequently, without fear of reprisal, and in an accepting and empathetic manner. Friends are friends because we can talk to them openly about our needs and goals, and because they listen to and respond to our needs. [3] But self-disclosure must be balanced. If I open up to you about the concerns that are important to me, I expect you to do the same in return. When self-disclosure is not reciprocal, the relationship is less likely to last.
Another important determinant of liking is proximity, or the extent to which people are physically near us. Research has found that we are more likely to develop friendships with people who are nearby, for instance, those who live in the same dorm that we do, and even with people who just happen to sit nearer to us in our classes. [4]
Proximity has its effect on liking through the principle of mere exposure, which is the tendency to prefer stimuli (including but not limited to people) that we have seen more frequently. Moreland and Beach [5] studied mere exposure by having female confederates attend a large lecture class either 0, 5, 10, or 15 times during a semester. At the end of the term, students were shown pictures of the confederates and asked to indicate both if they recognized them and also how much they liked them. The number of times the confederates had attended class didn’t influence the other students’ ability to recognize them, but it did influence their liking for them. As predicted by the mere exposure hypothesis, the confederates who had attended class more often were liked more.
The effect of mere exposure is powerful and occurs in a wide variety of situations. Infants tend to smile at a photograph of someone they have seen already more than they smile at a photograph of someone they are seeing for the first time. [6] People prefer side-to-side reversed images of their own faces over their normal (nonreversed) face, whereas their friends prefer their normal face over the reversed one. [7] This is expected on the basis of mere exposure, since people see their own faces primarily in mirrors and thus are exposed to the reversed face more often.
Mere exposure may well have an evolutionary basis. We have an initial fear of the unknown, but as things become more familiar they seem safer, and thus produce more positive feelings and seem less threatening and dangerous. [8] In fact, research has found that stimuli tend to produce more positive affect as they become more familiar. [9] When the stimuli are people, there may well be an added effect. Familiar people become more likely to be seen as part of the ingroup rather than the outgroup, and this may lead us to like them more. Leslie Zebrowitz and her colleagues found that we like people of our own race in part because they are perceived as similar to us. [10]
In the most successful relationships the two people begin to see themselves as a single unit. Arthur Aron and his colleagues [11] assessed the role of closeness in relationships using the Inclusion of Other in the Self Scale, as shown in the figure below. You might try completing the measure yourself for some different people that you know—for instance, your family members, friends, spouse, or girlfriend or boyfriend. The measure is simple to use and to interpret; if people see the circles representing the self and the other as more overlapping, the relationship is close, but if they choose circles that are less overlapping, the relationship is less so.
Although the closeness measure is very simple, it has been found to be strongly correlated with people’s satisfaction with their close relationships and with the tendency for couples to stay together. [12] [13] When the partners in a relationship feel that they are close, and when they indicate that the relationship is based on caring, warmth, acceptance, and social support, we can say that the relationship is intimate. [3]
When a couple begins to take care of a household together, has children, and perhaps has to care for elderly parents, the requirements of the relationship will grow. As a result, partners in close relationships increasingly turn to each other for help in coordinating activities, remembering dates and appointments, and accomplishing tasks. Relationships are close in part because the couple becomes highly interdependent, relying on each other to meet important goals. [14]
In relationships in which a positive rapport between the partners is developed and maintained over a period of time, the partners are naturally happy with the relationship and they become committed to it. Commitment refers to the feelings and actions that keep partners working together to maintain the relationship [15] and is characterized by mutual expectations that the self and the partner will be responsive to each other’s needs. [16] Partners who are committed to the relationship see their mates as more attractive, are less able to imagine themselves with another partner, express less interest in other potential mates, and are less likely to break up. [17]
People also find relationships more satisfactory, and stay in them longer, when they feel that the relationships are rewarding. When the needs of either or both of the partners are not being met, the relationship is in trouble. This is not to say that people only think about the benefits they are getting; they will also consider the needs of the other. But over the long term, both partners must benefit from the relationship.
Sexual arousal and excitement are especially important early on in relationships, but intimacy also depends on sexual and romantic attraction, and, indeed, on passion: the partners must display positive affect toward each other. Happy couples are in positive moods when they are around each other; they laugh with each other, express approval rather than criticism of each other’s behaviors, and enjoy physical contact. People are happier in their relationships when they view the other person in a positive or even an “idealized” sense, rather than a more realistic and perhaps more negative one. [18]
Margaret Clark and Edward Lemay [19] recently reviewed the literature on close relationships and argued that their most important characteristic is a sense of responsiveness. People are happy, healthy, and likely to stay in relationships in which they are sure that they can trust the other person to understand, validate, and care for them. It is this unconditional giving and receiving of love that promotes the welfare of both partners and provides the secure base that allows both partners to thrive.
When we observe people’s behavior, we may attempt to determine if the behavior really reflects their underlying personality. If Frank hits Joe, we might wonder if Frank is naturally aggressive or if perhaps Joe had provoked him. If Leslie leaves a big tip for the waitress, we might wonder if she is a generous person or if the service was particularly excellent. The process of trying to determine the causes of people’s behavior, with the goal of learning about their personalities, is known as causal attribution. [1]
Making causal attributions is a bit like conducting a research study. We carefully observe the people we are interested in and note how they behave in different social situations. After we have made our observations, we draw our conclusions. Sometimes we may decide that the source or cause of the behavior was due to characteristics that reside within the individual; this is referred to as a dispositional attribution. At other times, we may determine that the behavior was caused primarily by the situation; this is called making a situational attribution. And at other times we may decide that the behavior was caused by both the person and the situation.
It is easier to make personal attributions when behavior is more unusual or unexpected. Imagine that you go to a party and you are introduced to Tess. Tess shakes your hand and says “Nice to meet you!” Would you readily conclude, on the basis of this behavior, that Tess is a friendly person? Probably not. Because the social situation demands that people act in a friendly way (shaking your hand and saying “nice to meet you”), it is difficult to know whether Tess was friendly because of the situation or because she is really friendly. Imagine, however, that instead of shaking your hand, Tess sticks out her tongue at you and walks away. I think you would agree that it is easier in this case to infer that Tess is unfriendly because her behavior is so contrary to what one would expect. [2]
Although people are reasonably accurate in their attributions (we could say, perhaps, that they are “good enough”), [3] they are far from perfect. One error that we frequently make when making judgments about ourselves is to make self-serving attributions by judging the causes of our own behaviors in overly positive ways. If you did well on a test, you will probably attribute that success to dispositional factors (“I’m smart,” “I studied really hard”), but if you do poorly on the test you are more likely to make situational attributions (“The test was hard,” “I had bad luck”). Although making causal attributions is expected to be logical and scientific, our emotions are not irrelevant.
Another way that our attributions are often inaccurate is that we are, by and large, too quick to attribute the behavior of other people to something about them rather than to something about their situation. We are more likely to say, “Leslie left a big tip, so she must be generous” than “Leslie left a big tip, but perhaps that was because the service was really excellent.” The common tendency to overestimate the role of dispositional factors and overlook the impact of situations in judging others is known as the fundamental attribution error (or correspondence bias).
The fundamental attribution error occurs in part because other people are so salient in our social environments. When I look at you, I see you as my focus, and so I am likely to make personal attributions about you. If the situation is reversed such that people see situations from the perspectives of others, the fundamental attribution error is reduced. [4] And when we judge people, we often see them in only one situation. It’s easy for you to think that your math professor is “picky and detail-oriented” because that describes her behavior in class, but you don’t know how she acts with her friends and family, which might be completely different. And we also tend to make person attributions because they are easy. We are more likely to commit the fundamental attribution error—quickly jumping to the conclusion that behavior is caused by underlying personality—when we are tired, distracted, or busy doing other things. [5]
An important moral about perceiving others applies here: We should not be too quick to judge other people. It is easy to think that poor people are lazy, that people who say something harsh are rude or unfriendly, and that all terrorists are insane madmen. But these attributions may frequently overemphasize the role of the person, resulting in an inappropriate and inaccurate tendency to blame the victim. [6] [7] Sometimes people are lazy and rude, and some terrorists are possibly insane, but these people may also be influenced by the situation in which they find themselves. Poor people may find it more difficult to get work and education because of the environment they grow up in, people may say rude things because they are feeling threatened or are in pain, and terrorists may have learned in their family and school that committing violence in the service of their beliefs is justified. When you find yourself making strong person attributions for the behaviors of others, I hope you will stop and think more carefully. Would you want other people to make person attributions for your behavior in the same situation, or would you prefer that they more fully consider the situation surrounding your behavior? Are you perhaps making the fundamental attribution error?
Attitude refers to our relatively enduring evaluations of people and things. [1] We each hold many thousands of attitudes, including those about family and friends, political parties and political figures, abortion rights, preferences for music, and much more. Some of our attitudes, including those about sports, roller coaster rides, and capital punishment, are at least in part heritable, which explains why we are similar to our parents on many dimensions. [2] Other attitudes are learned through direct and indirect experiences with the attitude objects. [3]
Attitudes are important because they frequently (but not always) predict behavior. If we know that a person has a more positive attitude toward Frosted Flakes than toward Cheerios, then we will naturally predict that she will buy more of the former when she gets to the market. If we know that Charlie is in love with Charlene, then we will not be surprised when he proposes marriage. Because attitudes often predict behavior, people who wish to change behavior frequently try to change attitudes through the use of persuasive communications. The following table presents some of the many techniques that can be used to change people’s attitudes. [4]
Techniques That Can Be Effective in Persuading Others
From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.
Attitudes predict behavior better for some people than for others. People who are high in self-monitoring—the tendency to regulate behavior to meet the demands of social situations—tend to change their behaviors to match the social situation and thus do not always act on their attitudes. [5] High self-monitors agree with statements such as, “In different situations and with different people, I often act like very different persons” and “I guess I put on a show to impress or entertain people.” Attitudes are more likely to predict behavior for low self-monitors, who are more likely to act on their own attitudes even when the social situation suggests that they should behave otherwise. Low self-monitors are more likely to agree with statements such as “At parties and social gatherings, I do not attempt to do or say things that others will like” and “I can only argue for ideas that I already believe.”
The match between the social situations in which the attitudes are expressed and the behaviors are engaged in also matters, such that there is a greater attitude-behavior correlation when the social situations match. Imagine for a minute the case of Magritte, a 16-year-old high school student. Magritte tells her parents that she hates the idea of smoking cigarettes. But how sure are you that Magritte’s attitude will predict her behavior? Would you be willing to bet that she’d never try smoking when she’s out with her friends?
The problem here is that Magritte’s attitude is being expressed in one social situation (when she is with her parents) whereas the behavior (trying a cigarette) is going to occur in a very different social situation (when she is out with her friends). The relevant social norms are, of course, much different in the two situations. Magritte’s friends might be able to convince her to try smoking, despite her initial negative attitude, by enticing her with peer pressure. Behaviors are more likely to be consistent with attitudes when the social situation in which the behavior occurs is similar to the situation in which the attitude is expressed. [6]
Although it might not have surprised you to hear that our attitudes predict our behaviors, you might be more surprised to learn that our behaviors also have an influence on our attitudes. It makes sense that if I like Frosted Flakes I’ll buy them, because my positive attitude toward the product influences my behavior. But my attitudes toward Frosted Flakes may also become more positive if I decide—for whatever reason—to buy some. It makes sense that Charlie’s love for Charlene will lead him to propose marriage, but it is also the case that he will likely love Charlene even more after he does so.
Behaviors influence attitudes in part through the process of self-perception. Self-perception occurs when we use our own behavior as a guide to help us determine our own thoughts and feelings. [7] [8] In one demonstration of the power of self-perception, Wells and Petty [9] assigned their research participants to move their heads either up and down or from side to side as they read newspaper editorials. The participants who had moved their heads up and down later agreed with the content of the editorials more than the people who had moved them from side to side. Wells and Petty argued that this occurred because the participants used their own head movements to determine their attitudes about the editorials.
Persuaders may use the principles of self-perception to change attitudes. The foot-in-the-door technique is a method of persuasion in which the person is first persuaded to accept a rather minor request and then asked for a larger one after that. In one demonstration, Guéguen and Jacob [10] found that students in a computer discussion group were more likely to volunteer to complete a 40-question survey on their food habits (which required 15 to 20 minutes of their time) if they had already, a few minutes earlier, agreed to help the same requestor with a simple computer-related question (about how to convert a file type) than if they had not first been given the smaller opportunity to help. The idea is that when asked the second time, the people looked at their past behavior (having agreed to the small request) and inferred that they are helpful people.
Behavior also influences our attitudes through a more emotional process known as cognitive dissonance. Cognitive dissonance refers to the discomfort we experience when we choose to behave in ways that we see as inappropriate. [11] [12] If we feel that we have wasted our time or acted against our own moral principles, we experience negative emotions (dissonance) and may change our attitudes about the behavior to reduce the negative feelings.
Elliot Aronson and Judson Mills [13] studied whether the cognitive dissonance created by an initiation process could explain how much commitment students felt to a group that they were part of. In their experiment, female college students volunteered to join a group that would be meeting regularly to discuss various aspects of the psychology of sex. According to random assignment, some of the women were told that they would be required to perform an embarrassing procedure (they were asked to read some obscene words and some sexually oriented passages from a novel in public) before they could join the group, whereas other women did not have to go through this initiation. Then all the women got a chance to listen to the group’s conversation, which turned out to be very boring.
Aronson and Mills found that the women who had gone through the embarrassing experience subsequently reported more liking for the group than those who had not. They argued that the more effort an individual expends to become a member of a group (e.g., a severe initiation), the more committed to the group that person will become, in order to justify the effort put in during the initiation. The idea is that the effort creates dissonant cognitions (“I did all this work to join the group”), which are then justified by creating more consonant ones (“OK, this group is really pretty fun”). Thus the women who spent little effort to get into the group were able to see the group’s discussion for what it was: dull and boring. The women who went through the more severe initiation, however, succeeded in convincing themselves that the same discussion was a worthwhile experience.
When we put in effort for something—an initiation, a big purchase price, or even some of our precious time—we will likely end up liking the activity more than we would have if the effort had been less; not doing so would lead us to experience the unpleasant feelings of dissonance. After we buy a product, we convince ourselves that we made the right choice because the product is excellent. If we fail to lose the weight we wanted to, we decide that we look good anyway. If we hurt someone else’s feelings, we may even decide that he or she is a bad person who deserves our negative behavior. To escape from feeling poorly about themselves, people will engage in quite extraordinary rationalizing. No wonder that most of us believe that “If I had it all to do over again, I would not change anything important.”
Social psychologists generally define an attitude as an evaluation of a person, object, or idea. The evaluation can be positive, negative, or even ambivalent. To elaborate further on the definition of an attitude, consider its three parts: an affective component, consisting of our emotional reactions toward the attitude object (e.g., another person or a social issue); a cognitive component, consisting of our thoughts and beliefs about the attitude object; and a behavioral component, consisting of our actions or observable behaviors toward the attitude object.
As we have seen, not only do our attitudes often predict our behaviors, but our behaviors often have a substantial effect on our attitudes. Self-perception theory and cognitive dissonance theory provide different explanations of how our behavior can influence our attitudes. In this activity, you will see ten scenarios. Each is an example of self-perception, cognitive dissonance, or some other concept. For each scenario, indicate which theory best applies.
Humans have developed a variety of social skills that enhance our ability to successfully interact with others. We are often helpful, even when that helping comes at some cost to ourselves, and we often change our opinions and beliefs to fit in with the opinions of those whom we care about. Yet we also are able to be aggressive if we feel the situation warrants it.
Altruism refers to any behavior that is designed to increase another person’s welfare, and particularly those actions that do not seem to provide a direct reward to the person who performs them. [1] Altruism occurs when we stop to help a stranger who has been stranded on the highway, when we volunteer at a homeless shelter, or when we donate to a charity. According to a survey conducted by Independent Sector, a coalition that studies and encourages volunteering, in 2001 over 83 million American adults reported that they helped others by volunteering, an average of 3.6 hours per week.
Because altruism is costly, you might wonder why we engage in it at all. There are a variety of explanations for the occurrence of altruism, and the following table summarizes some of the variables that are known to increase helping.
Some of the Variables Known to Increase Helping
From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.
The tendency to help others in need may be a functional evolutionary adaptation. Although helping others can be costly to us as individuals, helping people who are related to us can perpetuate our own genes. [7] [8] [9] Burnstein, Crandall, and Kitayama [10] found that students indicated they would be more likely to help a person who was closely related to them (e.g., a sibling, parent, or child) than they would be to help a person who was more distantly related (e.g., a niece, nephew, uncle, or grandmother). People are more likely to donate kidneys to relatives than to strangers, [11] and even children indicate that they are more likely to help their siblings than they are to help a friend. [12]
Although it makes evolutionary sense that we would help people to whom we are related, why would we help people to whom we are not related? One explanation for such behavior is based on the principle of reciprocal altruism. [13] [14] Reciprocal altruism is the principle that, if we help other people now, those others will return the favor should we need their help in the future. By helping others, we increase the chances of survival and reproductive success for ourselves and for those we help. Over the course of evolution, those who engage in reciprocal altruism should be able to reproduce more often than those who do not, thus enabling this kind of altruism to continue.
We also learn to help by imitating the helpful behavior of others. Although people frequently worry about the negative impact of the violence in mass media, there is also a great deal of helping behavior shown on television. Smith and colleagues [15] found that 73% of TV shows had some altruism, and that about three altruistic behaviors were shown every hour. Furthermore, the prevalence of altruism was particularly high in children’s shows. But just as viewing altruism can increase helping, imitating behavior that is not altruistic can decrease altruism. For instance, Anderson and Bushman [16] found that playing violent video games led to a decrease in helping.
We are more likely to help when we receive rewards for doing so and less likely to help when helping is costly. Parents praise their children who share their toys with others, and may reprimand children who are selfish. In addition, we are more likely to help when we have plenty of time than when we are in a hurry. [17] Another potential reward is the status we gain as a result of helping. When we act altruistically, we gain a reputation as a person with high status who is able and willing to help others, and this status makes us more desirable in the eyes of others. [6]
One outcome of the reinforcement of altruism is the development of social norms about helping—standards of behavior that we see as appropriate and desirable. The reciprocity norm reminds us that we should follow the principles of reciprocal altruism. If someone helps us, then we should help them in the future, and we should help people now with the expectation that they will help us later if we need it. The reciprocity norm is expressed in everyday adages such as “Scratch my back and I’ll scratch yours” and in religious and philosophical teachings such as the “Golden Rule”: “Do unto others as you would have them do unto you.”
Because this kind of helping is based on the return of earlier help and the expectation of a future return from others, it might not seem like true altruism. We might hope that our children internalize another relevant social norm that seems more altruistic: the social responsibility norm, which tells us that we should try to help others who need assistance, even without any expectation of future paybacks. This is the basis of many religious and ethical principles: as good human beings, we should reach out and help other people whenever we can.
Late at night on March 13, 1964, 28-year-old Kitty Genovese was murdered within a few yards of her apartment building in New York City after a violent fight with her killer in which she struggled and screamed. When the police interviewed Kitty’s neighbors about the crime, they discovered that 38 people indicated that they had seen or heard the fight but none of them intervened, and only one person called the police.
Two social psychologists, Bibb Latané and John Darley, were interested in the factors that influenced people to help (or to not help) in such situations. [18] They developed a model (see the figure below) that took into consideration the important role of the social situation in determining helping. The model has been extensively tested in many studies. Social psychologists have discovered that the presence of the 38 witnesses itself contributed to the tragedy, because people are less likely to notice, interpret, and respond to emergencies when they are with others than when they are alone.
The first step in the model is noticing the event. Latané and Darley [18] asked research participants to complete a questionnaire in a small room. Some of the participants completed the questionnaire alone, whereas others completed it in small groups. A few minutes after the participants had begun the questionnaires, some white smoke came into the room through a vent in the wall. The experimenters timed how long it took before the first person in the room looked up and noticed the smoke.
The people who were working alone noticed the smoke in about 5 seconds, and within 4 minutes most of the participants who were working alone had taken some action. On the other hand, on average, the first person in the group conditions did not notice the smoke until over 20 seconds had elapsed. And, although 75% of the participants who were working alone reported the smoke within 4 minutes, the smoke was reported in only 12% of the groups by that time. In fact, in only 3 of the 8 groups did anyone report the smoke, even after it had filled the room. You can see that the social situation has a powerful influence on noticing; we simply don’t see emergencies when other people are with us.
Even if we notice an emergency, we might not interpret it as one. Were the cries of Kitty Genovese really calls for help, or were they simply an argument with a boyfriend? The problem is compounded when others are present. When we are unsure how to interpret events we normally look to others to help us understand them, and at the same time they also are looking to us for information. The problem is that each bystander thinks that other people aren’t acting because they don’t see an emergency. Believing that the others know something that they don’t, each observer concludes that help is not required.
Even if we have noticed the emergency and interpret it as being one, this does not necessarily mean that we will come to the rescue of the other person. We still need to decide that it is our responsibility to do something. The problem is that when we see others around, it is easy to assume that they are going to do something, and that we don’t need to do anything ourselves. Diffusion of responsibility occurs when we assume that others will take action and therefore we do not take action ourselves. The irony again, of course, is that people are more likely to help when they are the only ones in the situation than when there are others around.
Perhaps you have noticed diffusion of responsibility on social media such as Facebook: it is often easier to get help when you direct your request to a specific person or a small set of users than when you direct it to a large audience. Markey [19] found that people received help more quickly (in about 37 seconds) when they asked for help by specifying a participant’s name than when no name was specified (51 seconds).
The final step in the helping model is knowing how to help. Most of us are not professionals and we have little training in how to help in emergencies. People with emergency-response training are more likely to help. The rest of us just don’t know what to do, and therefore we may simply walk by. On the other hand, today many people have cell phones, and we can do a lot with a quick call; in fact, a phone call made in time might have saved Kitty Genovese’s life.
Aggression is behavior that is intended to harm another individual. Aggression may occur in the heat of the moment, for instance, when a jealous lover strikes out in rage or fans light fires and destroy cars after an important basketball game. Or it may occur in a more cognitive, deliberate, and planned way, such as the aggression of a bully who steals another child’s toys, a terrorist who kills civilians to gain political exposure, or a hired assassin who kills for money.
Not all aggression is physical. Aggression also occurs in nonphysical ways, as when children exclude others from activities, call them names, or spread rumors about them. Paquette and Underwood [1] found that both boys and girls said that nonphysical aggression such as name-calling made them feel more “sad and bad” than did physical aggression.
We may aggress against others in part because it allows us to gain access to valuable resources such as food, territory, and desirable mates, or to protect ourselves from direct attack by others. If aggression helps in the survival of our genes, then the process of natural selection may well have caused humans, as it would any other animal, to be aggressive. [2]
There is also evidence for a biological basis of aggression. In the brain, aggression is strongly influenced by the amygdala. One of the primary functions of the amygdala is to help us learn to associate stimuli with the rewards and the punishments that they may provide. The amygdala is particularly activated in our responses to threatening or fear-arousing stimuli.
But just because we can aggress does not mean that we will aggress. It is not necessarily evolutionarily adaptive to aggress in all situations. Neither people nor animals are always aggressive; they rely on aggression only when they feel that they absolutely need to. [3] The prefrontal cortex serves as a control center for aggression; when it is more highly activated, we are more able to control our aggressive impulses. Research has found that the prefrontal cortex is less active in murderers and death row inmates, suggesting that violent crime may be caused at least in part by a failure or reduced ability to regulate aggression. [4]
Hormones are also important in regulating aggression. In particular, testosterone is associated with increased aggression in both males and females. Research conducted on a variety of animals has found a positive correlation between levels of testosterone and aggression. This relationship seems to be weaker among humans than among animals, yet it is still significant. [5]
Consuming alcohol increases the likelihood that people will respond aggressively to provocations, even for people who are not normally aggressive. [6] Alcohol lowers inhibitions and makes people more self-focused and less aware of the social constraints that normally prevent them from behaving aggressively. [7] [8]
If I were to ask you about the times that you have been aggressive, I bet that you would say it was more likely when you were angry, in a bad mood, tired, in pain, sick, or frustrated, and you would be right: We are much more likely to aggress when we are experiencing negative emotions, especially frustration. When we are frustrated we may lash out at others, even at people who did not cause the frustration. In some cases the aggression is displaced aggression, which is aggression that is directed at an object or person other than the person who caused the frustration.
Other negative emotions also increase aggression. Griffit and Veitch [9] had students complete questionnaires either in rooms kept at a normal temperature or in rooms where the temperature was over 90 degrees Fahrenheit. The students in the hot rooms expressed significantly more hostility. Aggression is greater during heat waves, and most violent riots occur during the hottest days of the year. [10] Pain also increases aggression. [11]
If we are aware that we are feeling negative emotions, we might think that we could release them in a relatively harmless way, such as by punching a pillow or kicking something, with the hopes that doing so will release our aggressive tendencies. Catharsis—the idea that observing or engaging in less harmful aggressive actions will reduce the tendency to aggress later in a more harmful way—has been considered by many as a way of decreasing violence, and it was an important part of the theories of Sigmund Freud.
As far as social psychologists have been able to determine, however, catharsis simply does not work. Rather than decreasing aggression, engaging in aggressive behaviors of any type increases the likelihood of later aggression. Bushman, Baumeister, and Stack [12] first angered their research participants by having another student insult them. Then half of the participants were allowed to engage in a cathartic behavior: They were given boxing gloves and then got a chance to hit a punching bag for 2 minutes. Then all the participants played a game in which they had a chance to blast their opponent with a painful blast of white noise. Contrary to the catharsis hypothesis, the students who hit the punching bag set a higher noise level and delivered longer bursts of noise than the participants who did not. It seems that if we hit a punching bag, punch a pillow, or scream as loud as we can to release our frustration, the opposite may occur—rather than decreasing aggression, these behaviors in fact increase it.
The average American watches over 4 hours of television every day, and these programs contain a substantial amount of aggression. At the same time, children are also exposed to violence in movies and video games, as well as in popular music and music videos that include violent lyrics and imagery. Research evidence suggests that, on average, people who watch violent behavior become more aggressive. The evidence supporting this relationship comes from many studies conducted over many years using both correlational designs as well as experiments in which people have been randomly assigned to view either violent or nonviolent material. [13] Viewing violent behavior also increases aggression in part through observational learning. Children who witness violence are more likely to be aggressive. One example is in the studies of Albert Bandura, as shown below.
Another outcome of viewing large amounts of violent material is desensitization, which is the tendency over time to show weaker emotional responses to emotional stimuli. When we first see violence, we are likely to be shocked, aroused, and even repulsed by it. However, over time, as we see more and more violence, we become habituated to it, such that the subsequent exposures produce fewer and fewer negative emotional responses. Continually viewing violence also makes us more distrustful and more likely to behave aggressively. [14] [15]
Of course, not everyone who views violent material becomes aggressive; individual differences also matter. People who experience a lot of negative affect and who feel that they are frequently rejected by others are more aggressive. [16] People with inflated or unstable self-esteem are more prone to anger and are highly aggressive when their high self-image is threatened. [17] For instance, classroom bullies are those children who always want to be the center of attention, who think a lot of themselves, and who cannot take criticism. [18] Bullies are highly motivated to protect their inflated self-concept, and they react with anger and aggression when it is threatened.
Across cultures, men tend to be more physically violent than women. [19] [20] About 99% of rapes and about 90% of robberies, assaults, and murders are committed by men. [21] These gender differences do not imply that women are never aggressive. Both men and women respond to insults and provocation with aggression; the differences between men and women are smaller after they have been frustrated, insulted, or threatened. [22]
In addition to differences across cultures, there are also regional differences in the incidence of violence. As one example, the homicide rate is significantly higher in the southern and the western states but lower in the eastern and northern states. One explanation for these differences is variation in cultural norms about the appropriate reactions to threats against one’s social status. These cultural differences apply primarily to men. In short, some men react more violently than others when they believe that others are threatening or insulting them.
The social norm that condones and even encourages responding to insults with aggression is known as the culture of honor. The culture of honor leads people to view even relatively minor conflicts or disputes as challenges to one’s social status and reputation and can therefore trigger aggressive responses. Beliefs in culture of honor norms are stronger among men who live or who were raised in the South and West than among men who are from or living in the North and East.
In one series of experiments, Cohen, Nisbett, Bowdle, and Schwarz [23] investigated how white male students who had grown up either in the northern or in the southern regions of the United States responded to insults. The experiments involved an encounter in which the research participant was walking down a narrow hallway. The experimenters enlisted the help of a confederate who did not give way to the participant but rather bumped into him and insulted him. Compared with Northerners, students from the South who had been bumped were more likely to think that their masculine reputations had been threatened, exhibited greater physiological signs of being upset, had higher testosterone levels, engaged in more aggressive and dominant behavior (gave firmer handshakes), and were less willing to yield to a subsequent confederate.
In another test of the impact of culture of honor, Cohen and Nisbett [24] sent letters to employers across the United States from a fictitious job applicant who admitted having been convicted of a felony. To half the employers, the applicant reported that he had impulsively killed a man who had been having an affair with his fiancée and then taunted him about it in a crowded bar. To the other half, the applicant reported that he had stolen a car because he needed the money to pay off debts. Employers from the South and the West, places in which the culture of honor is strong, were more likely than employers in the North and East to respond in an understanding and cooperative way to the letter from the convicted killer, but there were no cultural differences for the letter from the auto thief.
One possible explanation for regional differences in the culture of honor involves the history of the regions. While people in the northern parts of the United States were usually farmers who grew crops, people from southern climates were more likely to raise livestock. Unlike the crops grown by the northerners, the herds were mobile and vulnerable to theft, and it was difficult for law enforcement officials to protect them. To be successful in an environment where theft was common, a man had to build a reputation for strength and toughness, and this was accomplished by a willingness to use swift, and sometimes violent, punishment against thieves.
When we decide on what courses to enroll in by asking for advice from our friends, change our beliefs or behaviors as a result of the ideas that we hear from others, or binge drink because our friends are doing it, we are engaging in conformity, a change in beliefs or behavior that occurs as the result of the presence of the other people around us. We conform not only because we believe that other people have accurate information and we want to have knowledge (informational conformity) but also because we want to be liked by others (normative conformity).
The typical outcome of conformity is that our beliefs and behaviors become more similar to those of others around us. But some situations create more conformity than others, and some of the factors that contribute to conformity are shown in the table below.
Variables That Increase Conformity
From Flat World Knowledge, Introduction to Psychology. Sources: Milgram, S., Bickman, L., and Berkowitz, L. (1969). Note on the drawing power of crowds of different size. Journal of Personality and Social Psychology 13(2):79–82; and Milgram, S. (1974). Obedience to Authority: An Experimental View. New York: Harper and Row.
At times conformity occurs in a relatively spontaneous and unconscious way, without any obvious intent of one person to change the other, or an awareness that the conformity is occurring. Robert Cialdini and his colleagues [2] found that college students were more likely to throw litter on the ground themselves when they had just seen another person throw some paper on the ground, and Cheng and Chartrand [3] found that people unconsciously mimicked the behaviors of others, such as by rubbing their face or shaking their foot, and that this mimicry was greater when the other person was of high rather than low social status.
Muzafer Sherif [4] studied how norms develop in ambiguous situations. In his studies, groups of college students were placed in a dark room with a single point of light and were asked to indicate, each time the light was turned on, how much it appeared to move. (The movement, which is not actually real, occurs because of the saccadic movements of the eyes.) Each group member gave his or her response on each trial aloud, each time in a different random order. As you can see in the figure below, Sherif found a conformity effect: Over time, the responses of the group members became more and more similar to each other, such that after four days they converged on a common norm. When the participants were interviewed after the study, they indicated that they had not realized that they were conforming.
Not all conformity is passive. In the research of Solomon Asch, [5] male college students were told that they would be participating in a test of visual abilities. The men were seated in front of a board that displayed the visual stimuli that they were going to judge. The men were told that there would be 18 trials during the experiment, and on each trial they would see two cards. The standard card had a single line that was to be judged, and the test card had three lines that varied in length between about 2 and 10 inches.
On each trial, each person in the group answered out loud, beginning with one end of the group and moving toward the other end. Although the real research participant did not know it, the other group members were experimental confederates who gave predetermined answers on each trial. Because the real participant was seated next to last in the row, he always made his judgment following most of the other group members. Although the confederates each gave the correct answer on the first two trials, on the third trial, and on 11 of the subsequent trials, they had all been instructed to give the same wrong answer. For instance, even though the correct answer was Line 1, they would all say it was Line 2. Thus when it became the participant’s turn to answer, he could either give the clearly correct answer or conform to the incorrect responses of the confederates.
Remarkably, in this study about 76% of the 123 men who were tested gave at least one incorrect response when it was their turn, and 37% of the responses, overall, were conforming. This is indeed evidence for the power of conformity because the participants were making clearly incorrect responses in public. However, conformity was not absolute; in addition to the 24% of the men who never conformed, only 5% of the men conformed on all 12 of the critical trials.
The tendency to conform to those in authority, known as obedience, was demonstrated in a remarkable set of studies performed by Stanley Milgram. [6] Milgram designed a study in which he could observe the extent to which a person who presented himself as an authority would be able to produce obedience, even to the extent of leading people to cause harm to others. Like many other researchers who were interested in conformity, Milgram’s interest stemmed in part from his desire to understand how the presence of a powerful social situation—in this case the directives of Adolf Hitler, the German dictator who ordered the killing of millions of Jews and other “undesirable” people during World War II—could produce obedience.
Milgram used newspaper ads to recruit men (and in one study, women) from a wide variety of backgrounds to participate in his research. When the research participant arrived at the lab, he or she was introduced to a man who was ostensibly another volunteer but who actually was a confederate working with the research team. The experimenter explained that the goal of the research was to study the effects of punishment on learning. After the participant and the confederate both consented to be in the study, the researcher explained that one of them would be the teacher, and the other the learner. They were each given a slip of paper and asked to open it and indicate what it said. In fact both papers read “teacher,” which allowed the confederate to pretend that he had been assigned to be the learner and thus to ensure that the actual participant was always the teacher.
While the research participant (now the teacher) looked on, the learner was taken into the adjoining room and strapped to an electrode that was to deliver the punishment. The experimenter explained that the teacher’s job would be to sit in the control room and read a list of word pairs to the learner. After the teacher read the list once, it would be the learner’s job to remember which words went together. For instance, if the word pair was “blue sofa,” the teacher would say the word “blue” on the testing trials, and the learner would have to indicate which of four possible words (“house,” “sofa,” “cat,” or “carpet”) was the correct answer by pressing one of four buttons in front of him.
After the experimenter gave the “teacher” a mild shock to demonstrate that the shocks really were painful, the experiment began. The research participant first read the list of words to the learner and then began testing him on his learning. The shock apparatus, shown below, was in front of the teacher, and the learner was not visible in the shock room. The experimenter sat behind the teacher and explained to him that each time the learner made a mistake he was to press one of the shock switches to administer the shock. Moreover, the switch that was to be pressed increased by one level with each mistake, so that each mistake required a stronger shock.
Once the learner (who was, of course, actually the research confederate) was alone, he unstrapped himself from the shock machine and brought out a tape recorder that he used to play a prerecorded series of responses that the teacher could hear through the wall.
The teacher heard the learner say “ugh!” after the first few shocks. After the next few mistakes, when the shock level reached 150 volts, the learner was heard to exclaim, “Let me out of here. I have heart trouble!” As the shock reached about 270 volts, the protests of the learner became more vehement, and after 300 volts the learner proclaimed that he was not going to answer any more questions. From 330 volts and up, the learner was silent. At this point the experimenter responded to participants’ questions, if any, with a scripted response indicating that they should continue reading the questions and applying increasing shock when the learner did not respond.
The results of Milgram’s research were themselves quite shocking. Although all the participants gave the initial mild levels of shock, responses varied after that. Some refused to continue after about 150 volts, despite the insistence of the experimenter that they continue to increase the shock level. Still others, however, continued to present the questions and to administer the shocks, under the pressure of the experimenter, who demanded that they continue. In the end, 65% of the participants continued giving the shock to the learner all the way up to the 450-volt maximum, even though that shock was marked as “XXX” and no response had been heard from the learner for several trials. In other words, well over half of the men who participated had, as far as they knew, shocked another person to death, all as part of a supposed experiment on learning.
In case you are thinking that such high levels of obedience would not be observed in today’s modern culture, there is in fact evidence that they would. In a more recent study by Jerry Burger, Milgram’s findings were almost exactly replicated, using men and women from a wide variety of ethnic groups. [7] In this replication, 67% of the men and 73% of the women agreed to administer increasingly painful electric shocks when an authority figure ordered them to. The participants in this study were not, however, allowed to go beyond the 150-volt shock switch.
Although it might be tempting to conclude that Burger’s and Milgram’s research demonstrates that people are innately bad creatures who are ready to shock others to death, this is not in fact the case. Rather, it is the social situation, and not the people themselves, that is responsible for the behavior. When Milgram created variations on his original procedure, he found that changes in the situation dramatically influenced the level of conformity. It was significantly reduced when people were allowed to choose their own shock level rather than being ordered to use the level required by the experimenter, when the experimenter communicated by phone rather than from within the experimental room, or when other research participants refused to give the shock. These findings are consistent with a basic principle of social psychology: The situation in which people find themselves has a major influence on their behavior.
The research that we have discussed to this point suggests that most people conform to the opinions and desires of others, but this is not always the case. For one, there are individual differences in conformity. People with lower self-esteem are more likely to conform than are those with higher self-esteem, and people who are dependent on and who have a strong need for approval from others are also more conforming. [8] People who highly identify with or who have a high degree of commitment to a group are also more likely to conform to group norms than those who care less about the group. [9] Despite these individual differences, research has generally found that the impact of individual difference variables on conformity is smaller than the influence of situational variables, such as the size and unanimity of the majority.
We have seen that conformity usually occurs such that the opinions and behaviors of individuals become more similar to those of the majority of the people in the group. However, although it is much more unusual, there are cases in which a smaller number of individuals is able to influence the opinions or behaviors of the larger group—a phenomenon known as minority influence. Minorities who are consistent and confident in their opinions may in some cases be able to be persuasive. [10]
Persuasion that comes from minorities has another, and potentially even more important, effect on the opinions of majority group members: It can lead majorities to engage in fuller, as well as more divergent, innovative, and creative thinking about the topics being discussed. [11] Nemeth and Kwan [12] found that participants working together in groups solved problems more creatively when only one person gave a different and unusual response than the other members did (minority influence) than when three people gave the same unusual response (majority influence).
It is a good thing that minorities can be influential; otherwise, the world would be pretty boring indeed. When we look back on history, we find that it is the unusual, divergent, innovative minority groups or individuals, who—although frequently ridiculed at the time for their unusual ideas—end up being respected for producing positive changes.
Another case where conformity does not occur is when people feel that their freedom is being threatened by influence attempts, yet they also have the ability to resist that persuasion. In these cases they may develop a strong emotional reaction, known as psychological reactance, that leads them to resist pressures to conform. [13] Reactance is aroused when our autonomy is threatened. Under these conditions, people may not conform at all, and may even move their opinions or behaviors away from the desires of the influencer. Consider an experiment conducted by Pennebaker and Sanders, [14] who attempted to get people to stop writing graffiti on the walls of campus restrooms. In the first group of restrooms they put a sign that read “Do not write on these walls under any circumstances!” whereas in the second group they placed a sign that simply said “Please don’t write on these walls.” Two weeks later, the researchers returned to the restrooms and found significantly less graffiti in the second group of restrooms than in the first. It seems that people who were given strong pressures not to engage in the behavior were more likely to react against those directives than were people who were given a weaker message.
Reactance represents a desire to restore freedom that is being threatened. A child who feels that his parents are forcing him to eat his asparagus may react quite vehemently with a strong refusal to touch the plate. And an adult who feels that she is being pressured by a car salesman might feel the same way and leave the showroom entirely, resulting in the opposite of the salesman’s intended outcome.
Now that you are more familiar with the distinction between informational conformity and normative conformity, we can apply these concepts to the studies conducted by Sherif and Asch.
The behavior of Sherif’s participants is thought to be the result of informational conformity, or the use of other people—their comments and actions—as a source of information about what is likely to be correct. The task Sherif asked his participants to perform was ambiguous, since the light doesn’t move at all, but just appears to. And that appearance, being so uncertain and ambiguous, is readily influenced by the expressed judgments of others.
Informational conformity is not thought to be the main source of conformity pressure in Asch’s study, however. For the most part, the right answer was clear to participants, as evidenced by the fact that individuals in a control group who made these judgments by themselves, without any social pressure, almost never made a mistake. Although some informational conformity likely was present, the primary reason people conformed was to avoid standing out, negatively, among the group members. That is, the behavior of Asch’s participants is thought to be the result of normative conformity due to the desire to avoid the disapproval and harsh judgments of others.
Informational conformity, by influencing how we come to see the issues or stimuli before us, tends to influence our private acceptance of the position advanced by the majority. Tending to assume that others have things right, we adopt the group’s perspective. Normative conformity, in contrast, often has a greater influence on public compliance than on private acceptance. To avoid disapproval, we sometimes do or say one thing but continue to believe another.
Watch the video below on Milgram's study of obedience and answer the questions that follow.
Earlier in this module we reviewed research on helping behavior, or altruism. It included a model proposed by Latané and Darley, who identified four steps between an emergency and an individual providing help.
Referring to the following diagram, you will type in each step and submit your response. Compare your response to the correct answer and then rate your response. Do this for each step. To get the maximum learning benefit, do this without consulting your notes or looking back at the text.
Just as our primitive ancestors lived together in small social groups, including families, tribes, and clans, people today still spend a great deal of time in groups. We study together in study groups, we work together on production lines, and we decide the fates of others in courtroom juries. We work in groups because groups can be beneficial. A rock band that is writing a new song or a surgical team in the middle of a complex operation may coordinate their efforts so well that it is clear that the same outcome could never have occurred if the individuals had worked alone. But group performance will be better than individual performance only to the extent that the group members are motivated to meet the group goals, effectively share information, and efficiently coordinate their efforts. Because these things do not always happen, group performance is almost never as good as we would expect, given the number of individuals in the group, and may even in some cases be inferior to what could have been achieved by one or more members of the group working alone.
In an early social psychological study, Norman Triplett [1] found that bicycle racers who were competing with other bicyclers on the same track rode significantly faster than bicyclers who were racing alone, against the clock. This led Triplett to hypothesize that people perform tasks better when there are other people present than they do when they are alone. Subsequent findings validated Triplett’s results, and experiments have shown that the presence of others can increase performance on many types of tasks, including jogging, shooting pool, lifting weights, and solving problems. [2] The tendency to perform tasks better or faster in the presence of others is known as social facilitation.
However, although people sometimes perform better when they are in groups than they do alone, the situation is not that simple. Perhaps you remember an experience when you performed a task (playing the piano, shooting basketball free throws, giving a public presentation) very well alone but poorly with, or in front of, others. Thus it seems that the conclusion that being with others increases performance cannot be entirely true. The tendency to perform tasks more poorly or more slowly in the presence of others is known as social inhibition.
Robert Zajonc [3] explained the observed influence of others on task performance using the concept of physiological arousal. According to Zajonc, when we are with others we experience more arousal than we do when we are alone, and this arousal increases the likelihood that we will perform the dominant response, the action that we are most likely to emit in any given situation.
The most important aspect of Zajonc’s theory was that the experience of arousal and the resulting increase in the occurrence of the dominant response could be used to predict whether the presence of others would produce social facilitation or social inhibition. Zajonc argued that when the task to be performed was relatively easy, or if the individual had learned to perform the task very well (a task such as pedaling a bicycle), the dominant response was likely to be the correct response, and the increase in arousal caused by the presence of others would create social facilitation. On the other hand, when the task was difficult or not well learned (a task such as giving a speech in front of others), the dominant response is likely to be the incorrect one, and thus, because the increase in arousal increases the occurrence of the (incorrect) dominant response, performance is hindered.
A great deal of experimental research has now confirmed these predictions. A meta-analysis by Bond and Titus, [2] which looked at the results of over 200 studies using over 20,000 research participants, found that the presence of others significantly increased the speed of performance on simple tasks and decreased both the speed and the quality of performance on complex tasks.
Although the arousal model proposed by Zajonc is perhaps the most elegant, other explanations have also been proposed to account for social facilitation and social inhibition. One modification argues that we are particularly influenced by others when we perceive that the others are evaluating us or competing with us. [4] In one study supporting this idea, Strube, Miles, and Finch [5] found that the presence of spectators increased joggers’ speed only when the spectators were facing the joggers, so that the spectators could see the joggers and assess their performance. The presence of others did not influence joggers’ performance when the spectators were facing the other direction and thus could not see the joggers.
The ability of a group to perform well is determined by the characteristics of the group members (e.g., are they knowledgeable and skilled?) as well as by the group process—that is, the events that occur while the group is working on the task. When the outcome of group performance is better than we would expect given the individuals who form the group, we call the outcome a group process gain, and when the group outcome is worse than we would have expected given the individuals who form the group, we call the outcome a group process loss.
One group process loss that may occur is social loafing, in which people do not work as hard in a group as they do when working alone. In one of the earliest social psychology experiments, in 1913, Maximilien Ringelmann had individual men, as well as groups of various numbers of men, pull as hard as they could on ropes while he measured the maximum amount that they were able to pull. [6] As you can see in the figure below, although larger groups pulled harder than any one individual, Ringelmann also found a substantial process loss. In fact, the loss was so large that groups of three men pulled at only 85% of their expected capability, whereas groups of eight pulled at only 37% of their expected capability. This type of process loss, in which group productivity decreases as the size of the group increases, has been found to occur on a wide variety of tasks.
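To make the idea of expected capability concrete, here is a worked illustration (the 100-unit figure is hypothetical, chosen only to keep the arithmetic simple): if one man pulling alone produces 100 units of force, a group of three would be expected to produce 3 × 100 = 300 units, yet at 85% of expected capability it actually produces about 255 units. A group of eight would be expected to produce 8 × 100 = 800 units, but at 37% of expected capability it produces only about 296 units, slightly less than what just three men pulling separately would have produced.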
Robert Zajonc [3] proposed a theory that would explain both the social facilitation and the social inhibition study results. For this activity, take a look at the figure above, titled "Drive-Arousal Model of Social Facilitation." Zajonc’s theory has three components.
Group process losses can also occur when group members conform to each other rather than expressing their own divergent ideas. Groupthink is a phenomenon that occurs when a group made up of members who may be very competent and thus quite capable of making excellent decisions nevertheless ends up, as a result of a flawed group process and strong conformity pressures, making a poor decision. [1] [2] Groupthink is more likely to occur in groups whose members feel a strong group identity, when there is a strong and directive leader, and when the group needs to make an important decision quickly. The problem is that groups suffering from groupthink become unwilling to seek out or discuss discrepant or unsettling information about the topic at hand, and the group members do not express contradictory opinions. Because the group members are afraid to express opinions that contradict those of the leader, or to bring in outsiders who have other information, the group is prevented from making a fully informed decision. The figure below summarizes the basic causes and outcomes of groupthink.
It has been suggested that groupthink was involved in a number of well-known and important, but very poor, decisions made by government and business groups, including the decision to invade Iraq made by President Bush and his advisors in 2002, the losses of two Space Shuttle missions in 1986 and 2003, and the decision of President John Kennedy and his advisors to commit U.S. forces to help invade Cuba and overthrow Fidel Castro in 1961. Analyses of the decision-making processes in these cases have documented the role of conformity pressures.
As a result of the high levels of conformity in these groups, the group begins to see itself as extremely valuable and important, highly capable of making high-quality decisions, and invulnerable. The group members begin to feel that they are superior and do not need to seek outside information. Such a situation is conducive to terrible decision-making and resulting fiascoes.
Although many other countries rely on judges to make judgments in civil and criminal trials, the jury is the foundation of the legal system in the United States. The notion of a “trial by one’s peers” is based on the assumption that average individuals can make informed and fair decisions when they work together in groups. But given the potential for group process losses, are juries really the best way to approach these important decisions?
As a small working group, juries have the potential to produce either good or poor decisions, depending on the characteristics of the individual members as well as on the group process. In terms of individual characteristics, people who have already served on juries are more likely to be seen as experts, are more likely to be chosen as the jury foreman, and give more input during the deliberation. Status also matters: jury members with higher-status occupations and more education, males rather than females, and those who talk first are more likely to be chosen as the foreman, and these individuals also contribute more to the jury discussion. [3]
However, although at least some member characteristics have an influence on jury decision making, group process plays a more important role in the outcome of jury decisions than do member characteristics. Like any group, juries develop their own individual norms, and these norms can have a profound impact on how they reach their decision. Analysis of group process within juries shows that different juries take very different approaches to reaching a verdict. Some spend a lot of time in initial planning, whereas others immediately jump into the deliberation. Some juries base their discussion around a review and reorganization of the evidence, waiting to vote until it has all been considered, whereas other juries first determine which decision is preferred in the group by taking a poll and then (if the first vote does not lead to a final verdict) organize their discussion around these opinions. These two approaches are used about equally often but may in some cases lead to different decisions. [4]
Perhaps most importantly, conformity pressures have a strong impact on jury decision making. As you can see in the figure below, when a greater number of jury members hold the majority position, it becomes more and more certain that their opinion will prevail during the discussion. This does not mean that minorities can never be persuasive, but it is very difficult for them to prevail. The strong influence of the majority is probably due to both informational conformity (i.e., that there are more arguments supporting the favored position) and normative conformity (the people on the majority side have greater social influence).
Given the potential difficulties that groups face in making good decisions, you might be worried that the verdicts rendered by juries may not be particularly effective, accurate, or fair. However, despite these concerns, the evidence suggests that juries may not do as badly as we would expect. The deliberation process seems to cancel out many individual juror biases, and the importance of the decision leads the jury members to carefully consider the evidence itself.
Taken together, these findings show that working in groups has both positive and negative outcomes. On the positive side, it makes sense to use groups to make decisions because people can create outcomes working together that no one individual could hope to accomplish alone. In addition, once a group makes a decision, the group will normally find it easier to get other people to implement it, because many people feel that decisions made by groups are fairer than those made by individuals.
Yet groups frequently succumb to process losses, leading them to be less effective than they should be. Furthermore, group members often don’t realize that the process losses are occurring around them. For instance, people who participate in brainstorming groups report that they have been more productive than those who work alone, even if the group has actually not done that well. [5] [6] The tendency for group members to overvalue the productivity of the groups they work in is known as the illusion of group productivity, and it seems to occur for several reasons. For one, the productivity of the group as a whole is highly accessible, and this productivity generally seems quite good, at least in comparison to the contributions of single individuals. The group members hear many ideas expressed by themselves and the other group members, and this gives the impression that the group is doing very well, even if objectively it is not. And, on the affective side, group members receive a lot of positive social identity from their group memberships. These positive feelings naturally lead them to believe that the group is strong and performing well.
What we need to do, then, is to recognize both the strengths and limitations of group performance and use whatever techniques we can to increase process gains and reduce process losses. The table below presents some of the techniques that are known to help groups achieve their goals.
Table: Techniques That Can Be Used to Improve Group Performance. (From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.)
Picture yourself at your best and then picture the reality of your life. Is there a difference? In your mind, are you managing the demands of the day, being effective at work or school, and still having time for family and friends? Throughout the trials of a given day, people often fantasize about what it would mean to be in a state of “balance.” Being in a state of balance means being mentally steady or emotionally stable—having an internal state of harmony or symmetry. With that definition, balance becomes open to the interpretation that a person's “mental steadiness” depends on his or her upbringing, culture, and lived experience.
Take a moment and compartmentalize your life. Within your external “compartments,” does balance include your job, your family, and your social life? Within the internal “compartments,” does your life include things such as physical balance and spiritual balance? Also, are the internal and external components of your life in balance with each other? As you can see, the way that each of us answers these questions will be unique and intensely personal. We can speak of balance in a general way, but, in the details, it is different for each person.
What does “wellness” mean to you? If you were to ask a doctor, a counselor, a personal trainer, or a nutritionist to define wellness, you are likely to get answers that are similar yet different.
Although it is a challenging term to define, for our purposes, wellness is a state of being in which a person actively maintains a proper balance of physical, mental, emotional, and social health. This definition may be described as a bio-psycho-social understanding of wellness. (You will sometimes see this term written as "biopsychosocial." We will use hyphens to separate bio, psycho, and social to emphasize the fact that there are separate sources of influence.) It should come as no surprise that balance and optimal wellness go hand in hand!
There is a structure in the forebrain called the hypothalamus. The hypothalamus has been referred to as the thermostat for the body because it is responsible for maintaining homeostasis. Homeostasis is a technical term describing the goal of a system, like your body, that tries to maintain a constant or optimal level of functioning. When the hypothalamus functions properly, it monitors temperature, fluids, metabolism, and nutrient levels to keep them constant and optimal.
The hypothalamus also plays a role in other forms of balance. In addition to acting as the thermostat for the body, the hypothalamus plays a role in primary motivation: the drives to drink, eat, and have sex are centered there. In addition, the hypothalamus aids in sleep, as it contains a substructure called the suprachiasmatic nucleus (SCN). The SCN is sensitive to light and dark and, as such, helps establish the times of day when we feel energized and the times when we feel tired. When someone feels tired at night, the experience is due to the SCN signaling the release of the hormone melatonin.
Given that wellness is a multifaceted concept, we can think of it as having various characteristics, also known as dimensions. This notion of wellness having dimensions was first developed by Dr. Bill Hettler in 1976. His model is based on six dimensions: physical, social, intellectual, spiritual, emotional, and occupational. [1]
Dimensions of wellness are often depicted as segments of a circle or wheel. Our model of wellness will reflect external and internal dimensions as components of two, connected “wellness wheels.” Let’s begin by looking at several important extrinsic characteristics of wellness. These external characteristics are aspects of our lives outside of our bodies and mind. Our External Wellness Wheel represents four characteristics: social, occupational, environmental, and familial.
According to John Bowlby, [2] attachment theory emphasizes the influence parents have on a child’s personality development. A secure bond results in a secure child who feels comfortable exploring the environment. The secure bond further allows the child to become accustomed to, and later involved in, interpersonal relationships. Overall, caregivers who meet a child’s early emotional needs foster long-lasting, positive social outcomes. [3] [4] [5] For instance, women who were securely attached to both parent figures showed both increased psychological well-being and greater social competence as they later transitioned into college. [5]
Well-balanced social interactions can also be considered from a developmental perspective. Specifically, gender differences begin to appear in childhood and continue into adolescence. For instance, a socially balanced female child tends to have emotionally closer relationships than males do, and her interactions tend to be more reciprocal. Girls also tend to have peer relationships based on intimate, emotional exchange. Young boys, however, tend to be more outspoken with their friends, and it is quite natural for male children to brag and to interrupt conversations. Boys are more physical in their activities, and their exchanges are based on shared activities.
Well-balanced adult social interactions are more heterogeneous. In addition, well-balanced social interactions in adults are associated with better physical and emotional health than that of socially isolated adults. Adults who maintain social connectedness tend to live longer; this finding holds across cultures and across socioeconomic statuses.
One focus of industrial/organizational psychologists is measuring job satisfaction. Overall, job satisfaction is positively correlated with age. Another consistent observation is that those in administrative roles are more satisfied than those who are not.
What might not be so obvious is the correlation between satisfaction and salary. For instance, job satisfaction tends not to be related to pay for highly paid workers. Instead, things such as job challenges, independence, and power tend to be related to job satisfaction for these workers. In addition, how fairly one believes he or she is paid is related more to job satisfaction than is actual salary. People also tend to compare their salaries to those around them, and workers tend to subjectively assess their comparable worth.
Overall, the greatest factor influencing job satisfaction is whether or not the job allows the person to use the professional or occupational skills in which he or she is trained. Low job satisfaction is correlated with high job turnover rates, and high job satisfaction is correlated with job longevity. Job dissatisfaction has been correlated with both physical and psychological consequences such as headaches, fatigue, anxiety, and depression. [6]
In 1943, Abraham Maslow created his hierarchy of needs. He believed that the most basic needs are at the bottom of the hierarchy, and each stage of the hierarchy has to be satisfied in order to advance to, or feel motivated to achieve, the next stage. The first needs to be satisfied are basic physiological needs required for survival: air, food, and water. If these requirements are not met, the human body cannot continue to function.
When physiological needs are met, the next priority is safety. This is the environment in which the person lives. In an ideal world, a person lives in an environment that is dominated by feelings of safety and security. In other words, the person lives in a well-balanced environment, which sets the stage to work on other needs within Maslow’s hierarchy. The remaining needs include love and belonging (feeling accepted as an essential part of something, having satisfying relationships), esteem (the need for self-respect and the respect of others), and self-actualization (striving for and realizing one’s full potential).
Safety needs set the foundation on which all other needs are achieved, and threats to safety can shake that foundation. People experiencing danger, whether immediate (e.g., during a war or natural disaster) or ongoing (e.g., living in a high-crime area), find it difficult or impossible to address the higher needs in the hierarchy, such as self-improvement or maintaining healthy relationships. [7]
Another example of how the environment can influence overall balance is found in the diathesis-stress hypothesis, a theory of psychopathology suggesting that certain psychological disorders can be triggered by an unstable environment in people who are biologically predisposed to them. For instance, numerous studies have found higher rates of schizophrenia in urban areas and, conversely, lower rates of schizophrenia in more rural areas. [8]
Humans are one of only a few species that maintain long-term relationships with others. Most nonhuman animals give birth to their young, and as the young develop, the mother and her offspring separate. Humans, on the other hand, tend to remain strongly tied to their families of origin.
In 1978, Mary Ainsworth created the “Strange Situation,” which involved having a parent and child (age 12 to 18 months) together while the child explored an unfamiliar environment. After some time, an unfamiliar person entered and spoke with the parent, who then left the room. With the parent absent, the unfamiliar person interacted with the child and later left the room. The parent returned, interacted with the child, and left again. The child was now alone, and the unknown person reentered and interacted with the child, and then the parent reentered the room. The unknown person then left, and the Strange Situation had ended. [9] [10]
The researchers gauged the reaction of each child in the study, which was recorded, analyzed, and classified into one of four categories: secure, anxious/avoidant, anxious/resistant, and disorganized/disoriented. Three of the categories spoke to the child being off balance. Specifically, the child labeled as anxious/avoidant showed disinterest in her environment, little distress when the mother left, and avoided her when she returned. The anxious/resistant child was anxious in the presence of his mother, distressed when she left, and ambivalent upon her return. The disorganized/disoriented child alternated between avoidance and proximity seeking when the mother left. The behavior of a disorganized/disoriented child included restricted movements in the presence of the parent and rocking on hands and knees after the parent left. In addition, this type of child moved away from the parent when frightened but also screamed upon separation from the parent. [11] The securely attached, balanced child actively explored the environment and demonstrated age-appropriate social skills toward the stranger. Although the child might have shown distress upon her mother leaving, she sought physical contact upon the mother's return.
The Strange Situation has been replicated and expanded to include how a child in a given classification later functions as an adult. Based on responses to the Adult Attachment Interview, there are three insecure attachment types and one secure attachment type. Dismissing adults, for instance, do not believe that human attachment is all that important. As a defense, such adults tend to idealize memories of their childhood but cannot support their claims with concrete examples. Preoccupied adults tend to have relationships with their parents that lack clear boundaries. Such an adult might appear angry and feel that any life issues are permanent. Unresolved adults were usually classified as disorganized children. These adults tend to have been victims of abuse and neglect and to have dysfunctional relationships as adults. They also tend to have children who are classified as the disorganized type.
It is the secure-autonomous adult that is the most balanced. Such adults feel that relationships are something to be valued. They are able to reflect on their parental bonds with objectivity and have rewarding relationships of their own.
Now that we’ve explored external characteristics of wellness, we can turn our attention to several internal dimensions. These aspects of our lives are inherently tied to the functioning of our bodies, minds, and spirits as we experience them as individuals.
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
Living one day at a time;
Enjoying one moment at a time;
Accepting hardships as the pathway to peace;
Taking, as He did, this sinful world
as it is, not as I would have it;
Trusting that He will make all things right
if I surrender to His Will;
That I may be reasonably happy in this life
and supremely happy with Him
Forever in the next.
Amen.
This is the serenity prayer of 12-step programs such as Alcoholics Anonymous (AA). This prayer must be fully accepted in the 12-step program in order to move on to the subsequent steps. The serenity prayer is based on the concept of balance. Let’s take a closer look at this prayer and decipher its latent content: First is the concept of acceptance. More specifically, internalizing the serenity prayer means fully accepting that some things are out of your control. In the case of AA, it is accepting that one is powerless against alcohol, surrendering the effort to control alcohol use, and instead seeking guidance from a “higher power.”
Acceptance must take place nonjudgmentally—acknowledging that it is neither good nor bad to have or not have control over some things. Complete acceptance of powerlessness over alcohol, or over any stressor that is not within a person's control, can lift an emotional weight from the person's shoulders.
Finally, the serenity prayer highlights the concept of mindfulness. To be mindful is to be completely focused on the here and now—the present moment. To focus on a moment that will be or on a moment that was is not being mindful of the present moment. True mindfulness is a multisensory, present-focused orientation. Mindfulness helps a person feel grounded.
In sum, to be in spiritual balance includes accepting present circumstances, or the moment, in a nonjudgmental way. It also includes being focused on the present rather than on the past or future.
The U.S. Department of Agriculture's Center for Nutrition Policy and Promotion developed a balanced nutrition guide that replaces the well-known Food Pyramid with a more practical dinner plate showing suggested serving sizes of four food groups (20% fruits, 30% grains, 30% vegetables, and 20% proteins) and a glass representing dairy. Unveiled in June 2011, MyPlate is the most recent representation of over 100 years of nutrition guides. ChooseMyPlate.gov [1] is an interactive website that includes ideas for daily food plans for children, adults, dieters, and women who are pregnant or breastfeeding. It also contains advice on eating healthy on a budget, food safety, physical activity, and other healthy-lifestyle topics.
The benefits of a balanced diet are many, one of which is to provide the body with energy for physical activity. The U.S. Department of Health and Human Services (USDHHS) [2] recommends that, for adults, physical exercise be divided into aerobic activity and strength training. For aerobic activity (running, walking), the suggested time is 150 minutes per week of moderate movement or 75 minutes per week of vigorous movement. Activities such as mowing the lawn and brisk vacuuming also count as aerobic activity. Strength training is recommended twice per week and can include activities such as rock climbing and outdoor physical labor in addition to traditional weight machines. The USDHHS also provides activity guidelines for children, adolescents, and older adults.
The government suggestions for diet and exercise establish a good foundation for physical wellness. Along with a healthy diet and exercise, the government recommends that all Americans have an annual wellness checkup.
Intellectual wellness and balance involve a more active and challenging process: to achieve intellectual balance is to feel temporarily unbalanced. For instance, Lev Vygotsky was a Russian psychologist working in the early 1920s. He stated that intellectual growth in children occurred within the social environment and on two levels, with assistance from others and within the child. Vygotsky coined the term “zone of proximal development” for the space between what a child can cognitively achieve on his own and what he can achieve with assistance. The process of working within this space is called “scaffolding” and is where both intellectual growth and age-appropriate intellectual imbalance occur. So in a sense, intellectual balance is interrupted and later reestablished once a new concept is grasped.
This process of scaffolding can also occur in adulthood, although it is much more self-initiated. In other words, opportunities for intellectual growth can occur in the workplace, during social interactions, or while reading about topics of interest. As adults grow older, the lifelong process of gaining knowledge and responding to intellectual stimuli develops into a sense of wisdom. Psychologist Erik Erikson stated that as adults enter old age, they reflect upon their lives. If older persons can reflect upon their lives and feel that their lives were meaningful, a sense of integrity develops. Through satisfaction and integrity, older adults perceive a sense of wisdom that they can impart to others. [3]
Like it or not, we all have emotions that need to be expressed. From the perspective of evolution, for example, humans need to respond to fear in order to survive. From a cultural perspective, emotions can be nonverbal methods of communication. For example, certain universal emotions cross cultures: one person may not be able to understand another person's spoken language, but that same person would still be able to understand the other's feeling states. Research suggests that there are six basic emotions that, no matter what the culture, can be accurately interpreted. The six basic, universal emotions are fear, anger, joy, sadness, surprise, and disgust.
From a biological perspective, emotional stimuli are processed in the brain. Specifically, the limbic system, located in the forebrain, contains structures that include the amygdala, septum, and hippocampus. The amygdala helps assign emotional intensity to a given situation and helps us channel emotional energy into a behavior, most notably anger and aggression. The septum inhibits emotions: Damage to this area in animals has resulted in “septal rage syndrome,” in which the normal inhibition of rage is lost and the animal becomes hyperaggressive. Finally, the hippocampus is less related to emotion and more related to memory.
From a balanced perspective, Marsha Linehan [4] talked about being in “wise mind.” Linehan developed a treatment model, known as dialectical behavior therapy (DBT), for those diagnosed with borderline personality disorder, the hallmark of which is emotional instability. Wise mind, the goal of this treatment, means that one is fully aware of the experience of an emotion. For instance, the tightening of the chest in anxiety, or the physical heaviness of sadness, are cues that can be responded to in ways that help process the emotion. At the very least, in cases where nothing can be done about the emotional situation, being in wise mind will help temporarily distract a person from the emotion.
Picture in your mind the most beautiful lawn you have ever seen: Lush, thick green grass is nicely manicured, and all blades of grass are of equal length. Did you ever wish your lawn could be like this? Or did you ever think, “That’s what my lawn is going to look like?” Overall, proper lawn maintenance depends on the type of grass, such as cool season and warm season grass. Proper lawn care also includes knowing what to do at a given time of year. There is basic lawn care that includes mowing, watering, and removal of small weeds by hand. Proper lawn care also includes grub control, patching, feeding, and thatching. It also includes maintenance of your lawn mower so dull blades do not “split” the grass blades. Most important, to maintain a healthy lawn requires active attention to the soil, including monitoring the pH balance. In other words, lawn care is not a passive process.
In reality, most homeowners do not monitor the pH of the soil, despite this being instrumental in overall lawn health. Much like having a healthy lawn, it is also important to take that same active approach to maintain personal balance. However, people tend to take a more passive approach to self-care, often attempting to balance external demands at the expense of their own physical and mental well-being. Consider your own personal wellness wheel and its individual components.
In broad strokes, it seems that not enough time exists to do things that would create more balance. Specifically, if you work 8 hours per day and then must take care of other external demands such as school, afterschool activities for children, and making dinner for the family, not much time is left for taking care of personal balance. The more demands placed on a person, the less likely that person is to make time for self-care. This is almost ironic: considering the internal and external wellness wheels, a person would be better able to manage the demands of life if the two wheels were balanced with each other.
Having said that, there is also a balance to self-care itself: some people may take too much time for themselves at the expense of external demands. For example, after working 40 hours, playing golf on Saturdays, and watching football on Sundays, a person's relationships may be negatively impacted because of limited attention to family (if family is included in that person's definition of wellness).
In moderation, meeting personal needs is extremely important for balance because doing so makes a person more effective in managing external demands. People can create their own individualized wellness wheel to include physical health, physical exercise, eating and sleeping habits, and personal time for hobbies.
The first module discussed being in balance. This module discusses maintaining balance and moving toward optimal balance. As mentioned in the previous module, overall wellness is composed of lifestyle and choices. Another aspect to add is acceptance, as it is the starting point for moving back toward wellness. Acceptance is required because it allows a person to accept where he is at a particular moment, as opposed to becoming overwhelmed with where he wishes he could be, where he was at one point in time, or how he perceives others to be. Acceptance does not necessarily mean that a person loves where he is at that moment. Rather, it means accepting responsibility for where he is at that moment. In other words, he may not love what the doctor has to say about the number on the scale, but he is willing to take responsibility for it and create goals. An example of a personal starting point would be the self-care report card activity that was completed in the previous module.
While reviewing the self-care report card, it is important to be aware of maintaining the balance of staying challenged while not becoming overwhelmed. This also applies to life goals, as feeling overwhelmed can create the natural tendency to want to “shut down,” that is, to feel that there is so much to do that nothing gets done. A first intervention is to recognize the connection between the mind and body.
An overall objective of this module is to learn how the mind-body connection contributes to moving back to, and then ultimately staying in, balance. The principles of the mind-body connection will become more specific as the module progresses; however, the following video and activity provide an introduction to the mind-body connection.
Watch this video and answer the questions that follow.
Moving back into balance might not be easy, and the hardest part can be getting started. In some cases, mental health intervention may be warranted.
We have seen that psychologists and other practitioners employ a variety of treatments in their attempts to reduce the negative outcomes of psychological disorders. But we have not yet considered the important question of whether these treatments are effective, and if they are, which approaches are most effective for which people and for which disorders. Accurate empirical answers to these questions are important as they help practitioners focus their efforts on the techniques that have been proven to be most promising, and will guide societies as they make decisions about how to spend public money to improve the quality of life of their citizens. [1]
Psychologists use outcome research, that is, studies that assess the effectiveness of treatments, to determine the effectiveness of different therapies. Thousands of studies have been conducted to test the effectiveness of psychotherapy, and by and large they find evidence that it works. Some outcome studies compare a group that gets treatment with another (control) group that gets no treatment. For instance, Ruwaard, Broeksteeg, Schrieken, Emmelkamp, and Lange [2] found that patients who interacted with a therapist over a website showed more reduction in symptoms of panic disorder than did a similar group of patients who were on a waiting list but did not get therapy. Although studies such as this one control for the possibility of natural improvement (the treatment group improved more than the control group, which would not have happened if both groups had only been improving naturally over time), they do not control for either nonspecific treatment effects or for placebo effects. The people in the treatment group might have improved simply by being in the therapy (nonspecific effects), or they may have improved because they expected the treatment to help them (placebo effects).
Another type of outcome study compares different approaches with each other. For instance, Herbert and colleagues [3] tested whether social skills training could boost the results obtained from treating social anxiety disorder with cognitive-behavioral therapy (CBT) alone. As you can see in the figure below, they found that people in both groups improved, but CBT coupled with social skills training produced significantly greater gains than CBT alone. CBT is described in more depth below.
Because there are thousands of studies testing the effectiveness of psychotherapy, and the independent and dependent variables in the studies vary widely, the results are often combined using a meta-analysis, a statistical technique that uses the results of existing studies to integrate and draw conclusions about those studies. In one important meta-analysis of the effects of psychotherapy, Smith, Glass, and Miller [4] summarized studies that compared different types of therapy or that compared the effectiveness of therapy against a control group. To find the studies, the researchers systematically searched computer databases and the reference sections of previous research reports to locate every study that met the inclusion criteria. Over 475 studies were located, and these studies used over 10,000 research participants.
The results of each of these studies were systematically coded, and a measure of the effectiveness of treatment known as the effect size was created for each study. Smith and her colleagues found that the average effect size for the influence of therapy was 0.85, indicating that psychotherapy had a relatively large positive effect on recovery. What this means is that, overall, receiving psychotherapy for behavioral problems is substantially better for the individual than not receiving therapy.
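As a rough guide to interpreting this number (a sketch, not part of the original report of the meta-analysis): an effect size of this kind is essentially a standardized difference between group means, that is, effect size = (average outcome of the treated group − average outcome of the control group) ÷ standard deviation of the outcome measure. An effect size of 0.85 therefore means that the average treated client ended up nearly a full standard deviation ahead of the average untreated person. Assuming roughly normally distributed outcomes, that places the average person who received therapy above about 80% of the people who did not.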
Other meta-analyses have also found substantial support for the effectiveness of specific therapies, including cognitive therapy, CBT, [5] [6] couples and family therapy, [7] and psychoanalysis. [8] On the basis of these and other meta-analyses, a list of empirically supported therapies—that is, therapies that are known to be effective—has been developed. [9] [10] These therapies include cognitive therapy and behavioral therapy for depression; cognitive therapy, exposure therapy, and stress inoculation training for anxiety; CBT for bulimia; and behavior modification for bed-wetting.
Smith, Glass, and Miller [4] did not find much evidence that any one type of therapy was more effective than any other type, and more recent meta-analyses have not tended to find many differences either. [11] What this means is that a good part of the effect of therapy is nonspecific, in the sense that simply coming to any type of therapy is helpful in comparison to not coming. This is true partly because there are fewer distinctions among the ways that different therapies are practiced than the theoretical differences among them would suggest. What a good therapist practicing psychodynamic approaches does in therapy is often not much different from what a humanist or a cognitive-behavioral therapist does, and so no one approach is really likely to be better than the other.
What all good therapies have in common is that they give people hope; help them think more carefully about themselves and about their relationships with others; and provide a positive, empathic, and trusting relationship with the therapist—the therapeutic alliance. [12] This is why many self-help groups are also likely to be effective and perhaps why having a psychiatric service dog may also make us feel better.
The idea of the mind-body connection is the foundation of cognitive-behavioral therapy (CBT). Two people can wake up on a given day and go to the same job, yet the way each thinks about the job can affect his or her mood that day. For instance, one person can think, “This job will never get better,” while the other thinks, “This is how I’m going to make my job better.” Based on these thoughts, one person can feel depressed while the other feels optimistic. In moving toward wellness using CBT, the focus is on changing the thoughts that create undesired emotions so as to minimize their intensity. This type of therapy is widely used in mental health settings.
Consider the following table:
| | Unhelpful response | Helpful response |
|---|---|---|
| Trigger | Running late for work and behind a slow driver | Running late for work and behind a slow driver |
| Thoughts | “I can’t believe that I’m going to be late”; “It figures, I always get behind someone going slow when I’m running late”; “This idiot won’t move.” | “It’s a good thing I haven’t been late this year”; “Let me give the office a call and let them know I’ll be a little late.” |
| Feelings | Anger/frustration | Contentment/mild frustration |
| Behavior | Tailgate the driver; flash high beams | Stay at a safe distance; wait for a safe moment to pass |
| Consequences | Possible ticket or accident; possible bad mood for the rest of the day | Get to work a little late with the boss already knowing |
As you can see, there is a domino effect: triggers create automatic thoughts, which create feelings, which drive behavior, which produces consequences. Sometimes there is no control over the trigger (such as being behind a slow driver), but recognizing automatic thoughts, and then knowing how to change them, is where a person has control.
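One way to picture this domino chain is as a simple record in which the trigger is fixed but every link downstream of the automatic thought can be rewritten. The Python sketch below is purely illustrative; the class and field names are hypothetical and not part of any CBT instrument:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ThoughtRecord:
    """One pass through the chain: trigger -> thought -> feeling -> behavior -> consequence."""
    trigger: str
    thought: str
    feeling: str
    behavior: str
    consequence: str

before = ThoughtRecord(
    trigger="Running late for work and behind a slow driver",
    thought="This idiot won't move.",
    feeling="anger/frustration",
    behavior="tailgate the driver; flash high beams",
    consequence="possible ticket or accident",
)

# The trigger cannot be changed, but replacing the automatic thought
# changes every domino that follows it.
after = replace(
    before,
    thought="Let me call the office and say I'll be a little late.",
    feeling="mild frustration",
    behavior="keep a safe distance",
    consequence="arrive a little late, with the boss already informed",
)
```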
Of course, when moving toward emotional wellness, there are therapeutic interventions that do not necessarily include psychotherapy. At present, there is a movement that incorporates more Eastern approaches to emotional balance. One such intervention, or rather, way of life, is the concept of mindfulness.
Watch the following video and answer the questions.
Although the video states that mindfulness is a “program,” there are likely episodes of mindfulness that are experienced on a daily basis. Simply stated, to be mindful is to do one thing at a time with all energy and focus on that one thing. The ability to be mindful is enhanced when it becomes a multisensory experience. For instance, some children and adolescents mindfully play video games. Playing video games is multisensory, as it incorporates sight, sound, and touch (manipulating the controller). Adults practice mindfulness when they become “lost” in a good book or while lying on the beach. No matter how mindfulness is experienced, it allows a person to be fully conscious of an experience and can serve as a temporary distraction from stressors. Other examples of mindful experiences include petting the dog, taking a hot bath with scented oils and candles, and walking outside while noticing the rustling trees and the wind on your face. Have you ever asked yourself, “Where did the time go?” or said to yourself, “Wow! I can’t believe this is over already”? That is likely because you were experiencing mindfulness.
There are many activities that can have a positive impact on your well-being and help you maintain balance in your life.
Watch the video, "Dean Ornish on the World’s Killer Diet" and answer the questions that follow.
You’ve been introduced to the concept of mindfulness and how a meditation practice can help you develop and deepen this form of awareness.
The following videos offer you an opportunity to explore two common styles of meditation. The first short video will take you through a body-scan guided meditation. The second short meditation focuses on the repeated utterance of a single sound.
Try the guided meditation in this first video and then reflect on the question that follows. You’ll need about five minutes for the meditation. Find a quiet, comfortable place, and then start the video.
What did you experience physically, emotionally, and mentally as you did the body scan guided meditation?
Give yourself a little time between doing the body-scan guided meditation and trying the next meditation technique.
This is a short meditation during which you’ll repeat a single word. Find a quiet place where you feel comfortable speaking aloud. When you’re ready, begin the video.
What was your experience of meditating on a sound that you made over and over again? How did this type of meditation influence your focus and attention?
One of the most well-known forms of movement meditation is yoga. This ancient practice is focused on integrating your breathing with movements of your body into and out of different postures, called asanas. This is one of the primary reasons yoga is a form of meditation rather than a type of calisthenics.
There are several different schools of yoga, such as Hatha, Vinyasa, Iyengar, and Kundalini. Many asanas are found in all of these styles of yoga. The posture known as Downward-Facing Dog is one such asana.
The following video demonstrates how to do Downward-Facing Dog. Give it a try!
Remember: Go into the posture as much as your body allows. Don’t force your heels to the floor. If you need to, bend your knees. And, breathe!
A final concept in moving back into wellness is being mindful of having a good support system in place. Ideally, a support system includes healthy relationships. That being said, healthy relationships are not simply formed; they are developed through ongoing work. When good friendships develop, when people get married and plan to spend the rest of their lives together, and when families grow closer over time, the relationships take on new dimensions and must be understood in somewhat different ways. Although humans seem to be the only animals that are able to develop close relationships in which partners stay sexually faithful to each other for a lifetime, [1] these relationships do not come easily. About one half of contemporary marriages in the United States and Canada end in divorce. [2]
The factors that keep people liking each other in long-term relationships are at least in part the same as the factors that lead to initial attraction. For instance, regardless of how long they have been together, people remain interested in the physical attractiveness of their partners, although it is relatively less important than for initial encounters. And similarity remains essential. Relationships are also more satisfactory and more likely to continue when the individuals develop and maintain similar interests and continue to share their important values and beliefs over time. [3] Proximity also remains important—relationships that undergo the strain of the partners’ being apart from each other for very long are more at risk for breakup.
But what about passion? Does it still matter over time? Yes and no. People in long-term relationships who are most satisfied with their partners report that they still feel passion for their partners—they still want to be around them as much as possible, and they enjoy making love with them. [4] [5] And partners report that the more they love their partners, the more attractive they find them. [6] On the other hand, the high levels of passionate love that are experienced in initial encounters are not likely to be maintained throughout the course of a long-term relationship. [7] Over time, cognition becomes relatively more important than emotion, and close relationships are more likely to be based on companionate love, which is love that is based on friendship, mutual attraction, common interests, mutual respect, and concern for each other’s welfare.
Consider the following benefits of having a good, healthy long-term relationship as reported by the U.S. Department of Health and Human Services.
For Women
Researchers have found many benefits for women who are in healthy marriages, compared to those in unhealthy marriages.
For Men
Researchers have found many benefits for men who are in healthy marriages, compared to those in unhealthy marriages.
It should be noted that healthy relationships are not limited to marriages and long-term romantic relationships. For instance, a relationship with a therapist should be healthy, but it is also temporary. Other examples include the relationships people have with their relatives, neighbors, employees or co-workers, and fellow church members.
If you have ever been on a plane, you have seen the safety instructions provided by the flight attendants. First, someone from the flight crew directs passengers to the Fasten Seat Belt sign and reminds them to stow any carry-on luggage underneath the seat in front of them or in an overhead bin. Flight attendants also remind passengers to return seat backs and folding trays to the upright position. Passengers seated next to an emergency exit are reminded to read the instruction card and are given the opportunity to change seats if they do not wish to assume such a responsibility. The flight crew will also remind passengers that the flight is a nonsmoking one and that all electronic devices should be turned off until a certain altitude is reached.
Most relevant to this module is the part when flight attendants remind passengers that if there is a loss of cabin pressure, oxygen masks will descend. They then demonstrate to passengers how to place them over their nose and mouth. They also remind passengers to place their own masks on before assisting anyone else. In other words, a passenger has to place his own mask on before he can help the 3-year-old next to him.
The moral of the above is that flight passengers are of no help to anyone if they are passed out. It is the same in life: being out of balance can make a person ineffective. As such, it is important for people to be mindful of when they are feeling out of balance.
Thus far in our discussion of wellness, we’ve characterized wellness as having eight dimensions: four of them associated with external aspects of our lives and the other four rooted in our individual bodies, minds, and spirits. We’ve also emphasized that our own sense of wellness is dynamic and, as such, something we each must actively engage in if we want to be well. The introduction to this section also reminds us that awareness of and openness to what is happening in our lives, as well as how we are acting and reacting within each moment, determines the overall sense of balance we experience.
Sounds simple, right? Maintaining balance among these various facets of our lives should be easy! Yet, we all know, it’s not. Events occur that can demand our attention and consume our energy. Negative emotions take hold that can cloud our outlook and judgment. Physical symptoms can appear that can indicate our bodies are in distress. These things are going to happen. We will become “out of balance” throughout the day...throughout our lives. Only when we are mindful of any imbalances can we first consider options that will effectively address our needs, and then choose to act on those that will move us toward balance and well-being.
In this section, we continue to use our internal and external wellness wheels to explore the concept of imbalance in each of the eight dimensions previously described.
In the previous section, we developed an external and an internal wellness wheel. The external wheel consists of the four dimensions: social, environmental, occupational, and family. The internal wheel represents the physical, emotional, intellectual, and spiritual dimensions of wellness. As we begin to discuss the characteristics of imbalance with respect to wellness, let’s use these wheels to explore this concept in an interactive way.
Now that you’re aware of the dynamic and connected nature of these eight facets of wellness, particularly when one or more of these dimensions is out of balance, try to answer the following set of questions.
The consequences of being physically out of balance are numerous. On the positive side, the word balance does not have to mean walking a tightrope. Rather, balance is, for the most part, a matter of choices and lifestyle. Those choices and lifestyles will be discussed in the next module; this module discusses the consequences of poor choices and an unhealthy lifestyle.
The World Health Organization (WHO) is the world’s leading institution for reporting health-related news and statistics. Beyond maintaining an online database, it conducts research, establishes health-related policies, and monitors trends in worldwide health. According to the WHO, rates of worldwide obesity have more than doubled since 1980. Specifically, 1.5 billion adults over the age of 20 were estimated to be overweight in 2008. Of this 1.5 billion, 500 million men and women were classified in the obese range. The WHO also estimates that in 2010, 43 million children under the age of five were overweight. [1]
With obesity come health consequences. The WHO reports a positive correlation between degree of overweight and health consequences. For instance, as a person becomes more overweight, the risk rises for cardiovascular diseases such as heart disease and stroke, type 2 diabetes, osteoarthritis, and certain types of cancers such as endometrial, breast, and colon cancers. [2] As weight increases, a person potentially decreases the number of years that he or she lives. In the case of women, the chances of becoming pregnant decrease.
Watch the video and answer the following questions about obesity.
At the other end of the out-of-balance physical well-being continuum is the presence of certain eating disorders. Although eating disorders are primarily associated with women and girls, social awareness of eating disorders in boys and men has grown. [3] Overall, there is a lag between this social awareness of males with eating disorders and the medical model of diagnosis. Specifically, the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition-Text Revision (DSM-IV-TR) is the book that mental health clinicians use to diagnose psychological disorders. It lists four criteria for anorexia nervosa. One of the four applies only to females (amenorrhea, the absence of at least three consecutive menstrual cycles), which contributes to underreporting of the prevalence of anorexia in males.
That being said, some statistics for anorexia in males are available. For instance, about 10-15% of people with anorexia or bulimia are male and, as noted above, this may be an underrepresentation due to the diagnostic criteria and to men being less likely to seek treatment for a “woman’s disease.” [4] [5]
Despite the skewed statistics and the diagnostic lag, what is known about males with anorexia is that their psychological experiences are in some ways similar to those of females. For instance, both females and males experience a distorted body image. Another similarity is that recovery from anorexia is long for both genders, and there is a significant vulnerability to relapse if the proper supports and services are not in place. [6]
As noted above, men and women are similar in that both suffer from distorted perceptions of their bodies. However, there is a difference in how the distorted body image is manifested. Specifically, males are more likely to suffer from muscle dysmorphia, which is defined as being overly concerned with not being muscular enough. This is in contrast to females, who are primarily interested in losing weight. So, while a female with anorexia has a desire to be thinner than she believes she is, a male might have a desire to appear more muscular than he is.
There are both medical and psychological consequences of anorexia that can occur in both genders. The most severe outcome is death. In addition, abnormal heart rhythms (arrhythmias) can occur, as can anemia (in which the body does not have enough healthy red blood cells, which can result in chest pain, fatigue, headaches, and difficulty concentrating), gastrointestinal problems (such as constipation, bloating, or nausea), kidney problems, and irreversible organ damage. From a psychological perspective, anorexia can contribute to, or magnify, symptoms of depression and anxiety disorders.
It should be noted that there are also consequences for physical imbalance that lie between the extremes of anorexia and obesity on the continuum. These will be addressed in the section on the consequences of being emotionally out of balance.
Dialectical behavior therapy (DBT) is a type of therapy developed by Marsha M. Linehan [1] at the University of Washington. DBT is based on three states of mind: reasonable mind, wise mind, and emotional mind. People are in reasonable mind when they take a more intellectual approach to life. Reasonable mind includes the ability to think logically and focus solely on facts. While in reasonable mind, people are “cool” in their approach. On the opposite end of the spectrum, people can be in emotional mind, which is when thinking and behavior are emotionally based. Any attempts at logic can be futile because thoughts can be nonfacts that further fuel emotions. While reasonable mind takes a “cool” approach, thoughts and behaviors in emotional mind are “hot.” Finally, a person is in wise mind when there is an overlap of reasonable mind and emotional mind. In other words, a person is aware of escalating emotions but has not lost touch with logic. While some people can engage in unhealthy behaviors as a result of being in emotional mind, those in wise mind are able to recognize the potential long-term consequences of acting on pure emotions.
DBT also teaches methods of regulating emotions in an effort to decrease the likelihood that one will act on them. Acting on pure emotions may produce instant gratification but carry long-term consequences. DBT stresses the importance of self-care in regulating emotions; in other words, neglecting self-care makes one vulnerable to giving in to one’s emotions.
According to DBT, there are six levels of self-care that help establish a strong psychological foundation. The more areas that are off balance, the more likely that one would be emotionally off balance. The six areas of self-care are introduced via PLEASE MASTER. The components of PLEASE MASTER are:
Treat PhysicaL iLlness: In other words, take care of your body. This includes having annual checkups and also seeing your doctor when necessary. If a doctor writes a prescription, the medications should be taken as prescribed.
Balanced Eating: This means not eating too much or too little. Excessive calories can decrease alertness and create a sense of “fogginess” that makes concentration difficult. Conversely, too little food can also make concentration difficult and cause irritability.
Avoid mood-altering drugs: In other words, do not use medications that are not prescribed to you, and minimize the use of alcohol. In the long run, prolonged exposure to illegal drugs and alcohol can result in physical ailments and decreased functioning of organs. In the short run, coming down off of excessive use of drugs or alcohol makes moods more difficult to manage. In addition, suddenly stopping addictive drugs can result in withdrawal—negative experiences that accompany reducing or stopping drug use, including physical pain and other symptoms. Mood-altering drugs include caffeine, which is found in a wide variety of products, including coffee, tea, soft drinks, candy, and desserts. Although the U.S. Food and Drug Administration lists caffeine as a safe food substance, it has at least some characteristics of dependence, and sudden reduction of excessive caffeine can result in being irritable, restless, and drowsy, as well as experiencing strong headaches.
Balanced Sleep: Unbalanced sleep has negative effects similar to those of unhealthy eating. Too much sleep can create more drowsiness, and too little causes difficulty focusing and irritability. According to a recent poll, [2] about one-fourth of American adults say they get a good night’s sleep only a few nights a month or less. Getting enough sleep is a luxury that many of us seem to be unable or unwilling to afford, and yet sleeping is one of the most important things we can do for ourselves. Continued over time, a nightly deficit of even only 1 or 2 hours can have a substantial impact on mood and performance.
Get Exercise: The word “exercise” is more flexible than you might imagine. Daily exercise is important for overall physical and emotional well-being. However, exercise does not have to mean going to the gym every day. It could be something as simple as fast-paced housework, brisk walks, and/or parking in a spot that’s farther away from a destination. In other words, exercise can be some concentrated time during a given day, or it can be incorporated throughout a given day.
Build MASTERy: Try to do one thing a day to make yourself feel competent, build your self-confidence, and give you a sense of being in control. In the course of a day, many people spend more time doing things for others or things that “need to” get done. Setting aside some time during the day or week for things that have been mastered, such as writing, cooking, or learning a new language, can raise self-esteem and create a sense of satisfaction.
So, taking PLEASE MASTER into account, how can lack of self-care create emotional imbalance? Let’s say you are taking 17 credits in your third semester at college. When you registered, it seemed reasonable that taking extra credits during the spring and fall semesters could result in early graduation. As the semester progresses, you seem to be managing the stress of 17 credits well. You have made an outline of a study schedule that fits perfectly with your part-time job. Then, 3 weeks before the end of the semester, you get the final exam schedule. It looks as though most of your exams fall on the same day. The 3 weeks’ notice sets the stage for anxiety and worry.
Now, let’s take this a step further. Some anxiety and worry is good for you because it helps create focus and motivation. However, your anxiety causes you to lose sleep. Then you develop a cold, because lack of sleep can affect your immune system. Now, because you are anxious, not sleeping well, and not feeling well, the anxiety is amplified to the point that your stomach is in knots, so you start skipping meals. You recognize your level of anxiety, so to calm down, you start drinking a little “just to take the edge off.” However, the alcohol hits a little harder than usual because you have not been eating. So now you are not feeling well, not sleeping well, not eating well, and are suffering from a hangover. Finally, since your energy is drained from lack of sleep, poor eating, and some alcohol use, you have not had the energy to go to the gym, which normally gives you a sense of mastery.
This example is exaggerated to highlight that neglecting your needs can create emotional imbalance. Emotional vulnerability, or being in emotional mind as per DBT, creates the potential to give in to the emotions. Again, giving in to emotions can provide short-term, instant gratification but can also have long-term consequences. In our example, the anxiety resulted in unhealthy responses (neglecting self-care). Although not highlighted in the example, another unhealthy response to anxiety would be to avoid what is making you anxious. If, given all the stressors in this example, you stopped studying, you would jeopardize your GPA and your plan to graduate early.
Think of your own examples of when your emotions were put off balance due to lack of self-care. Most people have experienced being called “grumpy” when they have not had enough sleep. In more extreme cases, drug and alcohol use can create vulnerability for violence, an extreme behavioral response to anger.
Socialization begins as early as 6 months through infant actions such as smiling, gesturing, and making sounds. In most cases, socializing begins in a very balanced, give-and-take manner between infants and adults; that is, until the child starts preschool and/or starts interacting with other children. By about 18 months, a child begins to show preferences for certain toys and for interacting with certain peers.
Peer interactions tend to increase once the child starts his or her academic career. At the beginning of elementary school, well-balanced peer interactions tend to be based on gender, and interactions slowly become more gender-integrated as children grow into adolescence.
It is also around early adolescence that social imbalance is most likely to occur, because this is when peer popularity is introduced. At this point, certain personality characteristics become prominent in being socially balanced. In order to establish and maintain relationships with like-aged peers, common personality traits include being extraverted, cooperative, and supportive. [1] In addition, socially well-balanced peers tend to do well academically.
In late childhood and into early adolescence, signs that a child is socially imbalanced may emerge. Overall, socially imbalanced children and adolescents tend to be the opposite of those who are socially balanced. In addition, they tend to be more aggressive, both verbally and physically.
The consequences of being socially imbalanced can have a long-lasting impact. Initially, children who feel socially imbalanced will start to withdraw socially. This tends to be a defense and a way to “psychologically hide” from others. In other words, in the mind of a socially imbalanced child, it is better to be neglected than rejected.
Aggressive acts in children have been well studied in the past, especially acts toward rejected peers. The topic has had a resurgence with newly coined terms such as “cyber-bullying.” As a result of technology, rejected peers are no longer subjected to aggressive acts only in the school yard and on neighborhood streets; they are potentially subjected to ridicule 24 hours per day, seven days per week, even in the safety of their own homes. Such acts of aggression often go unpunished, leaving the rejected peer feeling powerless and revictimized.
The short- and long-term consequences of feeling socially rejected include difficulty initiating romantic relationships and also more dire outcomes. Specifically, recent research has identified a type of cynical shyness in males. In cynical shyness, males have a strong desire for social involvement but lack social skills and, consequently, are repeatedly rejected by peers. As rejection recurs, the unexpressed emotional pain intensifies, resulting in anger and hatred. Males with cynical shyness who lacked coping skills and/or resilience were found more likely to commit acts of violence. [2]
The video below demonstrates the consequences of social phobia. Match up the experiences of this individual with some of the DSM-IV-TR criteria of Social Phobia.
The experience of being out of balance is subjective to the person. As an example, characteristics of an imbalanced state may include different degrees of deficiency in the areas of sleep, nutrition, and affiliation with others. With external demands often taking precedence over internal needs, the slide into being out of balance can be slow and gradual. Without our being conscious of the process, the consequences of being out of balance may emerge. Although the consequences are many and, again, subjective to the person, the next two modules discuss two such consequences: stress and physical pain.
Emotions matter because they influence our behavior, and no emotional experience has a more powerful influence on us than stress. In 1936, Hans Selye stated that stress is “the nonspecific response of the body to any demand for change.” [1] Stress also refers to the physiological responses that occur when an organism fails to respond appropriately to emotional or physical threats. However, the word has since migrated away from science and into popular culture, losing the essence of the original definition. Overall, stress occurs when there is a threat of resource loss. In general, people seek to collect, build, and then protect resources. Resources can be anything, such as money, personal characteristics, self-esteem, or seniority. From a physiological perspective, most people have similar reactions to stress (although to varying degrees); however, stress is subjective in that what may be stressful to one person may not be stressful to another.
Extreme negative events, such as being the victim of a terrorist attack, a natural disaster, or a violent crime, may produce an extreme form of stress known as posttraumatic stress disorder (PTSD). PTSD is a medical syndrome that includes symptoms of anxiety, sleeplessness, nightmares, and social withdrawal. It can occur when a person has been exposed to a traumatic event where the person experienced, witnessed, or was confronted with something that involved actual or threatened death or serious injury, followed by intense fear, helplessness, and/or horror. PTSD is frequently experienced by soldiers who return home from wars, with those who have experienced subjectively more extreme events during war potentially experiencing more severe PTSD.
When extreme or prolonged, stress can create substantial health problems. For instance, survivors of Hurricane Katrina had three times the national average rate of heart attacks in the years following the disaster, probably because of the stress that the hurricane created. [2] In another example, people in New York City who lived nearer to the site of the 9/11 terrorist attacks reported experiencing more stress in the year following it than those who lived farther away. [3]
Watch this video about posttraumatic stress disorder (PTSD) and answer the following questions.
Stress is not unique, however, to the experience of extremely traumatic events. It can occur in our everyday lives, having a variety of both negative—and positive—outcomes. Think about an event that you would describe as “annoying.” It is not dangerous or life-threatening, yet you may find it to be somewhat stressful.
Not all stress is to be avoided, and not all stress can be avoided. Positive events, such as going on a trip, planning a party, or learning a new skill, may also result in a person experiencing stress. In some cases, stress can even be helpful. For example, stress can “keep you on your toes” so that you respond to a situation with an appropriate level of attention and arousal. Studying for a test is one example: you want to pass, so a healthy level of stress helps you focus on studying.
Stimuli or external events that cause a person to experience stress of any kind are called stressors.
The stress that you experience in your everyday life can also be taxing. Thomas Holmes and Richard Rahe [4] developed a measure of some everyday life events that might lead to stress. The following tables show the Holmes and Rahe Stress Scale and the interpretation of its score. The scale was developed by asking 2,500 members of the military to complete the rating scale and then reviewing the soldiers’ health records over the following 6 months. [5] The results were clear: The higher the scale score, the more likely the soldier was to end up in the hospital.
[Table: The Holmes and Rahe Stress Scale]
[Table: Interpretation of Holmes and Rahe Stress Scale]
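Although the scale items themselves are not reproduced here, the scoring logic is straightforward: each life event carries a fixed number of “life-change units,” and the units for every event experienced in the recent past are summed. The Python sketch below uses a few commonly cited item values and interpretation cutoffs purely for illustration:

```python
# Commonly cited Holmes-Rahe values and cutoffs, included here only as an example.
LIFE_CHANGE_UNITS = {
    "death of spouse": 100,
    "divorce": 73,
    "marriage": 50,
    "change in residence": 20,
    "vacation": 13,
}

def stress_score(events):
    """Sum the life-change units for every reported life event."""
    return sum(LIFE_CHANGE_UNITS[event] for event in events)

def interpret(score):
    if score >= 300:
        return "high risk of stress-related illness"
    if score >= 150:
        return "moderate risk"
    return "slight risk"

score = stress_score(["marriage", "change in residence", "vacation"])
print(score, "->", interpret(score))  # 83 -> slight risk
```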
Although some of the items on the Holmes and Rahe scale are more major than others, you can see that even minor stressors add to the total score. Our essentially negative everyday interactions with the environment, known as daily hassles, can also create stress as well as poorer health outcomes. [6] Events that may seem rather trivial, such as misplacing our keys, having to reboot our computer because it has frozen, being late for an assignment, or getting cut off by another car in rush-hour traffic, can produce stress. [7] One study [8] found that medical students who were tested during, rather than several weeks before, their school examination periods showed lower immune system functioning. Other research has found that even more minor stressors, such as having to do math problems during an experimental session, can compromise the immune system. [9]
As you just learned, the Holmes and Rahe study showed that the higher the score from their stress scale, the more likely a person was to develop a stress-related illness.
In 1998, Renner and Mackin [10] published a similar scale they had developed with items that typical college-aged individuals would relate to better. This scale is called the College Undergraduate Stress Scale (CUSS) and is presented below. Renner and Mackin gave the CUSS to approximately 12,000 college students.
Event | Stress Ratings |
---|---|
Being raped | 100 |
Finding out that you are HIV-positive | 100 |
Being accused of rape | 98 |
Death of a close friend | 97 |
Death of a close family member | 96 |
Contracting a sexually transmitted disease (other than AIDS) | 94 |
Concerns about being pregnant | 91 |
Finals week | 90 |
Concerns about your partner being pregnant | 90 |
Oversleeping for an exam | 89 |
Flunking a class | 89 |
Having a boyfriend or girlfriend cheat on you | 85 |
Ending a steady dating relationship | 85 |
Serious illness in a close friend or family member | 85 |
Financial difficulties | 84 |
Writing a major term paper | 83 |
Being caught cheating on a test | 83 |
Drunk driving | 82 |
Sense of overload in school or work | 82 |
Two exams in one day | 80 |
Cheating on your boyfriend or girlfriend | 77 |
Getting married | 76 |
Negative consequences of drinking or drug use | 75 |
Depression or crisis in your best friend | 73 |
Difficulties with parents | 73 |
Talking in front of a class | 72 |
Lack of sleep | 69 |
Change in housing situation (hassles, moves) | 69 |
Competing or performing in public | 69 |
Getting into a physical fight | 66 |
Difficulties with a roommate | 66 |
Job changes (applying, new job, work, hassles) | 65 |
Declaring a major or concerns about future plans | 65 |
A class you hate | 62 |
Drinking or use of drugs | 61 |
Confrontations with professors | 60 |
Starting a new semester | 58 |
Going on a first date | 57 |
Registration | 55 |
Maintaining a steady dating relationship | 55 |
Commuting to campus or work, or both | 54 |
Peer pressures | 53 |
Being away from home for the first time | 53 |
Getting sick | 52 |
Concerns about your appearance | 52 |
Getting straight A’s | 51 |
A difficult class that you love | 48 |
Making new friends; getting along with friends | 47 |
Fraternity or sorority rush | 47 |
Falling asleep in class | 40 |
Attending an athletic event (e.g., football game) | 20 |
Regardless of the type or the level of stress a person experiences, the body’s physiological response to stressors is basically the same. We’ll explore this aspect of stress in the next section.
A subjective, triggering event will result in physiological changes in the body. Common body characteristics of stress include cold extremities, tension headaches, neck pain, rapid heartbeat, shallow breath, and/or digestive issues. Some of the physiological consequences of prolonged stress include high blood pressure and heart disease.
From a historical context, the physiologist Hans Selye (1907–1982) studied stress by examining how rats responded to being exposed to stressors such as extreme cold, infection, shock, or excessive exercise. [1] [2] [3] Selye found that regardless of the source of the stress, the rats experienced the same series of physiological changes as they suffered the prolonged stress. Selye created the term general adaptation syndrome (GAS) to refer to the three distinct phases of physiological change that occur in response to long-term stress: alarm, resistance, and exhaustion. The figure below describes each phase and depicts the changes in stress resistance over the three phases.
The experience of stress creates both an increase in general arousal in the sympathetic division of the autonomic nervous system (ANS) and another, even more complex, system of physiological changes through the HPA axis. The HPA axis is a physiological response to stress involving interactions among the hypothalamus, the pituitary, and the adrenal glands. The HPA response begins when the hypothalamus secretes hormones that direct the pituitary gland to release the hormone ACTH. ACTH then directs the adrenal glands to secrete more hormones, including epinephrine, norepinephrine, and cortisol, a stress hormone that releases sugars into the blood, helping prepare the body to respond to threat. [4]
The initial arousal that accompanies stress is normally quite adaptive because it helps us respond to potentially dangerous events. The experience of prolonged stress, however, has a direct negative influence on our physical health. Specifically, while stress increases activity in the sympathetic division of the autonomic nervous system (ANS), it also suppresses activity in the parasympathetic division of the ANS. When stress is long-term, the HPA axis remains active and the adrenals continue to produce cortisol. This increased cortisol production exhausts the stress mechanism, potentially leading to fatigue and depression.
Not all people experience and respond to stress in the same way, and these differences can be important. The cardiologists Meyer Friedman and R. H. Rosenman [1] were among the first to study the link between stress and heart disease. In their research they noticed that even though the partners in married couples often had similar lifestyles, diet, and exercise patterns, the husbands nevertheless generally had more heart disease than did the wives. As they tried to explain the difference, they focused on the personality characteristics of the partners, finding that the husbands were more likely than the wives to respond to stressors with negative emotions and hostility.
Recent research has shown that the strongest predictor of a physiological stress response from daily hassles is the amount of negative emotion that they evoke. People who experience strong negative emotions as a result of everyday hassles, and who respond to stress with hostility, experience more negative health outcomes than do those who react in a less negative way. [2] [3] Williams and his colleagues [4] found that people who scored high on measures of anger were three times more likely to suffer from heart attacks in comparison to those who scored lower on anger.
On average, men are more likely than women to respond to stress by activating the fight-or-flight response, an emotional and behavioral reaction to stress that increases readiness for action. The arousal that men experience when they are stressed leads them either to go on the attack, in an aggressive or vengeful way, or else to retreat as quickly as they can to safety from the stressor. The fight-or-flight response allows men to control the source of the stress if they think they can do so, or, if that is not possible, it allows them to save face by leaving the situation. The fight-or-flight response is triggered in men by the activation of the HPA axis.
Women, on the other hand, are less likely to take a fight-or-flight response to stress. Rather, they are more likely to take a tend-and-befriend response. [5] The tend-and-befriend response is a behavioral reaction to stress that involves activities designed to create social networks that provide protection from threats. This approach is also self-protective because it allows the individual to talk to others about her concerns, as well as to exchange resources, such as child care. The tend-and-befriend response is triggered in women by the release of the hormone oxytocin, which promotes affiliation. Overall, the tend-and-befriend response is healthier than the fight-or-flight response because it does not produce the elevated levels of arousal related to the HPA axis, including the negative results that accompany increased levels of cortisol. This may help explain why women, on average, have less heart disease and live longer than men.
In her fascinating research, Shelly Taylor showed that women and men have very different ways of dealing with stress. Women are more likely to move toward interaction while men are more likely to move toward action and inaction.
No matter how healthy and happy we are in our everyday lives, there are going to be times when we experience stress. But we do not need to throw up our hands in despair when things go wrong; rather, we can use our personal and social resources to help us.
Perhaps the most common approach to dealing with negative affect is to attempt to suppress, avoid, or deny it. You probably know people who seem to be stressed, depressed, or anxious but cannot or will not see it in themselves. Perhaps you tried to talk to them about it, to get them to open up to you, but were rebuffed. They seem to act as if there is no problem at all, simply moving on with life without admitting or even trying to deal with the negative feelings. Or perhaps you have taken a similar approach yourself. Have you ever had an important test to study for or an important job interview coming up, and rather than planning and preparing for it, you simply tried to put it out of your mind entirely?
Research has found that ignoring stress and suppressing negative emotions can be an unhealthy coping strategy. For one, ignoring our problems does not make them go away. If we experience so much stress that we get sick, we then have to manage both the physical illness and the ignored stressors. [1] For example, if we know that we have a big exam coming up, we have to focus on the exam, despite the initial urge to ignore it. Suppressing or denying the fact that the exam is coming up takes emotional and physical effort, and we can become physically and emotionally tired.
Although ignoring stress and suppressing emotions are not healthy in the long run, there are times when doing so can work in the short run. Specifically, suppressing emotionally based stress can be appropriate when a situation is uncontrollable or when nothing can be done about it at the moment. For example, suppose a person is wide awake at 3:00 a.m. because he is worried about the busy day ahead. He mentally reviews everything he has to do: making a doctor’s appointment, getting his oil changed, and wondering whether his study group will show up at the library. At this moment, it would be appropriate to suppress the emotions and ignore the stress, as there is nothing he can do about any of the stressors at 3:00 a.m. Staying awake over things he cannot control will actually make him less effective for the busy day ahead.
Having said that, prolonged suppression of stress, especially when the stressors are controllable, can result in decreased energy, which creates vulnerability for negative emotions to re-emerge into consciousness, likely with much more intensity. For instance, avoiding studying for a test will make testing day more stressful. Avoiding the stress of waiting for an oil change will create more stress in the long run as the car drives another mile on old oil.
Although there are pharmacological interventions to help manage stress, nonpharmacological interventions include increasing self-awareness of body cues of stress, environmental control of the stressor such as changing jobs, and/or changing thoughts about the stressor.
Daniel Wegner and his colleagues [2] directly tested whether people are able to effectively suppress a simple thought. They asked participants not to think about a white bear for 5 minutes, but to ring a bell if they did. (Try it yourself. Can you do it?) The participants were unable to suppress the thought as instructed; the white bear kept popping into mind, even when they were told to avoid thinking about it. You might have had this experience when you were dieting or trying to study rather than party; the chocolate bar in the kitchen cabinet and the fun time you were missing at the party kept popping into mind, disrupting your work.
Suppressing our negative thoughts does not work and, as a matter of fact, the evidence supports the opposite: When we are faced with troubles, it is healthy to let out the negative thoughts and feelings by expressing them either to ourselves or to others. James Pennebaker and his colleagues [3] [4] have conducted many correlational and experimental studies that demonstrate the advantages to our mental and physical health of opening up versus suppressing our feelings. This research team has found that simply talking about or writing about our emotions or our reactions to negative events provides substantial health benefits. For instance, Pennebaker and Beall [5] randomly assigned students to write about either the most traumatic and stressful event of their lives or trivial topics. Although the students who wrote about the traumas had higher blood pressure and more negative moods immediately after they wrote their essays, they were also less likely to visit the student health center for illnesses during the following six months. Other research studied individuals whose spouses had died in the previous year, finding that the more they talked about the death with others, the less likely they were to become ill during the subsequent year. Daily writing about one’s emotional states has also been found to increase immune system functioning. [6]
Opening up probably helps in various ways. For one, expressing our problems to others allows us to gain information, and possibly support, from them (remember the tend-and-befriend response that is so effectively used to reduce stress by women). Writing or thinking about one’s experiences also seems to help people make sense of these events and may give them a feeling of control over their lives. [7]
It is easier to respond to stress if we can interpret it in more positive ways. Kelsey and colleagues [8] found that some people interpret stress as a challenge (something that they feel that they can, with effort, deal with), whereas others see the same stress as a threat (something that is negative and fearful). People who viewed stress as a challenge had fewer physiological stress responses than those who viewed it as a threat—they were able to frame and react to stress in more positive ways.
When an external stressor is unavoidable, changing thoughts about the stressor can help minimize the intensity. It also helps to gain coping resources in order to learn tolerance of the stressor. For instance, a person starting her law career right out of college may fully commit to her assigned tasks. However, 50-hour weeks and bringing home files on the weekend begin to wear on her. She starts to worry that she is not doing an adequate job and her sleep starts to suffer. However, at some point, she starts to change her perception of the job. She starts to think, “If my boss has not said anything, there is a good chance I’m doing a good job.” She also starts to see her 50-hour work weeks and weekend work as expedited ways to gain more experience. She calculates her hours and thinks that she can condense five years of experience into three. She starts to see her new job as an opportunity to gain a lot of experience in a short amount of time. As her thoughts about the stressors change, she starts to feel very excited about her opportunity, and the intensity of her stress is thereby minimized.
There is also a “window of stress”: too little stress can result in inactivity, while too much stress can have physical and emotional consequences; just the right amount, however, can be adaptive and very motivating. Stress, within certain parameters, can be motivating and help with focus. For example, suppose a person is called for a job interview right after graduating with a degree in accounting. If this person has already received an offer from another company, the intensity of stress will likely not be that high. As such, that person may not bring his suit to the cleaners; he might run five minutes late for the interview and forget to bring a copy of his resume. If another person is called for the same job, but this has been his only call thus far, the stress level will be higher. In this case, the person will prepare by reviewing answers to different questions and making sure his suit is pressed, and he will bring several copies of his resume. If a third person is called for the same interview, but has already been passed over for different positions at other companies, his stress may reach a point where it actually works against him. He may do the things he is “supposed to do” in an interview, but he may also magnify his stress level by being self-conscious about what he says. He may be so focused on what he wants to say that he is not listening to what the interviewer is saying. As such, he will not get the job. Reflect on your own adaptive stress in the following exercise.
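This “window of stress” describes an inverted-U relationship between arousal and performance, often associated with the Yerkes-Dodson law. The sketch below uses a simple quadratic as a stand-in for that curve; the shape and the optimum are illustrative assumptions, not an empirical model:

```python
def performance(arousal: float) -> float:
    """Toy inverted-U: performance peaks at a moderate level of arousal."""
    optimal = 0.5  # hypothetical optimum on a 0..1 arousal scale
    return max(0.0, 1.0 - 4.0 * (arousal - optimal) ** 2)

# Too little, just right, and too much stress, as with the three interviewees.
for arousal in (0.1, 0.5, 0.9):
    print(arousal, round(performance(arousal), 2))  # 0.36, 1.0, 0.36
```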
Emotional responses such as the stress reaction are useful in warning us about potential danger and in mobilizing our response to it, so it is a good thing that we have them. However, we also need to learn how to control our emotions, to prevent them from letting our behavior get out of control. The ability to successfully control our emotions is known as emotion regulation.
Emotion regulation has some important positive outcomes. Consider, for instance, research by Walter Mischel and his colleagues. In their studies, they had 4- and 5-year-old children sit at a table in front of a yummy snack, such as a chocolate chip cookie or a marshmallow. The children were told that they could eat the snack right away if they wanted. However, they were also told that if they could wait for just a couple of minutes, they’d be able to have two snacks—both the one in front of them and another just like it. However, if they ate the one that was in front of them before the time was up, they would not get a second.
Mischel found that some children were able to override the impulse to seek immediate gratification to obtain a greater reward at a later time. Other children, of course, were not; they just ate the first snack right away. Furthermore, the inability to delay gratification seemed to occur in a spontaneous and emotional manner, without much thought. The children who could not resist simply grabbed the cookie because it looked so yummy, without being able to stop themselves. [1] [2]
The ability to regulate our emotions has important consequences later in life. When Mischel followed up on the children in his original study, he found that those who had been able to self-regulate grew up to have some highly positive characteristics: They got better SAT scores, were rated by their friends as more socially adept, and were found to cope with frustration and stress better than those children who could not resist the tempting cookie at a young age. Thus effective self-regulation can be recognized as an important key to success in life. [3] [4] [5]
Watch this video to see how Stickman applies some self-regulatory techniques when he’s angry.
Emotion regulation is influenced by body chemicals, particularly the neurotransmitter serotonin. Preferences for small, immediate rewards over large, but later, rewards have been linked to low levels of serotonin in animals, [6] [7] and low levels of serotonin are tied to violence and impulsiveness in human suicides. [8]
We do not enjoy it, but the experience of pain is how the body informs us that we are in danger. The burn when we touch a hot radiator and the sharp stab when we step on a nail lead us to change our behavior, preventing further damage to our bodies. People who cannot experience pain are in serious danger of damage from wounds that others, who can feel pain, would quickly notice and attend to.
The experience of pain is a combination of bodily responses and subjective experience. In other words, the level of pain can be positively or negatively affected by the amount of attention paid to it, the level of motivation, and beliefs about the pain. This is also referred to as psychogenic pain because, although there is a physical explanation for it, pain can also be worsened by emotional states such as fear, depression, stress, or anxiety. The diagram above helps provide insight into this combination of physiological and psychological aspects of pain. [1]
The gate control theory of pain proposes that pain is determined by the operation of two types of nerve fibers in the spinal cord. One set of smaller nerve fibers carries pain from the body to the brain, whereas a second set of larger fibers is designed to stop or start (as a gate would) the flow of pain. [1] It is for this reason that massaging an area where you feel pain may help alleviate it—the massage activates the large nerve fibers that block the pain signals of the small nerve fibers. [2]
Since there are multiple causes of pain, there are also multiple types of pain and multiple ways that people respond to it. Despite the discomfort of pain, it is the way in which the body sends messages to the brain.
One of the many ways to classify pain is as either acute or chronic. The former, acute pain, has a sudden onset and is of relatively short duration. Acute pain has multiple sources but is usually the result of tissue damage to the bones, muscles, and/or organs. Chronic pain, on the other hand, is defined as pain lasting over three months, and medical interventions for it may have limited success. Chronic pain can be the result of a medical condition such as osteoarthritis, or it can itself be the medical condition, as in fibromyalgia. While acute pain is usually associated with tissue damage, the chronic type is usually the result of nerve damage. Sometimes there is no clear origin for chronic pain, which can be a source of frustration when it negatively impacts quality of life.
Watch the following video describing the physiology of pain and answer the questions that follow.
Associated more with chronic pain, especially when a person is prescribed pain medication, is breakthrough pain. Breakthrough pain is, in a sense, chronic pain that “breaks through” the effects of pain medication. It can be experienced suddenly, as the result of seemingly insignificant movement, or as the effects of pain medication begin to wear off.
In addition to acute and chronic pain, there are many other ways that pain can be described. For instance, pain can be classified as being either nociceptive or neuropathic. The nociceptive type is caused by tissue damage while the neuropathic type is caused by nerve damage. [3] [4]
Pain can also be classified by location (muscular pain, joint pain, chest pain, back pain). It can also be described as an ache, a sharp or stabbing sensation, or a throbbing. Finally, there are also pain syndromes, including myofascial pain syndromes such as fibromyalgia, in which the pain is located in certain muscles of the body. Overall, pain can be difficult to describe because it is a very personal experience. As such, certain measures have been developed to help describe and quantify pain level as it pertains to a person. [3] [4]
Pain Measure | Description |
---|---|
Survey of Pain Attitudes (SOPA) | Measures beliefs about chronic pain in order to assess how a person may adjust to having pain. More specifically, it measures perceived level of control over the pain, perceived level of disability, and the belief that there is a cure for the pain. |
Chronic Pain Coping Inventory (CPCI) | Assesses the coping strategies used in response to chronic pain during the past week. Examples of coping include: restricting movement, resting, need for assistance, exercise, and task persistence. |
Pain Beliefs and Perceptions Inventory (PBAPI) | Similar to the SOPA in that it measures beliefs about pain and belief about level of control over the pain. |
Spouse Response Inventory (SRI) | Assesses the spouses of those who experience pain. It measures a spouse’s responses to both the pain behaviors and the wellness behaviors of the partner with pain issues. |
Experiencing pain is a lot more complicated than simply responding to neural messages. It is also a matter of perception. We feel pain less when we are busy focusing on a challenging activity, [1] which can help explain why athletes may feel their injuries only after the game. We also feel less pain when we are distracted by humor. [2] And pain is soothed by the brain’s release of endorphins, the body’s natural painkillers. The release of endorphins can explain the euphoria experienced while running a marathon. [3]
Much like pain level, the level of depression observed in people with mood disorders varies widely. For some, a relationship exists between physical pain and depression; however, it is difficult to determine whether physical pain magnifies depression or depression magnifies physical pain. A phenomenon known as the “pain-prone” individual has been described, in which the physical pain one experiences is a form of unexpressed or unacknowledged depression. [4] The pain-prone individual is more likely to be female, as major depressive disorder occurs about twice as often in women as it does in men. [5] [6]
As you will see in the next unit on psychological disorders, symptoms of depression can include an all-encompassing low mood accompanied by low self-esteem and by loss of interest or pleasure in normally enjoyable activities. Furthermore, those negative feelings profoundly limit the individual’s day-to-day functioning and ability to maintain and develop interests in life. [7]
Somatoform disorders occur in cases where psychological disorders are related to the experience or expression of physical symptoms.
One case in which psychological problems create real physical impairments is the somatoform disorder known as somatization disorder (also called Briquet’s syndrome or Brissaud-Marie syndrome). Somatization disorder is a psychological disorder in which a person experiences numerous long-lasting but seemingly unrelated physical ailments that have no identifiable physical cause. A person with somatization disorder might complain of joint aches, vomiting, nausea, and muscle weakness, as well as sexual dysfunction. The symptoms that result from a somatoform disorder are real and cause distress to the individual, but they are due entirely to psychological factors. The disorder is more likely to occur when the person is under stress, and it may disappear naturally over time. Somatization disorder is more common in women than in men and usually first appears in adolescence or the early 20s.
Another type of somatoform disorder is conversion disorder, a psychological disorder in which patients experience specific neurological symptoms such as numbness, blindness, or paralysis, but for which no neurological explanation exists. [8] The difference between conversion disorder and somatization disorder lies in the specificity of the physical complaint: in somatization disorder the malaise is general, whereas in conversion disorder there are one or several specific neurological symptoms.
Conversion disorder gets its name from the idea that the existing psychological disorder is “converted” into the physical symptoms. It was the observation of conversion disorder (then known as “hysteria”) that first led Sigmund Freud to become interested in the psychological aspects of illness in his work with Jean-Martin Charcot. Conversion disorder is not common (a prevalence of less than 1%), but it may in many cases go undiagnosed. Conversion disorder occurs at least twice as frequently in women as in men.
Hypochondriasis (hypochondria) is a psychological disorder characterized by excessive worry about having a serious illness. The patient often misinterprets normal body symptoms such as coughing, perspiring, headaches, or a rapid heartbeat as signs of serious illness, and the patient’s concerns remain even after he or she has been medically evaluated and assured that the health concerns are unfounded. Many people with hypochondriasis focus on a particular symptom such as stomach problems or heart palpitations.
Somatoform disorders are problematic not only for the patient; they also have societal costs. People with these disorders frequently undergo potentially dangerous medical tests and are at risk for drug addiction from the medications they are given and for injury from complications of the operations they submit to. [9] [10] In addition, people with these disorders may take up hospital space that is needed for people who are physically ill. To help combat these costs, emergency department and hospital workers use a variety of tests for detecting these disorders.
According to Marsha Linehan, in her treatment model of dialectical behavior therapy (DBT), both physical and emotional pain are to be accepted as aspects of life that everyone will endure. Nonacceptance is essentially combating the reality that pain is a presence in life. Linehan states that the combination of pain and the nonacceptance of that pain results in suffering. Suffering occurs when a person resists the present reality and focuses on where he once was and/or where he wishes he could be. [11]
In an earlier Wellness module, you were introduced to the role that maladaptive thoughts play in psychological issues and to cognitive-behavioral therapy (CBT). In a future unit, a more general explanation of CBT will be given.
In relation to this module, it should be noted that the experience of pain can be amplified by maladaptive thoughts. For instance, one can experience any type of pain listed on the physiological side of the Venn diagram. However, if one has thoughts such as “I’ll never feel better,” or ignores the advice of medical doctors and says, “I can work through the pain,” that person can potentially make the pain worse.
Although these two example thoughts seem to be opposites, a common theme emerges: neither thought is a fact. The role of automatic thoughts in CBT is instrumental. Automatic thoughts are just that—automatic. They can be equated to a doctor tapping a knee and the leg reflexively kicking. To a certain extent, one does not “think about thinking,” as it can be reflexive. However, a person can gain control over how he or she responds to automatic thoughts.
One example of a maladaptive thought pattern that can exacerbate the physiological aspects of pain is called a mental filter. A mental filter occurs when one focuses exclusively on certain, usually negative or upsetting, aspects of something. So, let’s say someone experiencing a headache has an automatic thought such as, “I can’t believe I have another headache.” That type of thought can establish a “filter” that only allows similar thoughts through, such as: “I always get these headaches,” “I’m going to have headaches for the rest of my life,” “These headaches are making me miserable,” “No one else gets as many headaches as me.” Although the initial, automatic thought is a fact, the mental filter usually invites thoughts that are untrue and that, left unchallenged or unquestioned, will be treated as fact. This, in turn, will amplify the pain.
Narcotic analgesics, which include opium, morphine, and heroin, are examples of pain reducers that work on the physiological experience of pain. Such narcotic analgesics are ingested and then bind to certain brain receptors. Certain narcotic analgesics mimic internal agents (such as endorphins) that act on the central nervous system (brain and spinal cord) to decrease pain.
Hypnotic relief of pain employs guided relaxation and intense concentration. The goal is to have the subject’s attention entirely focused on an image so that all external stimuli and potential distractors are completely ignored; in other words, the goal is to have the subject in a trance state. Hypnotherapy is a specialized area that requires licensure and is typically used as an adjunct to mental health treatment. The use of hypnosis for pain is usually accomplished at pain management centers. [1] [4]
One technique of a hypnotherapist is to assist the subject in visualizing vivid images in which, for example, the subject pictures himself accomplishing goals. When the session is over, either the subject brings himself out of hypnosis or the hypnotherapist helps end the trance-like state. It is also possible to practice self-hypnosis and employ the skill when necessary. Watch the video below, which explains a specific hypnotic pain management technique:
A phenomenon is a general result that has been observed reliably in systematic empirical research. In essence, it is an established answer to a research question. Examples of phenomena include the following: expressive writing improves health, women do not talk more than men, and cell phone use impairs driving ability. Others are that certain psychological disorders increased greatly in prevalence during the late 20th century, that people perform better on easy tasks when they are being watched by others (and worse on difficult tasks), and that people recall items presented at the beginning and end of a list better than items presented in the middle.
Phenomena are often given names by their discoverers or other researchers, and these names can catch on and become widely known. The placebo effect is one example: placebos, or fake psychological or medical treatments, often lead to improvements in people’s symptoms and functioning.
As cited in a paper by Albert, [2] the American Medical Association defines placebo as “a substance provided to a patient that the physician believes has no specific pharmacological effect upon the condition being treated.” Examples of a placebo include “a sham treatment, injection, pill, procedure.”
As noted above, and according to Marsha Linehan in her treatment model of dialectical behavior therapy, both physical and emotional pain are to be accepted as aspects of life that everyone will endure. According to Linehan, to accept pain is to be free of suffering. Acceptance is the ability to tolerate the moment that is being experienced. It should be noted that true acceptance does not mean that it is OK to be in pain. Rather, it means that the pain is being nonjudgmentally acknowledged for what it is, as opposed to what one wants it to be. [3]
As previously noted, dialectical behavior therapy (DBT) incorporates mindfulness as its foundation. DBT has been well researched in treating issues beyond the one it was originally developed for; one area in which it has been found to be highly effective is the treatment of pain. Since DBT is primarily a coping-skills treatment, certain coping skills can help manage pain. The skills, or modules, of DBT are core mindfulness, emotion regulation, distress tolerance, and interpersonal effectiveness. Let’s look at some examples of DBT skills that can help, at the very least, minimize the intensity of pain. [3]
The following video explains the basics of acupuncture and its impact on pain management. After you have watched the video, complete the following activity. The correct answers will provide a summary about acupuncture.
In the self-management of pain, the person collaborates with the professionals he or she interacts with. This, in turn, provides a sense of control, in that the person participates in intervention options, the pace of treatment, and decision-making. Overall, it has been shown that those who have participated in self-management programs significantly increase their ability to manage the physiological discomfort of pain. [5]
In addition to the above treatments for chronic pain, certain medications, electrical stimulation, nerve blocks, and surgery are also interventions. Other, and preferably first-line, interventions include less invasive forms of pain management such as psychotherapy, relaxation therapies, biofeedback, and behavior modification. Much like acupuncture, pain may also be minimized with the use of alternative treatments such as tai chi, meditation, and/or massage therapies. [5]
Pain Intervention | Description |
---|---|
Electrical stimulation | Transcutaneous electrical nerve stimulation (TENS) was developed in the 1960s. It involves placing small electrodes over the part of the body that is in pain. The electrodes are attached to a machine that sends electricity to the pain area. The electricity interrupts messages about the pain from the nerves to the brain. |
Nerve blocks | A local anesthetic is injected into the painful area to treat acute pain. If the block is effective, more permanent forms of treatment can be employed. |
Biofeedback | Through biofeedback, one can learn relaxation techniques and learn how to calm contracted muscles that can be a source of pain. |
Behavior modification | In a behavior modification program, pain behaviors are first evaluated, such as use of pain medications, overall activity level, and tolerance during specific exercises. Once this baseline is established, pain behaviors and use of pain medications are discouraged while increased activity and well behaviors are reinforced. |
Medications | Pain medications may be over-the-counter medications (such as Tylenol, aspirin, Cortizone, Ben-Gay) or prescription pain relievers (such as Percocet, Oxycodone, and antiseizure medications). Both prescribed and nonprescribed pain relievers vary in their degree of effectiveness, side effects, and level of addiction. |
Psychotherapy | As noted throughout the module, certain schools of therapy (CBT, DBT) can help individuals cope with pain and/or change the way they think about the pain. |
Massage therapies | Massage therapy can help relax muscle tissue. Relaxing muscle tissue may result in decreased nerve compression and increased joint space; thus, reduced pain and increased range of motion can result. |
Tai chi | The movements of tai chi have been described as both gentle and graceful. Tai chi has been used as an intervention for elderly people to relieve arthritis pain. Tai chi has been called "moving meditation," due to its relaxing nature that focuses on breathing and inner peace. In other words, quieting the mind and relaxing the body while focusing on breathing and tai chi moves creates less focus on pain. |
Meditation | Shamatha is a type of meditation that focuses attention on either the breath or a mantra. Shamatha teaches observation of what is going on in the mind and body without judgment. Research has shown that shamatha practice changes how the somatosensory cortex of the brain responds to pain. |
The experience of pain is not only the result of physiology and psychology; cultural factors also play a role. [1] The role of culture in the experience of pain has been an ongoing area of research since the early 1950s. [2] Such research includes the role that one’s culture plays in the experience of pain, the influence that culture has on the treatment of pain, and the resources available for pain treatment. [1] As an example, research has indicated that certain racial and ethnic groups are more likely to delay care for pain assessment and treatment. Hispanics have been seen as at greater risk for injury and pain due to the nature of their employment and a willingness to engage in jobs that put physical wellness at risk for the sake of their families. In addition, other research has suggested that Hispanics, when compared with Caucasians, will report a greater magnitude of pain for a given pain issue. [1]
It is not completely known why this may be; however, it has been suggested that the level of pain may be related to a biological predisposition in certain ethnic/minority groups when compared to others. For instance, people of Asian and Hispanic origin will often need different dosages of certain medications when compared to Caucasians. [1] It has also been suggested that Hispanics may not immediately seek treatment for pain due to a value known as aguantar, which holds that physical pain, and even emotional pain, should be self-managed and faced with dignity. [2]
Mindfulness is paying attention—without judgment—to what is happening right now, moment by moment by moment.
Society appears to have become faster paced. What was once a matter of managing school or work has, for college students and many high school students, become a matter of managing both work and school. High school students who are not working might be managing school and extracurricular activities, and that same pattern of school plus extracurricular activities is also seen in middle and elementary school students. For younger students who are not yet driving, it is the responsibility of parents and/or caregivers to transport them to these activities. This becomes even more burdensome for caregivers who have more than one child involved in activities, as the parents now have to manage their work schedules around the activities of more than one child.
The above is characteristic of the life experiences of many people, and most people have their own unique version of attempting to spin one too many plates. Managing many responsibilities at one time seems to have become the norm. The consequence, however, is that people run the risk of ignoring their own needs, never giving themselves an opportunity to “check in.”
In modern-day society, the opposing concepts of multitasking and mindfulness have become more familiar. Although multitasking has become the norm, mindful experiences can help “recharge batteries” and keep a person feeling balanced in a society that can “drain batteries.”
The concept of mindfulness can be difficult to describe in that it is more of an experience. In earlier modules, mindfulness was described as a form of multi-sensory awareness that allows people to be present, focused, and able to recognize where they are from moment to moment. It was also described as the ability to do one thing at a time, with all energy and focus on that one thing.
Recognition that a mind-body connection exists and affects our health and overall sense of well-being has been unfolding since the 1960s. For the past 40-plus years, various fields of healthcare have blended aspects of holistic, mind- and spirit-based practices with the more physically centered, detail-oriented aspects of Western medicine to develop approaches and techniques to treatment that are often referred to as complementary medicine or alternative therapies.
Western practitioners have drawn on ancient, as well as contemporary, healing practices from China, India, and Tibet.
This timeline highlights some of the most important events and research studies that have furthered both our understanding and practice of mind-body medicine over the past five decades.
Herbert Benson (1960s and 1970s)
Considered the godfather of mind-body medicine because of his pioneering work on the effects of meditation on stress that led to identifying the “relaxation response” in the early 1970s.
Robert Ader (mid-1970s)
One of the founders of the field known as psychoneuroimmunology. This area of study focuses on the connection between the mind and the body’s immune system.
Jon Kabat-Zinn (early 1980s)
Developed Mindfulness-Based Stress Reduction program. Founded Stress Reduction Program at University of Massachusetts Medical School in 1979.
David Spiegel (late 1980s)
Studied the effects of intensive supportive-expressive group therapy for women with breast cancer.
Candace Pert (1980s)
Developed the theory of mind-body functions as information molecules in the body. Her research has contributed to the study of psychoneuroimmunology.
Marsha M. Linehan (late 1980s)
Developed a therapy model known as dialectical behavior therapy in which mindfulness is a major component.
Sara Lazar (2000s)
Researches the mind-body connection using neuroimaging techniques to study the effects of meditation on emotion and cognition.
The basic definition of psychology is “science of the mind.” More recently, however, psychology has come to incorporate more than just the mind. Several treatment approaches have combined Eastern approaches to wellness with more traditional Western “mind” approaches to psychology.
Watch this short video about Mindfulness-Based Stress Reduction (MBSR) and related mindfulness-based approaches to wellness and then fill in the blanks to complete the passage that follows.
Decide whether or not each of the following items describes an aspect or characteristic of mindfulness.
Mindfulness is a relatively new concept in Western culture and, as such, is something to be practiced. As previously noted, society has become faster paced, with more demands and less time for those demands. Consequently, the quantity and quality of time to incorporate mindfulness has to be built gradually: one moment of being truly mindful is a starting point that can be built into two, five, ten, and then thirty mindful minutes to focus, accomplish, and engage in self-care. Mindfulness was discussed as being present and in full recognition of a moment. It was also described as the ability to do one thing at a time with all energy and focus on that one thing. The ability to engage in this practice can reduce vulnerability to the potential consequences of mindlessness and excessive multitasking, which include the experience of stress and/or physical pain. Most importantly, mindfulness allows for a feeling of self-control in a world that can feel out of control.
You’ve already tried guided meditation earlier in this unit. The following exercise takes a somewhat different approach, derived from Buddhist meditative practices but also common to many other Eastern religious traditions. Rather than a guided meditation, you are given five simple instructions.
Allocate approximately 15-30 minutes to this exercise. Many people find that this type of meditation is most effective in a quiet, dimly lit place. It can also help to meditate in the morning or in the late evening, when the potential for being disturbed is lessened.
Instructions:
After completing steps 1-5, please consider the following.
Here is an opportunity for you to try a mindfulness meditation called the “raisin meditation.” The video will guide you through this short activity.
Your mind has a tendency to find patterns in everything, particularly when you take the time to mindfully attend to a scene. An even more direct approach is to present people with a pattern to look for, like a face. This principle is behind a technique used by some painters called the hidden-faces illusion. At first blush these paintings look like typical landscapes, but when viewers are primed to look for faces, they instantly become aware that the scenery has been posed in face-like arrangements. Rather than painting such scenes, a novel approach is to photograph natural scenes that contain patterns resembling hidden faces.
Take a look at the image below which was taken in Pictograph Cave State Park in Montana. What do you see?
Now take a look at the photograph one more time. How many faces can you find? There is no correct answer, but most people can find 3 or more.
Let’s see if a type of attention meditation exercise can help you find more of them. You will be using a technique that is similar to things you’ve already practiced in this unit. Take a few deep breaths and try to forget about your daily concerns. Take a few minutes to allow your eye to wander over the photo, paying close attention to the qualities of the rock-face. Be mindful of the scene, the 3D structure of the rock, the varied hue, etc. Take the time to appreciate the natural beauty of what you see.
As noted in the previous modules, dialectical behavior therapy (DBT) is a type of therapy developed by Marsha M. Linehan [1] at the University of Washington. It was originally developed for the treatment of borderline personality disorder. However, more research has concluded that it can be an effective treatment for mood, anxiety, eating, and substance abuse disorders.
Reviewing the three states of mind: reasonable mind is when a more intellectual approach to life is assumed. To be in reasonable mind is to have the ability to think logically and focus solely on facts. In emotional mind, attempts at logic can be futile because thoughts can become nonfacts and further fuel emotions. Emotional mind culminates in behaviors that are emotionally based and may have long-term consequences. Finally, an overlap of reasonable mind and emotional mind results in wise mind. People in wise mind are mindful of an internal experience of escalating emotions but also aware of the long-term consequences of emotionally based behavior.
To increase the chances that a person stays in wise mind, DBT divides its curriculum into four modules: core mindfulness skills, interpersonal effectiveness, distress tolerance skills, and emotion regulation skills. Core mindfulness is the increased ability to attend to and “check in” with the self; in other words, how am I feeling at a given moment? This takes practice, as mindfulness should be rehearsed in nonurgent situations, which lay the foundation for employing the skills in urgent ones. Interpersonal effectiveness is the practice of relating and communicating well with others: being mindful of how communication can become maladaptive and incorporating a communication skill to minimize the emotional intensity of the exchange.
Distress tolerance skills are the ability to incorporate a particular type of coping skill in a given moment when experiencing a crisis. Emotion regulation is the module that helps a person gain insight into how emotions work and builds the skills required to manage emotions, as opposed to having emotions manage the person. [1]
As an example, picture the ideal way to purchase a car. In the beginning stages, purchasing a car can be a very reasonably minded task. In other words, research is completed: a person may choose a safe car that is fairly priced and gets good gas mileage. This reasonable approach may also include saving enough money to buy a late-model used car without having payments.
Let’s take that same reasonable approach and walk into the dealership. Picture a person being approached by a salesperson. The salesperson asks, “How may I help you?” and the person takes out the printout of the make and model of the car that he wishes to purchase outright, with cash and no payments. The salesperson then reviews the computer printouts and acknowledges that this car is still in stock. However, the salesperson says, “Before I take you to see your car, why don’t you have a seat in this brand new car that is featured in our showroom?” Because the salesperson was being nice, the customer decides to sit in this brand new car for a minute. And then, it happens …
The salesperson shuts the door of this brand new SUV. The SUV has ZERO miles on it, and the new-car smell starts to stimulate the olfactory sense. The smooth, shiny steering wheel is cool to the touch and feels good in the customer’s hands. From there, his thoughts start to change: “My $5,000 would be a good down payment on this car,” “The gas prices aren’t going to go up all that much more,” “The six years’ worth of payments will go by quick and, besides, everyone has a car payment.” With each thought, the excitement grows and, before he realizes what is happening, the customer is signing up for six years’ worth of payments, with interest, and will spend more money fueling and insuring the new car than he had anticipated.
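The long-term consequence the customer lost sight of can be made concrete with a little arithmetic. The sketch below (in Python) uses the standard amortized-loan payment formula; the SUV price, interest rate, and loan term are assumptions chosen for illustration, not figures from the scenario itself.

```python
# A hypothetical sketch (all figures assumed, not from the scenario):
# comparing the reasonable-mind plan of paying $5,000 cash for a used car
# with the emotional-mind purchase of a financed new SUV.

def monthly_payment(principal, annual_rate, months):
    """Standard amortized-loan payment: P * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

cash_on_hand = 5_000             # the original plan: a used car, paid in full
suv_price = 35_000               # assumed price of the showroom SUV
loan = suv_price - cash_on_hand  # the $5,000 becomes a down payment instead
payment = monthly_payment(loan, annual_rate=0.07, months=72)  # assumed 7% APR, 6 years

total_paid = cash_on_hand + payment * 72
print(f"Monthly payment:   ${payment:,.2f}")                  # roughly $511
print(f"Total for the SUV: ${total_paid:,.2f}")               # roughly $41,800
print(f"Beyond the plan:   ${total_paid - cash_on_hand:,.2f}")
```

Even with these made-up numbers, the financed purchase costs several times the original plan before fuel and insurance are counted, which is exactly the kind of long-term consequence that wise mind keeps in view.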
So, what happened in the above scenario? The customer started off from a very reasonable place. Once he entered the car, he was not being mindful of his emotional experience. Rather, he was focusing on his thoughts, maybe even fantasizing about what it would be like to roll into his driveway. In other words, he became unmindful of two things: his goal (to have reliable transportation without a payment) and his emotional experience (the escalation of excitement).
Although DBT is primarily used in clinical settings, it can also be employed in everyday life. For instance, core mindfulness skills involve building awareness and learning how to name the emotion that is experienced when the door of the new SUV shuts. The person probably felt a level of excitement and/or pressure. It’s one thing to be able to label the emotion, but learning to describe the emotion can be a bit more challenging. In other words, how does a person know he is becoming more excited or feeling more pressure?
Internal cues of excitement and/or pressure may include:
Once the emotion is recognized, the person then requires some emotion regulation and/or distress tolerance skills in order to minimize the chance of an emotionally based decision and increase the chances of a wise-minded decision. An example of an emotion regulation skill would be to be mindful of the escalating emotion and do the opposite of what the emotion is telling him to do. In this case, the excitement is telling him to stay in the car, continue to enjoy the new-car smell, and fantasize about driving the SUV out of the showroom. So, the opposite action would be to exit the SUV.
An example of a distress tolerance skill would be to take a timeout, walk outside, and take deep breaths. Once his emotions begin to subside, he can then walk back into the showroom and be interpersonally effective with his salesperson. In other words, from a wise-minded place, the customer can communicate with the salesperson in a manner such as the following:
“I came in with a specific make and model in mind and with the exact amount of money to pay it in full. I am feeling positive in my choice of vehicle, and I would like to process the paperwork for the car that I came in for. If we start the process now, you can get back to your day, see more customers, and I can be home at a decent time.”
To be interpersonally effective is to communicate with only facts, without using nonfacts, assumptions, or attacks. To be interpersonally effective also means to express feelings based on the facts and to assert the goal. Once the goal is stated, interpersonal effectiveness also involves communicating the “wins” of keeping to the goal. If the salesperson makes another attempt to sell a car that is not easily afforded, being interpersonally effective means recommunicating the goal and the positive consequences of achieving it.
The following video is a rap about “ACCEPTS” distress tolerance skills. Listen to the strategies offered.
Match the given behaviors with the appropriate ACCEPTS skills and strategies. Each behavior has three specific examples. Remember that all suggestions should be practiced mindfully; in other words, with all attention and focus on that one thing.
Mindfulness can also be described in the form of the acronym, R.A.I.N:
“R” is the recognition of an emotional experience. In other words, and as will be addressed in a later activity, an emotional experience begins with a body sensation; the more mindful a person is about an internal emotional experience, the more likely that person is to recognize the emotion. “A” is to allow or acknowledge that the emotion is present. It would not be helpful to fight the presence of an emotion, because that would, in a sense, make the emotion much stronger. “I” is to investigate or inquire: not only recognizing and acknowledging the emotion but also inquiring as to why the emotion might be present. Finally, “N” is to not identify with the emotion. In other words, one can be depressed without identifying oneself as a depressed person; it is more accurate to accept that, in the present moment, one is a person feeling depressed and that the depressed state is fleeting. Practicing R.A.I.N. decreases the chance that one becomes enmeshed with an emotional experience. It allows a person to take a step back and increases the chances of gaining a deeper understanding of the function of the emotional experience, be it depression, anger, or sadness.
Watch this TED lecture video given by neuroscientist Sara Lazar summarizing her own experiences with mindful practices and the research it led her to do.
You have already created an MBSR program for yourself; now it’s time to create a wide-reaching wellness plan. A wellness plan is intended to correct current imbalances in one’s wellness by having a person evaluate their physical/medical, intellectual, spiritual, mental/emotional, social, financial, occupational, and environmental wellness. The following five-step sequence will help you assess your current wellness and come up with a practical, holistic plan to improve areas that are lacking.
This is a very personal exercise and involves serious consideration of your current wellness and how it could be improved. If you are struggling, recruit a close friend or family member to help you with Steps 1-3. While a face-to-face conversation is optimal, this could also be done over the phone or through electronic means of communication (Skype, chat, text message, email, etc.).
“I think we probably noticed in his early teens that he became very conscious about aspects of his appearance....He began to brood over it quite a lot,” said Maria as she called in to the talk radio program to describe her son Robert.
Maria described how Robert had begun to worry about his weight. A friend had commented that he had a “fat” stomach, and Robert began to cut down on eating. Then he began to worry that he wasn’t growing enough and devised an elaborate series of stretching techniques to help him get taller.
Robert scrutinized his face and body in the mirror for hours, finding a variety of imagined defects. He believed that his nose was crooked, and he was particularly concerned about a lump that he saw on it: “A small lump,” said his mother. “I should say it wasn’t very significant, but it was significant to him.”
Robert insisted that all his misery stemmed from this lump on his nose, that everybody noticed it. In his sophomore year of high school, he had cosmetic surgery to remove it.
Around this time, Robert had his first panic attack and began to worry that everybody could notice him sweating and blushing in public. He asked his parents for a $10,000 loan, which he said was for overseas study. He used the money for a procedure designed to reduce sweating and blushing. Then, dissatisfied with the results, he had the procedure reversed.
Robert was diagnosed with body dysmorphic disorder. His mother told the radio host,
"At the time we were really happy because we thought that finally we actually knew what we were trying to fight and to be quite honest, I must admit I thought well it sounds pretty trivial....
"Things seemed to go quite well and he got a new girlfriend and he was getting excellent marks in his clinical work in hospital and he promised us that he wasn't going to have any more surgery."
However, a lighthearted comment from a friend about a noticeable vein in his forehead prompted a relapse. Robert had surgery to tie off the vein. When that didn’t solve all his problems as he had hoped, he attempted to have the procedure reversed but learned that it would require complicated microsurgery. He then used injections on himself to try opening the vein again, but he could never completely reverse the first surgery.
Robert committed suicide shortly afterward, in 2001. [1]
Stop for a minute and consider what it means to be “normal” or “abnormal.” Is it normal to wash your hands before you eat dinner? How hot should the water be? How long should you wash your hands? When does the act of hand washing move from what society views as “normal” to what others would deem “abnormal”? You will learn that the types of behavior considered “normal” or “abnormal” fall on a spectrum and are often less easily definable than you may have thought. In this unit, we will explore these concepts and consider how abnormality depends on both the consequences that result from a given behavior and the culture in which someone lives.
Let’s begin by looking at the definition of a psychological disorder. A psychological disorder is an ongoing dysfunctional pattern of thought, emotion, and behavior that causes significant distress, impairs a person’s normal functioning, and is considered deviant in that person’s culture or society. [2] Psychological disorders have much in common with other medical disorders. They are out of the patient’s control, they may in some cases be treated by drugs, and their treatment is often covered by medical insurance. Like medical problems, psychological disorders have both biological (nature) and environmental (nurture) influences. These causal influences are reflected in the bio-psycho-social model of illness, a model that you will learn about later in this module. [3]
Applying psychological science to our understanding and treatment of psychological disorders is termed abnormal psychology. About 1 in every 4 Americans (or over 78 million people) is affected by a psychological disorder during any one year, [4] and at least half a billion people are affected worldwide. The impact of mental illness is particularly strong on people who are poorer, of lower socioeconomic class, and from disadvantaged ethnic groups.
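The “1 in 4” and “over 78 million” figures above are two expressions of the same prevalence ratio. Here is a quick arithmetic check in Python; the U.S. population figure is an assumption, roughly right for when this text was written.

```python
# Rough arithmetic check of the one-year prevalence figure quoted above.
population = 312_000_000  # assumed approximate U.S. population at the time
affected = 78_000_000     # people affected by a psychological disorder in a year
print(f"One-year prevalence: {affected / population:.0%}")  # -> 25%, about 1 in 4
```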
People with psychological disorders are also stigmatized by the people around them, resulting in shame and embarrassment, as well as prejudice and discrimination against them. Thus the understanding and treatment of psychological disorders has broad implications for the everyday life of many people. The table below shows the prevalence (i.e., the frequency of occurrence of a given condition in a population at a given time) of some of the major psychological disorders in the United States.
Table: One-Year Prevalence Rates for Psychological Disorders in the United States. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA. Sources: Kessler, R. C., Chiu, W. T., Demler, O., and Walters, E. E. (2005). Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry, 62(6):617–627; Narrow, W. E., Rae, D. S., Robins, L. N., and Regier, D. A. (2002). Revised prevalence based estimates of mental disorders in the United States: Using a clinical significance criterion to reconcile 2 surveys’ estimates. Archives of General Psychiatry, 59(2):115–123.
In this unit our focus is on the disorders themselves. We review the major psychological disorders and consider their causes and their impact on the people who suffer from them.
As defined at the start of this unit, a psychological disorder is an ongoing dysfunctional pattern of thought, emotion, and behavior that causes significant distress and that is considered deviant in that person’s culture or society. [1] Like other medical disorders, psychological disorders have both biological (nature) and environmental (nurture) influences, and these causal influences are reflected in the bio-psycho-social model of illness. [2]
The bio-psycho-social model of illness is a way of understanding disorder that assumes that disorders are caused by biological, psychological, and social factors. The biological component of the bio-psycho-social model refers to the influences on disorder that come from the functioning of the individual’s body. Particularly important are genetic characteristics that make some people more vulnerable to a disorder than others and the influence of neurotransmitters. The psychological component of the bio-psycho-social model refers to the influences that come from the individual, such as patterns of negative thinking and stress responses. The social component of the bio-psycho-social model refers to the influences on disorder due to social and cultural factors such as socioeconomic status, homelessness, abuse, and discrimination.
To consider one example, the psychological disorder of schizophrenia has a biological cause because it is known that there are patterns of genes that make a person vulnerable to the disorder. [3] But whether or not the person with a biological vulnerability experiences the disorder depends in large part on psychological factors such as how the individual responds to the stress he experiences, as well as social factors such as whether or not he is exposed to stressful environments in adolescence and whether or not he has support from people who care about him. [4] [5] Similarly, mood and anxiety disorders are caused in part by biological factors such as hormones and neurotransmitters, in part by the individual’s particular thought patterns, and in part by the ways that other people in the social environment treat the person with the disorder. We will use the bio-psycho-social model as a framework for considering the causes and treatments of disorder.
Although they share many characteristics with them, psychological disorders are nevertheless different from medical conditions in important ways. For one, diagnosis of psychological disorders can be more difficult. Although a medical doctor can see cancer in the lungs using an MRI scan or see blocked arteries in the heart using cardiac catheterization, there is no corresponding test for psychological disorder. Current research is beginning to provide more evidence about the role of brain structures in psychological disorder, but for now the brains of people with severe mental disturbances often look identical to those of people without such disturbances.
Because there are no clear biological diagnoses, psychological disorders are instead diagnosed on the basis of clinical observations of the behaviors that the individual engages in. These observations find that emotional states and behaviors operate on a continuum, ranging from more “normal” and “accepted” to more “deviant,” “abnormal,” and “unaccepted.” The behaviors associated with disorder are in many cases the same behaviors we engage in during our “normal” everyday lives. Washing one’s hands is a normal healthy activity, but it can be overdone by those with an obsessive-compulsive disorder (OCD). It is not unusual to worry about and try to improve one’s body image, but Robert’s struggle with his personal appearance, as discussed at the beginning of this module, was clearly unusual, unhealthy, and distressing to him.
Whether a given behavior is considered a psychological disorder is determined not only by whether a behavior is unusual (e.g., whether it is “mild” anxiety versus “extreme” anxiety) but also by whether a behavior is maladaptive—that is, the extent to which it causes distress (e.g., pain and suffering) and dysfunction (impairment in one or more important areas of functioning) to the individual. [6] An intense fear of spiders, for example, would not be considered a psychological disorder unless it has a significant negative impact on the sufferer’s life, for instance by causing him or her to be unable to step outside the house. The focus on distress and dysfunction means that behaviors that are simply unusual (such as some political, religious, or sexual practices) are not classified as disorders.
Another difficulty in diagnosing psychological disorders is that they frequently occur together. For instance, people diagnosed with anxiety disorders also often have mood disorders, [7] and people diagnosed with one personality disorder frequently suffer from other personality disorders as well. Comorbidity occurs when people who suffer from one disorder also suffer at the same time from other disorders. Because many psychological disorders are comorbid, most severe mental disorders are concentrated in a small group of people (about 6% of the population) who have more than three of them. [8]
Every culture and society has its own views on what constitutes abnormal behavior and what causes it. [9] The Old Testament Book of Samuel tells us that as a consequence of his sins, God sent King Saul an evil spirit to torment him (1 Samuel 16:14). Ancient Hindu tradition attributed psychological disorders to sorcery and witchcraft. During the Middle Ages it was believed that mental illness occurred when the body was infected by evil spirits, particularly the devil. Remedies included whipping, bloodletting, purges, and trepanation (cutting a hole in the skull) to release the demons.
Until the 18th century, the most common treatment for the mentally ill was to incarcerate them in asylums or “madhouses.” During the 18th century, however, some reformers began to oppose this brutal treatment of the mentally ill, arguing that mental illness was a medical problem that had nothing to do with evil spirits or demons. In France, one of the key reformers was Philippe Pinel (1745–1826), who believed that mental illness was caused by a combination of physical and psychological stressors, exacerbated by inhumane conditions. Pinel advocated the introduction of exercise, fresh air, and daylight for the inmates, as well as treating them gently and talking with them. In America, the reformers Benjamin Rush (1745–1813) and Dorothea Dix (1802–1887) were instrumental in creating mental hospitals that treated patients humanely and attempted to cure them if possible. These reformers saw mental illness as an underlying psychological disorder, which was diagnosed according to its symptoms and which could be cured through treatment.
The reformers Philippe Pinel, Benjamin Rush, and Dorothea Dix fought the often brutal treatment of the mentally ill and were instrumental in changing perceptions and treatment of them.
Despite the progress made since the 1800s in public attitudes about those who suffer from psychological disorders, people, including police, coworkers, and even friends and family members, still stigmatize people with psychological disorders. A stigma refers to a disgrace or defect that indicates that a person belongs to a culturally devalued social group. In some cases the stigma of mental illness is accompanied by the use of disrespectful and dehumanizing labels, including names such as “crazy,” “nuts,” “mental,” “schizo,” and “retard.” The stigma of mental disorder affects people while they are ill, while they are healing, and even after they have healed. [10] On a community level, stigma can affect the kinds of services social service agencies give to people with mental illness, and the treatment provided to them and their families by schools, workplaces, places of worship, and health-care providers. Stigma about mental illness also leads to employment discrimination, despite the fact that with appropriate support, even people with severe psychological disorders are able to hold a job. [11] [12] [13] [14]
The mass media has a significant influence on society’s attitude toward mental illness. [15] While media portrayal of mental illness is often sympathetic, negative stereotypes still remain in newspapers, magazines, film, and television. (See the following video for an example.) Television advertisements may perpetuate negative stereotypes about the mentally ill. Burger King recently ran an ad called “The King’s Gone Crazy” in which the company’s mascot runs around an office complex carrying out acts of violence and wreaking havoc.
The most significant problem of the stigmatization of those with psychological disorder is that it slows their recovery. People with mental problems internalize societal attitudes about mental illness, often becoming so embarrassed or ashamed that they conceal their difficulties and fail to seek treatment. Stigma leads to lowered self-esteem, increased isolation, and hopelessness, and it may negatively influence the individual’s family and professional life. [16]
Despite all of these challenges, however, many people overcome psychological disorders and go on to lead productive lives. It is up to all of us who are informed about the causes of psychological disorder and the impact of these conditions on people to understand, first, that mental illness is not a “fault” any more than is cancer. People do not choose to have a mental illness. Second, we must all work to help overcome the stigma associated with disorder. Organizations such as the National Alliance on Mental Illness, [17] for example, work to reduce the negative impact of stigma through education, community action, individual support, and other techniques.
Psychologists have developed criteria that help them determine whether behavior should be considered a psychological disorder and which of the many disorders particular behaviors indicate. These criteria are laid out in a 1,000-page manual known as the Diagnostic and Statistical Manual of Mental Disorders (DSM), a document that provides a common language and standard criteria for the classification of mental disorders. [1] The DSM is used by therapists, researchers, drug companies, health insurance companies, and policymakers in the United States to determine what services are appropriately provided for treating patients with given symptoms.
The first edition of the DSM was published in 1952 on the basis of census data and psychiatric hospital statistics. Since then, the DSM has been revised five times. The last major revision was the fourth edition (DSM-IV), published in 1994, and an update of that document was produced in 2000 (DSM-IV-TR). The fifth edition (DSM-V) is currently undergoing review, planning, and preparation and is scheduled to be published in 2013. The DSM-IV-TR was designed in conjunction with the World Health Organization’s 10th version of the International Classification of Diseases (ICD-10), which is used as a guide for mental disorders in Europe and other parts of the world.
As you can see in the table below, the DSM organizes the diagnosis of disorder according to five dimensions (or axes) relating to different aspects of disorder or disability. The axes are important to remember when we think about psychological disorder, because they make it clear not only that there are different types of disorder, but that those disorders have a variety of different causes. Axis I includes the most usual clinical disorders, including mood disorders and anxiety disorders; Axis II includes the less severe but long-lasting personality disorders as well as mental retardation; Axis III and Axis IV relate to physical symptoms and social-cultural factors, respectively. The axes remind us that when making a diagnosis we must look at the complete picture, including biological, personal, and social-cultural factors.
Table: DSM Axes. The DSM organizes psychological disorders into five dimensions (known as axes) that concern the different aspects of disorder. From Flat World Knowledge, Introduction to Psychology, v1.0. Adapted from American Psychiatric Association. (2000). Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.). Washington, DC: Author.
The DSM does not attempt to specify the exact symptoms that are required for a diagnosis. Rather, the DSM uses categories, and patients whose symptoms are similar to the description of the category are said to have that disorder. The DSM frequently uses qualifiers to indicate different levels of severity within a category. For instance, the disorder of mental retardation can be classified as mild, moderate, or severe.
Each revision of the DSM takes into consideration new knowledge as well as changes in cultural norms about disorder. Homosexuality, for example, was listed as a mental disorder in the DSM until 1973, when it was removed in response to advocacy by politically active gay rights groups and changing social norms. The current version of the DSM lists about 400 disorders. Some of the major categories are shown below, and you may go here and browse the complete list.
The DSM has been criticized on several grounds: the nature of its categorization system (which is frequently revised in an attempt to address these criticisms), its tendency to classify more behaviors as disorders with every revision (even “academic problems” are now listed as a potential psychological disorder), and its primary focus on Western illness. It is nevertheless a comprehensive, practical, and necessary tool that provides a common language to describe disorder. Most U.S. insurance companies will not pay for therapy unless the patient has a DSM diagnosis. The DSM approach allows a systematic assessment of the patient, taking into account the mental disorder in question, the patient’s medical condition, psychological and cultural factors, and the way the patient functions in everyday life.
Anxiety, the nervousness or agitation that we sometimes experience, often about something that is going to happen, is a natural part of life. We all feel anxious at times, maybe when we think about our upcoming visit to the dentist or the presentation we have to give to our class next week. Anxiety is an important and useful human emotion; it is associated with the activation of the sympathetic nervous system and the physiological and behavioral responses that help protect us from danger. But too much anxiety can be debilitating, and every year millions of people suffer from anxiety disorders, which are psychological disturbances marked by irrational fears, often of everyday objects and situations. [1]
Consider the following, in which “Chase” describes her feelings of a persistent and exaggerated sense of anxiety, even when there is little or nothing in her life to provoke it:
For the past six months now I’ve had a really bad feeling inside of me. The best way to describe it is like a really bad feeling of negative inevitability, like something really bad is impending, but I don’t know what. It’s like I’m on trial for murder or I’m just waiting to be sent down for something. I have it all of the time but it gets worse in waves that come from nowhere with no apparent triggers. I used to get it before going out for nights out with friends, and it kinda stopped me from doing it as I’d rather not go out and stress about the feeling, but now I have it for more days than not so it doesn’t really make a difference anymore. [2]
Chase is probably suffering from a generalized anxiety disorder (GAD), a psychological disorder diagnosed in situations in which a person has been excessively worrying about money, health, work, family life, or relationships for at least 6 months, even though he or she knows that the concerns are exaggerated, and when the anxiety causes significant distress and dysfunction.
In addition to their feelings of anxiety, people who suffer from GAD may also experience a variety of physical symptoms, including irritability, sleep troubles, difficulty concentrating, muscle aches, trembling, perspiration, and hot flashes. The sufferer cannot deal with what is causing the anxiety, nor avoid it, because there is no clear cause for anxiety. In fact, the sufferer frequently knows, at least cognitively, that there is really nothing to worry about.
About 10 million Americans suffer from GAD, and about two thirds are women. [1] [3] Generalized anxiety disorder is most likely to develop between the ages of 7 and 40 years, but its influence may in some cases lessen with age. [4]
APA Diagnostic Criteria for Generalized Anxiety Disorder
Excessive anxiety about a number of events or activities, occurring more days than not, for at least 6 months.
The person finds it difficult to control the worry.
The anxiety and worry are associated with at least three of the following six symptoms (with at least some symptoms present for more days than not, for the past 6 months): restlessness or feeling keyed up or on edge, being easily fatigued, difficulty concentrating or the mind going blank, irritability, muscle tension, and sleep disturbance.
The focus of the anxiety and worry is not confined to features of an Axis I disorder, such as being embarrassed in public (as in social phobia), being contaminated (as in obsessive-compulsive disorder), being away from home or close relatives (as in separation anxiety disorder), gaining weight (as in anorexia nervosa), having multiple physical complaints (as in somatization disorder), or having a serious illness (as in hypochondriasis); and the anxiety and worry do not occur exclusively during posttraumatic stress disorder.
The anxiety, worry, or physical symptoms cause clinically significant distress or impairment in social or occupational functioning.
The disturbance does not occur exclusively during a mood disorder, a psychotic disorder, pervasive developmental disorder, substance use, or general medical condition. [5]
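To make the logic of a criteria set like this concrete, here is a minimal sketch in Python of how the checklist above combines: every criterion must hold, including the 6-month duration and the “at least three of six symptoms” threshold. The function and its inputs are our own illustrative shorthand, not a clinical instrument.

    # Purely illustrative: how the DSM-style GAD checklist above combines.
    # Not a clinical tool; the names below are shorthand for the criteria.

    SYMPTOMS = {"restlessness", "fatigue", "poor concentration",
                "irritability", "muscle tension", "sleep disturbance"}

    def meets_gad_criteria(duration_months, symptoms_present,
                           worry_uncontrollable, significant_distress,
                           better_explained_by_other_disorder):
        return (duration_months >= 6                          # at least 6 months
                and worry_uncontrollable                      # hard to control the worry
                and len(symptoms_present & SYMPTOMS) >= 3     # 3 of the 6 symptoms
                and significant_distress                      # clinically significant
                and not better_explained_by_other_disorder)   # exclusion criteria

    # Example: 8 months of uncontrollable, distressing worry with three symptoms.
    print(meets_gad_criteria(8, {"irritability", "muscle tension", "fatigue"},
                             True, True, False))  # prints: True

Note that the diagnosis is conjunctive: failing any single criterion (say, a duration of only 5 months) makes the overall judgment negative, which is why clinicians assess each criterion separately.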
Now consider the following, in which “Ceejay” describes her first panic attack:
When I was about 30 I had my first panic attack. I was driving home, my three little girls were in their car seats in the back, and all of a sudden I couldn’t breathe, I broke out into a sweat, and my heart began racing and literally beating against my ribs! I thought I was going to die. I pulled off the road and put my head on the wheel. I remember songs playing on the CD for about 15 minutes and my kids’ voices singing along. I was sure I’d never see them again. And then, it passed. I slowly got back on the road and drove home. I had no idea what it was. [1]
Ceejay is experiencing panic disorder, a psychological disorder characterized by sudden attacks of anxiety and terror that have led to significant behavioral changes in the person’s life. Symptoms of a panic attack include shortness of breath, heart palpitations, trembling, dizziness, choking sensations, nausea, and an intense feeling of dread or impending doom. Panic attacks can often be mistaken for heart attacks or other serious physical illnesses, and they may lead the person experiencing them to go to a hospital emergency room. Panic attacks may last as little as 1 minute or as long as 20 minutes, but they often peak and subside within about 10 minutes.
Sufferers are often anxious because they fear that they will have another attack. They focus their attention on the thoughts and images of their fears, becoming excessively sensitive to cues that signal the possibility of threat. [2] They may also become unsure of the source of their arousal, misattributing it to situations that are not actually the cause. As a result, they may begin to avoid situations in which attacks have occurred in the past, such as driving, using an elevator, or being in public places. Panic disorder affects about 3% of the American population in a given year.
APA Diagnostic Criteria for Panic Disorder
Recurrent unexpected panic attacks
At least one of the attacks has been followed by at least 1 month of one or more of the following: persistent concern about having additional attacks; worry about the implications of the attack or its consequences (e.g., losing control, having a heart attack, “going crazy”); or a significant change in behavior related to the attacks.
Presence or absence of agoraphobia
The panic attacks are not due to the direct physiologic effects of a substance (e.g., a drug of abuse, a medication) or a general medical condition (e.g., hyperthyroidism).
The panic attacks are not better accounted for by another mental disorder. [3]
A phobia (from the Greek word phobos, which means “fear”) is a specific fear of a certain object, situation, or activity. The fear experience can range from a sense of unease to a full-blown panic attack. Most people learn to live with their phobias, but for others the fear can be so debilitating that they go to extremes to avoid the fearful situation. A sufferer of arachnophobia (fear of spiders), for example, may refuse to enter a room until it has been checked thoroughly for spiders, or may refuse to vacation in the countryside because spiders may be there. Phobias are characterized by their specificity and their irrationality. A person with acrophobia (a fear of heights) could fearlessly sail around the world on a sailboat yet refuse to go out onto the balcony on the fifth floor of a building.
A common phobia is social phobia, extreme shyness around people or discomfort in social situations. Social phobia may be specific to a certain event, such as speaking in public or using a public restroom, or it can be a more generalized anxiety toward almost all people outside of close family and friends. People with social phobia will often experience physical symptoms in public, such as sweating profusely, blushing, stuttering, nausea, and dizziness. They are convinced that everybody around them notices these symptoms as they are occurring. Women are somewhat more likely than men to suffer from social phobia.
The most incapacitating phobia is agoraphobia, defined as anxiety about being in places or situations from which escape might be difficult or embarrassing, or in which help may not be available. [1] Often, agoraphobia occurs together with panic disorder, or the individual may be concerned about having a panic attack in the feared situations or places. Typical places that provoke the panic attacks are parking lots; crowded streets or shops; and bridges, tunnels, or expressways. People (mostly women) who suffer from agoraphobia may have great difficulty leaving their homes and interacting with other people.
Phobias affect about 9% of American adults, and they are about twice as prevalent in women as in men. [2] [3] In most cases phobias first appear in childhood and adolescence and usually persist into adulthood.
APA Diagnostic Criteria for Specific Phobia
Persistent fear that is excessive or unreasonable, cued by the presence or anticipation of a specific object or situation.
Exposure provokes immediate anxiety, which can take the form of a situationally predisposed panic attack.
Patients recognize that the fear is excessive or unreasonable.
Patients avoid the phobic situation or else endure it with intense anxiety or distress.
The distress in the feared situation interferes significantly with the person's normal routine, occupational functioning, or social activities or relationships.
In persons younger than 18 years, the duration is at least 6 months.
The fear is not better accounted for by another mental disorder. [1]
APA Diagnostic Criteria for Social Phobia
A fear of one or more social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others and feels he or she will act in an embarrassing manner.
Exposure to the feared social situation provokes anxiety, which can take the form of a panic attack.
The person recognizes that the fear is excessive or unreasonable.
The feared social or performance situations are avoided or are endured with distress.
The avoidance, anxious anticipation, or distress in the feared situation interferes significantly with the person's normal routine, occupational functioning, or social activities or relationships.
The condition is not better accounted for by another mental disorder, substance use, or general medical condition.
If a general medical condition or another mental disorder is present, the fear is unrelated to it.
The phobia may be considered generalized if fears include most social situations. [1]
APA Diagnostic Criteria for Agoraphobia
Fear of being in places or situations from which escape might be difficult (or embarrassing) or in which help might not be available in the event of having unexpected panic-like symptoms.
The situations are typically avoided or require the presence of a companion.
The condition is not better accounted for by another mental disorder. [1]
Although he is best known for his perfect shots on the field, soccer star David Beckham also suffers from obsessive-compulsive disorder (OCD). Beckham’s experience with obsessive behavior is not unusual. We all get a little obsessive at times. We may continuously replay a favorite song in our heads, worry about getting the right outfit for an upcoming party, or find ourselves analyzing a series of numbers that seem to have a certain pattern. And our everyday compulsions can be useful. Going back inside the house once more to be sure that we really did turn off the sink faucet or checking the mirror a couple of times to be sure that our hair is combed are not necessarily bad ideas.
Obsessive-compulsive disorder (OCD) is a psychological disorder that is diagnosed when an individual continuously experiences obsessions (distressing, intrusive, or frightening thoughts), and engages in compulsions (repetitive behaviors or mental acts) in an attempt to calm these obsessions. OCD is diagnosed when the obsessive thoughts are so disturbing and the compulsive behaviors are so time consuming that they cause distress and significant dysfunction in a person’s everyday life. Washing your hands once or even twice to make sure that they are clean is normal; washing them 20 times is not. Keeping your fridge neat is a good idea; spending hours a day on it is not. The sufferers know that these rituals are senseless, but they cannot bring themselves to stop them, in part because the relief they feel after they perform them acts as a reinforcer, making the behavior more likely to occur again.
Sufferers of OCD may avoid certain places that trigger the obsessive thoughts or may use alcohol or drugs to try to calm themselves down. OCD has a low prevalence rate (about 1% of the population in a given year) in relation to other anxiety disorders and usually develops in adolescence or early adulthood. [1] [2] The course of OCD varies from person to person. Symptoms can come and go, decrease, or worsen over time.
APA Diagnostic Criteria for Obsessive-Compulsive Disorder
Obsessions: recurrent and persistent thoughts, impulses, or images that are experienced as intrusive and inappropriate and that cause marked anxiety or distress; the person attempts to ignore or suppress them, or to neutralize them with some other thought or action.
Compulsions: repetitive behaviors (e.g., hand washing, ordering, checking) or mental acts (e.g., praying, counting, repeating words silently) that the person feels driven to perform in response to an obsession, and that are aimed at preventing or reducing distress.
Obsessive-Compulsive Disorder: the obsessions or compulsions cause marked distress, are time consuming (taking more than 1 hour a day), or significantly interfere with the person’s normal routine, occupational functioning, or usual social activities or relationships.
"If you imagine burnt pork and plastic, I can still taste it," says Chris Duggan, on his experiences as a soldier in the Falklands War in 1982. "These helicopters were coming in and we were asked to help get the boys off...when they opened the doors the stench was horrendous."
When he left the army in 1986, he suffered from PTSD. "I was a bit psycho," he says. "I was verbally aggressive, very uncooperative. I was arguing with my wife, and eventually we divorced. I decided to change the kitchen around one day, get all new stuff, so I threw everything out of the window. I was 10 stories up in a flat. I poured brandy all over the video and it melted. I flooded the bathroom." [1]
People who have survived a terrible ordeal, such as combat, torture, sexual assault, imprisonment, abuse, natural disasters, or the death of someone close to them, may develop posttraumatic stress disorder (PTSD). The anxiety may begin months or even years after the event. People with PTSD experience high levels of anxiety along with reexperiencing the trauma (flashbacks), and a strong desire to avoid any reminders of the event. They may lose interest in things they used to enjoy; startle easily; have difficulty feeling affection; and may experience terror, rage, depression, or insomnia. The symptoms may be felt especially when approaching the area where the event took place or when the anniversary of that event is near.
PTSD affects about 5 million Americans, including victims of the 9/11 terrorist attacks, the wars in Afghanistan and Iraq, and Hurricane Katrina. Sixteen percent of Iraq war veterans, for example, reported experiencing symptoms of PTSD. [2] PTSD, which has its own diagnosis in the Diagnostic and Statistical Manual of Mental Disorders (DSM), is also a frequent outcome of childhood or adult sexual abuse. Women are more likely to develop PTSD than men. [3]
Not everyone who experiences a trauma will develop PTSD. Risk factors for PTSD include the degree of the trauma’s severity, the lack of family and community support, and additional life stressors. [4] Many people with PTSD also suffer from another mental disorder, particularly depression, other anxiety disorders, and substance abuse. [5]
APA Diagnostic Criteria for Posttraumatic Stress Disorder
The person has been exposed to a traumatic event in which both of the following were present: the person experienced, witnessed, or was confronted with an event involving actual or threatened death or serious injury; and the person’s response involved intense fear, helplessness, or horror.
The traumatic event is persistently re-experienced in at least one of the following ways: recurrent and intrusive distressing recollections of the event; recurrent distressing dreams of the event; acting or feeling as if the event were recurring (flashbacks); or intense psychological or physiological distress at exposure to cues that resemble the event.
The person persistently avoids stimuli associated with the trauma and has numbing of general responsiveness, including at least three of the following: efforts to avoid thoughts, feelings, or conversations associated with the trauma; efforts to avoid activities, places, or people that arouse recollections of it; inability to recall an important aspect of the trauma; markedly diminished interest in significant activities; feelings of detachment from others; restricted range of affect; or a sense of a foreshortened future.
Persistent symptoms of increased arousal are indicated by at least two of the following: difficulty falling or staying asleep, irritability or outbursts of anger, difficulty concentrating, hypervigilance, or an exaggerated startle response.
Duration of the disturbance is more than 1 month.
The disturbance causes clinically significant distress or impairment in social, occupational, or other important areas of functioning. [6]
For each person, select the diagnosis that best describes his or her symptoms.
Both nature and nurture contribute to the development of anxiety disorders. In terms of our evolutionary history, humans have evolved to fear dangerous situations: our ancestors who had a healthy fear of the dark, of storms, of high places, of closed spaces, and of spiders and snakes were more likely to survive and to have descendants. Our evolutionary experience can account for some modern fears as well. A fear of elevators may be a modern version of our fear of closed spaces, while a fear of flying may be related to a fear of heights.
Also supporting the role of biology, anxiety disorders, including PTSD, are heritable, [1] and molecular genetics studies have found a variety of genes that are important in the expression of such disorders. [2] [3] Neuroimaging studies have found that anxiety disorders are linked to areas of the brain that are associated with emotion, blood pressure and heart rate, decision making, and action monitoring. [4] [5] People who experience PTSD also have a somewhat smaller hippocampus in comparison with those who do not, a difference that may make them especially sensitive to traumatic events. [6]
Whether the genetic predisposition to anxiety becomes expressed as a disorder depends on environmental factors. People who were abused in childhood are more likely to be anxious than those who had normal childhoods, even with the same genetic disposition to anxiety sensitivity. [7] And the most severe anxiety and dissociative disorders, such as PTSD, are usually triggered by the experience of a major stressful event. One problem is that modern life creates a lot of anxiety: although our life expectancy and quality of life have improved over the past 50 years, anxiety levels have risen sharply over the same period. [8] These changes suggest that most anxiety disorders stem from perceived, rather than actual, threats to our well-being.
Anxieties are also learned through classical and operant conditioning. Just as rats that are shocked in their cages develop a chronic anxiety toward their laboratory environment (which has become a conditioned stimulus for fear), rape victims may feel anxiety when passing by the scene of the crime, and victims of PTSD may react to memories or reminders of the stressful event. Classical conditioning may also be accompanied by stimulus generalization. A single dog bite can lead to generalized fear of all dogs; a panic attack that follows an embarrassing moment in one place may be generalized to a fear of all public places. People’s responses to their anxieties are often reinforced. Behaviors become compulsive because they provide relief from the torment of anxious thoughts. Similarly, leaving or avoiding fear-inducing stimuli leads to feelings of calmness or relief, which reinforces phobic behavior.
The everyday variations in our feelings of happiness and sadness reflect our mood, which can be defined as the positive or negative feelings that are in the background of our everyday experiences. In most cases we are in a relatively good mood, and this positive mood has some positive consequences: it encourages us to do what needs to be done and to make the most of the situations we are in. [1] When we are in a good mood our thought processes open up, we are more likely to approach others, and we may think more creatively. We are also friendlier and more helpful to others when we are in a good mood than when we are in a bad mood. [2]
On the other hand, when we are in a bad mood we are more likely to prefer to be alone rather than interact with others, we focus on the negative things around us, and our creativity suffers.
It is not unusual to feel “down” or “low” at times, particularly after a painful event such as the death of someone close to us, a disappointment at work, or an argument with a partner. We often get depressed when we are tired, and many people report being particularly sad during the winter when the days are shorter. Mood (or affective) disorders are psychological disorders in which the person’s mood negatively influences his or her physical, perceptual, social, and cognitive processes. People who suffer from mood disorders tend to experience more intense moods, and particularly more intense negative moods. About 10% of the U.S. population suffers from a mood disorder in a given year.
The most common symptom of mood disorders is negative mood, also known as sadness or depression. Consider the feelings of this person, who was struggling with depression and was diagnosed with major depressive disorder:
I didn’t want to face anyone; I didn’t want to talk to anyone. I didn’t really want to do anything for myself....I couldn’t sit down for a minute really to do anything that took deep concentration....It was like I had big huge weights on my legs and I was trying to swim and just kept sinking. And I’d get a little bit of air, just enough to survive and then I’d go back down again. It was just constantly, constantly just fighting, fighting, fighting, fighting, fighting. [3]
Recurrence of depressive episodes is fairly common and is greatest for those who first experience depression before the age of 15 years. About twice as many women as men suffer from depression. [4] This gender difference is consistent across many countries and cannot be explained entirely by the fact that women are more likely to seek treatment for their depression. Rates of depression have been increasing in recent years, although the reasons for this increase are not known. [5]
As you can see below, the experience of depression has a variety of negative effects on our behaviors. In addition to the loss of interest, productivity, and social contact that accompanies depression, the person’s sense of hopelessness and sadness may become so severe that he or she considers or even succeeds in committing suicide. Suicide is the 11th leading cause of death in the United States, and a suicide occurs approximately every 16 minutes. Almost all the people who commit suicide have a diagnosable psychiatric disorder at the time of their death. [6] [7] [8]
Behaviors Associated with Depression
The level of depression observed in people with mood disorders varies widely. People who experience depression for many years, such that it comes to seem normal and part of their everyday life, and who feel that they are rarely or never happy, will likely be diagnosed with a mood disorder. If the depression is mild but long-lasting, the diagnosis will be dysthymia, a condition characterized by mild, but chronic, depressive symptoms that last for at least 2 years.
If the depression continues and becomes even more severe, the diagnosis may become that of major depressive disorder. Major depressive disorder (clinical depression) is a mental disorder characterized by an all-encompassing low mood accompanied by low self-esteem and by loss of interest or pleasure in normally enjoyable activities. Those who suffer from major depressive disorder feel an intense sadness, despair, and loss of interest in pursuits that once gave them pleasure. These negative feelings profoundly limit the individual’s day-to-day functioning and ability to maintain and develop interests in life. [1]
About 21 million American adults suffer from a major depressive disorder in any given year; this is approximately 7% of the American population. Major depressive disorder occurs about twice as often in women as it does in men. [2] [3] In some cases clinically depressed people lose contact with reality and may receive a diagnosis of major depressive episode with psychotic features. In these cases the depression includes delusions and hallucinations.
Juliana is a 21-year-old single woman. Over the past several years she had been treated by a psychologist for depression, but for the past few months she had been feeling a lot better. Juliana had landed a good job in a law office and found a steady boyfriend. She told her friends and parents that she had been feeling particularly good—her energy level was high and she was confident in herself and her life.
One day Juliana was feeling so good that she impulsively quit her new job and left town with her boyfriend on a road trip. But the trip didn’t turn out well because Juliana became impulsive, impatient, and easily angered. Her euphoria continued, and in one of the towns that they visited she left her boyfriend and went to a party with some strangers that she had met. She danced into the early morning and ended up having sex with several of the men.
Eventually Juliana returned home to ask for money, but when her parents found out about her recent behavior, and she reacted aggressively and abusively when they confronted her about it, they referred her to a social worker. Juliana was hospitalized and diagnosed with bipolar disorder.
Whereas dysthymia and major depressive disorder are characterized by overwhelming negative moods, bipolar disorder is a psychological disorder characterized by swings in mood from overly “high” to sad and hopeless, and back again, with periods of near-normal mood in between. Bipolar disorder is diagnosed in cases such as Juliana’s, where experiences with depression are followed by a more normal period and then a period of mania or euphoria in which the person feels particularly awake, alive, excited, and involved in everyday activities but is also impulsive, agitated, and distracted. Without treatment, it is likely that Juliana would cycle back into depression and then eventually into mania again, with the likelihood that she would harm herself or others in the process.
Bipolar disorder is an often chronic and lifelong condition that may begin in childhood. Although the normal pattern involves swings from high to low, in some cases the person may experience both highs and lows at the same time. Determining whether a person has bipolar disorder is difficult due to the frequent presence of comorbidity with both depression and anxiety disorders. Bipolar disorder is more likely to be diagnosed when it is initially observed at an early age, when the frequency of depressive episodes is high, and when there is a sudden onset of the symptoms. [2]
Mood disorders are known to be at least in part genetic, because they are heritable. [1] [2] Neurotransmitters also play an important role in mood disorders. Serotonin, dopamine, and norepinephrine are all known to influence mood, [3] and drugs that influence the actions of these chemicals are often used to treat mood disorders. The brains of those with mood disorders may in some cases show structural differences from those without them. Videbech and Ravnkilde [4] found that the hippocampus was smaller in depressed subjects than in normal subjects, and this may be the result of reduced neurogenesis (the process of generating new neurons) in depressed people. [5] Antidepressant drugs may alleviate depression in part by increasing neurogenesis. [6]
Avshalom Caspi and his colleagues [7] used a longitudinal study to test whether genetic predispositions might lead some people, but not others, to suffer from depression as a result of environmental stress. Their research focused on a particular gene, the 5-HTT gene, which is known to be important in the production and use of the neurotransmitter serotonin. The researchers focused on this gene because serotonin is known to be important in depression, and because selective serotonin reuptake inhibitors (SSRIs) have been shown to be effective in treating depression.
People who experience stressful life events, for instance involving threat, loss, humiliation, or defeat, are likely to experience depression. But biological-situational models suggest that a person’s sensitivity to stressful events depends on his or her genetic makeup. The researchers therefore expected that people with one type of genetic pattern would show depression following stress to a greater extent than people with a different type of genetic pattern.
The research included a sample of 1,037 adults from Dunedin, New Zealand. Genetic analysis on the basis of DNA samples allowed the researchers to divide the sample into two groups on the basis of the characteristics of their 5-HTT gene. One group had a short version (or allele) of the gene, whereas the other group did not have the short allele of the gene.
The participants also completed a measure where they indicated the number and severity of stressful life events that they had experienced over the past 5 years. The events included employment, financial, housing, health, and relationship stressors. The dependent measure in the study was the level of depression reported by the participant, as assessed using a structured interview test. [8]
As you can see in the figure below, as the number of stressful experiences the participants reported increased from 0 to 4, depression also significantly increased for the participants with the short version of the gene (top panel). But for the participants who did not have a short allele, increasing stress did not increase depression (bottom panel). Furthermore, for the participants who experienced 4 stressors over the past 5 years, 33% of the participants who carried the short version of the gene became depressed, whereas only 17% of participants who did not have the short version did.
This important study provides an excellent example of how genes and environment work together: An individual’s response to environmental stress was influenced by his or her genetic makeup.
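As a rough numerical illustration of what “genes and environment work together” means here, the sketch below reproduces the shape of the result. Only the 33% and 17% figures at four stressors come from the study as reported above; the other rates are hypothetical placeholders chosen to mimic the described pattern.

    # Illustrative sketch of the gene x environment interaction pattern.
    # Only the 33% and 17% endpoints are from the text; the other rates
    # are hypothetical placeholders that mimic the described shape.

    depression_rate = {
        "short allele":    {0: 0.10, 1: 0.14, 2: 0.19, 3: 0.26, 4: 0.33},  # rises with stress
        "no short allele": {0: 0.17, 1: 0.17, 2: 0.17, 3: 0.17, 4: 0.17},  # did not rise
    }

    for group, rates in depression_rate.items():
        trend = " -> ".join(f"{rates[n]:.0%}" for n in sorted(rates))
        print(f"{group:>15}: {trend}")

    # Relative risk at four stressors, using the two reported figures:
    print(f"relative risk: {0.33 / 0.17:.1f}x")  # about 1.9x

The key point is the interaction: the same environmental stress predicts depression much more strongly in one genetic group than in the other.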
But psychological and social determinants are also important in creating mood disorders and depression. In terms of psychological characteristics, mood states are influenced in large part by our cognitions. Negative thoughts about ourselves and our relationships to others create negative moods, and a goal of cognitive therapy for mood disorders is to attempt to change people’s cognitions to be more positive. Negative moods also create negative behaviors toward others, such as acting sad, slouching, and avoiding others, which may lead those others to respond negatively to the person, for instance by isolating that person, which then creates even more depression. You can see how it might become difficult for people to break out of this “cycle of depression.”
Weissman and colleagues [9] found that rates of depression varied greatly among countries, with the highest rates in European and American countries and the lowest rates in Asian countries. These differences seem to be due to discrepancies between individual feelings and cultural expectations about what one should feel. People from European and American cultures report that it is important to experience emotions such as happiness and excitement, whereas the Chinese report that it is more important to be stable and calm. Because Americans may feel that they are not happy or excited but that they are supposed to be, this may increase their depression. [10]
The term schizophrenia, which in Greek means “split mind,” was first used to describe a psychological disorder by Eugen Bleuler (1857–1939), a Swiss psychiatrist who was studying patients who had very severe thought disorders. Schizophrenia is a serious psychological disorder marked by delusions, hallucinations, loss of contact with reality, inappropriate affect, disorganized speech, social withdrawal, and deterioration of adaptive behavior.
Schizophrenia is the most chronic and debilitating of all psychological disorders. It affects men and women equally, occurs at similar rates across ethnicities and cultures, and affects approximately 3 million people in the United States at any one time. [1] Onset of schizophrenia is usually between the ages of 16 and 30 and rarely after the age of 45 or in children. [2] [3]
Schizophrenia is accompanied by a variety of symptoms, but not all patients have all of them. [4] Symptoms are divided into positive symptoms, negative symptoms, and cognitive symptoms. [5] [1] Positive symptoms refer to the presence of abnormal behaviors or experiences (such as hallucinations) that are not observed in normal people, whereas negative symptoms (such as lack of affect and an inability to socialize with others) refer to the loss or deterioration of thoughts and behaviors that are typical of normal functioning. Finally, cognitive symptoms are the changes in cognitive processes that accompany schizophrenia. [6] Because the person has lost contact with reality, we say that he or she is experiencing psychosis, which is a psychological condition characterized by a loss of contact with reality.
People with schizophrenia often experience hallucinations—false sensations that occur in the absence of a real stimulus or which are gross distortions of a real stimulus. Auditory hallucinations are the most common and are reported by approximately three quarters of patients. [7] Schizophrenic patients frequently report hearing voices that curse them, comment on their behavior, order them to do things, or warn them of danger. [8] Visual hallucinations are less common and frequently involve seeing God or the devil. [9]
People with schizophrenia also commonly experience delusions, which are false beliefs not commonly shared by others within one’s culture, and maintained even though they are obviously out of touch with reality. People with delusions of grandeur believe that they are important, famous, or powerful. They often become convinced that they are someone else, such as the president or God, or that they have some special talent or ability. Some claim to have been assigned to a special covert mission. [10] People with delusions of persecution believe that a person or group seeks to harm them. They may think that people are able to read their minds and control their thoughts. [11]
People with schizophrenia also often experience the positive symptom of derailment—the shifting from one subject to another, without following any one line of thought to conclusion—and may exhibit grossly disorganized behavior including inappropriate sexual behavior, peculiar appearance and dress, unusual agitation (e.g., shouting and swearing), strange body movements, and awkward facial expressions. It is also common for people with schizophrenia to experience inappropriate affect. For example, the person may laugh uncontrollably when hearing sad news. Movement disorders typically appear as agitated movements, such as repeating a certain motion again and again, but can in some cases include catatonia, a state in which a person does not move and is unresponsive to others. [12] [13]
Negative symptoms of schizophrenia include social withdrawal, poor hygiene and grooming, poor problem-solving abilities, and a distorted sense of time. [6] Persons with schizophrenia often suffer from flat affect, which means that they express almost no emotional response (e.g., they speak in a monotone and have a blank facial expression) even though they may report feeling emotions. [14] Another negative symptom is the tendency toward incoherent language, for instance, to repeat the speech of others (“echo speech”). Some people with schizophrenia experience motor disturbances, ranging from complete catatonia and apparent obliviousness to their environment to random and frenzied motor activity during which they become hyperactive and incoherent. [15]
Not all people with schizophrenia exhibit negative symptoms, but those who do also tend to have the poorest outcomes. [16] Negative symptoms are predictors of deteriorated functioning in everyday life and often make it impossible for people to work or to care for themselves.
Cognitive symptoms of schizophrenia are typically difficult for outsiders to recognize but make it extremely difficult for the individual to lead a normal life. These symptoms include difficulty comprehending information and using it to make decisions (the lack of executive control), difficulty maintaining focus and attention, and problems with working memory (the ability to use information immediately after it is learned).
There is no single cause of schizophrenia. Rather, a variety of biological and environmental risk factors interact in a complex way to increase the likelihood that someone might develop schizophrenia. [1] Studies in molecular genetics have not yet identified the particular genes responsible for schizophrenia, but it is evident from research using family, twin, and adoption studies that genetics are important. [2] As you can see in the figure below, the likelihood of developing schizophrenia increases dramatically if a close relative also has the disease.
Neuroimaging studies have found some differences in brain structure between schizophrenic and normal patients. In some people with schizophrenia, the cerebral ventricles (fluid-filled spaces in the brain) are enlarged. [3] People with schizophrenia also frequently show an overall loss of neurons in the cerebral cortex, and some show less activity in the frontal and temporal lobes, which are the areas of the brain involved in language, attention, and memory. This would explain the deterioration of functioning in language and thought processing that is commonly experienced by schizophrenic patients. [4]
Many researchers believe that schizophrenia is caused in part by excess dopamine, and this theory is supported by the fact that most of the drugs useful in treating schizophrenia inhibit dopamine activity in the brain. [5] Levels of serotonin may also play a part. [6] But recent evidence suggests that the role of neurotransmitters in schizophrenia is more complicated than was once believed. It also remains unclear whether observed differences in the neurotransmitter systems of people with schizophrenia cause the disease, or if they are the result of the disease itself or its treatment. [7]
A genetic predisposition to developing schizophrenia does not always develop into the actual disorder. Even if a person has an identical twin with schizophrenia, he still has less than a 50% chance of getting it himself, and over 60% of all people with schizophrenia have no first- or second-degree relatives with the disorder. [8] [9] This suggests that there are important environmental causes as well.
One hypothesis is that schizophrenia is caused in part by disruptions to normal brain development in infancy that may be caused by poverty, malnutrition, and disease. [10] [11] [12] [13] Stress also increases the likelihood that a person will develop schizophrenic symptoms; onset and relapse of schizophrenia typically occur during periods of increased stress. However, it may be that people who develop schizophrenia are more vulnerable to stress than others and not necessarily that they experience more stress than others. [14]
To this point in the unit we have considered the psychological disorders that fall on Axis I of the Diagnostic and Statistical Manual of Mental Disorders (DSM) categorization system. In comparison to the Axis I disorders, which may frequently be severe and dysfunctional and are often brought on by stress, the disorders that fall on Axis II are longer-term disorders that are less likely to be severely incapacitating.
Axis II consists primarily of personality disorders. A personality disorder is a disorder characterized by inflexible patterns of thinking, feeling, or relating to others that cause problems in personal, social, and work situations. Personality disorders tend to emerge during late childhood or adolescence and usually continue throughout adulthood. [1] The disorders can be problematic for the people who have them, but they are less likely to bring people to a therapist for treatment than are Axis I disorders.
The personality disorders are summarized in the table below. They are categorized into three types: those characterized by odd or eccentric behavior, those characterized by dramatic or erratic behavior, and those characterized by anxious or inhibited behavior. As you consider the personality types described in the table, I’m sure you’ll think of people that you know who have each of these traits, at least to some degree. Probably you know someone who seems a bit suspicious and paranoid, who feels that other people are always “ganging up on him,” and who really doesn’t trust other people very much. Perhaps you know someone who fits the bill of being overly dramatic—the “drama queen” who is always raising a stir and whose emotions seem to turn everything into a big deal. Or you might have a friend who is overly dependent on others and can’t seem to get a life of her own.
The personality traits that make up the personality disorders are common—we see them in the people whom we interact with every day—yet they may become problematic when they are rigid, overused, or interfere with everyday behavior. [2] What is perhaps common to all the disorders is the person’s inability to accurately understand and be sensitive to the motives and needs of the people around them.
The personality disorders create a bit of a problem for diagnosis. For one, it is frequently difficult for the clinician to accurately diagnose which of the many personality disorders a person has, although the friends and colleagues of the person can generally do a good job of it. [3] And the personality disorders are highly comorbid; if a person has one, it’s likely that he or she has others as well. Also, the number of people with personality disorders is estimated to be as high as 15% of the population, [4] which might make us wonder if these are really “disorders” in any real sense of the word.
Although they are considered separate disorders, the personality disorders can seem like milder versions of more severe Axis I disorders. [5] For example, obsessive-compulsive personality disorder has similar symptoms to obsessive-compulsive disorder (OCD), and schizoid and schizotypal personality disorders are characterized by symptoms similar to those of schizophrenia. This overlap in classification causes some confusion, and some theorists have argued that the personality disorders should be eliminated from the DSM. But clinicians normally differentiate Axis I and Axis II disorders, and thus the distinction is useful for them. [6] [7] [8]
Assume that the people you are about to read about have received full diagnostic assessments by a psychologist and have a personality disorder. For each individual, determine which type of personality disorder he or she most likely has.
Although it is not possible to consider the characteristics of each of the personality disorders in this book, let’s focus on two that have important implications for behavior. The first, borderline personality disorder (BPD), is important because it is so often associated with suicide, and the second, antisocial personality disorder (APD), because it is the foundation of criminal behavior. Borderline and antisocial personality disorders are also good examples to consider because they are so clearly differentiated in terms of their focus. BPD (more frequently found in women than men) is known as an internalizing disorder because the behaviors that it entails (e.g., suicide and self-mutilation) are mostly directed toward the self. APD (mostly found in men), on the other hand, is a type of externalizing disorder in which the problem behaviors (e.g., lying, fighting, vandalism, and other criminal activity) focus primarily on harm to others.
Borderline personality disorder (BPD) is a psychological disorder characterized by a prolonged disturbance of personality accompanied by mood swings, unstable personal relationships, identity problems, threats of self-destructive behavior, fears of abandonment, and impulsivity. BPD is widely diagnosed—up to 20% of psychiatric patients are given the diagnosis, and it may occur in up to 2% of the general population. [1] About three-quarters of diagnosed cases of BPD are women.
People with BPD fear being abandoned by others. They often show a clinging dependency on the other person and engage in manipulation to try to maintain the relationship. They become angry if the other person limits the relationship, but also deny that they care about the person. As a defense against fear of abandonment, borderline people are compulsively social. But their behaviors, including their intense anger, demands, and suspiciousness, repel people.
People with BPD often deal with stress by engaging in self-destructive behaviors, for instance by being sexually promiscuous, getting into fights, binge eating and purging, engaging in self-mutilation or drug abuse, and threatening suicide. These behaviors are designed to call forth a “saving” response from the other person. People with BPD are a continuing burden for police, hospitals, and therapists. Borderline individuals also show disturbance in their concepts of identity: They are uncertain about self-image, gender identity, values, loyalties, and goals. They may have chronic feelings of emptiness or boredom and be unable to tolerate being alone.
BPD has both genetic and environmental roots. In terms of genetics, research has found that those with BPD frequently have neurotransmitter imbalances, [2] and the disorder is heritable. [3] In terms of environment, many theories about the causes of BPD focus on a disturbed early relationship between the child and his or her parents. Some theories focus on the development of attachment in early childhood, while others point to parents who fail to provide adequate attention to the child’s feelings. Others focus on parental abuse (both sexual and physical) in adolescence, as well as on divorce, alcoholism, and other stressors. [4] The dangers of BPD are greater when it is associated with childhood sexual abuse, early age of onset, substance abuse, and aggressive behaviors. The problems are amplified when the diagnosis is comorbid (as it often is) with other disorders, such as substance abuse disorder, major depressive disorder, and posttraumatic stress disorder (PTSD). [5]
Posner and colleagues [6] hypothesized that the difficulty that individuals with BPD have in regulating their lives (e.g., in developing meaningful relationships with other people) may be due to imbalances in the fast and slow emotional pathways in the brain. Specifically, they hypothesized that the fast emotional pathway through the amygdala is too active, and the slow cognitive-emotional pathway through the prefrontal cortex is not active enough in those with BPD.
The participants in their research were 16 patients with BPD and 14 healthy comparison participants. All participants were tested in a functional magnetic resonance imaging (fMRI) machine while they performed a task that required them to read emotional and nonemotional words, and then press a button as quickly as possible whenever a word appeared in a normal font and not press the button whenever the word appeared in an italicized font. The researchers found that while all participants performed the task well, the patients with BPD had more errors than the controls (both in terms of pressing the button when they should not have and not pressing it when they should have). These errors primarily occurred on the negative emotional words.
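To clarify the two kinds of errors being counted here, the following is a minimal sketch of how responses in a go/no-go task of this sort might be scored. The trial words and structure are hypothetical illustrations, not materials from the study.

    # Hypothetical scoring for a go/no-go task like the one described:
    # press for words in normal font ("go"); withhold for italics ("no-go").

    def classify(italic, pressed):
        if italic and pressed:
            return "false alarm"   # pressed when they should not have
        if not italic and not pressed:
            return "miss"          # failed to press when they should have
        return "correct"

    # (word, shown in italics?, participant pressed?)
    trials = [("abandon", True, True),    # false alarm
              ("table",   False, True),   # correct
              ("hate",    False, False)]  # miss

    for word, italic, pressed in trials:
        print(f"{word:>8}: {classify(italic, pressed)}")

The finding that both kinds of errors clustered on negative emotional words is what links the behavioral data to the imaging results discussed next.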
The figure below shows the comparison of the level of brain activity in the emotional centers in the amygdala (left panel) and the prefrontal cortex (right panel). In comparison to the controls, the borderline patients showed relatively larger affective responses when they were attempting to quickly respond to the negative emotions, and showed less cognitive activity in the prefrontal cortex in the same conditions. This research suggests that excessive affective reactions and lessened cognitive reactions to emotional stimuli may contribute to the emotional and behavioral volatility of borderline patients.
In contrast to borderline personality disorder, which involves primarily feelings of inadequacy and a fear of abandonment, antisocial personality disorder (APD) is characterized by a disregard of the rights of others, and a tendency to violate those rights without being concerned about doing so.
APD is a pervasive pattern of violation of the rights of others that begins in childhood or early adolescence and continues into adulthood. APD is about three times more likely to be diagnosed in men than in women. To be diagnosed with APD the person must be 18 years of age or older and have a documented history of conduct disorder before the age of 15. People with antisocial personality disorder were once referred to as “sociopaths” or “psychopaths,” but these terms are no longer used interchangeably with APD. Psychopathy does not appear in the DSM as its own diagnosis, but many researchers regard it as a distinct condition characterized by lack of remorse and deficits in empathy.
People with APD feel little distress for the pain they cause others. They lie, engage in violence against animals and people, and frequently have drug and alcohol abuse problems. They are egocentric and frequently impulsive, for instance suddenly changing jobs or relationships. People with APD soon end up with a criminal record and often spend time incarcerated. The intensity of antisocial symptoms tends to peak during the 20s and then may decrease over time.
Biological and environmental factors are both implicated in the development of antisocial personality disorder. Twin and adoption studies suggest a genetic predisposition, [1] and biological abnormalities include low autonomic activity during stress, biochemical imbalances, right hemisphere abnormalities, and reduced gray matter in the frontal lobes. [2] [3] Environmental factors include neglectful and abusive parenting styles, such as the use of harsh and inconsistent discipline and inappropriate modeling. [4]
The disorders presented next are categorized in the DSM-IV as “Disorders Usually First Diagnosed in Infancy, Childhood, or Adolescence.” The key word here is usually—adults can be diagnosed with these disorders, but most often, they are diagnosed for the first time when the individual is a child. These disorders, such as ADHD and Autistic Disorder, are categorized together because of the timing of first diagnosis rather than similarity of their symptoms. [1] Although these disorders are often thought of as being the primary mental illnesses faced by children, it is important to note that children can—and do—also experience the other disorders you have read about, including anxiety and mood disorders.
Zack, age 7 years, has always had trouble settling down. He is easily bored and distracted. In school, he cannot stay in his seat for very long, and he frequently does not follow instructions. He is constantly fidgeting or staring into space. Zack has poor social skills and may overreact when someone accidentally bumps into him or uses one of his toys. At home, he chatters constantly and rarely settles down to do a quiet activity, such as reading a book.
Symptoms such as Zack’s are common among 7-year-olds, and particularly among boys. But what do the symptoms mean? Does Zack simply have a lot of energy and a short attention span? Boys mature more slowly than girls at this age, and perhaps Zack will catch up in the next few years. One possibility is for the parents and teachers to work with Zack to help him be more attentive, to put up with the behavior, and to wait it out.
But many parents, often on the advice of the child’s teacher, take their children to a psychologist for diagnosis. If Zack were taken for testing today, it is likely that he would be diagnosed with a psychological disorder known as attention-deficit hyperactivity disorder (ADHD). ADHD is a developmental behavior disorder characterized by problems with focus, difficulty maintaining attention, and inability to concentrate, in which symptoms start before 7 years of age. [1] [2] Although it is usually first diagnosed in childhood, ADHD can remain problematic in adults, and up to 7% of college students are diagnosed with it. [3] In adults, the symptoms of ADHD include forgetfulness, difficulty paying attention to details, procrastination, disorganized work habits, and not listening to others.
ADHD is about 70% more likely to occur in males than in females [4] and is often comorbid with other behavioral and conduct disorders. The diagnosis of ADHD has quadrupled over the past 20 years, such that it is now diagnosed in about 1 out of every 20 American children and is the most common psychological disorder among children in the world. [5] ADHD is also being diagnosed much more frequently in adolescents and adults. [6] You might wonder what this all means. Are the increases in diagnosis due to today’s children and adolescents actually being more distracted and hyperactive than their parents were, to a greater awareness of ADHD among teachers and parents, or to overdiagnosis by psychologists and psychiatrists? Some believe that drug companies also contribute to the rate of diagnosis, because ADHD is often treated with prescription medications, including stimulants such as Ritalin.
Despite these arguments by skeptics, research suggests that ADHD is a real disorder that is caused by a combination of genetic and environmental factors. Twin studies have found that ADHD is heritable, [2] and neuroimaging studies have found that people with ADHD may have structural differences in areas of the brain that influence self-control and attention. [7] Other studies have pointed to environmental factors, such as mothers’ smoking and drinking alcohol during pregnancy and the consumption of lead and food additives by those who are affected. [8] [9] [10] Social factors, such as family stress and poverty, also contribute to ADHD. [11]
Jared’s kindergarten teacher has voiced her concern to Jared’s parents about his difficulties with interacting with other children and his delay in developing normal language. Jared is able to maintain eye contact and enjoys mixing with other children, but he cannot communicate with them very well. He often responds to questions or comments with long-winded speeches about trucks or some other topic that interests him, and he seems to lack awareness of other children’s wishes and needs.
Jared’s concerned parents took him to a multidisciplinary child development center for consultation. Here he was tested by a pediatric neurologist, a psychologist, and a child psychiatrist. The pediatric neurologist found that Jared’s hearing was normal, and there were no signs of any neurological disorder. He diagnosed Jared with a pervasive developmental disorder, because although his comprehension and expressive language were poor, he was still able to carry out nonverbal tasks, such as drawing a picture or doing a puzzle.
Based on her observation of Jared’s difficulty interacting with his peers, and the fact that he did not respond warmly to his parents, the psychologist diagnosed Jared with autistic disorder (autism), a disorder of neural development characterized by impaired social interaction and communication and by restricted and repetitive behavior, and in which symptoms begin before 3 years of age. The psychologist believed that the autism diagnosis was correct because, like other children with autism, Jared has a poorly developed ability to see the world from the perspective of others; engages in unusual behaviors such as talking about trucks for hours; and responds to stimuli, such as the sound of a car or an airplane, in unusual ways.
The child psychiatrist believed that Jared’s language problems and social skills were not severe enough to warrant a diagnosis of autistic disorder and instead proposed a diagnosis of Asperger’s disorder, a developmental disorder that affects a child’s ability to socialize and communicate effectively with others. The symptoms of Asperger’s are almost identical to those of autism, except that there is no delay in language development, and the child psychiatrist simply saw Jared’s problems as less extreme.
Imagine how Jared’s parents must have felt at this point. Clearly there is something wrong with their child, but even the experts cannot agree on exactly what the problem is. Diagnosing problems such as Jared’s is difficult, yet the number of children like him is increasing dramatically. Disorders related to autism and Asperger’s disorder now affect almost 1% of American children. [1] The milder forms of autism, and particularly Asperger’s, have accounted for most of this increase in diagnosis.
Although for many years autism was thought to be primarily a socially determined disorder, in which parents who were cold, distant, and rejecting created the problem, current research suggests that biological factors are most important. The heritability of autism has been estimated to be as high as 90%. [2] Scientists speculate that autism is caused by an unknown, genetically determined brain abnormality that occurs early in development. It is likely that several different brain sites are affected, [3] and the search for these areas is being conducted in many scientific laboratories.
But does Jared have autism or Asperger’s? The problem is that diagnosis is not exact (remember the idea of “categories”), and the experts themselves are often unsure how to classify behavior. Furthermore, the appropriate classifications change with time and new knowledge. The American Psychiatric Association has recently posted on its website a proposal to eliminate the term Asperger’s syndrome from the upcoming DSM-V. Whether or not Asperger’s will remain a separate disorder will be known when the DSM-V is published in 2013.
For each of the following children, specify which disorder he or she most likely has.
In the last module, on childhood disorders, we discussed how identifying and clearly describing psychological disorders is not a simple task. The same kind of challenge exists in the medical world for clearly identifying and describing diseases and disorders of other parts of the body.
In this module, you will learn about a dramatic and controversial disorder called dissociative identity disorder. It is a rare condition that has captured the imagination of writers, movie makers, and the general public. The clinical community has engaged in fierce debates for decades about the causes and characteristics of dissociative identity disorder, with many psychologists arguing that the disorder is not real.
First, we describe the disorder using terminology from the DSM. Then we explain some of the concerns that people have raised that have led to doubts about the validity of the diagnosis and interpretation of the patients' behaviors. Finally, we review some of the research that has been conducted to answer the questions using the tools of science.
You may remember the story of Sybil (a pseudonym for Shirley Ardell Mason, who was born in 1923), a person who, over a period of 40 years, claimed to possess 16 distinct personalities. Mason was in therapy for many years trying to integrate these personalities into one complete self. A TV movie about Mason’s life, starring Sally Field as Sybil, appeared in 1976.
Sybil suffered from the most severe of the dissociative disorders, dissociative identity disorder. Dissociative identity disorder is a psychological disorder in which two or more distinct and individual personalities exist in the same person, and there is an extreme memory disruption regarding personal information about the other personalities. [1] Dissociative identity disorder was once known as multiple personality disorder, and this label is still sometimes used. The disorder is also sometimes mistakenly confused with schizophrenia, perhaps because the term schizophrenia literally means “split mind”; the two disorders are entirely distinct, however.
Reports of cases of dissociative identity disorder suggest that there can be more than 10 different personalities in one individual. Switches from one personality to another tend to occur suddenly, often triggered by a stressful situation. [2] The host personality is the personality in control of the body most of the time, and the alter personalities tend to differ from each other in terms of age, race, gender, language, manners, and even sexual orientation. [3] A shy, introverted individual may develop a boisterous, extroverted alter personality. Each personality has unique memories and social relationships. [4] Women are more frequently diagnosed with dissociative identity disorder than are men, and when diagnosed they also tend to have more “personalities.” [5]
Although this disorder has received a lot of attention from the media and the general public, it is arguably the most controversial disorder in the DSM. In fact, clinicians and researchers disagree about the legitimacy of dissociative identity disorder. Some clinicians argue that the descriptions in the DSM accurately reflect the symptoms of these patients, whereas others believe that patients are faking, role-playing, or using the disorder as a way to justify behavior. [6] [7] [8] [9]
Some have hypothesized that patients who are distressed and highly suggestible can be influenced by therapists to exhibit symptoms of the disorder. [8] Even the diagnosis of Shirley Ardell Mason (Sybil) is disputed. Some experts claim that Mason was highly hypnotizable and that her therapist unintentionally “suggested” the existence of her multiple personalities. [10]
Using Science to Resolve Controversies
How can we determine the legitimacy of dissociative identity disorder? Because psychology is a science, careful research is the best method to resolve such controversies. Lilienfeld and Lynn [8] review some studies that have done just this. For example, Modestin [11] conducted a survey in Switzerland that revealed that two-thirds of cases of dissociative identity disorder were diagnosed by just 0.09% of clinicians. Ninety percent of clinicians reported never having seen a patient with dissociative identity disorder. This suggests that some clinicians diagnose many people with dissociative identity disorder, while other clinicians never diagnose anyone with the disorder. Future research is needed to determine whether the clinicians who are diagnosing many cases of dissociative identity disorder are simply better at identifying the disorder or whether they are giving the diagnosis too frequently.
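To make the idea of diagnostic concentration concrete, here is a minimal sketch in Python. The clinician counts below are invented purely for illustration; they are not Modestin’s actual data, which were reported only as percentages.

```python
# Hypothetical sketch of diagnostic concentration in a clinician survey.
# The counts below are invented for illustration only; they are NOT
# Modestin's data.
diagnoses_per_clinician = [0] * 600 + [1, 1, 2, 3] + [20, 25, 30]

total_diagnoses = sum(diagnoses_per_clinician)
heavy_diagnosers = [n for n in diagnoses_per_clinician if n >= 20]

share_of_cases = sum(heavy_diagnosers) / total_diagnoses
share_of_clinicians = len(heavy_diagnosers) / len(diagnoses_per_clinician)

print(f"{share_of_clinicians:.2%} of clinicians account for "
      f"{share_of_cases:.0%} of all diagnoses")
# A pattern like this, in which a tiny fraction of clinicians makes most
# of the diagnoses, is what fuels the controversy described above.
```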
Laboratory experiments provide a different kind of evidence regarding dissociative identity disorder. For example, Spanos, Weekes, and Bertrand [12] (see also Lilienfeld and Lynn, 2003 [8] ) conducted an experiment in which participants were asked to role-play the part of a murderer and be interviewed by a psychiatrist. Some participants were randomly assigned to receive an interview involving hypnosis during which it was suggested they had elements of dissociative identity disorder. For example, they were told by the interviewer, “I've talked a bit to [name], but I think perhaps there might be another part of [name] that I haven't talked to, another part that maybe feels somewhat differently from the part that I've talked to. And I would like to communicate with that other part.” The control group of participants received no suggestion of this kind. Many of the participants who received the suggestion that they may have another personality adopted a different name for this “alter” personality and even referred to him or her in the third person. None of the control participants did this. By no means does this “prove” that dissociative identity disorder can be created in the laboratory, but it does provide evidence that people can show signs consistent with this disorder under the right conditions (e.g., a suggestive interview). Studies like this and other types of research can advance knowledge on how disorders develop so that diagnostic controversies can be cleared up.
Lucien Masson, a 60-year-old Vietnam veteran from Arizona, put it simply: “Sascha is the best medicine I’ve ever had.”
Lucien is speaking about his friend, companion, and perhaps even his therapist, a Russian wolfhound named Sascha. Lucien suffers from posttraumatic stress disorder (PTSD), a disorder that has had a profoundly negative impact on his life for many years. His symptoms include panic attacks, nightmares, and road rage. Lucien has tried many solutions, consulting with doctors, psychiatrists, and psychologists and using a combination of drugs, group therapy, and anger-management classes.
But Sascha seems to be the best therapist of all. He helps out in many ways. If a stranger gets too close to Lucien in public, Sascha will block the stranger with his body. Sascha is trained to sense when Lucien is about to have a nightmare, waking him before it starts. Before road rage can set in, Sascha gently whimpers, reminding his owner that it doesn’t pay to get upset about nutty drivers.
In the same way, former Army medic Jo Hanna Schaffer speaks of her Chihuahua, Cody: “I never took a pill for PTSD that did as much for me as Cody has done.”
Persian Gulf War veteran Karen Alexander feels the same way about her Bernese mountain dog, Cindy: “She’ll come up and touch me, and that is enough of a stimulus to break the loop, bring me back to reality. Sometimes I’ll scratch my hand until it’s raw and won’t realize until she comes up to me and brings me out. She’s such a grounding influence for me.”
These dramatic stories of improvement from debilitating disorders can be attributed to an alternative psychological therapy, based on established behavioral principles, provided by “psychiatric service dogs.” The dogs are trained to help people with a variety of mental disorders, including panic attacks, anxiety disorder, obsessive-compulsive disorder, and bipolar disorder. They help veterans of Iraq and Afghanistan cope with their traumatic brain injuries as well as with PTSD.
The dogs are trained to perform specific behaviors that are helpful to their owners. If the dog’s owner is depressed, the dog will snuggle up and offer physical comfort; if the owner is having a panic attack, the owner can calm himself by massaging the dog’s body. The serenity shown by the dogs in all situations seems to reassure the PTSD sufferer that all must be well. Service dogs are constant, loving companions who provide emotional support and companionship to their embattled, often isolated owners. [1] [2] [3] [4]
Despite the reports of success from many users, it is important to keep in mind that the long-term benefits of psychiatric service dogs have not yet been tested empirically. Organizations such as Pets for Vets, however, help pair shelter dogs with veterans who are emotionally wounded. Watch this video for more inspiring stories about how dogs help veterans cope with PTSD.
Psychological disorders create a tremendous individual, social, and economic drain on society. Disorders make it difficult for people to engage in productive lives and effectively contribute to their family and to society. Disorders lead to disability and absenteeism in the workplace as well as to physical problems, premature death, and suicide. At a societal level, the costs are staggering. It has been estimated that the financial burden of each case of anxiety disorder is over $3,000 per year; with tens of millions of Americans affected, the annual cost of anxiety disorders alone in the United States runs into the billions of dollars (for example, 40 million cases × $3,000 ≈ $120 billion). [5] [6]
The goal of this unit is to review the techniques that are used to treat psychological disorder. Just as psychologists consider the causes of disorder in terms of the bio-psycho-social model of illness, treatment is also based on psychological, biological, and social approaches.
Janice has been seriously depressed lately. A close friend finally persuaded her to get professional help. She discovers that there are many different approaches to treating psychological problems, but she isn’t sure what is best for her condition. She decides to interview several potential therapists to see what each one of them can tell her about the approach she might expect from that psychologist.
In this exercise, you will listen in on just a few seconds of each of Janice’s interviews. Based entirely on the information you hear, decide if the therapist is suggesting a psychological approach to treatment, a biological approach to treatment, or a social approach to treatment.
A clinician may focus on any or all of the three approaches to treatment, but in making a decision about which to use, he or she will always rely on his or her knowledge about existing empirical tests of the effectiveness of different treatments. These tests, known as outcome studies, carefully compare people who receive a given treatment with people who do not receive a treatment, or with people who receive a different type of treatment. Taken together, these studies have confirmed that many types of therapies are effective in treating disorder.
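To make the logic of an outcome study concrete, here is a minimal sketch in Python. The symptom scores, group sizes, and comparison are invented for illustration; a real outcome study would use validated symptom measures, much larger samples, and more sophisticated analyses.

```python
# Hypothetical sketch of an outcome study comparison. All scores are
# invented for illustration; lower scores mean fewer symptoms.
from scipy import stats

# Post-treatment symptom scores for randomly assigned groups
treatment_group = [12, 9, 14, 8, 11, 10, 7, 13, 9, 10]
control_group = [16, 18, 13, 17, 15, 19, 14, 16, 18, 15]

# An independent-samples t-test asks whether the difference between the
# group averages is larger than chance variation would predict.
result = stats.ttest_ind(treatment_group, control_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value is evidence that people receiving the treatment really
# did end up with fewer symptoms than people who did not.
```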
Treatment for psychological disorder begins when the individual who is experiencing distress visits a counselor or therapist, perhaps in a church, a community center, a hospital, or a private practice. The therapist will begin by systematically learning about the patient’s needs through a formal psychological assessment, which is an evaluation of the patient’s psychological and mental health. During the assessment the psychologist may give personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) or projective tests, and will conduct a thorough interview with the patient. The therapist may get more information from family members or school personnel.
In addition to the psychological assessment, the patient is usually seen by a physician to gain information about potential Axis III (physical) problems. In some cases of psychological disorder—and particularly for sexual problems—medical treatment is the preferred course of action. For instance, men who are experiencing erectile dysfunction disorder may need surgery to increase blood flow or local injections of muscle relaxants. Or they may be prescribed medications (Viagra, Cialis, or Levitra) that provide an increased blood supply to the penis, which are successful in increasing performance in about 70% of men who take them.
After the medical and psychological assessments are completed, the therapist will make a formal diagnosis using the detailed descriptions of the disorder provided in the Diagnostic and Statistical Manual of Mental Disorders (DSM; see below). The therapist will summarize the information about the patient on each of the five DSM axes, and the diagnosis will likely be sent to an insurance company to justify payment for the treatment.
To be diagnosed with ADHD the individual must display either A or B below: [8]
A. Six or more of the following symptoms of inattention have been present for at least 6 months to a point that is disruptive and inappropriate for developmental level:
B. Six or more of the following symptoms of hyperactivity-impulsivity have been present for at least 6 months to an extent that is disruptive and inappropriate for developmental level:
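The decision rule just described is essentially a counting rule: check which symptoms have been present for at least 6 months, then ask whether either list reaches the six-symptom threshold. Here is a minimal sketch of that rule in Python; the checklist values are hypothetical and the symptom wording is omitted, since only the counts matter for the rule.

```python
# Minimal sketch of the counting rule described above. A person meets
# criterion A or B when six or more symptoms from that list have been
# present for at least 6 months. The checklist values below are
# hypothetical; they are not DSM symptom wording.

def meets_criterion(symptom_checks, threshold=6):
    """Return True if at least `threshold` symptoms are checked."""
    return sum(symptom_checks) >= threshold

# True = symptom judged present for at least 6 months
inattention_checks = [True, True, False, True, True, True, False, True, False]
hyperactivity_checks = [False, True, False, False, True, False, False, False, False]

if meets_criterion(inattention_checks) or meets_criterion(hyperactivity_checks):
    print("Symptom counts are consistent with an ADHD diagnosis")
else:
    print("Symptom counts do not meet the diagnostic threshold")
```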
Imagine that you are a school psychologist. You are trying to decide if you should recommend to a middle school that Mark Wilshire, a 6th grade student, should receive special consideration in school activities. Mark’s parents have been interested in his progress and cooperative with school officials, but they are concerned about him and wonder if he has a learning disability. You suspect that he may have normal learning abilities, but that he might have a “behavior disorder” called Attention Deficit Hyperactivity Disorder, usually called by its initials: ADHD. Here is a report compiled about Mark. By law in your state, Mark has a right to receive specific accommodations, such as a quiet place to take tests, if he qualifies for an ADHD diagnosis. If he does not, then he will not receive any special treatment.
Look over the report you have from a psychologist who interviewed Mark. Based on this information, do you feel that Mark should be given further testing for ADHD or that there is insufficient evidence of ADHD and other possible reasons for his problems, such as a learning disability, should be pursued?
Your school requires that a recommendation for ADHD assessment must indicate if the student qualifies under (A) Inattention type ADHD or (B) Hyperactivity-Impulsivity type ADHD. Use the symptom charts to decide.
Case Report on Mark Wilshire
Mark is 12 years old and is of average height and weight for his age. He has come to the clinic because his parents are concerned about his poor performance at school. Mark’s teachers report that they like Mark and he is friendly and respectful, but he is consistently late with assignments, often forgets regularly scheduled meetings, and frequently fails to follow directions even when these directions are repeated several times. His work is often incomplete, sloppy, and poorly edited. He constantly loses things, like books and pens, and he often “daydreams” during class activities. When other children are being disruptive, Mark is unable to continue with his work, so he becomes part of the disruptive group, requiring attention from the teacher.
During the interview, Mark was well behaved. He would frequently squirm in his chair and fidget, but he was able to control himself enough to remain engaged in the conversation. He listened carefully to questions and answered appropriately and with insight. He was distracted several times by noises in the waiting room, but when his attention was reestablished, he understood questions and answered them with well thought out explanations. His language was typical of a 12-year-old boy.
Instructions: Please complete the following assessment of this student by identifying (checking the appropriate boxes) all symptoms that apply to Mark Wilshire. When you are done, click the Submit button so you can compare your checklist with the correct one. You may also review feedback referring to excerpts of the report by clicking on the yellow bubbles.
When you have finished, please assess the other set of symptoms.
Symptom List A: Inattention Type ADHD
Symptom List B: Hyperactivity-Impulsivity Type ADHD
If a diagnosis is made, the therapist will select a course of therapy that he or she feels will be most effective. One approach to treatment is psychotherapy, the professional treatment for psychological disorder through techniques designed to encourage communication of conflicts and insight. The fundamental aspect of psychotherapy is that the patient directly confronts the disorder and works with the therapist to help reduce it. Therapy includes assessing the patient’s issues and problems, planning a course of treatment, setting goals for change, the treatment itself, and an evaluation of the patient’s progress. Therapy is practiced by thousands of psychologists and other trained practitioners in the United States and around the world, and accounts for billions of dollars of health spending.
To many people, therapy involves a patient lying on a couch with a therapist sitting behind and nodding sagely as the patient speaks. Though this approach to therapy (known as psychoanalysis) is still practiced, it is in the minority. It is estimated that there are over 400 different kinds of therapy practiced by people in many fields, and the most important of these are shown in the figure below. The therapists who provide these treatments include psychiatrists and clinical psychologists, as well as social workers, psychiatric nurses, and couples, marriage, and family therapists.
Many people who would benefit from psychotherapy do not get it, either because they do not know how to find it or because they feel that they will be stigmatized and embarrassed if they seek help. The decision to not seek help is a very poor choice because the effectiveness of mental health treatments is well documented and, no matter where a person lives, there are treatments available. [1]
The first step in seeking help for psychological problems is to accept that there may be some stigma attached to doing so. It is possible that some of your colleagues, friends, and family members will know that you are seeking help, and some may at first think more negatively of you for it. But you must get past these unfair and close-minded responses. Feeling good about yourself is the most important thing you can do, and seeking help may be the first step in doing so.
One question is how to determine if someone needs help. This question is not always easy to answer because there is no clear demarcation between “normal” and “abnormal” behavior. Most generally, you will know that you or others need help when the person’s psychological state is negatively influencing his or her everyday behavior, when the behavior is adversely affecting those around the person, and when the problems continue over a period of time. Often people seek therapy as a result of a life-changing event such as diagnosis of a fatal illness, an upcoming marriage or divorce, or the death of a loved one. But therapy is also effective for general depression and anxiety, as well as for specific everyday problems.
There are a wide variety of therapy choices, many of which are free. Begin in your school, community, or church, asking about community health or counseling centers and pastoral counseling. You may want to ask friends and family members for recommendations. You’ll probably be surprised at how many people have been to counseling, and how many recommend it.
There are many therapists who offer a variety of treatment options. Be sure to ask about the degrees that the therapist has earned, and about the reputation of the center in which the therapy occurs. If you have choices, try to find a person or location that you like, respect, and trust. This will allow you to be more open, and you will get more out of the experience. Your sessions with the help provider will require discussing your family history, personality, and relationships, and you should feel comfortable sharing this information.
Remember also that confronting issues requires time to reflect, energy to get to the appointments and deal with consequential feelings, and discipline to explore your issues on your own. Success at therapy is difficult, and it takes effort.
The bottom line is that going for therapy should not be a difficult decision for you. All people have the right to appropriate mental health care just as they have a right to general health care. Just as you go to a dentist for a toothache, you may go to therapy for psychological difficulties. Furthermore, you can be confident that you will be treated with respect and that your privacy will be protected, because therapists follow ethical principles in their practices. The following provides a summary of these principles as developed by the American Psychological Association. [2]
Psychodynamic therapy (psychoanalysis) is a psychological treatment based on Freudian and neo-Freudian personality theories in which the therapist helps the patient explore the unconscious dynamics of personality. The analyst engages with the patient, usually in one-on-one sessions, often with the patient lying on a couch and facing away. The goal of the psychotherapy is for the patient to talk about his or her personal concerns and anxieties, allowing the therapist to try to understand the underlying unconscious problems that are causing the symptoms (the process of interpretation). The analyst may try out some interpretations on the patient and observe how he or she responds to them.
The patient may be asked to verbalize his or her thoughts through free association, in which the therapist listens while the client talks about whatever comes to mind, without any censorship or filtering. The client may also be asked to report on his or her dreams, and the therapist will use dream analysis to analyze the symbolism of the dreams in an effort to probe the unconscious thoughts of the client and interpret their significance. On the basis of the thoughts expressed by the patient, the analyst discovers the unconscious conflicts causing the patient’s symptoms and interprets them for the patient.
The goal of psychotherapy is to help the patient develop insight—that is, an understanding of the unconscious causes of the disorder, [3] [4] but the patient often shows resistance to these new understandings, using defense mechanisms to avoid the painful feelings in his or her unconscious. The patient might forget or miss appointments, or act out with hostile feelings toward the therapist. The therapist attempts to help the patient develop insight into the causes of the resistance. The sessions may also lead to transference, in which the patient unconsciously redirects feelings experienced in an important personal relationship toward the therapist. For instance, the patient may transfer feelings of guilt that come from the father or mother to the therapist. Some therapists believe that transference should be encouraged, as it allows the client to resolve hidden conflicts and work through feelings that are present in the relationships.
One problem with traditional psychoanalysis is that the sessions may take place several times a week, go on for many years, and cost thousands of dollars. To help more people benefit, modern psychodynamic approaches frequently use shorter-term, focused, and goal-oriented approaches. In these “brief psychodynamic therapies,” the therapist helps the client determine the important issues to be discussed at the beginning of treatment and usually takes a more active role than in classic psychoanalysis. [5]
Now click on “Psychodynamic” therapy and we will see what percentage of psychotherapists in the United States practice Psychodynamic Therapy.
Listen to each segment of conversation between Terrance and his therapist, Dr. Robinson, and then decide which characteristic of psychodynamic therapy is represented.
Just as psychoanalysis is based on the personality theories of Freud and the neo-Freudians, humanistic therapy is a psychological treatment based on the personality theories of Carl Rogers and other humanistic psychologists. Humanistic therapy is based on the idea that people develop psychological problems when they are burdened by limits and expectations placed on them by themselves and others, and the treatment emphasizes the person’s capacity for self-realization and fulfillment. Humanistic therapies attempt to promote growth and responsibility by helping clients consider their own situations and the world around them and how they can work to achieve their life goals.
Carl Rogers developed person-centered therapy (or client-centered therapy), an approach to treatment in which the client is helped to grow and develop as the therapist provides a comfortable, nonjudgmental environment. In his book, A Way of Being, [6] Rogers argued that therapy was most productive when the therapist created a positive relationship with the client—a therapeutic alliance. The therapeutic alliance is a relationship between the client and the therapist that is facilitated when the therapist is genuine (i.e., he or she creates no barriers to free-flowing thoughts and feelings), when the therapist treats the client with unconditional positive regard (i.e., values the client without any qualifications, displaying an accepting attitude toward whatever the client is feeling at the moment), and when the therapist develops empathy with the client (i.e., that he or she actively listens to and accurately perceives the personal feelings that the client experiences).
The development of a positive therapeutic alliance has been found to be exceedingly important to successful therapy. The ideas of genuineness, empathy, and unconditional positive regard in a nurturing relationship in which the therapist actively listens to and reflects the feelings of the client are probably the most fundamental part of contemporary psychotherapy. [7]
Psychodynamic and humanistic therapies are recommended primarily for people suffering from generalized anxiety or mood disorders, and who desire to feel better about themselves overall. But the goals of people with other psychological disorders, such as phobias, sexual problems, and obsessive-compulsive disorder (OCD), are more specific. A person with a social phobia may want to be able to leave his or her house, a person with a sexual dysfunction may want to improve his or her sex life, and a person with OCD may want to learn to stop letting his obsessions or compulsions interfere with everyday activities. In these cases it is not necessary to revisit childhood experiences or consider our capacities for self-realization—we simply want to deal with what is happening in the present.
Now click on “Humanistic” therapy and we will see what percentage of psychotherapists in the United States practice Humanistic Therapy.
Behavioral therapy is psychological treatment that is based on principles of learning. The most direct approach is through operant conditioning using reward or punishment. Reinforcement may be used to teach new skills to people, for instance, those with autism or schizophrenia. [1] [2] [3] If the patient has trouble dressing or grooming, then reinforcement techniques, such as providing tokens that can be exchanged for snacks, are used to reinforce appropriate behaviors such as putting on one’s clothes in the morning or taking a shower at night. If the patient has trouble interacting with others, reinforcement will be used to teach the client how to more appropriately respond in public, for instance, by maintaining eye contact, smiling when appropriate, and modulating tone of voice.
As the patient practices the different techniques, the appropriate behaviors are shaped through reinforcement to allow the client to manage more complex social situations. In some cases observational learning may also be used; the client may be asked to observe the behavior of others who are more socially skilled to acquire appropriate behaviors. People who learn to improve their interpersonal skills through skills training may be more accepted by others and this social support may have substantial positive effects on their emotions.
Exposure therapy is a behavioral therapy based on the classical conditioning principle of extinction in which people are confronted with a feared stimulus with the goal of decreasing their negative emotional responses to it. [4] Exposure treatment can be carried out in real situations or through imagination, and it is used in the treatment of panic disorder, agoraphobia, social phobia, OCD, and posttraumatic stress disorder (PTSD).
In flooding, a client is exposed to the source of his fear all at once. An agoraphobic might be taken to a crowded shopping mall or someone with an extreme fear of heights to the top of a tall building. The assumption is that the fear will subside as the client habituates to the situation while receiving emotional support from the therapist during the stressful experience. An advantage of the flooding technique is that it is quick and often effective, but a disadvantage is that the patient may relapse after a short period of time.
More frequently, the exposure is done more gradually. Systematic desensitization is a behavioral treatment that combines imagining or experiencing the feared object or situation with relaxation exercises. [4] The client and the therapist work together to prepare a hierarchy of fears, starting with the least frightening, and moving to the most frightening scenario surrounding the object as shown in the table below. The patient then confronts her fears in a systematic manner, sometimes using her imagination but usually, when possible, in real life.
[Table: Hierarchy of Fears Used in Systematic Desensitization. From Flat World Knowledge, Introduction to Psychology, CC-BY-NC-SA.]
Desensitization techniques use the principle of counterconditioning, in which a second incompatible response (relaxation, e.g., through deep breathing) is conditioned to an already conditioned response (the fear response). The continued pairing of the relaxation responses with the feared stimulus as the patient works up the hierarchy gradually leads the fear response to be extinguished and the relaxation response to take its place.
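To illustrate the procedure, here is a minimal sketch in Python of a fear hierarchy represented as an ordered list. The items, anxiety ratings, and the effect of each relaxation pairing are invented for a hypothetical fear of flying; this is an illustration of the logic, not a clinical protocol.

```python
# Illustrative sketch only: a fear hierarchy as an ordered list, worked
# from least to most frightening. Ratings are hypothetical (0-100).
hierarchy = [
    ("Thinking about airplanes", 20),
    ("Booking a flight", 35),
    ("Driving to the airport", 50),
    ("Boarding the plane", 75),
    ("Taking off", 95),
]

COMFORT_THRESHOLD = 10  # arbitrary "relaxed" level for this sketch

for step, anxiety in hierarchy:
    # Each step is paired with relaxation (counterconditioning) until
    # reported anxiety falls below the threshold; only then does the
    # client move up to the next, more frightening step.
    pairings = 0
    while anxiety > COMFORT_THRESHOLD:
        anxiety -= 15  # stand-in for the effect of one relaxation pairing
        pairings += 1
    print(f"{step}: habituated after {pairings} pairings")
```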
Behavioral therapy works best when people directly experience the feared object. Fears of spiders are more directly habituated when the patient interacts with a real spider, and fears of flying are best extinguished when the patient gets on a real plane. But it is often difficult and expensive to create these experiences for the patient. Recent advances in virtual reality have allowed clinicians to provide behavior therapy in what seem like real situations to the patient. In virtual reality CBT, the therapist uses computer-generated, three-dimensional, lifelike images of the feared stimulus in a systematic desensitization program. Specially designed computer equipment, often with a head-mount display, is used to create a simulated environment. A common use is in helping soldiers who are experiencing PTSD return to the scene of the trauma and learn how to cope with the stress it invokes.
Some of the advantages of the virtual reality treatment approach are that it is economical, the treatment session can be held in the therapist’s office with no loss of time or confidentiality, the session can easily be terminated as soon as a patient feels uncomfortable, and many patients who have resisted live exposure to the object of their fears are willing to try the new virtual reality option first.
Aversion therapy is a type of behavior therapy in which positive punishment is used to reduce the frequency of an undesirable behavior. An unpleasant stimulus is intentionally paired with a harmful or socially unacceptable behavior until the behavior becomes associated with unpleasant sensations and is hopefully reduced. A child who wets his bed may be required to sleep on a pad that sounds an alarm when it senses moisture. Over time, the positive punishment produced by the alarm reduces the bedwetting behavior. [5] Aversion therapy is also used to stop other specific behaviors such as nail biting. [6]
Alcoholism has long been treated with aversion therapy. [7] In a standard approach, patients are treated at a hospital where they are administered a drug, Antabuse (disulfiram), that makes them nauseous if they consume any alcohol. The technique works very well if the user keeps taking the drug, [8] but unless it is combined with other approaches, the patients are likely to relapse after they stop the drug.
Now click on “Behavioral” therapy and we will see what percentage of psychotherapists in the United States practice Behavioral Therapy.
While behavioral approaches focus on the actions of the patient, cognitive therapy is a psychological treatment that helps clients identify incorrect or distorted beliefs that are contributing to disorder. In cognitive therapy the therapist helps patients develop new, healthier ways of thinking about themselves and about those around them. The idea of cognitive therapy is that changing thoughts will change emotions, and that the new emotions will then influence behavior.
The goal of cognitive therapy is not necessarily to get people to think more positively but rather to think more accurately. For instance, a person who thinks “no one cares about me” is likely to feel rejected, isolated, and lonely. If the therapist can remind the person that she has a mother or daughter who does care about her, more positive feelings will likely follow. Similarly, changing beliefs from “I have to be perfect” to “No one is always perfect—I’m doing pretty good,” from “I am a terrible student” to “I am doing well in some of my courses,” or from “She did that on purpose to hurt me” to “Maybe she didn’t realize how important it was to me” may all be helpful.
The psychiatrist Aaron T. Beck (1921–) and the psychologist Albert Ellis (1913–2007) together provided the basic principles of cognitive therapy. Ellis [9] called his approach rational emotive behavior therapy (REBT) or rational emotive therapy (RET), and he focused on pointing out the flaws in the patient’s thinking. Ellis noticed that people experiencing strong negative emotions tend to personalize and overgeneralize their beliefs, leading to an inability to see situations accurately. [10] In REBT, the therapist’s goal is to challenge these irrational thought patterns, helping the patient replace the irrational thoughts with more rational ones, leading to the development of more appropriate emotional reactions and behaviors.
Beck’s [11] [12] cognitive therapy was based on his observation that people who were depressed generally had a large number of highly accessible negative thoughts that influenced their thinking. His goal was to develop a short-term therapy for depression that would modify these unproductive thoughts. Beck’s approach challenges the client to test his beliefs against concrete evidence. If a client claims that “everybody at work is out to get me,” the therapist might ask him to provide instances to corroborate the claim. At the same time the therapist might point out contrary evidence, such as the fact that a certain coworker is actually a loyal friend or that the patient’s boss had recently praised him.
Now click on “Cognitive” therapy and we will see what percentage of psychotherapists in the United States practice Cognitive Therapy.
Cognitive-behavior therapy (CBT) is a structured approach to treatment that attempts to reduce psychological disorders through systematic procedures based on cognitive and behavioral principles. As you can see in following figure, CBT is based on the idea that there is a recursive link among our thoughts, our feelings, and our behavior. For instance, if we are feeling depressed, our negative thoughts (“I am doing poorly in my chemistry class”) lead to negative feelings (“I feel hopeless and sad”), which then contribute to negative behaviors (lethargy, disinterest, lack of studying). When we or other people look at the negative behavior, the negative thoughts are reinforced and the cycle repeats itself. [13] Similarly, in panic disorder a patient may misinterpret his or her feelings of anxiety as a sign of an impending physical or mental catastrophe (such as a heart attack), leading to an avoidance of a particular place or social situation. The fact that the patient is avoiding the situation reinforces the negative thoughts. Again, the thoughts, feelings, and behavior amplify and distort each other.
CBT is a very broad approach that is used for the treatment of a variety of problems, including mood, anxiety, personality, eating, substance abuse, attention-deficit, and psychotic disorders. CBT treats the symptoms of the disorder (the behaviors or the cognitions) and does not attempt to address the underlying issues that cause the problem. The goal is simply to stop the negative cycle by intervening to change cognition or behavior. The client and the therapist work together to develop the goals of the therapy, the particular ways that the goals will be reached, and the timeline for reaching them. The procedures are problem-solving and action-oriented, and the client is forced to take responsibility for his or her own treatment. The client is assigned tasks to complete that will help improve the disorder and takes an active part in the therapy. The treatment usually lasts between 10 and 20 sessions.
Depending on the particular disorder, some CBT treatments may be primarily behavioral in orientation, focusing on the principles of classical, operant, and observational learning, whereas other treatments are more cognitive, focused on changing negative thoughts related to the disorder. But almost all CBT treatments use a combination of behavioral and cognitive approaches.
To this point we have considered the different approaches to psychotherapy under the assumption that a therapist will use only one approach with a given patient. But this is not the case; as you saw in the pie chart "The Many Types of Therapy Practiced in the United States," the most commonly practiced approach to therapy is an eclectic therapy, an approach to treatment in which the therapist uses whichever techniques seem most useful and relevant for a given patient. For bipolar disorder, for instance, the therapist may use both psychological skills training to help the patient cope with the severe highs and lows, but may also suggest that the patient consider biomedical drug therapies. [14] Treatment for major depressive disorder usually involves antidepressant drugs as well as CBT to help the patient deal with particular problems. [15]
As we have seen in the unit on disorders, one of the most commonly diagnosed personality disorders is borderline personality disorder (BPD). Consider this description, typical of the type of borderline patient who arrives at a therapist’s office:
Even as an infant, it seemed that there was something different about Bethany. She was an intense baby, easily upset and difficult to comfort. She had very severe separation anxiety—if her mother left the room, Bethany would scream until she returned. In her early teens, Bethany became increasingly sullen and angry. She started acting out more and more—yelling at her parents and teachers and engaging in impulsive behavior such as promiscuity and running away from home. At times Bethany would have a close friend at school, but some conflict always developed and the friendship would end.
By the time Bethany turned 17, her mood changes were totally unpredictable. She was fighting with her parents almost daily, and the fights often included violent behavior on Bethany’s part. At times she seemed terrified to be without her mother, but at other times she would leave the house in a fit of rage and not return for a few days. One day, Bethany’s mother noticed scars on Bethany’s arms. When confronted about them, Bethany said that one night she just got more and more lonely and nervous about a recent breakup until she finally stuck a lit cigarette into her arm. She said, “I didn’t really care for him that much, but I had to do something dramatic.”
When she was 18 Bethany rented a motel room where she took an overdose of sleeping pills. Her suicide attempt was not successful, but the authorities required that she seek psychological help.
Most therapists will deal with a case such as Bethany’s using an eclectic approach. First, because her negative mood states are so severe, they will likely recommend that she start taking antidepressant medications. These drugs are likely to help her feel better and will reduce the possibility of another suicide attempt, but they will not change the underlying psychological problems. Therefore, the therapist will also provide psychotherapy.
The first sessions of the therapy will likely be based primarily on creating trust. Person-centered approaches will be used in which the therapist attempts to create a therapeutic alliance conducive to a frank and open exchange of information.
If the therapist is trained in a psychodynamic approach, he or she will probably begin intensive face-to-face psychotherapy sessions at least three times a week. The therapist may focus on childhood experiences related to Bethany’s attachment difficulties but will also focus in large part on the causes of the present behavior. The therapist will understand that because Bethany does not have good relationships with other people, she will likely seek a close bond with the therapist, but the therapist will probably not allow the transference relationship to develop fully. The therapist will also realize that Bethany will probably try to resist the work of the therapist.
Most likely the therapist will also use principles of CBT. For one, cognitive therapy will likely be used in an attempt to change Bethany’s distortions of reality. She feels that people are rejecting her, but she is probably bringing these rejections on herself. If she can learn to better understand the meaning of other people’s actions, she may feel better. And the therapist will likely begin using some techniques of behavior therapy, for instance, by rewarding Bethany for successful social interactions and progress toward meeting her important goals.
The eclectic therapist will continue to monitor Bethany’s behavior as the therapy continues, bringing into play whatever therapeutic tools seem most beneficial. Hopefully, Bethany will stay in treatment long enough to make some real progress in repairing her broken life.
One example of an eclectic treatment approach that has been shown to be successful in treating BPD is dialectical behavioral therapy (DBT). [16] DBT is essentially a cognitive therapy, but it includes a particular emphasis on attempting to enlist the help of the patient in his or her own treatment. A dialectical behavioral therapist begins by attempting to develop a positive therapeutic alliance with the client, and then tries to encourage the patient to become part of the treatment process. In DBT the therapist aims to accept and validate the client’s feelings at any given time while nonetheless informing the client that some feelings and behaviors are maladaptive, and showing the client better alternatives. The therapist will use both individual and group therapy, helping the patient work toward improving interpersonal effectiveness, emotion regulation, and distress tolerance skills.
Now click on “Eclectic (combination)” types of therapy and we will see what percentage of psychotherapists in the United States practice Eclectic types of therapy.
Instructions: Below are four examples of therapy sessions between a client and therapist. Read each of the dialogues and determine the type of therapy that is most likely taking place.
Example 1:
Therapist: So, tell me about your childhood.
Client: Well, I had a tough relationship with my father. He never seemed to trust me to do the right thing. I think that still influences how I deal with men today. It even has had an impact on my sleeping habits too.
Therapist: Tell me more about your sleeping habits. Please feel free to recall any dreams you can remember so we can explore them.
Client: Just last night I had the wildest dream. I was running in a forest being chased by ninjas. I couldn’t seem to escape. They were all around me. What does it mean?
Example 2:
Client: Thank you for meeting with me on such short notice, Dr. Lovelace. I’ve heard such good things about you and how warm and understanding you are, which is why I decided to see you for therapy.
Therapist: I am here to help. I believe that every person has the capacity to change and grow. We all have the answers to our problems, we just may need a little guidance sometimes. Tell me more about yourself and why you are here, Susan.
Client: Well, I’m here because I’ve been doing bad things lately. I’ve been yelling at my kids, and I’ve been very irritable with my coworkers. This is so unlike me. Do you think that makes me a bad person?
Therapist: I can understand why you might feel upset with yourself, Susan. This doesn’t match with your concept of yourself, and it makes you feel bad. I’m not judging you; I accept you for who you are. How do you think you can better line up your ideas about yourself (as being a good person) with reality?
Example 3:
Therapist: I understand you are here because you have a terrible fear of flying. Is that correct?
Client: Yes, doctor, it is. I get horrible panic attacks going into airports and even thinking about getting on a plane causes me some anxiety. What can I do to fix this?
Therapist: Let’s start by creating a hierarchy of fears that you have around flying. Please start with what causes you the least amount of anxiety to the most amount of anxiety around this issue. Then we’ll work through these anxieties and help you to relax at each stage. You can’t be relaxed and anxious at the same time, right?
Client: Good point, doctor. I guess the least amount of anxiety would be thinking about planes. The next would be booking a flight. Next would be going to the airport, and after that, stepping on the plane and getting ready for take-off. I think you know that the flying part would produce the most anxiety!
Example 4:
Client: I’ve been coming here for a few weeks now, Dr. Harris, and you’ve asked me to monitor how I’m thinking about things.
Therapist: Yes. How is it going? What have been some of your automatic thoughts, Lisa?
Client: Well, I find that I keep doubting myself and telling myself that I’m not a good student. If I get an answer wrong in class, I tell myself I’m stupid and that other people must think I’m so incompetent. I think this is what is leading me to feel sad and depressed most days.
Therapist: Thank you for sharing that, Lisa. Negative thoughts and pessimistic thinking can lead us to have bad feelings about ourselves. I like to encourage my clients to talk to themselves as a friend would. A friend would never speak to us that way, so why should we?
Like other medical problems, psychological disorders may in some cases be treated biologically. Biomedical therapies are treatments designed to reduce psychological disorder by influencing the action of the central nervous system. These therapies primarily involve the use of medications but also include direct methods of brain intervention, including electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and psychosurgery.
Psychologists understand that an appropriate balance of neurotransmitters in the brain is necessary for mental health, and that chemical imbalances can contribute to psychological disorder. The most frequently used biological treatments provide the patient with medication that influences the production and reuptake of neurotransmitters in the central nervous system (CNS). The use of these drugs is rapidly increasing, and drug therapy is now the most common approach to treatment of most psychological disorders.
Unlike some medical therapies that can be targeted toward specific symptoms, current psychological drug therapies are not so specific; they don’t change particular behaviors or thought processes, and they don’t really solve psychological disorders. However, although they cannot “cure” disorder, drug therapies are nevertheless useful therapeutic approaches, particularly when combined with psychological therapy, in treating a variety of psychological disorders. The best drug combination for the individual patient is usually found through trial and error. [1]
Attention-deficit/hyperactivity disorder (ADHD) is frequently treated with biomedical therapy, usually along with cognitive-behavior therapy (CBT). The most commonly prescribed drugs for ADHD are psychostimulants, including Ritalin, Adderall, and Dexedrine. Short-acting forms of the drugs are taken as pills and last between 4 and 12 hours, but some of the drugs are also available in long-acting forms (skin patches) that can be worn on the hip and last up to 12 hours. The patch is placed on the child early in the morning and worn all day.
Stimulants improve the major symptoms of ADHD, including inattention, impulsivity, and hyperactivity, often dramatically, in about 75% of the children who take them. [2] But the effects of the drugs wear off quickly. Additionally, the best drug and best dosage varies from child to child, so it may take some time to find the correct combination.
It may seem surprising to you that a disorder that involves hyperactivity is treated with a psychostimulant, a drug that normally increases activity. The answer lies in the dosage. When large doses of stimulants are taken, they increase activity, but in smaller doses the same stimulants improve attention and decrease motor activity. [3]
The most common side effects of psychostimulants in children include decreased appetite, weight loss, sleeping problems, and irritability as the effect of the medication tapers off. Stimulant medications may also be associated with a slightly reduced growth rate in children, although in most cases growth isn’t permanently affected. [4]
Antidepressant medications are drugs designed to improve moods. Although they are used primarily in the treatment of depression, they are also effective for patients who suffer from anxiety, phobias, and obsessive-compulsive disorders. Antidepressants work by influencing the production and reuptake of neurotransmitters that relate to emotion, including serotonin, norepinephrine, and dopamine. Although exactly why they work is not yet known, as the amount of the neurotransmitters in the CNS is increased through the action of the drugs, the person often experiences less depression.
The original antidepressants were the tricyclic antidepressants, with the brand names of Tofranil and Elavil, and the monoamine oxidase inhibitors (MAOIs), including Nardil and Parnate. These medications work by increasing the amount of serotonin, norepinephrine, and dopamine at the synapses, but they also have severe side effects, including potential increases in blood pressure and the need to follow particular diets.
The antidepressants most prescribed today are the selective serotonin reuptake inhibitors (SSRIs), including Prozac, Paxil, and Zoloft, which are designed to selectively block the reuptake of serotonin at the synapse, thereby leaving more serotonin available in the CNS. SSRIs are safer and have fewer side effects than the tricyclics or the MAOIs. [5] [6] SSRIs are effective, but patients taking them often suffer a variety of sometimes unpleasant side effects, including dry mouth, constipation, blurred vision, headache, agitation, drowsiness, and reduced sexual enjoyment.
Recently, there has been concern that SSRIs may increase the risk of suicide among teens and young adults, probably because when the medications begin working they give patients more energy, which may lead them to commit the suicide that they had been planning but lacked the energy to go through with. This concern has led the FDA to put a warning label on SSRI medications and has led doctors to be more selective about prescribing antidepressants to this age group. [7] [8] [9]
Because the effects of antidepressants may take weeks or even months to develop, doctors usually work with each patient to determine which medications are most effective, and may frequently change medications over the course of therapy. In some cases other types of antidepressants may be used instead of or in addition to the SSRIs. These medications also work by blocking the reuptake of neurotransmitters, including serotonin, norepinephrine, and dopamine. Brand names of these medications include Effexor and Wellbutrin.
Patients who are suffering from bipolar disorder are not helped by the SSRIs or other antidepressants because their disorder also involves the experience of overly positive moods. Treatment is more complicated for these patients, often involving a combination of antipsychotics and antidepressants along with mood stabilizing medications. [10] The most well-known mood stabilizer, lithium carbonate (or “lithium”), was approved by the FDA in the 1970s for treating both manic and depressive episodes, and it has proven very effective. Anticonvulsant medications can also be used as mood stabilizers. Another drug, Depakote, has also proven very effective, and some bipolar patients may do better with it than with lithium. [11]
People who take lithium must have regular blood tests to be sure that the levels of the drug are in the appropriate range. Potential negative side effects of lithium are loss of coordination, slurred speech, frequent urination, and excessive thirst. Though side effects often cause patients to stop taking their medication, it is important that treatment be continuous, rather than intermittent. There is no cure for bipolar disorder, but drug therapy does help many people.
Antianxiety medications are drugs that help relieve fear or anxiety. They work by increasing the action of the neurotransmitter GABA. The increased level of GABA helps inhibit the action of the sympathetic division of the autonomic nervous system, creating a calming experience.
The most common class of antianxiety medications is the tranquilizers, known as benzodiazepines. These drugs, which are prescribed millions of times a year, include Ativan, Valium, and Xanax. The benzodiazepines act within a few minutes to treat mild anxiety disorders but also have major side effects. They are addictive, frequently leading to tolerance, and they can cause drowsiness, dizziness, and unpleasant withdrawal symptoms including relapses into increased anxiety. [12] Furthermore, because the effects of the benzodiazepines are very similar to those of alcohol, they are very dangerous when combined with it.
Until the middle of the 20th century, schizophrenia was typically accompanied by positive symptoms, including bizarre, disruptive, and potentially dangerous behavior. As a result, people with schizophrenia were locked in asylums to protect them from themselves and to protect society from them. In the 1950s, a drug called chlorpromazine (Thorazine) was discovered that could reduce many of the positive symptoms of schizophrenia. Chlorpromazine was the first of many antipsychotic drugs.
Antipsychotic drugs (neuroleptics) are drugs that treat the symptoms of schizophrenia and related psychotic disorders. Today there are many antipsychotics, including Thorazine, Haldol, Clozaril, Risperdal, and Zyprexa. Some of these drugs treat primarily the positive symptoms of schizophrenia, and some treat the positive, negative, and cognitive symptoms.
The discovery of chlorpromazine and its use in clinics has been described as the single greatest advance in psychiatric care, because it has dramatically improved the prognosis of patients in psychiatric hospitals worldwide. Using antipsychotic medications has allowed hundreds of thousands of people to move out of asylums into individual households or community mental health centers, and in many cases to live near-normal lives.
Antipsychotics reduce the positive symptoms of schizophrenia by reducing the transmission of dopamine at the synapses in the limbic system, and they improve negative symptoms by influencing levels of serotonin. [13] Despite their effectiveness, antipsychotics have some negative side effects, including restlessness, muscle spasms, dizziness, and blurred vision. In addition, their long-term use can cause permanent neurological damage, a condition called tardive dyskinesia that causes uncontrollable muscle movements, usually in the mouth area. [14] Newer antipsychotics treat more symptoms with fewer side effects than older medications do. [15]
In cases of severe disorder it may be desirable to directly influence brain activity through electrical activation of the brain or through brain surgery. Electroconvulsive therapy (ECT) (as shown in the figure below) is a medical procedure designed to alleviate psychological disorder in which electric currents are passed through the brain, deliberately triggering a brief seizure. ECT has been used since the 1930s to treat severe depression.
When it was first developed, the procedure involved strapping the patient to a table before the electricity was administered. The patient was knocked out by the shock, went into severe convulsions, and awoke later, usually without any memory of what had happened. Today ECT is used only in the most severe cases when all other treatments have failed, and the practice is more humane. The patient is first given muscle relaxants and a general anesthesia, and precisely calculated electrical currents are used to achieve the most benefit with the fewest possible risks.
ECT is very effective; about 80% of people who undergo three sessions of ECT report dramatic relief from their depression. ECT reduces suicidal thoughts and is assumed to have prevented many suicides. [1] On the other hand, the positive effects of ECT do not always last; over one-half of patients who undergo ECT experience relapse within one year, although antidepressant medication can help reduce this outcome. [2] ECT may also cause short-term memory loss or cognitive impairment. [3] [4]
Although ECT continues to be used, newer approaches to treating chronic depression are also being developed. A newer and gentler method of brain stimulation is transcranial magnetic stimulation (TMS), a medical procedure designed to reduce psychological disorder that uses a pulsing magnetic coil to electrically stimulate the brain as shown in the following figure. TMS seems to work by activating neural circuits in the prefrontal cortex, which is less active in people with depression, causing an elevation of mood. TMS can be performed without sedation, does not cause seizures or memory loss, and may be as effective as ECT. [5] [6] TMS has also been used in the treatment of Parkinson’s disease and schizophrenia.
Still other biomedical therapies are being developed for people with severe depression that persists over years. One approach involves implanting a device in the chest that stimulates the vagus nerve, a major nerve that descends from the brain stem toward the heart. [7] [8] When the vagus nerve is stimulated by the device, it activates brain structures that are less active in severely depressed people.
Psychosurgery, that is, surgery that removes or destroys brain tissue in the hope of improving disorder, is reserved for the most severe cases. The most well-known psychosurgery is the prefrontal lobotomy. Developed in 1935 by Nobel Prize winner Egas Moniz to treat severe phobias and anxiety, the procedure destroys the connections between the prefrontal cortex and the rest of the brain. Lobotomies were performed on thousands of patients. The procedure—which was never validated scientifically—left many patients in worse condition than before, subjecting the already suffering patients and their families to further heartbreak. [9] Perhaps the most notable failure was the lobotomy performed on Rosemary Kennedy, the sister of President John F. Kennedy, which left her severely incapacitated.
There are very few centers that still conduct psychosurgery today, and when such surgeries are performed they are much more limited procedures, such as the cingulotomy. [10] The ability to more accurately image and localize brain structures using modern neuroimaging techniques suggests that more precise and more beneficial forms of psychosurgery may soon be available. [11]
Although the individual therapies that we have discussed so far in this unit focus primarily on the psychological and biological aspects of the bio-psycho-social model of disorder, the social dimension is never out of the picture. Therapists understand that disorder is caused, and potentially prevented, in large part by the people with whom we interact. A person with schizophrenia does not live in a vacuum. He interacts with his family members and with the other members of the community, and the behavior of those people may influence his disease. And depression and anxiety are created primarily by the affected individual’s perceptions (and misperceptions) of the important people around them. Thus prevention and treatment are influenced in large part by the social context in which the person is living.
Practitioners sometimes incorporate the social setting in which disorder occurs by conducting therapy in groups. Group therapy is psychotherapy in which clients receive psychological treatment together with others. A professionally trained therapist guides the group, usually between 6 and 10 participants, to create an atmosphere of support and emotional safety for the participants. [1]
Group therapy provides a safe place where people come together to share problems or concerns, to better understand their own situations, and to learn from and with each other. Group therapy is often cheaper than individual therapy, as the therapist can treat more people at the same time, but economy is only one part of its attraction. Group therapy allows people to help each other, by sharing ideas, problems, and solutions. It provides social support, offers the knowledge that other people are facing and successfully coping with similar situations, and allows group members to model the successful behaviors of other group members. Group therapy makes explicit the idea that our interactions with others may create, intensify, and potentially alleviate disorders.
Group therapy has met with much success in the more than 50 years it has been in use, and it has generally been found to be as effective as, or more effective than, individual therapy. [2] Group therapy is particularly effective for people who have life-altering illness, as it helps them cope better with their disease, enhances the quality of their lives, and in some cases has even been shown to help them live longer. [3]
Sometimes group therapy is conducted with people who are in close relationships. Couples therapy is treatment in which two people who are cohabitating, married, or dating meet together with the practitioner to discuss their concerns and issues about their relationship. These therapies are in some cases educational, providing the couple with information about what is to be expected in a relationship. The therapy may focus on such topics as sexual enjoyment, communication, or the symptoms of one of the partners (e.g., depression).
Family therapy involves families meeting together with a therapist. In some cases the meeting is precipitated by a particular problem with one family member, such as a diagnosis of bipolar disorder in a child. Family therapy is based on the assumption that the problem, even if it is primarily affecting one person, is the result of an interaction among the people in the family.
Group therapy is based on the idea that people can be helped by the positive social relationships that others provide. One way for people to gain this social support is by joining a self-help group, which is a voluntary association of people who share a common desire to overcome psychological disorder or improve their well-being. [4] Self-help groups have been used to help individuals cope with many types of addictive behaviors. Three of the best-known self-help groups are Alcoholics Anonymous, of which there are more than 2 million members in the United States; Gamblers Anonymous; and Overeaters Anonymous.
The idea behind self-help groups is similar to that of group therapy, but the groups are open to a broader spectrum of people. As in group therapy, the benefits include social support, education, and observational learning. Religion and spirituality are often emphasized, and self-blame is discouraged. Regular group meetings are held with the supervision of a trained leader.
The social aspect of disorder is also understood and treated at the community level. Community mental health services are psychological treatments and interventions that are distributed at the community level. Community mental health services are provided by nurses, psychologists, social workers, and other professionals in sites such as schools, hospitals, police stations, drug treatment clinics, and residential homes. The goal is to establish programs that will help people get the mental health services that they need. [1]
Unlike traditional therapy, the primary goal of community mental health services is prevention. Just as widespread vaccination of children has eliminated diseases such as polio and smallpox, mental health services are designed to prevent psychological disorder. [2] Community prevention can be focused on one or more of three levels: primary prevention, secondary prevention, and tertiary prevention.
Primary prevention is prevention in which all members of the community receive the treatment. Examples of primary prevention are programs designed to encourage all pregnant women to avoid cigarettes and alcohol because of the risk of health problems for the fetus, and programs designed to remove dangerous lead paint from homes.
Secondary prevention is more limited and focuses on people who are most likely to need it—those who display risk factors for a given disorder. Risk factors are the social, environmental, and economic vulnerabilities that make it more likely than average that a given individual will develop a disorder. [3] The following presents a list of potential risk factors for psychological disorders.
Finally, tertiary prevention is treatment, such as psychotherapy or biomedical therapy, that focuses on people who are already diagnosed with disorder.
Community mental health workers practicing secondary prevention will focus on youths with these markers of future problems.
Community prevention programs are designed to provide support during childhood or early adolescence with the hope that the interventions will prevent disorders from appearing or will keep existing disorders from expanding. Interventions include such things as help with housing, counseling, group therapy, emotional regulation, job and skills training, literacy training, social responsibility training, exercise, stress management, rehabilitation, family therapy, and removing a child from a stressful or dangerous home situation.
The goal of community interventions is to make it easier for individuals to continue to live a normal life in the face of their problems. Community mental health services are designed to make it less likely that vulnerable populations will end up in institutions or on the streets. In summary, their goal is to allow at-risk individuals to continue to participate in community life by assisting them within their own communities.
Secondary prevention focuses on people who are at risk for disorder or for harmful behaviors. Suicide is a leading cause of death worldwide, and prevention efforts can help people consider alternatives, particularly if it can be determined who is most at risk. Determining whether a person is at risk of suicide is difficult, however, because people are motivated to deny or conceal such thoughts to avoid intervention or hospitalization. One recent study found that 78% of patients who die by suicide explicitly deny suicidal thoughts in their last verbal communications before killing themselves. [4]
Nock and colleagues [5] tested the possibility that implicit measures of the association between the self-concept and death might provide a more direct behavioral marker of suicide risk, one that would allow professionals to determine more accurately than existing self-report measures whether a person is likely to attempt suicide. They measured implicit associations about death and suicide in 157 people seeking treatment at a psychiatric emergency department.
The participants all completed a version of the Implicit Association Test (IAT), which was designed to assess the strength of a person’s mental associations between death and the self. [6] Using a notebook computer, participants classified stimuli representing the constructs of “death” (i.e., die, dead, deceased, lifeless, and suicide) and “life” (i.e., alive, survive, live, thrive, and breathing) and the attributes of “me” (i.e., I, myself, my, mine, and self) and “not me” (i.e., they, them, their, theirs, and other). Response latencies for all trials were recorded and analyzed, and the strength of each participant’s association between “death” and “me” was calculated.
The researchers then followed participants over the next 6 months to test whether the measured implicit association of death with self could be used to predict future suicide attempts. The authors also tested whether scores on the IAT would add to prediction of risk above and beyond other measures of risk, including questionnaire and interview measures of suicide risk. Scores on the IAT predicted suicide attempts in the next 6 months above all the other risk factors that were collected by the hospital staff, including past history of suicide attempts. These results suggest that measures of implicit cognition may be useful for determining risk factors for clinical behaviors such as suicide.
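To make the scoring step concrete, the sketch below computes a simplified implicit association score: the difference between a participant's mean response latency in the two critical pairing conditions, divided by the pooled standard deviation. This is a minimal sketch under that assumption; the function name and latencies are invented for illustration, and the published scoring algorithm used by Nock and colleagues includes additional steps (such as trial trimming) not shown here.

    from statistics import mean, stdev

    def iat_d_score(death_me_paired_ms, death_notme_paired_ms):
        # Simplified D-score: difference in mean response latency between
        # the two pairing conditions, divided by the pooled standard
        # deviation of all latencies. A positive score means the person
        # was faster when "death" and "me" shared a response key,
        # suggesting a stronger implicit death/self association.
        pooled_sd = stdev(death_me_paired_ms + death_notme_paired_ms)
        return (mean(death_notme_paired_ms) - mean(death_me_paired_ms)) / pooled_sd

    # Hypothetical latencies (in milliseconds) for one participant.
    death_me = [612, 655, 590, 640, 618]       # "death" and "me" share a key
    death_not_me = [745, 802, 760, 790, 770]   # "death" and "not me" share a key
    print(round(iat_d_score(death_me, death_not_me), 2))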
We have seen that psychologists and other practitioners employ a variety of treatments in their attempts to reduce the negative outcomes of psychological disorders. But we have not yet considered the important question of whether these treatments are effective, and if they are, which approaches are most effective for which people and for which disorders. Accurate empirical answers to these questions are important as they help practitioners focus their efforts on the techniques that have been proven to be most promising, and will guide societies as they make decisions about how to spend public money to improve the quality of life of their citizens. [1]
Psychologists use outcome research, that is, studies that assess the effectiveness of medical treatments, to determine the effectiveness of different therapies. In these studies the independent variable is the type of the treatment—for instance, whether it was psychological or biological in orientation or how long it lasted. In most cases characteristics of the client (e.g., his or her gender, age, disease severity, and prior psychological histories) are also collected as control variables. The dependent measure is an assessment of the benefit received by the client. In some cases we might simply ask the client if she feels better, and in other cases we may directly measure behavior: Can the client now get in the airplane and take a flight? Has the client remained out of juvenile detention?
In every case the scientists evaluating the therapy must keep in mind the possibility that factors other than the treatment itself are producing the improvement, that some treatments that seem effective might not be, and that some treatments might actually be harmful, at least in the sense that money and time are spent on programs or drugs that do not work.
One threat to the validity of outcome research studies is natural improvement—the possibility that people might get better over time, even without treatment. People who begin therapy or join a self-help group do so because they are feeling bad or engaging in unhealthy behaviors. After being in a program over a period of time, people frequently feel that they are getting better. But it is possible that they would have improved even if they had not attended the program, and that the program is not actually making a difference. To demonstrate that the treatment is effective, the people who participate in it must be compared with another group of people who do not get treatment.
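To see why such a comparison group matters, consider the toy simulation below, in which both treated and untreated clients improve over time. All of the numbers are invented for illustration; the point is simply that the treated group's improvement, viewed on its own, overstates the therapy's true effect.

    import random

    random.seed(1)  # reproducible illustration

    def simulate_improvement(n, natural_gain, treatment_gain=0.0):
        # Each client's symptom improvement = improvement that would have
        # happened anyway (natural_gain) + any true treatment effect
        # (treatment_gain) + random individual variation.
        return [natural_gain + treatment_gain + random.gauss(0, 1.0)
                for _ in range(n)]

    treated = simulate_improvement(100, natural_gain=2.0, treatment_gain=1.0)
    untreated = simulate_improvement(100, natural_gain=2.0)

    print(f"treated improved by   {sum(treated) / len(treated):.2f} points")
    print(f"untreated improved by {sum(untreated) / len(untreated):.2f} points")
    # The treated group improves by about 3 points, but only the roughly
    # 1-point gap between the groups reflects the treatment itself.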
Another possibility is that therapy works, but that it doesn’t really matter which type of therapy it is. Nonspecific treatment effects occur when the patient gets better over time simply by coming to therapy, even though it doesn’t matter what actually happens at the therapy sessions. The idea is that therapy works, in the sense that it is better than doing nothing, but that all therapies are pretty much equal in what they are able to accomplish. Finally, placebo effects are improvements that occur as a result of the expectation that one will get better rather than from the actual effects of a treatment.
Thousands of studies have been conducted to test the effectiveness of psychotherapy, and by and large they find evidence that it works. Some outcome studies compare a group that gets treatment with another (control) group that gets no treatment. For instance, Ruwaard, Broeksteeg, Schrieken, Emmelkamp, and Lange [2] found that patients who interacted with a therapist over a website showed more reduction in symptoms of panic disorder than did a similar group of patients who were on a waiting list but did not get therapy. Although studies such as this one control for the possibility of natural improvement (the treatment group improved more than the control group, which would not have happened if both groups had only been improving naturally over time), they do not control for either nonspecific treatment effects or for placebo effects. The people in the treatment group might have improved simply by being in the therapy (nonspecific effects), or they may have improved because they expected the treatment to help them (placebo effects).
An alternative is to compare a group that gets “real” therapy with a group that gets only a placebo. For instance, Keller and colleagues [3] had adolescents who were experiencing anxiety disorders take pills that they thought would reduce anxiety for 8 weeks. However, one-half of the patients were randomly assigned to actually receive the antianxiety drug Paxil, while the other half received a placebo drug that did not have any medical properties. The researchers ruled out the possibility that only placebo effects were occurring because they found that both groups improved over the 8 weeks, but the group that received Paxil improved significantly more than the placebo group did.
Studies that use a control group that gets no treatment or a group that gets only a placebo are informative, but they also raise ethical questions. If the researchers believe that their treatment is going to work, why would they deprive some of their participants, who are in need of help, of the possibility for improvement by putting them in a control group?
Another type of outcome study compares different approaches with each other. For instance, Herbert and colleagues [4] tested whether adding social skills training to cognitive-behavioral therapy (CBT) improved outcomes for social anxiety disorder beyond those of CBT alone. As you can see in the figure below, they found that people in both groups improved, but CBT coupled with social skills training produced significantly greater gains than CBT alone.
Other studies [5] [6] have compared brief sessions of psychoanalysis with longer-term psychoanalysis in the treatment of anxiety disorder, humanistic therapy with psychodynamic therapy in treating depression, and cognitive therapy with drug therapy in treating anxiety. [7] [8] These studies are advantageous because they compare the specific effects of one type of treatment with another, while allowing all patients to get treatment.
Because there are thousands of studies testing the effectiveness of psychotherapy, and the independent and dependent variables in the studies vary widely, the results are often combined using a meta-analysis. A meta-analysis is a statistical technique that uses the results of existing studies to integrate and draw conclusions about those studies. In one important meta-analysis analyzing the effect of psychotherapy, Smith, Glass, and Miller [1] summarized studies that compared different types of therapy or that compared the effectiveness of therapy against a control group. To find the studies, the researchers systematically searched computer databases and the reference sections of previous research reports to locate every study that met the inclusion criteria. Over 475 studies were located, and these studies used over 10,000 research participants.
The results of each of these studies were systematically coded, and a measure of the effectiveness of treatment known as the effect size was created for each study. Smith and her colleagues found that the average effect size for the influence of therapy was 0.85, indicating that psychotherapy had a relatively large positive effect on recovery. What this means is that, overall, receiving psychotherapy for behavioral problems is substantially better for the individual than not receiving therapy as shown in the following figure. Although they did not measure it, psychotherapy presumably has large societal benefits as well—the cost of the therapy is likely more than made up for by the increased productivity of those who receive it.
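The effect size used here is essentially a standardized mean difference (Cohen's d): the treatment group's average outcome minus the control group's, divided by a pooled standard deviation. The sketch below illustrates that computation with invented study summaries; real meta-analyses such as Smith and colleagues' also weight studies by sample size and apply corrections this sketch omits.

    from statistics import mean

    def effect_size(treatment_mean, control_mean, pooled_sd):
        # Standardized mean difference (Cohen's d): how many standard
        # deviations better the treated group's outcome is than the
        # control group's.
        return (treatment_mean - control_mean) / pooled_sd

    # Invented outcome summaries for three hypothetical studies.
    study_effects = [
        effect_size(24.0, 18.0, 8.0),    # d = 0.75
        effect_size(51.0, 42.0, 9.0),    # d = 1.00
        effect_size(13.0, 10.0, 3.75),   # d = 0.80
    ]
    print(f"average effect size: {mean(study_effects):.2f}")  # 0.85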
Other meta-analyses have also found substantial support for the effectiveness of specific therapies, including cognitive therapy, CBT, [2] [3] couples and family therapy, [4] and psychoanalysis. [5] On the basis of these and other meta-analyses, a list of empirically supported therapies—that is, therapies that are known to be effective—has been developed. [6] [7] These therapies include cognitive therapy and behavioral therapy for depression; cognitive therapy, exposure therapy, and stress inoculation training for anxiety; CBT for bulimia; and behavior modification for bed-wetting.
Smith, Glass, and Miller [1] did not find much evidence that any one type of therapy was more effective than any other type, and more recent meta-analyses have not tended to find many differences either. [8] What this means is that a good part of the effect of therapy is nonspecific, in the sense that simply coming to any type of therapy is helpful in comparison to not coming. This is true partly because there are fewer distinctions among the ways that different therapies are practiced than the theoretical differences among them would suggest. What a good therapist practicing psychodynamic approaches does in therapy is often not much different from what a humanist or a cognitive-behavioral therapist does, and so no one approach is really likely to be better than the other.
What all good therapies have in common is that they give people hope; help them think more carefully about themselves and about their relationships with others; and provide a positive, empathic, and trusting relationship with the therapist—the therapeutic alliance. [9] This is why many self-help groups are also likely to be effective and perhaps why having a psychiatric service dog may also make us feel better.
Although fewer such studies have been conducted, meta-analyses also support the effectiveness of drug therapies for psychological disorder. For instance, the use of psychostimulants to reduce the symptoms of attention-deficit/hyperactivity disorder (ADHD) is well established, and many studies find that the positive and negative symptoms of schizophrenia are substantially reduced by the use of antipsychotic medications. [1]
People who take antidepressants for mood disorders or antianxiety medications for anxiety disorders almost always report feeling better, although drugs are less helpful for phobic disorder and obsessive-compulsive disorder. Some of these improvements are almost certainly the result of placebo effects, [2] but the medications do work, at least in the short term. An analysis of U.S. Food and Drug Administration databases found effect sizes of 0.26 for Prozac, 0.26 for Zoloft, 0.24 for Celexa, 0.31 for Lexapro, and 0.30 for Cymbalta. The overall average effect size for antidepressant medications approved by the FDA between 1987 and 2004 was 0.31. [3] [4]
One problem with drug therapies is that although they provide temporary relief, they don't treat the underlying cause of the disorder. Once the patient stops taking the drug, the symptoms often return in full force. In addition, many drugs have negative side effects, and some also have the potential for addiction and abuse. Different people have different reactions, and all drugs carry warning labels. As a result, although these drugs are frequently prescribed, doctors attempt to prescribe the lowest doses possible for the shortest possible periods of time.
Older patients face special difficulties when they take medications for mental illness. Older people are more sensitive to drugs, and drug interactions are more likely because older patients tend to take a variety of different drugs every day. They are more likely to forget to take their pills, to take too many or too few, or to mix them up due to poor eyesight or faulty memory.
Like all types of drugs, medications used in the treatment of mental illnesses can carry risks to an unborn infant. Tranquilizers should not be taken by women who are pregnant or expecting to become pregnant, because they may cause birth defects or other infant problems, especially if taken during the first trimester. Some selective serotonin reuptake inhibitors (SSRIs) may also increase risks to the fetus, [5] [6] as do antipsychotics. [7]
Decisions on medication should be carefully weighed and based on each person’s needs and circumstances. Medications should be selected based on available scientific research, and they should be prescribed at the lowest possible dose. All people must be monitored closely while they are on medications.
Measuring the effectiveness of community action approaches to mental health is difficult because the programs occur in community settings, impact a wide variety of people, and rarely offer valid outcome measures that are easy to find and assess. Nevertheless, research has found that a variety of community interventions can be effective in preventing a variety of psychological disorders. [8]
Data suggest that federally funded prevention programs such as the Special Supplemental Program for Women, Infants, and Children (WIC), which provides federal grants to states for supplemental foods, health-care referral, and nutrition education for low-income women and their children, are successful. WIC mothers have higher birth weight babies and lower infant mortality than other low-income mothers. [9] And the average blood-lead levels among children have fallen approximately 80% since the late 1970s as a result of federal legislation designed to remove lead paint from housing. [10]
Although some of the many community-based programs designed to reduce alcohol, tobacco, and drug abuse; violence and delinquency; and mental illness have been successful, the changes brought about by even the best of these programs are, on average, modest. [11] [12] This does not necessarily mean that the programs are not useful. What is important is that community members continue to work with researchers to help determine which aspects of which programs are most effective and to concentrate efforts on the most productive approaches. [13] The most beneficial preventive interventions for young people involve coordinated, systemic efforts to enhance their social and emotional competence and health. Many psychologists continue to work to promote policies that support community prevention as a model of preventing disorder.
Please begin this unit on consciousness by watching the following video. It features Derren Brown, an English illusionist, mentalist, and professional skeptic.
Mentalism is a type of performing art in which the performer appears to read the mind of another. Such powers may seem to be the result of psychic or paranormal practices, but in reality they are the result of a combination of suggestion, misdirection, and psychology. Derren Brown himself claims no supernatural ability and often denounces those who do. As you witnessed at the end of the video, Derren Brown was able to strongly influence the behavior of the advertisers without their being aware that they had been manipulated. Although the advertisers had been shown all of the stimuli that later made it into what they thought was their original art piece, it did not register in their conscious awareness.
Consciousness is defined as our subjective awareness of ourselves and our environment. [1] The experience of consciousness is fundamental to human nature. We all know what it means to be conscious, and we assume (although we can never be sure) that others experience their consciousness similarly to how we experience ours.
The study of consciousness has long been important to psychologists and plays a role in many important psychological theories. For instance, Sigmund Freud’s personality theories differentiated between the unconscious and the conscious aspects of behavior. According to Freud, our unconscious mind could influence conscious aspects of our behaviors. Some examples he gave included memory repression, phobias, denial, and "Freudian slips."
Present-day psychologists distinguish between automatic (unconscious, or autonomic) and controlled (conscious) behaviors and between implicit (unconscious) and explicit (conscious) memory. [2] [3]
There are some useful distinctions between the terms we just introduced. Let’s take a brief look at them.
Some philosophies and religions argue that the mind (or soul) and the body are separate entities, a concept called dualism. However, most psychologists agree that consciousness and the mind are biologically based and exist within the brain, a concept called monism.
The differences between monism and dualism may seem subtle, but these distinctions are extremely important for the study of psychology. Put simply, in order to study psychology experimentally, we must assume that conscious behaviors are the result of, and are not separate from, the physical brain. Fields such as cognitive neuroscience, which seeks to correlate brain activity with behavior through the use of functional brain imaging, require the assumption that monism is true. After all, it makes little sense to look at the brain to study memory, learning, and other behaviors if those behaviors actually arise from a nonmaterial source separate from the brain itself. So remember, although either side can be argued for in a philosophical debate, we must assume that monism is correct in the study of psychology.
The study of consciousness is also important to the fundamental psychological question regarding the presence of free will. Although we may understand and believe that some of our behaviors are caused by forces outside our awareness (i.e., unconscious), we nevertheless believe we have control over the behaviors we engage in. It is possible, however, for an individual to engage in a complex behavior, such as driving a car and causing an accident, without being at all conscious of his or her actions. Although such events are rare and even shocking, psychologists are increasingly certain that a great deal of our behavior is caused by processes of which we are unaware and over which we have little or no control. [4] [5]
Our experience of consciousness is functional because we use it to guide and control our behavior and to think logically about problems. [6] Consciousness allows us to plan activities and to monitor our progress toward the goals we set for ourselves. In addition, consciousness is fundamental to our sense of morality—we believe we have the free will to perform moral actions while avoiding immoral behaviors.
On the other hand, consciousness may become aversive. For instance, negative emotional reactions occur when we are not living up to our anticipated goals or expectations or when we believe that other people perceive us negatively. In these cases, we may engage in maladaptive behaviors, such as the use of alcohol or other psychoactive drugs, to escape from consciousness. [7]
Because the brain varies in its current level and type of activity, consciousness is transitory. For instance, if we drink too much coffee or beer, the caffeine or alcohol influences the activity in our brain. Consequently, our consciousness may change. Being anesthetized before an operation or suffering a concussion from a knock on the head also alters brain activity and may result in loss of consciousness. We also lose consciousness when we sleep, and it is with this altered state of consciousness that we begin our unit.
The lives of all organisms, including humans, are influenced by regularly occurring cycles of behaviors known as biological rhythms. One important biological rhythm is the daily circadian rhythm (from the Latin circa, meaning “about” or “approximately,” and dian, meaning “daily”) that guides the daily waking and sleeping cycle in many animals, including humans.
Sleep is influenced by ambient light. The ganglion cells in the retina send signals to a brain area above the thalamus called the suprachiasmatic nucleus, which is the body’s primary circadian “pacemaker.” The suprachiasmatic nucleus analyzes the strength and duration of the light stimulus and sends signals to the pineal gland when the ambient light level is low or its duration is short. In response, the pineal gland secretes melatonin, a powerful hormone that facilitates the onset of sleep.
Although our conscious state changes as we sleep, the brain nevertheless remains active. The patterns of sleep have been tracked in thousands of research participants who have spent nights sleeping in research labs while their brain waves were recorded by monitors, such as an electroencephalogram, or EEG.
Sleep researchers have found that sleeping people undergo a fairly consistent pattern of sleep stages, each cycle lasting about 90 minutes. Although you might not be aware of it, you cycle through different stages of sleep as the night progresses. Each sleep stage has its own distinct pattern of brain activity. [1] As you can see in the following figure, these stages are of two major types: rapid eye movement (REM) sleep (dreaming) and slow-wave sleep (deep sleep). People seem to need an adequate amount of sleep as well as a certain amount of REM and slow-wave sleep each night in order to feel rested. Slow-wave sleep is part of a larger sleep type known as NREM (non-REM) sleep, which is often divided into three stages: N1, N2, and N3.
During REM sleep, our awareness of external events is dramatically reduced, and consciousness is dominated primarily by internally generated images and a lack of overt thinking. [2]
As you can see in the following figure, the brain waves recorded by an EEG as we sleep show that the brain's activity changes during each stage of sleeping. When we are awake, our brain activity is characterized by the presence of very fast beta waves. When we first begin to fall asleep, the waves become slower and more regular (alpha waves), and as we move into stage N1 sleep, characterized by drowsiness, the brain begins to produce even slower theta waves. During stage N1 sleep, some muscle tone is lost, as well as most awareness of the environment. Some people may experience sudden jerks or twitches and even vivid hallucinations during this initial stage of sleep.
Normally, if we are allowed to keep sleeping, we move from stage N1 to stage N2 sleep. During stage N2, muscular activity is further decreased and conscious awareness of the environment is lost. This stage typically represents about half of the total sleep time in normal adults. Stage N2 sleep is characterized by theta waves interspersed with bursts of rapid brain activity known as sleep spindles.
Stage N3, also known as slow-wave sleep, is the deepest level of sleep, characterized by an increased proportion of very slow delta waves. This is the stage in which most sleep abnormalities, such as sleepwalking, sleeptalking, nightmares, and bedwetting occur. Some skeletal muscle tone remains, making it possible for affected individuals to rise from their beds and engage in sometimes very complex behaviors, but consciousness is distant. Even in the deepest sleep, however, we are still aware of the external world. If smoke enters the room or if we hear the cry of a baby, we are likely to react even though we are sound asleep. These occurrences demonstrate the extent to which we process information outside consciousness.
After falling initially into a very deep sleep, the brain begins to become more active again, and we normally move into the first period of REM sleep about 90 minutes after falling asleep. REM sleep is accompanied by an increase in heart rate, facial twitches, and the repeated rapid eye movements that give this stage its name. People who are awakened during REM sleep almost always report that they were dreaming, while those awakened in other stages of sleep report dreams much less often. REM sleep is also emotional sleep. Activity in the limbic system, including the amygdala, is increased during REM sleep, and the genitals become aroused even if the content of the dreams we are having is not sexual. A typical 25-year-old man may have an erection nearly half of the night, and the common “morning erection” is left over from the last REM period before waking.
Normally, we go through several cycles of REM and NREM sleep each night. The length of the REM portion of the cycle tends to increase through the night, from about 5 to 10 minutes early in the night to 15 to 20 minutes shortly before awakening in the morning. Dreams also tend to become more elaborate and vivid as the night goes on. Eventually, as the sleep cycle finishes, the brain resumes its faster alpha and beta waves and we awake, normally refreshed.
According to a recent poll, [1] about one-fourth of American adults say they get a good night’s sleep only a few nights a month or less. These people are suffering from a sleep disorder known as insomnia, defined as persistent difficulty falling or staying asleep. Insomnia can be caused by a wide range of things, from physical and psychological illness to simply staying out later than usual on the weekend.
Sometimes the sleep that the insomniac does get is disturbed and nonrestorative, and the lack of quality sleep produces impairment of functioning during the day. Ironically, the problem may be compounded by people’s anxiety over insomnia itself: Their fear of being unable to sleep may wind up keeping them awake. Some people may also develop a conditioned anxiety to the bedroom or the bed.
People who have difficulty sleeping may turn to drugs to help them sleep. Barbiturates, benzodiazepines, and other sedatives are frequently marketed and prescribed as sleep aids, but they may interrupt the natural stages of the sleep cycle and ultimately are likely to do more harm than good. In some cases, they may also promote dependence.
Another common sleep problem is sleep apnea, a sleep disorder characterized by pauses in breathing that last at least 10 seconds during sleep. [2] In addition to preventing restorative sleep, sleep apnea can cause high blood pressure and may raise the risk of stroke and heart attack. [3] Most sleep apnea is caused by an obstruction of the walls of the throat that occurs when we fall asleep. It is most common in obese or older individuals who have lost muscle tone and is particularly common in men. Sleep apnea caused by obstructions is usually treated with an air machine that uses a mask to create a continuous pressure that prevents the airway from collapsing or with a mouthpiece that keeps the airway open. If all other treatments have failed, sleep apnea may be treated with surgery to open the airway.
Narcolepsy is a disorder characterized by extreme daytime sleepiness with frequent episodes of “nodding off.” The syndrome may also be accompanied by attacks of cataplexy in which the individual loses muscle tone, resulting in a partial or complete collapse. It is estimated that at least 200,000 Americans suffer from narcolepsy, although only about a quarter of these people have been diagnosed. [4] Narcolepsy is in part the result of genetics—people who suffer from the disease lack neurotransmitters that are important in keeping us alert [5] —and in part the result of a lack of deep sleep. While most people descend through the sequence of sleep stages, then move back up to REM sleep soon after falling asleep, narcolepsy sufferers move directly into REM and undergo numerous awakenings during the night, often preventing them from getting good sleep.
Narcolepsy can be treated with stimulants, such as amphetamines, to counteract the daytime sleepiness, or with antidepressants to treat a presumed underlying depression. However, since these drugs further disrupt already abnormal sleep cycles, these approaches may, in the long run, make the problem worse. Many sufferers find relief by taking a number of planned short naps during the day, and some individuals may find it easier to work in jobs that allow them to sleep during the day and work at night.
Other sleep disorders occur when cognitive or motor processes that should be turned off or reduced in magnitude during sleep operate at higher than normal levels. [6] One example is somnambulism (sleepwalking), in which the person leaves the bed and moves around while still asleep. Sleepwalking is more common in childhood, with the most frequent occurrences around the age of 12 years. About 4% of adults experience somnambulism. [6]
Sleep terrors are a disruptive sleep disorder, most frequently experienced in childhood, that may involve loud screams and intense panic. The sufferer cannot wake from sleep even though he or she is trying to. In extreme cases, sleep terrors may result in bodily harm or property damage as the sufferer moves about abruptly. Up to 3% of adults suffer from sleep terrors, which typically occur in sleep stage N3. [6]
Other sleep disorders include bruxism, in which the sufferer grinds his or her teeth during sleep; restless legs syndrome, in which the sufferer reports an itching, burning, or otherwise uncomfortable feeling in his or her legs, usually exacerbated when resting or asleep; and periodic limb movement disorder, which involves sudden involuntary movement of limbs. The latter can cause sleep disruption and injury for both the sufferer and bed partner.
Although many sleep disorders occur during non-REM sleep, REM sleep behavior disorder [6] is a condition in which people (usually middle-aged or older men) engage in vigorous and bizarre physical activities during REM sleep in response to intense, violent dreams. As their actions may injure themselves or their sleeping partners, this disorder, thought to be neurological in nature, is normally treated with medications.
We already discussed actions you can take if you have trouble sleeping, but how much sleep do you really need?
Our preferred sleep times and our sleep requirements vary throughout our life cycle. Newborns tend to sleep between 16 and 18 hours per day, preschoolers tend to sleep between 10 and 12 hours per day, school-aged children and teenagers usually prefer at least 9 hours of sleep per night, and most adults say they require 7 to 8 hours per night. [1] [2] There are also individual differences in need for sleep. Some people do quite well with fewer than 6 hours of sleep per night, whereas others need 9 hours or more. Studies suggest that adults should get between 7 and 9 hours of sleep per night, and yet Americans now average fewer than 7 hours. In fact, the most recent research by the National Sleep Foundation found that the average U.S. adult reported getting only 6.7 hours of sleep per night, which is less than the recommended range proposed by the foundation. [2]
Getting needed rest is difficult in part because school and work schedules still follow the early-to-rise timetable that was set years ago. We tend to stay up late to enjoy activities in the evening but then are forced to get up early to go to work or school. The situation is particularly bad for college students, who are likely to combine a heavy academic schedule with an active social life and who may, in some cases, also work. Getting enough sleep is a luxury that many of us seem to be unable or unwilling to afford, and yet sleeping is one of the most important things we can do for ourselves. Continued over time, a nightly deficit of even only 1 or 2 hours can have a substantial impact on mood and performance.
Sleep has a vital restorative function, and a prolonged lack of sleep results in increased anxiety, diminished performance, and, if severe and extended, may even result in death. Many road accidents involve sleep deprivation, and people who are sleep deprived show decrements in driving performance similar to those who have ingested alcohol. [3] [4] Poor treatment by doctors [5] and a variety of industrial accidents have also been traced in part to the effects of sleep deprivation.
In 1964, 17-year-old high school student Randy Gardner remained awake for 264 hours (11 days) in order to set a new Guinness World Record. At the request of his worried parents, he was monitored by a U.S. Navy psychiatrist, Lieutenant Commander John J. Ross. This chart maps the progression of his behavioral changes over the 11 days. [6]
Good sleep is also important to our health and longevity. It is no surprise that we sleep more when we are sick, because sleep works to fight infection. Sleep deprivation suppresses immune responses that fight off infection, and it can lead to obesity, hypertension, and memory impairment. [7] [8] Sleeping well can even save our lives. Dew and colleagues [9] found that older adults who had better sleep patterns also lived longer.
Dreams are the succession of images, thoughts, sounds, and emotions that passes through our minds while we sleep. When people are awakened from REM sleep, they usually report that they have been dreaming, suggesting that people normally dream several times a night but that most dreams are forgotten on awakening. [1] The content of our dreams generally relates to our everyday experiences and concerns and frequently to our fears and failures. [2] [3]
Many cultures regard dreams as having great significance for the dreamer, either by revealing something important about the dreamer’s present circumstances or predicting his or her future. The Austrian psychologist Sigmund Freud [4] analyzed the dreams of his patients to help him understand their unconscious needs and desires, and psychotherapists still make use of this technique today. Freud believed that the primary function of dreams was wish fulfillment, or the idea that dreaming allows us to act out the desires that we must repress during the day. He differentiated between the manifest content of the dream (i.e., its literal actions) and its latent content (i.e., the hidden psychological meaning of the dream). Freud believed that the real meaning of dreams is often suppressed by the unconscious mind in order to protect the individual from thoughts and feelings that are hard to cope with. By uncovering the real meaning of dreams through psychoanalysis, Freud believed that people could better understand their problems and resolve the issues that create difficulties in their lives.
Although Freud and others have focused on the meaning of dreams, other theories about the causes of dreams are less concerned with their content. One possibility is that we dream primarily to help with consolidation, or the moving of information into long-term memory. [5] [6] Rauchs, Desgranges, Foret, and Eustache [7] found that rats that had been deprived of REM sleep after learning a new task were less able to perform the task again later than were rats that had been allowed to dream, and these differences were greater on tasks that involved learning unusual information or developing new behaviors. Payne and Nadel [8] argued that the content of dreams is the result of consolidation—we dream about the things that are being moved into long-term memory. Thus dreaming may be an important part of the learning that we do while sleeping. [9]
The activation-synthesis theory of dreaming [10] [11] proposes still another explanation for dreaming: that dreams are our brain’s interpretation of the random firing of neurons in the brain stem. According to this approach, the signals from the brain stem are sent to the cortex, just as they are when we are awake, but because the pathways from the cortex to skeletal muscles are disconnected during REM sleep, the cortex does not know how to interpret the signals. As a result, the cortex strings the messages together into the coherent stories we experience as dreams.
Although researchers are still trying to determine the exact causes of dreaming, one thing remains clear—we need to dream. If we are deprived of REM sleep, we quickly become less able to engage in the important tasks of everyday life, until we are finally able to dream again.
A psychoactive drug is a chemical that changes our states of consciousness, and particularly our perceptions and moods. This includes over-the-counter drugs as well as prescription drugs and those that are taken illegally for recreational purposes. It is important to remember that psychoactive drugs can include things that we do not typically associate with drug use, such as certain materials found in everyday foods and beverages. While we typically use the word drug to refer to illegal substances and medication, it can be used to describe all chemicals that change our state of consciousness, perceptions, and mood. The four primary classes of psychoactive drugs are stimulants, depressants, opioids, and hallucinogens.
Psychoactive drugs affect consciousness by influencing how neurotransmitters operate at the synapses of the central nervous system (CNS). Some psychoactive drugs are agonists, which mimic the operation of a neurotransmitter; some are antagonists, which block the action of a neurotransmitter; and some work by blocking the reuptake of neurotransmitters at the synapse.
In some cases, the effects of psychoactive drugs mimic other naturally occurring states of consciousness. For instance, sleeping pills are prescribed to create drowsiness, and benzodiazepines are prescribed to create a state of relaxation. In other cases, psychoactive drugs are taken for recreational purposes with the goal of creating pleasurable states of consciousness or of escaping our normal consciousness.
The use of psychoactive drugs, especially illegal drugs, has the potential to create very negative side effects. This does not mean that all drugs are dangerous but rather that all drugs can be dangerous, particularly if they are used regularly over long periods of time. Psychoactive drugs create negative effects not so much through their initial use but through the continued and increased use that ultimately may lead to drug abuse.
The problem is that many drugs create tolerance, or an increase in the dose required to produce the same effect. The consequence of tolerance is that the user must increase the dosage and/or the number of times per day the drug is taken. As the use of the drug increases, the user may develop a dependence, a need to use a drug or other substance regularly. Dependence can be psychological, in which the drug is desired and has become part of everyday life but does not produce serious physical effects if it is not obtained; or physical, in which serious physical and mental effects appear when the drug is withdrawn. Cigarette smokers who try to quit, for example, experience physical withdrawal symptoms, such as becoming tired and irritable. In addition, smokers experience psychological cravings to enjoy a cigarette in particular situations, such as after a meal or when they are with friends.
Users may wish to stop using the drug, but when they reduce their dosage, they experience withdrawal, or negative experiences that accompany reducing or stopping drug use, including physical pain and other symptoms. When the user powerfully craves the drug and is driven to repeatedly seek it out, with potentially high physical, social, financial, and/or legal cost, we say that he or she has developed an addiction to the drug.
It is a common belief that addiction is an overwhelming, irresistibly powerful force and that withdrawal from drugs is always an unbearably painful experience. But the reality is more complicated and in many cases less extreme. For one, even drugs that we do not generally think of as being addictive, such as caffeine, can be very difficult to quit for some people. On the other hand, drugs that are normally associated with addiction, including amphetamines, cocaine, and heroin, do not immediately create addiction in their users. Even for a highly addictive drug like cocaine, only about 15% of users become addicted. [1] [2] Furthermore, the rate of addiction is lower for those who are taking drugs for medical reasons than for those who are using drugs recreationally. Patients who have become physically dependent on morphine administered during the course of medical treatment for pain or a disease are able to be rapidly weaned off the drug afterward, without future addiction. Robins, Davis, and Goodwin [3] found that the majority of soldiers who had become addicted to morphine while overseas were quickly able to stop using after returning home.
This does not mean that using recreational drugs is not dangerous. For people who do become addicted to drugs, the success rate of recovery is low. Recreational drugs are generally illegal and carry with them potential criminal consequences if a person is caught and arrested. The way drugs and other substances are taken can also negatively impact the user's health.
Furthermore, the quality and contents of illegal drugs are generally unknown, and the doses can vary substantially from purchase to purchase.
Another problem is the unintended consequences of combining drugs, which can produce serious side effects. Combining drugs is dangerous because their combined effects on the CNS can increase dramatically and can lead to accidental or even deliberate overdoses. For instance, ingesting alcohol or benzodiazepines along with the usual dose of heroin is a frequent cause of overdose deaths in opiate addicts. Furthermore, combining alcohol and cocaine can have a dangerous impact on the cardiovascular system. [4]
Although all recreational drugs are dangerous, some can be more deadly than others. One way to quantify a recreational drug's danger is to calculate its safety ratio: the dose that is likely to be fatal divided by the normal dose needed to feel the effects of the drug. Drugs with lower ratios are more dangerous because the difference between the normal and the lethal dose is small. For instance, heroin has a safety ratio of 6 because the average fatal dose is only six times greater than the average effective dose. Marijuana, by contrast, has a safety ratio of 1,000. This is not to say that smoking marijuana cannot be harmful, but a fatal dose is far less likely than with heroin. The safety ratios of common recreational drugs are shown in the following table.
[Table: Popular Recreational Drugs and Their Safety Ratios. Drugs with lower safety ratios have a greater risk of brain damage and death. From Flat World Knowledge, Introduction to Psychology. Source: Gable, R. (2004). Comparison of acute lethal toxicity of commonly abused psychoactive substances. Addiction, 99(6), 686–696. CC-BY-NC-SA.]
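Because the safety ratio is a simple quotient, the calculation can be sketched in a few lines. The doses below are placeholders in arbitrary units, chosen only to reproduce the ratios quoted in this section.

    def safety_ratio(fatal_dose, effective_dose):
        # Safety ratio = dose likely to be fatal / normal effective dose.
        # A lower ratio means a narrower margin between the dose that
        # produces the drug's effects and the dose that can kill.
        return fatal_dose / effective_dose

    # Placeholder doses in arbitrary units, chosen to match the ratios
    # quoted in the text.
    print(safety_ratio(fatal_dose=6, effective_dose=1))      # heroin: 6
    print(safety_ratio(fatal_dose=15, effective_dose=1))     # cocaine: 15
    print(safety_ratio(fatal_dose=1000, effective_dose=1))   # marijuana: 1,000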
A stimulant is a psychoactive drug that operates by blocking the reuptake of dopamine, norepinephrine, and serotonin in the synapses of the CNS. Because reuptake into the presynaptic cell is blocked, these neurotransmitters stay in the synapse longer and remain active in the brain, resulting in an increase in the activity of the sympathetic division of the autonomic nervous system (ANS). Effects of stimulants include increased heart and breathing rates, pupil dilation, and increases in blood sugar accompanied by decreases in appetite. For these reasons, stimulants are frequently used to help people stay awake and to control weight.
Used in moderation, some stimulants may increase alertness, but used in an irresponsible fashion, they can quickly create dependency. A major problem is the “crash” that results when the drug loses its effectiveness and the activity of the neurotransmitters returns to normal. The withdrawal from stimulants can create profound depression and lead to an intense desire to repeat the high.
Caffeine is a bitter psychoactive drug found in the beans, leaves, and fruits of plants, where it acts as a natural pesticide. Caffeine acts as a mood enhancer and provides energy.
Although the U.S. Food and Drug Administration lists caffeine as a safe food substance, it has at least some characteristics of dependence. People who reduce their caffeine intake often report being irritable, restless, and drowsy. They can also experience strong headaches, and these withdrawal symptoms may last up to a week. Most experts feel that using small amounts of caffeine during pregnancy is safe, but larger amounts of caffeine can be harmful to the fetus. [1]
Nicotine is a psychoactive drug found in the nightshade family of plants, where it acts as a natural pesticide. Nicotine is the main cause of the dependence-forming properties of tobacco use, and tobacco use is a major health threat. Nicotine creates both psychological and physical addiction, and it is one of the hardest addictions to break. Nicotine content in cigarettes has slowly increased over the years, making quitting smoking more and more difficult. Nicotine is also found in smokeless (chewing) tobacco.
People who want to quit smoking sometimes use other drugs to help them. For instance, the prescription drug Chantix acts as an antagonist, binding to nicotine receptors in the synapse, which prevents users from receiving the normal stimulant effect when they smoke. At the same time, the drug releases dopamine, the reward neurotransmitter. In this way, Chantix dampens nicotine withdrawal symptoms and cravings. In many cases, people are able to get past the physical dependence, allowing them to quit smoking at least temporarily. In the long run, however, the psychological enjoyment of smoking may lead to relapse.
Cocaine is an addictive drug obtained from the leaves of the coca plant. In the late 19th and early 20th centuries, it was a primary constituent in many popular tonics and elixirs, and it was one of the original ingredients in Coca-Cola, although it was removed in 1905. Today cocaine is taken illegally as a recreational drug.
Cocaine has a variety of adverse effects on the body. It constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can cause headaches, abdominal pain, and nausea. Since cocaine also tends to decrease appetite, chronic users may become malnourished. The intensity and duration of cocaine’s effects, which include increased energy and reduced fatigue, depend on how the drug is taken. The faster the drug is absorbed into the bloodstream and delivered to the brain, the more intense the high. Injecting or smoking cocaine produces a faster, stronger high than snorting it. However, the faster the drug is absorbed, the faster the effects subside. The high from snorting cocaine may last 30 minutes, whereas the high from smoking “crack” cocaine may last only 10 minutes. In order to sustain the high, the user must administer the drug again, which may lead to frequent use, often in higher doses, over a short period of time. [2] Cocaine has a safety ratio of 15, making it a very dangerous recreational drug.
Amphetamine is a stimulant that produces increased wakefulness and focus along with decreased fatigue and appetite. Amphetamine is used in prescription medications to treat attention deficit hyperactivity disorder (ADHD) and narcolepsy and to control appetite. Some brand names of amphetamines are Adderall, Benzedrine, Dexedrine, and Vyvanse. But amphetamines are also used illegally as recreational drugs (“speed”). The methylated version of amphetamine, methamphetamine (“meth” or “crank”), is currently favored by users, partly because it is available in ampoules ready for use by injection. [3] Meth is a highly dangerous drug with a safety ratio of only 10.
Amphetamines may produce a very high level of tolerance, leading users to increase their intake, often in “jolts” taken every half hour or so. Although the level of physical dependence is small, amphetamines may produce very strong psychological dependence, effectively amounting to addiction. Continued use of stimulants may result in severe psychological depression. The effects of the stimulant methylenedioxymethamphetamine (MDMA), also known as Ecstasy, provide a good example. MDMA is a strong stimulant that very effectively blocks the reuptake of serotonin, dopamine, and norepinephrine. Indeed, it is so effective that, when used repeatedly, it can seriously deplete the amount of neurotransmitters available in the brain, producing a catastrophic mental and physical “crash” that results in serious, long-lasting depression. MDMA also affects the temperature-regulating mechanisms of the brain, so in high doses, and especially when combined with vigorous physical activity like dancing, it can cause the body to become so drastically overheated that users can literally “burn up” and die from hyperthermia and dehydration.
In contrast to stimulants, which increase neural activity, a depressant slows consciousness: it is a psychoactive drug that reduces the activity of the CNS. Depressants are widely used as prescription medicines to relieve pain, to lower heart rate and respiration, and as anticonvulsants. Depressants change consciousness by increasing the activity of the neurotransmitter GABA and decreasing the activity of the neurotransmitter acetylcholine, usually at the level of the thalamus and the reticular formation. The outcome of depressant use (similar to the effects of sleep) is a reduction in the transmission of impulses from the lower brain to the cortex. [1]
The most common depressant is alcohol, a colorless liquid produced by the fermentation of sugar or starch, which is the intoxicating agent in fermented drinks. Alcohol is the oldest and most widely used drug of abuse in the world. Note that alcohol is not a “safe” drug by any means—its safety ratio is only 10.
Alcohol consumption is measured as blood alcohol concentration (BAC), the percentage of alcohol in the blood. Different concentrations of alcohol in the human body have different effects on the individual, although the effects of alcohol on any given person can also vary greatly depending on factors such as hydration and tolerance. Therefore, the blood alcohol concentrations given in the activity below are only estimates used for illustrative purposes.
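To make the idea of BAC more concrete, here is a minimal sketch of the classic Widmark formula, one common way such estimates are produced. The formula and its constants are textbook approximations, not part of this course’s activity, and the function name and parameter values here are purely illustrative.

```python
# A rough sketch of the Widmark formula for estimating BAC.
# The constants (r = 0.68 for men, 0.55 for women; elimination of
# about 0.015 percentage points per hour) are common textbook
# approximations; real BAC varies with the factors noted above.

def estimate_bac(alcohol_grams, body_weight_kg, r, hours_elapsed):
    """Estimate blood alcohol concentration in grams per 100 mL (%)."""
    bac = (alcohol_grams / (body_weight_kg * 1000 * r)) * 100
    bac -= 0.015 * hours_elapsed  # average metabolic elimination
    return max(bac, 0.0)

# Roughly two standard U.S. drinks (28 g of ethanol) for a 70 kg man,
# one hour after drinking:
print(round(estimate_bac(28, 70, r=0.68, hours_elapsed=1), 3))  # ~0.044
```

Even this simple model shows why the numbers in the activity are only estimates: the same two drinks produce a noticeably different BAC for a lighter or heavier person, and the result keeps falling as time passes.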
Alcohol use is costly to societies because so many people abuse alcohol and because judgment after drinking can be substantially impaired. It is estimated that almost half of automobile fatalities are caused by alcohol use, and excessive alcohol consumption is involved in a majority of violent crimes, including rape and murder. [2] Alcohol increases the likelihood that people will respond aggressively to provocations. [3] [4] [5] Even people who are not normally aggressive may react with aggression when they are intoxicated. Alcohol use is also implicated in rioting, unprotected sex, and other negative outcomes.
Alcohol increases aggression in part because it reduces the ability of the person who has consumed it to inhibit his or her aggression. [6] When people are intoxicated, they become more self-focused and less aware of the social situation. As a result, they become less likely to notice the social constraints that normally prevent them from engaging aggressively, and they are less likely to use those social constraints to guide them. For instance, we might normally notice the presence of a police officer or other people around us, which would remind us that being aggressive is not appropriate. But when we are drunk, we are less likely to be so aware. The narrowing of attention that occurs when we are intoxicated also prevents us from anticipating the negative outcomes of our aggression. When we are sober, we realize that being aggressive may produce a host of problems, but we are less likely to realize these potential consequences when we have been drinking. [7] Alcohol also influences aggression through expectations. In other words, if we expect that alcohol will make us more aggressive, then we tend to become more aggressive when we drink.
Barbiturates are depressants commonly prescribed as sleeping pills and painkillers; they can produce effects ranging from mild sedation to total anesthesia. Brand names include Luminal (phenobarbital), Mebaral, Nembutal, Seconal, and Sombulex. In small to moderate doses, barbiturates produce relaxation and sleepiness, but in higher doses, symptoms may include sluggishness, difficulty thinking, slowness of speech, drowsiness, faulty judgment, and eventually coma or even death. [8] Barbiturates have been used as recreational drugs because in low doses they produce effects similar to alcohol (e.g., dizziness, poor concentration, impaired judgment). However, individuals who use barbiturates may also exhibit fatigue and irritability, and barbiturate use can lead to drug dependence, tolerance, and respiratory depression (which can be fatal). Barbiturates have now been largely replaced by a different class of depressants, the benzodiazepines, which have less potential for lethal overdose. Today barbiturates are more commonly used in general anesthesia and in treatments for epilepsy.
The barbiturates thiopental (sodium pentothal) and amobarbital (sodium amytal) are often mistakenly called truth serums. Proponents of this idea claimed that individuals who take such a drug before an interview or interrogation are more likely to tell the truth while under its influence. In reality, these drugs cannot and do not force anyone to tell the truth. They may appear to have this effect because they decrease the inhibitions of the person being questioned, but the same could be said of other barbiturates and other types of depressant drugs.
Related to the barbiturates, benzodiazepines are a family of depressants used to treat anxiety, insomnia, seizures, and muscle spasms. In low doses, they produce mild sedation and relieve anxiety; in high doses, they induce sleep. Benzodiazepines are also used to treat epilepsy and alcohol withdrawal. In the United States, they are among the most widely prescribed medications that affect the CNS. Brand names include Centrax, Dalmane, Doral, Halcion, Librium, ProSom, Restoril, Xanax, and Valium.
It is possible to overdose on benzodiazepines, but they are much less toxic than their predecessors, the barbiturates, and death rarely occurs when they are taken alone. When taken in combination with other drugs, however, the potential for overdose and complications increases. Side effects of benzodiazepines, such as impaired coordination, drowsiness, and dizziness, add to their danger. This is particularly true for the elderly, in whom these side effects can result in falls and other injuries; they also make it more likely that a user will be involved in a (potentially fatal) car accident.
“Toxic inhalants” is a broad term for recreational drugs that are inhaled to create a change in consciousness. These drugs are easily accessible as the vapors of glue, gasoline, propane, hair spray, and spray paint. Inhalants are some of the most dangerous recreational drugs, with a safety ratio below 10, and their continued use may lead to permanent brain damage. Users typically inhale the vapors or the aerosol using plastic bags or open containers or by breathing through solvent-soaked material, a practice sometimes called “huffing.” The effects of toxic inhalants vary widely, from alcohol-like intoxication to euphoria or hallucinations, depending on the substance used and the dose taken.
Related drugs are the nitrites (amyl and butyl nitrite; “poppers,” “rush,” “locker room”) and anesthetics such as nitrous oxide (laughing gas) and ether.
Opioids are chemicals that increase activity in opioid receptor neurons in the brain and in the digestive system, producing euphoria, analgesia, slower breathing, and constipation. Their chemical makeup is similar to the endorphins, the neurotransmitters that serve as the body’s “natural pain reducers.” Natural opioids are derived from the opium poppy, which is widespread in Eurasia, but they can also be created synthetically.
The opioids activate the sympathetic division of the ANS, causing blood pressure and heart rate to increase, often to dangerous levels that can lead to heart attack or stroke. At the same time the drugs also influence the parasympathetic division, leading to constipation and other negative side effects. Symptoms of opioid withdrawal include diarrhea, insomnia, restlessness, irritability, and vomiting, all accompanied by a strong craving for the drug.
The powerful psychological dependence of the opioids and the severe effects of withdrawal make it very difficult for opioid abusers to quit using. In addition, because many users take these drugs intravenously and share contaminated needles, they run a very high risk of contracting infectious diseases. Opioid addicts suffer high rates of infections such as HIV, pericarditis (an infection of the membrane around the heart), and hepatitis B, any of which can be fatal.
Opium is the dried juice of the unripe seed capsule of the opium poppy. It may be the oldest drug on record, known to the Sumerians before 4000 BC. It was originally used in religious rituals and has served as an analgesic in early as well as modern cultures. Opium became notorious during the opium trade of the 18th and 19th centuries, when British merchants shipped large quantities of Indian opium into China. By the beginning of the 20th century, it had been prohibited in many countries.
Morphine and heroin are stronger, more addictive drugs derived from opium, while codeine is a weaker and less addictive member of the opiate family. Opioids are often abused for the effects they produce, which include a “body high” as well as intense feelings of euphoria and relaxation, but they are dangerous drugs, in large part because they are so addictive. When morphine was first refined from opium in the early 19th century, it was touted as a cure for opium addiction, but it did not take long to discover that it was actually more addictive than raw opium. When heroin was produced a few decades later, it too was initially thought to be a more potent, less addictive painkiller, but it was soon found to be much more addictive than morphine. Today, morphine is still used in hospitals to reduce severe pain, but it is given sparingly because of its high potential for addiction and psychological dependence. Other synthetic opioids, such as oxycodone, are more typically prescribed for severe pain because they have fewer side effects. Heroin is about twice as addictive as morphine and creates severe tolerance, moderate physical dependence, and severe psychological dependence; it is considered one of the most dangerous street drugs. The danger of heroin is demonstrated by the fact that it has the lowest safety ratio (6) of all the drugs (see the table “Popular Recreational Drugs and Their Safety Ratios”).
The drugs that produce the most extreme alteration of consciousness are the hallucinogens, psychoactive drugs that alter sensation and perception and that may create hallucinations. The hallucinogens are frequently known as “psychedelics.”
Drugs in this class include lysergic acid diethylamide (LSD, or “acid”), mescaline, and phencyclidine (PCP), as well as a number of natural plants including cannabis (marijuana), peyote, and psilocybin (shrooms). The chemical compositions of the hallucinogens are similar to those of the neurotransmitters serotonin and epinephrine, and they act primarily as agonists by mimicking the action of serotonin at the synapses. The hallucinogens may produce striking changes in perception through one or more of the senses. The precise effects a user experiences are a function not only of the drug itself but also of the user’s preexisting mental state and expectations of the drug experience. In other words, users tend to get out of the experience what they bring to it. The hallucinations that may be experienced when taking these drugs are strikingly different from everyday experience and frequently are more similar to dreams than to everyday consciousness.
Although the hallucinogens are powerful drugs that produce striking “mind-altering” effects, they do not produce physical dependence and are generally not considered addictive (although tolerance to drugs such as LSD can build quickly with repeated use). While they pose little direct physical threat to the body, their use is not advisable in any situation in which the user needs to be alert and attentive, exercise focused awareness or good judgment, or demonstrate normal mental functioning, such as driving a car, studying, or operating machinery.
Cannabis (marijuana) is the most widely used hallucinogen and is used for both recreational and medicinal purposes. Until it was effectively banned in the United States under the Marihuana Tax Act of 1937, it was widely used for medical purposes. In recent years, cannabis has again been prescribed for the treatment of pain and nausea, particularly in cancer sufferers, as well as for a wide variety of other physical and psychological disorders. [1] Although medical marijuana is now legal in several U.S. states, it remains banned under federal law, putting those states in conflict with the federal government. Marijuana also acts as a stimulant, producing giggling, laughing, and mild intoxication. It enhances perception of sights, sounds, and smells and may produce a sensation of time slowing down. It is much less likely to lead to antisocial acts than the other popular intoxicant, alcohol, and it is also the one psychedelic drug whose use has not declined in recent years. [2]
LSD is a psychedelic drug that is known for its psychological side effects, which include altered thinking processes, altered sensory experiences, and an altered sense of time. However, the effects of LSD can vary widely from person to person depending on factors such as dosage, previous experiences, current state of mind, and the environment. Although it was initially used as a therapeutic agent, LSD became popular as a recreational drug in 1960s youth counterculture, which later resulted in its prohibition. LSD is not addictive and has a very low toxicity relative to dosage. However, adverse psychiatric reactions such as anxiety or delusions have been known to occur. LSD is typically taken orally, usually on an absorbent substrate such as a sugar cube or paper.
Psilocybin is a naturally occurring compound found in a number of mushroom species. Effects can include euphoria, visual hallucinations, changes in perception, and a distorted sense of time. As with other hallucinogens, the effects of psilocybin are highly variable and depend on the mindset of the individual and the environment that he or she is in. Psilocybin has low toxicity and does not cause dependence, but it does have negative side effects such as nausea, disorientation, confusion, and panic attacks. Negative incidents involving psilocybin typically involve the simultaneous use of other drugs, including alcohol.
People have used, and often abused, psychoactive drugs for thousands of years. Perhaps this should not be surprising, because many people find using drugs fun and enjoyable. Even when we know the potential costs of using drugs, we may use them anyway, because the pleasures of the drug occur right now, whereas the potential costs are abstract and lie in the future.
Because drug and alcohol abuse has such important negative consequences for so many people, researchers have tried to understand what leads people to use drugs. One team of researchers [1] tested the hypothesis that cigarette smoking is related to a desire to take risks, comparing risk-taking behavior in adolescents who reported having tried a cigarette at least once with that of adolescents who reported that they had never tried smoking.
Participants in the research were 125 5th- through 12th-graders attending after-school programs throughout inner-city neighborhoods in the Washington, DC, metropolitan area. Eighty percent of the adolescents indicated that they had never tried even a puff of a cigarette, and 20% indicated that they had had at least one puff of a cigarette.
The participants were tested in a laboratory where they completed the Balloon Analogue Risk Task (BART), a measure of risk taking. [2] The BART is a computer task in which the participant pumps up a series of simulated balloons by pressing on a computer key. With each pump the balloon appears bigger on the screen, and more money accumulates in a temporary “bank account.” However, when a balloon is pumped up too far, the computer generates a popping sound, the balloon disappears from the screen, and all the money in the temporary bank is lost. At any point during each balloon trial, the participant can stop pumping up the balloon, click on a button, transfer all money from the temporary bank to the permanent bank, and begin with a new balloon.
Because the participants do not have precise information about the probability of each balloon exploding, and because each balloon is programmed to explode after a different number of pumps, the participants must decide how far to pump up each balloon. The number of pumps participants take is used as a measure of their tolerance for risk: risk-averse participants tend to make only a few pumps and then collect the money, whereas more risk-tolerant participants pump each balloon many more times.
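To make the logic of the task concrete, here is a minimal sketch of a BART-style session in Python. It assumes a simplified design in which each balloon’s hidden explosion point is drawn uniformly at random; the parameter values (maximum pumps, cents per pump, number of balloons) are illustrative assumptions, not the settings used in the original study.

```python
import random

def run_balloon(pumps_attempted, max_pumps=128, cents_per_pump=5):
    """One BART-style balloon: returns cents banked, or 0 if it pops."""
    explosion_point = random.randint(1, max_pumps)  # hidden from the player
    if pumps_attempted >= explosion_point:
        return 0  # balloon popped; the temporary bank is lost
    return pumps_attempted * cents_per_pump

def run_session(pumps_per_balloon, n_balloons=30):
    """Total earnings for a player who always pumps a fixed number of times."""
    return sum(run_balloon(pumps_per_balloon) for _ in range(n_balloons))

random.seed(1)
# A cautious strategy banks small but reliable amounts; a risk-tolerant
# strategy earns more per surviving balloon but loses many balloons entirely.
print("cautious (10 pumps):", run_session(10))
print("risky (60 pumps):   ", run_session(60))
```

The average number of pumps per balloon plays the role of the risk-taking score: the more a player is willing to keep pumping despite the unknown explosion point, the higher his or her measured tolerance for risk.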
Supporting the hypothesis that risk tolerance is related to smoking, Lejuez et al. [1] found that the tendency to take risks was indeed correlated with cigarette use: The participants who indicated that they had puffed on a cigarette had significantly higher risk-taking scores on the BART than did those who had never tried smoking.
Individual ambitions, expectations, and values also influence drug use. Vaughan, Corbin, and Fromme [3] found that college students who expressed positive academic values and strong ambitions consumed less alcohol and had fewer alcohol-related problems, and cigarette smoking has declined more among youth from wealthier and more educated homes than among those from lower socioeconomic backgrounds. [4]
Drug use is in part the result of socialization. Children try drugs when their friends convince them to do it, and these decisions are based on social norms about the risks and benefits of various drugs. Between 1991 and 1997, the percentage of 12th-graders who perceived “great harm in regular marijuana use” declined from 79% to 58%, while annual marijuana use in this group rose from 24% to 39%. [4] And students binge drink in part when they see that other people around them are also binge drinking. [5]
All recreational drug use is associated with at least some risks. Young people have experimented with cigarettes, alcohol, and other dangerous drugs for many generations, and those who begin using drugs early are more likely to use more dangerous drugs later. [6]
The development and ongoing improvement of this CC-OLI Introduction to Psychology course are a collaborative effort involving many individuals whose expertise, time, and dedication to this course we wish to recognize.