Introducing Psychology
Brain, Person, Group

v5.1 Stephen M. Kosslyn and Robin S. Rosenberg

1.3 The Research Process: How We Find Things Out

You could probably speculate for hours about how Tiger Woods came to be such a superb golfer or why his personal life fell into such disarray and why his game is no longer great. How could you find out for sure? In this section we consider the scientific method, which specifies how to answer questions objectively and definitively.

Looking Ahead: Learning Objectives

  1. What is the scientific method, and how is it used to study the mind and behavior?

  2. What scientific research techniques are commonly used to study the mind and behavior?

  3. What are key characteristics of good scientific studies in psychology?

  4. How are ethical principles applied to scientific studies in psychology and in psychotherapy?

The Scientific Method

Psychology is a science because it relies on a specific method of inquiry. The scientific method is a way to gather facts that will lead to the formulation and validation (or refutation) of a theory. It involves systematically observing events, formulating a question, forming a hypothesis about the relation between variables in an attempt to answer the question, collecting new observations to test the hypothesis, using such evidence to formulate a theory, and, finally, testing the theory. Let’s take a closer look at the scientific method, one step at a time.

Step 1: Systematically Observing Events

All science ultimately begins with observations. Although these observations may begin as impressions or anecdotes, to be the first step in science they must move beyond that—they must lead to systematic observations. Scientists want to know the facts, as free as possible from interpretations of their implications. Facts are established by collecting data, which are careful, objective descriptions or numerical measurements of a phenomenon. Data are “objective” because researchers try to eliminate distortions based on their personal beliefs, emotions, or interpretations; while data are collected, their interpretation must be set aside and saved for later. In addition, if data are systematically collected (for example, by being sure to collect data from different parts of the country when comparing changes over time in national attitudes about free speech), the investigator or somebody else should be able to collect comparable data by repeating the data collection procedure; when a study is repeated so that the data can be compared to those collected originally, the study is called a replication.

Scientists often prefer quantitative data (numerical measurements), such as how many men versus women are diagnosed with depression, in part because such data are relatively easy to summarize, analyze, and report. However, many scientists—especially in the relatively early phases of an investigation—rely on systematic qualitative observations (descriptions), which may simply document that a certain event occurs or may describe specific characteristics of the event, such as which toys children play with while at their preschool.

Objectively and systematically observing events is just the first step in the scientific method because events typically can be interpreted in more than one way. For example, a smile could mean happiness, agreement, politeness, or amusement. To sort out the proper interpretation of data, researchers need to follow up observations with the additional steps described in the sections to follow.

Step 2: Formulating a Question

Most of science is about answering questions that are raised by observations. For example, a scientist might note that even as a child, Tiger Woods’s abilities made him stand out from his peers. Why? Was there something different about his brain or the way he was being raised? Questions can come from many sources: For example, in addition to direct observation (which occurs when the scientist notices something that piques his or her interest), they can arise from a “secondhand” observation, reported by someone else (and often published in a journal article), or from reflection (which may even occur when the scientist is puzzling about why a certain event does not occur—such as toddlers learning to read at exactly the same time that they learn to talk). 

Step 3: Forming a Hypothesis

To try to answer a question, scientists make an educated guess about the answer, called a hypothesis. A hypothesis is a tentative idea that might explain a set of observations. Hypotheses typically specify a relationship between two or more variables (such as between the amount of time a child sleeps and his or her activity level during the day); hypotheses often propose that one variable (or set of variables) causes another. By the term variable, researchers mean an aspect of a situation that can vary; more precisely, a variable is a characteristic of a substance, quantity, or entity that is measurable. Examples of variables that could be of interest to psychologists include the amount of time someone needs to choose a particular reward, whether or not a person was raised in a single-family household, or what part of the country a person lives in.

For example, say you decide to take golf lessons. On the golf course, you notice a particularly good player and ask her for tips. She confesses that when she’s off the course, she often practices her swings mentally, visualizing herself whacking the ball straight down the fairway or out of a sand trap; she assures you that the more time she has spent doing such mental practice, the better her game has become. You can translate this idea into a hypothesis, speculating that there’s a connection between two variables—the time people spend visualizing swings and their golf score. This idea appeals to you, in part because it means that you can practice at night or whenever you are bored.

But at this point all you have is a hypothesis. Before you go to the trouble of imagining yourself swinging a club, over and over and over, you want to test the hypothesis to find out whether it’s correct.

Step 4: Testing the Hypothesis

To test the hypothesis, you need to collect new data. But before you can do that, you must make the key concepts of the hypothesis specific enough to test. An operational definition defines a concept by indicating how it is measured or manipulated. For example, you could define “improvement in playing golf” in terms of the number of putts sunk or the distance the ball is driven.

In fact, much research has been conducted on mental practice, and many studies have documented that it actually does improve performance, as it is measured—operationally defined—in various ways. In a typical study, people are divided into two groups, and the performance of both groups is assessed. One group then uses mental practice for a specified period, while the other is asked to take an equal amount of time to visualize familiar golf courses (but does not imagine practicing). Then the golf performance of both groups is assessed again. Usually, the people who engage in mental practice show greater improvement (Driskell et al., 1994; White & Hardy, 1995; Yagueez et al., 1998).
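To make the logic of this two-group design concrete, here is a minimal Python sketch; the group sizes, average gains, and noise level are hypothetical numbers chosen for illustration, not data from the studies cited above.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_improvements(n, mean_gain):
    """Return n simulated improvement scores (second assessment minus first)."""
    # Each player's improvement is the group's average gain plus random noise.
    return [random.gauss(mean_gain, 2.0) for _ in range(n)]

# Hypothetical effect sizes, invented for this example.
mental_practice_group = simulate_improvements(30, mean_gain=2.5)
control_group = simulate_improvements(30, mean_gain=1.0)

def mean(scores):
    return sum(scores) / len(scores)

print(f"Mean improvement, mental practice: {mean(mental_practice_group):.2f}")
print(f"Mean improvement, control:         {mean(control_group):.2f}")
```

Because the two simulated groups differ only in their average gain, any consistent difference between the printed means reflects the “mental practice” manipulation—which is exactly the comparison such a study is designed to make.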

To test hypotheses, scientists make two types of observations: (1) those that directly address the object of study, such as how many times in an hour a mother speaks to her infant or how far a golf ball traveled after being hit, and (2) those that indirectly address the object of study, which might be the nature of thoughts, motivations, or emotions. Many sorts of behavior can be used to draw inferences about mental events. For example, when people smile without really meaning it (as they often do when posing for photographs), the muscles they use are not the same ones they use when they produce a sincere smile (Ekman, 1985). By studying what’s observable (the particular muscles being used), researchers can learn about the unobservable (the mental state of the smiler).

Step 5: Formulating a Theory

Now we consider the aspect of the scientific method that uses “such evidence to formulate a theory.” A theory consists of concepts or principles that explain a set of research findings. Whereas a hypothesis focuses on possible relationships among variables, a theory focuses on the underlying reasons why certain relationships may exist in data. In our example, the idea that mental practice leads to better performance is a hypothesis, not a theory. A theory might explain that mental practice works because the brain structures that are used to perform an action (playing golf) are also engaged when you mentally practice that action; thus, after practicing it mentally, the relevant brain mechanisms operate more efficiently when you later perform the actual behavior.

Step 6: Testing the Theory

Finally, what do we mean by “testing the theory”? Researchers evaluate a theory by testing its predictions. It’s not enough that a theory can, after the fact, explain previous observations; to be worth its salt, it has to be able to predict new ones. As illustrated in Figure 1.1, theories produce predictions, which are new hypotheses that should be confirmed if the theory is correct. For example, the theory of mental practice predicts that the parts of the brain used to produce a behavior—in our example, swinging a golf club—are activated by merely imagining the behavior. This prediction has been tested by using brain-scanning techniques to observe what happens in the brain when an individual imagines making certain movements. And, in fact, parts of the brain that are used to control actual movements are activated when the movements are only imagined (Jeannerod, 1994, 1995; Kosslyn et al., 1998; Parsons, 1987, 1994; Parsons & Fox, 1998).

Each time a theory makes a correct prediction, the theory is supported, and each time it fails to make a correct prediction, the theory is weakened. If enough of its predictions fail, the theory must be rejected and the data explained in some other way. The theorist is no doubt disappointed when his or her theory is shot down, but this is not a bad thing for science. In fact, a good theory is falsifiable; that is, it makes predictions it cannot “squirm out of.” A falsifiable theory can be rejected if the predictions are not confirmed.

Putting the Steps Together

Now that you understand all the steps of the scientific method, you could use it to investigate an enormous range of questions. For instance, say you’ve heard that putting a crystal under your bed will improve your athletic ability. Should you believe this? The scientific method is just what you need to decide whether to take this claim seriously! Let’s walk through the steps of the scientific method, applied to the role that crystals under people’s beds might play in improving their athletic performance:

  1. You start with a systematic observation. The observation might be secondhand: Let’s say your friends report that they’ve carefully noted that they scored higher when a crystal was under their beds the night before they play.

  2. You formulate a question to guide further investigation: Can crystals under a person’s bed while he or she sleeps really improve the individual’s athletic performance the next day?

  3. You formulate a hypothesis about the relation between variables: Having a crystal under someone’s bed (that’s one variable: the crystal is there versus not there) will lead him or her to perform better at his or her usual sport on the following day (that’s another variable, the measure of performance).

  4. You test the hypothesis: Before some of your friend’s athletic games, you sneak a crystal under her bed, making sure that your friend never knows when the crystal is there and when it isn’t (we’ll discuss why this is important shortly); then you observe whether she plays better on days after the crystal was present.

  5. If the hypothesis is supported (your friend does play better on the days after the crystal is under her bed), you need a theory to explain the relationship between the variables (how the crystal affects performance). For example, you might propose that the crystal focuses the magnetic field of the earth, which helps the blood circulation of people who are sleeping above it (blood has iron in it, which is affected  by magnetic fields). This theory makes the prediction that putting the crystal in a magnetically shielded box while under the bed should disrupt its effects on performance.

  6. Finally, you test the theory: You might go ahead and put the crystal in a magnetically shielded box and see whether your friend still plays better on days after the crystal has been under the bed while she sleeps.

Figure 1.1 The Scientific Method

Research begins by observing relevant events and then formulating a question. These events lead to a hypothesis about a possible answer to the question, which is then tested. When enough is known about the relations among the relevant variables, a theory can be formulated. The theory in turn produces predictions, which are new hypotheses and are in turn tested. If these theory-based hypotheses are confirmed, the theory is supported; if they fail to bear fruit, then the theory must be altered or simply rejected, and the whole process repeated.

The steps of the scientific method:

  1. Systematically Observe Events – Observe that people who claim to use mental practice seem to play golf well.

  2. Formulate Question – Can mental practice improve actual performance?

  3. Formulate Hypothesis – Mental practice can help people to perform better.

  4. Test Hypothesis – Compare the golf performance of one group before and after mental practice with that of another group that does not engage in mental practice.

  5. Formulate Theory – Brain areas that are modified by actual practice are also modified by mental practice.

  6. Test Theory – Identify brain areas activated during actual practice; measure the activation of those areas while people engage in mental practice; relate the amount of activation during mental practice to the level of improved performance.

The Psychologist’s Toolbox: Techniques of Scientific Research

Although all sound psychological investigations rely on the scientific method, researchers working in the different areas of psychology often pose different questions and try to answer them using different methods. Psychologists use a variety of research tools, each with its own advantages and disadvantages.

Would putting a crystal under Tiger Woods's bed make him play better?

Tiger Woods lines up a shot.
Try your own experiment with mental practice: Pick an athletic, musical, or artistic activity you enjoy and that you do regularly. For one week, notice or observe how well you perform that activity. During the second week, mentally practice that activity ten minutes each day; during the following week, notice whether your performance has changed. This informal exercise is not science because you aren’t systematically collecting data. If you want to do this exercise more formally and use the scientific method, use the Try it activity at the end of the chapter.

Descriptive Research: Tell It Like It Is

An individual researcher need not go through all of the steps of the scientific method in order to do science. In fact, he or she can focus solely on the first step, observing events systematically; some research is devoted simply to describing “things as they are.” Although—as we’ve just seen—there is more to the scientific method than observation, it’s no accident that “observing events” is a key part of the scientific method: Theorizing without facts is a little like cooking without ingredients. Researchers carry out such observations in several ways: naturalistic observations, case studies, and surveys.

Naturalistic Observation

As noted earlier, data are collected via objective and systematic observation that can be repeated by others. Researchers use naturalistic observation to collect such data from real-world settings by observing events as they naturally occur. For example, researchers have carefully observed how caregivers interact with young children and noted that the caregivers changed their language and speech patterns when talking to the young children. Specifically, when caregivers interacted with young children (compared to older children or adults), they used short sentences and spoke in a high-pitched voice. This modified way of speaking is known as child-directed speech (Liu, Tsao & Kuhl, 2009; Snow, 1991, 1999).

Naturalistic observations can lead psychologists to notice a phenomenon and are often the first step of the scientific method. Throughout this book, we invite you to use the tools of psychology (such as naturalistic observation) and the fruits of psychological research to see yourself and the world around you in a new light. We encourage you to observe, formulate questions and hypotheses, and collect data—about yourself and others. In order to help make the link between what we describe in this text and your lives, we include suggestions or questions in the margin, like the one here, which asks you to observe or introspect about how some psychological phenomena arise in your life. That is, we encourage you to Think Like a Psychologist.

Some scientists observe animals in the wilds of Africa; others observe sea life in the depths of the ocean; and others observe humans in their natural habitats.

A woman uses her tablet device at a mall.

Although naturalistic observation is an essential part of science, it is only the first step of the entire method. For instance, documenting the existence of child-directed speech does not explain why caregivers use it; perhaps they think it will help children understand them, or they use it to entertain the children, or they are simply imitating other caregivers they have heard before. It is difficult, however, to use naturalistic observation to test specific hypotheses about a finding. The problem is that to test your hypothesis, you must seek out a specific situation where the relevant variables happen to be appropriate. For example, a psychologist testing a theory that playground bullies pick on smaller children would have to go to playgrounds and wait to observe bullies and the children who are bullied.

Case Studies

A case study in psychology is a scientific study that focuses on a single participant, examining his or her psychological characteristics (at any or all of the levels of analysis) in detail. For example, a researcher might study a single professional athlete, such as Tiger Woods, looking closely at his brain function, personal beliefs, and social circumstances. The goal of such a study typically is not to understand that one person, but rather to discover underlying principles that can be applied to all similar people. Case studies are used in many areas of psychology and often focus on a single level of analysis. For example:

  1. Neuropsychologists may study an individual brain-damaged patient in depth to discover which abilities are “knocked out” following certain types of damage.

  2. Psychologists who study mental illness might examine an unusual case of bulimia in a 10-year-old to discover what special factors might contribute to an eating disorder in someone so young.

  3. Cognitive psychologists may investigate how an unusually gifted memory expert is able to retain huge amounts of information almost perfectly.

  4. Personality psychologists might study how a professional athlete remains motivated enough to practice for years and years with no guarantee that he or she will ever succeed.

Brain damage following an accident can cause someone to fail to name fruits and vegetables while still being able to name other objects (Hart et al., 1985). A case study would examine such a person in detail, documenting precisely what sorts of things could and could not be named.

A man examines vegetables at a grocery store.

Like all other methods, case studies have their limitations. In particular, we must always be cautious about generalizing from a single case; that is, we must be careful not to assume that the findings in the case study necessarily extend to all other similar cases. Any particular person may be unusual for many reasons, and so the findings about that individual may or may not apply to similar people in general.

Surveys

A survey is a set of questions that people are asked about their beliefs, attitudes, preferences, or activities. A survey is a relatively inexpensive way to collect a lot of data (which consist of the responses to the questions) quickly, especially when it is conducted over the Internet.

However, surveys have a number of limitations. For instance, some of them require the respondents to introspect about their feelings and beliefs, and, as we discussed earlier in this chapter, people may not be capable of reporting all such information accurately. In particular, we humans do not have conscious access to much of what motivates us; sometimes we aren’t even aware of the beliefs that drive our behavior (for example, we may not be aware of some aspects of our attitudes about race, even though they affect how we respond to members of particular races; Phelps et al., 2000). There are no hard-and-fast rules about precisely which sorts of questions people can answer accurately in a survey, but some types of information clearly can be gathered through a survey, whereas other types clearly cannot. For example, you could use a survey to ask people how often they play golf but not to ask them how their brains work or to report subtle behaviors, such as body language, which they may display unconsciously.

Moreover, even if they are capable of answering a survey question, people may not always respond honestly; this is especially a problem when the survey touches on sensitive personal issues, such as sexual preferences. Furthermore, not everyone who is asked to respond does, in fact, take the survey. Because a particular factor (such as one’s income or age) may incline some people but not others to respond, it is difficult to know whether the responses obtained are actually representative of the whole group that the survey was designed to assess.

Finally, many pitfalls must be avoided when designing surveys. Survey questions have to be carefully worded so that they don’t lead the respondents to answer in a certain way and yet still get at the phenomena of interest. For instance, the question “Don’t you think that it’s bad to lie?” is phrased in a way that will lead most people to answer “yes,” simply because it is clear that yes is probably the “right” answer to the question. Similarly, people tend to answer differently with different types of response scales, such as those that have relatively many or few possible choices (Schwarz, 1999). For instance, the different choices below would probably lead some people to choose different answers, depending on which of the four response scales they were given. How would you answer when given the choices in each row below the question?

Don’t you think that it’s bad to lie?
Yes / No
Yes / Not Sure / No
Always Yes / Mostly Yes / Mostly No / Always No
Always Yes / Mostly Yes / Not Sure / Mostly No / Always No

Correlational Research: Do Birds of a Feather Flock Together?

Researchers can use another method to study the relationships among variables—a method that relies on the idea of correlation. A correlation is a relationship in which two variables are measured for each person, group, or entity, and the variations in measurements of one variable are compared to the variations in measurements of the other variable. For example, height is correlated with weight: Taller people tend to be heavier than shorter people. A correlation coefficient is a number between -1 and +1 that indicates how closely related two measured variables are; the higher the coefficient (in either the positive or negative direction), the better the value of one measurement can predict the value of the other. The correlation coefficient is often simply referred to as a correlation because the coefficient is the numerical summary of the relationship between two variables.
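For readers who want to see the arithmetic behind a correlation coefficient, the sketch below computes Pearson’s r directly from its definition; the height and weight values are invented for illustration.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Compute the Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical height (inches) and weight (pounds) measurements for seven people.
heights = [61, 64, 66, 68, 70, 72, 75]
weights = [115, 130, 142, 155, 163, 180, 200]
print(f"r = {pearson_r(heights, weights):.2f}")  # near +1: a strong positive correlation
```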

Figure 1.2 Strength of Correlation

Three graphs showing different types of correlation: positive, negative, and zero

Figure 1.2 illustrates three possible correlations between variables. Graph 1 shows a positive correlation, a relationship in which increases in one variable (height) are accompanied by increases in another (weight); a positive correlation is indicated by a correlation value that falls between 0 and 1.0. Graph 2 shows a negative correlation, a relationship in which increases in one variable (age) are accompanied by decreases in another (health); a negative correlation is indicated by a correlation value that is between -1.0 and 0. The closer the correlation is to 1.0 or to -1.0, the stronger the relationship; visually, the more tightly the data points cluster around a slanted line, the higher the correlation. Finally, Graph 3 shows a zero correlation, which indicates no relationship between the two variables (height and aggressiveness); they do not vary together.

Researchers have found that the lower the level of a chemical called monoamine oxidase (MAO) in the blood, the more the person will tend to seek out thrilling activities (such as skydiving and bungee jumping; Zuckerman, 1995). Thus, there is a negative correlation between the two measures: As MAO levels go down, thrill seeking goes up. But this correlation doesn't indicate that low MAO levels cause thrill-seeking behavior.

A thrill-seeker engaging in bungee jumping.

Correlations always compare two sets of measurements at a time. Sometimes several pairs of variables are compared, but each pair of measurements requires a separate correlation. Thus, correlational research has two steps:

  1. Obtaining measurements of two variables (such as height and weight or age and health status);

  2. Examining the way that one set of measurements goes up or down in tandem with another set of measurements—in our example, it would be the extent to which height and weight go up or down together.

The main advantage of correlational research is that it allows researchers to compare variables that cannot be manipulated directly (which is what happens in experiments, which we discuss in the following section). The main disadvantage is that correlations indicate only that the values of two variables tend to vary together, not that values of one cause the values of the other. For example, evidence suggests a small correlation between poor eyesight and intelligence (Belkin & Rosner, 1987; Miller, 1992; Williams et al., 1988), but poor eyesight doesn’t cause someone to be smarter! Remember: Correlation does not imply causation.

Experimental Research: Manipulating and Measuring

Much psychological research relies on conducting experiments, controlled situations in which the investigator observes the effects of altering variables. Experiments provide the strongest way to test a hypothesis because they can provide evidence that changes in one variable cause changes in another.

Independent and Dependent Variables

The variables in a situation—such as “time spent mentally practicing” and “golf score”—are the aspects of the situation that can vary. In an experiment, the investigator deliberately manipulates the value of one variable, which is called the independent variable (so called because it can be changed independently of anything else), and measures another, called the dependent variable (so called because the investigator is looking to see whether the values of this variable depend on those of the independent variable). Going back to our mental practice of golf example, the amount of time participants in the experiment spend mentally practicing is the independent variable (it is deliberately varied), and their golf score is the dependent variable (it is measured, and it is hypothesized to depend on the amount of time spent visualizing), as illustrated in Figure 1.3. (In addition to manipulating the independent variable, researchers can select particular values of it, such as different points of time.)

By examining the link between independent and dependent variables, a researcher hopes to discover exactly which factor is causing an effect: the difference in the value of the dependent variable that arises from changes in the independent variable. In our mental practice example, the effect is the change in participants’ golf scores from the first assessment of their performance (before mental practice) to the second assessment (after mental practice).

Figure 1.3 Relationship Between Independent and Dependent Variables

The independent variable is what is manipulated—in this example, the amount of time participants spend mentally practicing. The dependent variable, what is measured, is in this case their golf score.

A series of four images that show the relationship between the independent variable, the amount of time a person practices mentally, and the dependent variable, their subsequent performance on the golf course. In the top images, the golfer imagines herself practicing and achieves a good result. In the bottom images, the golfer does not mentally practice and produces a poor swing.

Demonstrating that changes in an independent variable are accompanied by changes in the dependent variable usually is not enough to confirm a hypothesis: In most cases, it’s relatively easy to offer more than one interpretation of a relation between an independent and a dependent variable. Hence, additional research is necessary to understand what this relation means. In our example, say we had tested only one group, the one that used mental practice. The fact that these players improved would not necessarily show that mental practice improves one’s actual golf performance. Perhaps simply playing during the first assessment (before mental practice) is enough to cause an improvement at the second assessment. Or perhaps people are simply more relaxed at the time of the second assessment, and that is why they perform better. Researchers can narrow down the explanations by testing additional groups or conducting separate experiments; only by eliminating other possibilities can researchers come to know why varying the independent variable produces an effect on the dependent variable.

One particularly vexing problem in experiments is the possibility of a confound, or confounding variable, which is any other aspect of the situation (such as the anxiety that accompanies taking a test) that varies along with the independent variable (or variables) of interest and could be the actual basis for what is measured. Because more than one variable is in fact changing, you cannot tell which one is actually responsible for any observed effect. When a confound is present, varying one independent variable has the (unintended) consequence of varying one or more other independent variables—and they might in turn be responsible for changing the dependent variable. For example, if the participants were more relaxed at the time of the second assessment in the golf mental practice experiment, that would be a confound with the effects of mental practice per se—and you can’t know whether both or only one of these variables is responsible for improved performance. Confounds thus lead to results that are ambiguous, that do not have a clear-cut interpretation (see Figure 1.4). As we discuss in the following section, researchers have developed several ways to eliminate confounds.

Figure 1.4 Confounding Variables in Everyday Life

Suppose that Professor Moret has made a startling observation: Students who slump when they sit are more intelligent than those who sit up straight. He plans to alert the college admissions office to this finding and urge them to require applicants to supply full-body photos of themselves while seated. But is it intelligence that accompanies poor posture? Perhaps a strong motivation to do well leads to more intense poring over books, with hunched posture. Or is it that shy people tend both to hunch and to be more comfortable spending time alone studying? Can you think of other possible confounding variables that should give Professor Moret pause?

A woman slumps at her computer

Experimental and Control Groups and Conditions

One way to remove the possible influence of confounds requires comparing two groups—the experimental and control groups: An experimental group receives the complete procedure that defines the experiment. A control group is treated identically to the experimental group except that the independent variable is not manipulated but rather is held constant; a good control group holds constant—or controls—all of the variables in the experimental group (and hence does not receive the manipulations of the independent variable that constitute the treatment). For example, in the mental practice experiment, the experimental group does mental practice, and the control group does not—but is otherwise treated the same as the experimental group in every respect, taking an equal amount of time to visualize familiar golf courses as the experimental group does in imagining practicing.

However, if the kinds of people assigned to the experimental and control groups differ markedly, say, in age, gender, relevant history, or ability to learn (or any combination of such variables), those factors would make it difficult to interpret the experiment’s results; any difference in the groups’ golf scores could have been caused by any of those characteristics. To eliminate these sorts of confounds, researchers rely on random assignment: Participants are assigned randomly, that is, by chance, to the experimental and the control groups, so that the members of the two groups are likely to be comparable in all relevant ways. Without random assignment, an experimenter might unconsciously assign better (or worse) golfers to the mental practice group. Indeed, in order to ensure that participants are randomly assigned to their group, researchers may assign each participant a number and then use a computer-generated list of random numbers to determine the group for each participant.
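A minimal sketch of random assignment, assuming hypothetical participant IDs: a shuffle—chance, not the experimenter—splits people into the two groups.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs

random.shuffle(participants)              # chance, not the experimenter, decides
half = len(participants) // 2
experimental_group = participants[:half]  # will engage in mental practice
control_group = participants[half:]       # will visualize golf courses instead

print("Experimental group:", experimental_group)
print("Control group:     ", control_group)
```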

Rather than using two different groups to disentangle confounds, researchers might have a single group but one that has two different conditions: an experimental condition and a control condition. The experimental condition is analogous to an experimental group: it is the part of a study in which participants receive the complete procedure that defines the experiment. The control condition is analogous to a control group: it is the part of the study in which the same participants receive the same procedure as in the experimental condition except that the independent variable of interest is not manipulated. Thus, the same people are tested twice, once in each condition. For example, in an experimental condition you could put a crystal under a group of golfers’ beds before they slept, and in the control condition you would do everything else the same but not put a crystal there. You would test all the golfers in both conditions. To avoid confounding the order of testing with the condition (experimental versus control), you would test half the participants in the control condition before testing them in the experimental condition and would test the other half of the participants in the experimental condition before testing them in the control condition.
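Counterbalancing the order of conditions can be sketched the same way; here, hypothetical golfer IDs are randomly split so that half complete the control condition first and half complete the experimental condition first.

```python
import random

golfers = [f"G{i:02d}" for i in range(1, 13)]  # 12 hypothetical golfers
random.shuffle(golfers)

# Half the golfers complete the control condition first; the other half complete
# the experimental (crystal) condition first, so testing order is not
# confounded with condition.
half = len(golfers) // 2
schedule = {g: ("control", "experimental") for g in golfers[:half]}
schedule.update({g: ("experimental", "control") for g in golfers[half:]})

for golfer in sorted(schedule):
    first, second = schedule[golfer]
    print(f"{golfer}: {first} condition first, then {second}")
```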

Quasi-Experimental Design

Like an experimental design, a quasi-experimental design includes independent and dependent variables and assesses the effects of different values of the independent variable on the values of the dependent variable. However, unlike true experiments, in quasi-experiments, participants are not randomly assigned to conditions and the conditions typically are selected from naturally occurring situations—that is, they are not created by the investigator’s manipulating the independent variable. For example, if you wanted to study the effects of having been in an earthquake on how well people sleep at night, you would need to select and compare participants who had or had not been in an earthquake. You could not randomly assign participants to the different groups.

Quasi-experimental designs are used because it is not always possible or desirable to assign people to different groups randomly. For instance, let’s say that you want to discover whether the effects of mental practice are different for people of different ages, and so you decide to test four groups of people: teenagers, college students, middle-aged people, and elderly people. Obviously, you cannot assign people to the different age groups randomly. Rather, you select groups from what nature has provided. When composing the groups, you should control for as many variables—such as people’s health and education levels—as you can in order to make the groups as similar as possible. By “control for,” we mean that you should ensure that values of variables (such as education level or health) don’t differ systematically in the different groups. (In fact, in the example we just gave of dividing participants into four age groups, a variety of variables are not controlled for and therefore create confounds. For instance, the middle-aged and elderly participants will be more likely to have finished college than the teens and college students.) Similarly, if you want to track changes over time, it is not possible to assign people randomly to the groups because you are taking measurements only from people you have measured before.

Unfortunately, because groups can never be perfectly equated on all characteristics, you can never be certain exactly which of the differences among groups produce the observed results. And thus, you cannot draw conclusions from quasi-experiments with as much confidence as you can from genuine experiments.

Table 1.3 summarizes the key research methods used in psychology, along with their relative strengths and weaknesses. When thinking about the various methods, keep in mind that even though experiments are the most rigorous, they cannot always be performed—particularly if you are interested in studying large groups or if it is difficult (or unethical) to manipulate the independent variables. In addition, we must stress that different combinations of the methods are often used. For example, observational methods are used as part of correlational research, and observational, correlational, or experimental research can be conducted with single individuals (case studies).

Table 1.3 Summary of Research Methods in Psychology


Method

Key Characteristic(s)

Advantage(s)

Disadvantage(s)

Naturalistic observation

Observed events are carefully documented

Forms the foundation for additional research by documenting the existence or key characteristics of an event or situation

Cannot control for confounding variables or change the variables to discover critical factors that underlie a phenomenon

Case study

A single participant is analyzed in depth

Can provide in-depth understanding of the particular person’s abilities or deficits

Cannot assume that the findings generalize to all other similar cases

Survey

A number of participants answer specific questions

Is a relatively inexpensive way to collect a lot of data quickly

Limited by how the questions are stated and by what people can and are willing to report

Correlational research

Relations among different variables are documented

Allows variables that cannot be manipulated directly to be compared

Cannot infer causation

Experimental design

Participants are assigned randomly to groups, and the effects of manipulating one or more independent variables on a dependent variable are studied

Allows for the rigorous control of variables; is able to establish causal relations between independent and dependent variables

Not all phenomena can be studied in controlled laboratory experiments (in part, because not all characteristics can be manipulated)

Quasi-experimental design

Similar to an experiment but participants are not assigned to groups randomly and conditions are often selected, not created

Allows for the study of real-world phenomena that cannot be studied in experiments

Cannot control relevant aspects of the independent variables

Meta-Analysis

Meta-analysis is a statistical technique that allows researchers to combine results from different studies on the same topic in order to discover whether there is a relationship among variables (Cooper, 2010). A meta-analysis is particularly useful when the results from many studies are not entirely consistent, with some showing an effect and some not. Thus, the technique is not a way to collect new data by observing phenomena—it is a way to analyze data that have already been collected.
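As a rough illustration of how a meta-analysis combines results, the sketch below performs simple fixed-effect pooling, weighting each study by the inverse of its variance; the study names, effect sizes, and variances are invented for the example, and real meta-analyses involve additional steps (such as testing whether the studies are consistent with one another).

```python
# Hypothetical results: each study reports an effect size and its variance.
studies = [
    {"name": "Study A", "effect": 0.40, "variance": 0.04},
    {"name": "Study B", "effect": 0.10, "variance": 0.09},
    {"name": "Study C", "effect": 0.35, "variance": 0.02},
    {"name": "Study D", "effect": -0.05, "variance": 0.12},
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so larger, more precise studies count for more.
weights = [1 / s["variance"] for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled_effect:.3f} (standard error {pooled_se:.3f})")
```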

A meta-analysis can sometimes reveal results that are not evident in the individual studies that go into it. This is possible because studies almost always involve observing or testing a sample from a population; the sample is the group that is measured or observed, and the population is the entire set of relevant people or animals (perhaps defined in terms of age or gender). The crucial fact is that there is always variation in the population; just as people vary in height and weight, they also vary in their behavioral tendencies, cognitive abilities, emotional reactivity, and personality characteristics. Thus, samples drawn from a population will not be the same in every respect; and if samples drawn from different populations are relatively small, the luck of the draw could obscure an overall difference that actually exists between the populations. For example, if you stopped the first two males and first two females you saw on the street and measured their heights, the females might actually be taller than the males. In this example, the two populations would be all males and all females, and the samples would be the two people drawn from each population. And these samples would—by the luck of the draw—not accurately reflect the general characteristics of each of the populations.

Suppose you wanted to know whether Tiger Woods’s upbringing played a crucial role in leading him to become a professional golfer. How would you go about studying this? Don’t assume that it has to be a case study. Which specific questions would you ask? What are the best methods for answering them, and the drawbacks of each method?

The problem of variation in samples is particularly severe when the difference of interest—the effect—is not great and the sample sizes are small. In our example of male and female heights, if men averaged 8 feet tall and women 4 feet tall, small samples would not be a problem; you would quickly figure out the usual height difference between men and women. But if men averaged 5 feet 10 inches and women averaged 5 feet 9 inches (and the heights in both populations varied by an average of plus-or-minus 6 inches), you would need to measure many men and women before you were assured of finding the difference. By combining the samples from many studies, a meta-analysis allows you to detect even subtle differences or relations among variables (Rosenthal, 1991).
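The simulation below, a minimal sketch using the height example just given (true means of 5 feet 10 inches versus 5 feet 9 inches, with heights varying by about 6 inches), shows how often samples of different sizes point in the right direction; the sample sizes, trial count, and random seed are arbitrary.

```python
import random

random.seed(1)  # reproducible illustration

def proportion_men_taller(n, trials=2000):
    """Estimate how often a sample of n men averages taller than n women,
    given true means of 70 vs. 69 inches and heights varying by about 6 inches."""
    hits = 0
    for _ in range(trials):
        men = [random.gauss(70, 6) for _ in range(n)]
        women = [random.gauss(69, 6) for _ in range(n)]
        if sum(men) / n > sum(women) / n:
            hits += 1
    return hits / trials

for n in (2, 10, 50, 500):
    print(f"Sample size {n:4d}: men measure taller in "
          f"{proportion_men_taller(n):.0%} of samples")
```

With only two people per group, the samples point the wrong way almost half the time; with hundreds per group, the true difference emerges almost every time—which is why pooling many samples, as a meta-analysis does, can detect subtle effects.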

Be a Critical Consumer of Psychology

No research technique is always used perfectly, so you must be a critical consumer of all science, including the science of psychology. Whenever you read a report of a psychological finding in a newspaper, a journal article, or a book (including this one), think about other ways that you could interpret the finding. In order to do this, begin by looking for potential confounds. But confounds aren’t the only aspect of a study that can lead to alternative explanations; here are a few other issues that can cloud the interpretation of studies.

Reliability: Count on It!

Some data should be taken more seriously than other data because data differ in their quality. One way to evaluate the quality of the data is in terms of reliability. Reliability means consistency. A reliable car is one you can count on to behave consistently, starting even on cold mornings. Data are reliable if you obtain the same findings each time the variable is measured. For example, an IQ score is reliable if you get the same score each time you take the test.

Validity: What Does It Really Mean?

Something is said to be valid if it is what it claims to be; a valid driver’s license, for example, is one that confers the right to drive because it was, in fact, issued by the state and has not expired. In research, validity means that a method does in fact measure what it is supposed to measure. A study can be reliable without being valid, but to be valid it must be reliable.

Bias: Playing with Loaded Dice

The outcome of a study can also be affected by what the participants or the researchers believe or expect to happen or by their habitual ways of responding to or conceiving of situations. Bias occurs when (conscious or unconscious) beliefs, expectations, or habits alter how participants in a study respond or affect how a researcher sets up or conducts a study, thereby influencing its outcome. Bias can take many forms. For example, response bias occurs when research participants tend to respond in a particular way regardless of their actual knowledge or beliefs. For instance, many people tend to say “yes” more than “no,” particularly in some Asian cultures (such as that of Japan). This sort of bias toward responding in “acceptable” ways is a devilish problem for survey research. For example, even though residents of a community in Japan might not want a local, publicly funded golf course, the politician pushing the idea could take advantage of this cultural response bias by having a pollster ask residents “Do you support using public funds to build golf courses, which will increase recreational options?” Respondents would be more likely to say “yes” simply because they are more likely to answer “yes” to any question.

Another form of bias is sampling bias, one form of which occurs when researchers do not choose the participants at random but instead select them so that an attribute is over- or underrepresented. For example, say you wanted to know the average heights of males and females, and you went to shopping malls to measure people. You would fall victim to sampling bias if you measured males outside a toy store (because you would likely be measuring little boys) but measured females outside a fashion outlet for tall people (because you would likely be measuring especially tall women). (As you’ve probably noticed, such bias leads to a confound—in this case between where the samples were chosen and the type of sample.)

Sampling bias isn’t just something that sometimes spoils otherwise good research studies. Have you read about the U.S. presidential election of 2016? Based on surveys, most pundits incorrectly predicted that Hillary Clinton would be the winner. What led them astray? In part, sampling bias. The news organizations that conducted the surveys did not ask a representative sample of the relevant population—namely people in specific districts. In the United States, presidents are elected not by popular vote, but by the electoral college—and hence the winner is determined by who wins enough delegates. To be an accurate predictor, the surveys needed to sample among all likely voters in the various key districts, determining who would win each district and therefore each key state. By focusing on the overall popular vote, or even overall state votes, the pundits were misled about who would be the likely winner.

Experimenter Expectancy Effects: Making It Happen

Another set of problems that can plague psychological research goes by the name of experimenter expectancy effects, even though these effects can occur in all types of psychological investigations (not just experiments). Experimenter expectancy effects occur when an investigator’s expectations lead him or her (consciously or unconsciously) to treat participants in a way that encourages them to produce the expected results. Here’s a famous example, with an unusual participant: Clever Hans was a horse that lived in Germany in the early 1890s. He apparently could add (Rosenthal, 1976). When a questioner (one of several) called out two numbers to add (for example, “6 plus 4”), Hans would tap out the correct answer with his hoof. Was Hans a genius horse? Was he psychic? No. Despite appearances, Hans wasn’t really adding. He seemed to be able to add, but he responded only when his questioner stood in his line of sight and knew the answer. It turned out that each questioner, who expected Hans to begin tapping, always looked at Hans’s hoof right after asking the question—thereby cuing Hans to start tapping. When Hans had tapped out the right number, the questioner always looked up—cuing Hans to stop tapping. Although, in fact, Hans could not add, he was a pretty bright horse: He was not trained to do this; he “figured it out” on his own.

The story of Hans nicely illustrates the power of experimenter expectancy effects. The cues offered by Hans’s questioners were completely unintentional; they had no wish to mislead (and, in fact, some of them were probably doubters). But such cues led Hans to respond in a specific way. In psychological research, the cues can be as subtle as when a researcher makes eye contact with or smiles at a participant. For instance, if you were a researcher interviewing participants about their attitudes about alcohol, your own views could affect how you behave, which in turn affects what the participants say: You might smile whenever they mention an attitude that is similar to yours and frown when they mention one that is different. In turn, the participants may try to please you by saying what they think (perhaps unconsciously) you want to hear.

At least for experiments, you can guarantee that experimenter expectancy effects won’t occur by using a particular type of experimental arrangement called a double-blind design. In a double-blind design, the participant is “blind” to (unaware of) the predictions of the study—and hence unable consciously or unconsciously to serve up the expected result—and the experimenter is also “blind” to the group to which the participant has been assigned or the particular condition the participant is receiving, and thus is unable to induce the expected results. Clever Hans failed when the questioner did not know the answer to the question. In an experiment, participants are no different from Clever Hans; they also can’t respond to cues that aren’t provided to them.
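One way to picture the experimenter’s side of a double-blind design is the coding scheme sketched below, in which a third party holds the key that links neutral labels to conditions; the participant IDs and labels are hypothetical.

```python
import random

random.seed(7)  # reproducible illustration

participants = [f"P{i:02d}" for i in range(1, 9)]  # hypothetical participant IDs

# A third party randomly decides which neutral label means which condition
# and keeps the key sealed until all the data have been collected.
labels = ["X", "Y"]
random.shuffle(labels)
sealed_key = {labels[0]: "experimental", labels[1]: "control"}

# Participants are randomly assigned to the neutral labels; the experimenter
# runs every session knowing only "X" or "Y", never what each label means.
group_labels = ["X"] * 4 + ["Y"] * 4
random.shuffle(group_labels)
assignments = dict(zip(participants, group_labels))

print("Experimenter sees:", assignments)
print("Sealed key (opened only at analysis):", sealed_key)
```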

Psychology and Pseudopsychology: What’s Flaky and What Isn’t?

The field of psychology—rooted in science—must be distinguished from pseudopsychology: theories or statements that at first glance look like psychology but in fact are superstition or unsupported opinions. For example, astrology—along with palm reading and tea-leaf reading and all their relatives—is not a branch of psychology but is pseudopsychology. Pseudopsychology is not psychology at all. It may look and sound like psychology, but it is not science.

Unfortunately, some people are so interested in “knowing more about themselves” or in getting help for their problems that they turn to pseudopsychology. So popular is astrology, for example, that you can have your daily horoscope delivered to you via the Internet. And advice found in some (but not all) self-help books falls into the category of pseudoscience. For instance, at one point in history, some self-help gurus told people that screaming was good because it would “let it all out” (the “it” being anger and hurt)—but there was absolutely no evidence that such screams did any more than annoy the neighbors. In fact, research suggests that venting anger in this way sometimes may end up increasing your angry feelings, not diminishing them (Bushman, 2002; Lohr et al., 2007). (It’s a good idea to check whether the advice dispensed in a self-help book is supported by research before you shell out your hard-earned cash and possibly waste your time reading it.)

Dogbert (Dilbert’s dog) is thinking scientifically about astrology. He proposes a relationship among seasonal differences in diet, sunlight, and other factors and personality characteristics. These variables can be quantified and their relationships tested. If these hypotheses are not supported by the data, but Dogbert believes in astrology nevertheless, he’s crossed the line into pseudopsychology.

A strip of the newspaper comic, Dilbert. In the first panel, Dilbert says to his dog, Dogbert, It’s amazing that people believe in astrology, as if the stars could affect your personality. Dogbert responds in the second panel, Well, seasonal differences in diet, sunlight and natural rhythms could affect expectant mothers, which could have predictable results on fetal brain development. In the third panel, Dogbert continues, Maybe the ancients simply used the stars to measure the timing of these patterns. Dilbert says, If they were so smart, why didn’t they invent watches?

It’s not always obvious what’s psychology and what’s pseudopsychology. One key to distinguishing the two is the method; we’ve already seen what goes into the scientific method, which characterizes all science—regardless of the topic. For example, is extrasensory perception (ESP) pseudopsychology? ESP refers to a collection of mental abilities that do not rely on the ordinary senses or abilities. Telepathy, for instance, is the ability to read minds. This sounds not only wonderful but magical. No wonder people are fascinated by the possibility that they, too, may have latent, untapped, extraordinary abilities. But the mere fact that the topic may seem “on the fringe” does not mean that it is pseudopsychology. Similarly, the mere fact that many experiments on ESP have come up empty, failing to produce solid evidence for the abilities, does not mean that the experiments themselves are bad or “unscientific.” One can conduct a perfectly good experiment—taking care to guard against confounds, bias, and expectancy effects, for instance—even on ESP (Moulton & Kosslyn, 2008).

For example, let’s say that you want to study telepathy. You might arrange to test pairs of participants, with one member of each pair acting as “sender” and the other as “receiver.” The sender and receiver would each look at an identical set of four playing cards (say, an ace, a two, a three, and a four). The sender would focus on one of the cards (say, the ace) and would “send” the receiver a mental image of the chosen card. The receiver’s job would be to try to receive this image and report which card the sender is focusing on. By chance alone, with only four cards to choose from, the receiver would be right about 25% of the time. So the question is, can the receiver do better than mere guesswork? In this study, you would measure the percentage of times the receiver picks the right card and compare this to what you would expect from guessing alone.
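To decide whether a receiver is doing better than mere guesswork, you could compare the number of hits to what the binomial distribution predicts from chance alone; this sketch does that calculation for a hypothetical session of 100 trials.

```python
from math import comb

def p_at_least(k, n, p=0.25):
    """Probability of k or more correct picks out of n trials by guessing alone,
    when each guess has a 1-in-4 chance of being right."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical session: the receiver names the correct card 32 times in 100 trials.
n_trials, n_correct = 100, 32
print(f"Chance of {n_correct}+ correct by guessing alone: "
      f"{p_at_least(n_correct, n_trials):.3f}")
# A very small probability would suggest performance above the 25% chance level;
# a large one means guessing alone could easily explain the result.
```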

But wait! What if the sender, like the questioners of Clever Hans, provided visible cues (accidentally or on purpose) that have nothing to do with ESP, perhaps smiling when “sending” an ace, and grimacing when “sending” a two. A better experiment would have sender and receiver in different rooms, thus controlling for such possible confounds. Furthermore, what if people have an unconscious bias to prefer red over black cards, which leads both the sender and the receiver to select red cards more often than would occur by chance? This difficulty can be countered by including a control condition, in which a receiver guesses cards when the sender is not actually sending. Such guesses will reveal response biases (such as a preference for red cards), which exist independently of messages sent via ESP.

Just following the scientific method will not eliminate the possibility of pseudopsychology. For instance, if solid studies conclusively show that there is nothing to ESP, then people who claim to have it or to understand it will be trying to sell a bill of goods—and will be engaging in pseudopsychology. If proper studies are conducted, we must accept their conclusions. To persist in beliefs in the face of contradictory results would be to engage in pseudopsychology.

Ethics: Doing It Right

Let’s say that Tiger Woods wants to learn how to overcome pain so that he can practice hard even when he is hurt—but practicing when injured might cause long-term damage to his body. Would it be ethical for a sport psychologist to teach Woods—or anyone else—techniques to help Woods continue to work out even in the presence of damaging pain? Or, what if Woods decided that he was “addicted to sex” and was desperate to be cured; would it be ethical for a therapist to treat him with new, unproven techniques? To address these questions, we must learn more about the ethical code that guides the research and treatment undertaken by psychologists.

Ethics in Research

Psychologists have developed a set of rules to ensure that researchers follow sound ethical standards, especially when participants’ rights may conflict with a research method or clinical treatment. Certain methods are obviously unethical: No psychologist would cause people to become addicted to drugs to see how easily they can overcome the addiction or would abuse people to help them overcome a psychological problem. But many research situations are not so clear-cut.

Research with People: Human Guinea Pigs?

Some New York psychiatrists were taking fluid from the spines of severely depressed teenagers at regular intervals in order to see whether the presence of certain chemicals in the teens’ bodies could predict which particular teens would attempt suicide. As required by law, the youths’ parents had given permission for the researchers to draw the fluids. However, this study was one of at least ten that a court ruling brought to a screeching halt in 1996 (New York Times, December 5, page A1). The New York State Appeals Court found that the rules for the treatment of children and mentally ill people in experimental settings were unconstitutional because they did not properly protect these participants from abuse by researchers. However, the researchers claimed that without these studies they would never be able to develop the most effective drugs for treating serious impairments, some of which might lead to suicide. Do the potential benefits of such studies outweigh the pain they cause?

At the time of this study, New York was more lax in its research policies than many other states. California, Connecticut, Massachusetts, and Illinois have strict rules regarding when researchers can conduct studies in which the pain outweighs the gain or that have risks and do not benefit participants directly. In these cases, the study can be conducted only when the participants themselves—not family members—provide informed consent. Informed consent means that before agreeing to take part, potential participants in a study must be told what they will be asked to do and must be advised of the possible risks and benefits of the procedure. They are also told that they can withdraw from the study at any time without being penalized. Only after an individual clearly understands this information and gives consent by signature can he or she take part in a study. (Because minors, and some patients, may not be able to understand the information adequately, they might not be able legally to provide informed consent.) But not all states have such rules, and there are no general federal laws that regulate all research with human participants.

Nevertheless, a study that uses funds from the U.S. government or from most private funding sources must be approved by an institutional review board (IRB) at the university, hospital, or other institution that sponsors or hosts the study. The IRB monitors all research projects at that institution, not just those of psychologists. An IRB usually includes not only scientists but also physicians, clergy, and representatives from the local community. The IRB considers the potential risks and benefits of each research study and decides whether the study can be performed. These risks and benefits are considered from all three levels of analysis: effects on the brain (for example, of drugs), the person (for example, through imparting false beliefs), and the group (for example, being embarrassed in front of others or humiliated). Deceiving participants with false or misleading information is permitted only when the participants will not be harmed and the knowledge gained clearly outweighs the use of dishonesty. If there is any chance that participants might respond negatively to being in a proposed study, many universities and hospitals require researchers to discuss their proposed studies with the board, to explain in more detail what they are doing and why. Concerns about the ethical treatment of human participants lead most IRBs to insist that participants be debriefed, that is, interviewed after the study to ask about their experience and to explain why it was conducted. The purposes of debriefing are to ensure that participants do not have negative reactions from participating and that they have understood the purposes of the study.

In 2010, after reports circulated that some psychologists were involved in the torture of detainees at Guantanamo Bay, a U.S. detention facility in Cuba, the American Psychological Association added a clause to its code of ethics: The code now clearly states that it is unethical for psychologists “to participate in, facilitate, assist, or otherwise engage in torture” (American Psychological Association, 2010).

Research with Animals

In large parts of India, animals are not eaten (some are even considered sacred). Many in that culture may believe that animal research is not appropriate.

A man steers around a cow in the street. The cow is considered sacred in some parts of India.

Animals are studied in some types of psychological research, particularly studies that focus on understanding the brain. Animals, of course, can’t give informed consent, don’t volunteer, and can’t decide to withdraw from the study if they experience pain or are uncomfortable. But this doesn’t mean that animals lack protection. Animal studies, like human ones, must have the stamp of approval of a review board (in the United States, animal research is overseen by an Institutional Animal Care and Use Committee, or IACUC, which plays a role comparable to that of an IRB). The board makes sure the animals are housed properly (in cages that are large enough and cleaned often enough) and that they are not mistreated. Researchers are not allowed to cause animals pain unless that is explicitly what is being studied—and even then, they must justify in detail the potential benefits to humans (and possibly to animals, by advancing veterinary medicine) of inflicting the pain.

Is it ethical to test animals at all? This is not an easy question to answer. Researchers who study animals argue that their research is ethical. They point out that although there are substitutes for eating meat and wearing leather, there is no substitute for the use of animals in certain kinds of research; if animals are not used, the research will not be conducted. So, if the culture allows the use of animals for food and clothing, it is not clear why animals should not be studied in laboratories if the animals do not suffer and the findings produce important knowledge. This is not a cut-and-dried issue, however, and thoughtful people disagree.

Ethics in Clinical Practice

New therapies are developed continually, but a therapist can ethically provide a new therapy only if he or she has been trained in it appropriately or is learning it under supervision. For instance, imagine that Dr. Singh has developed a new type of therapy that she claims is particularly effective for patients who are afraid of some social situations, such as public speaking or meeting strangers. You are a therapist, and you have a patient who is struggling with such difficulties and is not responding to conventional therapy. You haven’t been trained in Singh’s therapy, but you want to help your patient. According to the American Psychological Association guidelines (see Table 1.4), you can try this therapy only if you’ve received training or will be supervised by someone trained in this method.

Some ethical decisions, such as not administering treatments you haven’t been trained to provide, are relatively straightforward, but the ethical code can also come into conflict with the state laws under which psychologists and other psychotherapists practice. For instance, generally speaking, psychologists should obtain specific permission from a patient before they communicate about the patient with people other than professionals who are treating him or her. However, the law makes certain exceptions to the need to maintain confidentiality, such as when a life or (in some states) property is at stake.

Indeed, difficult cases sometimes cause new laws to be written. In 1969, a patient at the University of California told a psychologist at the student health center that he wanted to kill someone and named the person. The campus police were told; they interviewed the patient and let him go. The patient later killed his targeted victim. The dead woman’s parents sued the psychologist and the university for “failure to warn.” The case eventually wound its way to California’s highest court. One issue the court debated was whether the therapist had the right to divulge confidential information from therapy sessions, even when someone’s safety is at risk. The court ruled that a therapist is obligated to use reasonable care to protect a potential victim. More specifically, in California (and in most other states now), if a patient has told his or her mental health clinician that he or she plans to harm a specific other person, and the clinician has reason to believe the patient can and will follow through with that plan, the clinician must take steps to protect the targeted person from harm—even though doing so may violate the patient’s confidentiality. Similar guidelines apply to cases of potential suicide.

Further, a therapist should not engage in sexual relations with a patient or mistreat a patient physically or emotionally. The American Psychological Association has developed many detailed ethical guidelines based on the principles listed in Table 1.4.

Table 1.4 General Ethical Principles and Code of Conduct for Psychologists


Principle A: Beneficence and Nonmaleficence

“Psychologists strive to benefit those with whom they work and take care to do no harm.... Because psychologists’ scientific and professional judgments and actions may affect the lives of others, they are alert to and guard against personal, financial, social, organizational, or political factors that might lead to misuse of their influence.”

Principle B: Fidelity and Responsibility

“Psychologists uphold professional standards of conduct, clarify their professional roles and obligations, accept appropriate responsibility for their behavior, and seek to manage conflicts of interest that could lead to exploitation or harm.”

Principle C: Integrity

“Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology. In these activities psychologists do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. Psychologists strive to keep their promises and to avoid unwise or unclear commitments.”

Principle D: Justice

“Psychologists recognize that fairness and justice entitle all persons to access to and benefit from the contributions of psychology and to equal quality in the processes, procedures, and services being conducted by psychologists.”

Principle E: Respect for People’s Rights and Dignity

“Psychologists respect the dignity and worth of all people, and the rights of individuals to privacy, confidentiality, and self-determination.... Psychologists are aware of and respect cultural, individual, and role differences, including those based on age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and socioeconomic status and consider these factors when working with members of such groups.”
Note: These are direct quotes with portions abridged; a complete description can be found at http://www.apa.org/ethics/code/index.aspx.

New Frontiers: Neuroethics

As research in psychology continues to progress at an increasing pace, new issues have emerged that would have been in the realm of science fiction a few years ago. To address one set of these issues, a new branch of ethics, called neuroethics, is focusing on the possible dangers and benefits of research on the brain, including the use of neuroscience to predict and control individual behavior. Although the field is relatively young, it has its own journal, Neuroethics, and addresses questions that are hotbeds of debate. So far, neuroethicists have more questions than answers. For example, is it ethical to scan people’s brains to discover whether they are telling the truth? To require young children to take medication that helps them pay attention better in school? To encourage adults with no cognitive problems to take “cognitive enhancing” pills or to undergo cognitive-enhancing medical and surgical procedures (Darby & Pascual-Leone, 2017)? Should computer games that can enhance brain function be regulated—either by physicians or the government? These are the kinds of questions that neuroethicists try to resolve.

Consider how neuroethics applies to convicted criminals and to people who might commit crimes in the future. Suppose that the brains of murderers could be shown conclusively to have a distinctive characteristic (perhaps one region that is much smaller than normal). Should we then scan people’s brains and watch them carefully if they have this characteristic? Should this characteristic be used as a criterion for parole for prisoners? Would it be ethical to alter prisoners’ brain function if it makes them less likely to reoffend when released (Craig, 2016)? Such questions have no easy answers.

Using Cognitive Enhancers

Some books, films, and TV shows address neuroethical issues. For instance, the television series Limitless (and the film of the same name), starring Jake McDorman, explores what might happen if some people were allowed to use “cognitive enhancers” to increase their intelligence.


One organization devoted to neuroethical questions, The Center for Cognitive Liberty and Ethics, asserts that two fundamental principles should form the core of neuroethics: First, individuals should never be forced to use technologies or drugs that alter how their brains function. Second, individuals should not be prohibited from using such technologies or drugs if they so desire, provided that such use would not lead the individuals to harm others. However, not everyone agrees with these principles.

Looking At Levels: Graph Design for the Human Mind


In our earlier discussion about research, we did not consider the final result of successfully using the scientific method—namely, communicating new findings and theories to others. Science is a community activity, and effective communication is a key part of the practice of science. Graphs are one way to convey scientific findings without overwhelming the reader (Kosslyn, 2006). But what kind of graph should be used? The answer to this question depends on what message needs to be conveyed. Bar graphs are better than line graphs when you need to convey comparisons between discrete data points (such as specific numbers of Democratic versus Republican voters in various regions), whereas line graphs are better when you need to convey trends (such as changes in the numbers of Democratic and Republican supporters in different parts of the United States over time) (Zacks & Tversky, 1999). Bars end at discrete locations, and thus it’s easy to compare data points simply by comparing the heights of the bars. In contrast, bars are not as useful for conveying trends because the reader needs mentally to connect the tops of bars, creating a line in order to determine visually whether there is a trend. Hence, if that’s what you want to convey, it’s better to give the reader the line in the first place. But if the reader needs to compare discrete data points, a line isn’t so good: Now the reader must “mentally break down” the line into specific points, which requires effort (Kosslyn, 2006).
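To make the choice concrete, here is a minimal sketch in Python using the matplotlib plotting library (the party-support numbers are invented for illustration): the same data are drawn once as bars, which suit point-by-point comparisons, and once as lines, which suit trends.

```python
import matplotlib.pyplot as plt

# Invented data: supporters (in millions) across four election years.
years = [2008, 2012, 2016, 2020]
democrats = [69, 66, 66, 81]
republicans = [60, 61, 63, 74]

fig, (ax_bar, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: bars end at discrete locations, so comparing two specific
# data points (e.g., the two parties within one year) is easy.
x = range(len(years))
ax_bar.bar([i - 0.2 for i in x], democrats, width=0.4, label="Democratic")
ax_bar.bar([i + 0.2 for i in x], republicans, width=0.4, label="Republican")
ax_bar.set_xticks(list(x))
ax_bar.set_xticklabels(years)
ax_bar.set_title("Comparing discrete points: use bars")

# Line graph: the line hands the reader the trend directly, with no need
# to mentally connect the tops of bars.
ax_line.plot(years, democrats, marker="o", label="Democratic")
ax_line.plot(years, republicans, marker="o", label="Republican")
ax_line.set_title("Conveying trends: use lines")

for ax in (ax_bar, ax_line):
    ax.set_ylabel("Supporters (millions)")
    ax.legend()

plt.tight_layout()
plt.show()
```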

Think about this finding from the levels-of-analysis perspective: When designing a graph, you need to respect the way the human perceptual system works (level of the brain). If you choose an inappropriate graph, you are asking the reader to work harder to understand your message—which he or she may not be willing to do. But that’s not all there is to it. Researchers who focus on the level of the group argue that people use graphs both to communicate data and to impress others—for example, by making bars look three-dimensional, which doesn’t convey information about the data and actually requires the reader to work harder to understand the graph (Tractinsky & Meyer, 1999).

In addition, researchers have studied events at the level of the person, such as the qualities of graphs that presenters prefer. One finding is that a presenter’s preferences depend in part on the conclusions likely to be drawn from the data. For instance, participants preferred a visually elaborate three-dimensional graph when they were presenting undesirable information (such as financial losses) (Tractinsky & Meyer, 1999). Why might this be? Perhaps because the graph obscures the data? Or perhaps because a fancy graph might partly compensate for undesired findings?

As usual, events at the three levels interact: Depending on the graph you choose for a specific purpose, you will reach the readers (level of the group) more effectively if they do not have to work hard to understand the display (level of the brain), and the particular message (level of the person) may influence both the type of graph you choose and how motivated the readers are to understand it. Not only do events at the different levels interact, but also these events themselves often must be understood at the different levels. For example, what occurs in the brain when someone is “impressed”? Trying to impress someone is clearly a social event, but it relies on events at the other levels of analysis. Similarly, “communication” is more than a social event—it also involves conveying content to readers (level of the person) and, ultimately, engaging their brains. Any psychological event can be understood fully only by considering it from all three levels.

Looking Back: Key Takeaways

  1. What is the scientific method, and how is it used to study the mind and behavior? The scientific method is a way to gather facts that will lead to the formulation and validation of a theory. This method relies on systematically observing events, formulating a question, forming a hypothesis about the relation between variables in an attempt to answer the question, collecting new observations to test the hypothesis, using such evidence to formulate a theory, and, finally, testing the theory.

  2. What scientific research techniques are commonly used to study the mind and behavior? Psychologists use naturalistic observation, involving careful documentation of events; case studies, which are detailed analyses of a single participant; surveys, in which participants are asked sets of specific questions about their beliefs, attitudes, preferences, or activities; correlational research, in which the relations among variables are documented but causation cannot be inferred; experimental designs, in which participants are randomly assigned to groups and the effects of changing the value of one or more independent variables on the value of a dependent variable are studied; and quasi-experimental designs, which are similar to experimental designs except that participants are not randomly assigned to groups and conditions are selected from naturally occurring variations. Finally, psychologists also use meta-analyses, in which the results of many studies are combined in a single overall analysis.

  3. What are key characteristics of good scientific studies in psychology? The measures taken must be reliable (repeatable), valid (assess what they are supposed to assess), unbiased, and free of experimenter expectancy effects (which lead participants to respond in specific ways). The studies must be well designed (eliminate confounds and use appropriate controls), and the results must be properly interpreted (alternative accounts are ruled out).

  4. How are ethical principles applied to scientific studies in psychology and in psychotherapy? For research with humans, informed consent is necessary before a person can participate in a study (informed consent requires that the person appreciate the potential risks and benefits of taking part in a study). In the vast majority of cases, studies must be approved in advance by an institutional review board (IRB), and participants must be debriefed after the study to ensure that they have no negative reactions and to confirm that they understand the purpose of the study. For research with animals, the animals must be treated humanely, animal studies must be approved in advance by a review board, and animals cannot be caused pain or discomfort unless that is what is being studied (even then, researchers must justify the potential benefits to humans or animals). For psychotherapy, in general, strict confidentiality is observed except where the law stipulates that confidentiality should be violated, such as when a specific person may be harmed, when suicide appears to be a real possibility, or (in some states) when another’s property may be damaged. Moreover, therapists should not take advantage of their special relationships with patients in any way. A new branch of ethics, called neuroethics, focuses on the potential benefits and dangers of research on the brain.