19+ Experimental Design Examples (Methods + Types)


Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."

Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.

Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.


Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.

In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.

What Is Experimental Design?

Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.

Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.

So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.

Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions (think critically), decide what to measure (come up with an idea), and figure out how to measure it (test it). It also helps you consider things that might mess up your results, like outside influences you hadn't thought of.

For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?

In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!

History of Experimental Design

Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.

Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.

Now, let's zoom ahead to the 18th and 19th centuries. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were! His work helped create the foundations for a more organized approach to experiments.

Next stop: the early 20th century. Enter Ronald A. Fisher, a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.

Fisher championed the rigorous use of a "control group": a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. Even more importantly, he pioneered "randomization," which means assigning people or things to different groups by chance, like drawing names out of a hat. This helps make sure the experiment is fair and the results are trustworthy.

Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing "behaviorism." They focused on studying things that they could directly observe and measure, like actions and reactions.

Skinner even built special chambers, known as Skinner Boxes, to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson also ran a very controversial study, the Little Albert experiment, which showed how behavior can be shaped through conditioning; in other words, how people learn to behave the way they do.

In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.

With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.

Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.

So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.

Key Terms in Experimental Design

Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.

Independent Variable : This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.

Dependent Variable : This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.

Control Group : This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo, instead of the real medicine.

Experimental Group : This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.

Randomization : This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.

Sample : This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.

Bias : This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.

Data : This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!

Replication : This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.

Hypothesis : This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.

Steps of Experimental Design

Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:

  • Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
  • Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
  • Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
  • Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
  • Randomization : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
  • Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
  • Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
  • Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
  • Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
  • Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.

So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.

Let's get into examples of experimental designs.

1) True Experimental Design


In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.

No sneaky biases here!
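
If you like seeing ideas in code, here's a minimal Python sketch of that randomization step, with made-up participant names (any real trial would use dedicated, documented procedures):

```python
import random

def randomize(participants, seed=None):
    """Randomly split participants into a control and an experimental group."""
    rng = random.Random(seed)      # seeded so the split can be reproduced
    shuffled = participants[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# Hypothetical participants, for illustration only.
people = ["Ana", "Ben", "Chris", "Dana", "Eli", "Fran", "Gus", "Hana"]
groups = randomize(people, seed=42)
print("control:     ", groups["control"])
print("experimental:", groups["experimental"])
```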

True Experimental Design Pros

The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.

True Experimental Design Cons

However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?

True Experimental Design Uses

The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.

When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"

So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.

2) Quasi-Experimental Design

So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.

Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.

In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.

Quasi-Experimental Design Pros

Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're especially popular in fields like education, sociology, and public policy.

For instance, when researchers wanted to figure out if the Head Start program, aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.

Quasi-Experimental Design Cons

Quasi-experiments are easier to set up and often cheaper than true experiments, but that convenience comes at a price: their conclusions aren't as rock-solid. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"

Quasi-Experimental Design Uses

Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.

In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.

3) Pre-Experimental Design

Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.

Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.

So, what's the deal with pre-experimental designs?

Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.

It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.

Pre-Experimental Design Pros

Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.

A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.

Pre-Experimental Design Cons

But here's the catch: pre-experimental designs are like the first draft of an essay. A draft helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.

Pre-Experimental Design Uses

This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide whether it was worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.

So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.

4) Factorial Design

Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.

Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.

In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.

It's like cooking with several spices to see how they blend together to create unique flavors.

Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.
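
To make that concrete, here's a tiny Python sketch of a 2x2 factorial layout for the diet-and-exercise question. The factor names and levels are just made-up placeholders; the point is that every combination of the two independent variables becomes its own condition:

```python
from itertools import product

# Two independent variables ("factors"), each with two levels (hypothetical).
diet = ["normal diet", "low-calorie diet"]
exercise = ["no exercise", "exercise plan"]

# A 2x2 factorial design crosses every level of one factor
# with every level of the other: 2 * 2 = 4 conditions.
conditions = list(product(diet, exercise))

for i, (d, e) in enumerate(conditions, start=1):
    print(f"Condition {i}: {d} + {e}")

# Participants would then be randomly assigned across all four conditions,
# letting researchers study each factor alone AND how the two interact.
```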

Factorial Design Pros

This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.

Factorial Design Cons

However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.

Factorial Design Uses

Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.

And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.

So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.

5) Longitudinal Design


Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.

You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.

With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.

This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.

The famous Framingham Heart Study , started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.

Longitudinal Design Pros

So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.

Longitudinal Design Cons

But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.

Longitudinal Design Uses

Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.

So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.

6) Cross-Sectional Design

Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.

In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.

This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.
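
As a rough Python sketch (with invented ratings and age groups), a cross-sectional comparison really is just lining up groups that were all measured at the same moment:

```python
from statistics import mean

# Hypothetical survey responses, all collected on the same day:
# each entry is (age group, rating of the game from 1 to 10).
responses = [
    ("teens", 9), ("teens", 8), ("teens", 10),
    ("adults", 7), ("adults", 6), ("adults", 8),
    ("seniors", 5), ("seniors", 6), ("seniors", 4),
]

# Cross-sectional comparison: average rating per group at this one point in time.
for group in ("teens", "adults", "seniors"):
    ratings = [score for g, score in responses if g == group]
    print(f"{group}: average rating {mean(ratings):.1f}")
```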

You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.

Cross-Sectional Design Pros

So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.

Cross-Sectional Design Cons

Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.

Cross-Sectional Design Uses

Because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.

So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.

7) Correlational Design

Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.

In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.

The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.

This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.
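
Here's a minimal Python sketch of that idea, using invented study-time and grade numbers and the standard correlation coefficient (Pearson's r, available as statistics.correlation in Python 3.10+). A value near +1 means the two rise together, near -1 means one falls as the other rises, and near 0 means not much of a linear pattern:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: weekly study hours and exam grades for ten students.
study_hours = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12]
exam_grades = [58, 65, 63, 70, 74, 72, 80, 83, 85, 90]

r = correlation(study_hours, exam_grades)  # Pearson's r
print(f"correlation between study time and grades: r = {r:.2f}")

# Remember: even a strong r only shows the two move together;
# it does not prove that studying *causes* the higher grades.
```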

One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.

Correlational Design Pros

This design is great at showing that two (or more) things are related. Correlational designs can make the case that more detailed research is needed on a topic. They can help us see patterns or possible causes for things that we otherwise might not have noticed.

Correlational Design Cons

But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.

Correlational Design Uses

Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.

So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.

8) Meta-Analysis

Now let's talk about Meta-Analysis, the librarian of experimental designs.

If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.

Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.

The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.

You might have heard of the Cochrane Reviews in healthcare. These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.

For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.
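
Here's a simplified Python sketch of that pooling step, using a fixed-effect, inverse-variance weighted average. The study results are invented, and real meta-analyses also check things like study quality, heterogeneity, and publication bias:

```python
from math import sqrt

# Hypothetical studies: each reports an effect (e.g., drop in blood pressure
# in mmHg) and the standard error of that estimate.
studies = [
    {"name": "Study A", "effect": -5.0, "se": 2.0},
    {"name": "Study B", "effect": -3.5, "se": 1.5},
    {"name": "Study C", "effect": -6.2, "se": 2.5},
]

# Fixed-effect meta-analysis: weight each study by 1 / variance,
# so more precise studies count for more.
weights = [1 / (s["se"] ** 2) for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```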

Meta-Analysis Pros

The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.

Meta-Analysis Cons

However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.

Meta-Analysis Uses

Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.

So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.

9) Non-Experimental Design

Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.

In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.

Non-Experimental Design Pros

So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.

Non-Experimental Design Cons

Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.

The downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.

Non-Experimental Design Uses

Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.

For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.

One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.

So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.

10) Repeated Measures Design


Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.

Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.

The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.

Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.
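
In code, that focus on each person's own change looks something like this minimal Python sketch (the running times are invented):

```python
from statistics import mean

# Hypothetical 5k times (in minutes) for the same five runners,
# once without the energy drink and once with it.
without_drink = [25.0, 28.5, 30.2, 27.1, 26.4]
with_drink    = [24.2, 28.0, 29.5, 26.8, 25.9]

# Repeated measures: look at each runner's own difference,
# so differences *between* people cancel out.
differences = [w - d for w, d in zip(without_drink, with_drink)]
print("per-runner improvement (minutes):", [round(x, 2) for x in differences])
print(f"average improvement: {mean(differences):.2f} minutes")
```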

Repeated Measures Design Pros

The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.

Repeated Measures Design Cons

But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.

Repeated Measures Design Uses

A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the ethical standards of this experiment are often criticized today, it was groundbreaking in understanding conditioned emotional responses.

In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.

11) Crossover Design

Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.

In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.

This design is like the utility player on our team—versatile, flexible, and really good at adapting.

The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.

Crossover Design Pros

The neat thing about this design is that it allows each participant to serve as their own control group, which cuts down on the "noise" that comes from individual differences; since each person experiences all conditions, it's easier to see real effects. Imagine you're testing two new kinds of headache medicine. Instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people but at different times.

Crossover Design Cons

There's a catch, though. Crossover Design assumes that there's no lasting effect from the first condition when you switch to the second one. That might not always be true: if the first treatment has a long-lasting carryover effect, it could mess up the results when you switch to the second treatment.

Crossover Design Uses

A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.
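
Here's a small Python sketch of the counterbalancing behind a two-period crossover: half the participants get condition A first and B second, the rest get the reverse order, which helps separate the treatments from the order they arrive in. The names and labels are hypothetical:

```python
import random

def assign_sequences(participants, seed=None):
    """Randomly counterbalance a two-period crossover (AB vs. BA)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "A then B": shuffled[:half],   # e.g., low-carb diet first, then low-fat
        "B then A": shuffled[half:],   # the reverse order
    }

people = ["P1", "P2", "P3", "P4", "P5", "P6"]  # hypothetical participants
print(assign_sequences(people, seed=7))

# A "washout" gap between the two periods is usually added so the first
# treatment has worn off before the second one starts.
```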

In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.

12) Cluster Randomized Design

Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.

This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.

Here's how Cluster Randomized Design works: Instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This helps you see how the new method works in a real-world setting.

Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.
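
In code, the key difference from ordinary randomization is that whole schools, not individual students, are the units being shuffled. Here's a minimal Python sketch with made-up school names:

```python
import random

def randomize_clusters(clusters, seed=None):
    """Assign whole clusters (e.g., schools) to the program or to control."""
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"anti-bullying program": shuffled[:half], "no program": shuffled[half:]}

schools = ["North High", "South High", "East High",
           "West High", "Central High", "Lakeside High"]  # hypothetical clusters
print(randomize_clusters(schools, seed=3))

# Every student in a given school gets whatever condition that school was
# assigned to; the comparison is then made school by school.
```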

Cluster Randomized Design Pros

Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization helps get around this problem by treating each "cluster" as its own mini-experiment.

Cluster Randomized Design Cons

There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.

Cluster Randomized Design Uses

A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.

13) Mixed-Methods Design

Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.

Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!

Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.

Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'

But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'

Mixed-Methods Design Pros

So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.

Mixed-Methods Design Cons

But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.

Mixed-Methods Design Uses

A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).

In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.

14) Multivariate Design

Now, let's turn our attention to Multivariate Design, the multitasker of the research world.

If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.

Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.

Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.

Multivariate Design Pros

So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.

Multivariate Design Cons

But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.

Multivariate Design Uses

Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.

A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.
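
Here's a rough Python sketch of that market-research idea, fitting a simple multiple regression with NumPy (assumed to be installed). All the numbers are invented; the point is that price, ad spend, and shelf placement are modeled together rather than one at a time:

```python
import numpy as np

# Hypothetical market-research data: for each product launch we record
# price (dollars), ad spend (thousands), a shelf-placement score, and sales.
price     = np.array([4.99, 3.99, 5.49, 2.99, 4.49, 3.49])
ad_spend  = np.array([10.0, 25.0,  5.0, 30.0, 15.0, 20.0])
placement = np.array([ 2.0,  3.0,  1.0,  3.0,  2.0,  2.0])
sales     = np.array([120., 210.,  80., 260., 150., 190.])

# Multivariate view: model sales from all three factors at once.
X = np.column_stack([np.ones_like(price), price, ad_spend, placement])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

for name, c in zip(["intercept", "price", "ad_spend", "placement"], coef):
    print(f"{name:>10}: {c:+.2f}")
```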

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.

15) Pretest-Posttest Design

Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?

Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.

This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.

In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."
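
As a tiny Python sketch, the analysis often boils down to comparing each person's "before" score with their "after" score (the quiz scores below are invented):

```python
from statistics import mean

# Hypothetical multiplication quiz scores (out of 20) for the same students.
pretest  = [11, 9, 14, 8, 12, 10]
posttest = [15, 13, 16, 12, 14, 13]

gains = [post - pre for pre, post in zip(pretest, posttest)]
print("individual gains:", gains)
print(f"average gain: {mean(gains):.1f} points")

# A bigger average gain suggests improvement, but on its own it can't rule
# out practice effects or students simply getting older.
```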

Pretest-Posttest Design Pros

What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.

Pretest-Posttest Design Cons

But there are some pitfalls. For example, what if the kids in the math example below get better at multiplication just because they're older or because they've taken the test before? That would make it hard to tell whether the program is really effective or not.

Pretest-Posttest Design Uses

Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.

One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.

16) Solomon Four-Group Design

Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon, who introduced it in the 1940s, this method tries to correct some of the weaknesses in simpler designs, like the Pretest-Posttest Design.

Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.

Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!
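
If it helps to see the layout spelled out, here's a small Python sketch of the four groups. The group labels are arbitrary; what matters is which groups get the pretest and which get the treatment:

```python
# The four groups of a Solomon Four-Group Design:
# (pretest?, treatment?, posttest?) -- every group gets the posttest.
groups = {
    "Group 1": {"pretest": True,  "treatment": True,  "posttest": True},
    "Group 2": {"pretest": True,  "treatment": False, "posttest": True},
    "Group 3": {"pretest": False, "treatment": True,  "posttest": True},
    "Group 4": {"pretest": False, "treatment": False, "posttest": True},
}

for name, plan in groups.items():
    steps = [step for step, included in plan.items() if included]
    print(f"{name}: {' -> '.join(steps)}")

# Comparing Group 1 vs. 2 shows the treatment effect with a pretest;
# Group 3 vs. 4 shows it without one, so you can tell whether merely
# taking the pretest changed the results.
```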

Solomon Four-Group Design Pros

What's the upside of the Solomon Four-Group Design? It provides really robust results because it lets you separate the effect of the treatment from the effect of simply taking the pretest.

Solomon Four-Group Design Cons

The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.

Solomon Four-Group Design Uses

Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).

Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.

The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.

17) Adaptive Designs

Now, let's talk about Adaptive Designs, the chameleons of the experimental world.

Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.

In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, once you set your plan, you stick to it from start to finish.

Adaptive Design Pros

This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.

Adaptive Design Cons

But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.

Adaptive Design Uses

Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.

For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.

The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.
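
Here's a deliberately simplified Python sketch of one such pre-planned rule: at an interim look, drop any dose arm whose side-effect rate crosses a threshold chosen before the trial began. The arms, counts, and threshold are all invented:

```python
# Hypothetical interim data for three dose arms: how many participants
# so far, and how many reported a serious side effect.
arms = {
    "low dose":    {"n": 40, "side_effects": 2},
    "medium dose": {"n": 40, "side_effects": 4},
    "high dose":   {"n": 40, "side_effects": 11},
}

# Pre-planned rule, written down before the trial started:
# drop any arm whose side-effect rate exceeds 20% at the interim look.
THRESHOLD = 0.20

for arm, data in arms.items():
    rate = data["side_effects"] / data["n"]
    decision = "DROP from study" if rate > THRESHOLD else "continue"
    print(f"{arm}: side-effect rate {rate:.0%} -> {decision}")
```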

In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.

Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.

18) Bayesian Designs

Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.

Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.

Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.

In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.
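
Here's a minimal Python sketch of the core Bayesian move, a Beta-Binomial update: prior knowledge about how often a medicine works is combined with new patient data to give an updated estimate. The prior counts and trial results are invented:

```python
# Prior knowledge from earlier research (hypothetical): the medicine helped
# roughly 30 of 50 similar patients. Encode that as a Beta(30, 20) prior.
prior_successes, prior_failures = 30, 20

# New data from the current study (hypothetical): 18 of 25 patients improved.
new_successes, new_failures = 18, 7

# Bayesian updating with a Beta prior and binomial data is just addition:
post_successes = prior_successes + new_successes
post_failures = prior_failures + new_failures

prior_mean = prior_successes / (prior_successes + prior_failures)
post_mean = post_successes / (post_successes + post_failures)

print(f"prior estimate of success rate: {prior_mean:.2f}")
print(f"updated (posterior) estimate:   {post_mean:.2f}")
```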

Bayesian Design Pros

One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.

Bayesian Design Cons

However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.

Bayesian Design Uses

Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.

Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.

This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.

19) Covariate Adaptive Randomization


Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.

Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.

Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.

In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.

Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.
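
Here's a rough Python sketch of one common flavor of this idea, often called minimization: each new participant goes into whichever group currently keeps a key trait (here, an age band) in better balance, with a coin flip to break ties. Real trials juggle several covariates and add more randomness, so treat this as an illustration only:

```python
import random

def assign(participant, groups, covariate="age_band", seed=None):
    """Place a new participant in the group that keeps the covariate balanced."""
    rng = random.Random(seed)
    counts = {}
    for name, members in groups.items():
        counts[name] = sum(1 for p in members if p[covariate] == participant[covariate])
    best = min(counts.values())
    candidates = [name for name, c in counts.items() if c == best]
    choice = rng.choice(candidates)   # random tie-break keeps some chance in play
    groups[choice].append(participant)
    return choice

groups = {"treatment": [], "control": []}
incoming = [  # hypothetical participants
    {"id": 1, "age_band": "older"}, {"id": 2, "age_band": "older"},
    {"id": 3, "age_band": "younger"}, {"id": 4, "age_band": "older"},
    {"id": 5, "age_band": "younger"},
]
for person in incoming:
    print(f"participant {person['id']} ({person['age_band']}) ->",
          assign(person, groups, seed=person["id"]))
```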

Covariate Adaptive Randomization Pros

The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.

Covariate Adaptive Randomization Cons

But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.

Covariate Adaptive Randomization Uses

This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.

Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.

In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.

For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.

Covariate Adaptive Randomization is like a careful matchmaker, ensuring that everyone has an equal opportunity to show their true capabilities, thereby making the collective results as reliable as possible.

20) Stepped Wedge Design

Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.

Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.

In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.
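The rollout pattern itself is easy to picture in code. The sketch below prints a hypothetical schedule in which one cluster (here, a hospital ward) crosses over to the intervention at each time period; the cluster names and the number of periods are assumptions made for illustration.

```python
# Sketch of a stepped-wedge rollout: every cluster starts in the control
# condition and crosses over to the intervention at a different step.
clusters = ["Ward A", "Ward B", "Ward C", "Ward D"]   # hypothetical clusters
periods = len(clusters) + 1                           # one baseline period, then one step per cluster

for i, cluster in enumerate(clusters):
    schedule = ["control" if t <= i else "intervention" for t in range(periods)]
    print(f"{cluster}: {' | '.join(schedule)}")
```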

Stepped Wedge Design Pros

The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.

Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.

Stepped Wedge Design Cons

However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.

Stepped Wedge Design Uses

This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.

In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.

21) Sequential Design

Next up is Sequential Design, the dynamic and flexible member of our experimental design family.

Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.

In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.
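Here is a toy sketch of that "stop or go" logic: data arrive in small batches, and after each batch the researchers peek at the results. The simulated effect size and the fixed p-value cutoff are illustrative assumptions only; real sequential trials use formal stopping boundaries (such as Pocock or O'Brien-Fleming) that correct for repeated looks at the data.

```python
import random
from scipy import stats

random.seed(7)  # reproducible toy example

treatment, control = [], []
STOP_BOUNDARY = 0.01        # deliberately strict because we look at the data repeatedly
MAX_BATCHES, BATCH_SIZE = 5, 20

for batch in range(1, MAX_BATCHES + 1):
    # Simulate one "sequence" of new outcomes per group (made-up benefit of 0.4 SD).
    treatment += [random.gauss(0.4, 1.0) for _ in range(BATCH_SIZE)]
    control += [random.gauss(0.0, 1.0) for _ in range(BATCH_SIZE)]

    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"Interim look {batch}: n per arm = {len(control)}, p = {p_value:.3f}")

    if p_value < STOP_BOUNDARY:
        print("Stop early: the evidence already crosses the pre-set boundary.")
        break
else:
    print("Reached the planned maximum sample size without crossing the boundary.")
```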

Sequential Design Pros

One of the great things about Sequential Design is its efficiency. Because you're making data-driven decisions along the way, you can often reach conclusions more quickly and with fewer resources, and you only continue the experiment when the accumulating data suggest it's worth doing so.

Sequential Design Cons

However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.

Sequential Design Uses

This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it. On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.

Beyond healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.

Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.

22) Field Experiments

Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.

Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.

Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.

Field Experiment Pros

On one hand, because the research happens in real settings, the results often give us a better understanding of how things work outside the lab, which makes the findings more relevant to everyday life.

Field Experiment Cons

On the other hand, the lack of control can make it harder to tell exactly what's causing what. Researchers also face challenges like accounting for outside factors and the ethical considerations of intervening in people's lives without their knowledge. Yet, despite these challenges, Field Experiments remain a valuable tool for researchers who want to understand how theories play out in the real world.

Field Experiment Uses

Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.

Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.

Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "Broken Windows" research of the 1980s, which looked at how small signs of disorder, like broken windows or graffiti, could encourage more serious crime in neighborhoods. This work had a big impact on how cities think about crime prevention.

From the foundational concepts of control groups and independent variables to sophisticated designs like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.

We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.

Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.

Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.

So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.


Psychology Experiment Ideas


If you are taking a psychology class, you might at some point be asked to design an imaginary experiment or perform an experiment or study. The idea you ultimately choose to use for your psychology experiment may depend upon the number of participants you can find, the time constraints of your project, and limitations in the materials available to you.

Consider these factors before deciding which psychology experiment idea might work for your project.

This article discusses some ideas you might try if you need to perform a psychology experiment or study.


A Quick List of Experiment Ideas

If you are looking for a quick experiment idea that would be easy to tackle, the following might be some research questions you want to explore:

  • How many items can people hold in short-term memory?
  • Are people with a Type A personality more stressed than those with a Type B personality?
  • Does listening to upbeat music increase heart rate?
  • Are men or women better at detecting emotions?
  • Are women or men more likely to experience imposter syndrome?
  • Will students conform if others in the group all share an opinion that is different from their own?
  • Do people's heart rates or breathing rates change in response to certain colors?
  • How much do people rely on nonverbal communication to convey information in a conversation?
  • Do people who score higher on measures of emotional intelligence also score higher on measures of overall well-being?
  • Do more successful people share certain personality traits?

Most of the following ideas can easily be conducted with a small group of participants, who will likely be your classmates. Here are some of the psychology experiment or study ideas you might want to explore:

Sleep and Short-Term Memory

Does sleep deprivation have an impact on short-term memory?

Ask participants how much sleep they got the night before and then conduct a task to test short-term memory for items on a list.

Social Media and Mental Health

Is social media usage linked to anxiety or depression?

Ask participants about how many hours a week they use social media sites and then have them complete a depression and anxiety assessment.

Procrastination and Stress

How does procrastination impact student stress levels?

Ask participants about how frequently they procrastinate on their homework and then have them complete an assessment looking at their current stress levels.

Caffeine and Cognition

How does caffeine impact performance on a Stroop test?

In the Stroop test, participants are asked to name the ink color of a word rather than read the word itself. Have a control group complete a Stroop test without consuming caffeine, then have an experimental group consume caffeine before completing the same test. Compare the results.
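If you collect completion times for the two groups, the comparison itself is straightforward. The sketch below uses an independent-samples t-test on made-up Stroop times (in seconds); the numbers are placeholders, not real data.

```python
from scipy import stats

# Hypothetical Stroop completion times in seconds; replace with your own data.
no_caffeine = [48.2, 51.0, 47.5, 53.3, 49.8, 50.6]
caffeine = [44.1, 46.7, 43.9, 47.2, 45.5, 44.8]

t_stat, p_value = stats.ttest_ind(no_caffeine, caffeine)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (commonly below .05) suggests the difference between groups
# is unlikely to be due to chance alone.
```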

Color and Memory

Does the color of text have any impact on memory?

Randomly assign participants to two groups. Have one group memorize words written in black ink for two minutes. Have the second group memorize the same words for the same amount of time, but instead written in red ink. Compare the results.

Weight Bias

How does weight bias influence how people are judged by others?

Find pictures of models in a magazine who look similar, including similar hair and clothing, but who differ in terms of weight. Have participants look at the two models and then ask them to identify which one they think is smarter, wealthier, kinder, and healthier.

Assess how each model was rated and how weight bias may have influenced how they were described by participants.

Music and Exercise

Does music have an effect on how hard people work out?

Have people listen to different styles of music while jogging on a treadmill and measure their speed, heart rate, and workout length.

The Halo Effect

How does the Halo Effect influence how people see others?

Show participants pictures of people and ask them to rate the photos in terms of how attractive, kind, intelligent, helpful, and successful the people in the images are.

How does the attractiveness of the person in the photo correlate to how participants rate other qualities? Are attractive people more likely to be perceived as kind, funny, and intelligent?

Eyewitness Testimony

How reliable is eyewitness testimony?

Have participants view video footage of a car crash. Ask some participants to describe how fast the cars were going when they “hit into” each other. Ask other participants to describe how fast the cars were going when they “smashed into” each other.

Give the participants a memory test a few days later and ask them to recall if they saw any broken glass at the accident scene. Compare to see if those in the “smashed into” condition were more likely to report seeing broken glass than those in the “hit into” group.

The experiment is a good illustration of how easily false memories can be triggered.

Simple Psychology Experiment Ideas

If you are looking for a relatively simple psychology experiment idea, here are a few options you might consider.

The Stroop Effect

This classic experiment involves presenting participants with words printed in different colors and asking them to name the color of the ink rather than read the word. Students can manipulate the congruency of the word and the color to test the Stroop effect.

Memory Recall

Students can design a simple experiment to test memory recall by presenting participants with a list of items to remember and then asking them to recall the items after a delay. Students can manipulate the length of the delay or the type of encoding strategy used to see the effect on recall.

Social Conformity

Students can test social conformity by presenting participants with a simple task and manipulating the responses of confederates to see if the participant conforms to the group response.

Selective Attention

Students can design an experiment to test selective attention by presenting participants with a video or audio stimulus and manipulating the presence or absence of a distracting stimulus to see the effect on attention.

Implicit Bias

Students can test implicit bias by presenting participants with a series of words or images and measuring their response time to categorize the stimuli into different categories.

The Primacy/Recency Effect

Students can test the primacy/recency effect by presenting participants with a list of items to remember and manipulating the order of the items to see the effect on recall.

Sleep Deprivation

Students can test the effect of sleep deprivation on cognitive performance by comparing the performance of participants who have had a full night’s sleep to those who have been deprived of sleep.

These are just a few examples of simple psychology experiment ideas for students. The specific experiment will depend on the research question and resources available.

Elements of a Good Psychology Experiment

Finding psychology experiment ideas is not necessarily difficult, but finding a good experimental or study topic that is right for your needs can be a little tough. You need to find something that meets the guidelines and, perhaps most importantly, is approved by your instructor.

Requirements may vary, but you need to ensure that your experiment, study, or survey is:

  • Easy to set up and carry out
  • Easy to find participants willing to take part
  • Free of any ethical concerns

In some cases, you may need to present your idea to your school’s institutional review board before you begin to obtain permission to work with human participants.

Consider Your Own Interests

At some point in your life, you have likely pondered why people behave in certain ways. Or wondered why certain things seem to always happen. Your own interests can be a rich source of ideas for your psychology experiments.

As you are trying to come up with a topic or hypothesis, try focusing on the subjects that fascinate you the most. If you have a particular interest in a topic, look for ideas that answer questions about the topic that you and others may have. Examples of topics you might choose to explore include:

  • Development
  • Personality
  • Social behavior

This can be a fun opportunity to investigate something that appeals to your interests.

Read About Classic Experiments

Sometimes reviewing classic psychological experiments that have been done in the past can give you great ideas for your own psychology experiments. For example, the false memory experiment above is inspired by the classic memory study conducted by Elizabeth Loftus.

Textbooks can be a great place to start looking for topics, but you might want to expand your search to research journals. When you find a study that sparks your interest, read through the discussion section. Researchers will often indicate ideas for future directions that research could take.

Ask Your Instructor

Your professor or instructor is often the best person to consult for advice right from the start.

In most cases, you will probably receive fairly detailed instructions about your assignment. This may include information about the sort of topic you can choose or perhaps the type of experiment or study on which you should focus.

If your instructor does not assign a specific subject area to explore, it is still a great idea to talk about your ideas and get feedback before you get too invested in your topic idea. You will need your teacher’s permission to proceed with your experiment anyway, so now is a great time to open a dialogue and get some good critical feedback.

Experiments vs. Other Types of Research

One thing to note: many of the ideas found here are actually examples of surveys or correlational studies.

For something to qualify as a true experiment, there must be manipulation of an independent variable.

For many students, conducting an actual experiment may be outside the scope of their project or may not be permitted by their instructor, school, or institutional review board.

If your assignment or project requires you to conduct a true experiment that involves controlling and manipulating an independent variable, you will need to take care to choose a topic that will work within the guidelines of your assignment.

Types of Psychology Experiments

There are many different types of psychology experiments that students could perform. Examples of psychological research methods you might use include:

Correlational Study

This type of study examines the relationship between two variables. Students could collect data on two variables of interest, such as stress and academic performance, and see if there is a correlation between the two.

Experimental Study

In an experimental study, students manipulate one variable and observe the effect on another variable. For example, students could manipulate the type of music participants listen to and observe its effect on their mood.

Observational Study

Observational studies involve observing behavior in a natural setting. Students could observe how people interact in a public space and analyze the patterns they see.

Survey Study

Students could design a survey to collect data on a specific topic, such as attitudes toward social media, and analyze the results.

Case Study

A case study involves in-depth analysis of a single individual or group. Students could conduct a case study of a person with a particular disorder, such as anxiety or depression, and examine their experiences and treatment options.

Quasi-Experimental Study

Quasi-experimental studies are similar to experimental studies, but participants are not randomly assigned to groups. Students could investigate the effects of a treatment or intervention on a particular group, such as a classroom of students who receive a new teaching method.

Longitudinal Study

Longitudinal studies involve following participants over an extended period of time. Students could conduct a longitudinal study on the development of language skills in children or the effects of aging on cognitive abilities.

These are just a few examples of the many different types of psychology experiments that students could perform. The specific type of experiment will depend on the research question and the resources available.

Steps for Doing a Psychology Experiment

When conducting a psychology experiment, students should follow several important steps. Here is a general outline of the process:

Define the Research Question

Before conducting an experiment, students should define the research question they are trying to answer. This will help them to focus their study and determine the variables they need to manipulate and measure.

Develop a Hypothesis

Based on the research question, students should develop a hypothesis that predicts the experiment’s outcome. The hypothesis should be testable and measurable.

Select Participants

Students should select participants who meet the criteria for the study. Participants should be informed about the study and give informed consent to participate.

Design the Experiment

Students should design the experiment to test their hypothesis. This includes selecting the appropriate variables, creating a plan for manipulating and measuring them, and determining the appropriate control conditions.

Collect Data

Once the experiment is designed, students should collect data by following the procedures they have developed. They should record all data accurately and completely.

Analyze the Data

After collecting the data, students should analyze it to determine if their hypothesis was supported or not. They can use statistical analyses to determine if there are significant differences between groups or if there are correlations between variables.
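As a small illustration of this step, the sketch below computes a correlation between two made-up sets of scores (say, self-reported procrastination and stress). The data and variable names are placeholders, and the right analysis always depends on your design.

```python
from scipy import stats

# Hypothetical scores from ten participants; replace with your own data.
procrastination = [2, 5, 3, 7, 6, 4, 8, 5, 6, 3]
stress = [3, 6, 4, 8, 5, 4, 9, 6, 7, 2]

r, p_value = stats.pearsonr(procrastination, stress)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# r describes the strength and direction of the relationship;
# the p-value indicates how surprising it would be if there were no true relationship.
```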

Interpret the Results

Based on the analysis, students should interpret the results and draw conclusions about their hypothesis. They should consider the study’s limitations and their findings’ implications.

Report the Results

Finally, students should report the results of their study. This may include writing a research paper or presenting their findings in a poster or oral presentation.



Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results; this minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias as time passes. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experiments

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Phone use and sleep
  • Independent variable: Minutes of phone use before sleep
  • Dependent variable: Hours of sleep per night

Temperature and soil respiration
  • Independent variable: Air temperature just above the soil surface
  • Dependent variable: CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

Phone use and sleep
  • Extraneous variable: Natural variation in sleep patterns among individuals.
  • How to control: Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.

Temperature and soil respiration
  • Extraneous variable: Soil moisture also affects respiration, and moisture can decrease with increasing temperature.
  • How to control: Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Phone use and sleep
  • Null hypothesis (H0): Phone use before sleep does not correlate with the amount of sleep a person gets.
  • Alternate hypothesis (Ha): Increasing phone use before sleep leads to a decrease in sleep.

Temperature and soil respiration
  • Null hypothesis (H0): Air temperature does not correlate with soil respiration.
  • Alternate hypothesis (Ha): Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For example, in the soil respiration experiment you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, in the phone use experiment you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs. a randomized block design.
  • A between-subjects design vs. a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Phone use and sleep
  • Completely randomized design: Subjects are all randomly assigned a level of phone use using a random number generator.
  • Randomized block design: Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.

Temperature and soil respiration
  • Completely randomized design: Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area.
  • Randomized block design: Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
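For a rough sense of how these two schemes differ in practice, here is a short sketch using the phone-use example. The subject list, the age cutoff, and the blocking rule are assumptions made purely for illustration.

```python
import random

random.seed(42)

subjects = [{"id": i, "age": random.randint(18, 70)} for i in range(12)]  # hypothetical subjects
levels = ["no phone use", "low phone use", "high phone use"]

# Completely randomized design: every subject gets a level purely at random.
completely_randomized = {s["id"]: random.choice(levels) for s in subjects}

# Randomized block design: block by age first, then randomize within each block.
blocks = {
    "younger": [s for s in subjects if s["age"] < 40],
    "older": [s for s in subjects if s["age"] >= 40],
}
randomized_block = {}
for members in blocks.values():
    random.shuffle(members)
    for i, s in enumerate(members):
        randomized_block[s["id"]] = levels[i % len(levels)]  # spreads levels evenly within the block

print(completely_randomized)
print(randomized_block)
```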

Sometimes randomization isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Phone use and sleep
  • Between-subjects (independent measures) design: Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment.
  • Within-subjects (repeated measures) design: Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.

Temperature and soil respiration
  • Between-subjects (independent measures) design: Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment.
  • Within-subjects (repeated measures) design: Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

For example, to measure hours of sleep you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.


Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.



Experimental Research

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 university students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average IQs, similar average levels of motivation, similar average numbers of health problems, and so on. This matching is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment, which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as they are tested. When the procedure is computerized, the computer program often handles the random assignment.
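The procedures described above translate almost directly into code. This is a small sketch, not part of the original text, showing a coin flip for two conditions, a random integer for three, and a pre-generated sequence that each new participant simply takes in turn.

```python
import random

# Two conditions: flip a coin for each participant.
def coin_flip_assignment():
    return "A" if random.random() < 0.5 else "B"

# Three conditions: draw a random integer from 1 to 3.
def three_condition_assignment():
    return {1: "A", 2: "B", 3: "C"}[random.randint(1, 3)]

# In practice, a full sequence is usually generated ahead of time and each new
# participant is assigned to the next condition in the sequence.
expected_participants = 10
sequence = [three_condition_assignment() for _ in range(expected_participants)]
print(sequence)
```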

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence.  Table 5.2  shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Participant   Condition
4             B
5             C
6             A
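A block randomization sequence like the one in Table 5.2 can be generated with a few lines of code; this sketch shuffles each block of conditions and is offered only as an illustration of the idea.

```python
import random

def block_randomization(conditions, n_participants):
    """Build a sequence in which every condition occurs once, in random order,
    within each successive block, keeping group sizes as equal as possible."""
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomization(["A", "B", "C"], 9))  # e.g. ['B', 'A', 'C', 'C', 'B', 'A', 'A', 'C', 'B']
```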

Random assignment is not guaranteed to control all extraneous variables across conditions. The process is random, so it is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this possibility is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions, although not infallible in terms of controlling extraneous variables, is always considered a strength of a research design.

Matched Groups

An alternative to simple random assignment of participants to conditions is the use of a matched-groups design. Using this design, participants in the various conditions are matched on the dependent variable or on some extraneous variable(s) prior to the manipulation of the independent variable. This guarantees that these variables will not be confounded across the experimental conditions. For instance, if we want to determine whether expressive writing affects people’s health, then we could start by measuring various health-related variables in our prospective research participants. We could then use that information to rank-order participants according to how healthy or unhealthy they are. Next, the two healthiest participants would be randomly assigned to complete different conditions (one would be randomly assigned to the traumatic experiences writing condition and the other to the neutral writing condition). The next two healthiest participants would then be randomly assigned to complete different conditions, and so on until the two least healthy participants. This method would ensure that participants in the traumatic experiences writing condition are matched to participants in the neutral writing condition with respect to health at the beginning of the study. If, at the end of the experiment, a difference in health was detected across the two conditions, then we would know that it is due to the writing manipulation and not to pre-existing differences in health.
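The ranking-and-pairing logic described above can be sketched as follows; the baseline "health" scores are made up, and the sketch assumes an even number of participants.

```python
import random

# Hypothetical participants with baseline health scores (higher = healthier).
participants = [("P1", 82), ("P2", 75), ("P3", 91), ("P4", 68), ("P5", 88), ("P6", 79)]

ranked = sorted(participants, key=lambda p: p[1], reverse=True)  # healthiest first
traumatic, neutral = [], []
for i in range(0, len(ranked), 2):
    pair = list(ranked[i:i + 2])       # take the next-matched pair
    random.shuffle(pair)               # randomly split the pair between conditions
    traumatic.append(pair[0])
    neutral.append(pair[1])

print("Traumatic writing condition:", traumatic)
print("Neutral writing condition:", neutral)
```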

Within-Subjects Experiments

In a  within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive  and  an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book .  However, not all experiments can use a within-subjects design nor would it be desirable to do so.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in order effects. An order effect occurs when participants’ responses in the various conditions are affected by the order of conditions to which they were exposed. One type of order effect is a carryover effect. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This type of effect is called a context effect (or contrast effect). For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This knowledge could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. The best method of counterbalancing is complete counterbalancing, in which an equal number of participants complete each possible order of conditions. For example, half of the participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and the other half would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With four conditions, there would be 24 different orders; with five conditions there would be 120 possible orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus, random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of randomly assigning participants to conditions, they are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
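Complete counterbalancing is easy to set up programmatically: list every possible order of the conditions and hand them out equally often, assigning participants to orders at random. The sketch below assumes three conditions and twelve participants purely for illustration.

```python
import itertools
import random

conditions = ["A", "B", "C"]
orders = list(itertools.permutations(conditions))      # 6 possible orders for 3 conditions

n_participants = 12
assignments = [orders[i % len(orders)] for i in range(n_participants)]  # each order used equally often
random.shuffle(assignments)                             # which participant gets which order is random

for participant, order in enumerate(assignments, start=1):
    print(f"Participant {participant}: {'-'.join(order)}")
```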

A more efficient way of counterbalancing is a Latin square design, which uses a grid with as many rows and columns as there are treatments. For example, if you have four treatments, you need four orders (versions). Like a Sudoku puzzle, no treatment can repeat in a row or column. For four versions of four treatments, the Latin square design would look like this:

A B C D
B C D A
C D A B
D A B C

You can see in the diagram above that the square has been constructed so that each condition appears at each ordinal position exactly once (A appears first once, second once, third once, and fourth once). A further refinement, the balanced Latin square, also ensures that each condition immediately precedes and follows each other condition equally often. A Latin square for an experiment with 6 conditions would be 6 x 6 in dimension, one for an experiment with 8 conditions would be 8 x 8 in dimension, and so on. So while complete counterbalancing of 6 conditions would require 720 orders, a Latin square requires only 6 orders.
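The square above follows a simple cyclic construction, shifting each row one position to the left of the row before it. Here is a short sketch of that construction:

```python
def latin_square(conditions):
    """Cyclic Latin square: each row shifts the previous one by one position,
    so every condition appears exactly once in each row and each column."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

for row in latin_square(["A", "B", "C", "D"]):
    print(" ".join(row))
# Prints:
# A B C D
# B C D A
# C D A B
# D A B C
```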

Finally, when the number of conditions is large experiments can use  random counterbalancing  in which the order of the conditions is randomly determined for each participant. Using this technique every possible order of conditions is determined and then one of these orders is randomly selected for each participant. This is not as powerful a technique as complete counterbalancing or partial counterbalancing using a Latin squares design. Use of random counterbalancing will result in more random error, but if order effects are likely to be small and the number of conditions is large, this is an option available to researchers.

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this problem, he asked participants to rate two numbers on how large they were on a scale of 1-to-10, where 1 was “very very small” and 10 was “very very large”. One group of participants was asked to rate the number 9 and another group was asked to rate the number 221 (Birnbaum, 1999) [1]. Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this difference is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. 

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This possibility means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect any effect of the independent variable upon the dependent variable. Within-subjects experiments also require fewer participants than between-subjects experiments to detect an effect of the same size.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this design is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This difficulty is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often take exactly this type of mixed methods approach.

  • Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4(3), 243–249.

Between-subjects experiment: An experiment in which each participant is tested in only one condition.

Random assignment: Using a random process to decide which participants are tested in which conditions.

Block randomization: All the conditions occur once in the sequence before any of them is repeated.

Matched-groups design: An experimental design in which the participants in the various conditions are matched on the dependent variable or on some extraneous variable(s) prior to the manipulation of the independent variable.

Within-subjects experiment: An experiment in which each participant is tested under all conditions.

Order effect: An effect that occurs when participants’ responses in the various conditions are affected by the order of the conditions to which they were exposed.

Carryover effect: An effect of being tested in one condition on participants’ behavior in later conditions.

Practice effect: An effect where participants perform a task better in later conditions because they have had a chance to practice it.

Fatigue effect: An effect where participants perform a task worse in later conditions because they become tired or bored.

Context effect: An unintended influence on respondents’ answers that is related not to the content of the item but to the context in which the item appears.

Counterbalancing: Varying the order of the conditions in which participants are tested, to help solve the problem of order effects in within-subjects experiments.

Complete counterbalancing: A method in which an equal number of participants complete each possible order of conditions.

Random counterbalancing: A method in which the order of the conditions is randomly determined for each participant.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Chapter 6: Experimental Research

Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 university students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This matching is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called  random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
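
To make this concrete, here is a minimal Python sketch (not from the textbook; the condition labels and participant counts are arbitrary) of the procedures just described: a coin flip for two conditions, a random integer for three conditions, and the common practice of generating the whole assignment sequence before testing begins.

```python
import random

# Two conditions: the equivalent of flipping a coin for each participant.
def coin_flip_assignment(n_participants):
    return ["A" if random.random() < 0.5 else "B" for _ in range(n_participants)]

# Three (or more) conditions: draw a random integer for each participant,
# independently of every other participant.
def random_integer_assignment(n_participants, conditions=("A", "B", "C")):
    return [conditions[random.randint(1, len(conditions)) - 1]
            for _ in range(n_participants)]

# In practice, the full sequence is usually generated before testing begins,
# and each new participant simply receives the next condition in the sequence.
sequence = random_integer_assignment(9)
print(coin_flip_assignment(6))  # e.g., ['B', 'A', 'A', 'B', 'B', 'A']
print(sequence)                 # e.g., ['C', 'A', 'A', 'B', 'C', 'B', 'A', 'C', 'C']
```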

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence.  Table 6.2  shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions
Participant Condition
1 A
2 C
3 B
4 B
5 C
6 A
7 C
8 B
9 A
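
A block randomization sequence like the one in Table 6.2 can be generated with a short script. The following is a rough sketch (it is not the Research Randomizer, and the condition labels are placeholders) that shuffles the conditions within each block so that every condition appears once before any is repeated.

```python
import random

def block_randomization(conditions, n_participants):
    """Build an assignment sequence in which all conditions occur once,
    in a random order, before any of them is repeated."""
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)   # random order within this block
        sequence.extend(block)
    return sequence[:n_participants]

for participant, condition in enumerate(block_randomization(["A", "B", "C"], 9), start=1):
    print(participant, condition)
```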

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this possibility is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a  treatment  is any intervention meant to change people’s behaviour for the better. This  intervention  includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a  treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a  no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A  placebo  is a simulated treatment that lacks any active ingredient or element that should make it effective, and a  placebo effect  is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008) [1] .

Placebo effects are interesting in their own right (see  Note “The Powerful Placebo” ), but they also pose a serious problem for researchers who want to determine whether a treatment works.  Figure 6.2  shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in  Figure 6.2 ) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

""

Fortunately, there are several solutions to this problem. One is to include a placebo control condition , in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This  difference  is what is shown by a comparison of the two outer bars in  Figure 6.2 .

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This disclosure allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999) [2] . There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002) [3] . The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

Within-Subjects Experiments

In a within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book. However, not all experiments can use a within-subjects design, nor would it always be desirable to do so.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behaviour in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This type of effect is called a context effect. For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This knowledge could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is  counterbalancing , which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of randomly assigning to conditions, they are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
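
As a rough illustration of this idea (condition labels and the number of participants are invented), the sketch below enumerates every possible order of three conditions and then randomly pairs participants with orders while keeping the number of participants per order as equal as possible.

```python
import itertools
import random

conditions = ["A", "B", "C"]

# Every possible order of the three conditions: ABC, ACB, BAC, BCA, CAB, CBA.
all_orders = list(itertools.permutations(conditions))

def counterbalanced_orders(n_participants):
    """Assign participants to orders at random, keeping the number of
    participants per order as equal as possible."""
    repeats = -(-n_participants // len(all_orders))  # ceiling division
    pool = (all_orders * repeats)[:n_participants]
    random.shuffle(pool)   # randomize which participant gets which order
    return pool

for participant, order in enumerate(counterbalanced_orders(12), start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```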

An efficient way of counterbalancing is through a Latin square design, which uses as many different orders as there are conditions, arranged in a square with equal rows and columns. For example, if you have four treatments, you need four orders. Like a Sudoku puzzle, no treatment can repeat in a row or column. For four orders of four treatments, the Latin square design would look like:

A B C D
B C D A
C D A B
D A B C
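
The square above is a standard cyclic Latin square, built by rotating the list of conditions one position per row. The sketch below assumes that simple construction; note that a plain cyclic square does not balance which condition immediately precedes which, something a fully "balanced" Latin square would add.

```python
def latin_square(conditions):
    """Cyclic Latin square: each condition appears exactly once in every
    row (order) and every column (serial position)."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

for row in latin_square(["A", "B", "C", "D"]):
    print(" ".join(row))
# A B C D
# B C D A
# C D A B
# D A B C
```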

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 is “larger” than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this problem, he asked participants to rate how large a number was on a scale of 1 to 10, where 1 was “very very small” and 10 was “very very large.” One group of participants was asked to rate the number 9 and another group was asked to rate the number 221 (Birnbaum, 1999) [4]. Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this difference is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
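
As a rough sketch of this simultaneous approach (the stimuli and ratings below are simulated placeholders, not data from any study), the following code mixes the two types of trials into a different random order for each participant and then computes that participant's mean rating for each type.

```python
import random
from statistics import mean

# Hypothetical stimulus set: 10 attractive and 10 unattractive defendants.
stimuli = ([("attractive", i) for i in range(10)]
           + [("unattractive", i) for i in range(10)])

def run_participant():
    order = stimuli[:]
    random.shuffle(order)   # a different random presentation order per participant
    # Placeholder guilt ratings on a 1-7 scale; a real study would record judgments.
    ratings = [(condition, random.randint(1, 7)) for condition, _ in order]
    return {cond: round(mean(r for c, r in ratings if c == cond), 2)
            for cond in ("attractive", "unattractive")}

for p in range(1, 4):
    print(f"Participant {p}:", run_participant())
```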

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This possibility means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this design is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This difficulty is true for many designs that involve a treatment meant to produce long-term change in participants’ behaviour (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often take exactly this type of mixed methods approach.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.
  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g.,  dog ) are recalled better than abstract nouns (e.g.,  truth ).
  • Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.
  • Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59, 565–590.
  • Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician. Baltimore, MD: Johns Hopkins University Press.
  • Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347, 81–88.
  • Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4(3), 243–249.

Between-subjects experiment: An experiment in which each participant is tested in only one condition.

Random assignment: A method of controlling extraneous variables across conditions by using a random process to decide which participants will be tested in the different conditions.

Block randomization: All the conditions of an experiment occur once in the sequence before any of them is repeated.

Treatment: Any intervention meant to change people’s behaviour for the better.

Treatment condition: A condition in a study in which participants receive the treatment.

Control condition: A condition in a study to which the other conditions are compared; participants in this group do not receive the treatment or intervention that those in the other conditions do.

Randomized clinical trial: A type of experiment used to research the effectiveness of psychotherapies and medical treatments.

No-treatment control condition: A type of control condition in which participants receive no treatment.

Placebo: A simulated treatment that lacks any active ingredient or element that should make it effective.

Placebo effect: A positive effect of a treatment that lacks any active ingredient or element to make it effective.

Placebo control condition: A condition in which participants receive a placebo that looks like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness.

Waitlist control condition: A condition in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it.

Within-subjects experiment: An experiment in which each participant is tested under all conditions.

Carryover effect: An effect of being tested in one condition on participants’ behaviour in later conditions.

Practice effect: A carryover effect in which participants perform a task better in later conditions because they have had a chance to practice it.

Fatigue effect: A carryover effect in which participants perform a task worse in later conditions because they become tired or bored.

Context effect: A carryover effect in which being tested in one condition changes how participants perceive stimuli or interpret their task in later conditions.

Counterbalancing: Testing different participants in different orders.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Experimental Design in Psychology


This text is about doing science and the active process of reading, learning, thinking, generating ideas, designing experiments, and the logistics surrounding each step of the research process. In easy-to-read, conversational language, Kim MacLin teaches students experimental design principles and techniques using a tutorial approach in which students read, critique, and analyze over 75 actual experiments from every major area of psychology. She provides them with real-world information about how science in psychology is conducted and how they can participate.

Recognizing that students come to an experimental design course with their own interests and perspectives, MacLin covers many subdisciplines of psychology throughout the text, including IO psychology, child psychology, social psychology, behavioral psychology, cognitive psychology, clinical psychology, health psychology, educational/school psychology, legal psychology, and personality psychology, among others. Part I of the text is content oriented and provides an overview of the principles of experimental design. Part II contains annotated research articles for students to read and analyze. Classic articles have been retained and 11 new ones have been added, featuring contemporary case studies, information on the Open Science movement, expanded coverage on ethics in research, and a greater focus on becoming a better writer, clarity and precision in writing, and reducing bias in language.

This edition is up to date with the latest APA Publication Manual (7th edition) and includes an overview of the updated bias-free language guidelines, the use of singular "they," the new ethical compliance checklist, and other key changes in APA style. This text is essential reading for students and researchers interested in and studying experimental design in psychology.

TABLE OF CONTENTS

Part I: Basic Principles in Experimental Design
  • Chapter 1: An Introduction to Scientific Inquiry
  • Chapter 2: The Psychological Literature
  • Chapter 3: Basic Experimental Design in Psychology
  • Chapter 4: Advanced Design Techniques
  • Chapter 5: Using Experimental Design to Control Variables
  • Chapter 6: Control of Subject Variables
  • Chapter 7: Design Critiques
  • Chapter 8: Ethics of Experimental Research
  • Chapter 9: The Research Process

Part II: Analysis of Experiments
  • Chapter 10: The Look of Love
  • Chapter 11: Emotions and Chronic Fatigue
  • Chapter 12: Temperature and Loneliness
  • Chapter 13: Violent Media
  • Chapter 14: Aggression and Schizophrenia
  • Chapter 15: Workplace Deviance
  • Chapter 16: Controlling Racial Prejudice
  • Chapter 17: Children’s Reasoning
  • Chapter 18: False Confessions
  • Chapter 19: Androgens and Toy Preference
  • Chapter 20: Language-Trained Chimpanzees
  • Chapter 21: Peer Excellence and Quitting
  • Chapter 22: Remembering and Eyes
  • Chapter 23: Non-Suicidal Self-Injury
  • Chapter 24: Police Responses to Criminal Suspects
  • Chapter 25: Sleep Learning
  • A Final, Final Note


Particularly Exciting Experiments in Psychology

Summaries of research trends in experimental psychology

Particularly Exciting Experiments in Psychology™ (PeePs) is a free summary of ongoing research trends common to six APA journals that focus on experimental psychology: Journal of Experimental Psychology: Human Perception and Performance; Journal of Experimental Psychology: Learning, Memory, and Cognition; Journal of Comparative Psychology; Journal of Experimental Psychology: General; Journal of Experimental Psychology: Animal Learning and Cognition; and Behavioral Neuroscience.

Recent featured topics include:

  • Out-Group Face Recognition: studies investigating how face recognition is influenced by same versus other race, age, or gender.
  • Risky Decision-Making in Adolescents: two studies examining risk-taking behavior in adolescents versus adults.
  • Attention and Autism Spectrum Disorder: studies investigating the nature of attention deficits among people with Autism Spectrum Disorder.
  • Wayfinding Acquisition and Transfer: navigation can be based on learning specific paths and routes or on the overall map-like layout (configuration) of the environment.
  • Attention to Emotion: attention is biased toward negative emotional expressions.

Experimental Method In Psychology

Saul McLeod, PhD, Editor-in-Chief for Simply Psychology (BSc (Hons) Psychology, MRes, PhD, University of Manchester), is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc, Associate Editor for Simply Psychology (BSc (Hons) Psychology, MSc Psychology of Education), is a writer who has previously worked in the healthcare and educational sectors.

The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into controlled and experimental groups .

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective; the researcher’s views and opinions should not affect a study’s results. This makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and  Loftus and Palmer’s car crash study .

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength : behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength : behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


15 Experimental Design Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Experimental design involves manipulating an independent variable and measuring its effect on a dependent variable. It is a central feature of the scientific method.

A simple example of an experimental design is a clinical trial, where research participants are placed into control and treatment groups in order to determine the degree to which an intervention in the treatment group is effective.

There are three categories of experimental design . They are:

  • Pre-Experimental Design: Testing the effects of the independent variable on a single participant or a small group of participants (e.g. a case study).
  • Quasi-Experimental Design: Testing the effects of the independent variable on a group of participants who aren’t randomly assigned to treatment and control groups (e.g. purposive sampling).
  • True Experimental Design: Testing the effects of the independent variable on a group of participants who are randomly assigned to treatment and control groups in order to infer causality (e.g. clinical trials).

A good research student can look at a design’s methodology and correctly categorize it. Below are some typical examples of experimental designs, with their type indicated.

Experimental Design Examples

The following are examples of experimental design (with their type indicated).

1. Action Research in the Classroom

Type: Pre-Experimental Design

A teacher wants to know if a small group activity will help students learn how to conduct a survey. So, they test the activity out on a few of their classes and make careful observations regarding the outcome.

The teacher might observe that the students respond well to the activity and seem to be learning the material quickly.

However, because there was no comparison group of students that learned how to do a survey with a different methodology, the teacher cannot be certain that the activity is actually the best method for teaching that subject.

2. Study on the Impact of an Advertisement

An advertising firm has assigned two of their best staff to develop a quirky ad about eating a brand’s new breakfast product.

The team puts together an unusual skit that involves characters enjoying the breakfast while engaged in silly gestures and zany background music. The ad agency doesn’t want to spend a great deal of money on the ad just yet, so the commercial is shot with a low budget. The firm then shows the ad to a small group of people just to see their reactions.

Afterwards, they determine that the ad had a strong impact on viewers, so they move forward with a much larger budget.

3. Case Study

A medical doctor has a hunch that an old treatment regimen might be effective in treating a rare illness.

The treatment has never been used in this manner before. So, the doctor applies the treatment to two of their patients with the illness. After several weeks, the results seem to indicate that the treatment is not causing any change in the illness. The doctor concludes that there is no need to continue the treatment or conduct a larger study with a control condition.

4. Fertilizer and Plant Growth Study

An agricultural farmer is exploring different combinations of nutrients on plant growth, so she does a small experiment.

Instead of spending a lot of time and money applying the different mixes to acres of land and waiting several months to see the results, she decides to apply the fertilizer to some small plants in the lab.

After several weeks, it appears that the plants are responding well. They are growing rapidly and producing dense branching. She shows the plants to her colleagues and they all agree that further testing is needed under better controlled conditions .

5. Mood States Study

A team of psychologists is interested in studying how mood affects altruistic behavior. They are undecided however, on how to put the research participants in a bad mood, so they try a few pilot studies out.

They try one suggestion and make a 3-minute video that shows sad scenes from famous heart-wrenching movies.

They then recruit a few people to watch the clips and measure their mood states afterwards.

The results indicate that people were put in a negative mood, but since there was no control group, the researchers cannot be 100% confident in the clip’s effectiveness.

6. Math Games and Learning Study

Type: Quasi-Experimental Design

Two teachers have developed a set of math games that they think will make learning math more enjoyable for their students. They decide to test out the games on their classes.

So, for two weeks, one teacher has all of her students play the math games. The other teacher uses the standard teaching techniques. At the end of the two weeks, all students take the same math test. The results indicate that students that played the math games did better on the test.

Although the teachers would like to say the games were the cause of the improved performance, they cannot be 100% sure because the study lacked random assignment . There are many other differences between the groups that played the games and those that did not.

Learn More: Random Assignment Examples

7. Economic Impact of Policy

An economic policy institute has decided to test the effectiveness of a new policy on the development of small business. The institute identifies two cities in a third-world country for testing.

The two cities are similar in terms of size, economic output, and other characteristics. The city in which the new policy was implemented showed a much higher growth of small businesses than the other city.

Although the two cities were similar in many ways, the researchers must be cautious in their conclusions. There may be other differences between the two cities, apart from the policy, that affected small business growth.

8. Parenting Styles and Academic Performance

Psychologists want to understand how parenting style affects children’s academic performance.

So, they identify a large group of parents who have one of four parenting styles: authoritarian, authoritative, permissive, or neglectful. The researchers then compare the grades of each group and discover that children raised with the authoritative parenting style had better grades than the other three groups.

Although these results may seem convincing, it turns out that parents who use the authoritative parenting style also tend to have higher socioeconomic status and can afford to provide their children with more intellectually enriching activities, such as summer STEAM camps.

9. Movies and Donations Study

Will the type of movie a person watches affect the likelihood that they donate to a charitable cause? To answer this question, a researcher decides to solicit donations at the exit point of a large theatre.

He chooses to study two types of movies: action-hero and murder mystery. After collecting donations for one month, he tallies the results. Patrons that watched the action-hero movie donated more than those that watched the murder mystery. Can you think of why these results could be due to something other than the movie?

10. Gender and Mindfulness Apps Study

Researchers decide to conduct a study on whether men or women benefit from mindfulness the most. So, they recruit office workers in large corporations at all levels of management.

Then, they divide the research sample up into males and females and ask the participants to use a mindfulness app once each day for at least 15 minutes.

At the end of three weeks, the researchers give all the participants a questionnaire that measures stress and also take swabs from their saliva to measure stress hormones.

The results indicate the women responded much better to the apps than males and showed lower stress levels on both measures.

Unfortunately, it is difficult to conclude that women respond to apps better than men because the researchers could not randomly assign participants to gender. This means that there may be extraneous variables that are causing the results.

11. Eyewitness Testimony Study

Type: True Experimental Design

To study how leading questions affect the memories of eyewitnesses and lead to retroactive interference, Loftus and Palmer (1974) conducted a simple experiment consistent with true experimental design.

Research participants all watched the same short video of two cars having an accident. Each was then randomly assigned to be asked one of two versions of a question regarding the accident.

Half of the participants were asked the question “How fast were the two cars going when they smashed into each other?” and the other half were asked “How fast were the two cars going when they contacted each other?”

Participants’ estimates were affected by the wording of the question. Participants that responded to the question with the word “smashed” gave much higher estimates than participants that responded to the word “contacted.”

12. Sports Nutrition Bars Study

A company wants to test the effects of their sports nutrition bars. So, they recruited students on a college campus to participate in their study. The students were randomly assigned to either the treatment condition or control condition.

Participants in the treatment condition ate two nutrition bars. Participants in the control condition ate two similar looking bars that tasted nearly identical, but offered no nutritional value.

One hour after consuming the bars, participants ran on a treadmill at a moderate pace for 15 minutes. The researchers recorded their speed, breathing rates, and level of exhaustion.

The results indicated that participants that ate the nutrition bars ran faster, breathed more easily, and reported feeling less exhausted than participants that ate the non-nutritious bar.

13. Clinical Trials

Medical researchers often use true experiments to assess the effectiveness of various treatment regimens. For a simplified example: people from the population are randomly selected to participate in a study on the effects of a medication on heart disease.

Participants are randomly assigned to either receive the medication or nothing at all. Three months later, all participants are contacted and they are given a full battery of heart disease tests.

The results indicate that participants that received the medication had significantly lower levels of heart disease than participants that received no medication.

14. Leadership Training Study

A large corporation wants to improve the leadership skills of its mid-level managers. The HR department has developed two programs, one online and the other in-person in small classes.

HR randomly selects 120 employees to participate and then randomly assigns them to one of three conditions: one-third are assigned to the online program, one-third to the in-class version, and one-third are put on a waiting list.

The training lasts for 6 weeks. Four months later, supervisors of the participants are asked to rate their staff in terms of leadership potential. The supervisors are not informed about which of their staff participated in the program.

The results indicated that the in-person participants received the highest ratings from their supervisors. The online class participants came in second, followed by those on the waiting list.

15. Reading Comprehension and Lighting Study

Different wavelengths of light may affect cognitive processing. To put this hypothesis to the test, a researcher randomly assigned students on a college campus to read a history chapter in one of three lighting conditions: natural sunlight, artificial yellow light, and standard fluorescent light.

At the end of the chapter all students took the same exam. The researcher then compared the scores on the exam for students in each condition. The results revealed that natural sunlight produced the best test scores, followed by yellow light and fluorescent light.

Therefore, the researcher concludes that natural sunlight improves reading comprehension.

See Also: Experimental Study vs Observational Study

Experimental design is a central feature of scientific research. When a true experimental design is used, causality can be inferred, which allows researchers to provide evidence that an independent variable affects a dependent variable. This is necessary in just about every field of research, and especially in the medical sciences.


Lesson Idea: Experimental Designs

Travis Dixon February 15, 2018 Internal Assessment (IB) , Research Methodology

The purpose of this activity is to help you learn about design choices experimenters have and to think about the benefits and limitations of using each design. You will also learn about terminology for extraneous variables and other controls. It is designed to be studied during the Quantitative Methods unit (Chapter 6, 6.1b). It should take about 15-20 minutes.

Key Questions:

  • What are common experimental designs and controls?
  • How and why are controls used in experimental research?
  • When might it be impossible to use some of these controls?
  • Textbook lesson 6.1b, pg 312-313

Read the following summaries of the research aims below and, working with a group, decide which experimental design you would choose and why.

Here are your three choices for experimental design:

  • Repeated measures design: for example, in a study on the effects of testosterone on brain activity, all participants would have injections of testosterone and their brain activity measured. They would all then also have a placebo treatment on a different day. Thus, the procedures are repeated on each participant.
  • Independent samples design: for example, in the testosterone experiment mentioned, you could divide your sample in half; one group would receive an injection of testosterone and the other group would receive an injection of a placebo. Thus, the groups are independent of one another.
  • Matched pairs design: for example, perhaps you have reason to suspect that “aggressiveness” might be a variable other than testosterone that could affect brain function in your experiment. You could conduct a number of tests to give each participant a score for their level of aggressiveness. You would then match them with another participant who had the closest score, and then one would receive the testosterone treatment and the other the placebo.

Study Summaries

Read the following summaries and decide which experimental design you would choose and explain why. More information on these studies can be found in the textbook or on this blog if you are not familiar with them.

#1 Note Taking

  • The aim of this study is to see which type of note-taking is more effective for long-term memory of information, taking notes with pen and paper or on a laptop. You have a group of high school participants as your sample.

#2 Misinformation effect

  • This is a replication of Loftus and Palmer’s (1974) experiment on the effect of leading questions on the speed estimates of cars in an accident. You want to find out if the verb used in a critical question affects the speed your participants estimate the car to be travelling at. You have a group of students about to sit their first driving tests as your participants.

#3 A Bobo doll experiment

  • This is a replication of Bandura’s famous experiments that test the effects of observing violent behaviour on the behaviour of children. You have participants aged 3-6 years old from a nearby kindergarten. You’re going to have two conditions in your experiment: observing a violent adult and a control condition where the kids don’t observe anyone at all.

#4 Laundry schema study

  • You’ve decided to replicate Bransford and Johnson’s classic (1972) experiment on the effects of prior knowledge on comprehension. You have a group of international high school students from a range of nationalities as your participants. You’re going to have two conditions in your experiment – title before and no title and you want to see which one will have a bigger effect on the comprehension of the passage.

#5 Length of short-term memory

  • This study is a replication of Peterson and Peterson’s (1959) study that aimed to test the duration of short-term memory. Participants are asked to memorize trigrams (a group of three consonants, e.g. MGT, PLR, KFB, etc.). Participants are read three trigrams and in one condition they repeat them straight away with no delay. In another condition they repeat after 18 seconds delay. In the original, after 18 seconds there’s almost no memory of the trigrams and you want to see if these results can be replicated.

One method of choosing a design is to first think about possible extraneous variables that might confound your results. Once you have these identified, you can then decide which design might be best for controlling for this variable.

Another approach could be to choose one of the design types first and then run through the experiment in your mind and think about potential problems with the use of this method.

Fast Finishers Extension #1

If you have chosen your design types appropriately, there’s probably a particular term for the extraneous variable you have controlled for. Using the textbook (pg. 312-313), see which of these variables you have controlled for with your choices:

  • Participant expectancy effect
  • Order effects
  • Participant variability

Fast Finishers Extension #2

Using the same section of the textbook, how could you use these additional controls in one or more of the experiments above?

  • Counter-balancing
  • Single-blind/double-blind design

There are lots of new terms in this lesson, so do not feel overwhelmed if you cannot grasp them all right away. But do take careful note of this section of the textbook, because it will come in very handy when you are designing your IA and writing up your report.

Think you know all the terminology in this lesson? Try this crossword puzzle to see how much you’ve learned.

Travis Dixon

Travis Dixon is an IB Psychology teacher, author, workshop leader, examiner and IA moderator.



10 great psychology experiments

by Chris Woodford. Last updated: December 31, 2021.

Stare in the mirror and you'll find a strong sense of self staring back. Every one of us thinks we have a good idea who we are and what we're about—how we laugh and live and love, and all the complicated rest. But if you're a student of psychology—the fascinating science of human behaviour—you may well stare at your reflection with a wary eye. Because you'll know already that the ideas you have about yourself and other people can be very wide of the mark.

You might think you can learn a lot about human behaviour simply by observing yourself, but psychologists know that isn't really true. "Introspection" (thinking about yourself) has long been considered a suspect source of psychological research, even though one of the founding fathers of the science, William James, gained many important insights with its help. [1] Fortunately, there are thousands of rigorous experiments you can study that will do the job much more objectively and scientifically. And here's a quick selection of 10 of my favourites.


1: Are you really paying attention? (Simons & Chabris, 1999)

“ ...our findings suggest that unexpected events are often overlooked... ” Simons & Chabris, 1999

You can read a book or you can listen to the radio, but can you do both at once? Maybe you can listen to a soft-rock album you've heard hundreds of times before and simultaneously plod your way through an undemanding crime novel, but how about listening to a complex political debate while trying to revise for a politics exam? What about listening to a German radio station while reading a French novel? What about mixing things up a bit more? You can iron your clothes while listening to the radio, no problem. But how about trying to follow (and visualize) the radio commentary on a football game while driving along a highway you've never been on before? That's much more challenging, because both things call on your brain's ability to process spatial information and one tends to interfere with the other. (There are very good reasons why it's unwise to use a cellphone while you're driving—and in some countries it's illegal.)

Generally speaking, we can do—and pay attention to—only so many things at once. That's no big surprise. However human attention works (and there are many theories about that), it's obviously not unlimited. What is surprising is how we pay attention to some things, in some situations, but not others. Psychologists have long studied something they call the cocktail-party effect. If you're at a noisy party, you can selectively switch your attention to any of the voices around you, just like tuning in a radio, while ignoring all the rest. Even more striking, if you're listening to one person and someone else happens to say your name, your ears will prick up and your attention will instantly switch to the other person instead. So your brain must be aware of much more than you think, even if it's not giving everything its full attention, all the time. [2]

Photo: Would you spot a gorilla if it were in plain sight? Picture by Richard Ruggiero courtesy of US Fish and Wildlife Service National Digital Library .

Sometimes, when we're really paying attention, we aren't easily distracted, even by drastic changes we ought to notice. A particularly striking demonstration of this comes from the work of Daniel Simons and Christopher Chabris (1999), who built on earlier work by the esteemed cognitive psychologist Ulric Neisser and colleagues. [3] Simons and Chabris made a video of people in black or white shirts throwing a basketball back and forth and asked viewers to count the number of passes made by the white-shirted players. You can watch it here .

Half the viewers failed to notice something else that happens at the same time (the gorilla-suited person wandering across the set)—an extraordinary example of something psychologists call inattentional blindness (in plain English: failure to see something you really should have spotted). A related phenomenon called change blindness explains why we generally fail to notice things like glaring continuity errors in movies: we don't expect to see them—and so we don't. Whether experiments like "the invisible gorilla" allow us to conclude broader things about human nature is a moot point, but it's certainly fair to say (as Simons and Chabris argue) that they reveal "critically important limitations of our cognitive abilities." None of us are as smart as we like to think, but just because we fail and fall short that doesn't make us bad people; we'd do a lot better if we understood and recognized our shortcomings. [4]

2: Are you trying too hard? (Aronson, 1966)

No-one likes a smart-aleck, so the saying goes, but just how true is that? Even if you really hate someone who has everything—the good looks, the great house, the well-paid job—it turns out that there are certain circumstances in which you'll like them a whole lot more: if they suddenly make a stupid mistake. This not-entirely-surprising bit of psychology mirrors everyday experience: we like our fellow humans slightly flawed, down-to-earth, and somewhat relatable. Known as the pratfall effect, it was famously demonstrated back in 1966 by social psychologist Elliot Aronson. [5]

“ ...a superior person may be viewed as superhuman and, therefore, distant; a blunder tends to humanize him and, consequently, increases his attractiveness. ” Aronson et al, 1966

Aronson made taped audio recordings of two very different people talking about themselves and answering 50 difficult questions, which were supposedly part of an interview for a college quiz team. One person was very superior, got almost all the questions right, and revealed (in passing) that they were generally excellent at what they did (an honors student, yearbook editor, and member of the college track team). The other person was much more mediocre, got many questions wrong, and revealed (in passing) that they were much more of a plodder (average grades in high school, proofreader of the yearbook, and failed to make the track team). In the experiment, "subjects" (that's what psychologists call the people who take part in their trials) had to listen to the recordings of the two people and rate them on various things, including their likeability. But there was a twist. In some of the taped interviews, an extra bit (the "pratfall") was added at the end where either the superior person or the mediocrity suddenly shouted "Oh my goodness I've spilled coffee all over my new suit", accompanied by the sounds of a clattering chair and general chaos (noises that were identically spliced onto both tapes).

Artwork: Mistakes make you more likeable—if you're considered competent to begin with.

What Aronson found was that the superior person was rated more attractive with the pratfall at the end of their interview; the inferior person, less so. In other words, a pratfall can really work in your favor, but only if you're considered halfway competent to begin with; if not, it works against you. Knowingly or otherwise, smart celebrities and politicians often appear to take advantage of this to improve their popularity.

3: Is the past a foreign country? (Loftus and Palmer, 1974)

Attention isn't the only thing that lets us down; memory is hugely fallible too—and it's one of the strangest and most complex things psychologists study. Can you remember where you were when the Twin Towers fell in 2001 or (if you're much older and willing to go back further) when JFK was shot in Dallas in 1963? You might remember a girl you were in kindergarten with 20 years ago, but perhaps you can't remember the guy you met last week, last night, or even 10 minutes ago. What about the so-called tip-of-the-tongue phenomenon where you're certain you know a word or fact or name, and you can even describe what it's like ("It's a really short word, maybe beginning with 'F'..."), but you can't bring it instantly to mind? [6] How about the madeleine effect, where the taste or smell of something suddenly sets off an incredibly powerful involuntary memory? What about déjà-vu: a jarring true-false memory—the strong sense something is very familiar when it can't possibly be? [7] How about the curious split between short- and long-term memories or between "procedural memory" (knowing how to do things or follow instructions) and "declarative memory" (knowing facts), which breaks down further into "semantic memory" (general knowledge about things) and "episodic memory" (specific things that have happened to you)? What about the many flavors of selective memory failure, such as seniors who can remember the name of a high-school sweetheart but can't recall their own name? Or sudden episodes of amnesia? Human memory is a massive—and massively complex—subject. And any comprehensive theory of it needs to be able to explain a lot.

“ ...the questions asked subsequent to an event can cause a reconstruction in one's memory of that event.. ” Loftus & Palmer, 1974

Much of the time, poor memory is just a nuisance and we all have tricks for working around it—from slapping Post-It notes on the mirror to setting reminders on our phones. But there's one situation where poor memories can be a matter of life or death: in criminal investigation and court testimony. Suppose you give evidence in a trial based on events you think you remember that happened years ago—and suppose your evidence helps to convict a "murderer" who's subsequently sentenced to death. But what if your memory was quite wrong and the person was innocent?

One of the most famous studies of just how flawed our memories can be was made by psychologists Elizabeth Loftus and John Palmer in 1974. [8] After showing their subjects footage of a car accident, they tested their memories some time later by asking "About how fast were the cars going when they smashed into each other?" or using "collided," "bumped," "contacted," or "hit" in place of smashed. Those asked the first—leading—question reported higher speeds. Later, the subjects were asked if they'd seen any broken glass and those asked the leading question ("smashed") were much more likely to say "yes" even though there was no broken glass in the film. So our memories are much more fluid, far less fixed, than we suppose.

Artwork: The words we use to probe our memories can affect the memories we think we have.

This classic experiment very powerfully illustrates the potential unreliability of eyewitness testimony in criminal investigations, but the work of Elizabeth Loftus on so-called "false memory syndrome" has had far-reaching impacts in provocative areas, such as people's alleged recollections of alien abduction , multiple personality disorder , and memories of childhood abuse . Ultimately, what it demonstrates is that memory is fallible and remembering is sometimes less of a mechanical activity (pulling a dusty book from long-neglected library shelf) than a creative and recreative one (rewriting the book partly or completely to compensate for the fact that the print has faded with time). [9]

4. Do you cave in to peer pressure? (Milgram, 1963)

Experiments like the three we've considered so far might cast an uncomfortable shadow, yet most of us are still convinced we're rational, reasonable people, most of the time. Asked to predict how we'd behave in any given situation, we'd be able to give a pretty good account of ourselves—or so you might think. Consider the question of whether you'd ever, under any circumstances, torture another human being and you'd probably be appalled at the prospect. "Of course not!" And yet, as Yale University's Stanley Milgram famously demonstrated in the 1960s and 1970s, you'd probably be mistaken. [10]

Artwork: The Milgram experiment: a shocking turn of events.

Milgram's experiments on obedience to authority have been widely discussed and offered as explanations for all kinds of things, from minor everyday cruelty to the appalling catalogue of repugnant human behavior witnessed during the Nazi Holocaust. Today, they're generally considered unethical because they're deceptive and could, potentially, damage the mental health of people taking part in them (a claim Milgram himself investigated and refuted). [26]

“ ...the conflict stems from the opposition of two deeply ingrained behavior dispositions: first, the disposition not to harm other people, and second, the tendency to obey those whom we perceive to be legitimate authorities. ” Milgram, 1963

Though Milgram's studies have not been repeated, related experiments have sought to shed more light on why people find themselves participating in quite disturbing forms of behavior. One explanation is that, like willing actors, we simply assume the roles we're given and play our parts well. In 1972, Stanford University's Philip Zimbardo set up an entire "pretend prison" and assigned his subjects roles as prisoners or guards. Quite quickly, the guards went beyond simple play acting and actually took on the roles of sadistic bullies, exposing the prisoners to all kinds of rough and degrading treatment, while the prisoners resigned themselves to their fate or took on the roles of rebels. [11] More recently, Zimbardo has argued that his work sheds light on atrocities such as the torture at the Abu Ghraib prison in 2004, when US army guards were found to have tortured and degraded Iraqi prisoners under their guard in truly shocking ways.

5. Are you a slave to pleasure? (Olds and Milner, 1954)

Why do we do the things we do? Why do we eat or drink, play football, watch TV... or do the legions of other things we feel compelled to do each day? How, when we take these sorts of behaviors to extremes, do we become addicted to things like drink and drugs, gambling or sex? Are they ordinary pleasures taken to extremes or something altogether different? Obsessions, compulsions, and addictive behaviors are complex and very difficult to treat, but what causes them... and how do we treat them?

Artwork: A rat will happily stimulate the "pleasure centre" in its brain.

“ It appears that motivation, like sensation, has local centers in the brain. ” James Olds, Scientific American, 1956.

The Olds and Milner ICSS (intracranial self-stimulation) experiment was widely interpreted as the discovery of a "pleasure center" in the brain, but we have to take that suggestion with quite a pinch of salt. It's fascinating, but also quite reductively depressing, to imagine that a lot of the things humans feel compelled to do each day—from work and eating to sport and sex—are motivated by nothing more than the need to scratch a deep neural itch: to repeatedly stimulate a "hungry" part of our brain. While it offers important insights into addictive behavior, the idea that all of our complex human pleasure-seeking stems from something so crudely behavioral—stimulus and reward—seems absurdly over-simple. It's fascinating to search for references to Olds and Milner's work and see it quoted in books with such titles as Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich . But it's quite a stretch from a rat pushing on a pedal to making arguments of that kind. [14]

6: Are you asleep at the wheel? (Libet, 1983)

Being a conscious, active human being is a bit like driving a car: looking out through your eyes is like staring through a windshield, seeing (perceiving) things and responding to them, as they see and respond to you. Consciousness, in other words, feels like a "top-down" thing; like the driver of a car, we're always in control, willing the world to bend to our way, making things happen according to ideas our brains devise beforehand. But how true is that really? If you are a driver, you'll know that much of what you do depends on a kind of mental "auto-pilot" or cruise control. As a practiced driver, you barely have to think about what you're doing at all—it's completely automatic. We're only really aware of just how effort-full and attentive drivers need to be when we first start learning. We soon learn to do most of the things involved in driving without being consciously aware of them at all—and that's true of other things too, not just driving a car. Seen this way, driving seems impressive—but if you think again about the Simons and Chabris gorilla experiment, and consider its implications for sitting behind the wheel, you might want to take the bus in future.

Still, you might think, you're always, ultimately, in charge and in control: you're the driver , not the passenger, even if you are sometimes dozy at the wheel. And yet, a remarkable series of experiments by Benjamin Libet, in the 1980s, appeared to demonstrate something entirely different: far from consciously making things happen, sometimes we become conscious of what we've done after the fact. In Libet's experiments, he made people watch a clock and move their wrist when it reached a certain time. But their brain activity (which he was also monitoring) showed a peak a fraction of a second before their conscious decision to move, suggesting, at least in this case, that consciousness is the effect, not the cause. [15]

“ Many of our mental functions are carried out unconsciously , without conscious awareness. ” Benjamin Libet, Mind Time, 2004, p.2.

On the face of it, Libet's work seems to have extraordinary implications for the study of consciousness. It's almost like we're zombies sitting at the wheel of a self-driving car. Is the whole idea of conscious free will just an illusion, an accidental artefact of knee-jerk behavior that happens much more automatically? You can certainly try to argue it that way, as many people have. On the other hand, it's important to remember that this is a highly constrained laboratory experiment and you can't automatically extrapolate from that to more general human behavior. (Apart from anything else, the methodology of Libet's experiments has been questioned. [16] ) While you could try to argue that a complex decision (to buy a house or quit your job) is made unconsciously or subconsciously in whatever manner and we rationalize or become conscious of it after the fact, experiments like Libet's aren't offering evidence for that. Sometimes, it's too much of a stretch to argue from simple, highly contrived, very abstract laboratory experiments to bigger, bolder, and more general everyday behavior.

On the other hand, it's quite likely that some behavior that we believe to be consciously pre-determined is anything but, as William James (and, independently, Carl Lange) reasoned way back in the late 19th century. In a famous example James offered, we assume we run from a scary bear because we see the bear and feel afraid. But James believed the reasoning here is back to front: we see the bear, run, and only feel afraid because we find ourselves running from a bear! (How we arrive at emotions is a whole huge topic of its own. The James-Lange theory eventually spawned more developed theories by Walter Cannon and Philip Bard, who believed emotions and their causes happen simultaneously, and Stanley Schachter and Jerome Singer, who believed emotions stem both from our bodily reactions and how we think about them.) [17]

7: Why are you so attached? (Harlow et al, 1971)

“ Love is a wondrous state, deep, tender, and rewarding. Because of its intimate and personal nature it is regarded by some as an improper topic for experimental research. ” Harry Harlow, 1958.

Artwork: Animals crave proper comfort, not just the simple "reduction" of "drives" like hunger. Photo courtesy of NASA and Wikimedia Commons .

There's an obvious evolutionary reason why we get attached to other people: one way or another, it improves our chances of surviving, mating, and passing on our genes to future generations. Attachment begins at birth, but our attachment to our mothers isn't motivated purely by a simple need for nourishment (through breastfeeding or whatever it might be). One of the most famous psychological experiments of all time demonstrated this back in the early 1970s. The University of Wisconsin's Harry Harlow and his wife Margaret tested what happened when newborn baby monkeys were separated from their mothers and "raised," instead by crude, mechanical surrogates. In particular, Harlow looked at how the monkeys behaved toward two rival "mothers", one with a wooden head and a wire body that had a feeding bottle attached, and one made from soft, warm, comforting cloth. Perhaps surprisingly, the babies preferred the cloth mother. Even when they ventured over to the wire mother for food, they soon returned to the cloth mother for comfort and reassurance. [18]

The fascinating thing about this study is that it suggests the need for comfort is at least as important as the (more obviously fundamental) need for nourishment, so busting the cold, harsh claims of hard-wired behaviorists, who believed our attachment to our mothers was all about mechanistic "drive reduction," or knee-jerk stimulus and response. Ultimately, we love the loving—Harlow's "contact comfort"—and perhaps things like habits, routines, and traditions can all be interpreted in this light.

8: Are you as rational as you think? (Wason, 1966)

“ ... I have concentrated mainly on the mistakes, assumptions, and stereotyped behavior which occur when people have to reason about abstract material. But... we seldom do reason about abstract material. ” Peter Wason, 1966.

Like everyone else, you probably have your moments of wild, reckless abandon, but faced with the task of making a calm, rational judgment about something, how well do you think you'd do? It's not a question of what you know or how clever you are, but how well you can make a judgment or a decision. Suppose, for example, you had to hire the best applicant for a job based on a pile of résumés. Or what if you had to find a new apartment by the end of the month and you had a limited selection to pick from? What if you were on the jury of a trial and had to sit through weeks of evidence to reach a verdict? How well do you think you'd do? Probably, given all the information, you feel you'd make a fair job of it: you have faith in your judgment. And yet, decades of research into human decision-making suggests you'll massively overestimate your own ability. Overconfident and under-informed, you'll jump to hasty conclusions, swayed by glaring biases you don't even notice. In the words of Daniel Kahneman, probably the world's leading expert on human rationality, your brain opts to think "fast" (reaches a quick and dirty decision) when sometimes it'd be better off thinking "slow" (reaching a more considered verdict). [25]

A classic demonstration of how poorly we think was devised by British psychologist Peter Wason in 1966. The experimenter puts a set of four white cards in front of you, each of which has a letter on one side and a number on the other. Then they tell you that if a card has a vowel on one side, it has an even number on the other side. Finally, they ask you which cards you need to turn over to verify if that statement is true. Suppose the cards show A, D, 4, and 7. The obvious answer, offered by most people, is A and 4 or just A. But the correct answer is actually A and 7. Once you've turned over A, it serves no purpose to turn over D or 4: turning over D tells us nothing, because it's not a vowel, while turning over 4 doesn't provide extra proof or disprove the statement. By turning over 7, however, you can potentially disprove the theory if you reveal a vowel on the other side of it. Wason's four-card test demonstrates what's known as "confirmation bias"—our failure to seek out evidence that contradicts things we believe. [19]

Artwork: Peter Wason's four-card selection test. If a card has a vowel on one side, it has an even number on the other. Which cards do you need to turn over to confirm this?
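To make the logic concrete, here is a minimal sketch (a Python illustration of my own, not part of Wason's procedure) that enumerates what could be hiding on the back of each card; the card set and helper names are assumptions for the example.

```python
# Illustrative sketch (not from Wason's paper): which cards could possibly
# falsify the rule "if a card has a vowel on one side, it has an even number
# on the other side"?

VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def could_falsify(visible_face, possible_hidden_faces):
    """A card can falsify the rule only if some hidden face would pair a
    vowel with an odd number."""
    for hidden in possible_hidden_faces:
        letter_side = visible_face if visible_face.isalpha() else hidden
        number_side = hidden if visible_face.isalpha() else visible_face
        if is_vowel(letter_side) and not is_even_number(number_side):
            return True
    return False

# Each card has a letter on one side and a number on the other, so the hidden
# face of a letter card is some number, and vice versa.
hidden_numbers = [str(n) for n in range(10)]
hidden_letters = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

for card in ["A", "D", "4", "7"]:
    hidden = hidden_numbers if card.isalpha() else hidden_letters
    print(card, "worth turning over:", could_falsify(card, hidden))
```

The check prints True only for A and 7, because those are the only cards whose hidden side could pair a vowel with an odd number—the one combination that would falsify the rule.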

As with the other experiments here, you could extrapolate and argue that Wason's abstract reasoning test is echoed by bigger and wider failings we see in ourselves. Perhaps it goes some way to explaining things like online "echo chambers" and "filter bubbles", where we tend to watch, read, and listen to things that reinforce things we already believe—intellectual cloth mothers, you might call them—rather than challenging those comfortable beliefs or putting them to the test. But, again, a simple laboratory test is exactly what it is: a simple, laboratory test. And other, broader personal or social conclusions don't automatically follow on from it. (Indeed, you might recognize the tendency to argue that way as a confirmation bias all of its own.)

9: How do you learn things? (Pavlov, 1890s)

Learning might seem a very conscious and deliberate thing, especially if you hate the subject you're studying or merely sitting in school. What could be worse than "rote" learning your times table, practising French vocabulary, or revising for an exam? We also learn a lot of things less consciously—sometimes without any conscious effort at all. Animals (other than humans) don't sit in classrooms all day but they learn plenty of things. Even one of the simplest (a sea-slug called Aplysia californica ) will learn to withdraw its syphon and gill if you give it an electric shock, as Eric Kandel and James Schwartz famously discovered. [20]

“ The animal must respond to changes in the environment in such a manner that its responsive activity is directed toward the preservation of its existence. ” Ivan Pavlov, 1926.

So how does learning come about? At its most basic, it involves making connections or "associations" between things, something that was probed by Russian psychologist Ivan Pavlov in perhaps the most famous psychology experiment of all time. Pavlov looked at how dogs behave when he gave them food. Normally, he found dogs would salivate (a response) when he brought them a plate of food (a stimulus). We call this an unconditioned response (meaning default, normal, or just untrained): it's what the dogs do naturally. Now, with the food a distant doggy memory, Pavlov rang a bell (a neutral stimulus) and found it produced no response at all (the dogs didn't salivate). In the next phase of the experiment, he brought the dogs plates of food and rang a bell at the same time and found, again, that they salivated. So again, we have an unconditioned response, but this time to a pair of stimuli. Finally, after a period of this training, he tested what happened when he just rang the bell and, to his surprise, found that they salivated once again. In the jargon of psychology, we say the dogs had become "conditioned" to respond to the bell alone: they associated the bell with food and so responded by salivating. We call this a conditioned (trained or learned) response: the dogs have learned that the sound of the bell is generally linked to the appearance of food. [21]


Pavlov's work on conditioning was hugely influential—indeed, it was a key inspiration for the theory of behaviorism. Advanced by such luminaries as B.F. Skinner and J.B. Watson, this was the idea that animal behavior is largely a matter of stimulus and response and that mental states—thinking, feeling, emoting, and reasoning—are irrelevant. But, as with all the other experiments here, it's a stretch to argue that we're all quasi-automated zombies raised in a kind of collective cloud of mind-control conditioning. It's true that we learn some things by simple, behavioural association, and animals like Aplysia may learn everything they know that way, but it doesn't follow that all animals learn everything by making endless daisy-chains of stimulus and response. [22]

10: You're happier than you realize (Seligman, 1975)

Money makes the world go round—or so goes the lyric of a famous song. But if you're American Martin Seligman, you'd probably think "happiness" was a better candidate for what powers the planet, or should. When I was studying psychology at college back in the mid-1980s, Professor Seligman came along to give a guest lecture—and it proved to be one of the most thought-provoking talks I would ever attend.

“ The time has finally arrived for a science that seeks to understand positive emotion, build strength and virtue, and provide guideposts for... 'the good life'. ” Martin Seligman, Authentic Happiness, 2003.

Though now widely and popularly known for his work in a field he calls positive psychology, Seligman originally made his name researching mental illness and how people came to be depressed. Taking a leaf from Pavlov's book, his subjects were dogs. Rather than feeding them and ringing bells, he studied what happened when he gave dogs electric shocks and either offered them an opportunity to escape or restrained them in a harness so they couldn't. What he discovered was that dogs that couldn't avoid the shocks became demoralised and depressed—they "learned helplessness"—and eventually didn't even try to avoid punishment, even when (once again) they were allowed to. [23]

You can easily construct a whole (behavioural) theory of mental illness on the basis of Seligman's learned helplessness experiments but, once again, there's much more to it than that. People don't become depressed purely because they're in impossible situations where problems seem (to use the terminology) "internal" (their own fault), "global" (affecting all aspects of their life), and "stable" (impossible to change). Many different factors—neurochemical, behavioral, cognitive, and social—feed into depression and, as a result, there are just as many forms of treatment.

What's really interesting about Seligman's work is what he did next. In the 1990s, he realized psychologists were obsessed with mental illness and negativity when, in his view, they should probably spend more time figuring out what makes people happy. So began his more recent quest to understand "positive psychology" and the things we can all do to make our lives feel more fulfilled. The key, in his view, is working out and playing to what he calls our "signature strengths" (things we're good at that we enjoy doing). His ideas, which trace back to those early experiments on learned helplessness in hapless dogs, have proved hugely influential, prompting many psychologists to switch their attention to developing a useful, practical "science of happiness." [24]


References

1. See, for example, the classic discussion of consciousness in Chapter 9, "The Stream of Thought," in Principles of Psychology (Volume 1) by William James, Henry Holt, 1890.
2. Donald Broadbent carried out notable early work on "selective attention," as this is called. See, for example, The Role of Auditory Localization in Attention and Memory Span by D.E. Broadbent, J Exp Psychol, 1954, Volume 47, Number 3, pp.191–196.
3. [PDF] Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events by Daniel J. Simons and Christopher F. Chabris, Perception, 1999, Volume 28, pp.1059–1074.
4. The Invisible Gorilla and Other Ways Our Intuition Deceives Us by Christopher Chabris and Daniel J. Simons, HarperCollins, 2010.
5. [PDF] The Effect of a Pratfall on Increasing Interpersonal Attractiveness by Elliot Aronson, Ben Willerman, and Joanne Floyd, Psychon. Sci., 1966, Volume 4, Number 6, pp.227–228.
6. The "Tip of the Tongue" Phenomenon by Roger Brown and David McNeill, Journal of Verbal Learning and Verbal Behavior, Volume 5, Issue 4, August 1966, pp.325–337.
7. The Cognitive Neuropsychology of Déjà Vu by Chris Moulin, Psychology Press, 2017.
8. Reconstruction of Automobile Destruction: An Example of the Interaction Between Language and Memory by Elizabeth Loftus and John Palmer, Journal of Verbal Learning & Verbal Behavior, Volume 13, Issue 5, pp.585–589.
9. "That Doesn't Mean It Really Happened": An Interview with Elizabeth Loftus by Carrie Poppy, The Sceptical Inquirer, September 8, 2016.
10. Behavioral Study of Obedience by Stanley Milgram, Journal of Abnormal and Social Psychology, 1963, Volume 67, pp.371–378.
11. A Study of Prisoners and Guards in a Simulated Prison by Craig Haney, Curtis Banks, and Philip Zimbardo, Naval Research Review, 1973, Volume 30, pp.4–17.
12. Dr. Robert G. Heath: A Controversial Figure in the History of Deep Brain Stimulation by Christen M. O'Neal et al., Neurosurg Focus, 2017, 43(3):E12; Serendipity and the Cerebral Localization of Pleasure by Alan A. Baumeister, Journal of the History of the Neurosciences: Basic and Clinical Perspectives, Volume 15, Issue 2, 2006; The "Gay Cure" Experiments That Were Written Out of Scientific History by Robert Colvile, Mosaic Science, 4 July 2016.
13. Positive Reinforcement Produced by Electrical Stimulation of Septal Area and Other Regions of Rat Brain by J. Olds and P. Milner, J Comp Physiol Psychol, December 1954, 47(6), pp.419–427.
14. The Pleasure Areas by H.J. Campbell, Methuen, 1973.
15. Mind Time: The Temporal Factor in Consciousness by Benjamin Libet, Harvard University Press, 2004.
16. Exposing Some Holes in Libet's Classic Free Will Study by Christian Jarrett, BPS Research Digest, 2008.
17. For a decent overview, see the section "Theories of Emotion" in Psychology by OpenStax College.
18. The Nature of Love by Harry F. Harlow, American Psychologist, 13, pp.673–685. For a more general account, see Love at Goon Park: Harry Harlow and the Science of Affection by Deborah Blum, Basic Books, 2002.
19. Reasoning by P.C. Wason, in Foss, Brian (ed.), New Horizons in Psychology, Penguin, 1966, p.145.
20. Eric Kandel and Aplysia californica: Their Role in the Elucidation of Mechanisms of Memory and the Study of Psychotherapy by Michael Robertson and Garry Walter, Acta Neuropsychiatrica, Volume 22, Issue 4, August 2010, pp.195–196.
21. Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex by I.P. Pavlov, Dover, 1960.
22. Pavlov's Dogs by Tim Tully, Current Biology, Volume 13, Issue 4, 18 February 2003, pp.R117–R119.
23. Learned Helplessness: Theory and Evidence by Steven Maier and Martin Seligman, Journal of Experimental Psychology: General, 1976, Volume 105, Number 1, pp.3–46.
24. Authentic Happiness by Martin Seligman, Nicholas Brealey, 2003.
25. Thinking, Fast and Slow by Daniel Kahneman, Penguin, 2011.
26. Subject Reaction: The Neglected Factor in the Ethics of Experimentation by Stanley Milgram, The Hastings Center Report, Vol. 7, No. 5, October 1977, pp.19–23.


5.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 university students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assigns participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This matching is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called  random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
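As a rough illustration (assuming Python; the condition labels and sample size below are arbitrary, not part of the original example), strict random assignment with a pre-generated sequence might look like this:

```python
# Minimal sketch of strict random assignment: each participant has an equal,
# independent chance of each condition, and the whole sequence is generated
# ahead of time so each new participant takes the next entry.
import random

conditions = ["A", "B", "C"]
n_participants = 9

assignment_sequence = [random.choice(conditions) for _ in range(n_participants)]
print(assignment_sequence)  # e.g. ['B', 'A', 'A', 'C', ...] -- group sizes may end up unequal
```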

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence.  Table 5.2  shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 5.2 (excerpt). Part of a block-randomized sequence; within each block of three participants, conditions A, B, and C each occur once, in a random order.

Participant   Condition
4             B
5             C
6             A
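As a rough sketch (assuming Python and the three-condition example above), a block-randomized sequence like the one in Table 5.2 can be generated as follows:

```python
# Minimal sketch of block randomization: every condition appears once, in a
# random order, within each block, so group sizes stay equal.
import random

conditions = ["A", "B", "C"]
n_blocks = 3  # 3 blocks x 3 conditions = 9 participants

sequence = []
for _ in range(n_blocks):
    block = conditions[:]     # copy the conditions...
    random.shuffle(block)     # ...and shuffle them within this block
    sequence.extend(block)

for participant, condition in enumerate(sequence, start=1):
    print(participant, condition)
```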

Random assignment is not guaranteed to control all extraneous variables across conditions. The process is random, so it is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this possibility is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the "fallibility" of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Matched Groups

An alternative to simple random assignment of participants to conditions is the use of a matched-groups design. Using this design, participants in the various conditions are matched on the dependent variable or on some extraneous variable(s) prior to the manipulation of the independent variable. This guarantees that these variables will not be confounded across the experimental conditions. For instance, if we want to determine whether expressive writing affects people's health, then we could start by measuring various health-related variables in our prospective research participants. We could then use that information to rank-order participants according to how healthy or unhealthy they are. Next, the two healthiest participants would be randomly assigned to complete different conditions (one would be randomly assigned to the traumatic experiences writing condition and the other to the neutral writing condition). The next two healthiest participants would then be randomly assigned to complete different conditions, and so on, down to the two least healthy participants. This method would ensure that participants in the traumatic experiences writing condition are matched to participants in the neutral writing condition with respect to health at the beginning of the study. If, at the end of the experiment, a difference in health was detected across the two conditions, then we would know that it is due to the writing manipulation and not to pre-existing differences in health.
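As a sketch of this matching procedure (assuming Python; the baseline scores and participant labels below are hypothetical), one might implement it like this:

```python
# Minimal sketch of a matched-groups design: rank participants by a baseline
# health score, pair adjacent participants, and randomly split each pair
# between the two writing conditions.
import random

# Hypothetical baseline scores: {participant_id: health_score}
baseline = {"p1": 72, "p2": 90, "p3": 65, "p4": 88, "p5": 70, "p6": 84}

ranked = sorted(baseline, key=baseline.get, reverse=True)  # healthiest first

assignments = {}
for i in range(0, len(ranked), 2):
    pair = ranked[i:i + 2]
    random.shuffle(pair)  # random assignment within the matched pair
    assignments[pair[0]] = "traumatic writing"
    assignments[pair[1]] = "neutral writing"

print(assignments)
```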

Within-Subjects Experiments

In a  within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive  and  an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book .  However, not all experiments can use a within-subjects design nor would it be desirable to do so.

One disadvantage of within-subjects experiments is that they make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This  knowledge could  lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in order effects. An order effect occurs when participants' responses in the various conditions are affected by the order of conditions to which they were exposed. One type of order effect is a carryover effect. A carryover effect is an effect of being tested in one condition on participants' behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This type of effect is called a context effect (or contrast effect). For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant.

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. The best method of counterbalancing is complete counterbalancing, in which an equal number of participants complete each possible order of conditions. For example, half of the participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and the other half would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With four conditions, there would be 24 different orders; with five conditions there would be 120 possible orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus, random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
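Here is a minimal sketch, assuming Python and the two-condition defendant example (the participant labels are made up), of how complete counterbalancing with random assignment to orders might be set up:

```python
# Minimal sketch of complete counterbalancing: every possible order of the
# conditions is used, and participants are spread evenly across the orders.
import itertools
import random

conditions = ["attractive", "unattractive"]        # two conditions -> two possible orders
orders = list(itertools.permutations(conditions))  # [('attractive', 'unattractive'), ...]

participants = [f"p{i}" for i in range(1, 9)]      # eight hypothetical participants
random.shuffle(participants)                       # random assignment to orders

for i, participant in enumerate(participants):
    order = orders[i % len(orders)]                # equal numbers per order
    print(participant, "->", " then ".join(order))
```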

A more efficient way of counterbalancing is a Latin square design, in which the number of orders equals the number of conditions and each condition appears exactly once in each row and each column. For example, if you have four treatments, you need only four orders. Like a Sudoku puzzle, no treatment can repeat in a row or column. For four orders of four treatments, the Latin square design would look like:

A B C D
B C D A
C D A B
D A B C

You can see in the diagram above that the square has been constructed so that each condition appears at each ordinal position exactly once (A appears first once, second once, third once, and fourth once). Note, though, that in this simple cyclic square each condition is always followed by the same other condition (A is always followed by B, for example); a balanced Latin square adds the further requirement that each condition immediately precedes and follows every other condition equally often. A Latin square for an experiment with 6 conditions would be 6 x 6 in dimension, one for an experiment with 8 conditions would be 8 x 8 in dimension, and so on. So while complete counterbalancing of 6 conditions would require 720 orders, a Latin square would require only 6 orders.
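A short sketch (assuming Python) that generates the simple cyclic square shown above for any list of conditions:

```python
# Minimal sketch of a cyclic Latin square: row i is the condition list rotated
# by i places, so every condition appears once in each row and each column.
def cyclic_latin_square(conditions):
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

for row in cyclic_latin_square(["A", "B", "C", "D"]):
    print(" ".join(row))
# A B C D
# B C D A
# C D A B
# D A B C
```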

Finally, when the number of conditions is large, experiments can use random counterbalancing, in which the order of the conditions is randomly determined for each participant. Using this technique, one of the possible orders of conditions is randomly selected for each participant. This is not as powerful a technique as complete counterbalancing or partial counterbalancing using a Latin square design. Random counterbalancing will result in more random error, but if order effects are likely to be small and the number of conditions is large, it is an option available to researchers.
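In code, random counterbalancing amounts to independently shuffling the condition list for each participant, which is equivalent to picking one of the possible orders at random. A minimal sketch with illustrative labels:

```python
import random

conditions = ["A", "B", "C", "D", "E", "F"]

def random_order():
    order = conditions[:]   # copy so the master list is untouched
    random.shuffle(order)   # a fresh random order for this participant
    return order

for participant in ["P01", "P02", "P03"]:
    print(participant, random_order())
```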

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this problem, he asked participants to rate two numbers on how large they were on a scale of 1 to 10, where 1 was "very very small" and 10 was "very very large". One group of participants was asked to rate the number 9 and another group was asked to rate the number 221 (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this difference is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. 
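The analysis of such a mixed sequence is straightforward: compute each participant's mean response for each condition and compare those means. Here is a minimal sketch for a single participant; the trial data and the 1–7 rating scale are invented for illustration.

```python
from statistics import mean

# One participant's guilt ratings (1-7) from a mixed sequence of trials.
trials = [
    ("attractive", 3), ("unattractive", 5), ("attractive", 2),
    ("unattractive", 6), ("attractive", 4), ("unattractive", 5),
]

ratings_by_condition = {}
for condition, rating in trials:
    ratings_by_condition.setdefault(condition, []).append(rating)

# One mean score per condition for this participant; these per-participant
# means are what the within-subjects analysis would compare.
for condition, ratings in ratings_by_condition.items():
    print(condition, round(mean(ratings), 2))
```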

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This possibility means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this design is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This difficulty is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly that, checking whether the two approaches converge on the same answer.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or counterbalancing of orders of conditions in within-subjects experiments is a fundamental element of experimental research. The purpose of these techniques is to control extraneous variables so that they do not become confounding variables.
Practice: For each of the following research questions, decide whether a between-subjects or a within-subjects design would be more appropriate, and why:
  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g.,  dog ) are recalled better than abstract nouns (e.g.,  truth).
Reference: Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4(3), 243–249.


2.2 Research Designs in Psychology

Learning Objectives

  • Differentiate the goals of descriptive, correlational, and experimental research designs, and explain the advantages and disadvantages of each.

Psychologists agree that if their ideas and theories about human behaviour are to be taken seriously, they must be backed up by data. Researchers have a variety of research designs available to them in testing their predictions. A research design  is the specific method a researcher uses to collect, analyze, and interpret data. Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation. Descriptive research  is designed to provide a snapshot of the current state of affairs. Correlational research  is designed to discover relationships among variables. Experimental research is designed to assess cause and effect. Each of the three research designs has specific strengths and limitations, and it is important to understand how each differs. See the table below for a summary.

Table 2.2. Characteristics of three major research designs
Research Design | Goal | Advantages | Disadvantages
Descriptive | To create a snapshot of the current state of affairs. | Provides a relatively complete picture of what is occurring at a given time. Allows the development of questions for further study. | Does not assess relationships among variables. Cannot be used to draw inferences about cause and effect.
Correlational | To assess the relationships between and among two or more variables. | Allows testing of expected relationships between and among variables and the making of predictions. Can assess these relationships in everyday life events. | Cannot be used to draw inferences about cause and effect.
Experimental | To assess the causal impact of one or more experimental manipulations on a dependent variable. | Allows conclusions to be drawn about the causal relationships among variables. | Cannot experimentally manipulate many important variables. May be expensive and time-consuming.
Data source: Stangor, 2011.

Descriptive research: Assessing the current state of affairs

Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behaviour of individuals. This section reviews four types of descriptive research: case studies, surveys and tests, naturalistic observation, and laboratory observation.

Sometimes the data in a descriptive research project are collected from only a small set of individuals, often only one person or a single small group. These research designs are known as case studies, which are descriptive records of one or more individuals' experiences and behaviour. Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics; this may include those who find themselves in particularly difficult or stressful situations. The assumption is that carefully studying individuals can give us results that tell us something about human nature. Of course, one individual cannot necessarily represent a larger group of people who were in the same circumstances.

Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses was interpreted in terms of repressed sexual impulses and the Oedipus complex (Freud, 1909/1964).

Another well-known case study is of Phineas Gage, a man whose thoughts and emotions were studied extensively by psychologists after an iron tamping rod was blasted through his skull in a railroad construction accident. Although there are questions about the interpretation of this case study (Kotowicz, 2007), it did provide early evidence that the brain's frontal lobe is involved in emotion and morality (Damasio et al., 2005). An interesting example of a case study in clinical psychology is described by Milton Rokeach (1964), who investigated in detail the beliefs of and interactions among three patients with schizophrenia, all of whom were convinced they were Jesus Christ.

Research using case studies has some unique challenges when it comes to interpreting the data. By definition, case studies are based on one or a very small number of individuals. While their situations may be unique, we cannot know how well they represent what would be found in other cases. Furthermore, the information obtained in a case study may be inaccurate or incomplete. While researchers do their best to objectively understand one case, making any generalizations to other people is problematic. Researchers can usually only speculate about cause and effect, and even then, they must do so with great caution. Case studies are particularly useful when researchers are starting out to study something about which there is not much research or as a source for generating hypotheses that can be tested using other research designs.

In other cases, the data from descriptive research projects come in the form of a survey , which is a measure administered through either an interview or a written questionnaire to get a picture of the beliefs or behaviours of a sample of people of interest. The people chosen to participate in the research, known as the sample , are selected to be representative of all the people that the researcher wishes to know about, known as the population . The representativeness of samples is enormously important. For example, a representative sample of Canadians must reflect Canada’s demographic make-up in terms of age, sex, gender orientation, socioeconomic status, ethnicity, and so on. Research based on unrepresentative samples is limited in generalizability , meaning it will not apply well to anyone who was not represented in the sample. Psychologists use surveys to measure a wide variety of behaviours, attitudes, opinions, and facts. Surveys could be used to measure the amount of exercise people get every week, eating or drinking habits, attitudes towards climate change, and so on. These days, many surveys are available online, and they tend to be aimed at a wide audience. Statistics Canada is a rich source of surveys of Canadians on a diverse array of topics. Their databases are searchable and downloadable, and many deal with topics of interest to psychologists, such as mental health, wellness, and so on. Their raw data may be used by psychologists who are able to take advantage of the fact that the data have already been collected. This is called archival research .

Related to surveys are psychological tests . These are measures developed by psychologists to assess one’s score on a psychological construct, such as extroversion, self-esteem, or aptitude for a particular career. The difference between surveys and tests is really down to what is being measured, with surveys more likely to be fact-gathering and tests more likely to provide a score on a psychological construct.

As you might imagine, respondents to surveys and psychological tests are not always accurate or truthful in their replies. Respondents may also skew their answers in the direction they think is more socially desirable or in line with what the researcher expects. Sometimes people do not have good insight into their own behaviour and are not accurate in judging themselves. Sometimes tests have built-in social desirability or lie scales that attempt to help researchers understand when someone’s scores might need to be discarded from the research because they are not accurate.

Tests and surveys are only useful if they are valid and reliable . Validity exists when an instrument actually measures what you think it measures (e.g., a test of intelligence that actually measures how many years of education you have lacks validity). Demonstrating the validity of a test or survey is the responsibility of any researcher who uses the instrument. Reliability is a related but different construct; it exists when a test or survey gives the same responses from time to time or in different situations. For example, if you took an intelligence test three times and every time it gave you a different score, that would not be a reliable test. Demonstrating the reliability of tests and surveys is another responsibility of researchers. There are different types of validity and reliability, and there is a branch of psychology devoted to understanding not only how to demonstrate that tests and surveys are valid and reliable, but also how to improve them.

An important criticism of psychological research is its reliance on so-called WEIRD samples (Henrich, Heine, & Norenzayan, 2010). WEIRD stands for Western, educated, industrialized, rich, and democratic. People fitting the WEIRD description have been over-represented in psychological research, while people from poorer, less-educated backgrounds, for example, have participated far less often. This criticism is important because in psychology we may be trying to understand something about people in general. For example, if we want to understand whether early enrichment programs can boost IQ scores later, we need to conduct this research using people from a variety of backgrounds and situations. Most of the world’s population is not WEIRD, so psychologists trying to conduct research that has broad generalizability need to expand their participant pool to include a more representative sample.

Another type of descriptive research is  naturalistic observation , which refers to research based on the observation of everyday events. For instance, a developmental psychologist who watches children on a playground and describes what they say to each other while they play is conducting naturalistic observation, as is a biopsychologist who observes animals in their natural habitats. Naturalistic observation is challenging because, in order for it to be accurate, the observer must be effectively invisible. Imagine walking onto a playground, armed with a clipboard and pencil to watch children a few feet away. The presence of an adult may change the way the children behave; if the children know they are being watched, they may not behave in the same ways as they would when no adult is present. Researchers conducting naturalistic observation studies have to find ways to recede into the background so that their presence does not cause the behaviour they are watching to change. They also must find ways to record their observations systematically and completely — not an easy task if you are watching children, for example. As such, it is common to have multiple observers working independently; their combined observations can provide a more accurate record of what occurred.

Sometimes, researchers conducting observational research move out of the natural world and into a laboratory. Laboratory observation allows much more control over the situation and setting in which the participants will be observed. The downside to moving into a laboratory is the potential artificiality of the setting; the participants may not behave the same way in the lab as they would in the natural world, so the behaviour that is observed may not be completely authentic. Consider the researcher who is interested in aggression in children. They might go to a school playground and record what occurs; however, this could be quite time-consuming if the frequency is low or if the children are playing some distance away and their behaviour is difficult to interpret. Instead, the researcher could construct a play setting in a laboratory and attempt to observe aggressive behaviours in this smaller and more controlled context; for instance, they could only provide one highly desirable toy instead of one for each child. What they gain in control, they lose in artificiality. In this example, the possibility for children to act differently in the lab than they would in the real world would create a challenge in interpreting results.

Correlational research: Seeking relationships among variables

In contrast to descriptive research — which is designed primarily to provide a snapshot of behaviour, attitudes, and so on — correlational research involves measuring the relationship between two variables. Variables can be behaviours, attitudes, and so on. Anything that can be measured is a potential variable. The key aspect of correlational research is that the researchers are not asking some of their participants to do one thing and others to do something else; all of the participants are providing scores on the same two variables. Correlational research is not about how an individual scores; rather, it seeks to understand the association between two things in a larger sample of people. The previous comments about the representativeness of the sample all apply in correlational research. Researchers try to find a sample that represents the population of interest.

An example of correlational research would be to measure the association between height and weight. We should expect that there is a relationship because taller people have more mass and therefore should weigh more than shorter people. We know from observation, however, that there are many tall, thin people just as there are many short, overweight people. In other words, we would expect that in a group of people, height and weight should be systematically related (i.e., correlated), but the degree of relatedness is not expected to be perfect. Imagine we repeated this study with samples representing different populations: elite athletes, women over 50, children under 5, and so on. We might make different predictions about the relationship between height and weight based on the characteristics of the sample. This highlights the importance of obtaining a representative sample.

Psychologists make frequent use of correlational research designs. Examples might be the association between shyness and number of Facebook friends, between age and conservatism, between time spent on social media and grades in school, and so on. Correlational research designs tend to be relatively less expensive because they are time-limited and can often be conducted without much equipment. Online survey platforms have made data collection easier than ever. Some correlational research does not even necessitate collecting data; researchers using archival data sets as described above simply download the raw data from another source. For example, suppose you were interested in whether or not height is related to the number of points scored in hockey players. You could extract data for both variables from nhl.com , the official National Hockey League website, and conduct archival research using the data that have already been collected.

Correlational research designs look for associations between variables. A statistic that measures that association is the correlation coefficient. Correlation coefficients can be either positive or negative, and they range in value from -1.0 through 0 to 1.0. The most common statistical measure is the Pearson correlation coefficient , which is symbolized by the letter r . Positive values of r (e.g., r = .54 or r = .67) indicate that the relationship is positive, whereas negative values of r (e.g., r = –.30 or r = –.72) indicate negative relationships. The closer the coefficient is to -1 or +1, and the further away from zero, the greater the size of the association between the two variables. For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57. Correlations of 0 indicate no relationship between the two variables.
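As a quick illustration of how r is computed in practice, here is a minimal sketch using Python's standard library (statistics.correlation, available in Python 3.10+). The two variables and their values are invented purely for illustration.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented data: hours of sleep and a mood score for eight people.
sleep_hours = [6, 7, 8, 5, 9, 7, 6, 8]
mood_score = [4, 6, 7, 3, 8, 6, 5, 7]

r = correlation(sleep_hours, mood_score)
print(round(r, 2))  # close to +1 for this sample: a strong positive relationship
```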

Examples of positive correlation coefficients would include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case, people who score higher, or lower, on one of the variables also tend to score higher, or lower, on the other variable. Negative correlations occur when people score high on one variable and low on the other. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses and between time practising and errors made on a learning task. In these cases, people who score higher on one of the variables tend to score lower on the other variable. Note that the correlation coefficient does not tell you anything about one specific person’s score.

One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatterplot. A scatterplot is a visual image of the relationship between two variables (see Figure 2.3). A point is plotted for each individual at the intersection of his or her scores for the two variables. In this example, data were extracted from the official National Hockey League (NHL) website for 30 randomly picked hockey players from the 2017/18 season. For each of these players, there is a dot representing player height and number of points (i.e., goals plus assists). The slope or angle of the dotted line through the middle of the scatter tells us something about the strength and direction of the correlation. In this case, the line slopes up slightly to the right, indicating a positive but small correlation. In these NHL players, there is not much of a relationship between height and points. The Pearson correlation calculated for this sample is r = 0.14. It is possible that the correlation would be totally different in a different sample of players, such as a greater number, only those who played a full season, only rookies, only forwards, and so on.
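If you want to build a scatterplot like the one described above yourself, a few lines of Python with matplotlib will do it. The sketch below uses a handful of invented height and point values standing in for the actual NHL data.

```python
import matplotlib.pyplot as plt
from statistics import correlation

# Invented stand-in data for player height (cm) and season points.
height = [178, 180, 183, 185, 188, 190, 193, 175]
points = [42, 55, 38, 61, 47, 52, 40, 35]

plt.scatter(height, points)
plt.xlabel("Height (cm)")
plt.ylabel("Points (goals + assists)")
plt.title(f"Pearson r = {correlation(height, points):.2f}")
plt.show()
```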

For practise constructing and interpreting scatterplots, see the following:

  • Interactive Quiz: Positive and Negative Associations in Scatterplots (Khan Academy, 2018)

When the association between the variables on the scatterplot can be easily approximated with a straight line, the variables are said to have a linear relationship. We are only going to consider linear relationships here. Just be aware that some pairs of variables have non-linear relationships, such as the relationship between physiological arousal and performance. Both high and low arousal are associated with sub-optimal performance, shown by an inverted-U-shaped curve on the scatterplot.

The most important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables; in other words, we cannot know what causes what in correlational research. Consider, for instance, a researcher who has hypothesized that viewing violent behaviour will cause increased aggressive play in children. The researcher has collected, from a sample of Grade 4 children, a measure of how many violent television shows each child views during the week as well as a measure of how aggressively each child plays on the school playground. From the data collected, the researcher discovers a positive correlation between the two measured variables.

Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behaviour. Although the researcher is tempted to assume that viewing violent television causes aggressive play, there are other possibilities. One alternative possibility is that the causal direction is exactly opposite of what has been hypothesized; perhaps children who have behaved aggressively at school are more likely to prefer violent television shows at home.

Still another possible explanation for the observed correlation is that it has been produced by a so-called third variable, one that is not part of the research hypothesis but that causes both of the observed variables and, thus, the correlation between them. In our example, a potential third variable is the discipline style of the children's parents. Parents who use a harsh and punitive discipline style may have children who both watch more violent television and behave more aggressively than children whose parents use less punitive forms of discipline.

To review, whenever we have a correlation that is not zero, there are three potential pathways of cause and effect that must be acknowledged. The easiest way to practise understanding this challenge is to automatically designate the two variables X and Y. It does not matter which is which. Then, think through any ways in which X might cause Y. Then, flip the direction of cause and effect, and consider how Y might cause X. Finally, and possibly the most challenging, try to think of other variables — let’s call these C — that were not part of the original correlation, which cause both X and Y. Understanding these potential explanations for correlational research is an important aspect of scientific literacy. In the above example, we have shown how X (i.e., viewing violent TV) could cause Y (i.e., aggressive behaviour), how Y could cause X, and how C (i.e., parenting) could cause both X and Y.

Test your understanding with each example below. Find three different interpretations of cause and effect using the procedure outlined above. In each case, identify variables X, Y, and C:

  • A positive correlation between dark chocolate consumption and health
  • A negative correlation between sleep and smartphone use
  • A positive correlation between children’s aggressiveness and time spent playing video games
  • A negative association between time spent exercising and consumption of junk food

In sum, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible or when fewer resources are available. Correlational designs also have the advantage of allowing the researcher to study behaviour as it occurs in everyday life. We can also use correlational designs to make predictions, such as predicting the success of job trainees based on their test scores during training. They are also excellent sources of suggested avenues for further research, but we cannot use such correlational information to understand cause and effect. For that, researchers rely on experiments.

Experimental research: Understanding the causes of behaviour

The goal of experimental research design is to provide definitive conclusions about the causal relationships among the variables in the research hypothesis. In an experimental research design, there are independent variables and dependent variables. The independent variable is the one manipulated by the researchers so that there is more than one condition. The dependent variable is the outcome or score on the measure of interest that is dependent on the actions of the independent variable. Let's consider a classic drug study to illustrate the relationship between independent and dependent variables. To begin, a sample of people with a medical condition is randomly assigned to one of two conditions. In one condition, they are given a drug over a period of time. In the other condition, a placebo is given for the same period of time. To be clear, a placebo is a type of medication that looks like the real thing but is actually chemically inert, sometimes referred to as a "sugar pill." After the testing period, the groups are compared to see if the drug condition shows better improvement in health than the placebo condition.
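A minimal sketch of this logic in code: participants are randomly split into drug and placebo groups, and the group means on the dependent variable are compared. Everything here (participant IDs, group sizes, and the simulated improvement scores) is invented; a real analysis would also apply an inferential test such as a t-test.

```python
import random
from statistics import mean

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)                       # random assignment to conditions
drug_group, placebo_group = participants[:10], participants[10:]

# Simulated improvement scores collected after the testing period (invented numbers).
improvement = {p: random.gauss(5, 2) for p in drug_group}
improvement.update({p: random.gauss(3, 2) for p in placebo_group})

print("Drug mean:   ", round(mean(improvement[p] for p in drug_group), 2))
print("Placebo mean:", round(mean(improvement[p] for p in placebo_group), 2))
```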

While the basic design of experiments is quite simple, the success of experimental research rests on meeting a number of criteria. Some important criteria are:

  • Participants must be randomly assigned to the conditions so that there are no systematic differences between the groups. In the drug study example, you could not assign the males to the drug condition and the females to the placebo condition. The groups must start out equivalent, demographically and otherwise.
  • There must be a control condition. Having a condition that does not receive treatment allows experimenters to compare the results of the drug to the results of placebo.
  • The only thing that can change between the conditions is the independent variable. For example, the participants in the drug study should receive the medication at the same place, from the same person, at the same time, and so on, for both conditions. Experiments often employ double-blind procedures in which neither the experimenter nor the participants know which condition any participant is in during the experiment. In a single-blind procedure, the participants do not know which condition they are in.
  • The sample size has to be large and diverse enough to represent the population of interest. For example, a pharmaceutical company should not use only men in their drug study if the drug will eventually be prescribed to women as well.
  • Experimenter effects should be minimized. This means that if there is a difference in scores on the dependent variable, they should not be attributable to something the experimenter did or did not do. For example, if an experiment involved comparing a yoga condition with an exercise condition, experimenters would need to make sure that they treated the participants exactly the same in each condition. They would need to control the amount of time they spent with the participants, how much they interacted verbally, smiled at the participants, and so on. Experimenters often employ research assistants who are blind to the participants’ condition to interact with the participants.

As you can probably see, much of experimental design is about control. The experimenters have a high degree of control over who does what. All of this tight control is to try to ensure that if there is a difference between the different levels of the independent variable, it is detectable. In other words, if there is even a small difference between a drug and placebo, it is detected. Furthermore, this level of control is aimed at ensuring that the only difference between conditions is the one the experimenters are testing while making correct and accurate determinations about cause and effect.

Research Focus

Video games and aggression

Consider an experiment conducted by Craig Anderson and Karen Dill (2000). The study was designed to test the hypothesis that viewing violent video games would increase aggressive behaviour. In this research, male and female undergraduates from Iowa State University were given a chance to play with either a violent video game (e.g., Wolfenstein 3D) or a nonviolent video game (e.g., Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after the play, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (i.e., aggressive behaviour) was the level and duration of noise delivered to the opponent. The design of the experiment is shown below (see Figure 2.4 ).

There are two strong advantages of the experimental research design. First, because the independent variable, also known as the experimental manipulation, occurs before the dependent variable is measured, reverse causation is ruled out. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs, which is made possible by random assignment to conditions.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table. Anderson and Dill first randomly assigned about 100 participants to each of their two groups: Group A and Group B. Since they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet — and in fact everything else.

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation; they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then, they compared the dependent variable (i.e., the white noise blasts) between the two groups, finding that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.

Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable, and not some other variable, that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.

Sometimes, experimental research has a confound. A confound is a variable that has slipped unwanted into the research and potentially caused the results because it has created a systematic difference between the levels of the independent variable. In other words, the confound caused the results, not the independent variable. For example, suppose you were a researcher who wanted to know if eating sugar just before an exam was beneficial. You obtain a large sample of students, divide them randomly into two groups, give everyone the same material to study, and then give half of the sample a chocolate bar containing high levels of sugar and the other half a glass of water before they write their test. Lo and behold, you find the chocolate bar group does better. However, the chocolate bar also contains caffeine, fat and other ingredients. These other substances besides sugar are potential confounds; for example, perhaps caffeine rather than sugar caused the group to perform better. Confounds introduce a systematic difference between levels of the independent variable such that it is impossible to distinguish between effects due to the independent variable and effects due to the confound.

Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Do people act the same in a laboratory as they do in real life? Often researchers are forced to balance the need for experimental control with the use of laboratory conditions that can only approximate real life.

Additionally, it is very important to understand that many of the variables that psychologists are interested in are not things that can be manipulated experimentally. For example, psychologists interested in sex differences cannot randomly assign participants to be men or women. If a researcher wants to know if early attachments to parents are important for the development of empathy, or in the formation of adult romantic relationships, the participants cannot be randomly assigned to childhood attachments. Thus, a large number of human characteristics cannot be manipulated or assigned. This means that research may look experimental because it has different conditions (e.g., men or women, rich or poor, highly intelligent or not so intelligent, etc.); however, it is quasi-experimental . The challenge in interpreting quasi-experimental research is that the inability to randomly assign the participants to condition results in uncertainty about cause and effect. For example, if you find that men and women differ in some ability, it could be biology that is the cause, but it is equally likely it could be the societal experience of being male or female that is responsible.

Of particular note, while experiments are the gold standard for understanding cause and effect, a large proportion of psychology research is not experimental for a variety of practical and ethical reasons.

Key Takeaways

  • Descriptive, correlational, and experimental research designs are used to collect and analyze data.
  • Descriptive designs include case studies, surveys, psychological tests, naturalistic observation, and laboratory observation. The goal of these designs is to get a picture of the participants’ current thoughts, feelings, or behaviours.
  • Correlational research designs measure the relationship between two or more variables. The variables may be presented on a scatterplot to visually show the relationships. The Pearson correlation coefficient is a measure of the strength of linear relationship between two variables. Correlations have three potential pathways for interpreting cause and effect.
  • Experimental research involves the manipulation of an independent variable and the measurement of a dependent variable. Done correctly, experiments allow researchers to make conclusions about cause and effect. There are a number of criteria that must be met in experimental design. Not everything can be studied experimentally, and laboratory experiments may not replicate real-life conditions well.

Exercises and Critical Thinking

  • There is a negative correlation between how close students sit to the front of the classroom and their final grade in the class. Explain some possible reasons for this.
  • Imagine you are tasked with creating a survey of online habits of Canadian teenagers. What questions would you ask and why? How valid and reliable would your test be?
  • Imagine a researcher wants to test the hypothesis that participating in psychotherapy will cause a decrease in reported anxiety. Describe the type of research design the investigator might use to draw this conclusion. What would be the independent and dependent variables in the research?

Image Attributions

Figure 2.2. This Might Be Me in a Few Years by Frank Kovalchek is used under a CC BY 2.0 license.

Figure 2.3. Used under a CC BY-NC-SA 4.0 license.

Figure 2.4. Used under a CC BY-NC-SA 4.0 license.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.  Journal of Personality and Social Psychology, 78 (4), 772–790.

Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., Damasio, A. R., Cacioppo, J. T., & Berntson, G. G. (2005). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. In  Social neuroscience: Key readings (pp. 21–28). New York, NY: Psychology Press.

Freud, S. (1909/1964). Analysis of phobia in a five-year-old boy. In E. A. Southwell & M. Merbaum (Eds.),  Personality: Readings in theory and research (pp. 3–32). Belmont, CA: Wadsworth. (Original work published 1909)

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83.

Kotowicz, Z. (2007). The strange case of Phineas Gage.  History of the Human Sciences, 20 (1), 115–131.

Rokeach, M. (1964).  The three Christs of Ypsilanti: A psychological study . New York, NY: Knopf.

Stangor, C. (2011). Research methods for the behavioral sciences (4th ed.) . Mountain View, CA: Cengage.



Exploring the Art of Experimental Design: A Step-by-Step Guide for Students and Educators

Experimental Design for Students

Experimental design is a key method used in subjects like biology, chemistry, physics, psychology, and social sciences. It helps us figure out how different factors affect what we're studying, whether it's plants, chemicals, physical laws, human behavior, or how society works. Basically, it's a way to set up experiments so we can test ideas, see what happens, and make sense of our results. It's super important for students and researchers who want to answer big questions in science and understand the world better. Experimental design skills can be applied in situations ranging from problem solving to data analysis; they are wide reaching and can frequently be applied outside the classroom.

The teaching of these skills is a very important part of science education, but it is often overlooked when the focus is on teaching content. As science educators, we have all seen the benefits practical work has for student engagement and understanding. However, with the time constraints placed on the curriculum, the time needed for students to develop these experimental design and investigative skills can get squeezed out. Too often students get a 'recipe' to follow, which doesn't allow them to take ownership of their practical work. From a very young age, children start to think about the world around them. They ask questions, then use observations and evidence to answer them. Students tend to have intelligent, interesting, and testable questions that they love to ask. As educators, we should work towards encouraging these questions and, in turn, nurturing this natural curiosity about the world around them.

Teaching the design of experiments and letting students develop their own questions and hypotheses takes time. These materials have been created to scaffold and structure the process to allow teachers to focus on improving the key ideas in experimental design. Allowing students to ask their own questions, write their own hypotheses, and plan and carry out their own investigations is a valuable experience for them. This will lead to students having more ownership of their work. When students carry out the experimental method for their own questions, they reflect on how scientists have historically come to understand how the universe works.

Experimental Design

Take a look at the printer-friendly pages and worksheet templates below!

What are the Steps of Experimental Design?

Embarking on the journey of scientific discovery begins with mastering the steps of experimental design. This foundational process is essential for formulating experiments that yield reliable and insightful results, guiding researchers and students alike through the detailed planning and execution of their studies. By using an experimental design template, students can help ensure the integrity and validity of their findings. Whether it's through designing a scientific experiment or engaging in experimental design activities, the aim is to foster a deep understanding of the fundamentals: How should experiments be designed? What are the key experimental design steps? How can you design your own experiment?

This is an exploration of the key steps of the experimental method, experimental design ideas, and ways to integrate the design of experiments into student projects, along with resources such as worksheets aimed at teaching experimental design effectively. Let's dive into the essential stages that underpin the process of designing an experiment, equipping learners with the tools to explore their scientific curiosity.

1. Question

This is a key part of the scientific method and the experimental design process. Students enjoy coming up with questions. Formulating questions is a deep and meaningful activity that can give students ownership over their work. A great way of getting students to think of how to visualize their research question is using a mind map storyboard.

Free Customizable Experimental Design in Science Questions Spider Map

Ask students to think of any questions they want to answer about the universe or get them to think about questions they have about a particular topic. All questions are good questions, but some are easier to test than others.

2. Hypothesis

A hypothesis is often described as an educated guess. A hypothesis should be a statement that can be tested scientifically. At the end of the experiment, look back to see whether the results support the hypothesis or not.

Forming good hypotheses can be challenging for students. It is important to remember that the hypothesis is not a research question; it is a testable statement. One way of forming a hypothesis is to phrase it as an "if... then..." statement. This certainly isn't the only or best way to form a hypothesis, but it can be a very easy formula for students to use when first starting out.

An “if... then...” statement requires students to identify the variables first, and that may change the order in which they complete the stages of the visual organizer. After identifying the dependent and independent variables, the hypothesis then takes the form if [change in independent variable], then [change in dependent variable].

For example, if an experiment were looking for the effect of caffeine on reaction time, the independent variable would be amount of caffeine and the dependent variable would be reaction time. The “if, then” hypothesis could be: If you increase the amount of caffeine taken, then the reaction time will decrease.

3. Explanation of Hypothesis

What led you to this hypothesis? What is the scientific background behind your hypothesis? Depending on age and ability, students use their prior knowledge to explain why they have chosen their hypotheses, or alternatively, research using books or the internet. This could also be a good time to discuss with students what a reliable source is.

For example, students may reference previous studies showing the alertness effects of caffeine to explain why they hypothesize caffeine intake will reduce reaction time.

4. Prediction

The prediction is slightly different to the hypothesis. A hypothesis is a testable statement, whereas the prediction is more specific to the experiment. In the discovery of the structure of DNA, the hypothesis proposed that DNA has a helical structure. The prediction was that the X-ray diffraction pattern of DNA would be an X shape.

Students should formulate a prediction that is a specific, measurable outcome based on their hypothesis. Rather than just stating "caffeine will decrease reaction time," students could predict that "drinking 2 cans of soda (90mg caffeine) will reduce average reaction time by 50 milliseconds compared to drinking no caffeine."

5. Identification of Variables

Below is an example of a Discussion Storyboard that can be used to get your students talking about variables in experimental design.

Experimental Design in Science Discussion Storyboard with Students

The three types of variables you will need to discuss with your students are dependent, independent, and controlled variables. To keep this simple, refer to these as "what you are going to measure", "what you are going to change", and "what you are going to keep the same". With more advanced students, you should encourage them to use the correct vocabulary.

Dependent variables are what is measured or observed by the scientist. These measurements will often be repeated because repeating measurements makes your data more reliable.

The independent variable is the variable that scientists decide to change to see what effect it has on the dependent variable. Usually only one is changed at a time, because otherwise it would be difficult to figure out which variable is causing any change you observe.

Controlled variables are quantities or factors that scientists want to remain the same throughout the experiment. They are controlled to remain constant, so as to not affect the dependent variable. Controlling these allows scientists to see how the independent variable affects the dependent variable within the experimental group.

Use this example below in your lessons, or delete the answers and set it as an activity for students to complete on Storyboard That.

How temperature affects the amount of sugar able to be dissolved in water
Independent Variable: Water temperature (five samples at 10°C, 20°C, 30°C, 40°C, and 50°C)
Dependent Variable: The amount of sugar that can be dissolved in the water, measured in teaspoons
Controlled Variables:

Identifying Variables Storyboard with Pictures

6. Risk Assessment

Ultimately this must be signed off on by a responsible adult, but it is important to get students to think about how they will keep themselves safe. In this part, students should identify potential risks and then explain how they are going to minimize them. An activity to help students develop these skills is to get them to identify and manage risks in different situations. Using the storyboard below, get students to complete the second column of the T-chart by asking, "What is the risk?", and then explaining how they could manage that risk. This storyboard could also be projected for a class discussion.

Risk Assessment Storyboard for Experimental Design in Science

7. Materials

In this section, students will list the materials they need for the experiments, including any safety equipment that they have highlighted as needing in the risk assessment section. This is a great time to talk to students about choosing tools that are suitable for the job. You are going to use a different tool to measure the width of a hair than to measure the width of a football field!

8. General Plan and Diagram

It is important to talk to students about reproducibility. They should write a procedure that would allow their experimental method to be reproduced easily by another scientist. The easiest and most concise way for students to do this is by making a numbered list of instructions. A useful activity here could be getting students to explain how to make a cup of tea or a sandwich. Act out the process, pointing out any steps they’ve missed.

For English Language Learners and students who struggle with written English, students can describe the steps in their experiment visually using Storyboard That.

Not every experiment will need a diagram, but some plans will be greatly improved by including one. Have students focus on producing clear and easy-to-understand diagrams that illustrate the experimental group.

For example, a procedure to test the effect of sunlight on plant growth using a completely randomized design could detail the steps below (a small randomization sketch follows the list):

  • Select 10 similar seedlings of the same age and variety
  • Prepare 2 identical trays with the same soil mixture
  • Place 5 plants in each tray; label one set "sunlight" and one set "shade"
  • Position sunlight tray by a south-facing window, and shade tray in a dark closet
  • Water both trays with 50 mL water every 2 days
  • After 3 weeks, remove plants and measure heights in cm
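As a sketch of what "completely randomized" means here, the assignment of seedlings to trays could be done with a simple shuffle. The seedling IDs and group sizes are illustrative.

```python
import random

# Completely randomized design: 10 seedlings are randomly assigned,
# 5 per tray, to the sunlight or shade condition.
seedlings = [f"seedling_{i}" for i in range(1, 11)]
random.shuffle(seedlings)

assignment = {
    "sunlight": sorted(seedlings[:5]),
    "shade": sorted(seedlings[5:]),
}
for condition, plants in assignment.items():
    print(condition, plants)
```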

9. Carry Out Experiment

Once their procedure is approved, students should carefully carry out their planned experiment, following their written instructions. As data is collected, students should organize the raw results in tables, graphs, photos or drawings. This creates clear documentation for analyzing trends.

Some best practices for data collection include:

  • Record quantitative data numerically with units
  • Note qualitative observations with detailed descriptions
  • Capture set up through illustrations or photos
  • Write observations of unexpected events
  • Identify data outliers and sources of error

For example, in the plant growth experiment, students could record:

Group        | Sunlight | Sunlight | Sunlight | Shade | Shade
Plant ID     | 1        | 2        | 3        | 1     | 2
Start Height | 5 cm     | 4 cm     | 5 cm     | 6 cm  | 4 cm
End Height   | 18 cm    | 17 cm    | 19 cm    | 9 cm  | 8 cm

They would also describe observations like leaf color change or directional bending visually or in writing.

It is crucial that students practice safe science procedures. Adult supervision is required for experimentation, along with proper risk assessment.

Well-documented data collection allows for deeper analysis after experiment completion to determine whether hypotheses and predictions were supported.
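For example, a few lines of Python can turn the raw table above into per-group summaries that speak directly to the hypothesis. This is only a sketch using the example values; a fuller analysis would include more plants and an appropriate statistical test.

```python
from statistics import mean

# Growth is computed per plant, then averaged within each group.
plants = [
    {"group": "sunlight", "start": 5, "end": 18},
    {"group": "sunlight", "start": 4, "end": 17},
    {"group": "sunlight", "start": 5, "end": 19},
    {"group": "shade",    "start": 6, "end": 9},
    {"group": "shade",    "start": 4, "end": 8},
]

growth_by_group = {}
for plant in plants:
    growth_by_group.setdefault(plant["group"], []).append(plant["end"] - plant["start"])

for group, growth in growth_by_group.items():
    print(group, "mean growth:", round(mean(growth), 1), "cm")
```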

Completed Examples

Editable Scientific Investigation Design Example: Moldy Bread

Resources and Experimental Design Examples

Using visual organizers is an effective way to get your students working as scientists in the classroom.

There are many ways to use these investigation planning tools to scaffold and structure students' work while they are working as scientists. Students can complete the planning stage on Storyboard That using the text boxes and diagrams, or you could print them off and have students complete them by hand. Another great way to use them is to project the planning sheet onto an interactive whiteboard and work through how to complete the planning materials as a group. Project it onto a screen and have students write their answers on sticky notes and put their ideas in the correct section of the planning document.

Very young learners can still start to think as scientists! They have loads of questions about the world around them and you can start to make a note of these in a mind map. Sometimes you can even start to ‘investigate’ these questions through play.

The foundation resource is intended for elementary students or students who need more support. It is designed to follow exactly the same process as the higher resource, but is made slightly easier. The key difference between the two resources is the level of detail students are required to think about and the technical vocabulary used. For example, it is important that students identify variables when they are designing their investigations. In the higher version, students not only have to identify the variables, but also make other comments, such as how they are going to measure the dependent variable or how to use a completely randomized design. As well as the difference in scaffolding between the two levels of resources, you may want to further differentiate by how the learners are supported by teachers and assistants in the room.

Students could also be encouraged to make their experimental plan easier to understand by using graphics, and this could also be used to support ELLs.

Customizable Foundation Experimental Design Steps T Chart Template

Students need to be assessed on their science inquiry skills alongside the assessment of their knowledge. Not only will this let students focus on developing their skills, but it will also allow them to use their assessment information in a way that helps them improve their science skills. Using Quick Rubric, you can create a quick and easy assessment framework and share it with students so they know how to succeed at every stage. As well as providing formative assessment that will drive learning, this can also be used to assess student work at the end of an investigation and to set targets for when they next attempt to plan their own investigation. The rubrics have been written in a way that allows students to access them easily, so they can be shared with students as they work through the planning process and see what a good experimental design looks like.

Rubric levels: Proficient (13 points), Emerging (7 points), Beginning (0 points)
Rubric levels: Proficient (11 points), Emerging (5 points), Beginning (0 points)

Printable Resources


Print Ready Experimental Design Idea Sheet

Related Activities

Chemical Reactions Experiment Worksheet

Additional Worksheets

If you're looking to add additional projects or continue to customize worksheets, take a look at the several template pages we've compiled for you below. Each worksheet can be copied and tailored to your projects or students! Students can also be encouraged to create their own if they want to try organizing information in an easy-to-understand way.

  • Lab Worksheets
  • Discussion Worksheets
  • Checklist Worksheets

Related Resources

  • Scientific Method Steps
  • Science Discussion Storyboards
  • Developing Critical Thinking Skills

How to Teach Students the Design of Experiments

Encourage questioning and curiosity.

Foster a culture of inquiry by encouraging students to ask questions about the world around them.

Formulate testable hypotheses

Teach students how to develop hypotheses that can be scientifically tested. Help them understand the difference between a hypothesis and a question.

Provide scientific background

Help students understand the scientific principles and concepts relevant to their hypotheses. Encourage them to draw on prior knowledge or conduct research to support their hypotheses.

Identify variables

Teach students about the three types of variables (dependent, independent, and controlled) and how they relate to experimental design. Emphasize the importance of controlling variables and measuring the dependent variable accurately.

Plan and diagram the experiment

Guide students in developing a clear and reproducible experimental procedure. Encourage them to create a step-by-step plan or use visual diagrams to illustrate the process.

Carry out the experiment and analyze data

Support students as they conduct the experiment according to their plan. Guide them in collecting data in a meaningful and organized manner. Assist them in analyzing the data and drawing conclusions based on their findings.

Frequently Asked Questions about Experimental Design for Students

What are some common experimental design tools and techniques that students can use?

Common experimental design tools and techniques that students can use include random assignment, control groups, blinding, replication, and statistical analysis. Students can also use observational studies, surveys, and experiments with natural or quasi-experimental designs. They can also use data visualization tools to analyze and present their results.

How can experimental design help students develop critical thinking skills?

Experimental design helps students develop critical thinking skills by encouraging them to think systematically and logically about scientific problems. It requires students to analyze data, identify patterns, and draw conclusions based on evidence. It also helps students to develop problem-solving skills by providing opportunities to design and conduct experiments to test hypotheses.

How can experimental design be used to address real-world problems?

Experimental design can be used to address real-world problems by identifying variables that contribute to a particular problem and testing interventions to see if they are effective in addressing the problem. For example, experimental design can be used to test the effectiveness of new medical treatments or to evaluate the impact of social interventions on reducing poverty or improving educational outcomes.

What are some common experimental design pitfalls that students should avoid?

Common experimental design pitfalls that students should avoid include failing to control variables, using biased samples, relying on anecdotal evidence, and failing to measure dependent variables accurately. Students should also be aware of ethical considerations when conducting experiments, such as obtaining informed consent and protecting the privacy of research subjects.



Experimental Research Designs: Types, Examples & Methods

By busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and otherwise given the same treatment, we can conclude that sunlight aids growth in similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during the research. This makes experimental research an example of a quantitative research method.

What Are the Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of 3 types, namely: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In a pre-experimental research design, either a single group or several dependent groups are observed for the effect of an independent variable that is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental designs lack several features of the true-experimental criteria. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines pretest and posttest measures by testing a single group both before the treatment is administered and after it is administered.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore bears a resemblance to true experimental research, but is not the same. In quasi-experiments, the participants are not randomly assigned, and as such, these designs are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random assignment of students to experimental groups.

Some examples of quasi-experimental research design include the time series design, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to support or reject a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least two randomly assigned groups of subjects.

The true experimental research design must contain a control group, a variable that the researcher can manipulate, and random assignment of subjects. The classifications of true experimental design include:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
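
To show what random assignment and a posttest-only comparison can look like in practice, here is a small, self-contained sketch (Python) that simulates a posttest-only control group design. The subject labels, score distribution, and treatment effect are made up for illustration; a real study would of course use measured scores and a formal statistical test.

```python
import random
import statistics

def posttest_only_design(subjects, treatment_effect=5.0, seed=0):
    """Simulate a posttest-only control group design:
    randomly assign subjects to control and experimental groups, apply a
    (simulated) treatment to the experimental group only, then compare
    posttest scores."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)                       # random assignment
    half = len(shuffled) // 2
    control, experimental = shuffled[:half], shuffled[half:]

    # Simulated posttest scores: the same baseline for everyone, plus the
    # hypothetical treatment effect for the experimental group only.
    control_scores = [rng.gauss(50, 5) for _ in control]
    experimental_scores = [rng.gauss(50, 5) + treatment_effect for _ in experimental]

    return statistics.mean(experimental_scores) - statistics.mean(control_scores)

if __name__ == "__main__":
    subjects = [f"S{i}" for i in range(1, 41)]
    print(f"Observed mean difference (experimental - control): {posttest_only_design(subjects):.2f}")
```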

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams at the End of the Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the subjects, while the lectures are the independent variable (the treatment) applied to them.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. We will also notice that tests are carried out only at the end of the semester, and not at the beginning, which makes it easy to conclude that it is a one-shot case study design.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is best. Imagine a case where the students assigned to each teacher are carefully selected, perhaps because of personal requests by parents or because of perceived ability.

This is a nonequivalent group design example because the samples are not equal. By evaluating the effectiveness of each teacher's teaching method this way, we may draw a conclusion after a post-test has been carried out.

However, this conclusion may be influenced by factors like the natural ability of a student. For example, a very smart student will grasp the material more easily than his or her peers irrespective of the method of teaching.

What are the Characteristics of Experimental Research?  

Experimental research contains dependent, independent, and extraneous variables. The dependent variables are the outcomes that are measured on the research subjects.

The independent variables are the experimental treatments applied to the subjects by the researcher. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design is used mainly in the physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter.

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop the proper treatment for diseases. In most cases, rather than directly using patients as the research subject, researchers take a sample of the bacteria from the patient's body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine its effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experimental research, it can also be used to improve the standard of an academic institution. This includes testing students' knowledge on different topics, coming up with better teaching methods, and implementing other programs that will aid student learning.
  • Human Behavior: Social scientists are the ones who mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subject of the social interaction research where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when it is difficult to decide how to position a button or feature on the app interface, a random sample of product testers is given the 2 versions to test, and how the button positioning influences user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control which may not be properly implemented. These errors could eliminate the validity of the experiment and the research being conducted.
  • Exerting control of extraneous variables may create unrealistic situations. Eliminating real-life variables will result in inaccurate conclusions. This may also result in researchers controlling the variables to suit his or her personal preferences.
  • It is a time-consuming process. Much time is spent on testing subjects and waiting for the effects of the manipulation of the independent variables to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient’s death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be introduced by the research subjects.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects who are placed in 2 different environments are observed throughout the research. No matter the kind of absurd behavior that is exhibited by the subject during this period, its condition will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This may be because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs. Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research finds cause-and-effect relationships between the variables by comparing already existing groups that are affected differently by the independent variable.

For example, consider a study of how K-12 education affects children's and teenagers' development. An experimental design would split the children into groups, where some would receive formal K-12 education while others would not. This is not ethically right because every child has the right to education. So, what we would do instead is compare already existing groups of children who are getting formal education with those who, due to circumstances, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, to establish the effect of heat on water, you keep changing the temperature (the independent variable) and see how it affects the water (the dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.

Pros and Cons of Experimental vs Correlational Research

3. Experimental Research vs. Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.

Pros and Cons of Experimental vs Descriptive Research

4. Experimental Research vs. Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how long commutes affect workers’ productivity. With experimental research, you would vary the length of the commute to see how the time affects work. But with action research, you would account for other factors such as weather, commute route, nutrition, etc. Also, experimental research helps you understand the relationship between commute time and productivity, while action research helps you look for ways to improve productivity.

Pros and Cons of Experimental vs Action Research

Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, subjects are randomly assigned to different treatments (i.e., independent variables manipulated by the researcher) and the results are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 


50+ Research Topics for Psychology Papers

How to Find Psychology Research Topics for Your Student Paper

  • Specific Branches of Psychology
  • Topics Involving a Disorder or Type of Therapy
  • Human Cognition
  • Human Development
  • Critique of Publications
  • Famous Experiments
  • Historical Figures
  • Specific Careers
  • Case Studies
  • Literature Reviews
  • Your Own Study/Experiment

Are you searching for a great topic for your psychology paper ? Sometimes it seems like coming up with topics of psychology research is more challenging than the actual research and writing. Fortunately, there are plenty of great places to find inspiration and the following list contains just a few ideas to help get you started.

Finding a solid topic is one of the most important steps when writing any type of paper. It can be particularly important when you are writing a psychology research paper or essay. Psychology is such a broad topic, so you want to find a topic that allows you to adequately cover the subject without becoming overwhelmed with information.

I can always tell when a student really cares about the topic they chose; it comes through in the writing. My advice is to choose a topic that genuinely interests you, so you’ll be more motivated to do thorough research.

In some cases, such as in a general psychology class, you might have the option to select any topic from within psychology's broad reach. Other instances, such as in an  abnormal psychology  course, might require you to write your paper on a specific subject such as a psychological disorder.

As you begin your search for a topic for your psychology paper, it is first important to consider the guidelines established by your instructor.

Research Topics Within Specific Branches of Psychology

The key to selecting a good topic for your psychology paper is to select something that is narrow enough to allow you to really focus on the subject, but not so narrow that it is difficult to find sources or information to write about.

One approach is to narrow your focus down to a subject within a specific branch of psychology. For example, you might start by deciding that you want to write a paper on some sort of social psychology topic. Next, you might narrow your focus down to how persuasion can be used to influence behavior .

Other social psychology topics you might consider include:

  • Prejudice and discrimination (i.e., homophobia, sexism, racism)
  • Social cognition
  • Person perception
  • Social control and cults
  • Persuasion, propaganda, and marketing
  • Attraction, romance, and love
  • Nonverbal communication
  • Prosocial behavior

Psychology Research Topics Involving a Disorder or Type of Therapy

Exploring a psychological disorder or a specific treatment modality can also be a good topic for a psychology paper. Some potential abnormal psychology topics include specific psychological disorders or particular treatment modalities, including:

  • Eating disorders
  • Borderline personality disorder
  • Seasonal affective disorder
  • Schizophrenia
  • Antisocial personality disorder
  • Profile a  type of therapy  (i.e., cognitive-behavioral therapy, group therapy, psychoanalytic therapy)

Topics of Psychology Research Related to Human Cognition

Some of the possible topics you might explore in this area include thinking, language, intelligence, and decision-making. Other ideas might include:

  • False memories
  • Speech disorders
  • Problem-solving

Topics of Psychology Research Related to Human Development

In this area, you might opt to focus on issues pertinent to  early childhood  such as language development, social learning, or childhood attachment or you might instead opt to concentrate on issues that affect older adults such as dementia or Alzheimer's disease.

Some other topics you might consider include:

  • Language acquisition
  • Media violence and children
  • Learning disabilities
  • Gender roles
  • Child abuse
  • Prenatal development
  • Parenting styles
  • Aspects of the aging process

Do a Critique of Publications Involving Psychology Research Topics

One option is to consider writing a critique paper of a published psychology book or academic journal article. For example, you might write a critical analysis of Sigmund Freud's Interpretation of Dreams or you might evaluate a more recent book such as Philip Zimbardo's  The Lucifer Effect: Understanding How Good People Turn Evil .

Professional and academic journals are also great places to find materials for a critique paper. Browse through the collection at your university library to find titles devoted to the subject that you are most interested in, then look through recent articles until you find one that grabs your attention.

Topics of Psychology Research Related to Famous Experiments

There have been many fascinating and groundbreaking experiments throughout the history of psychology, providing ample material for students looking for an interesting term paper topic. In your paper, you might choose to summarize the experiment, analyze the ethics of the research, or evaluate the implications of the study. Possible experiments that you might consider include:

  • The Milgram Obedience Experiment
  • The Stanford Prison Experiment
  • The Little Albert Experiment
  • Pavlov's Conditioning Experiments
  • The Asch Conformity Experiment
  • Harlow's Rhesus Monkey Experiments

Topics of Psychology Research About Historical Figures

One of the simplest ways to find a great topic is to choose an interesting person in the  history of psychology  and write a paper about them. Your paper might focus on many different elements of the individual's life, such as their biography, professional history, theories, or influence on psychology.

While this type of paper may be historical in nature, there is no need for this assignment to be dry or boring. Psychology is full of fascinating figures rife with intriguing stories and anecdotes. Consider such famous individuals as Sigmund Freud, B.F. Skinner, Harry Harlow, or one of the many other  eminent psychologists .

Psychology Research Topics About a Specific Career

​Another possible topic, depending on the course in which you are enrolled, is to write about specific career paths within the  field of psychology . This type of paper is especially appropriate if you are exploring different subtopics or considering which area interests you the most.

In your paper, you might opt to explore the typical duties of a psychologist, how much people working in these fields typically earn, and the different employment options that are available.

Topics of Psychology Research Involving Case Studies

One potentially interesting idea is to write a  psychology case study  of a particular individual or group of people. In this type of paper, you will provide an in-depth analysis of your subject, including a thorough biography.

Generally, you will also assess the person, often using a major psychological theory such as  Piaget's stages of cognitive development  or  Erikson's eight-stage theory of human development . It is also important to note that your paper doesn't necessarily have to be about someone you know personally.

In fact, many professors encourage students to write case studies on historical figures or fictional characters from books, television programs, or films.

Psychology Research Topics Involving Literature Reviews

Another possibility that would work well for a number of psychology courses is to do a literature review of a specific topic within psychology. A literature review involves finding a variety of sources on a particular subject, then summarizing and reporting on what these sources have to say about the topic.

Literature reviews are generally found in the  introduction  of journal articles and other  psychology papers , but this type of analysis also works well for a full-scale psychology term paper.

Topics of Psychology Research Based on Your Own Study or Experiment

Many psychology courses require students to design an actual psychological study or perform some type of experiment. In some cases, students simply devise the study and then imagine the possible results that might occur. In other situations, you may actually have the opportunity to collect data, analyze your findings, and write up your results.

Finding a topic for your study can be difficult, but there are plenty of great ways to come up with intriguing ideas. Start by considering your own interests as well as subjects you have studied in the past.

Online sources, newspaper articles, books , journal articles, and even your own class textbook are all great places to start searching for topics for your experiments and psychology term papers. Before you begin, learn more about  how to conduct a psychology experiment .

What This Means For You

After looking at this brief list of possible topics for psychology papers, it is easy to see that psychology is a very broad and diverse subject. While this variety makes it possible to find a topic that really catches your interest, it can sometimes make it very difficult for some students to select a good topic.

If you are still stumped by your assignment, ask your instructor for suggestions and consider a few from this list for inspiration.


By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Maricopa Open Digital Press

Research in Developmental Psychology

What you’ll learn to do: examine how to do research in lifespan development.


How do we know what changes and stays the same (and when and why) in lifespan development? We rely on research that utilizes the scientific method so that we can have confidence in the findings. How data are collected may vary by age group and by the type of information sought. The developmental design (for example, following individuals as they age over time or comparing individuals of different ages at one point in time) will affect the data and the conclusions that can be drawn from them about actual age changes. What do you think are the particular challenges or issues in conducting developmental research, such as with infants and children? Read on to learn more.

Learning outcomes

  • Explain how the scientific method is used in researching development
  • Compare various types and objectives of developmental research
  • Describe methods for collecting research data (including observation, survey, case study, content analysis, and secondary content analysis)
  • Explain correlational research
  • Describe the value of experimental research
  • Compare the advantages and disadvantages of developmental research designs (cross-sectional, longitudinal, and sequential)
  • Describe challenges associated with conducting research in lifespan development

Research in Lifespan Development

How Do We Know What We Know?


An important part of learning any science is having a basic knowledge of the techniques used in gathering information. The hallmark of scientific investigation is that of following a set of procedures designed to keep questioning or skepticism alive while describing, explaining, or testing any phenomenon. Not long ago a friend said to me that he did not trust academicians or researchers because they always seem to change their story. That, however, is exactly what science is all about; it involves continuously renewing our understanding of the subjects in question and an ongoing investigation of how and why events occur. Science is a vehicle for going on a never-ending journey. In the area of development, we have seen changes in recommendations for nutrition, in explanations of psychological states as people age, and in parenting advice. So think of learning about human development as a lifelong endeavor.

Personal Knowledge

How do we know what we know? Take a moment to write down two things that you know about childhood. Okay. Now, how do you know? Chances are you know these things based on your own history (experiential reality), what others have told you, or cultural ideas (agreement reality) (Seccombe and Warner, 2004). There are several problems with personal inquiry or drawing conclusions based on our personal experiences.

Our assumptions very often guide our perceptions; consequently, when we believe something, we tend to see it even if it is not there. Have you heard the saying, “seeing is believing”? Well, the truth is just the opposite: believing is seeing. This problem may just be a result of cognitive ‘blinders’, or it may be part of a more conscious attempt to support our own views. Confirmation bias is the tendency to look for evidence that we are right and, in so doing, to ignore contradictory evidence.

Philosopher Karl Popper suggested that the distinction between that which is scientific and that which is unscientific is that science is falsifiable; scientific inquiry involves attempts to reject or refute a theory or set of assumptions (Thornton, 2005). A theory that cannot be falsified is not scientific. And much of what we do in personal inquiry involves drawing conclusions based on what we have personally experienced or validating our own experience by discussing what we think is true with others who share the same views.

Science offers a more systematic way to make comparisons and guard against bias. One technique used to avoid sampling bias is to select participants for a study in a random way. This means using a technique to ensure that all members have an equal chance of being selected. Simple random sampling may involve using a set of random numbers as a guide in determining who is to be selected. For example, if we have a list of 400 people and wish to randomly select a smaller group or sample to be studied, we use a list of random numbers and select the case that corresponds with that number (Case 39, 3, 217, etc.). This is preferable to asking only those individuals with whom we are familiar to participate in a study; if we conveniently chose only people we know, we know nothing about those who had no opportunity to be selected. There are many more elaborate techniques that can be used to obtain samples that represent the composition of the population we are studying. But even though a randomly selected representative sample is preferable, it is not always used because of costs and other limitations. As a consumer of research, however, you should know how the sample was obtained and keep this in mind when interpreting results. It is possible that what was found was limited to that sample or similar individuals and not generalizable to everyone else.
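
As a small illustration of the simple random sampling described above, the sketch below (Python) draws 30 cases at random from a hypothetical list of 400 people; the list contents and sample size are made up for the example.

```python
import random

# Hypothetical sampling frame: a numbered list of 400 people.
population = [f"Person {i}" for i in range(1, 401)]

# Simple random sample of 30, drawn without replacement: every person on the
# list has an equal chance of being selected.
rng = random.Random(2024)           # fixed seed so the draw can be repeated
sample = rng.sample(population, k=30)

print(sample[:5])                   # first few selected cases
```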

Scientific Methods

The particular method used to conduct research may vary by discipline and since lifespan development is multidisciplinary, more than one method may be used to study human development. One method of scientific investigation involves the following steps:

  • Determining a research question
  • Reviewing previous studies addressing the topic in question (known as a literature review)
  • Determining a method of gathering information
  • Conducting the study
  • Interpreting the results
  • Drawing conclusions; stating limitations of the study and suggestions for future research
  • Making the findings available to others (both to share information and to have the work scrutinized by others)

The findings of these scientific studies can then be used by others as they explore the area of interest. Through this process, a literature or knowledge base is established. This model of scientific investigation presents research as a linear process guided by a specific research question. And it typically involves quantitative research , which relies on numerical data or using statistics to understand and report what has been studied.

Another model of research, referred to as qualitative research, may involve steps such as these:

  • Begin with a broad area of interest and a research question
  • Gain entrance into a group to be researched
  • Gather field notes about the setting, the people, the structure, the activities, or other areas of interest
  • Ask open-ended, broad “grand tour” types of questions when interviewing subjects
  • Modify research questions as the study continues
  • Note patterns or consistencies
  • Explore new areas deemed important by the people being observed
  • Report findings

In this type of research, theoretical ideas are “grounded” in the experiences of the participants. The researcher is the student and the people in the setting are the teachers as they inform the researcher of their world (Glaser & Strauss, 1967). Researchers should be aware of their own biases and assumptions, acknowledge them, and bracket them in efforts to keep them from limiting accuracy in reporting. Sometimes qualitative studies are used initially to explore a topic and more quantitative studies are used to test or explain what was first described.

A good way to become more familiar with these scientific research methods, both quantitative and qualitative, is to look at journal articles, which are written in sections that follow these steps in the scientific process. Most psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the  American Psychological Association  (APA). In general, the structure follows: abstract (summary of the article), introduction or literature review, methods explaining how the study was conducted, results of the study, discussion and interpretation of findings, and references.

Link to Learning

Brené Brown is a bestselling author and social work professor at the University of Houston. She conducts grounded theory research by collecting qualitative data from large numbers of participants. In Brené Brown’s TED Talk The Power of Vulnerability , Brown refers to herself as a storyteller-researcher as she explains her research process and summarizes her results.

Research Methods and Objectives

The main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called  descriptive, or qualitative, studies . These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research, it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Some examples of descriptive questions include:

  • “How much time do parents spend with their children?”
  • “How many times per week do couples have intercourse?”
  • “When is marital satisfaction greatest?”

The main types of descriptive studies include observation, case studies, surveys, and content analysis (which we’ll examine further in the module). Descriptive research is distinct from  correlational research , in which psychologists formally test whether a relationship exists between two or more variables.  Experimental research  goes a step further beyond descriptive and correlational research and randomly assigns people to different conditions, using hypothesis testing to make inferences about how these conditions affect behavior. Some experimental research includes explanatory studies, which are efforts to answer the question “why” such as:

  • “Why have rates of divorce leveled off?”
  • “Why are teen pregnancy rates down?”
  • “Why has the average life expectancy increased?”

Evaluation research is designed to assess the effectiveness of policies or programs. For instance, research might be designed to study the effectiveness of safety programs implemented in schools for installing car seats or fitting bicycle helmets. Do children who have been exposed to the safety programs wear their helmets? Do parents use car seats properly? If not, why not?

Research Methods

We have just learned about some of the various models and objectives of research in lifespan development. Now we’ll dig deeper to understand the methods and techniques used to describe, explain, or evaluate behavior.

All types of research methods have unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data was collected.

Types of Descriptive Research

Observation

Observational studies , also called naturalistic observation, involve watching and recording the actions of participants. This may take place in the natural setting, such as observing children at play in a park, or behind a one-way glass while children are at play in a laboratory playroom. The researcher may follow a checklist and record the frequency and duration of events (perhaps how many conflicts occur among 2-year-olds) or may observe and record as much as possible about an event as a participant (such as attending an Alcoholics Anonymous meeting and recording the slogans on the walls, the structure of the meeting, the expressions commonly used, etc.). The researcher may be a participant or a non-participant. What would be the strengths of being a participant? What would be the weaknesses?

In general, observational studies have the strength of allowing the researcher to see how people behave rather than relying on self-report. One weakness of self-report studies is that what people do and what they say they do are often very different. A major weakness of observational studies is that they do not allow the researcher to explain causal relationships. Yet, observational studies are useful and widely used when studying children. It is important to remember that most people tend to change their behavior when they know they are being watched (known as the Hawthorne effect ) and children may not survey well.

Case Studies

Case studies  involve exploring a single case or situation in great detail. Information may be gathered with the use of observation, interviews, testing, or other methods to uncover as much as possible about a person or situation. Case studies are helpful when investigating unusual situations such as brain trauma or children reared in isolation. And they are often used by clinicians who conduct case studies as part of their normal practice when gathering information about a client or patient coming in for treatment. Case studies can be used to explore areas about which little is known and can provide rich detail about situations or conditions. However, the findings from case studies cannot be generalized or applied to larger populations; this is because cases are not randomly selected and no control group is used for comparison. (Read The Man Who Mistook His Wife for a Hat by Dr. Oliver Sacks as a good example of the case study approach.)

Surveys

Surveys  are familiar to most people because they are so widely used. Surveys enhance accessibility to subjects because they can be conducted in person, over the phone, through the mail, or online. A survey involves asking a standard set of questions to a group of subjects. In a highly structured survey, subjects are forced to choose from a response set such as “strongly disagree, disagree, undecided, agree, strongly agree”; or “0, 1-5, 6-10, etc.” Surveys are commonly used by sociologists, marketing researchers, political scientists, therapists, and others to gather information on many variables in a relatively short period of time. Surveys typically yield surface information on a wide variety of factors, but may not allow for an in-depth understanding of human behavior.

Surveys are useful in examining stated values, attitudes, opinions, and reporting on practices. However, they are based on self-report, or what people say they do rather than on observation, and this can limit accuracy. Validity refers to accuracy and reliability refers to consistency in responses to tests and other measures; great care is taken to ensure the validity and reliability of surveys.

Content Analysis

Content analysis  involves looking at media such as old texts, pictures, commercials, lyrics, or other materials to explore patterns or themes in culture. An example of content analysis is the classic history of childhood by Aries (1962) called “Centuries of Childhood” or the analysis of television commercials for sexual or violent content or for ageism. Passages in text or television programs can be randomly selected for analysis as well. Again, one advantage of analyzing work such as this is that the researcher does not have to go through the time and expense of finding respondents, but the researcher cannot know how accurately the media reflects the actions and sentiments of the population.

Secondary content analysis, or archival research, involves analyzing information that has already been collected or examining documents or media to uncover attitudes, practices, or preferences. There are a number of data sets available to those who wish to conduct this type of research. The researcher conducting secondary analysis does not have to recruit subjects but does need to know the quality of the information collected in the original study. And unfortunately, the researcher is limited to the questions asked and data collected originally.

Correlational and Experimental Research

Correlational Research

When scientists passively observe and measure phenomena it is called correlational research. Here, researchers do not intervene and change behavior, as they do in experiments. In correlational research, the goal is to identify patterns of relationships, but not cause and effect. Importantly, each correlation describes the relationship between exactly two variables at a time, even though a single correlational study may measure many variables and examine them pair by pair.

So, what if you wanted to test whether spending money on others is related to happiness, but you don’t have $20 to give to each participant in order to have them spend it for your experiment? You could use a correlational design—which is exactly what Professor Elizabeth Dunn (2008) at the University of British Columbia did when she conducted research on spending and happiness. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.

Understanding Correlation

Figure 1. Scatterplot of the association between happiness and ratings of the past month, a positive correlation (r = .81).

With a positive correlation, the two variables go up or down together. In a scatterplot, the dots form a pattern that extends from the bottom left to the upper right (just as they do in Figure 1). The r value for a positive correlation is indicated by a positive number (although the positive sign is usually omitted). Here, the r value is .81. For the example above, the direction of the association is positive. This means that people who perceived the past month as being good reported feeling happier, whereas people who perceived the month as being bad reported feeling less happy.

A negative correlation is one in which the two variables move in opposite directions. That is, as one variable goes up, the other goes down. Figure 2 shows the association between the average height of males in a country (y-axis) and the pathogen prevalence (or commonness of disease; x-axis) of that country. In this scatterplot, each dot represents a country. Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that people are shorter in parts of the world where there is more disease. The r-value for a negative correlation is indicated by a negative number—that is, it has a minus (–) sign in front of it. Here, it is –.83.

Figure 2. Scatterplot showing the association between average male height and pathogen prevalence, a negative correlation (r = –.83).
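
For readers who want to see how an r value like those above can be computed, here is a minimal sketch in Python using NumPy and SciPy. The eight pairs of scores are invented for illustration and are not the data behind Figure 1 or Figure 2.

```python
# A minimal sketch of computing a correlation coefficient.
# The eight pairs of scores are invented for illustration.
import numpy as np
from scipy import stats

past_month_rating = np.array([2, 4, 5, 6, 6, 7, 8, 9])   # how good was the past month?
happiness_score   = np.array([3, 4, 6, 5, 7, 7, 9, 9])   # self-reported happiness

r, p_value = stats.pearsonr(past_month_rating, happiness_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")   # positive r: the variables rise together
```

The sign of r gives the direction of the association (positive or negative), and its distance from 0 gives the strength; pearsonr also returns a p-value for testing whether the association could plausibly be due to chance.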

Experimental Research

Experiments are designed to test hypotheses (or specific statements about the relationship between variables) in a controlled setting in an effort to explain how certain factors or events produce outcomes. A variable is anything that changes in value. Concepts are operationalized, or transformed into variables, which means that the researcher must specify exactly what is going to be measured in the study. For example, if we are interested in studying marital satisfaction, we have to specify what marital satisfaction really means or what we are going to use as an indicator of marital satisfaction. What is something measurable that would indicate some level of marital satisfaction? Would it be the amount of time couples spend together each day? Or eye contact during a discussion about money? Or maybe a subject’s score on a marital satisfaction scale? Each of these is measurable, but they may not be equally valid or accurate indicators of marital satisfaction. What do you think? These are the kinds of considerations researchers must make when working through the design.
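
As an entirely hypothetical illustration of operationalization, the sketch below defines marital satisfaction as the average of responses to a short 1-to-5 agreement scale. The scale and scoring rule are assumptions made up for this example, not an established instrument.

```python
# A hypothetical operational definition of "marital satisfaction": the mean of
# responses (1-5) to a short satisfaction scale. The scale and scoring rule
# are illustrative assumptions, not an established instrument.
def marital_satisfaction_score(responses: list[int]) -> float:
    """Operational definition: average agreement (1-5) across scale items."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("each response must be on a 1-5 scale")
    return sum(responses) / len(responses)

print(marital_satisfaction_score([4, 5, 3, 4]))   # 4.0 on a 1-5 scale
```

Choosing a different operational definition, such as time spent together or a published satisfaction inventory, would change what the study actually measures, which is why this step deserves careful thought.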

The experimental method is the only research method that can establish cause-and-effect relationships between variables. Three conditions must be met in order to establish cause and effect, and experimental designs are useful in meeting these conditions:

  • The independent and dependent variables must be related.  In other words, when one is altered, the other changes in response. The independent variable is something altered or introduced by the researcher; sometimes thought of as the treatment or intervention. The dependent variable is the outcome or the factor affected by the introduction of the independent variable; the dependent variable  depends on the independent variable. For example, if we are looking at the impact of exercise on stress levels, the independent variable would be exercise; the dependent variable would be stress.
  • The cause must come before the effect.  Experiments measure subjects on the dependent variable before exposing them to the independent variable (establishing a baseline). So we would measure the subjects’ level of stress before introducing exercise and then again after the exercise to see if there has been a change in stress levels. (Observational and survey research does not always allow us to look at the timing of these events which makes understanding causality problematic with these methods.)
  • The cause must be isolated.  The researcher must ensure that no outside, perhaps unknown variables, are actually causing the effect we see. The experimental design helps make this possible. In an experiment, we would make sure that our subjects’ diets were held constant throughout the exercise program. Otherwise, the diet might really be creating a change in stress level rather than exercise.

A basic experimental design involves beginning with a sample (or subset of a population) and randomly assigning subjects to one of two groups: the experimental group or the control group. Ideally, to prevent bias, the participants would be blind to their condition (not aware of which group they are in) and the researchers would also be blind to each participant’s condition (referred to as “double blind”). The experimental group is the group that will be exposed to the independent variable, the condition the researcher is introducing as a potential cause of an event. The control group is used for comparison; it has the same experience as the experimental group but is not exposed to the independent variable. This helps address the placebo effect, the tendency for participants to change simply because they expect change to happen from taking part. After exposing the experimental group to the independent variable, the two groups are measured again to see if a change has occurred. If so, we are in a better position to suggest that the independent variable caused the change in the dependent variable. The basic experimental model looks like this:

Experimental Group: measure dependent variable → expose to independent variable → measure dependent variable again
Control Group: measure dependent variable → (no exposure) → measure dependent variable again

The major advantage of the experimental design is that it helps establish cause-and-effect relationships. A disadvantage is the difficulty of translating much of what concerns us about human behavior into a laboratory setting.
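
The sketch below shows, with simulated and entirely hypothetical data, how the basic model above might be analyzed in Python with NumPy and SciPy: both groups are measured before and after, only the experimental group receives the independent variable (the exercise program from the running example), and the two groups' change scores are compared with an independent-samples t-test.

```python
# A sketch of the basic pretest/treatment/posttest model with simulated,
# hypothetical data: only the experimental group receives the independent
# variable (an exercise program), and the groups' change scores are compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

exp_pre  = rng.normal(60, 10, 30)             # stress before, experimental group
exp_post = exp_pre - rng.normal(8, 5, 30)     # assumed drop in stress after exercise
ctl_pre  = rng.normal(60, 10, 30)             # stress before, control group
ctl_post = ctl_pre - rng.normal(1, 5, 30)     # small drift without the treatment

t, p = stats.ttest_ind(exp_post - exp_pre, ctl_post - ctl_pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value here would support, though never by itself prove, the claim that the independent variable produced the change in the dependent variable.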

Developmental Research Designs

Now you know about some tools used to conduct research about human development. Remember,  research methods  are tools that are used to collect information. But it is easy to confuse research methods and research design. Research design is the strategy or blueprint for deciding how to collect and analyze information. Research design dictates which methods are used and how. Developmental research designs are techniques used particularly in lifespan development research. When we are trying to describe development and change, the research designs become especially important because we are interested in what changes and what stays the same with age. These techniques try to examine how age, cohort, gender, and social class impact development.

Cross-sectional designs

The majority of developmental studies use cross-sectional designs because they are less time-consuming and less expensive than other developmental designs. Cross-sectional research designs are used to examine behavior in participants of different ages who are tested at the same point in time. Let’s suppose that researchers are interested in the relationship between intelligence and aging. They might have a hypothesis (an educated guess, based on theory or observations) that intelligence declines as people get older. The researchers might choose to give a certain intelligence test to individuals who are 20 years old, individuals who are 50 years old, and individuals who are 80 years old at the same time and compare the data from each age group. This research is cross-sectional in design because the researchers plan to examine the intelligence scores of individuals of different ages within the same study at the same time; they are taking a “cross-section” of people at one point in time. Let’s say that the comparisons find that the 80-year-old adults score lower on the intelligence test than the 50-year-old adults, and the 50-year-old adults score lower on the intelligence test than the 20-year-old adults. Based on these data, the researchers might conclude that individuals become less intelligent as they get older. Would that be a valid (accurate) interpretation of the results?

In the 2010 study, Cohort A (20-year-olds), Cohort B (50-year-olds), and Cohort C (80-year-olds) are all tested at the same time.

No, that would not be a valid conclusion because the researchers did not follow individuals as they aged from 20 to 50 to 80 years old. One of the primary limitations of cross-sectional research is that the results yield information about age differences  not necessarily changes with age or over time. That is, although the study described above can show that in 2010, the 80-year-olds scored lower on the intelligence test than the 50-year-olds, and the 50-year-olds scored lower on the intelligence test than the 20-year-olds, the data used to come up with this conclusion were collected from different individuals (or groups of individuals). It could be, for instance, that when these 20-year-olds get older (50 and eventually 80), they will still score just as high on the intelligence test as they did at age 20. In a similar way, maybe the 80-year-olds would have scored relatively low on the intelligence test even at ages 50 and 20; the researchers don’t know for certain because they did not follow the same individuals as they got older.

It is also possible that the differences found between the age groups are not due to age, per se, but due to cohort effects. The 80-year-olds in this 2010 research grew up during a particular time and experienced certain events as a group. They were born in 1930 and are part of the Traditional or Silent Generation. The 50-year-olds were born in 1960 and are members of the Baby Boomer cohort. The 20-year-olds were born in 1990 and are part of the Millennial or Gen Y Generation. What kinds of things did each of these cohorts experience that the others did not experience or at least not in the same ways?

You may have come up with many differences between these cohorts’ experiences, such as living through certain wars, political and social movements, economic conditions, advances in technology, changes in health and nutrition standards, etc. There may be particular cohort differences that could especially influence performance on intelligence tests, such as education level and use of computers. That is, many of those born in 1930 probably did not complete high school; those born in 1960 may have high school degrees, on average, but the majority did not attain college degrees; the young adults are probably current college students. And this is not even considering additional factors such as gender, race, or socioeconomic status. The young adults are used to taking tests on computers, but the members of the other two cohorts did not grow up with computers and may not be as comfortable if the intelligence test is administered on computers. These cohort differences, rather than age itself, could have influenced the research results.

Another disadvantage of cross-sectional research is that it is limited to one time of measurement. Data are collected at one point in time and it’s possible that something could have happened in that year in history that affected all of the participants, although possibly each cohort may have been affected differently. Just think about the mindsets of participants in research that was conducted in the United States right after the terrorist attacks on September 11, 2001.
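
A minimal sketch of the cross-sectional logic, using made-up scores and SciPy's one-way ANOVA: three age groups are each tested once and then compared. As the discussion above stresses, a significant difference here reflects age differences confounded with cohort, not change within the same individuals.

```python
# A hypothetical cross-sectional comparison: three age groups, each tested
# once, compared with a one-way ANOVA. A significant F shows age differences
# across groups of different people (and different cohorts), not change
# within the same individuals over time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_20 = rng.normal(105, 15, 40)   # 20-year-olds
scores_50 = rng.normal(100, 15, 40)   # 50-year-olds
scores_80 = rng.normal(95, 15, 40)    # 80-year-olds

f, p = stats.f_oneway(scores_20, scores_50, scores_80)
print(f"F = {f:.2f}, p = {p:.3f}")
```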

Longitudinal research designs

A middle-aged woman holds a photograph of her younger self.

Longitudinal research involves beginning with a group of people who may be of the same age and background (cohort) and measuring them repeatedly over a long period of time. One of the benefits of this type of research is that people can be followed through time and compared with themselves when they were younger; therefore, changes with age over time can actually be measured. What would be the advantages and disadvantages of longitudinal research? Problems with this type of research include being expensive, taking a long time, and subjects dropping out over time. Think about the film 63 Up, part of the Up Series mentioned earlier, which is an example of following individuals over time. In the videos, filmed every seven years, you see how people change physically, emotionally, and socially through time; and some remain the same in certain ways, too. But many of the participants really disliked being part of the project and repeatedly threatened to quit; one disappeared for several years; another died before her 63rd year. Would you want to be interviewed every seven years? Would you want to have it made public for all to watch?

Longitudinal research designs are used to examine behavior in the same individuals over time. For instance, with our example of studying intelligence and aging, a researcher might conduct a longitudinal study to examine whether 20-year-olds become less intelligent with age over time. To this end, a researcher might give an intelligence test to individuals when they are 20 years old, again when they are 50 years old, and then again when they are 80 years old. This study is longitudinal in nature because the researcher plans to study the same individuals as they age. Based on these data, the pattern of intelligence and age might look different from the cross-sectional findings; it might be found that participants’ intelligence scores are higher at age 50 than at age 20 and then remain stable or decline a little by age 80. How can that be when cross-sectional research revealed declines in intelligence with age?

The same person, "Person A" is 20 years old in 2010, 50 years old in 2040, and 80 in 2070.

Since longitudinal research happens over a period of time (which could be short term, as in months, but is often longer, as in years), there is a risk of attrition. Attrition occurs when participants fail to complete all portions of a study. Participants may move, change their phone numbers, die, or simply become disinterested in participating over time. Researchers should account for the possibility of attrition by enrolling a larger sample into their study initially, as some participants will likely drop out over time. There is also something known as  selective attrition— this means that certain groups of individuals may tend to drop out. It is often the least healthy, least educated, and lower socioeconomic participants who tend to drop out over time. That means that the remaining participants may no longer be representative of the whole population, as they are, in general, healthier, better educated, and have more money. This could be a factor in why our hypothetical research found a more optimistic picture of intelligence and aging as the years went by. What can researchers do about selective attrition? At each time of testing, they could randomly recruit more participants from the same cohort as the original members, to replace those who have dropped out.
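
The following sketch, with hypothetical numbers in Python and NumPy, shows why selective attrition matters: if lower-scoring participants are less likely to return for a later wave, the retained sample looks better at follow-up even when no individual has changed at all.

```python
# A hypothetical sketch of selective attrition. Lower-scoring participants are
# made less likely to return for wave 2, so the mean of the retained sample is
# inflated even though no individual's score has changed.
import numpy as np

rng = np.random.default_rng(2)
wave1 = rng.normal(100, 15, 1000)                # wave-1 test scores

# Probability of returning rises with the wave-1 score.
p_return = 1 / (1 + np.exp(-(wave1 - 100) / 10))
returned = rng.random(1000) < p_return

print(f"Wave-1 mean, full sample:    {wave1.mean():.1f}")
print(f"Wave-1 mean, returners only: {wave1[returned].mean():.1f}")   # inflated
```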

The results from longitudinal studies may also be impacted by repeated assessments. Consider how well you would do on a math test if you were given the exact same exam every day for a week. Your performance would likely improve over time, not necessarily because you developed better math abilities, but because you were continuously practicing the same math problems. This phenomenon is known as a practice effect. Practice effects occur when participants become better at a task over time because they have done it again and again (not due to natural psychological development). So our participants may have become familiar with the intelligence test each time (and with the computerized testing administration). Another limitation of longitudinal research is that the data are limited to only one cohort.

Sequential research designs

Sequential research designs include elements of both longitudinal and cross-sectional research designs. Similar to longitudinal designs, sequential research features participants who are followed over time; similar to cross-sectional designs, sequential research includes participants of different ages. This research design is also distinct from those that have been discussed previously in that individuals of different ages are enrolled into a study at various points in time to examine age-related changes, development within the same individuals as they age, and to account for the possibility of cohort and/or time of measurement effects. In 1965, K. Warner Schaie described particular sequential designs: cross-sequential, cohort sequential, and time-sequential. The differences between them depended on which variables were focused on for analyses of the data (data could be viewed in terms of multiple cross-sectional designs or multiple longitudinal designs or multiple cohort designs). Ideally, by comparing results from the different types of analyses, the effects of age, cohort, and time in history could be separated out.
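
One way to picture a sequential design is as a small table of cohort by time of measurement. The toy pandas sketch below, with invented values, shows how the same data can be sliced as a longitudinal series for one cohort or as a cross-section of several cohorts tested in the same year, which is what allows age, cohort, and time-of-measurement effects to begin to be separated.

```python
# A toy layout for a sequential design (hypothetical values): each cohort is
# measured at several times, so the same table can be read as longitudinal
# series (one cohort across years) or as cross-sections (all cohorts in one year).
import pandas as pd

data = pd.DataFrame({
    "cohort_birth_year": [1960, 1960, 1990, 1990],
    "test_year":         [2010, 2020, 2010, 2020],
    "mean_score":        [100,   98,  105,  104],
})

longitudinal = data[data["cohort_birth_year"] == 1960]   # one cohort over time
cross_section = data[data["test_year"] == 2010]          # all cohorts in 2010
print(longitudinal, cross_section, sep="\n\n")
```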

Challenges Conducting Developmental Research

The previous sections describe research tools to assess development across the lifespan, as well as the ways that research designs can be used to track age-related changes and development over time. Before you begin conducting developmental research, however, you must also be aware that testing individuals of certain ages (such as infants and children) or making comparisons across ages (such as children compared to teens) comes with its own unique set of challenges. In the final section of this module, let’s look at some of the main issues that are encountered when conducting developmental research, namely ethical concerns, recruitment issues, and participant attrition.

Ethical Concerns

You may already know that Institutional Review Boards (IRBs) must review and approve all research projects that are conducted at universities, hospitals, and other institutions (each broad discipline or field, such as psychology or social work, often has its own code of ethics that must also be followed, regardless of institutional affiliation). An IRB is typically a panel of experts who read and evaluate proposals for research. IRB members want to ensure that the proposed research will be carried out ethically and that the potential benefits of the research outweigh the risks and potential harm (psychological as well as physical harm) for participants.

What you may not know though, is that the IRB considers some groups of participants to be more vulnerable or at-risk than others. Whereas university students are generally not viewed as vulnerable or at-risk, infants and young children commonly fall into this category. What makes infants and young children more vulnerable during research than young adults? One reason infants and young children are perceived as being at increased risk is due to their limited cognitive capabilities, which makes them unable to state their willingness to participate in research or tell researchers when they would like to drop out of a study. For these reasons, infants and young children require special accommodations as they participate in the research process. Similar issues and accommodations would apply to adults who are deemed to be of limited cognitive capabilities.

When thinking about special accommodations in developmental research, consider the informed consent process. If you have ever participated in scientific research, you may know through your own experience that adults commonly sign an informed consent statement (a contract stating that they agree to participate in research) after learning about a study. As part of this process, participants are informed of the procedures to be used in the research, along with any expected risks or benefits. Infants and young children cannot verbally indicate their willingness to participate, much less understand the balance of potential risks and benefits. As such, researchers are oftentimes required to obtain written informed consent from the parent or legal guardian of the child participant, an adult who is almost always present as the study is conducted. In fact, children are not asked to indicate whether they would like to be involved in a study at all (a process known as assent) until they are approximately seven years old. Because infants and young children cannot easily indicate if they would like to discontinue their participation in a study, researchers must be sensitive to changes in the state of the participant (determining whether a child is too tired or upset to continue) as well as to parent desires (in some cases, parents might want to discontinue their involvement in the research). As in adult studies, researchers must always strive to protect the rights and well-being of the minor participants and their parents when conducting developmental research.

Recruitment

An additional challenge in developmental science is participant recruitment. Recruiting university students to participate in adult studies is typically easy; unfortunately, infants and young children cannot be recruited in the same ways. Given this limitation, how do researchers go about finding infants and young children to be in their studies?

The answer to this question varies along multiple dimensions. Researchers must consider the number of participants they need and the financial resources available to them, among other things. Location may also be an important consideration. Researchers who need large numbers of infants and children may attempt to recruit them by obtaining infant birth records from the state, county, or province in which they reside. Researchers can choose to pay a recruitment agency to contact and recruit families for them.  More economical recruitment options include posting advertisements and fliers in locations frequented by families, such as mommy-and-me classes, local malls, and preschools or daycare centers. Researchers can also utilize online social media outlets like Facebook, which allows users to post recruitment advertisements for a small fee. Of course, each of these different recruitment techniques requires IRB approval. And if children are recruited and/or tested in school settings, permission would need to be obtained ahead of time from teachers, schools, and school districts (as well as informed consent from parents or guardians).

And what about the recruitment of adults? While it is easy to recruit young college students to participate in research, some would argue that it is too easy and that college students are samples of convenience. They are not randomly selected from the wider population, and they may not represent all young adults in our society (this was particularly true in the past with certain cohorts, as college students tended to be mainly white males of high socioeconomic status). In fact, in the early research on aging, this type of convenience sample was compared with another type of convenience sample—young college students tended to be compared with residents of nursing homes! Fortunately, it didn’t take long for researchers to realize that older adults in nursing homes are not representative of the older population; they tend to be the oldest and sickest (physically and/or psychologically). Those initial studies probably painted an overly negative view of aging, as young adults in college were being compared to older adults who were not healthy, had not been in school nor taken tests in many decades, and probably did not graduate high school, let alone college. As we can see, recruitment and random sampling can be significant issues in research with adults, as well as infants and children. For instance, how and where would you recruit middle-aged adults to participate in your research?

A tired looking mother closes her eyes and rubs her forehead as her baby cries.

Another important consideration when conducting research with infants and young children is attrition . Although attrition is quite common in longitudinal research in particular (see the previous section on longitudinal designs for an example of high attrition rates and selective attrition in lifespan developmental research), it is also problematic in developmental science more generally, as studies with infants and young children tend to have higher attrition rates than studies with adults.  Infants and young children are more likely to tire easily, become fussy, and lose interest in the study procedures than are adults. For these reasons, research studies should be designed to be as short as possible – it is likely better to break up a large study into multiple short sessions rather than cram all of the tasks into one long visit to the lab. Researchers should also allow time for breaks in their study protocols so that infants can rest or have snacks as needed. Happy, comfortable participants provide the best data.

Conclusions

Lifespan development is a fascinating field of study – but care must be taken to ensure that researchers use appropriate methods to examine human behavior, use the correct experimental design to answer their questions, and be aware of the special challenges that are part-and-parcel of developmental research. After reading this module, you should have a solid understanding of these various issues and be ready to think more critically about research questions that interest you. For example, what types of questions do you have about lifespan development? What types of research would you like to conduct? Many interesting questions remain to be examined by future generations of developmental scientists – maybe you will make one of the next big discoveries!

Woman reading to two young children

Lifespan development is the scientific study of how and why people change or remain the same over time. As we are beginning to see, lifespan development involves multiple domains and many ages and stages that are important in and of themselves, but that are also interdependent and dynamic and need to be viewed holistically. There are many influences on lifespan development at individual and societal levels (including genetics); cultural, generational, economic, and historical contexts are often significant. And how developmental research is designed and data are collected, analyzed, and interpreted can affect what is discovered about human development across the lifespan.

Lifespan Development Copyright © 2020 by Julie Lazzara is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Narrative Summary of Jastrow’s “The Section of Psychology”

Overview:  

As someone interested in the history of psychology, I find Jastrow’s description of the psychology exhibits at the World’s Columbian Exposition fascinating. He delves into the purpose and design of the psychology laboratory, outlining various tests designed to assess mental abilities like judgment, touch, memory, and reaction time. The text also explores the diverse apparatus displayed in the apparatus room, providing insights into the methods used to study sensation, perception, movement, and other psychological phenomena.

Main Parts:

  • The Psychology Laboratory:  This section describes the laboratory’s purpose and design, focusing on its role in testing and collecting data on various mental abilities. The laboratory aimed to provide individuals with an understanding of their own strengths and weaknesses through a series of tests, contributing to a broader statistical understanding of human mental capabilities.
  • The Series of Tests:  Jastrow meticulously details the tests conducted in the laboratory, including their purpose, methodology, and the types of mental abilities being measured. These tests spanned a wide range, examining judgment of length and weight, touch sensitivity, reaction times, memory, and more. Each test was carefully described, including the apparatus used and the type of information obtained.
  • Apparatus Room:  This section explores the vast collection of apparatus displayed in the apparatus room, highlighting the devices used for research on various psychological phenomena. The apparatus was organized by its purpose, showcasing devices for studying touch, vision, color sense, hearing, movement, and more. Jastrow provides detailed descriptions of the types of apparatus and their applications in understanding human mental functions.

View on Life:  While the text focuses on the scientific study of psychology, there is a sense of humanism present. Jastrow emphasizes the importance of understanding individual strengths and weaknesses, highlighting the value of individual testing as a means of self-discovery. He also stresses the importance of research into the factors influencing mental development and the implications for education, medicine, and society.

Scenarios:  The text doesn’t present specific scenarios or situations encountered by individuals taking the tests. However, it implicitly suggests that the testing environment would have provided individuals with a glimpse into their own mental abilities, potentially leading to self-reflection and a greater awareness of their own strengths and weaknesses.

Challenges:  Jastrow acknowledges the difficulties associated with mental testing, noting that mental capabilities are subject to variations and fluctuations. Factors like novelty, fatigue, and physical condition can influence test results, leading to potential inaccuracies in individual records. These challenges, however, do not detract from the overall value of the research, which aims to provide a broader statistical understanding of human mental capabilities.

Conflict:  The text does not present any significant conflicts or challenges faced by the individuals taking the tests.

Plot:  There is no overarching plot or story arc in the text. It presents a descriptive account of the psychology exhibits at the World’s Columbian Exposition, highlighting the various tests and apparatus used to study mental phenomena.

Point of View:  The text is written from a third-person perspective, offering an objective description of the psychology exhibits at the exposition. This perspective allows for a detailed and informative presentation of the laboratory, its tests, and the apparatus displayed.

How It’s Written:  Jastrow’s writing style is clear, concise, and informative. He uses a formal and academic tone, employing scientific terminology and providing detailed descriptions of the apparatus and test methods. For example, in describing the “Judgment of lengths by finger movements” test, he writes: “Five bars with terminal stops are arranged horizontally, the one above the other and a little behind the other; the subject passes his forefinger to and fro along these bars and so forms a judgment of their relative length.” This sentence exemplifies his precise and objective approach to describing the tests.

Tone:  The overall tone of the text is informative and objective. Jastrow presents a factual account of the psychology exhibits, emphasizing the scientific methods and apparatus used to study mental phenomena.

Life Choices:  The text does not explicitly discuss life choices made by individuals or the reasoning behind those choices.

Lessons:  The text implicitly suggests a number of lessons related to psychology and the importance of understanding human mental abilities. These lessons include:

  • The value of self-awareness:  Understanding one’s strengths and weaknesses through testing can provide valuable insights into oneself and one’s capabilities.
  • The influence of factors on mental performance:  Factors like fatigue, novelty, and physical condition can affect mental performance, highlighting the importance of considering these factors when assessing mental abilities.
  • The importance of research:  Jastrow emphasizes the importance of ongoing research into mental processes and their implications for education, medicine, and society.

Characters:  The primary characters in the text are the individuals participating in the tests, referred to as “the subject.” Their traits are not detailed, but they are implied to be individuals seeking to understand their own mental abilities.

Themes:  The text explores several important themes related to psychology and the study of human mental processes:

  • Objectivity in scientific inquiry:  Jastrow emphasizes the importance of objective observation and measurement in the study of psychology, showcasing the scientific methods used to analyze and understand mental phenomena.
  • The power of self-discovery:  The text suggests that understanding one’s own mental strengths and weaknesses can lead to self-discovery and personal growth.
  • The impact of psychology on society:  Jastrow highlights the potential applications of psychology in education, medicine, and other fields, suggesting that the study of mental processes can contribute to a better understanding of human behavior and well-being.

Principles:  The text implicitly suggests several fundamental truths about psychology, including:

  • The mind is a complex system:  The diverse range of tests and apparatus demonstrates the complexity of the human mind and the many factors that contribute to mental performance.
  • Mental abilities can be measured:  The tests showcase the possibility of objectively measuring and analyzing various mental capabilities.
  • Understanding mental processes can improve human life:  Jastrow’s emphasis on the potential applications of psychology suggests that understanding mental processes can lead to improvements in education, medicine, and society as a whole.

Intentions:

  • Intentions of the characters:  The individuals participating in the tests likely sought to gain a better understanding of their own mental abilities. They may have been curious about their strengths and weaknesses or seeking to improve their performance in specific areas.
  • Intentions of the reader:  The reader of this text is likely someone interested in the history of psychology and the development of scientific methods for studying mental phenomena. They may be interested in learning about the apparatus used, the types of tests conducted, and the early efforts to understand human mental processes.

Unique Vocabulary:  Jastrow utilizes specific terminology related to psychology and the apparatus used for testing, including terms like “æsthesiometers,” “kymograph,” “chronoscope,” and “tachistoscope.” These words highlight the scientific approach taken to study mental phenomena.

Anecdotes:  The text does not contain any specific anecdotes or key stories to illustrate points. It is primarily a descriptive account of the exhibits at the World’s Columbian Exposition.

Ideas:  Jastrow puts forth several ideas about psychology, including:

  • The importance of testing and measurement in understanding mental abilities.
  • The role of the psychology laboratory in contributing to a broader understanding of human mental capabilities.
  • The diverse range of apparatus and techniques used to study mental phenomena.
  • The potential applications of psychology in education, medicine, and society.

Facts and Findings:  The text does not contain any specific facts or findings, but rather provides a descriptive account of the psychology exhibits.

Statistics:  The text mentions that the tests conducted were based on data collected from 850 individuals at nine different colleges, with college students predominating.

Perspective:  The text presents a scientific perspective on the study of psychology, emphasizing objective observation, measurement, and the use of apparatus to understand mental processes. The perspective is also historically significant, offering a glimpse into the early development of experimental psychology in the late 19th century.

This Old Experiment With Mice Led to Bleak Predictions for Humanity’s Future

From the 1950s to the 1970s, researcher John Calhoun gave rodents unlimited food and studied their behavior in overcrowded conditions

Maris Fessenden; updated by Rudy Molinek

What does utopia look like for mice and rats? According to a researcher who did most of his work in the 1950s through 1970s, it might include limitless food, multiple levels and secluded little condos. These were all part of John Calhoun’s experiments to study the effects of population density on behavior. But what looked like rodent paradises at first quickly spiraled into out-of-control overcrowding, eventual population collapse and seemingly sinister behavior patterns.

In other words, the mice were not nice.

Working with rats between 1958 and 1962, and with mice from 1968 to 1972, Calhoun set up experimental rodent enclosures at the National Institute of Mental Health’s Laboratory of Psychology. He hoped to learn more about how humans might behave in a crowded future. His first 24 attempts ended early due to constraints on laboratory space. But his 25th attempt at a utopian habitat, which began in 1968, would become a landmark psychological study. According to Gizmodo ’s Esther Inglis-Arkell, Calhoun’s “Universe 25” started when the researcher dropped four female and four male mice into the enclosure.

By the 560th day, the population peaked with over 2,200 individuals scurrying around, waiting for food and sometimes erupting into open brawls. These mice spent most of their time in the presence of hundreds of other mice. When they became adults, those mice that managed to produce offspring were so stressed out that parenting became an afterthought.

“Few females carried pregnancies to term, and the ones that did seemed to simply forget about their babies,” wrote Inglis-Arkell in 2015. “They’d move half their litter away from danger and forget the rest. Sometimes they’d drop and abandon a baby while they were carrying it.”

A select group of mice, which Calhoun called “the beautiful ones,” secluded themselves in protected places with a guard posted at the entry. They didn’t seek out mates or fight with other mice, wrote Will Wiles in Cabinet magazine in 2011, “they just ate, slept and groomed, wrapped in narcissistic introspection.”

Eventually, several factors combined to doom the experiment. The beautiful ones’ chaste behavior lowered the birth rate. Meanwhile, out in the overcrowded common areas, the few remaining parents’ neglect increased infant mortality. These factors sent the mice society over a demographic cliff. Just over a month after population peaked, around day 600, according to Distillations magazine ’s Sam Kean, no baby mice were surviving more than a few days. The society plummeted toward extinction as the remaining adult mice were just “hiding like hermits or grooming all day” before dying out, writes Kean.

Calhoun launched his experiments with the intent of translating his findings to human behavior. Ideas of a dangerously overcrowded human population were popularized by Thomas Malthus at the end of the 18th century with his book An Essay on the Principle of Population . Malthus theorized that populations would expand far faster than food production, leading to poverty and societal decline. Then, in 1968, the same year Calhoun set his ill-fated utopia in motion, Stanford University entomologist Paul Ehrlich published The Population Bomb . The book sparked widespread fears of an overcrowded and dystopic imminent future, beginning with the line, “The battle to feed all of humanity is over.”

Ehrlich suggested that the impending collapse mirrored the conditions Calhoun would find in his experiments. The cause, wrote Charles C. Mann for Smithsonian magazine in 2018, would be “too many people, packed into too-tight spaces, taking too much from the earth. Unless humanity cut down its numbers—soon—all of us would face ‘mass starvation’ on ‘a dying planet.’”

Calhoun’s experiments were interpreted at the time as evidence of what could happen in an overpopulated world. The unusual behaviors he observed—such as open violence, a lack of interest in sex and poor pup-rearing—he dubbed “behavioral sinks.”

After Calhoun wrote about his findings in a 1962 issue of Scientific American , that term caught on in popular culture, according to a paper published in the Journal of Social History . The work tapped into the era’s feeling of dread that crowded urban areas heralded the risk of moral decay.

Events like the murder of Kitty Genovese in 1964—in which false reports claimed 37 witnesses stood by and did nothing as Genovese was stabbed repeatedly—only served to intensify the worry. Despite the misinformation, media discussed the case widely as emblematic of rampant urban moral decay. A host of science fiction works—films like Soylent Green , comics like 2000 AD —played on Calhoun’s ideas and those of his contemporaries . For example, Soylent Green ’s vision of a dystopic future was set in a world maligned by pollution, poverty and overpopulation.

Now, interpretations of Calhoun’s work have changed. Inglis-Arkell explains that the main problem of the habitats he created wasn’t really a lack of space. Rather, it seems likely that Universe 25’s design enabled aggressive mice to stake out prime territory and guard pens that only a limited number of mice could use, leading to overcrowding everywhere else in the enclosure.

However we interpret Calhoun’s experiments, though, we can take comfort in the fact that humans are not rodents. Follow-up experiments by other researchers, which looked at human subjects, found that crowded conditions didn’t necessarily lead to negative outcomes like stress, aggression or discomfort.

“Rats may suffer from crowding,” medical historian Edmund Ramsden told the NIH Record ’s Carla Garnett in 2008, “human beings can cope.”

Maris Fessenden is a freelance science writer and artist who appreciates small things and wide open spaces.

Rudy Molinek is Smithsonian magazine's 2024 AAAS Mass Media Fellow.

Behavioural and Economic Science (Science Track) (MSc) (2024 entry)

A student and member of staff from Psychology having a conversation.

  • Course code
  • Start date: 30 September 2024
  • Duration: 1 year full-time
  • Qualification
  • Location: University of Warwick

Explore our Behavioural and Economic Science (Science Track) taught Master's degree.

Our MSc in Behavioural and Economic Science (Science Track) combines multidisciplinary expertise from the departments of Psychology and Economics, as well as Warwick Business School. This course offers you training in basic psychology and behavioural economics, whilst allowing you to focus on the cognitive science of judgement and decision-making.

Course overview

This innovative course in the growing area of decision science and behavioural economics combines multidisciplinary expertise from the Department of Psychology, Department of Economics and Warwick Business School (WBS). The course emphasises both theoretical foundations and real-world application of core and advanced areas of behavioural economics, and the cognitive science of judgement and decision making. The Science Track variation of the course is designed for students with a first degree in a science-based subject, such as Psychology, Maths, Biology, etc. or a subject with a strong quantitative element, such as Business, Finance, etc.

A variation of the course is offered by the Department of Economics and is available if you have a first degree in Economics.

Skills from this degree

By the end of the course, you should be able to:

  • Gain a deeper understanding of how and why people make the choices they do
  • Understand how influencing such choices is important across a variety of domains, from public policy (e.g. encouraging people to save for pensions), through to industry (e.g. how to place a new product in the market), and individual behaviour (e.g. why people drink and eat too much).
  • Develop a theoretical understanding of key models and results in behavioural economics and judgment and decision making
  • Design, conduct and analyse behavioural experiments
  • Implement models of choice
  • Access and analyse large-scale datasets
  • Initiate economic enquiry and test economic models
  • Assess and deploy potential behavioural interventions

General entry requirements

Minimum requirements.

2:1 undergraduate degree (or equivalent) in a related subject.

The MSc Behavioural and Economic Science is a quantitative degree and you should feel comfortable taking a mathematical approach to your thinking.

On the MSc we cover the use of statistics to make sense of behavioural data (e.g. regression and ANOVAs). We introduce the R and Matlab programming languages for statistics and mathematical modelling (though we do not assume you have previous experience of these languages). We use maths in economic and psychological models.

You should be familiar with some of: elementary calculus, basic geometry, a really basic knowledge of sets, functions like logarithms, exponentials, powers, probability and probability distributions. You do not need to know all of these things, but you should not be frightened about learning about them! Such a quantitative approach is a really great way to understand data from field studies and experiments, and big data sets and surveys. It is also a great way to formalise and think about ideas about how people behave and the aggregate consequences of this behaviour.

English language requirements

You can find out more about our English language requirements. This course requires the following:

  • IELTS overall score of 7.0, minimum component scores of two at 6.0/6.5 and the rest at 7.0 or above.

International qualifications

We welcome applications from students with other internationally recognised qualifications.

For more information, please visit the international entry requirements page.

Additional requirements

There are no additional entry requirements for this course.

Core modules

You will usually study three core modules across Psychology, Economics, and WBS, as well as complete a Behavioural and Economic Science project during the summer.

The three modules usually include:

Behavioural Microeconomics

The aim of this module is to examine the foundations of microeconomic analysis from a behavioural perspective and introduce basic microeconomic concepts to non-economists. It will achieve this objective by subjecting many of the fundamental assumptions made in standard undergraduate degree courses to close critical scrutiny. It will familiarise you with recent research developments in behavioural economics and the possible implications for theory and policy raised by these developments.

Issues in Psychological Science

This module covers core psychology and behavioural science content relevant to later modules in the degree, including memory, attention, perception, personality and individual differences, choice, and subjective well-being. It will provide you with the psychological background to enable you to understand and critically evaluate material on those later modules. Through a combination of lectures, seminars, and laboratory-based sessions, you will learn about both models and data in the relevant areas of psychology. You will also learn basic MATLAB programming and model implementation.

Methods and Analysis in Behavioural Science

The purpose of the module is to introduce you to experimental design and statistical programming. Behavioural scientists need statistical analysis of experimental data and of large data sets. This module covers these topics to allow you to understand how to test hypotheses, plan experimental design and perform statistical analysis using R.

Optional modules

Optional modules can vary from year to year. Example optional modules may include:

  • Experimental Economics
  • Behavioural Economics
  • Principles of Cognition
  • Psychological Models of Choice
  • Behavioural Change: Nudging and Persuasion
  • Bayesian Approaches in Behavioural Science
  • Neuroeconomics
  • Behavioural Ethics

You will choose a number of optional modules to complete.

You will have a combination of lectures, seminars, and practical classes/workshops, depending on the module. Lectures introduce you to a particular topic, seminars build on that knowledge, and practical classes/workshops allow you to put what you are learning into practice alongside tutors knowledgeable in the topic.

Class sizes

Class sizes will naturally vary; however, this course typically has around 25-30 students.

Typical contact hours

Teaching occurs throughout the week, with an average of 8-12 hours of lectures and 5-7 hours of workshops, practical classes and/or seminars per week. You will also have meetings with your personal tutor at regular intervals throughout your course.

We typically assess modules through a mix of assessment types, which include worksheets, essays, research reports, modelling and data analysis, class tests, exams, and presentations.

Your timetable

Your personalised timetable will be complete when you are registered for all modules, compulsory and optional, and you have been allocated to your lectures, seminars and other small group classes. Your compulsory modules will be registered for you and you will be able to choose your optional modules when you join us.

Your career

Graduates from this course have gone on to work at places including: Decision Technology, the Commonwealth Bank, the Busara Center, the Behavioural Insights Team, and Cowry Consulting.

Our department has a dedicated professionally-qualified Senior Careers Consultant offering impartial advice and guidance together with workshops and events throughout the year. We also encourage you to attend a number of networking events held each year, and we hold a series of careers-focused workshops which have previously included topics such as:

  • Careers in Behavioural Science
  • Applying for PhDs in the Behavioural Sciences

Psychology at Warwick

A playground for the mind

Our research-driven department can offer you the kind of physical and intellectual environment that’ll inspire you to succeed. We pride ourselves on being a friendly, inclusive academic community offering a stimulating, intellectual environment to students and staff. We’re large enough to provide excellent resources and education, but also small enough to know who you are and provide one-to-one support.

Find out more about us on our website.

Our Postgraduate courses

  • Behavioural and Data Science (MSc)
  • Behavioural and Economic Science (MSc)
  • Clinical Applications of Psychology (MSc)
  • Mental Health and Wellbeing (MSc)
  • Psychological Research (MSc)
  • Psychology (MSc by Research)
  • Psychology (MPhil/PhD)

Tuition fees

Tuition fees are payable for each year of your course at the start of the academic year, or at the start of your course, if later. Academic fees cover the cost of tuition, examinations and registration and some student amenities.

Find your taught course fees  

Fee Status Guidance

We carry out an initial fee status assessment based on the information you provide in your application. Students will be classified as Home or Overseas fee status. Your fee status determines tuition fees, and what financial support and scholarships may be available. If you receive an offer, your fee status will be clearly stated alongside the tuition fee information.

Do you need your fee classification to be reviewed?

If you believe that your fee status has been classified incorrectly, you can complete a fee status assessment questionnaire. Please follow the instructions in your offer information and provide the documents needed to reassess your status.

Find out more about how universities assess fee status

Additional course costs

As well as tuition fees and living expenses, some courses may require you to cover the cost of field trips or costs associated with travel abroad.

For departmental specific costs, please see the Modules tab on the course web page for the list of core and optional core modules with hyperlinks to our  Module Catalogue  (please visit the Department’s website if the Module Catalogue hyperlinks are not provided).

Associated costs can be found on the Study tab for each module listed in the Module Catalogue (please note most of the module content applies to the 2022/23 year of study). Information about department-specific module costs should be considered in conjunction with the more general costs below:

  • Core text books
  • Printer credits
  • Dissertation binding
  • Robe hire for your degree ceremony

Scholarships and bursaries

Scholarships and financial support

Find out about the different funding routes available, including postgraduate loans, scholarships, fee awards and academic department bursaries.

Living costs

Find out more about the cost of living as a postgraduate student at the University of Warwick.

Find out how to apply to us, ask your questions, and find out more.

How to apply.

The application process for courses that start in September and October 2024 will open on 2 October 2023.

There are three application deadlines for the course:

  • The early deadline is 31 December 2023; all applications received by this date will be considered in January 2024 and a decision returned soon thereafter.
  • The middle deadline is 31 March 2024; all applications received between 1 January 2024 and 31 March 2024 will be considered in April 2024 and a decision returned soon thereafter.
  • The late, and final, deadline is 30 June 2024; all applications received between 1 April 2024 and 30 June 2024 will be considered in July 2024 and a decision returned soon thereafter.

Applications will close on 30 June 2024 and no applications received after this date will be considered.

How to apply for a postgraduate taught course  

After you’ve applied

Find out how we process your application.

Applicant Portal

Track your application and update your details.

Admissions statement

See Warwick’s postgraduate admissions policy.

Join a live chat

Ask questions and engage with Warwick.

Warwick Hosted Events

Postgraduate fairs.

Throughout the year we attend exhibitions and fairs online and in-person around the UK. These events give you the chance to explore our range of postgraduate courses, and find out what it’s like studying at Warwick. You’ll also be able to speak directly with our student recruitment team, who will be able to help answer your questions.

Join a live chat with our staff and students, who are here to answer your questions and help you learn more about postgraduate life at Warwick. You can join our general drop-in sessions or talk to your prospective department and student services.

Departmental events

Some academic departments hold events for specific postgraduate programmes; these are fantastic opportunities to learn more about Warwick and your chosen department and course.

See our online departmental events

Warwick Talk and Tours

A Warwick talk and tour lasts around two hours and consists of an overview presentation from one of our Recruitment Officers covering the key features, facilities and activities that make Warwick a leading institution. The talk is followed by a campus tour, with a current student guiding you around the key areas of campus.

Connect with us

Learn more about Postgraduate study at the University of Warwick.

Why Warwick

Discover why Warwick is one of the best universities in the UK and renowned globally.

9th in the UK (The Guardian University Guide 2024)

69th in the world (QS World University Rankings 2025)

6th most targeted university by the UK's top 100 graduate employers (The Graduate Market in 2024, High Fliers Research Ltd.)

About the information on this page

This information is applicable for 2024 entry. Given the interval between the publication of courses and enrolment, some of the information may change. It is important to check our website before you apply. Please read our terms and conditions to find out more.

Design, Analysis and Experiment of a Modular Deployable Continuum Robot

1. Introduction
2. Design of SLPM Unit Based on Origami Structure
3. Kinematic Analysis
  3.1. Forward Kinematics
  3.2. Workspace
  3.3. Inverse Kinematics
    3.3.1. Inverse Kinematics of Single Module
    3.3.2. Inverse Kinematics of Robot
4. Finite Element Simulation
5. Control System and Experiment
  5.1. Control System
  5.2. Experiment of Continuum Robot
    5.2.1. Folding Performance Evaluation
    5.2.2. Bending Performance Evaluation
    5.2.3. Accuracy Testing and Optimisation
6. Conclusions
Author Contributions, Data Availability Statement, Acknowledgments, Conflicts of Interest

  • Russo, M.; Sadati, S.M.H.; Dong, X.; Mohammad, A.; Walker, I.D.; Bergeles, C.; Xu, K.; Axinte, D.A. Continuum Robots: An Overview. Adv. Intell. Syst. 2023 , 5 , 2200367. [ Google Scholar ] [ CrossRef ]
  • Gravagne, I.A.; Rahn, C.D.; Walker, I.D. Large deflection dynamics and control for planar continuum robots. IEEE/ASME Trans. Mechatron. 2003 , 8 , 299–307. [ Google Scholar ] [ CrossRef ]
  • Hannan, M.W.; Walker, I.D. Kinematics and the implementation of an elephant’s trunk manipulator and other continuum style robots. J. Robot. Syst. 2003 , 20 , 45–63. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Jones, B.A.; Walker, I.D. Kinematics for multisection continuum robots. IEEE Trans. Robot. 2006 , 22 , 43–55. [ Google Scholar ] [ CrossRef ]
  • Neppalli, S.; Csencsits, M.A.; Jones, B.A.; Walker, I.D. Closed-Form Inverse Kinematics for Continuum Manipulators. Adv. Robot. 2009 , 23 , 2077–2091. [ Google Scholar ] [ CrossRef ]
  • Braganza, D.; Dawson, D.M.; Walker, I.D.; Nath, N. A neural network controller for continuum robots. IEEE Trans. Robot. 2007 , 23 , 1270–1277. [ Google Scholar ] [ CrossRef ]
  • Xu, K.; Simaan, N. An investigation of the intrinsic force sensing capabilities of continuum robots. IEEE Trans. Robot. 2008 , 24 , 576–587. [ Google Scholar ] [ CrossRef ]
  • Xu, K.; Simaan, N. Analytic Formulation for Kinematics, Statics, and Shape Restoration of Multibackbone Continuum Robots via Elliptic Integrals. J Mech Robot 2010 , 2 , 011006. [ Google Scholar ] [ CrossRef ]
  • Sun, Y.; Lueth, T.C. Enhancing Torsional Stiffness of Continuum Robots Using 3-D Topology Optimized Flexure Joints. IEEE/ASME Trans. Mechatron. 2023 , 28 , 1844–1852. [ Google Scholar ] [ CrossRef ]
  • Geng, S.N.; Wang, Y.Y.; Wang, C.; Kang, R.J. A Space Tendon-Driven Continuum Robot. In Proceedings of the Advances in Swarm Intelligence: 9th International Conference, Shanghai, China, 17–22 June 2018. [ Google Scholar ]
  • Ranzani, T.; Gerboni, G.; Cianchetti, M.; Menciassi, A. A bioinspired soft manipulator for minimally invasive surgery. Bioinspiration Biomim. 2015 , 10 , 035008. [ Google Scholar ] [ CrossRef ]
  • Greer, J.D.; Morimoto, T.K.; Okamura, A.M.; Hawkes, E.W. Series pneumatic artificial muscles (sPAMs) and application to a soft continuum robot. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 5503–5510. [ Google Scholar ]
  • Caasenbrood, B.; Pogromsky, A.; Nijmeijer, H. A Computational Design Framework for Pressure-driven Soft Robots through Nonlinear Topology Optimization. In Proceedings of the 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 15 May–15 July 2020; pp. 633–638. [ Google Scholar ]
  • Chautems, C.; Tonazzini, A.; Floreano, D.; Nelson, B.J. A variable stiffness catheter controlled with an external magnetic field. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 24–28 September 2017; pp. 181–186. [ Google Scholar ]
  • Dupont, P.E.; Lock, J.; Itkowitz, B.; Butler, E. Design and Control of Concentric-Tube Robots. IEEE Trans. Robot. 2009 , 26 , 209–225. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Donat, H.; Gu, J.C.; Steil, J.J. Real-Time Shape Estimation for Concentric Tube Continuum Robots with a Single Force/Torque Sensor. Front. Robot. AI 2021 , 8 , 734033. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Chikhaoui, M.T.; Granna, J.; Starke, J.; Burgner-Kahrs, J. Toward motion coordination control and design optimization for dual-arm concentric tube continuum robots. IEEE Robot. Autom. Lett. 2018 , 3 , 1793–1800. [ Google Scholar ] [ CrossRef ]
  • Liu, H.B.; Teng, X.Y.; Qiao, Z.Z.; Yu, H.B.; Cai, S.X.; Yang, W.G. A concentric tube magnetic continuum robot with multiple stiffness levels and high flexibility for potential endovascular intervention. J. Magn. Magn. Mater. 2024 , 597 , 172023. [ Google Scholar ] [ CrossRef ]
  • Bryson, C.E.; Rucker, D.C. Toward Parallel Continuum Manipulators. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 778–785. [ Google Scholar ]
  • Till, J.; Bryson, C.E.; Chung, S.; Orekhov, A.; Rucker, D.C. Efficient Computation of Multiple Coupled Cosserat Rod Models for Real-Time Simulation and Control of Parallel Continuum Manipulators. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 25–30 May 2015; pp. 5067–5074. [ Google Scholar ]
  • Black, C.B.; Till, J.; Rucker, C. Parallel Continuum Robots: Modeling, Analysis, and Actuation-Based Force Sensing. IEEE Trans. Robot. 2017 , 34 , 29–47. [ Google Scholar ] [ CrossRef ]
  • Orekhov, A.L.; Black, C.B.; Till, J.; Chung, S.; Rucker, D.C. Analysis and validation of a teleoperated surgical parallel continuum manipulator. IEEE Robot. Autom. Lett. 2016 , 1 , 828–835. [ Google Scholar ] [ CrossRef ]
  • Orekhov, A.L.; Bryson, C.E.; Till, J.; Chung, S.; Rucker, D.C. A Surgical Parallel Continuum Manipulator with a Cable-Driven Grasper. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 5264–5267. [ Google Scholar ]
  • Wu, G.L.; Shi, G.L. Experimental statics calibration of a multi-constraint parallel continuum robot. Mech. Mach. Theory 2019 , 136 , 72–85. [ Google Scholar ] [ CrossRef ]
  • Lilge, S.; Nuelle, K.; Boettcher, G.; Spindeldreier, S.; Burgner-Kahrs, J. Tendon Actuated Continuous Structures in Planar Parallel Robots: A Kinematic Analysis. J. Mech. Robot. 2021 , 13 , 011025. [ Google Scholar ] [ CrossRef ]
  • Mauze, B.; Dahmouche, R.; Laurent, G.J.; André, A.N.; Rougeot, P.; Sandoz, P.; Clévy, C. Nanometer Precision with a Planar Parallel Continuum Robot. IEEE Robot. Autom. Lett. 2020 , 5 , 3806–3813. [ Google Scholar ] [ CrossRef ]
  • Castledine, N.P.; Boyle, J.H.; Kim, J. Design of a Modular Continuum Robot Segment for use in a General Purpose Manipulator. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, Canada, 20–24 May 2019; pp. 4430–4435. [ Google Scholar ]
  • Yang, H.D.; Asbeck, A.T. Design and Characterization of a Modular Hybrid Continuum Robotic Manipulator. IEEE/ASME Trans. Mechatron. 2020 , 25 , 2812–2823. [ Google Scholar ] [ CrossRef ]
  • Gomez, V.; Hernando, M.; Aguado, E.; Bajo, D.; Rossi, C. Design and Kinematic Modeling of a Soft Continuum Telescopic Arm for the Self-Assembly Mechanism of a Modular Robot. Soft Robot. 2024 , 11 , 347–360. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Yang, J.Z.; Peng, H.J.; Zhou, W.Y.; Zhang, J.; Wu, Z.G. A modular approach for dynamic modeling of multisegment continuum robots. Mech. Mach. Theory 2021 , 165 , 104429. [ Google Scholar ] [ CrossRef ]
  • Dai, J.S. Configuration transformation and mathematical description of manipulation of origami cartons. In Origami6: Proceedings of the 6th International Meeting of Origami Science, Mathematics, and Education, Tokyo, Japan, 11–13 August 2015; pp. 163–173. [ Google Scholar ]
  • Chen, Y.; Liang, J.B.; Shi, P.; Feng, J.; Sareh, P.; Dai, J.S. Inverse design of programmable Poisson’s ratio and in-plane stiffness for generalized four-fold origami. Compos. Struct. 2023 , 311 , 116789. [ Google Scholar ] [ CrossRef ]
  • Wang, R.Q.; Song, Y.Q.; Dai, J.S. Reconfigurability of the origami-inspired integrated 8R kinematotropic metamorphic mechanism and its evolved 6R and 4R mechanisms. Mech. Mach. Theory 2021 , 161 , 104245. [ Google Scholar ] [ CrossRef ]
  • Zhuang, Z.; Guan, Y.; Xu, S.; Dai, J.S. Reconfigurability in automobiles—structure, manufacturing and algorithm for automobiles. Int. J. Automot. Manuf. Mater. 2022 , 1 , 1–11. [ Google Scholar ] [ CrossRef ]
  • Zhang, K.T.; Qiu, C.; Dai, J.S. An Extensible Continuum Robot With Integrated Origami Parallel Modules. J. Mech. Robot. 2016 , 8 , 031010. [ Google Scholar ] [ CrossRef ]
  • Guan, Y.T.; Zhuang, Z.M.; Zhang, Z.; Dai, J.S. Design, Analysis, and Experiment of the Origami Robot Based on Spherical-Linkage Parallel Mechanism. J. Mech. Des. 2023 , 145 , 081701. [ Google Scholar ] [ CrossRef ]
  • Hanna, B.H.; Lund, J.M.; Lang, R.J.; Magleby, S.P.; Howell, L.L. Waterbomb base: A symmetric single-vertex bistable origami mechanism. Smart Mater. Struct. 2014 , 23 , 094009. [ Google Scholar ] [ CrossRef ]
  • Lee, D.Y.; Kim, J.K.; Sohn, C.Y.; Heo, J.M.; Cho, K.J. High-load capacity origami transformable wheel. Sci. Robot. 2021 , 6 , eabe0201. [ Google Scholar ] [ CrossRef ]
  • Zhang, K.T.; Fang, Y.F.; Fang, H.R.; Dai, J.S. Geometry and Constraint Analysis of the Three-Spherical Kinematic Chain Based Parallel Mechanism. J. Mech. Robot. 2010 , 2 , eabe0201. [ Google Scholar ] [ CrossRef ]
  • Zhuang, Z.M.; Zhang, Z.; Guan, Y.T.; Wei, W.; Li, M.; Tang, Z.; Kang, R.J.; Song, Z.B.; Dai, J.S. Design and Control of SLPM-Based Extensible Continuum Arm. J. Mech. Robot. 2022 , 14 , 061003. [ Google Scholar ] [ CrossRef ]
  • Li, Y.N.; Huang, H.L.; Li, B. Design of a Deployable Continuum Robot Using Elastic Kirigami-Origami. IEEE Robot. Autom. Lett. 2023 , 8 , 8382–8389. [ Google Scholar ] [ CrossRef ]
  • Dai, J. Screw algebra and kinematic approaches for mechanisms and robotics. In Springer Tracts in Advanced Robotics ; Springer: London, UK, 2014. [ Google Scholar ]
  • Rastegar, J.; Fardanesh, B. Manipulation workspace analysis using the Monte Carlo method. Mech. Mach. Theory 1990 , 25 , 233–239. [ Google Scholar ] [ CrossRef ]
  • Zhai, Z.R.; Wang, Y.; Jiang, H.Q. Origami-inspired, on-demand deployable and collapsible mechanical metamaterials with tunable stiffness. Proc. Natl. Acad. Sci. 2018 , 115 , 2032–2037. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhang, Z.; Tang, S.J.; Fan, W.C.; Xun, Y.H.; Wang, H.; Chen, G.L. Design and analysis of hybrid-driven origami continuum robots with extensible and stiffness-tunable sections. Mech. Mach. Theory 2022 , 169 , 104607. [ Google Scholar ] [ CrossRef ]

| Component | Material | Density (g/cm³) | Poisson’s Ratio | Thickness (mm) |
| --- | --- | --- | --- | --- |
| Connection Plate | carbon fiber | 1.80 | 0.1 | 2.0 |
| PET Film | PET | 1.38 | 0.384 | 0.1 |
| Rigid Plate | PLA | — | 0.42 | 1.0 |
| Trajectory | Mean Error before Compensation (mm) | Mean Error after Compensation (mm) | Average Error Reduction Rate |
| --- | --- | --- | --- |
| Linear | 5.0630 | 1.7926 | 64.6% |
| Circular | 4.5760 | 1.5634 | 65.8% |
| Square | 4.1516 | 1.4532 | 65.0% |

| Trajectory | Standard Deviation before Compensation (mm) | Standard Deviation after Compensation (mm) | Average Error Reduction Rate |
| --- | --- | --- | --- |
| Linear | 2.4591 | 0.7580 | 69.2% |
| Circular | 2.3087 | 0.3608 | 84.4% |
| Square | 1.6345 | 0.5207 | 68.1% |
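
The reduction rates in the two tables above are consistent with a simple relative-error calculation, reduction = (before - after) / before. The short Python check below is not part of the paper; it simply transcribes the table values and reproduces the published percentages, which is a quick way to verify the tables were reconstructed correctly.

```python
# Recompute the reduction rates reported in the two tables above.
# Assumption: reduction = (before - after) / before, expressed as a percentage.
data = {
    "Linear (mean error)":   (5.0630, 1.7926),
    "Circular (mean error)": (4.5760, 1.5634),
    "Square (mean error)":   (4.1516, 1.4532),
    "Linear (std. dev.)":    (2.4591, 0.7580),
    "Circular (std. dev.)":  (2.3087, 0.3608),
    "Square (std. dev.)":    (1.6345, 0.5207),
}
for name, (before, after) in data.items():
    reduction = 100.0 * (before - after) / before
    print(f"{name}: {reduction:.1f}%")
# Output matches the tables: 64.6%, 65.8%, 65.0%, 69.2%, 84.4%, 68.1%
```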

Share and Cite

Jia, A.; Liu, X.; Guan, Y.; Liu, Y.; Helian, Q.; Liu, C.; Zhuang, Z.; Kang, R. Design, Analysis and Experiment of a Modular Deployable Continuum Robot. Machines 2024 , 12 , 544. https://doi.org/10.3390/machines12080544

Jia A, Liu X, Guan Y, Liu Y, Helian Q, Liu C, Zhuang Z, Kang R. Design, Analysis and Experiment of a Modular Deployable Continuum Robot. Machines . 2024; 12(8):544. https://doi.org/10.3390/machines12080544

Jia, Aihu, Xinyu Liu, Yuntao Guan, Yongxi Liu, Qianze Helian, Chenshuo Liu, Zheming Zhuang, and Rongjie Kang. 2024. "Design, Analysis and Experiment of a Modular Deployable Continuum Robot" Machines 12, no. 8: 544. https://doi.org/10.3390/machines12080544

IMAGES

  1. 15 Experimental Design Examples (2024)
  2. Experimental Design in Psychology (AQA A Level)
  3. PPT
  4. Experimental design examples psychology. 5.2 Experimental Design
  5. PPT
  6. Experimental Designs- Research Methodology AS/A level psychology CAIE

COMMENTS

  1. 11+ Psychology Experiment Ideas (Goals + Methods)

    The Marshmallow Test. One of the most talked-about experiments of the 20th century was the Marshmallow Test, conducted by Walter Mischel in the late 1960s at Stanford University. The goal was simple but profound: to understand a child's ability to delay gratification and exercise self-control. Children were placed in a room with a marshmallow and given a choice: eat the marshmallow now or ...

  2. Experimental Design: Types, Examples & Methods

    Three types of experimental designs are commonly used: 1. Independent Measures. Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

  3. 19+ Experimental Design Examples (Methods

    1) True Experimental Design. In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

  4. Great Ideas for Psychology Experiments to Explore

    If you are looking for an idea for psychology experiments, start your search early and make sure you have the time you need. Doing background research, choosing an experimental design, and actually performing your experiment can be quite the process. Keep reading to find some great psychology experiment ideas that can serve as inspiration.

  5. Psychology Experiment Ideas

    Students can design an experiment to test selective attention by presenting participants with a video or audio stimulus and manipulating the presence or absence of a distracting stimulus to see the effect on attention. ... Your own interests can be a rich source of ideas for your psychology experiments. As you are trying to come up with a topic ...

  6. Guide to Experimental Design

    Table of contents. Step 1: Define your variables. Step 2: Write your hypothesis. Step 3: Design your experimental treatments. Step 4: Assign your subjects to treatment groups. Step 5: Measure your dependent variable. Other interesting articles. Frequently asked questions about experiments.

  7. Experimental Design

    Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too. In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition ...

  9. Experimental Design and Statistics for Psychology

    Experimental Design and Statistics for Psychology: A First Course is a concise, straightforward and accessible introduction to the design of psychology experiments and the statistical tests used to make sense of their results. Makes abundant use of charts, diagrams and figures. Assumes no prior knowledge of statistics. Invaluable to all psychology students needing a firm grasp of the basics ...

  10. Experimental Design in Psychology

    This text is about doing science and the active process of reading, learning, thinking, generating ideas, designing experiments, and the logistics surrounding each step of the research process. In easy-to-read, conversational language, Kim MacLin teaches students experimental design principles and techniques using a tutorial approach in which students read, critique, and analyze over 75 actual ...

  11. Particularly Exciting Experiments in Psychology

    Attention to Emotion. Attention is biased toward negative emotional expressions. Read previous issues of PeePs. Date created: 2014. Particularly Exciting Experiments in Psychology™ (PeePs) is a free summary of ongoing research trends common to six APA journals that focus on experimental psychology.

  12. Experimental Method In Psychology

    There are three types of experiments you need to know: 1. Lab Experiment. A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions. A laboratory experiment is conducted under highly controlled ...

  13. 15 Experimental Design Examples

    15 Experimental Design Examples. Written by Chris Drew (PhD) | October 9, 2023. Experimental design involves testing an independent variable against a dependent variable. It is a central feature of the scientific method. A simple example of an experimental design is a clinical trial, where research participants are placed into control and ...

  14. How to Conduct a Psychology Experiment

    This type of design is often used if it is not feasible or ethical to perform a randomized controlled trial. True Experimental Design. A true experimental design, also known as a randomized controlled trial, includes both of the elements that pre-experimental designs and quasi-experimental designs lack—control groups and random assignment to ...

  15. Lesson Idea: Experimental Designs

    Lesson Idea: Experimental Designs. Travis Dixon February 15, 2018 Internal Assessment (IB), Research Methodology. Researchers have many decisions to make when designing their studies. In this lesson, you'll put yourself in the shoes of an experimenter and will have to make a series of design choices and justify them.

  16. Experimental Design in Psychology A Case Approach

    This text is about doing science and the active process of reading, learning, thinking, generating ideas, designing experiments, and the logistics surrounding each step of the research process. In easy-to-read, conversational language, Kim MacLin teaches students experimental design principles and techniques using a tutorial approach in which students read, critique, and analyze over 75 actual ...

  17. 10 great psychology experiments

    Pavlov's Dog: And 49 Other Experiments That Revolutionised Psychology by Adam Hart-Davies, Elwin Street, 2018. A very quick run through of a few more famous scientific experiments. Opening Skinner's Box: Great Psychological Experiments of the 20th Century by Lauren Slater, Bloomsbury, 2005/2016.

  18. 5.2 Experimental Design

    Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too. In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition ...

  19. 2.2 Research Designs in Psychology

    Correlational research is designed to discover relationships among variables. Experimental research is designed to assess cause and effect. Each of the three research designs has specific strengths and limitations, and it is important to understand how each differs. See the table below for a summary. Table 2.2.

  20. Experimental Design Steps & Activities

    Explore experimental design steps, templates, and methods. ... psychology, and social sciences. It helps us figure out how different factors affect what we're studying, whether it's plants, chemicals, physical laws, human behavior, or how society works. ... experimental design ideas, and ways to integrate design of experiments. Student projects ...

  21. Experimental Research Designs: Types, Examples & Methods

    The pre-experimental research design is further divided into three types. One-shot Case Study Research Design. In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  22. 50+ Research Topics for Psychology Papers

    Topics of Psychology Research Related to Human Cognition. Some of the possible topics you might explore in this area include thinking, language, intelligence, and decision-making. Other ideas might include: Dreams. False memories. Attention. Perception.

  23. Research in Developmental Psychology

    A basic experimental design involves beginning with a sample (or subset of a population) and randomly assigning subjects to one of two groups: the experimental group or the control group. Ideally, to prevent bias, the participants would be blind to their condition (not aware of which group they are in) and the researchers would also be blind to ...

  24. Narrative Summary of The Section of Psychology

    He delves into the purpose and design of the psychology laboratory, outlining various tests designed to assess mental abilities like judgment, touch, memory, and reaction time. ... Ideas: Jastrow puts forth several ideas about psychology, including: ... offering a glimpse into the early development of experimental psychology in the late 19th ...

  25. This Old Experiment With Mice Led to Bleak Predictions for Humanity's

    Working with rats between 1958 and 1962, and with mice from 1968 to 1972, Calhoun set up experimental rodent enclosures at the National Institute of Mental Health's Laboratory of Psychology.

  26. Behavioural and Economic Science (Science Track) (MSc) (2024 Entry)

    The purpose of the module is to introduce you to experimental design and statistical programming. Behavioural scientists need statistical analysis of experimental data and of large data sets. This module covers these topics to allow you to understand how to test hypotheses, plan experimental design and perform statistical analysis using R (a minimal sketch of this workflow appears after this list).

  27. Machines

    Previous research has proposed a variety of structural solutions suitable for continuum robots. Walker proposed a continuum robot with a backbone and systematically studied the structural design [2,3], kinematics [4,5], and intelligent control [], which made an outstanding contribution to the promotion of the spine-type continuum robot. Xu [7,8] proposed a continuum robot with a super-elastic ...
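
Several of the snippets above (independent-measures designs, random assignment, the randomized controlled trial, and the Warwick module on hypothesis testing) describe the same basic workflow: randomly assign participants to conditions, measure a dependent variable, and compare the groups with a statistical test. The sketch below illustrates that workflow in Python rather than R, using a hypothetical music-and-focus experiment; the participant labels, group sizes, and simulated focus scores are illustrative assumptions, not data from any of the sources quoted above.

```python
# Minimal sketch of an independent-measures (between-groups) experiment:
# random assignment to two conditions, then an independent-samples t-test.
# All data here are simulated for illustration only.
import random
from scipy import stats

random.seed(42)  # reproducible example

# 40 hypothetical participants, shuffled so each has an equal chance of either condition
participants = [f"P{i:02d}" for i in range(1, 41)]
random.shuffle(participants)
music_group, silence_group = participants[:20], participants[20:]

# Hypothetical dependent variable: a focus score (0-100) measured once per participant
focus_music = [random.gauss(72, 8) for _ in music_group]
focus_silence = [random.gauss(65, 8) for _ in silence_group]

# Independent-samples t-test: do mean focus scores differ between conditions?
t_stat, p_value = stats.ttest_ind(focus_music, focus_silence)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Significant difference" if p_value < 0.05 else "No significant difference",
      "at the 0.05 level")
```

In a repeated-measures (within-subjects) design, the same participants would complete both conditions, and a paired test such as scipy.stats.ttest_rel would be used instead of the independent-samples test.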