19+ Experimental Design Examples (Methods + Types)

practical psychology logo

Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."

Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.

Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.

experimental design test tubes

Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.

In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.

What Is Experimental Design?

Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.

Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.

So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.

Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions ( think critically ), decide what to measure ( come up with an idea ), and figure out how to measure it (test it). It also helps you consider things that might mess up your results, like outside influences you hadn't thought of.

For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?

In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!

History of Experimental Design

Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.

Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.

Now, let's zoom ahead to the 18th and 19th centuries. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were ! His work helped create the foundations for a more organized approach to experiments.

Next stop: the early 20th century. Enter Ronald A. Fisher , a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.

Fisher invented the concept of the " control group "—that's a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. He also stressed the importance of " randomization ," which means assigning people or things to different groups by chance, like drawing names out of a hat. This makes sure the experiment is fair and the results are trustworthy.

Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing " behaviorism ." They focused on studying things that they could directly observe and measure, like actions and reactions.

Skinner even built boxes—called Skinner Boxes —to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson performed a very controversial experiment called The Little Albert experiment that helped describe behaviour through conditioning—in other words, how people learn to behave the way they do.

In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.

With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.

Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.

So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.

Key Terms in Experimental Design

Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.

Independent Variable : This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.

Dependent Variable : This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.

Control Group : This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo , instead of the real medicine.

Experimental Group : This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.

Randomization : This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.

Sample : This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.

Bias : This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.

Data : This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!

Replication : This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.

Hypothesis : This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.

Steps of Experimental Design

Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:

  • Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
  • Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
  • Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
  • Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
  • Randomization : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
  • Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
  • Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
  • Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
  • Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
  • Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.

So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.

Let's get into examples of experimental designs.

1) True Experimental Design

notepad

In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.

No sneaky biases here!

True Experimental Design Pros

The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.

True Experimental Design Cons

However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?

True Experimental Design Uses

The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.

When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"

So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.

2) Quasi-Experimental Design

So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.

Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.

In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.

Quasi-Experimental Design Pros

Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're especially popular in fields like education, sociology, and public policy.

For instance, when researchers wanted to figure out if the Head Start program , aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.

Quasi-Experimental Design Cons

Of course, quasi-experiments come with their own bag of pros and cons. On the plus side, they're easier to set up and often cheaper than true experiments. But the flip side is that they're not as rock-solid in their conclusions. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"

Quasi-Experimental Design Uses

Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.

In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.

3) Pre-Experimental Design

Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.

Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.

So, what's the deal with pre-experimental designs?

Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.

It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.

Pre-Experimental Design Pros

Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.

A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.

Pre-Experimental Design Cons

But here's the catch: pre-experimental designs are like that first draft of an essay. It helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.

Pre-Experimental Design Uses

This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide if it's worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.

So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.

4) Factorial Design

Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.

Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.

In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.

It's like cooking with several spices to see how they blend together to create unique flavors.

Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.

Factorial Design Pros

This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.

Factorial Design Cons

However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.

Factorial Design Uses

Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.

And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.

So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.

5) Longitudinal Design

pill bottle

Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.

You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.

With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.

This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.

The famous Framingham Heart Study , started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.

Longitudinal Design Pros

So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.

Longitudinal Design Cons

But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.

Longitudinal Design Uses

Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.

So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.

6) Cross-Sectional Design

Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.

In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.

This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.

You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.

Cross-Sectional Design Pros

So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.

Cross-Sectional Design Cons

Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.

Cross-Sectional Design Uses

Also, because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.

So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.

7) Correlational Design

Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.

In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.

The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.

This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.

One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.

Correlational Design Pros

This design is great at proving that two (or more) things can be related. Correlational designs can help prove that more detailed research is needed on a topic. They can help us see patterns or possible causes for things that we otherwise might not have realized.

Correlational Design Cons

But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.

Correlational Design Uses

Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.

So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.

8) Meta-Analysis

Last but not least, let's talk about Meta-Analysis, the librarian of experimental designs.

If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.

Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.

The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.

You might have heard of the Cochrane Reviews in healthcare . These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.

For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.

Meta-Analysis Pros

The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.

Meta-Analysis Cons

However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.

Meta-Analysis Uses

Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.

So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.

9) Non-Experimental Design

Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.

In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.

Non-Experimental Design Pros

So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.

Non-Experimental Design Cons

Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.

The downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.

Non-Experimental Design Uses

Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.

For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.

One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.

So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.

10) Repeated Measures Design

white rat

Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.

Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.

The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.

Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.

Repeated Measures Design Pros

The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.

Repeated Measures Design Cons

But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.

Repeated Measures Design Uses

A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the ethical standards of this experiment are often criticized today, it was groundbreaking in understanding conditioned emotional responses.

In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.

11) Crossover Design

Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.

In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.

This design is like the utility player on our team—versatile, flexible, and really good at adapting.

The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.

Crossover Design Pros

The neat thing about this design is that it allows each participant to serve as their own control group. Imagine you're testing two new kinds of headache medicine. Instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people but at different times.

Crossover Design Cons

What's the big deal with Crossover Design? Its major strength is in reducing the "noise" that comes from individual differences. Since each person experiences all conditions, it's easier to see real effects. However, there's a catch. This design assumes that there's no lasting effect from the first condition when you switch to the second one. That might not always be true. If the first treatment has a long-lasting effect, it could mess up the results when you switch to the second treatment.

Crossover Design Uses

A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.

In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.

12) Cluster Randomized Design

Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.

This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.

Here's how Cluster Randomized Design works: Instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This helps you see how the new method works in a real-world setting.

Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.

Cluster Randomized Design Pros

Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization helps get around this problem by treating each "cluster" as its own mini-experiment.

Cluster Randomized Design Cons

There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.

Cluster Randomized Design Uses

A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.

13) Mixed-Methods Design

Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.

Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!

Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.

Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'

But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'

Mixed-Methods Design Pros

So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.

Mixed-Methods Design Cons

But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.

Mixed-Methods Design Uses

A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).

In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.

14) Multivariate Design

Now, let's turn our attention to Multivariate Design, the multitasker of the research world.

If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.

Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.

Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.

Multivariate Design Pros

So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.

Multivariate Design Cons

But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.

Multivariate Design Uses

Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.

A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.

15) Pretest-Posttest Design

Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?

Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.

This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.

In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."

Pretest-Posttest Design Pros

What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.

Pretest-Posttest Design Cons

But there are some pitfalls. For example, what if the kids in our math example get better at multiplication just because they're older or because they've taken the test before? That would make it hard to tell if the program is really effective or not.

Pretest-Posttest Design Uses

Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.

One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.

16) Solomon Four-Group Design

Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon who introduced it in the 1940s, this method tries to correct some of the weaknesses in simpler designs, like the Pretest-Posttest Design.

Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.

Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!

Solomon Four-Group Design Pros

What's the pro and con of the Solomon Four-Group Design? On the plus side, it provides really robust results because it accounts for so many variables.

Solomon Four-Group Design Cons

The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.

Solomon Four-Group Design Uses

Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).

Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.

The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.

17) Adaptive Designs

Now, let's talk about Adaptive Designs, the chameleons of the experimental world.

Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.

In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, once you set your plan, you stick to it from start to finish.

Adaptive Design Pros

This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.

Adaptive Design Cons

But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.

Adaptive Design Uses

Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.

For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.

The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.

In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.

Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.

18) Bayesian Designs

Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.

Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.

Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.

In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.

Bayesian Design Pros

One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.

Bayesian Design Cons

However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.

Bayesian Design Uses

Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.

Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.

This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.

19) Covariate Adaptive Randomization

old person and young person

Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.

Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.

Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.

In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.

Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.

Covariate Adaptive Randomization Pros

The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.

Covariate Adaptive Randomization Cons

But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.

Covariate Adaptive Randomization Uses

This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.

Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.

In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.

For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.

Covariate Adaptive Randomization is like the wise elder of the group, ensuring that everyone has an equal opportunity to show their true capabilities, thereby making the collective results as reliable as possible.

20) Stepped Wedge Design

Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.

Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.

In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.

Stepped Wedge Design Pros

The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.

Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.

Stepped Wedge Design Cons

However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.

Stepped Wedge Design Uses

This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.

In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.

21) Sequential Design

Next up is Sequential Design, the dynamic and flexible member of our experimental design family.

Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.

In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.

Sequential Design Pros

This allows for a more efficient use of resources, as you're only continuing with the experiment if the data suggests it's worth doing so.

One of the great things about Sequential Design is its efficiency. Because you're making data-driven decisions along the way, you can often reach conclusions more quickly and with fewer resources.

Sequential Design Cons

However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.

Sequential Design Uses

In terms of its applications, besides healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.

This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it.

On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.

Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.

22) Field Experiments

Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.

Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.

Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.

Field Experiment Pros

On one hand, the results often give us a better understanding of how things work outside the lab.

While Field Experiments offer real-world relevance, they come with challenges like controlling for outside factors and the ethical considerations of intervening in people's lives without their knowledge.

Field Experiment Cons

On the other hand, the lack of control can make it harder to tell exactly what's causing what. Yet, despite these challenges, they remain a valuable tool for researchers who want to understand how theories play out in the real world.

Field Experiment Uses

Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.

Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.

Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "Broken Windows" experiment in the 1980s that looked at how small signs of disorder, like broken windows or graffiti, could encourage more serious crime in neighborhoods. This experiment had a big impact on how cities think about crime prevention.

From the foundational concepts of control groups and independent variables to the sophisticated layouts like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.

We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.

Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.

Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.

So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.

Related posts:

  • Experimental Psychologist Career (Salary + Duties + Interviews)
  • 40+ Famous Psychologists (Images + Biographies)
  • 11+ Psychology Experiment Ideas (Goals + Methods)
  • The Little Albert Experiment
  • 41+ White Collar Job Examples (Salary + Path)

Reference this article:

About The Author

Photo of author

Free Personality Test

Free Personality Quiz

Free Memory Test

Free Memory Test

Free IQ Test

Free IQ Test

PracticalPie.com is a participant in the Amazon Associates Program. As an Amazon Associate we earn from qualifying purchases.

Follow Us On:

Youtube Facebook Instagram X/Twitter

Psychology Resources

Developmental

Personality

Relationships

Psychologists

Serial Killers

Psychology Tests

Personality Quiz

Memory Test

Depression test

Type A/B Personality Test

© PracticalPsychology. All rights reserved

Privacy Policy | Terms of Use

Experimental Design: Types, Examples & Methods

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how he/she will allocate their sample to the different experimental groups.  For example, if there are 10 participants, will all 10 participants participate in both groups (e.g., repeated measures), or will the participants be split in half and take part in only one group each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to one group.

Independent measures involve using two separate groups of participants, one in each condition. For example:

Independent Measures Design 2

  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable ).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants participate in each independent variable condition.  This means that each experiment condition includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design .

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counter-balances the order of the conditions for the participants.  Alternating the order in which participants perform in different conditions of an experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups: experimental (A) and control (B).  For example, group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.

counter balancing

3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group .

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.

matched pairs design

  • Con : If one participant drops out, you lose 2 PPs’ data.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2 . To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4 . To assess the effect of the organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity.

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes) is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.

Print Friendly, PDF & Email

Related Articles

Mixed Methods Research

Research Methodology

Mixed Methods Research

Conversation Analysis

Conversation Analysis

Discourse Analysis

Discourse Analysis

Phenomenology In Qualitative Research

Phenomenology In Qualitative Research

Ethnography In Qualitative Research

Ethnography In Qualitative Research

Narrative Analysis In Qualitative Research

Narrative Analysis In Qualitative Research

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • A Quick Guide to Experimental Design | 5 Steps & Examples

A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any  extraneous variables that might influence your results. If if random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

Step 1: define your variables, step 2: write your hypothesis, step 3: design your experimental treatments, step 4: assign your subjects to treatment groups, step 5: measure your dependent variable, frequently asked questions about experimental design.

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Research question Independent variable Dependent variable
Phone use and sleep Minutes of phone use before sleep Hours of sleep per night
Temperature and soil respiration Air temperature just above the soil surface CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Extraneous variable How to control
Phone use and sleep in sleep patterns among individuals. measure the average difference between sleep with phone use and sleep without phone use rather than the average amount of sleep per treatment group.
Temperature and soil respiration also affects respiration, and moisture can decrease with increasing temperature. monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Prevent plagiarism, run a free check.

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Null hypothesis (H ) Alternate hypothesis (H )
Phone use and sleep Phone use before sleep does not correlate with the amount of sleep a person gets. Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration Air temperature does not correlate with soil respiration. Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Completely randomised design Randomised block design
Phone use and sleep Subjects are all randomly assigned a level of phone use using a random number generator. Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Between-subjects (independent measures) design Within-subjects (repeated measures) design
Phone use and sleep Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
Temperature and soil respiration Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Every plot receives each warming treatment (1, 3, 5, 8, and 10C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 24 June 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/

Is this article helpful?

Rebecca Bevans

Rebecca Bevans

  • Privacy Policy

Research Method

Home » Experimental Design – Types, Methods, Guide

Experimental Design – Types, Methods, Guide

Table of Contents

Experimental Research Design

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, then it is accepted. If the results do not support the hypothesis, then it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication : Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision : Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability : If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality : Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias : Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time : Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias : Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility : Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author

' src=

Muhammad Hassan

Researcher, Academic Writer, Web developer

You may also like

Triangulation

Triangulation in Research – Types, Methods and...

Descriptive Research Design

Descriptive Research Design – Types, Methods and...

One-to-One Interview in Research

One-to-One Interview – Methods and Guide

Correlational Research Design

Correlational Research – Methods, Types and...

Exploratory Research

Exploratory Research – Types, Methods and...

Survey Research

Survey Research – Types, Methods, Examples

  • Skip to secondary menu
  • Skip to main content
  • Skip to primary sidebar

Statistics By Jim

Making statistics intuitive

Experimental Design: Definition and Types

By Jim Frost 3 Comments

What is Experimental Design?

An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.

An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.

Scientist who developed an experimental design for her research.

Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.

Learn more about Independent and Dependent Variables .

Design of Experiments: Goals & Settings

Experiments occur in many settings, ranging from psychology, social sciences, medicine, physics, engineering, and industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect , confirm a known effect, or test a hypothesis.

Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.

An experimental design’s focus depends on the subject area and can include the following goals:

  • Understanding the relationships between variables.
  • Identifying the variables that have the largest impact on the outcomes.
  • Finding the input variable settings that produce an optimal result.

For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.

Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.

In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.

Developing an Experimental Design

Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.

To accomplish these goals, experimental designs carefully manage data validity and reliability , and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.

An excellent experimental design involves the following:

  • Lots of preplanning.
  • Developing experimental treatments.
  • Determining how to assign subjects to treatment groups.

The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.

Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity .

Preplanning, Defining, and Operationalizing for Design of Experiments

A literature review is crucial for the design of experiments.

This phase of the design of experiments helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.

Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.

This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment.

  • Null hypothesis : The jumping exercise intervention does not affect bone density.
  • Alternative hypothesis : The jumping exercise intervention affects bone density.

To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses .

Formulating Treatments in Experimental Designs

In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.

As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.

Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.

How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.

Assigning Subjects to Experimental Groups

A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions—the treatment and control groups. The control group is often, but not always, the lack of a treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive a treatment. Learn more about Control Groups .

How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation .

Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.

Let’s explore some of the ways to assign subjects in design of experiments.

Completely Randomized Designs

A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.

Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.

For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.

Statisticians consider randomized experimental designs to be the best for identifying causal relationships.

If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design .

Learn more about Randomized Controlled Trials and Random Assignment in Experiments .

Randomized Block Designs

Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.

This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.

Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.

Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.

A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.

You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses .

Observational Studies

In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.

Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.

Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.

Learn more about Observational Studies .

For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments .

Between-Subjects vs. Within-Subjects Experimental Designs

When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.

In a between-subjects design , you can have more than one treatment group, but each subject is exposed to only one condition, the control group or one of the treatment groups.

A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.

In a  within-subjects experimental design , also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.

In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs .

Assigned to one experimental condition Participates in all experimental conditions
Requires more subjects Fewer subjects
Differences between subjects in the groups can affect the results Uses same subjects in all conditions.
No order of treatment effects. Order of treatments can affect results.

Design of Experiments Examples

For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.

In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.

In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.

Matched Pairs Experimental Design

A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.

Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.

On the plus side, this process creates two similar groups, and it doesn’t create treatment order effects. While matched pairs do not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), it aims to reduce variability between groups relative to a between-subjects study.

On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.

Learn more about Matched Pairs Design: Uses & Examples .

Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time .

A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples .

In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.

Share this:

examples of design an experiment

Reader Interactions

' src=

March 23, 2024 at 2:35 pm

Dear Jim You wrote a superb document, I will use it in my Buistatistics course, along with your three books. Thank you very much! Miguel

' src=

March 23, 2024 at 5:43 pm

Thanks so much, Miguel! Glad this post was helpful and I trust the books will be as well.

' src=

April 10, 2023 at 4:36 am

What are the purpose and uses of experimental research design?

Comments and Questions Cancel reply

Calcworkshop

Experimental Design in Statistics w/ 11 Examples!

// Last Updated: September 20, 2020 - Watch Video //

A proper experimental design is a critical skill in statistics.

Jenn (B.S., M.Ed.) of Calcworkshop® teaching why experimental design is important

Jenn, Founder Calcworkshop ® , 15+ Years Experience (Licensed & Certified Teacher)

Without proper controls and safeguards, unintended consequences can ruin our study and lead to wrong conclusions.

So let’s dive in to see what’s this is all about!

What’s the difference between an observational study and an experimental study?

An observational study is one in which investigators merely measure variables of interest without influencing the subjects.

And an experiment is a study in which investigators administer some form of treatment on one or more groups?

In other words, an observation is hands-off, whereas an experiment is hands-on.

So what’s the purpose of an experiment?

To establish causation (i.e., cause and effect).

All this means is that we wish to determine the effect an independent explanatory variable has on a dependent response variable.

The explanatory variable explains a response, similar to a child falling and skins their knee and starting to cry. The child is crying in response to falling and skinning their knee. So the explanatory variable is the fall, and the response variable is crying.

explanatory vs response variable in everyday life

Explanatory Vs Response Variable In Everyday Life

Let’s look at another example. Suppose a medical journal describes two studies in which subjects who had a seizure were randomly assigned to two different treatments:

  • No treatment.
  • A high dose of vitamin C.

The subjects were observed for a year, and the number of seizures for each subject was recorded. Identify the explanatory variable (independent variable), response variable (dependent variable), and include the experimental units.

The explanatory variable is whether the subject received either no treatment or a high dose of vitamin C. The response variable is whether the subject had a seizure during the time of the study. The experimental units in this study are the subjects who recently had a seizure.

Okay, so using the example above, notice that one of the groups did not receive treatment. This group is called a control group and acts as a baseline to see how a new treatment differs from those who don’t receive treatment. Typically, the control group is given something called a placebo, a substance designed to resemble medicine but does not contain an active drug component. A placebo is a dummy treatment, and should not have a physical effect on a person.

Before we talk about the characteristics of a well-designed experiment, we need to discuss some things to look out for:

  • Confounding
  • Lurking variables

Confounding happens when two explanatory variables are both associated with a response variable and also associated with each other, causing the investigator not to be able to identify their effects and the response variable separately.

A lurking variable is usually unobserved at the time of the study, which influences the association between the two variables of interest. In essence, a lurking variable is a third variable that is not measured in the study but may change the response variable.

For example, a study reported a relationship between smoking and health. A study of 1430 women were asked whether they smoked. Ten years later, a follow-up survey observed whether each woman was still alive or deceased. The researchers studied the possible link between whether a woman smoked and whether she survived the 10-year study period. They reported that:

  • 21% of the smokers died
  • 32% of the nonsmokers died

So, is smoking beneficial to your health, or is there something that could explain how this happened?

Older women are less likely to be smokers, and older women are more likely to die. Because age is a variable that influences the explanatory and response variable, it is considered a confounding variable.

But does smoking cause death?

Notice that the lurking variable, age, can also be a contributing factor. While there is a correlation between smoking and mortality, and also a correlation between smoking and age, we aren’t 100% sure that they are the cause of the mortality rate in women.

lurking confounding correlation causation diagram

Lurking – Confounding – Correlation – Causation Diagram

Now, something important to point out is that a lurking variable is one that is not measured in the study that could influence the results. Using the example above, some other possible lurking variables are:

  • Stress Level.

These variables were not measured in the study but could influence smoking habits as well as mortality rates.

What is important to note about the difference between confounding and lurking variables is that a confounding variable is measured in a study, while a lurking variable is not.

Additionally, correlation does not imply causation!

Alright, so now it’s time to talk about blinding: single-blind, double-blind experiments, as well as the placebo effect.

A single-blind experiment is when the subjects are unaware of which treatment they are receiving, but the investigator measuring the responses knows what treatments are going to which subject. In other words, the researcher knows which individual gets the placebo and which ones receive the experimental treatment. One major pitfall for this type of design is that the researcher may consciously or unconsciously influence the subject since they know who is receiving treatment and who isn’t.

A double-blind experiment is when both the subjects and investigator do not know who receives the placebo and who receives the treatment. A double-blind model is considered the best model for clinical trials as it eliminates the possibility of bias on the part of the researcher and the possibility of producing a placebo effect from the subject.

The placebo effect is when a subject has an effect or response to a fake treatment because they “believe” that the result should occur as noted by Yale . For example, a person struggling with insomnia takes a placebo (sugar pill) but instantly falls asleep because they believe they are receiving a sleep aid like Ambien or Lunesta.

placebo effect real life example

Placebo Effect – Real Life Example

So, what are the three primary requirements for a well-designed experiment?

  • Randomization

In a controlled experiment , the researchers, or investigators, decide which subjects are assigned to a control group and which subjects are assigned to a treatment group. In doing so, we ensure that the control and treatment groups are as similar as possible, and limit possible confounding influences such as lurking variables. A replicated experiment that is repeated on many different subjects helps reduce the chance of variation on the results. And randomization means we randomly assign subjects into control and treatment groups.

When subjects are divided into control groups and treatment groups randomly, we can use probability to predict the differences we expect to observe. If the differences between the two groups are higher than what we would expect to see naturally (by chance), we say that the results are statistically significant.

For example, if it is surmised that a new medicine reduces the effects of illness from 72 hours to 71 hours, this would not be considered statistically significant. The difference from 72 hours to 71 hours is not substantial enough to support that the observed effect was due to something other than normal random variation.

Now there are two major types of designs:

  • Completely-Randomized Design (CRD)
  • Block Design

A completely randomized design is the process of assigning subjects to control and treatment groups using probability, as seen in the flow diagram below.

completely randomized design example

Completely Randomized Design Example

A block design is a research method that places subjects into groups of similar experimental units or conditions, like age or gender, and then assign subjects to control and treatment groups using probability, as shown below.

randomized block design example

Randomized Block Design Example

Additionally, a useful and particular case of a blocking strategy is something called a matched-pair design . This is when two variables are paired to control for lurking variables.

For example, imagine we want to study if walking daily improved blood pressure. If the blood pressure for five subjects is measured at the beginning of the study and then again after participating in a walking program for one month, then the observations would be considered dependent samples because the same five subjects are used in the before and after observations; thus, a matched-pair design.

Please note that our video lesson will not focus on quasi-experiments. A quasi experimental design lacks random assignments; therefore, the independent variable can be manipulated prior to measuring the dependent variable, which may lead to confounding. For the sake of our lesson, and all future lessons, we will be using research methods where random sampling and experimental designs are used.

Together we will learn how to identify explanatory variables (independent variable) and response variables (dependent variables), understand and define confounding and lurking variables, see the effects of single-blind and double-blind experiments, and design randomized and block experiments.

Experimental Designs – Lesson & Examples (Video)

1 hr 06 min

  • Introduction to Video: Experiments
  • 00:00:29 – Observational Study vs Experimental Study and Response and Explanatory Variables (Examples #1-4)
  • Exclusive Content for Members Only
  • 00:09:15 – Identify the response and explanatory variables and the experimental units and treatment (Examples #5-6)
  • 00:14:47 – Introduction of lurking variables and confounding with ice cream and homicide example
  • 00:18:57 – Lurking variables, Confounding, Placebo Effect, Single Blind and Double Blind Experiments (Example #7)
  • 00:27:20 – What was the placebo effect and was the experiment single or double blind? (Example #8)
  • 00:30:36 – Characteristics of a well designed and constructed experiment that is statistically significant
  • 00:35:08 – Overview of Complete Randomized Design, Block Design and Matched Pair Design
  • 00:44:23 – Design and experiment using complete randomized design or a block design (Examples #9-10)
  • 00:56:09 – Identify the response and explanatory variables, experimental units, lurking variables, and design an experiment to test a new drug (Example #11)
  • Practice Problems with Step-by-Step Solutions
  • Chapter Tests with Video Solutions

Get access to all the courses and over 450 HD videos with your subscription

Monthly and Yearly Plans Available

Get My Subscription Now

Still wondering if CalcWorkshop is right for you? Take a Tour and find out how a membership can take the struggle out of learning math.

5 Star Excellence award from Shopper Approved for collecting at least 100 5 star reviews

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • What Is a Research Design | Types, Guide & Examples

What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes . Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your   research question  using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

Step 1: consider your aims and approach, step 2: choose a type of research design, step 3: identify your population and sampling method, step 4: choose your data collection methods, step 5: plan your data collection procedures, step 6: decide on your data analysis strategies, other interesting articles, frequently asked questions about research design.

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach Quantitative approach
and describe frequencies, averages, and correlations about relationships between variables

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
Type of design Purpose and characteristics
Experimental relationships effect on a
Quasi-experimental )
Correlational
Descriptive

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Type of design Purpose and characteristics
Grounded theory
Phenomenology

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

  • Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling Non-probability sampling

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

Questionnaires Interviews
)

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Quantitative observation

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

Reliability Validity
) )

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved June 25, 2024, from https://www.scribbr.com/methodology/research-design/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, guide to experimental design | overview, steps, & examples, how to write a research proposal | examples & templates, ethical considerations in research | types & examples, get unlimited documents corrected.

✔ Free APA citation check included ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

helpful professor logo

15 Experimental Design Examples

15 Experimental Design Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education. [Image Descriptor: Photo of Chris]

Learn about our Editorial Process

experimental design types and definition, explained below

Experimental design involves testing an independent variable against a dependent variable. It is a central feature of the scientific method .

A simple example of an experimental design is a clinical trial, where research participants are placed into control and treatment groups in order to determine the degree to which an intervention in the treatment group is effective.

There are three categories of experimental design . They are:

  • Pre-Experimental Design: Testing the effects of the independent variable on a single participant or a small group of participants (e.g. a case study).
  • Quasi-Experimental Design: Testing the effects of the independent variable on a group of participants who aren’t randomly assigned to treatment and control groups (e.g. purposive sampling).
  • True Experimental Design: Testing the effects of the independent variable on a group of participants who are randomly assigned to treatment and control groups in order to infer causality (e.g. clinical trials).

A good research student can look at a design’s methodology and correctly categorize it. Below are some typical examples of experimental designs, with their type indicated.

Experimental Design Examples

The following are examples of experimental design (with their type indicated).

1. Action Research in the Classroom

Type: Pre-Experimental Design

A teacher wants to know if a small group activity will help students learn how to conduct a survey. So, they test the activity out on a few of their classes and make careful observations regarding the outcome.

The teacher might observe that the students respond well to the activity and seem to be learning the material quickly.

However, because there was no comparison group of students that learned how to do a survey with a different methodology, the teacher cannot be certain that the activity is actually the best method for teaching that subject.

2. Study on the Impact of an Advertisement

An advertising firm has assigned two of their best staff to develop a quirky ad about eating a brand’s new breakfast product.

The team puts together an unusual skit that involves characters enjoying the breakfast while engaged in silly gestures and zany background music. The ad agency doesn’t want to spend a great deal of money on the ad just yet, so the commercial is shot with a low budget. The firm then shows the ad to a small group of people just to see their reactions.

Afterwards they determine that the ad had a strong impact on viewers so they move forward with a much larger budget.

3. Case Study

A medical doctor has a hunch that an old treatment regimen might be effective in treating a rare illness.

The treatment has never been used in this manner before. So, the doctor applies the treatment to two of their patients with the illness. After several weeks, the results seem to indicate that the treatment is not causing any change in the illness. The doctor concludes that there is no need to continue the treatment or conduct a larger study with a control condition.

4. Fertilizer and Plant Growth Study

An agricultural farmer is exploring different combinations of nutrients on plant growth, so she does a small experiment.

Instead of spending a lot of time and money applying the different mixes to acres of land and waiting several months to see the results, she decides to apply the fertilizer to some small plants in the lab.

After several weeks, it appears that the plants are responding well. They are growing rapidly and producing dense branching. She shows the plants to her colleagues and they all agree that further testing is needed under better controlled conditions .

5. Mood States Study

A team of psychologists is interested in studying how mood affects altruistic behavior. They are undecided however, on how to put the research participants in a bad mood, so they try a few pilot studies out.

They try one suggestion and make a 3-minute video that shows sad scenes from famous heart-wrenching movies.

They then recruit a few people to watch the clips and measure their mood states afterwards.

The results indicate that people were put in a negative mood, but since there was no control group, the researchers cannot be 100% confident in the clip’s effectiveness.

6. Math Games and Learning Study

Type: Quasi-Experimental Design

Two teachers have developed a set of math games that they think will make learning math more enjoyable for their students. They decide to test out the games on their classes.

So, for two weeks, one teacher has all of her students play the math games. The other teacher uses the standard teaching techniques. At the end of the two weeks, all students take the same math test. The results indicate that students that played the math games did better on the test.

Although the teachers would like to say the games were the cause of the improved performance, they cannot be 100% sure because the study lacked random assignment . There are many other differences between the groups that played the games and those that did not.

Learn More: Random Assignment Examples

7. Economic Impact of Policy

An economic policy institute has decided to test the effectiveness of a new policy on the development of small business. The institute identifies two cities in a third-world country for testing.

The two cities are similar in terms of size, economic output, and other characteristics. The city in which the new policy was implemented showed a much higher growth of small businesses than the other city.

Although the two cities were similar in many ways, the researchers must be cautious in their conclusions. There may exist other differences between the two cities that effected small business growth other than the policy.

8. Parenting Styles and Academic Performance

Psychologists want to understand how parenting style affects children’s academic performance.

So, they identify a large group of parents that have one of four parenting styles: authoritarian, authoritative, permissive, or neglectful. The researchers then compare the grades of each group and discover that children raised with the authoritative parenting style had better grades than the other three groups. Although these results may seem convincing, it turns out that parents that use the authoritative parenting style also have higher SES class and can afford to provide their children with more intellectually enriching activities like summer STEAM camps.

9. Movies and Donations Study

Will the type of movie a person watches affect the likelihood that they donate to a charitable cause? To answer this question, a researcher decides to solicit donations at the exit point of a large theatre.

He chooses to study two types of movies: action-hero and murder mystery. After collecting donations for one month, he tallies the results. Patrons that watched the action-hero movie donated more than those that watched the murder mystery. Can you think of why these results could be due to something other than the movie?

10. Gender and Mindfulness Apps Study

Researchers decide to conduct a study on whether men or women benefit from mindfulness the most. So, they recruit office workers in large corporations at all levels of management.

Then, they divide the research sample up into males and females and ask the participants to use a mindfulness app once each day for at least 15 minutes.

At the end of three weeks, the researchers give all the participants a questionnaire that measures stress and also take swabs from their saliva to measure stress hormones.

The results indicate the women responded much better to the apps than males and showed lower stress levels on both measures.

Unfortunately, it is difficult to conclude that women respond to apps better than men because the researchers could not randomly assign participants to gender. This means that there may be extraneous variables that are causing the results.

11. Eyewitness Testimony Study

Type: True Experimental Design

To study the how leading questions on the memories of eyewitnesses leads to retroactive inference , Loftus and Palmer (1974) conducted a simple experiment consistent with true experimental design.

Research participants all watched the same short video of two cars having an accident. Each were randomly assigned to be asked either one of two versions of a question regarding the accident.

Half of the participants were asked the question “How fast were the two cars going when they smashed into each other?” and the other half were asked “How fast were the two cars going when they contacted each other?”

Participants’ estimates were affected by the wording of the question. Participants that responded to the question with the word “smashed” gave much higher estimates than participants that responded to the word “contacted.”

12. Sports Nutrition Bars Study

A company wants to test the effects of their sports nutrition bars. So, they recruited students on a college campus to participate in their study. The students were randomly assigned to either the treatment condition or control condition.

Participants in the treatment condition ate two nutrition bars. Participants in the control condition ate two similar looking bars that tasted nearly identical, but offered no nutritional value.

One hour after consuming the bars, participants ran on a treadmill at a moderate pace for 15 minutes. The researchers recorded their speed, breathing rates, and level of exhaustion.

The results indicated that participants that ate the nutrition bars ran faster, breathed more easily, and reported feeling less exhausted than participants that ate the non-nutritious bar.

13. Clinical Trials

Medical researchers often use true experiments to assess the effectiveness of various treatment regimens. For a simplified example: people from the population are randomly selected to participate in a study on the effects of a medication on heart disease.

Participants are randomly assigned to either receive the medication or nothing at all. Three months later, all participants are contacted and they are given a full battery of heart disease tests.

The results indicate that participants that received the medication had significantly lower levels of heart disease than participants that received no medication.

14. Leadership Training Study

A large corporation wants to improve the leadership skills of its mid-level managers. The HR department has developed two programs, one online and the other in-person in small classes.

HR randomly selects 120 employees to participate and then randomly assigned them to one of three conditions: one-third are assigned to the online program, one-third to the in-class version, and one-third are put on a waiting list.

The training lasts for 6 weeks and 4 months later, supervisors of the participants are asked to rate their staff in terms of leadership potential. The supervisors were not informed about which of their staff participated in the program.

The results indicated that the in-person participants received the highest ratings from their supervisors. The online class participants came in second, followed by those on the waiting list.

15. Reading Comprehension and Lighting Study

Different wavelengths of light may affect cognitive processing. To put this hypothesis to the test, a researcher randomly assigned students on a college campus to read a history chapter in one of three lighting conditions: natural sunlight, artificial yellow light, and standard fluorescent light.

At the end of the chapter all students took the same exam. The researcher then compared the scores on the exam for students in each condition. The results revealed that natural sunlight produced the best test scores, followed by yellow light and fluorescent light.

Therefore, the researcher concludes that natural sunlight improves reading comprehension.

See Also: Experimental Study vs Observational Study

Experimental design is a central feature of scientific research. When done using true experimental design, causality can be infered, which allows researchers to provide proof that an independent variable affects a dependent variable. This is necessary in just about every field of research, and especially in medical sciences.

Chris

  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 19 Top Cognitive Psychology Theories (Explained)
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 119 Bloom’s Taxonomy Examples
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ All 6 Levels of Understanding (on Bloom’s Taxonomy)
  • Chris Drew (PhD) https://helpfulprofessor.com/author/chris-drew-phd/ 15 Self-Actualization Examples (Maslow's Hierarchy)

Leave a Comment Cancel Reply

Your email address will not be published. Required fields are marked *

examples of design an experiment

  • Statistics for Experimenters
  • Hunter Award

101 Ways to Design an Experiment, or Some Ideas About Teaching Design of Experiments by William G. Hunter

williamghunter.net > Articles > 101 Ways to Design an Experiment

I want to share some ideas about teaching design of experiments. They are related to something I have often wondered about: whether it is possible to let students experience first-hand all the steps involved in an experimental investigation-thinking of the problem, deciding what experiments might shed light on the problem, planning the runs to be made, carrying them out, analyzing the results, and writing a report summarizing the work. One curiosity about most courses on experimental designing, it seems to me, is that students get no practice designing realistic experiments although, from homework assignments, they do get practice analyzing data. Clearly, however, because of limitations of time and money, if students are to design experiments and actually carry them out, they cannot be involved with elaborate investigations. Therefore, the key question is this: Is it feasible for students to devise their own simple experiments and carry them through to completion and, if so, is it of any educational value to have them do so? I believe the answer to both parts of the question is yes, and the purpose of this paper is to explain why.

The particular design course I have taught most often is a one-semester course that includes these standard statistical techniques: t-tests (paired and unpaired), analysis of variance (primarily for one-way and two-way layouts), factorial and fractional factorial designs (emphasis given to two-level designs), the method of least squares (for linear and nonlinear models), and response surface methodology. The value of randomization and blocking is stressed. Special attention is given to these questions: What are the assumptions being made? What if they are violated? What common pitfalls are encountered in practice? What precautions can be taken to avoid these pitfalls? In analyzing data how can one determine whether the model is adequate? Homework problems provide ample opportunity for carefully examining residuals, especially by plotting them. The material for this course is discussed in the context of the iterative nature of experimental investigations.

Most of those who have taken this course have been graduate students, principally in engineering (chemical, civil, mechanical, industrial, agricultural) but also in a variety of other fields including statistics, food science, forestry, chemistry, and biology. There is a prerequisite of a one-semester introductory statistics course, but this requirement is customarily waived for graduate students with the understanding that they do a little extra work to catch up.

Simulated Data

One possibility is to use simulated data, and the scope here is wide, especially with the availability of computers. At times I have given assignments of this kind, especially response surface problems. Each student receives his or her own sets of data based upon the designs he or she chooses.

The problem might be set up as one involving a chemist who wishes to find the best settings of these five variables-temperature, concentration, pH, stirring rate, and amount of catalyst-and to determine the local geography of the response surface(s) near the optimum. To define the region of operability, ranges are specified for each of these variables. Perhaps more than one response can be measured, for instance, yield and cost. The student is given a certain budget, either in terms of runs or money, the latter being appropriate if there is an option provided for different types of experiments which have different costs. The student can ask for data in, say, three stages. Between these stages the accumulated data can be analyzed so that future experiments can be planned on the basis of all available information.

In generating the data, which contains experimental error, there are many possibilities. Different models can be used for each student, the models not necessarily being the usual simple first-order or second-order linear models. Not all variables need to be important, that is, some may be dummy variables (different ones for different students). Time trends and other abnormalities can be deliberately introduced into the data provided to the students.

The student prepares a report including a summary of the most important facts discovered about his or her system and perhaps containing a contour map of the response surface(s) for the two most important variables (if three of the five variables are dummies, this map should correspond to the true surface from which the data were generated). It is instructive then to compare each student's findings with the corresponding true situation.

Students enjoy games of this type and learn a considerable amount from them. For many it is the first time they realize just how frustrating the presence of an appreciable amount of experimental error can be. The typical prearranged undergraduate laboratory experiments in physics and chemistry, of course, have all important known sources of experimental error removed (typically the data are supposed to fall on a straight line-exactly-or else).

One's first reaction might be that there are not enough possibilities for experiments of this kind. But this is incorrect, as is illustrated by Table 1, which lists some of the experiments reported by the students. Experiments number 1-63 are of the home type and experiments number 64-101 are of the laboratory type. Note the variety of studies done. To save space, for most variables the levels used are not given. Anyway, they are not essential for our purposes here. Most of these experiments were factorial designs. Let us look briefly at the first two home experiments and the first two laboratory experiments.

Bicycle Experiment

In experiment number 1 the student, Norman Miller, using a factorial design with all points replicated, studied the effects of three variables-seat height (26, 30 inches), light generator (on or off), and tire pressure (40, 55 psi)-on two responses-time required to ride his bicycle over a particular course and his pulse rate at the finish of each run (pulse rate at the start was virtually constant). To him the most surprising result was how much he was slowed down by having the generator on. The average time for each run was approximately 50 seconds. He discovered that raising the seat reduced the time by about 10 seconds, having the generator on increased it by about one-third that amount and inflating the tires to 55 psi reduced the time by about the same amount that the generator increased it. He planned further experiments.

Popcorn Experiment

In experiment number 2 the student, Karen Vlasek, using a factorial design with four replicated center points, determined the effects of three variables on the amount of popcorn produced. She found, for example, that although double the yield was obtained with the gourmet popcorn, it cost three times as much as the regular popcorn. By using this experimental design she discovered approximately what combination of variables gave her best results. She noted that it differed from those recommended by the manufacturer of her popcorn popper and both suppliers of popcorn.

Dilution experiment

In experiment number 64 the student, Dean Hafeman, studied a routine laboratory procedure (a dilution) that was performed many times each day where he worked-almost on a mass production basis. The manufacturer of the equipment used for this work emphasized that the key operations, the raising and lowering of two plungers, had to be done slowly for good results. The student wondered what difference it would make if these operations were done quickly. He set up a factorial design in which the variables were the raising and lowering of plunger A and the raising and lowering of plunger B. The two levels of each variable were slow and fast. To his surprise, he found that none of the variables had any measurable effect on the readings. This conclusion had important practical implications in his laboratory because it meant that good results could be obtained even if the plungers were moved quickly; consequently a considerable amount of time could be saved in doing this routing work.

Trouble-shooting Experiment

In experiment number 65 the student, Rodger Melton, solved a trouble-shooting problem that he encountered in his research work. In one piece of his apparatus an extremely small quantity of a certain chemical was distilled to be collected in a second piece of the apparatus. Unfortunately, some of this material condensed prematurely in the line between these two pieces of apparatus. Was there a way to prevent this? By using a factorial design the problem was solved, it being discovered that by suitably adjusting the voltage and using a J-tube none of the material condensed prematurely. The column temperature, which was discovered to be minor consequence as far as premature condensation was concerned (a surprise), could be set to maximize throughput.

Most Popular Experiments

The most popular home experiments have concerned cooking since recipes lend themselves so readily to variations. What to measure for the response has sometimes created a problem. Usually a quality characteristic such as taste has been determined (preferably independently by a number of judges) on a 1-5 or 1-10 scale. Growing seeds has also been an easy and popular experiment. In the laboratory experiments, sensitivity or robustness tests have been the most common (the dilution experiment, number 65, discussed above is of this type). Typically the experimenter varies the conditions for a standard analytical procedure (for example, for the measurement of chemical oxygen demand, COD) to see how much the measured value is affected. That is, if the standard procedure calls for the addition of 20 ml. of a particular chemical, 18 ml. and 22 ml. might be tried. Results from such tests are revealing no matter which way they turn out. One student, for example, concluded ``The results sort of speak for themselves. The test is not very robust.'' Another student, who studied a different test, reported ``The results of the Yates analysis show that the COD test is indeed robust.''

Structuring the Assignment

I have always made these assignments completely open, saying that they could study anything that interested them. I have tended to favor home rather than laboratory experiments. I have suggested they choose something they care about, preferably something they've wondered about. Such projects seem to turn out better than those picked for no particularly good reason. Here is how a few of the reports began: ``Ever since we came to Madison my family has experienced difficulty in making bread that will rise properly.'' ``Since moving to Madison, my green thumb has turned black. Every plant I have tried to grow has died.'' (Nothing works in Madison?) ``This experiment deals with how best to prepare pancakes to satisfy the group of four of us living together.'' ``I rent an efficiency on the second floor of an apartment building which has cooking facilities on the first floor only. When I cook rice, my staple food,I have to make one to three visits to the kitchen to make sure it is ready to be served and not burned. Because of this inconvenience, I wanted to study the effects of certain variables on the cooking time of rice.'' ``My wife and I were wondering if our oldest daughter had a favorite toy.'' ``For the home brewer, a small kitchen blender does a good job of grinding malt, provided the right levels of speed, batch size and time are used. This is the basis of the experimental design.'' ``During my career as a beer drinker, various questions have arisen.'' ``I do much of the maintenance and repair work around my home, and some of the repairs require the use of epoxy glue. I was curious about some of the factors affecting its performance.'' ``My wife and I are interested in indoor plants, and often we like to give them as gifts. We usually select a cutting from one of our fifty or so plants, put it in a glass of water until it develops roots, and then pot it. We have observed that sometimes the cutting roots quickly and sometimes it roots slowly, so we decided to experiment with several factors that we thought might be important in this process.'' ``I chose to find out how my shotguns were firing. I reload my own shells with powders that were recommended to me, one for short range shooting and one for long range shooting. I had my doubts if the recommendations were valid.''

What Did the Students Learn?

The conclusion reached in this last experiment was: ``As it looks now, I should use my Gun A with powder C for close range shooting, such as for grouse and woodcock. I should use gun B and powder D for longer range shooting as for ducks and geese.'' As is illustrated by this example and the first four discussed above, the students sometimes learned things that were directly useful to them. Some other examples: ``Spending $70 extra to buy tape deck 2 is not justified as the difference in sound is better with the other, or probably there is no difference. The synthesizer appears not to affect the quality of the sound.'' In operating my calculator I can anticipate increasing operation time by an additional 15 minutes and 23 seconds on the average by charging 60 minutes instead of 30 minutes.'' ``In conclusion, the Chinese dumplings turned out very pretty and very delicious, especially the ones with thin skins. I think this was a successful experiment.

Naturally, not all experiments were successful. ``A better way to have run the experiment would have been to...'' Various troubles arose. ``The reason that there is only one observation for the eighth row is that one of the cups was knocked over by a curious cat.'' ``One observation made during the experiment was that the child's posture may have affected the duration of the ride. Mark (13 pounds) leaned back, thus distributing his weight more evenly. On the other hand, Mike (22 pounds) preferred to sit forward, which may have made the restoring action of the spring more difficult.'' (The trouble here was that the variable the student wanted to study was weight, not posture.) Another student, who was studying factors that affected how fast snow melted on sidewalks, had some of his data destroyed because the sun came out brightly (and unexpectedly) one day near the end of his experiment and melted all the snow.

Because of such troubles these simple experiments have served as useful vehicles for discussing important practical points that arise in more serious scientific investigations. Excellent questions for this purpose have arisen from these studies. ``Do I really need to use a completely randomized experiment? It will take much longer to do that way?'' There have been good examples that illustrate the sequential nature of experimentation and show how carefully conceived experimental designs can help in solving problems.''...This must have been the main reason why the first experiment completely failed. I decided to try another factorial design. Synchronization of the flash unit and camera still bothered me. I decided to experiment with...'' some other factors.

As a result of these projects students seem to get a much better appreciation of the efficiency and beauty of experimental designs. For example, in this last experiment the student concluded: ``The factorial design proved to be efficient in solving the problem. I did get off on the wrong track initially, but the information learned concerning synchronization is quite valuable.'' Another student: ``It is interesting to see how a few experiments can give so much information.''

There is another point, and it is not the least important. Most of the students had fun with these projects. And I did, too. Just looking through Table 1 suggests why this is so, I think. One report ended simply: ``This experiment was really fun!'' Many students have reported that this was the best part of the course.

There is a tendency sometimes for experimenters to discount what they have learned, this being true not only for students in this class, but also for experimenters in general. That is, they learn more than they realize. Hindsight is the culprit. On pondering a certain conclusion, one is prone to say ``Oh yes, that makes sense. Yes, that's the way it should be. That's what I would have expected.'' While this reaction is often correct, one is sometimes just fooling oneself, that is, interrogation at the outset would have produced exactly the opposite opinion. So that students could more accurately gauge what they learned from their simple experiments, I tried the following and it seemed to work: after having decided on the experimental runs to perform, the student guessed what his or her major conclusions would be and wrote them down. Upon completion of the assignment, these guesses were checked against the actual results, which immediately provided a clear picture of what was learned (the surprises) and what was confirmed (the non-surprises).

I now tend to spend much more time introducing each new topic than I used to. Providing appropriate motivation is extremely important. For classes I have had the privilege of teaching-whether in universities or elsewhere-I have found that it has been better to use concrete examples followed by the general theory rather than the reverse. I now try to describe a particular problem in some detail, preferably a real one with which I am familiar, and then pose the question: What would YOU do? I find it helpful to resist the temptation to move on too quickly to the prepared lecture so that there is ample time for students to consider this question seriously, to discuss it, to ask questions of clarification, to express ideas they have, and ultimately (and this really the object of the exercise) to realize that a genuine problem exists and they do not know how to solve it. They are then eager to learn. And after we have finished with that particular topic they know they have learned something of value. (I realize as I write this that I have been strongly influenced by George Barnard, who masterfully conducted a seminar in this manner at Imperial College, London, in 1964-65, which I was fortunate to have attended.)

Current examples are well-received, especially controversies (for example, weather modification experiments). Some useful sources are court cases, advertisements, TV and radio commercials, and ``Consumer Reports''. An older controversy still of considerable interest from a pedagogical point of view is the AD-X2 battery additive case. Gosset's comments on the Lanarkshire Milk Experiment are still illuminating. Sometimes trying to get the data that support a particular TV commercial or the facts from both parties of a dispute has made an interesting side project to carry along through a semester.

Having each student exercise his or her own initiative in thinking up an experiment and carrying it through to completion has turned out successfully. Using games involving simulated data has also been useful. I have incorporated such projects, principally of the former type, into courses I have taught, and I urge others to consider doing the same. Why?

First of all, it's fun. The students have generally welcomed the opportunity to learn something about a particular question they have wondered about. I have been fascinated to see what they have chosen to study and what conclusions they have reached, so it has been fun for me, too. The students and I have certainly learned interesting things we did not know before. Why doesn't my bread rise? Why don't my flowers grow? Is this analytical procedure robust? Will carrying a crutch make it easier for me to get a ride hitchhiking? (Incidentally, it made it harder.)

Secondly, the students have gotten a lot out of such experiences. There is a definite deepening of understanding that comes from having been through a study from start to finish-deciding on a problem, the variables, the ranges of the variables, and how to measure the response(s), actually running the experiment and collecting the data, analyzing the results, learning what the practical consequences are, and finally writing a report. Being veterans, not of the war certainly but of a minor skirmish at least, the students seem more comfortable and confident with the entire subject of the design of experiments, especially as they share their experiences with one another.

Thirdly, I have found it particularly worthwhile to discuss with them in class some of the practical questions that naturally emerge from these studies. ``What can I do about missing data?'' ``These first three readings are questionable because I think I didn't have my technique perfected then-What should I do?'' ``A most unusual thing happened during this run, so should I analyze this result with all the others or leave it out?'' They are genuinely interested in such questions because they have actually encountered them, not just read about them in a textbook. Sometimes there is no simple answer, and lively and valuable discussions then occur. Such discussions, I hope, help them understand that, when they confront real problems later on which refuse to look like those in the textbooks no matter how they are viewed, there are alternatives to pretending they do and charging ahead regardless or forgetting about them in hopes they will go away or adopting a ``non-statistical'' approach-in a word, there are alternatives to panic.

Table 1. List of some studies done by students in an experimental design course.

  • variables: seat height (26, 30 inches), generator (off,on), tire pressure (40, 55 psi) responses: time to complete fixed course on bicycle and pulse rate at finish
  • variables: brand of popcorn (ordinary, gourmet), size of batch (1/3,2/3 cup), popcorn to oil ratio (low, high) responses: yield of popcorn
  • variables: amount of yeast, amount of sugar, liquid (milk, water), rise temperature, rise time responses: quality of bread, especially the total rise
  • variables: number of pills, amount of cough syrup, use of vaporizer responses: how well twins, who had colds, slept during the night
  • variables: speed of film, light (normal, diffused), shutter speed responses: quality of slides made close up with flash attachment on camera
  • variables: hours of illumination, water temperature, specific gravity of water responses: growth rate of algae in salt water aquarium
  • variables: temperature, amount of sugar, food prior to drink (water, salted popcorn) responses: taste of Koolaid
  • variables: direction in which radio is facing, antenna angle, antenna slant responses: strength of radio signal from particular AM station in Chicago
  • variables: blending speed, amount of water, temperature of water, soaking time before blending responses: blending time for soy beans
  • variables: charge time, digits fixed, number of calculations performed responses: operation time for pocket calculator
  • variables: clothes dryer (A,B), temperature setting, load responses: time until dryer stops
  • variables: pan (aluminum, iron), burner on stove, cover for pan (no, yes) responses: time to boil water
  • variables: aspirin buffered? (no, yes) dose, water temperature responses: hours of relief from migraine headache
  • variables: amount of milk powder added to milk, heating temperature, incubation temperature responses: taste comparison of homemade yogurt and commercial brand
  • variables: pack on back (no, yes), footwear (tennis shoes, boots), run (7, 14 flights of steps) responses: time required to run up steps and heartbeat at top
  • variables: width to height ratio of sheet of balsa wood, slant angle, dihedral angle, weight added, thickness of wood responses: length of flight of model airplane
  • variables: level of coffee in cup, devices (nothing, spoon placed across top of cup facing up), speed of walking responses: how much coffee spilled while walking
  • variables: type of stitch, yarn gauge, needle size responses: cost of knitting scarf, dollars per square foot
  • variables: type of drink (beer, rum), number of drinks, rate of drinking, hours after last meal responses: time to get steel ball through a maze
  • variables: size of order, time of day, sex of server responses: cost of order of french fries, in cents per ounce
  • variables: brand of gasoline, driving speed, temperature responses: gas mileage for car
  • variables: stamp (first class, air mail), zip code (used, not used), time of day when letter mailed responses: number of days required for letter to be delivered to another city
  • variables: side of face (left, right), beard history (shaved once in two years0-sideburns, shaved over 600 times in two years-just below sideburns) responses: length of whiskers 3 days after shaving
  • variables: eyes used (both, right), location of observer, distance responses: number of times (out of 15) that correct gender of passerby was determined by experimenter with poor eyesight wearing no glasses
  • variables: distance to target, guns (A,B), powders(C,D) responses: number of shot that penetrated a one foot diameter circle on the target
  • variables: oven temperature, length of heating, amount of water responses: height of cake
  • variables: strength of developer, temperature, degree of agitation responses: density of photographic film
  • variables: brand of rubber band, size, temperature responses: length of rubber band before it broke
  • variables: viscosity of oil, type of pick-up shoes, number of teeth in gear responses: speed of H.O. scale slot racers
  • variables: type of tire, brand of gas, driver (A,B) responses: time for car to cover one-quarter mile
  • variables: temperature, stirring rate, amount of solvent responses: time to dissolve table salt
  • variables: amounts of cooking wine, oyster sauce,sesame oil responses: taste of stewed chicken
  • variables: type of surface, object (slide rule, ruler, silver dollar), pushed? (no,yes) responses: angle necessary to make object slide
  • variables: ambient temperature, choke setting, number of charges responses: number of kicks necessary to start motorcycle
  • variables: temperature, location in oven, biscuits covered while baking? (no,yes) responses: time to bake biscuits
  • variables: temperature of water, amount of grease, amount of water conditioner responses: quantity of suds produced in kitchen blender
  • variables: person putting daughter to bed (mother, father), bed time, place (home, grandparents) responses: toys child chose to sleep with
  • variables: amount of light in room, type of music played, volume responses: correct answers on simple arithmetic test, time required to complete test, words remembered (from list of 15)
  • variables: amounts of added Turkish, Latakia, and Perique tobaccos responses: bite, smoking characteristics, aroma, and taste of tobacco mixture
  • variables: temperature, humidity, rock salt responses: time to melt ice
  • variables: number of cards dealt at one time, position of picker relative to the dealer responses: points in games of sheepshead, a card game
  • variables: marijuana (no, yes),tequila (no, yes),sauna (no, yes) responses: pleasure experienced in subsequent sexual intercourse
  • variables: amounts of flour, eggs, milk responses: taste of pancakes, consensus of group of four living together
  • variables: brand of suntan lotion, altitude, skier responses: time to get sun burned
  • variables: amount of sleep the night before, substantial exercise during the day? (no, yes), eat right before going to bed? (no, yes) responses: soundness of sleep, average reading from five persons
  • variables: brand of tape deck used for playing music, bass level, treble level, synthesizer? (no, yes) responses: clearness and quality of sound, and absence of noise
  • variables: type of filter paper, beverage to be filtered, volume of beverage responses: time to filter
  • variables: type of ski, temperature, type of wax responses: time to go down ski slope
  • variables: ambient temperature for dough when rising, amount of vegetable oil, number of onions responses: four quality characteristics of pizza
  • variables: amount of fertilizer, location of seeds (3 x 3 Latin square) responses: time for seeds to germinate
  • variables: speed of kitchen blender, batch size of malt, blending time responses: quality of ground malt for brewing beer
  • variables: soft drink (A,B), container (can, bottle), sugar free? (no, yes) responses: taste of drink from paper cup
  • variables: child's weight (13, 22 pounds),spring tension (4, 8 cranks), swing orientation (level, tilted) responses: number of swings and duration of these swings obtained from an automatic infant swing
  • variables: orientation of football, kick (ordinary, soccer style),steps taken before kick, shoe (soft, hard) responses: distance football was kicked
  • variables: weight of bowling ball, spin, bowling lane (A, B) responses: bowling pins knocked down
  • variables: distance from basket type of shot, location on floor responses: number of shots made (out of 10) with basketball
  • variables: temperature, position of glass when pouring soft drink, amount of sugar added responses: amount of foam produced when pouring soft drink into glass
  • variables: brand of epoxy glue, ratio of hardener to resin, thickness of application, smoothness of surface, curing time responses: strength of bond between two strips of aluminum
  • variables: amount of plant hormone, water (direct from tap, stood out for 24 hours), window in which plant was put responses: root lengths of cuttings from purple passion vine after 21 days
  • variables: amount of detergent (1/4, 1/2 cup), bleach (none, 1 cup), fabric softener (not used, used) responses: ability to remove oil and grape juice stains
  • variables: skin thickness, water temperature, amount of salt responses: time to cook Chinese meat dumpling
  • variables: appearance (with and without a crutch), location, time responses: time to get a ride hitchhiking and number of cars that passed before getting a ride
  • variables: frequency of watering plants, use of plant food (no, yes), temperature of water responses: growth rate of house plants
  • variables: plunger A up (slow, fast),plunger A down (slow, fast), plunger B up (slow, fast) plunger B down (slow, fast) responses: reproducibility of automatic diluter, optical density readings made with spectrophotometer
  • variables: temperature of gas chromatograph column, tube type (U, J), voltage responses: size of unwanted droplet
  • variables: temperature, gas pressure, welding speed responses: strength of polypropylene weld,manual operation
  • variables: concentration of lysozyme, pH, ionic strength, temperature responses: rate of chemical reaction
  • variables: anhydrous barium peroxide powder,sulfur,charcoal dust responses: length of time fuse powder burned and the evenness of burning
  • variables: air velocity, air temperature, rice bed depth responses: time to dry wild rice
  • variables: concentration of lactose crystal, crystal size, rate of agitation responses: spread ability of caramel candy
  • variables: positions of coating chamber, distribution plate, and lower chamber responses: number of particles caught in a fluidized bed collector
  • variables: proportional band, manual reset, regulator pressure responses: sensitivity of a pneumatic valve control system for a heat exchanger
  • variables: chloride concentration, phase ratio, total amine concentration, amount of preservative added responses: degree of separation of zinc from copper accomplished by extraction
  • variables: temperature, nitrate concentration, amount of preservative added responses: measured nitrate concentration in sewage, comparison of three different methods
  • variables: solar radiation collector size, ratio of storage capacity to collector size, extent of short-term intermittency of radiation, average daily radiation on three successive days responses: efficiency of solar space-heating system, a computer simulation
  • variables: pH, dissolved oxygen content of water, temperature responses: extent of corrosion of iron
  • variables: amount of sulfuric acid, time of shaking milk-acid mixture, time of final tempering responses: measurement of butterfat content of milk
  • variables: mode (batch, time-sharing), job size, system utilization (low, high) responses: time to complete job on computer
  • variables: flow rate of carrier gas, polarity of stationary liquid phase, temperature responses: two different measures of efficiency of operation of gas chromatograph
  • variables: pH of assay buffer, incubation time, concentration of binder responses: measured cortisol level in human blood plasma
  • variables: aluminum, boron, cooling time responses: extent of rock candy fracture of cast steel
  • variables: magnification, read out system (micrometer, electronic), stage light responses: measurement of angle with photogrammetric instrument
  • variables: riser height, mold hardness, carbon equivalent responses: changes in height, width, and length dimensions of cast metal
  • variables: amperage, contact tube height, travel speed, edge preparation responses: quality of weld made by submerged arc welding process
  • variables: time, amount of magnesium oxide, amount of alloy responses: recovery of material by steam distillation
  • variables: pH, depth, time responses: final moisture content of alfalfa protein
  • variables: deodorant, concentration of chemical, incubation time responses: odor produced by material isolated from decaying manure, after treatment
  • variables: temperature variation, concentration of cupric sulfate concentration of sulfuric acid responses: limiting currents on totaling disk electrode
  • variables: air flow, diameter of bead, heat shield (no, yes) responses: measured temperature of a heated plate
  • variables: voltage, warm-up procedure, bulb age responses: sensitivity of micro densitometer
  • variables: pressure, amount of ferric chloride added, amount of lime added responses: efficiency of vacuum filtration of sludge
  • variables: longitudinal feed rate, transverse feed rate, depth of cut responses: longitudinal and thrust forces for surface grinding operation
  • variables: time between preparation of sample and refluxing, reflux time, time between end of reflux and start of titrating responses: chemical oxygen demand of samples with same amount of waste (acetanilide)
  • variables: speed of rotation, thrust load, method of lubrication responses: torque of taper roller bearings
  • variables: type of activated carbon, amount of carbon, pH responses: adsorption characteristics of activated carbon used with municipal waste water
  • variables: amounts of nickel, manganese, carbon responses: impact strength of steel alloy
  • variables: form (broth, gravy), added broth (no, yes), added fat (no, yes), type of meat (lamb, beef) responses: percentage of panelists correctly identifying which samples were lamb
  • variables: well (A, B), depth of probe, method of analysis (peak height, planimeter) responses: methane concentration in completed sanitary landfill
  • variables: paste (A, B), preparation of skin (no, yes), site (sternum, forearm) responses: electrocardiogram reading
  • variables: lime dosage, time of flocculation, mixing speed responses: removal of turbidity and hardness from water
  • variables: temperature difference between surface and bottom waters, thickness of surface layer, jet distance to thermocline, velocity of jet, temperature difference between jet and bottom waters responses: mixing time for an initially thermally stratified tank of water

Articles by Bill

Thoughts about bill's work and life, articles by george box.

Related Topics

  • Cardiovascular
  • Drug Delivery
  • General Hospital
  • Neurological
  • Radiological

Recent in  Sectors

examples of design an experiment

  • Design & Engineering
  • New Technologies

Recent in  Product Development

examples of design an experiment

  • 3D Printing
  • Contract Manufacturing
  • Sterilization

Recent in  Manufacturing

examples of design an experiment

  • Medical Device Regulations

Recent in  Regulatory & Quality

examples of design an experiment

  • Artificial Intelligence
  • Augmented & Virtual Reality
  • Medical IoT
  • Wearable Medical Devices

Recent in  Digital Health

examples of design an experiment

Recent in  Business

examples of design an experiment

  • Source Products & Supplies
  • Outsource Services
  • Browse Equipment
  • Supplier News

Design of Experiments: An Overview and Application Example

March 1, 1996

Medical Device & Diagnostic Industry Magazine | MDDI Article Index

John S. Kim and James W. Kalb

A strategy for planning research known as design of experiments (DOE) was first introduced in the early 1920s when a scientist at a small agricultural research station in England, Sir Ronald Fisher, showed how one could conduct valid experiments in the presence of many naturally fluctuating conditions such as temperature, soil condition, and rainfall. The design principles that he developed for agricultural experiments have been successfully adapted to industrial and military applications since the 1940s.

In the past decade, the application of DOE has gained acceptance in the United States as an essential tool for improving the quality of goods and services. This recognition is partially due to the work of Genichi Taguchi, a Japanese quality expert, who promoted the use of DOE in designing robust products--those relatively insensitive to environmental fluctuations. It is also due to the recent availability of many user-friendly software packages, improved training, and accumulated successes with DOE applications.

DOE techniques are not new to the health-care industry. Medical researchers have long understood the importance of carefully designed experiments. These techniques, however, have not been applied as rigorously in the product and design phases as in the clinical evaluation phase of product development. The recent focus by FDA on process validation underscores the need for well-planned experimentation. Such experiments can provide data that will enable device manufacturers to identify the causes of performance variations and to eliminate or reduce such variations by controlling key process parameters, thereby improving product quality.

Properly designed and executed experiments will generate more-precise data while using substantially fewer experimental runs than alternative approaches. They will lead to results that can be interpreted using relatively simple statistical techniques, in contrast to the information gathered in observational studies, which can be exceedingly difficult to interpret. This article discusses the concept of process validation and shows how simple two-level factorial experimental designs can rapidly increase the user's knowledge about the behavior of the process being studied.

THE PROCESS VALIDATION CONCEPT

The purpose of process validation is to accumulate data that demonstrate with a high degree of confidence that the process will continue to produce products meeting predetermined requirements. Because such capability is necessary to ensure that products perform safely and effectively, process validation is required by FDA's good manufacturing practices (GMP) regulation. For products that will be exported to the European Union, the International Organization for Standardization's ISO 9000 series of standards also requires that certain processes be identified, validated, and monitored.

Table I shows the sequence of events in the product development cycle that lead to process validation, along with the tasks to be accomplished at each phase and selected tools to be used. As the table indicates, during the process development phase the process should be evaluated to determine what would happen when conditions occur that stress it. Such studies, often called process characterization, can be done by varying the key elements of the process (i.e., equipment, materials, and input parameters such as temperature, pressure, and so forth) and determining which sources of variation have the most impact on process performance. One proven method to determine the sources of variability is DOE.

The process should also be challenged to discover how outputs change as process variables fluctuate within allowable limits. This testing is essential to learning what steps must be taken to protect the process if worst-case conditions for input variables ever occur during actual manufacturing operations. Once again, an effective method for studying various combinations of variables is DOE. In particular, simple two-level factorial and fractional factorial designs are useful techniques for worst-case-scenario studies.

TRADITIONAL VS. FACTORIAL DESIGNS

One traditional method of experimentation is to evaluate only one variable (or factor) at a time--all of the variables are held constant during test runs except the one being studied. This type of experiment reveals the effect of the chosen variable under set conditions; it does not show what would happen if the other variables also changed.

For example, blood coagulation rate could be studied as a function of ion concentration and the concentration of the enzyme thrombin. To measure the effect of varying thrombin levels, the ion concentration is held constant at a prechosen low level. Since there is variability in the coagulation time measurement, at least two experiments should be run at each point, for a total of four runs. Figure 1 shows the design and hypothetical results for such an experiment. The average effect of changing the thrombin level from low to high is the average at the high level minus the average at the low level, or

Similarly, to measure the effect of ion concentration, thrombin is held at its low level and another experiment is performed with ion concentration at its high level. Again, two runs are necessary to determine the average effect. Using the results shown in Figure 1, the average effect of ion concentration is

After a total of six runs it is known that at the low ion concentration, the coagulation rate goes up with an increasing thrombin level, and at the low thrombin concentration, the coagulation rate goes up with an increasing ion level. But what would happen if both variables were at their high level? If the effect of each factor stayed the same, the result would be a simple combination of the two effects. For the above example, such an assumption would result in the sum of the low-level average and the two high-level averages, or

9.5 + 20 + 11.5 = 41.

It was Fisher's idea that it was much better to vary all the factors at once using a factorial design, in which experiments are run for all combinations of levels for all of the factors. With such a study design, testing will reveal what the effect of one variable would be when the other factors are changing. Using a factorial design for the blood coagulation example, as shown in Figure 2, running a test with both variables at their high level yielded a rate of 60, not 41 as previously estimated. If the goal of the study were to maximize coagulation rate, it would be important to discover this synergistic response, and it could not be detected with the one-factor-at-a-time experiment.

Another advantage of the factorial design is its efficiency. As indicated in the figure, only one run would be needed for each point, since there will be two runs at each level of each factor. Thus, the factorial design allows each factor to be evaluated with the same precision as in the one-factor-at-a-time experiment, but with only two-thirds the number of runs. Montgomery has shown that this relative efficiency of the factorial experiments increases as the number of variables increases (see bibliography, page 88). In other words, the effort saved by such internal replication becomes even more dramatic as more factors are added to an experiment.

Calculation of the Main Effects. With a factorial design, the average main effect of changing thrombin level from low to high can be calculated as the average response at the high level minus the average response at the low level, or, using the data from Figure 2,

Similarly, the average main effect of ion concentration is the average response at the high level minus the average response at the low level, or

The fact that these effects have a positive value indicates that the response (i.e., the coagulation rate) increases as the variables increase. The larger the magnitude of the effect, the more critical the variable.

Estimate of the Interaction. A factorial design makes it possible not only to determine the main effects of each variable, but also to estimate the interaction (i.e., synergistic effect) between the two factors, a calculation that is impossible with the one-factor-at-a-time experiment design. As shown in Figure 2, the effects of thrombin at the low and high levels of ion concentration are 20 and 40, respectively. Thus, the effect of thrombin concentration depends upon the level of ion concentration; in other words, there is an interaction between the two variables. The interaction effect is the average difference between the effect of thrombin at the high level of ion concentration and the effect of thrombin at the low level of ion concentration, or

SETTING UP A TWO-LEVEL FACTORIAL DESIGN

A two-factor, two-level factorial design is normally set up by building a table using minus signs to show the low levels of the factors and plus signs to show the high levels of the factors. Table II shows a factorial design for the application example. The first column in the table shows the run number for the four possible runs. The next two columns show the level of each main factor, A and B, in each run, and the fourth column shows the resulting level of the interaction between these factors, which is found by multiplying their coded levels (­1 or +1). Columns 5 and 6 show the actual values assigned to the low and high variable levels in the design. Test runs using each of these four combinations constitute the experiment. The last column contains the responses from the experiment, which in Table II are the data from Figure 2. Filling in this column requires the hard work of running each experiment and then recording the result.

In some studies there may be more than two important variables. For example, pH level has an important influence on coagulation rate and could be a third factor in the example experiment. The resulting three-factor, two-level design is shown in Table III and Figure 3. With three two-level factors, eight experiments will be required, and there will be four replicates of each level of each factor, further increasing the precision of the result. There will be three two-factor interactions and a three-factor interaction to evaluate. Usually, interactions involving three or more factors are not important and can be disregarded.

As in a two-factor experiment, the average effect of each factor can be calculated by subtracting the average response at the low level from the average response at the high level. Using the data from Figure 3, the effect of thrombin would be

Table IV lists all of the effects in the blood coagulation experiment.

RANDOMIZATION AND BLOCKING

It is well recognized that the planning activities that precede the actual test runs are critical to the successful resolution of the experimenter's problem. In planning an experiment, it is necessary to limit any bias that may be introduced by the experimental units or experimental conditions. Strategies such as randomization and blocking can be used to minimize the effect of nuisance or noise elements.

Consider what would happen in the application example if the evaluation of co-agulation rate were sensitive to ambient temperature and the temperature rose during the experiment. If the test runs were performed in the order in Table III--all of the low-pH combinations followed by all of the high-pH ones--the effect of the temperature change would be assigned to pH, thereby confusing an unknown trend with a design factor. By randomizing the order in which the test combinations are run, researchers can eliminate the effects of unknown trending variables on the results of the experiment.

Blocking can be used to prevent experimental results being influenced by variations from batch to batch, machine to machine, day to day, or shift to shift. In the eight-run, three-factor study, for example, let's assume there was only enough thrombin enzyme in a batch for four mixes. Let's also assume any batch-to-batch difference could affect the conclusions. Then the two batches could be assigned to the two coded levels (­1 or +1) of the three-factor interaction, which is shown as ABC in the design illustrated in Table III. This strategy is called blocking a factor on ABC. (ABC can be used as the blocking factor because the three-factor interaction is regarded as unimportant.) Because the study design is balanced, each batch of thrombin will be used the same number of times for each level of each factor. Thus, its influence is averaged out and is removed from the analysis.

In Figure 4, the two levels of the blocking factor, ABC, are shown as circles and squares. When 20 was added to each circle (the low level of ABC) and the effects of each variable recalculated, the results did not differ from those shown in Table IV. The effect of thrombin, for example, became

Since only the ABC effect changed by the magnitude of the difference between batches, the batch-to-batch difference had been successfully removed from the experiment by its inclusion in the experimental setup. Without this blocking, the determination of the variables' effects would have been less precise, or missed altogether.

FRACTIONAL FACTORIAL DESIGNS

One disadvantage of two-level factorial designs is that the size of the study increases by a factor of two for each additional factor. For example, with eight factors, 256 runs would theoretically be necessary. Fortunately, because three-factor and higher-order interactions are rarely important, such intensive efforts are seldom required. For most purposes, it is only necessary to evaluate the main effects of each variable and the two-factor interactions, which can be done with only a fraction of the runs in a full factorial design. Such designs are called Resolution V designs. If there are some two-factor interactions that are known to be impossible, one can further reduce the number of runs by using Resolution IV designs. Table V compares the number of runs in full and fractional factorial designs with from two to eight variables. For the earlier example of eight factors, one can create an efficient design that may require as little as 16 runs.

RESPONSE SURFACE DESIGNS

Another disadvantage of two-level designs is that the experimental runs cannot detect if there are curvilinear effects in the region of optimum settings. To check on this possibility, every factorial design should include a center point at the zero (0) level of all the factors. If curvature is present, the response at this point will be much larger or smaller than the response expected from the linear model. Figure 5, for example, shows a response that has a maximum in between the two-level factorial design.

If curvature is present, the factorial design can be expanded to allow estimation of the response surface. One way to do this is to add experimental points. The central composite design shown in Figure 6 uses the factorial design as the base and adds what are known as star points. Special methods are available to calculate these star points, which provide desirable statistical properties to the study results. The result of such an expanded design is usually a contour plot of the response surface or a surface plot, such as Figure 7, which clearly shows a maximum.

Carefully planned, statistically designed experiments offer clear advantages over traditional one-factor-at-a-time alternatives. These techniques are particularly useful tools for process validation, where the effects of various factors on the process must be determined. Not only is the DOE concept easily understood, the factorial experiment designs are easy to construct, efficient, and capable of determining interaction effects. Results are easy to interpret and lead to statistically justified conclusions. The designs can be configured to block out extraneous factors or expanded to cover response surface plotting.

Those implementing a DOE strategy will find that computer software is an essential tool for developing and running factorial experiments. The Design-Expert program was used to create the response surface in Figure 7, for example. Other user-friendly DOE software includes BBN, CADE, Design-Ease, JMP, Statistica, and Statgraphics. Finally, for those who wish to learn more about DOE, a bibliography has been included here.

BIBLIOGRAPHY

Box GEP, and Draper N, Empirical Model Building and Response Surfaces, New York, Wiley, 1987.

Box GEP, Hunter W, and Hunter JS, Statistics for Experimenters, New York, Wiley, 1978.

Montgomery DC, Design and Analysis of Experiments, 3rd ed, New York, Wiley, 1990.

Ross PJ, Taguchi Techniques for Quality Engineering, New York, McGraw-Hill, 1988.

John S. Kim and James W. Kalb are the director, corporate statistical resources, and the senior applications scientist, respectively, at Medtronic, Inc. (Minneapolis). Kim is also a member of the MD&DI editorial advisory board.

Originally published March, 1996

You May Also Like

Editors' Choice

examples of design an experiment

Sovato Touts Successful Remote RAS Procedures Over 500 Mile Distance

Insulet Reports Positive Type 2 Diabetes Study Results

Medical Device Manufacturers Seek More Sustainability, Supplier ‘Maturity’

Dermtech Files Chapter 11 Bankruptcy, Cuts 15 Employees

examples of design an experiment

  • Certified ScrumMaster (CSM) Certification
  • Certified Scrum Product Owner (CSPO) Certification
  • Leading SAFe 6.0 Certification
  • Professional Scrum Master-Advanced™ (PSM-A) Training
  • SAFe 6.0 Scrum Master (SSM) Certification
  • Implementing SAFe 6.0 (SPC) Certification
  • SAFe 6.0 Release Train Engineer (RTE) Certification
  • SAFe 6.0 Product Owner Product Manager (POPM) Certification
  • ICP-ACC Certification
  • Agile Master's Program
  • Agile Excellence Master's Program
  • Kanban Management Professional (KMP I: Kanban System Design) Certification
  • Professional Scrum Product Owner I (PSPO I) Training
  • View All Courses

Accreditation Bodies

  • Project Management Professional (PMP) Certification
  • PRINCE2 Certification
  • PRINCE2 Foundation Certification
  • PRINCE2 Practitioner Certification
  • Change Management Training
  • Project Management Techniques Training
  • Certified Associate in Project Management (CAPM) Certification
  • Program Management Professional (PgMP) Certification
  • Portfolio Management Professional (PfMP) Certification
  • Oracle Primavera P6 Certification
  • Project Management Master's Program
  • Microsoft Project Training
  • Data Science Bootcamp
  • Data Engineer Bootcamp
  • Data Analyst Bootcamp
  • AI Engineer Bootcamp
  • Data Science with Python Certification
  • Python for Data Science
  • Machine Learning with Python
  • Data Science with R
  • Machine Learning with R
  • Deep Learning Certification Training
  • Natural Language Processing (NLP)

Enhance your career prospects with our Data Science Training

Embark on a Data Science career with our Data Analyst Bootcamp

Elevate your Data Science career with our AI Engineer Bootcamp

  • DevOps Foundation Certification
  • Docker with Kubernetes Training
  • Certified Kubernetes Administrator (CKA) Certification
  • Kubernetes Training
  • Docker Training
  • DevOps Training
  • DevOps Leader Training
  • Jenkins Training
  • Openstack Training
  • Ansible Certification
  • Chef Training
  • AWS Certified Solutions Architect - Associate
  • Multi-Cloud Engineer Bootcamp
  • AWS Cloud Practitioner Certification
  • Developing on AWS
  • AWS DevOps Certification
  • Azure Solution Architect Certification
  • Azure Fundamentals Certification
  • Azure Administrator Certification
  • Azure Data Engineer Certification
  • Azure Devops Certification
  • AWS Cloud Architect Master's Program
  • AWS Certified SysOps Administrator Certification
  • Azure Security Engineer Certification
  • Azure AI Solution Certification Training

Supercharge your career with our Multi-Cloud Engineer Bootcamp

  • Full-Stack Developer Bootcamp
  • UI/UX Design Bootcamp
  • Full-Stack [Java Stack] Bootcamp
  • Software Engineer Bootcamp
  • Software Engineer Bootcamp (with PMI)
  • Front-End Development Bootcamp
  • Back-End Development Bootcamp
  • React Training
  • Node JS Training
  • Angular Training (Version 12)
  • Javascript Training
  • PHP and MySQL Training

Work on real-world projects, build practical developer skills

Hands-on, work experience-based learning

Start building in-demand tech skills

  • ITIL 4 Foundation Certification
  • ITIL Practitioner Certification
  • ISO 14001 Foundation Certification
  • ISO 20000 Certification
  • ISO 27000 Foundation Certification
  • ITIL 4 Specialist: Create, Deliver and Support Training
  • ITIL 4 Specialist: Drive Stakeholder Value Training
  • ITIL 4 Strategist Direct, Plan and Improve Training
  • FAAANG/MAANG Interview Preparation
  • Python Certification Training
  • Advanced Python Course
  • R Programming Language Certification
  • Advanced R Course
  • Java Training
  • Java Deep Dive
  • Scala Training
  • Advanced Scala
  • C# Training
  • Microsoft .Net Framework Training
  • Tableau Certification
  • Data Visualisation with Tableau Certification
  • Microsoft Power BI Certification
  • TIBCO Spotfire Training
  • Data Visualisation with Qlikview Certification
  • Sisense BI Certification
  • Blockchain Professional Certification
  • Blockchain Solutions Architect Certification
  • Blockchain Security Engineer Certification
  • Blockchain Quality Engineer Certification
  • Blockchain 101 Certification
  • Hadoop Administration Course
  • Big Data and Hadoop Course
  • Big Data Analytics Course
  • Apache Spark and Scala Training
  • Apache Storm Training
  • Apache Kafka Training
  • Comprehensive Pig Training
  • Comprehensive Hive Training
  • Android Development Course
  • IOS Development Course
  • React Native Course
  • Ionic Training
  • Xamarin Studio Training
  • Xamarin Certification
  • OpenGL Training
  • NativeScript for Mobile App Development
  • Selenium Certification Training
  • ISTQB Foundation Certification
  • ISTQB Advanced Level Security Tester Training
  • ISTQB Advanced Level Test Manager Certification
  • ISTQB Advanced Level Test Analyst Certification
  • ISTQB Advanced Level Technical Test Analyst Certification
  • Silk Test Workbench Training
  • Automation Testing using TestComplete Training
  • Cucumber Training
  • Functional Testing Using Ranorex Training
  • Teradata Certification Training
  • Certified Business Analysis Professional (CBAP®)
  • Entry Certificate in Business Analysis™ (ECBA™)
  • Certification of Capability in Business Analysis™ (CCBA®)
  • Business Case Writing Course
  • Professional in Business Analysis (PMI-PBA) Certification
  • Agile Business Analysis Certification
  • Six Sigma Green Belt Certification
  • Six Sigma Black Belt Certification
  • Six Sigma Yellow Belt Certification
  • CMMIV1.3 Training
  • Cyber Security Bootcamp
  • Certified Ethical Hacker (CEH v12) Certification
  • Certified Information Systems Auditor (CISA) Certification
  • Certified Information Security Manager (CISM) Certification
  • Certified Information Systems Security Professional (CISSP) Certification
  • Cybersecurity Master's Program
  • Certified Cloud Security Professional (CCSP) Certification
  • Certified Information Privacy Professional - Europe (CIPP-E) Certification
  • Control Objectives for Information and Related Technology (COBIT5) Foundation
  • Payment Card Industry Security Standards (PCI-DSS) Certification
  • Introduction to Forensic
  • Digital Marketing Course
  • PPC Training
  • Web Analytics Course
  • Social Media Marketing Course
  • Content Marketing Course
  • E-Mail Marketing Course
  • Display Advertizing Course
  • Conversion Optimization Course
  • Mobile Marketing Course
  • Introduction to the European Union General Data Protection Regulation
  • Financial Risk Management (FRM) Level 1 Certification
  • Financial Risk Management (FRM) Level 2 Certification
  • Risk Management and Internal Controls Training
  • Data Protection-Associate
  • Credit Risk Management
  • Budget Analysis and Forecasting
  • International Financial Reporting Standards (IFRS) for SMEs
  • Diploma In International Financial Reporting
  • Certificate in International Financial Reporting
  • Corporate Governance
  • Finance for Non-Finance Managers
  • Financial Modeling with Excel
  • Auditing and Assurance
  • MySQL Course
  • Redis Certification
  • MongoDB Developer Course
  • Postgresql Training
  • Neo4j Certification
  • Mariadb Course
  • Hbase Training
  • MongoDB Administrator Course
  • Conflict Management Training
  • Communication Course
  • International Certificate In Advanced Leadership Skills
  • Soft Skills Training
  • Soft Skills for Corporate Career Growth
  • Soft Skills Leadership Training
  • Building Team Trust Workshop
  • CompTIA A+ Certification
  • CompTIA Cloud Essentials Certification
  • CompTIA Cloud+ Certification
  • CompTIA Mobility+ Certification
  • CompTIA Network+ Certification
  • CompTIA Security+ Certification
  • CompTIA Server+ Certification
  • CompTIA Project+ Certification
  • Master of Business Administration from Golden Gate University Training
  • MBA from Deakin Business School with Multiple Specializations
  • Master of Business Administration from Jindal Global Business School Training
  • Master of Business Administration from upGrad Training
  • MS Excel 2010
  • Advanced Excel 2013
  • Certified Supply Chain Professional
  • Software Estimation and Measurement Using IFPUG FPA
  • Software Size Estimation and Measurement using IFPUG FPA & SNAP
  • Leading and Delivering World Class Product Development Course
  • Product Management and Product Marketing for Telecoms IT and Software
  • Foundation Certificate in Marketing
  • Flow Measurement and Custody Transfer Training Course

Project Management Tutorial

By knowledgehut ., 1. what is project management, 2. activity-based costing, 3. agile project management, 4. basic management skills, 5. basic quality tools, 6. benchmarking process, 7. cause and effect diagram, 8. change management process, 9. communication management, 10. communication blocker, 11. communication methods, 12. communication channels, 13. communication model, 14. conflict management, 15. critical path method (cpm), 16. critical chain method, 17. crisis management, 18. decision making process, 19. design of experiment, 20. effective communication skills, 21. effective presentation skills, 22. enterprise resource planning, 23. event chain methodology, 24. extreme project management, 25. gantt chart tool, 26. just-in-time (jit) manufacturing, 27. knowledge management, 28. leads, lags & float, 29. management best practices, 30. management styles, 31. management by objective (mbo), 32. monte carlo analysis, 33. motivation theories, 34. negotiation skills, 35. organization structures, 36. pert estimation technique, 37. prince2 project management methodology, 38. pareto chart tool, 39. powerful leadership skills, 40. process-based management, 41. procurement documents, 42. procurement management, 43. project activity diagram, 44. project charter, 45. project contract types, 46. project cost control, 47. project kick-off meeting, 48. project lessons learnt, 49. project management methodologies, 50. project management office, 51. project management processes, 52. project management tools, 53. project management triangle, 54. project manager goals, 55. project portfolio management, 56. project quality plan, 57. project records management, 58. project risk categories, 59. project risk management, 60. project scope definition, 61. project selection methods, 62. project success criteria, 63. project time management, 64. project management software, 65. project workforce management, 66. quality assurance and quality control, 67. raci chart tool, 68. rewards and recognition, 69. requirements collection, 70. resource levelling, 71. staffing management plan, 72. stakeholder management, 73. statement of work (sow), 74. stress management techniques, 75. structured brainstorming, 76. succession planning, 77. supply chain management, 78. team building program, 79. team motivation, 80. the balanced score card, 81. the halo effect, 82. the make or buy decision, 83. the rule of seven, 84. the virtual team, 85. total productive maintenance, 86. total quality management, 87. traditional project management, 88. work breakdown structure, design of experiment.

Design of Experiments (DOE) is also referred to as Designed Experiments or Experimental Design – are defined as the systematic procedure carried out under controlled conditions in order to discover an unknown effect, to test or establish a hypothesis, or to illustrate a known effect. It involves determining the relationship between input factors affecting a process and the output of that process. It helps to manage process inputs in order to optimize the output.

A simple example of DOE:

While doing interior design of a new house, the final effect of interior design will depend on various factors such as colour of walls, lights, floors, placements of various objects in the house, sizes and shapes of the objects and many more. Each of these factors will have an impact on the final outcome of interior decoration. While variation in each factor alone can impact, a variation in a combination of these factors at the same time also will impact the final outcome. 

Hence it needs to be studied how each of these factors impact the final outcome, which are the critical factors impacting the most, which are the most important combination of these factors impacting the final outcome significantly.

The interior designer can plan and conduct some experiments. Get to know more about  DOE with our PMP online training .

Basics of DOE

The method was coined by Sir Ronald A. Fisher in the 1920s and 1930s. Design of Experiment is a powerful data collection and analysis tool that can be used in a variety of experimental situations.

 It allows manipulating multiple input factors and determining their effect on a desired output (response). By changing multiple inputs at the same time, DOE helps to identify important interactions that may be missed when experimenting with only one factor at a time. We can investigate all possible combinations (full factorial) or only a portion of the possible combinations (fractional factorial).

A well planned and executed experiment may provide a great deal of information about the effect on a response variable due to one or more factors. Many experiments involve holding certain factors constant and altering the levels of another variable. This "one factor at a time" (OFAT) approach to process knowledge is, however, inefficient when compared with changing multiple factor levels simultaneously.

A well-performed experiment may provide answers to the following such as:

  • What are the key factors in a process? (both controllable and uncontrollable)
  • At what settings would the process deliver acceptable performance?
  • What are the key, main and interaction effects in the process?
  • What settings would bring about less variation in the output?

A repetitive approach to gaining knowledge should be taken up, typically involving these consecutive steps:

  • A screening design that narrows the field of variables under assessment.
  • A “full factorial” design that studies the response of every combination of factors and factor levels, and an attempt to zero in on a region of values where the process is close to optimization.

A basic approach to a Design of Experiment

We need to follow the below steps in sequence for conducting a DOE.

  • Define the problem(s)
  • Determine objective(s)
  • Design experiments 
  • Conduct experiments and collect data
  • Analyse data
  • Interpret results
  • Verify predicted results

DOE has been in use for many years in manufacturing industry. Below are some of the benefits/improvements we can expect from conducting DOEs:

  • reduce time to design/develop new products & processes
  • improve performance of existing processes
  • improve reliability and performance of products
  • achieve product & process robustness
  • evaluation of materials, design alternatives, setting component & system tolerances, etc.

Leave a Reply

Your email address will not be published. Required fields are marked *

A valuable piece of knowledge. Thank you!

Eniola Samson

This blog is appreciated, thanks.

I like the article. Thank you very much.

Thank you for the information.

The content of the motivation theories are well explained and its has been of great help to me . Thank you for making it that easy for easy understanding.

Suggested Tutorials

PRINCE2 Tutorial [Video]

USEFUL LINKS

  • PMP Course in Chennai
  • PMP Online Training in Kolkata
  • PMP Course in Montreal
  • PMP Course Online in Amsterdam

Get a 1:1 Mentorship call with our Career Advisor

Your Message (Optional)

Subscribe to our newsletter..

  • Login Forgot Password

Don't Miss Out on Exclusive Discounts on Courses!

Future-proof your career with the latest in-demand courses.

  • Get Job-Ready Digital Skills
  • Experience Outcome-Based Immersive Learning
  • Get Trained by a Stellar Pool of Industry Experts
  • Best-In-Class Industry-Vetted Curriculum

By tapping submit, you agree to KnowledgeHut Privacy Policy and Terms & Conditions

Examples

Qualitative Research Design

Ai generator.

examples of design an experiment

Qualitative Research Design is a method focused on understanding and interpreting the experiences of individuals or groups. Unlike quantitative research , which quantifies data and identifies patterns through statistical analysis, Qualitative Research Design explores phenomena in depth using interviews, focus groups, and observations. This approach gathers rich narratives that provide insights into thoughts, feelings, and behaviors, uncovering underlying reasons and motivations. Essential in fields like social sciences, education, and health, a strong Qualitative Research Proposal or Qualitative Research Plan must carefully consider the Research Design and relevant Research Terms for a comprehensive approach.

What is Qualitative Research Design?

Qualitative Research Design is a method that aims to understand and interpret the meaning and experiences of individuals or groups. It employs in-depth techniques like interviews, focus groups, and observations to gather detailed, rich narratives. Unlike quantitative research, which uses statistical analysis to identify patterns, qualitative research seeks to uncover the underlying reasons and motivations behind thoughts, feelings, and behaviors.

Types of Qualitative Research Design

Types of Qualitative Research Design

1. Ethnography

Ethnography involves the detailed study of cultures or social groups through direct observation and participation. Researchers immerse themselves in the group’s daily life to understand their customs, behaviors, and social interactions. This method is often used to study communities, workplaces, or organizations. Example : Observing and interviewing members of a remote community to understand their social practices and traditions.

2. Grounded Theory

Grounded theory aims to generate a theory grounded in the data collected from participants. Researchers gather data through interviews, observations, and other methods, then use coding techniques to develop a theory. This approach is useful for studying processes, actions, and interactions, such as developing a theory on how people cope with job loss. Example : Analyzing interviews with employees to develop a theory about workplace motivation.

3. Focus Groups

Focus groups involve guided discussions with a small group of participants to explore their perceptions, opinions, and attitudes towards a particular topic. This method allows researchers to gather a wide range of insights and observe group dynamics. Focus groups are commonly used in market research, social science studies, and product development. Example : Conducting focus groups with parents to understand their views on remote learning during the COVID-19 pandemic.

4. Interviews

Interviews are one-on-one conversations between the researcher and the participant, designed to gather in-depth information on the participant’s experiences, thoughts, and feelings. Interviews can be structured, semi-structured, or unstructured, allowing flexibility in exploring the research topic. This method is widely used across various qualitative research studies. Example : Conducting semi-structured interviews with veterans to explore their reintegration experiences into civilian life.

5. Narrative Research

Narrative research focuses on the stories and personal accounts of individuals. Researchers collect narratives through interviews, journals, letters, or autobiographies and analyze them to understand how people make sense of their experiences. This type of research might explore life stories, personal journeys, or historical accounts. Example : Collecting and analyzing life stories of refugees to understand their migration experiences.

6. Action Research

Action research is a participatory approach that involves researchers and participants working together to address a problem or improve a situation. This method focuses on practical solutions and often includes cycles of planning, action, observation, and reflection. It is commonly used in educational settings to improve teaching practices, school policies, or community development projects. Example : Teachers working together to implement and assess a new curriculum in their school.

Qualitative Research Design Methods

MethodData CollectionFocusExample
Case StudyInterviews, documentsSingle case analysisImpact of teaching method
EthnographyParticipant observationCultural understandingTribal community practices
Grounded TheoryInterviews, observationsTheory developmentCoping with chronic illness
PhenomenologyIn-depth interviewsLived experiencesParental grief
Narrative ResearchLife stories, interviewsPersonal narrativesRefugee resettlement stories
Focus GroupsGroup discussionsGroup perspectivesTeenagers’ views on social media
Content AnalysisText, media analysisPatterns and themesMedia portrayal of mental health

Interviews are one-on-one conversations designed to gather in-depth information about a participant’s experiences, thoughts, and feelings. They can be structured, semi-structured, or unstructured, allowing flexibility in exploring topics. Example : Semi-structured interviews with veterans to explore their reintegration experiences into civilian life.

Focus Groups

Focus groups involve guided discussions with small groups to explore their perceptions, opinions, and attitudes on a topic. This method gathers diverse insights and observes group dynamics. Example : Focus groups with parents to understand their views on remote learning during the COVID-19 pandemic.

Observational Studies

Observational studies involve systematically watching and recording behaviors and interactions in natural settings without interference. Example : Observing children in a playground to study social development and peer relationships.

Discussion Boards

Discussion boards are online forums where participants post responses and engage in discussions. This method collects data from participants in different locations over time. Example : Analyzing posts on a discussion board for chronic illness patients to understand their coping strategies and support systems.

Difference between Qualitative Research vs. Quantitative Research

AspectQualitative ResearchQuantitative Research
Explores phenomena through non-numerical data, focusing on understanding meanings, experiences, and concepts.Investigates phenomena through numerical data, focusing on measuring and quantifying variables.
Interviews, focus groups, observations, document analysis.Surveys, experiments, questionnaires, existing statistical data.
Non-numerical, descriptive data (words, images, objects).Numerical data (numbers, statistics).
Thematic analysis, content analysis, narrative analysis.Statistical analysis, mathematical modeling.
Gain in-depth insights and understand complexities of human behavior and social phenomena.Test hypotheses, measure variables, and determine relationships or effects.
Studying cultural practices, exploring personal experiences, understanding social interactions.Examining the effectiveness of a new drug, analyzing survey results, studying demographic trends.
– Provides detailed and rich data.
– Captures participants’ perspectives and context.
– Flexible and adaptive to new findings.
– Allows for hypothesis testing.
– Results can be generalized to larger populations.
– Can establish patterns and predict outcomes.

Characteristics of Qualitative Research Design

  • Naturalistic Inquiry: Conducted in natural settings where participants experience the issue or phenomenon under study.
  • Contextual Understanding: Emphasizes understanding the cultural, social, and historical contexts of participants.
  • Participant Perspectives: Prioritizes the views, feelings, and interpretations of participants.
  • Flexibility and Adaptiveness: Designs are flexible and can be adjusted as new insights emerge.
  • Rich, Descriptive Data: Collects detailed data in words, images, and objects for comprehensive understanding.
  • Inductive Approach: Develops theories and patterns from the data collected rather than testing predefined theories.
  • Emergent Design: Research design evolves during the study based on emerging themes and insights.
  • Multiple Data Sources: Uses various data sources like interviews, focus groups, observations, and document analysis.
  • Subjectivity and Reflexivity: Researchers acknowledge their influence on the research process and examine their biases and assumptions.
  • Holistic Perspective: Considers the entire phenomenon and its complexity, looking at interrelated components.
  • Iterative Process: Data collection and analysis occur simultaneously in an iterative manner.
  • Ethical Considerations: Ensures informed consent, confidentiality, and sensitivity to participants’ needs and well-being.
  • Detailed Reporting: Results are reported in a detailed narrative style, often using direct quotes from participants.

How to Find Qualitative Research Design

1. identify the research problem.

Define the specific problem or phenomenon you want to study. For example, you might explore the experiences of first-generation college students.

2. Conduct a Literature Review

Review existing research to understand what has been studied and identify gaps. This helps to build a foundation for your research.

3. Formulate Research Questions

Create open-ended questions to guide your study. Example: “What challenges do first-generation college students face?”

4. Choose a Qualitative Research Approach

Select a methodology that fits your research question, such as phenomenology, grounded theory, ethnography, case study, or narrative research.

5. Select the Research Setting

Decide where you will conduct your study, such as a university campus or online forums relevant to your topic.

6. Identify and Recruit Participants

Determine criteria for participant selection and recruit individuals who meet these criteria, such as first-generation college students.

7. Choose Data Collection Methods

Select methods like interviews, focus groups, observations, or document analysis to gather rich data.

8. Collect and Analyze Data

Gather your data and analyze it by identifying patterns and themes. Use coding and software tools if necessary.

9. Validate Findings

Ensure the credibility of your research through techniques like triangulation, member checking, and peer debriefing.

FAQ’s

How does qualitative research differ from quantitative research.

Qualitative research focuses on understanding meaning and experiences, while quantitative research measures variables and uses statistical analysis to test hypotheses.

What is the purpose of qualitative research?

The purpose is to gain in-depth insights into people’s behaviors, motivations, and social interactions to understand complex phenomena.

What methods are commonly used in qualitative research?

Common methods include interviews, focus groups, participant observation, and content analysis of texts and media.

What is a case study in qualitative research?

A case study is an in-depth exploration of a single case or multiple cases within a real-life context to uncover detailed insights.

What is narrative research in qualitative research?

Narrative research explores the stories and personal accounts of individuals to understand how they make sense of their experiences.

How is data analyzed in qualitative research?

Data analysis involves coding and categorizing data to identify patterns, themes, and meanings, often using software like NVivo or manual methods.

What is the role of the researcher in qualitative research?

The researcher acts as a primary instrument for data collection and analysis, often engaging closely with participants and their contexts.

What are the strengths of qualitative research?

Strengths include rich, detailed data, the ability to explore complex issues, and flexibility in data collection and analysis.

What are the limitations of qualitative research?

Limitations include potential researcher bias, time-consuming data collection, and challenges in generalizing findings to larger populations.

How is validity ensured in qualitative research?

Validity is ensured through strategies like triangulation, member checking, prolonged engagement, and reflexivity to enhance credibility and trustworthiness.

Twitter

Text prompt

  • Instructive
  • Professional

10 Examples of Public speaking

20 Examples of Gas lighting

More From Forbes

Top 3 tips to encourage creative thinking within your team.

  • Share to Facebook
  • Share to Twitter
  • Share to Linkedin

Top 3 Tips to Encourage Creative Thinking Within Your Team

Some of us are lucky not to have spent the majority of our careers under fluorescent lighting that’s either too harsh or too dim, sitting at nondescript beige desks arranged in grid patterns, industrial gray carpeting on our feet, and those awful cubicles that make even the widest office spaces look and feel cramped.

In place of these drab settings, many offices now enjoy Google-esque spaces with open floor designs, vibrant colors, and flexible workstations. Many have even ditched the office altogether to work wherever an internet connection is present.

All these developments contribute to a workplace atmosphere where creativity can flourish. When people feel good about where they work, they're more likely to think outside the box, collaborate effectively, and come up with innovative solutions.

However, while the environment plays a huge part, investing in modern office furniture and design only goes so far. If you really want your team to be more creative, here are three tips to help you achieve that.

Give Your Team Some Freedom

As any successful business owner will tell you, scalability is key. You want your business process to be streamlined and efficient so that it’s easier for you to take your operations to the next level. That being said, there’s also value in giving your team some flexibility when it comes to getting things done.

China Delivers Another Economic Blow To Russia

‘the acolyte’ rotten tomatoes score keeps falling, and maybe it should if we ever hope to find balance in the force, dr. disrespect issues shocking statement, finally revealing why he was banned from twitch.

Instead of rigid processes, try experimenting with a results-oriented approach. Be as specific as you can when defining your project goals, but give your team some leeway in how they approach their tasks.

While you’re at it, you can encourage creativity by letting members of different teams brainstorm ideas together at the very beginning instead of having them work in silos. This way, collaboration and diverse perspectives can emerge right from the start of the project, and your team is not limited by rigid protocol and departmental boundaries.

Keep the Workload at 80/20

You may want to look into Google’s 80/20 rule . Under this policy, employees are encouraged to spend only 80% of their time on core duties, while 20% can be dedicated to projects of personal interest. Gmail is a famous product of this policy, having started as a personal project by Google Engineer Paul Buchheit.

Companies have implemented similar projects to encourage their employees’ creativity. For example, Apple’s Blue Sky program allows select employees to spend a few weeks of their time working on personal passion projects. Professional social media platform LinkedIn also has a similar project called " InCubator ," where employees can spend up to three months working on personal projects.

Obviously, you don’t have to implement the 80/20 rule strictly. There will be days when work demands more focus on regular responsibilities, and that’s perfectly normal. After all, you’re still running a business and don’t want to distract people from their work.

What matters most is fostering a culture where creativity and innovation are valued and encouraged, whether through formal policies like the 80/20 rule or similar programs.

Make Creativity a Part Of Your Culture

I like to think of culture as a shared but often unspoken set of behaviors that people follow as members of an organization. While many companies like to codify their culture in hopes of reinforcing desired behaviors, it’s really the everyday actions and shared experiences that truly shape and sustain it.

So, if you really want to institute creativity as a part of your culture, you want to reward creative behaviors daily. For example, make it a point to publicly recognize and reward creative achievements and innovative solutions. Highlight these successes in team meetings, newsletters, or company-wide announcements to show that creativity is valued and appreciated.

You want to serve as an example of what it means to be creative on a leadership level. When leaders take risks, experiment with new ideas, and show openness to unconventional approaches, they set the tone for the rest of the organization.

As a leader, you should also be approachable and open to feedback. This helps create an environment where team members feel comfortable expressing their ideas, concerns, and suggestions without hesitation.

Ultimately, you want to be consistent in your efforts to make creativity a part of your culture. By consistently recognizing and rewarding creativity, creating a supportive atmosphere, and leading by example—you’re setting the stage for a steady stream of fresh, game-changing ideas.

It won’t happen overnight, but if you stick with it and keep pushing for creativity to be a big part of your team’s culture, you’ll set the stage for some real innovation to happen. Good luck!

Sho Dewan

  • Editorial Standards
  • Reprints & Permissions

Join The Conversation

One Community. Many Voices. Create a free account to share your thoughts. 

Forbes Community Guidelines

Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.

In order to do so, please follow the posting rules in our site's  Terms of Service.   We've summarized some of those key rules below. Simply put, keep it civil.

Your post will be rejected if we notice that it seems to contain:

  • False or intentionally out-of-context or misleading information
  • Insults, profanity, incoherent, obscene or inflammatory language or threats of any kind
  • Attacks on the identity of other commenters or the article's author
  • Content that otherwise violates our site's  terms.

User accounts will be blocked if we notice or believe that users are engaged in:

  • Continuous attempts to re-post comments that have been previously moderated/rejected
  • Racist, sexist, homophobic or other discriminatory comments
  • Attempts or tactics that put the site security at risk
  • Actions that otherwise violate our site's  terms.

So, how can you be a power user?

  • Stay on topic and share your insights
  • Feel free to be clear and thoughtful to get your point across
  • ‘Like’ or ‘Dislike’ to show your point of view.
  • Protect your community.
  • Use the report tool to alert us when someone breaks the rules.

Thanks for reading our community guidelines. Please read the full list of posting rules found in our site's  Terms of Service.

Game Developer Logo

Related Topics

  • Summer Game Fest 2024
  • Game Industry Layoffs
  • Generative AI
  • Investments & Aquisitions
  • Unionization
  • Cooking Game Deep Dives
  • Q&A's
  • Postmortems
  • Programming
  • Extended Reality

Recent in  More

examples of design an experiment

  • Browse Latest Blogs
  • Submit Your Blog Post
  • Frequently Asked Questions
  • Blogging Rules and Guidelines
  • Game Artist
  • Game Animation
  • Video Game Designer
  • Game Programmer
  • Gameplay Engineer

Game Developer Logo

Featured Blog | This community-written post highlights the best of what the game industry has to offer. Read more like it on the Game Developer Blogs or learn how to Submit Your Own Blog Post

Leveraging Emotional Design Features to Enhance Character Perception in Pixel Art

This article explores how emotional design features - colours and shapes - influence player perception of pixel art characters. By understanding these design elements, game developers can create characters that elicit specific emotional responses, improving player engagement and retention. The article includes practical tips on applying these findings to character design in pixel art.

Picture of Timur Shakirov

June 20, 2024

examples of design an experiment

Introduction

The relationship between character design and player perception is crucial for creating engaging gaming experiences. While previous research has focused on the impact of character appearance on player perceptions, there is limited knowledge about how emotional design features specifically affect pixel art characters. The research described in this article aims to bridge that gap, providing insights that can help developers create emotionally resonant characters using cost-efficient pixel art.

The Value of Emotional Design Features

Character appearance significantly shapes player perceptions and emotional responses. Even the slightest alterations in a character's design can substantially change how players perceive and connect with them. Players often deeply empathise with non-playable characters in narrative-driven games like the Mass Effect series. This connection enhances the gaming experience and encourages player retention.

1.png

Colour and shape psychology is widely used in narrative-driven games, such as Mass Effect.

Character appearance is formed by design choices, also called emotional design features. Colours and shapes are among the most basic emotional design features and among the few actually used in pixel art. This article aims to describe how and to what extent emotional design features impact character perception in pixel art.

Colour and Shape Psychology

Colour and shape significantly influence players' emotional responses and perceptions in games. Warm colours like red and yellow evoke danger, excitement and energy, while cool colours like blue and green promote trust and calmness. Neutral colours, such as black and white, convey power and purity, respectively. Shapes also play a role: circles and curves are seen as friendly and safe, squares as stable and reliable, and triangles as dynamic and aggressive. Game developers can use these psychological principles to enhance character development and storytelling, creating more engaging and immersive gaming experiences.

2.png

For the sake of simplicity, this experiment was limited to 4 colours—3 primary and 1 secondary—and 3 shapes. Each colour and shape has strong associations, common in design psychology research.

The Experiment Design Process

To see if this would apply to pixel art, we conducted an experiment among 48 participants, predominantly male (61%), with a balanced age distribution predominantly under 34 years old.

The participants were split into two groups—8 were observed in a controlled environment, while 60 others completed the experiment from their homes, providing insights into natural settings.

Participants were shown twelve characters individually, one by one, selecting associated traits from a predefined list. This was facilitated through a digital questionnaire available at https://mgtexp.com. A detailed experiment description is provided below to help understand its design and execution:

3.png

Hypotheses. A hypothesis was created for every character based on previous research – for example, the red triangular character would be perceived as “dangerous”, and the green round-shaped one as “friendly”.

Data collection. Participants were shown the characters individually. For each character, they selected associated traits from a predefined list, choosing 48 traits in total – 3 for every character. Moreover, participants responded to personal questions about age group, gender, and gaming experience.

Analysis. All responses were collected and subjected to statistical analyses to determine which colours and shapes were most strongly associated with each trait. The responses' correlation with personal data was also analysed. This analysis helped derive the conclusions discussed below.

Data Insights

Based on the data analysis, four different situations were determined:

19 hypotheses - the original hypothesis was confirmed, and the data was statistically significant at a significance level of 0.05;

6 hypotheses - the original hypothesis was rejected due to the wrong hypothetical value, but the data was still statistically significant at a significance level of 0.05, meaning that there was a correlation between colour and shape and the resulting answers;

6 hypotheses - the original hypothesis was rejected due to the statistical insignificance of the data at a significance level of 0.05, but the hypothetical value was still the most common answer;

5 hypotheses - the original hypothesis was rejected due to the inconsistency of the answers, and the data was statistically insignificant at a significance level of 0.05.

According to the analysis, the participants' responses could be predicted at least for 25 characters out of 36. Some of the analyses have shown inconsistent results, suggesting that colours have a stronger impact on character perception than shapes. It was found that colours and shapes influence each other, alternating character perception when working together. Moreover, there seems to be a gender and personal experience bias in character perception for secondary colours.

Key Findings

Primary Colours and Perception: Primary colours strongly influence character perception, consistently evoking specific emotional responses. For example, red signifies danger or excitement, while blue conveys calmness or sadness.

Shapes and Their Interplay with Colours: Shapes impact character perception to a lesser extent than colours. However, the combination of colours and shapes can enhance or mitigate their individual effects. For example, a red triangle might be perceived as more aggressive than a red circle.

Ambiguity of Secondary Colours: Secondary colours like purple tend to be more ambiguous and subject to personal interpretation, leading to less consistent emotional responses from players. It was also discovered that there could be a gender difference in how players perceive secondary colours.

Practical Application

To apply these findings, game developers should consider the following:

Controlled Use of Primary Colours: Leveraging primary colours can create characters with predictable emotional impacts. For instance, using red for antagonistic or dangerous characters and blue for trustworthy allies can help establish clear emotional cues.

Combining Colours and Shapes: Thoughtfully combining colours and shapes can enhance character design. For example, a friendly character might be designed using soft, round shapes and green colours. On the other hand, if the goal is to create a controversial character, mixing “positive” colours and “negative” shapes (and vice versa) can be considered.

Avoid Overuse of Secondary Colours: Given their ambiguous nature, secondary colours should be used sparingly and strategically to avoid inconsistent emotional responses. While purple is widely used as a “villain” colour in digital character design, the data suggests that this colour elicits mixed feelings if no other emotional design features specifically portray a character as a villain.

Testing and Feedback: Regularly test character designs with target audiences to gauge their emotional responses and adjust designs accordingly. While the study's initial hypotheses were based on popular colour psychology patterns, only half were confirmed.

While this research confirms that emotional design features significantly impact character perception in pixel art, its findings also suggest that players perceive secondary colours very subjectively. By understanding and applying these insights, game developers can create more engaging and emotionally resonant characters, improving player enjoyment and retention.

Read more about:

About the Author(s)

Timur Shakirov

Timur Shakirov

You May Also Like

Latest News

examples of design an experiment

Behind the GDC scenes with Beth Elderkin and Sam Warnke: Game Developer Podcast ep. 43

What to do about Game Engines with Rez Graham and Bryant Francis: Game Developer Podcast Ep. 42

Road to the IGF 2024 with Joel Couture: Game Developer Podcast ep. 41

Accessibility and fancy footwork with GLYDR's John Warren - Game Developer Podcast ep. 40

Cooking Games Spotlight: Deep Dives, Interviews, and More

examples of design an experiment

Featured Blogs

examples of design an experiment

Game Developer Essentials

examples of design an experiment

  • Forms and Surveys
  • Most Recent
  • Presentations
  • Infographics
  • Data Visualizations
  • Video & Animation
  • Case Studies
  • Design for Business
  • Digital Marketing
  • Design Inspiration
  • Visual Thinking
  • Product Updates
  • Visme Webinars
  • Artificial Intelligence

How to Execute Your Next Email Marketing Campaign: Guide, Best Practices & Examples

How to Execute Your Next Email Marketing Campaign: Guide, Best Practices & Examples

Written by: Mahnoor Sheikh

examples of design an experiment

As of 2023, there are over 4.3 billion email users worldwide. More importantly, an astounding 77% of marketers have seen an increase in email engagement in the last 12 months.

We know email works. But the challenge isn't about sending emails — it's about executing effective email marketing campaigns that drive results.

Unfortunately, most emails never get opened, read or acted upon. They either end up in spam or get thrown into trash by unimpressed recipients.

That’s why it’s crucial to follow up-to-date email marketing best practices, use tried-and-tested tools and learn from the best in the industry.

In this article, we’ll walk you through all things email marketing. You’ll learn how to create amazing emails, send them to the right audience, measure performance and more.

We’ve also included real-life examples and templates to inspire your own campaigns.

Before we dive in, here’s a short selection of 8 customizable email form templates you can easily edit and publish with Visme. View more templates below:

examples of design an experiment

Table of Contents

What is an email marketing campaign, how email marketing can help your business, how to execute an email marketing campaign, how to measure the success of an email marketing campaign, 5 email marketing campaign examples.

  • An email marketing campaign is an email (or multiple emails) sent to a group of recipients with a clear objective in mind. Email campaigns allow businesses to engage with their audiences in a personalized and direct manner.
  • Email marketing can help your business save time and money, reach a large audience, build deeper relationships with customers and track measurable results.
  • To execute a successful email campaign, you need to set clear goals, define the audience, craft a compelling message, design your email, test multiple versions to see which works best, and then send and monitor your email’s performance.
  • Measuring your campaign’s performance helps you understand what works and what doesn’t. Some key metrics to track include your email’s open rate, click-through rate, conversion rate, bounce rate and unsubscribe rate.
  • Design beautiful and interactive email forms that attract subscribers and valuable leads for your business.

An email marketing campaign is an email (or multiple emails) sent to a group of recipients with a clear objective in mind. Email campaigns allow businesses to engage with their audiences in a personalized, direct manner.

It's not just about broadcasting a message. It's about initiating a conversation, building relationships and driving results. And that’s why smart brands don’t just randomly send out emails — they plan and strategize.

It begins with defining a goal, like promoting a product or nurturing leads. This goal shapes the content, timing and target audience of your emails.

For example, let’s say you’re a software company planning an email marketing campaign to nurture new subscribers into customers.

Your first email might welcome subscribers and introduce them to your brand. The second might showcase your product’s unique features. Finally, a third might offer a free trial. Each email would help move subscribers towards the desired action — conversion.

Here's an example of a catchy email newsletter for your next email campaign.

Modern Newsletter

In case you’re not fully convinced about the power of email for your business, here’s why you need to start sending out email marketing campaigns right away.

Reach a Large Audience

Email provides a wider reach than most (if not all) marketing channels. Think about it— who do you know that doesn't use email?

From your teenage nephew to your grandparents, almost everyone has an email address. That's billions of potential customers at your fingertips.

The best part? Email is used on both desktop and mobile. This means customers can read your messages anywhere and anytime.

Build Deeper Relationships

Email marketing isn't just about delivering a message — it's about fostering meaningful relationships with your customers.

Regular, ongoing communication through email builds trust and keeps your brand on top of subscribers’ minds.

Additionally, it’s easy to target and personalize emails. You can tailor content to your customers’ unique needs and interests, showing that you understand and value them.

This high level of personalization is a key reason why email consistently offers high engagement rates.

Track Measurable Results

When it comes to marketing, knowing what works and what doesn't is half the battle.

Thankfully, there are plenty of email marketing tools out there to help you measure the performance of your campaigns.

Track who opened your emails, which links they clicked and even how much revenue each email generated. All of these insights help you fine-tune your strategy and craft even more successful email marketing campaigns over time.

Save Time and Money

Email marketing can be incredibly cost-effective, especially when compared to traditional advertising channels. In fact, the average ROI for email is $36 for every $1 spent !

Using automation software can also help you save tons of valuable time. You can easily set up email workflows, personalization tags and more.

This means you can deliver highly personalized campaigns without spending countless hours on manual work.

Made with Visme Infographic Maker

Now, let’s get to the meaty part: how to create and launch a successful email campaign. The steps below will take you from start to finish — goal-setting, understanding the audience, designing the email and sending it out to the world.

Step 1: Choose a Campaign Goal

Before you even start thinking about your email content or design, you need to define the objectives of your campaign.

Having clear, specific goals ensures your campaign has direction and purpose. It’s also easier to measure performance when you know what you set out to achieve in the first place.

Are you trying to drive website traffic, increase product sales, re-engage inactive customers or nurture leads? You might be looking to get more registrations for an upcoming event.

Whatever it is, your campaign goal will drive every subsequent decision you make, like defining a target audience, crafting a compelling message and even creating an automated workflow.

For example, let’s say you’re a fitness brand and your campaign goal is to increase the sales of your running shoes line.

Based on your goal, you might want to send your email campaign to a particular audience segment (e.g. those interested in running or shoes). You might also want to include a special offer or discount to motivate purchase.

You could also plan to send a series of three emails as part of your sales campaign — a product promotion email, a follow-up email with a discount code and a third email reminding subscribers to redeem the discount code before it expires.

Welcome Email Flowchart Infographic

Step 2: Define the Audience

The right message to the wrong audience will likely lead to poor engagement. Knowing who needs to hear your message helps in crafting personalized content that resonates with them.

But defining your audience is about more than just knowing their age or location — it's about understanding their needs, interests and behavior.

If you're a clothing retailer, for example, you might segment your audience by demographics (men, women), purchase behavior (frequent buyers, occasional shoppers) or product preferences (activewear, formal wear).

Email marketing tools can help you segment your audience based on these characteristics. Remember — the more relevant your message, the higher your chances of engagement!

But how do you collect this data in the first place? Here are some ideas:

  • Purchase and browse history
  • Customer surveys and feedback
  • Email opt-in forms with custom fields

Use Visme to design beautiful and interactive forms to collect contacts using premade templates and an intuitive editor. Go beyond names and emails and add custom fields to your forms like age, location, industry or favorite pet. Use this data to understand subscribers and send them targeted emails!

Email marketing campaigns- Email Sign up Forms

Step 3: Craft Your Message

Next, you want to create the meat of your email: the content. This includes the subject line, the body and the call-to-action(s).

And this where the magic happens.

Your message carries your campaign's value proposition — it's your opportunity to convince subscribers why they should engage with your brand or email.

Keep your email copy clear, concise and focused on benefits.

How can your brand or product improve your subscriber’s life? Why should they engage with your email or visit your site? What’s in it in for them?

Also, personalize the subject line and body copy as much as possible to resonate with each individual recipient.

For example, you could address contacts by their name or pull product info from their buying or browsing history. You can also leverage data like subscriber location, interests and age to send unique, tailored email campaigns (e.g. birthday discount.)

Don't forget to include a compelling call-to-action that aligns with your campaign goal.

Use actionable verbs to motivate readers to click. Here are some examples:

  • Start your free trial now!
  • Go to your cart
  • Grab your [product]!

Step 4: Design Your Email

No one wants to read plain text emails — it’s not the ‘90s.

Add borders, a nice, bold email header, images, icons, animations, GIFs and other graphic elements to package your carefully crafted message.

Good design does more than just make your brand look attractive. It also enhances your email’s impact, improves readability and draws attention to any CTAs.

Choose a clean, mobile-friendly template that aligns with your brand's aesthetic. Visme has a bunch of great-looking email templates you can easily customize and use:

examples of design an experiment

You can also tap into Visme’s built-in graphic assets to design your emails .

Browse millions of stock photos, videos, GIFs, illustrations, icons, shapes, borders and more from inside the editor — all free and editable!

Be careful not to go overboard — too many visuals can distract from your message.

Additionally, if you want to create a series of emails, you can do so quickly in Visme.

Save one email as a branded template and then reuse it as many times as you want. This helps you create multiple emails aligned with your brand and design theme.

Step 5: Test Your Email

Before you hit send, make sure to test your email.

This includes checking for any errors, broken links or display issues on different devices and email clients. But that’s just the basic stuff.

Go beyond that and run A/B tests with different subject lines, content, images, CTAs and sending times to see which version/s your audience responds to best.

For example, you might send an email with a humorous subject line to half of your audience and an email with a straightforward subject line to the other half. Comparing the open rates of these two emails can give you insights into what type of subject line your audience prefers.

Step 6: Send Email and Monitor Performance

Once you've dotted the i's and crossed the t's, it's time to send your email. Use your email marketing platform to send out your email at a time that's most likely to result in high open rates (this might require some experimentation.)

But your job doesn't end there. You also need to track your campaign’s performance and identify opportunities for improvement. This is where email analytics come into play.

Analytics data will guide your future campaigns, help you refine your strategy and continuously improve. You might even want to pivot current campaigns based on real-time insights.

We’ll talk more about measuring email campaign performance and the KPIs to track later on in this article.

 Measuring your campaign’s performance helps you understand what works and what doesn’t. It also gives you insight into how your audience interacts with your brand, helping you adjust your strategy and maximize the impact of future campaigns.

Most email marketing tools offer built-in analytics to help you track performance metrics. If you’re not sure where to start, here are some KPIs to keep an eye on:

  • Open Rate: This shows you the percentage of recipients who opened your email. For example, an open rate of 20% tells you one in five subscribers clicked on your email. If your open rate is lower than you’d like, consider spicing up your subject lines to make them more enticing.
  • Click-Through Rate (CTR): This tells you the percentage of recipients who clicked on one or more links within your email. If your goal was to drive traffic to a specific blog post, a high CTR shows you’ve done a good job of motivating readers to do so.
  • Conversion Rate: If open rate and CTR tell us about engagement, conversion rate tells us about the action. It's the percentage of recipients who clicked on a link within your email and completed a desired action like making a purchase or filling out a form.
  • Bounce Rate: This reflects the percentage of your emails that couldn't be delivered. A high bounce rate could indicate that you need to clean up your email list, removing non-exisistent, inactive or incorrect addresses.
  • Unsubscribe Rate: This is the percentage of recipients who decided they no longer wanted to receive your emails. A high unsubscribe rate could be a wake-up call to re-evaluate your content strategy and ensure you’re delivering value to your subscribers.

Measuring the success of an email marketing campaign is about more than just data collection. It's about turning numbers into narratives, patterns into action plans and insights into results.

Use this KPIs report template to visualize KPIs, track progress and bring your team (and management) up to speed.

examples of design an experiment

Want to learn from real brands that are doing it right? Here are some handpicked examples of awesome email marketing campaigns to get your creative juices flowing.

Bee email Marketing campaign

Image Source

Campaign Goal: Drive Blog Traffic

This email marketing campaign from BEE is a great example of how to get more eyeballs on your blog content. Their message is short and sweet: here are two articles to help you design high-converting landing pages. And their email puts those resources front and center.

Why it works

  • Creative header copy grabs attention and establishes the theme
  • Relevant and actionable CTAs under each post’s preview
  • Great balance of visuals and text throughout the email

Casper email Marketing campaign

Campaign Goal: Increase Sales

Creating a product bundle is a great way to increase average order value (AOV). Casper knows this and promotes their bundle with a beautiful, irresistible email campaign. The highlight of this email is the 25% discount offer — but there’s a lot more to love about this campaign.

  • Bold and compelling header copy communicates the value upfront
  • Descriptive, actionable CTA designed in contrasting colors is hard to miss
  • High-quality images help readers visualize each product in the bundle
  • Section for SMS opt-in helps collect phone numbers and reach contacts via text

GoDaddy email Marketing campaign

Campaign Goal: Re-engage Inactive Subscriber

GoDaddy keeps it super simple with a clean and crisp design, succinct email copy, bullet points and actionable CTAs. They don’t use any images in their email but they still establish a good balance with shapes and relevant icons aligned with their brand’s visual style.

  • Short and snappy copy speaks directly to the target audience (i.e. customers who’ve engaged with the brand before)
  • Primary CTAs at both the start and end of the email are hard to miss
  • Secondary CTAs (and offers) to convert subscribers with varying needs and goals

Asana email Marketing campaign

Campaign Goal: Drive Event Registrations

This email from Asana gets right down to business — no fluffy intros or explanations. This is exactly what their audience of busy executives needs. We also love that the design is minimalistic and on-brand, with lots of white space to improve readability.

  • Concise email puts the who, what, when and why of the webinar upfront
  • Highlights important event info (day, date, time & timezone) in bold
  • Short bullets draw attention to the key talking points of the webinar
  • Speaker’s headshot and brand logo helps put a face to the name

Supercharge Your Email Campaigns with Visme

There you have it — your roadmap to executing a powerful email marketing campaign.

We've walked you through setting clear goals, identifying our target audience, crafting compelling messages, designing engaging emails and tracking success with key metrics.

Now, it's time to rev up your email marketing engine with Visme Forms . Build eye-catching, interactive forms (no coding required!) to turn casual website visitors into subscribers.

Why? So you can start sending them your amazing email campaigns, of course!

If you’re interested in learning more about email marketing, watch our video on newsletter design tips or browse our email templates to visualize your messages.

Create successful email campaigns using Visme

examples of design an experiment

Trusted by leading brands

Capterra

Recommended content for you:

14 Top Qualtrics Alternatives & Competitors in 2024

Create Beautiful Forms That Convert.

Improve your data collection from emails, leads, to surveys and more, by using beautifully designed forms that convert up 2X better.

examples of design an experiment

About the Author

Mahnoor Sheikh is the content marketing manager at Visme. She has years of experience in content strategy and execution, SEO copywriting and graphic design. She is also the founder of MASH Content and is passionate about tea, kittens and traveling with her husband. Get in touch with her on LinkedIn .

examples of design an experiment

If you're seeing this message, it means we're having trouble loading external resources on our website.

If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.

To log in and use all the features of Khan Academy, please enable JavaScript in your browser.

AP®︎/College Statistics

Course: ap®︎/college statistics   >   unit 6.

  • Introduction to experiment design
  • The language of experiments
  • Principles of experiment design
  • Matched pairs experiment design

Experiment designs

  • Experiment design considerations

examples of design an experiment

  • (Choice A)   A stratified random design A A stratified random design
  • (Choice B)   A randomized block design where the 4 ‍   sections are the blocks B A randomized block design where the 4 ‍   sections are the blocks
  • (Choice C)   A completely randomized design C A completely randomized design
  • (Choice D)   A matched pairs design where the 2 ‍   forms are the pair D A matched pairs design where the 2 ‍   forms are the pair
  • (Choice E)   A randomized block design where the 2 ‍   forms are the blocks E A randomized block design where the 2 ‍   forms are the blocks

IMAGES

  1. Experimental Design Steps

    examples of design an experiment

  2. PPT

    examples of design an experiment

  3. Design of Experiment

    examples of design an experiment

  4. Experimental Study Design: Types, Methods, Advantages

    examples of design an experiment

  5. Design Of Experiment Study

    examples of design an experiment

  6. PPT

    examples of design an experiment

VIDEO

  1. Implications of sample Design

  2. Types of Definition and Your Not-Self Purpose

  3. Why Design Thinking Matters in Public Policy

  4. Experimental Design

  5. Setting Up a Multi-Brand Design System with Supernova

  6. Experimental Research Designs

COMMENTS

  1. Guide to Experimental Design

    Table of contents. Step 1: Define your variables. Step 2: Write your hypothesis. Step 3: Design your experimental treatments. Step 4: Assign your subjects to treatment groups. Step 5: Measure your dependent variable. Other interesting articles. Frequently asked questions about experiments.

  2. 19+ Experimental Design Examples (Methods + Types)

    1) True Experimental Design. In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

  3. Experimental Design: Types, Examples & Methods

    Three types of experimental designs are commonly used: 1. Independent Measures. Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.

  4. A Quick Guide to Experimental Design

    A good experimental design requires a strong understanding of the system you are studying. There are five key steps in designing an experiment: Consider your variables and how they are related. Write a specific, testable hypothesis. Design experimental treatments to manipulate your independent variable.

  5. Experimental Design

    Experimental Design. Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results. Experimental design typically includes ...

  6. Experimental Design: Definition and Types

    An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions. An experiment is a data collection ...

  7. Designing an Experiment: Step-by-step Guide

    Designing an experiment means planning exactly how you'll test your hypothesis to reach valid conclusions. This video will walk you through the decisions you...

  8. Experimental Design in Statistics (w/ 11 Examples!)

    00:44:23 - Design and experiment using complete randomized design or a block design (Examples #9-10) 00:56:09 - Identify the response and explanatory variables, experimental units, lurking variables, and design an experiment to test a new drug (Example #11) Practice Problems with Step-by-Step Solutions.

  9. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  10. Design of experiments

    The design of experiments ( DOE or DOX ), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions ...

  11. Introduction to experimental design (video)

    Introduction to experimental design. Scientific progress hinges on well-designed experiments. Most experiments start with a testable hypothesis. To avoid errors, researchers may randomly divide subjects into control and experimental groups. Both groups should receive a treatment, like a pill (real or placebo), to counteract the placebo effect.

  12. Experimental Design

    Experimental Design | Types, Definition & Examples. Published on June 9, 2024 by Julia Merkus, MA. An experimental design is a systematic plan for conducting an experiment that aims to test a hypothesis or answer a research question.. It involves manipulating one or more independent variables (IVs) and measuring their effect on one or more dependent variables (DVs) while controlling for other ...

  13. Introduction to experiment design (video)

    You use blocking to minimize the potential variables (also known as extraneous variables) from influencing your experimental result. Let's use the experiment example that Mr.Khan used in the video. To verify the effect of the pill, we need to make sure that the person's gender, health, or other personal traits don't affect the result.

  14. 15 Experimental Design Examples (2024)

    15 Experimental Design Examples. Written by Chris Drew (PhD) | October 9, 2023. Experimental design involves testing an independent variable against a dependent variable. It is a central feature of the scientific method. A simple example of an experimental design is a clinical trial, where research participants are placed into control and ...

  15. Introduction to experiment design (video)

    Block design are for experiments and a stratified sample is used for sampling. Blocking implies that there is some known variable that can affect the response variable or the overall experiment. In the video the example would have been gender because maybe there were more men in the treatment group than the control group and women would react ...

  16. 3 Examples of an Experiment Design

    An experiment design is a plan to execute an experiment. This includes details such as a hypothesis , treatments and controls that allow others to evaluate your experiment or replicate it. The following are illustrative examples of an experiment design.

  17. Designing an Experiment: 8 Steps Plus Experimental Design Types

    How to design an experiment. To design your own experiment, consider following these steps and examples: 1. Determine your specific research question. To begin, craft a specific research question. A research question is a topic you are hoping to learn more about. In order to create the best possible results, try to make your topic as specific ...

  18. 101 Ways to Design an Experiment, or Some Ideas About Teaching Design

    In experiment number 2 the student, Karen Vlasek, using a factorial design with four replicated center points, determined the effects of three variables on the amount of popcorn produced. She found, for example, that although double the yield was obtained with the gourmet popcorn, it cost three times as much as the regular popcorn.

  19. Design of Experiments: An Overview and Application Example

    For the above example, such an assumption would result in the sum of the low-level average and the two high-level averages, or. 9.5 + 20 + 11.5 = 41. It was Fisher's idea that it was much better to vary all the factors at once using a factorial design, in which experiments are run for all combinations of levels for all of the factors.

  20. Design of Experiment (DOE) in Project Management

    Basics of DOE. The method was coined by Sir Ronald A. Fisher in the 1920s and 1930s. Design of Experiment is a powerful data collection and analysis tool that can be used in a variety of experimental situations. It allows manipulating multiple input factors and determining their effect on a desired output (response).

  21. The scientific method and experimental design

    A hypothesis is the process of making careful observations. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere.

  22. Quantitative Research Design

    Quantitative research design is a systematic approach used to investigate phenomena by collecting and analyzing numerical data. It involves the use of structured tools such as surveys, experiments, and statistical analysis to quantify variables and identify patterns, relationships, and cause-and-effect dynamics.

  23. Qualitative Research Design

    Qualitative Research Design is a method focused on understanding and interpreting the experiences of individuals or groups. Unlike quantitative research, which quantifies data and identifies patterns through statistical analysis, Qualitative Research Design explores phenomena in depth using interviews, focus groups, and observations.This approach gathers rich narratives that provide insights ...

  24. Top 3 Tips To Encourage Creative Thinking Within Your Team

    Leaders must serve as an example of what it means to be creative. When they take risks & experiment with new ideas, they set the tone for the entire team. Find out how.

  25. Controlled experiments (article)

    However, experiments with more than one independent variable have to follow specific design guidelines, and the results must be analyzed using a special class of statistical tests to disentangle the effects of the two variables. ... As a more realistic example of a controlled experiment, let's examine a recent study on coral bleaching. Corals ...

  26. Featured Blog

    For the sake of simplicity, this experiment was limited to 4 colours—3 primary and 1 secondary—and 3 shapes. Each colour and shape has strong associations, common in design psychology research. ... Thoughtfully combining colours and shapes can enhance character design. For example, a friendly character might be designed using soft, round ...

  27. How to Execute Your Next Email Marketing Campaign: Guide, Best ...

    This email marketing campaign from BEE is a great example of how to get more eyeballs on your blog content. Their message is short and sweet: here are two articles to help you design high-converting landing pages. And their email puts those resources front and center. Why it works. Creative header copy grabs attention and establishes the theme

  28. Experiment designs (practice)

    Experiment designs. Rita teaches environmental biology. She has 2 forms of a midterm exam, and she wonders if either form is harder than the other. She teaches approximately 200 total students in 4 sections of the class. She randomly assigns half of the students in each section to each form of the exam. Rita will then see if the average scores ...