
Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead. Careful sampling and assignment minimize several types of research bias, particularly sampling bias, survivorship bias, and attrition bias over time.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Does phone use before sleep affect how much sleep a person gets? Does air temperature affect soil respiration?

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

| Research question | Independent variable | Dependent variable |
|---|---|---|
| Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night |
| Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil |

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

| Research question | Extraneous variable | How to control |
|---|---|---|
| Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group. |
| Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots. |

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

| Research question | Null hypothesis (H0) | Alternate hypothesis (Ha) |
|---|---|---|
| Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep. |
| Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration. |

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For example, in the temperature experiment you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, you could treat phone use before sleep as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
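The link between study size and statistical power can be made concrete with a minimal Monte Carlo sketch. The effect size, standard deviation, and two-sided z-test below are illustrative assumptions, not part of the original example:

```python
import random
import statistics

def estimate_power(n_per_group, effect=1.0, sd=2.0, n_sims=2000, seed=42):
    """Monte Carlo estimate of statistical power for comparing two
    group means with a two-sided z-test at alpha = 0.05."""
    rng = random.Random(seed)
    z_crit = 1.96  # critical value for alpha = 0.05, two-sided
    significant = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_group)]
        se = sd * (2 / n_per_group) ** 0.5  # standard error of the difference
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > z_crit:
            significant += 1
    return significant / n_sims

# More subjects give more power to detect the same effect.
print(estimate_power(n_per_group=10))
print(estimate_power(n_per_group=50))
```

Running this shows the power estimate rising sharply as the per-group sample size grows, which is exactly why study size matters.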

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells you what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design.
  • A between-subjects design vs a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

| Research question | Completely randomized design | Randomized block design |
|---|---|---|
| Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups. |
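The two randomization schemes can be sketched in a few lines of Python. The subject labels, age blocks, and treatment levels below are hypothetical placeholders:

```python
import random

def completely_randomized(subjects, treatments, seed=0):
    """Assign each subject to a treatment completely at random."""
    rng = random.Random(seed)
    return {s: rng.choice(treatments) for s in subjects}

def randomized_block(subjects_by_block, treatments, seed=0):
    """Within each block, shuffle subjects and deal treatments out in
    rotation, so every block receives a balanced mix of treatments."""
    rng = random.Random(seed)
    assignment = {}
    for block, members in subjects_by_block.items():
        members = members[:]
        rng.shuffle(members)
        for i, subject in enumerate(members):
            assignment[subject] = treatments[i % len(treatments)]
    return assignment

treatments = ["none", "low", "high"]
blocks = {"age_18_30": ["A", "B", "C"], "age_31_50": ["D", "E", "F"]}
print(randomized_block(blocks, treatments))
```

Note the difference: the completely randomized version can, by chance, give one block an unbalanced mix; the blocked version guarantees each age group sees every treatment level.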

Sometimes randomization isn’t practical or ethical, so researchers create partially random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

| Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design |
|---|---|---|
| Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized. |
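Counterbalancing, described above, can be sketched as follows. The three phone-use levels come from the running example; the subject count is an arbitrary choice:

```python
from itertools import permutations
import random

def counterbalanced_orders(treatments, n_subjects, seed=0):
    """Cycle through every possible treatment order so that, across
    subjects, each order is used about equally often."""
    orders = list(permutations(treatments))
    rng = random.Random(seed)
    rng.shuffle(orders)  # randomize which subject gets which order
    return [orders[i % len(orders)] for i in range(n_subjects)]

schedule = counterbalanced_orders(["none", "low", "high"], 6)
for subject, order in enumerate(schedule, start=1):
    print(f"subject {subject}: {' -> '.join(order)}")
```

With three treatments there are six possible orders, so six subjects cover every order exactly once and no single treatment sequence can dominate the results.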


Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

For example, to measure hours of sleep in the phone use experiment, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article

Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved September 14, 2024, from https://www.scribbr.com/methodology/experimental-design/

Experimental Design – Types, Methods, Guide

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
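As a quick illustration, Python's standard statistics module computes all of these summaries directly. The sleep data below are made up for demonstration:

```python
import statistics

# hypothetical hours of sleep for seven participants
hours_of_sleep = [6.5, 7.0, 8.0, 5.5, 7.5, 6.0, 7.0]

print("mean:  ", statistics.mean(hours_of_sleep))
print("median:", statistics.median(hours_of_sleep))
print("mode:  ", statistics.mode(hours_of_sleep))
print("range: ", max(hours_of_sleep) - min(hours_of_sleep))
print("stdev: ", round(statistics.stdev(hours_of_sleep), 2))
```

Each line prints one of the summary measures named above, using only the standard library.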

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.
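For instance, a normal-approximation 95% confidence interval for a sample mean, one of the simplest forms of estimation, can be sketched as follows (the data are illustrative):

```python
import statistics

sample = [6.5, 7.0, 8.0, 5.5, 7.5, 6.0, 7.0, 6.5, 7.2, 6.8]
n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
# 1.96 is the normal-approximation critical value for 95% confidence;
# for small samples like this a t critical value would be more appropriate.
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(f"mean = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The interval expresses the inference: a range of plausible values for the population mean, given the sample.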

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
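The one-way ANOVA F statistic can be computed from first principles in a short sketch; the three phone-use groups and their values are invented for illustration:

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group variance over
    within-group variance."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    k = len(groups)          # number of groups
    n = len(all_values)      # total observations
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

no_phone = [8.0, 7.5, 8.5]
low_phone = [7.0, 6.5, 7.5]
high_phone = [5.5, 6.0, 5.0]
print(round(one_way_anova_f([no_phone, low_phone, high_phone]), 2))  # prints 19.0
```

A large F (here 19.0) indicates that the variation between group means is much larger than the variation within groups, which is the signal ANOVA tests for.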

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
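A minimal ordinary-least-squares linear regression, fit from the closed-form formulas, might look like this. The phone-use and sleep values are fabricated to lie exactly on a line, so the fit is perfect:

```python
def linear_regression(xs, ys):
    """Ordinary least squares fit of y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

minutes_phone = [0, 30, 60, 90, 120]
hours_sleep = [8.0, 7.5, 7.0, 6.5, 6.0]
slope, intercept = linear_regression(minutes_phone, hours_sleep)
print(slope, intercept)
```

The sign of the slope gives the direction of the relationship (here negative: more phone use, less sleep) and its magnitude gives the strength per unit of the independent variable.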

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question : Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment : Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment : Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
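The steps above can be compressed into a toy end-to-end simulation: random assignment, a simulated treatment effect, and a comparison of group means. All numbers here are illustrative assumptions, not real data:

```python
import random
import statistics

def run_experiment(n_subjects=100, effect=-1.0, seed=1):
    """Simulate a simple two-group experiment: randomly assign subjects,
    apply a (simulated) treatment effect, then compare group means on
    the dependent variable."""
    rng = random.Random(seed)
    subjects = list(range(n_subjects))
    rng.shuffle(subjects)
    treatment = set(subjects[: n_subjects // 2])  # random assignment
    outcomes = {}
    for s in range(n_subjects):
        baseline = rng.gauss(7.0, 1.0)  # e.g. hours of sleep
        outcomes[s] = baseline + (effect if s in treatment else 0.0)
    treated = [outcomes[s] for s in range(n_subjects) if s in treatment]
    control = [outcomes[s] for s in range(n_subjects) if s not in treatment]
    return statistics.mean(treated) - statistics.mean(control)

diff = run_experiment()
print(f"observed treatment effect: {diff:.2f}")
```

Because assignment is random, the observed difference in means estimates the true (simulated) effect; a real analysis would follow this with a significance test as described in the steps above.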

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Experimental Method In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups.

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher’s views and opinions should not affect a study’s results. This is good as it makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

  • Strength: It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength: They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation: The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation: Demand characteristics or experimenter effects may bias the results and become confounding variables.

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables.

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength: Behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength: Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation: There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who had been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength: Behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength: Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength: It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress.
  • Limitation: They may be more expensive and time-consuming than lab experiments.
  • Limitation: There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes). It is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

The variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
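The idea of random allocation can be sketched in a few lines of code. This is a minimal illustration using Python's standard library; the participant IDs and condition names are hypothetical, not from any particular study.

```python
import random

def randomly_allocate(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    so each participant has an equal chance of every condition and group
    sizes differ by at most one."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# Allocate 12 hypothetical participants to three conditions.
groups = randomly_allocate([f"P{i}" for i in range(12)],
                           ["control", "low_dose", "high_dose"], seed=42)
for condition, members in groups.items():
    print(condition, members)
```

Because the shuffle, not the experimenter, decides who goes where, participant variables are spread across conditions by chance rather than by bias.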

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


Experimental Research: What it is + Types of designs


Any research conducted under scientifically acceptable conditions uses experimental methods. The success of an experimental study hinges on confirming that changes in the dependent variable result solely from the manipulation of the independent variable. The research should establish a notable cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant, which you use to measure the differences of the second set. Quantitative research methods, for example, are experimental.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • Invariable behavior between cause and effect.
  • You wish to understand the importance of cause and effect.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

True experimental design relies on statistical analysis to support or reject a hypothesis, making it the most accurate form of research. Of the types of experimental design, only true design can establish a cause-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Ancient scientists used this research to prove that their hypotheses were correct. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see if new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have a stronger hold over variables to obtain desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research.

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using QuestionPro Audience and other tools today.



A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans. Revised on 5 December 2022.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Research question | Independent variable | Dependent variable
Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night
Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

Research question | Extraneous variable | How to control
Phone use and sleep | Variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group
Temperature and soil respiration | Soil moisture, which also affects respiration and can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Research question | Null hypothesis (H0) | Alternate hypothesis (H1)
Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets | Increasing phone use before sleep leads to a decrease in sleep
Temperature and soil respiration | Air temperature does not correlate with soil respiration | Increased air temperature leads to increased soil respiration
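As an illustration of how such a null hypothesis might eventually be tested, here is a minimal sketch in Python using only the standard library. The sleep data are simulated and the group means are invented assumptions; a real analysis would use a statistics package and convert the t statistic to a p-value.

```python
import random
import statistics

rng = random.Random(0)

# Simulated hours of sleep for a hypothetical "no phone use" group and
# a "high phone use" group (illustrative numbers, not real data).
no_phone = [rng.gauss(7.5, 0.8) for _ in range(30)]
high_phone = [rng.gauss(6.9, 0.8) for _ in range(30)]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# A t statistic far from zero is evidence against the null hypothesis
# that phone use does not correlate with the amount of sleep.
t = welch_t(no_phone, high_phone)
print(f"t = {t:.2f}")
```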

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For example, in the soil-warming experiment you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, in the sleep study you could treat phone use before sleep as:

  • a categorical variable: either binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
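One way to build intuition for how study size drives statistical power is a quick Monte Carlo sketch. This is an illustrative simulation using only Python's standard library; the effect size, the large-sample critical value of 1.96, and the function name are assumptions, not part of the original guide.

```python
import random
import statistics

def estimated_power(n_per_group, effect, sd=1.0, crit=1.96,
                    n_sims=2000, seed=1):
    """Estimate statistical power by simulation: the fraction of simulated
    experiments whose Welch t statistic exceeds an (approximate,
    large-sample) two-sided critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_group)]
        se = (statistics.variance(control) / n_per_group
              + statistics.variance(treated) / n_per_group) ** 0.5
        t = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(t) > crit:
            hits += 1
    return hits / n_sims

# More subjects per group means higher power to detect the same effect.
for n in (10, 30, 100):
    print(n, estimated_power(n, effect=0.5))
```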

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design.
  • A between-subjects design vs a within-subjects design.

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design, every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Research question | Completely randomised design | Randomised block design
Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups
Temperature and soil respiration | Warming treatments are assigned to soil plots at random, using a number generator to generate map coordinates within the study area | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups
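A randomised block design like the age-grouped sleep example can be sketched as follows. This is a minimal standard-library illustration; the subject records, age groups, and function name are hypothetical.

```python
import random
from collections import defaultdict

def randomised_block_assignment(subjects, blocking_key, treatments, seed=None):
    """Group subjects by a shared characteristic (the block), then
    randomly assign treatments within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for subject in subjects:
        blocks[blocking_key(subject)].append(subject)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        for i, subject in enumerate(members):
            assignment[subject["id"]] = treatments[i % len(treatments)]
    return assignment

# Hypothetical subjects blocked by age group, as in the sleep example.
subjects = [{"id": i, "age_group": "18-30" if i < 6 else "31-50"}
            for i in range(12)]
assignment = randomised_block_assignment(
    subjects, lambda s: s["age_group"], ["none", "low", "high"], seed=7)
print(assignment)
```

Because treatments are balanced within each block, differences between age groups cannot masquerade as treatment effects.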

Sometimes randomisation isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design
Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised
Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised
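Counterbalancing the order of treatments in a within-subjects design can be sketched like this. The subject IDs and treatment names are illustrative; with three treatments there are 3! = 6 possible orders, and the sketch cycles through all of them (full counterbalancing).

```python
import itertools
import random

def counterbalanced_orders(treatments, subjects, seed=None):
    """Assign each subject an order of treatments, cycling through every
    possible order so that no single order dominates."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(treatments))
    rng.shuffle(orders)
    return {subject: orders[i % len(orders)]
            for i, subject in enumerate(subjects)}

# Hypothetical within-subjects sleep study: every subject experiences all
# three phone-use levels, in a counterbalanced order.
schedule = counterbalanced_orders(
    ["none", "low", "high"], [f"S{i}" for i in range(12)], seed=3)
for subject, order in schedule.items():
    print(subject, order)
```

Spreading the orders evenly means practice and fatigue effects cancel out across the sample rather than piling up on one treatment.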

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.
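Operationalising "hours of sleep" from self-reported bedtimes and wake times could look like this minimal sketch (the function name is an assumption; times are 24-hour HH:MM strings):

```python
from datetime import datetime, timedelta

def hours_slept(bedtime, wake_time):
    """Operationalise 'hours of sleep' as the time between self-reported
    bedtime and wake time, handling sleep that crosses midnight."""
    fmt = "%H:%M"
    start = datetime.strptime(bedtime, fmt)
    end = datetime.strptime(wake_time, fmt)
    if end <= start:  # woke up on the next calendar day
        end += timedelta(days=1)
    return (end - start).total_seconds() / 3600

print(hours_slept("23:30", "07:15"))  # → 7.75
```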

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimulus called a treatment (the treatment group ) while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.
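The random division into treatment groups and a control group described above can be sketched in a few lines of Python. The group names and sample size follow the hypothetical dementia example; the helper function itself is illustrative, not from any particular trial toolkit:

```python
import random

def randomly_divide(subject_ids, group_names, seed=None):
    """Shuffle the subjects, then deal them round-robin into equal groups."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    return {name: ids[i::len(group_names)] for i, name in enumerate(group_names)}

# Hypothetical sample of 30 dementia patients, identified by index:
# two experimental groups (high and low dosage) and one control group.
groups = randomly_divide(range(30), ["high dose", "low dose", "placebo"], seed=0)
```

Because the shuffle precedes the split, each patient is equally likely to land in any group, which is what makes the groups comparable before the drug is administered.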

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-group experimental designs


Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.
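As a minimal sketch of the analysis, the one-way ANOVA F statistic for two groups can be computed by hand. The posttest scores below are invented for illustration:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(pooled) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical posttest scores for the treatment and control groups.
f_stat = one_way_anova_f([1, 2, 3], [4, 5, 6])
```

In a real analysis, the F statistic would be compared against the F distribution with (df_between, df_within) degrees of freedom to obtain a p-value, for example via `scipy.stats.f_oneway`.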

Posttest-only control group design . This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance design . In this design, the pretest measurement is of a covariate that may influence the dependent variable, rather than of the dependent variable itself.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups:

\[E = (O_{1} - O_{2})\,.\]

Due to the presence of covariates, the appropriate statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the control of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
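As a sketch of the covariate adjustment underlying ANCOVA (all numbers are hypothetical), an ordinary least-squares fit that includes both the covariate and a treatment indicator separates the treatment effect from the covariate’s influence:

```python
import numpy as np

# Hypothetical data: a covariate measured before treatment, and posttest
# scores for control (group = 0) and treatment (group = 1) subjects.
covariate = np.array([3.0, 5.0, 7.0, 4.0, 6.0, 8.0])
posttest  = np.array([10.0, 14.0, 18.0, 13.0, 17.0, 21.0])
group     = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Least-squares fit of: posttest ~ intercept + covariate + group.
X = np.column_stack([np.ones_like(covariate), covariate, group])
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)

raw_difference = posttest[group == 1].mean() - posttest[group == 0].mean()
adjusted_effect = coef[2]  # covariate-adjusted treatment effect
```

In this made-up data the treatment group happens to have higher covariate values, so the raw difference in posttest means (3.0) overstates the covariate-adjusted treatment effect (about 1.0). A full ANCOVA would additionally test the adjusted effect for statistical significance.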

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

For example, suppose you want to compare the effect of two types of instruction on learning outcomes, and also examine whether that effect varies with instructional time (one and a half or three hours per week). Here there are two factors, each with two levels, so this is a 2 × 2 factorial design with four treatment groups.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects dominate and render main effects irrelevant: it is not meaningful to interpret main effects if interaction effects are significant.
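A small numeric sketch of main and interaction effects, using invented cell means for the instructional type by instructional time example:

```python
import numpy as np

# Rows: two instructional types; columns: 1.5 vs 3 hours/week.
# Entries are hypothetical mean learning-outcome scores, for illustration only.
cell_means = np.array([[60.0, 70.0],
                       [65.0, 85.0]])

row_means = cell_means.mean(axis=1)  # marginal means for instructional type
col_means = cell_means.mean(axis=0)  # marginal means for instructional time

main_effect_type = row_means[1] - row_means[0]  # 75.0 - 65.0 = 10.0
main_effect_time = col_means[1] - col_means[0]  # 77.5 - 62.5 = 15.0

# Interaction: does the effect of time differ across types?
# (difference in differences across the two rows)
interaction = (cell_means[1, 1] - cell_means[1, 0]) \
            - (cell_means[0, 1] - cell_means[0, 0])  # 20.0 - 10.0 = 10.0
```

The non-zero interaction means the benefit of extra instructional time depends on the instructional type, which is exactly the situation in which interpreting the main effects alone would be misleading.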

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats, such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing threat (the treatment and control groups responding differently to the pretest), and selection-mortality threat (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Non-equivalent group design (NEGD) . Many true experimental designs have quasi-experimental counterparts formed by omitting random assignment; the standard NEGD is the non-equivalent version of the pretest-posttest control group design.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.
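The cutoff-based assignment rule of the RD design can be sketched as follows (the scores and cutoff are hypothetical):

```python
def rd_assign(pretest_scores, cutoff):
    """Regression discontinuity: assignment is determined entirely by
    whether a subject's pre-program score falls below the cutoff."""
    return ["treatment" if score < cutoff else "control"
            for score in pretest_scores]

# Hypothetical standardised test scores; students scoring below 50
# are selected for the remedial program.
assignments = rd_assign([35, 62, 48, 71, 50], cutoff=50)
```

Unlike random assignment, this rule is fully deterministic, which is what makes the treatment and control groups systematically non-equivalent in an RD design.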

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

Separate pretest-posttest samples design

An interesting variation of the NEDV (non-equivalent dependent variable) design is a pattern-matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


6.2 Experimental Design

Learning objectives.

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment , each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
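The strict procedure described above (a coin flip or random integer for each participant, independently of all others) can be sketched in Python. The condition labels and seed are arbitrary:

```python
import random

def assign_independently(n_participants, conditions, seed=None):
    """Strict random assignment: every participant has an equal chance of
    each condition, and each assignment is independent of the others."""
    rng = random.Random(seed)
    return [rng.choice(conditions) for _ in range(n_participants)]

# Pre-generate a full sequence of conditions, one per expected participant;
# each new participant then takes the next condition in the sequence.
sequence = assign_independently(12, ["A", "B", "C"], seed=42)
```

Because each draw is independent, the group sizes will usually be unequal, which is exactly the problem that block randomization (described below) is designed to avoid.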

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions

Participant Condition
4 B
5 C
6 A

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo”), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.

Army Medicine – Surgery – CC BY 2.0.

Within-Subjects Experiments

In a within-subjects experiment, each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect. For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.
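
The counterbalancing scheme described above is straightforward to script. The sketch below is a minimal, hypothetical illustration (the function and variable names are invented, not a standard library routine): it enumerates every possible order of the conditions and cycles through a shuffled list of those orders, so participants are spread across orders as evenly as possible while any particular participant's order remains random.

```python
import itertools
import random

def counterbalanced_orders(conditions, participants, seed=None):
    """Assign each participant a counterbalanced order of conditions.

    With three conditions A, B, and C there are 3! = 6 possible orders
    (ABC, ACB, BAC, BCA, CAB, CBA).  Cycling through a shuffled list of
    all orders balances how often each order is used.
    """
    rng = random.Random(seed)
    orders = list(itertools.permutations(conditions))
    rng.shuffle(orders)
    return {participant: orders[i % len(orders)]
            for i, participant in enumerate(participants)}

# 12 hypothetical participants: each of the 6 orders is used exactly twice.
assignment = counterbalanced_orders(["A", "B", "C"], [f"P{i}" for i in range(12)], seed=7)
```

Analyzing the data separately for each order, as suggested above, then becomes a simple group-by on the assigned order.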

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
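
The mixed-sequence procedure can be sketched in a few lines. Everything here is hypothetical (the function name, the `rate` callback standing in for a participant's actual response, and the stimuli): stimuli from all conditions are interleaved in a fresh random order for each participant, and a mean response is then computed per condition.

```python
import random
from statistics import mean

def run_mixed_block(stimuli_by_condition, rate, seed=None):
    """Present stimuli from every condition in one mixed random sequence,
    then return the mean response for each condition.

    stimuli_by_condition: dict mapping condition name -> list of stimuli
    rate: callable(stimulus) -> numeric response (a stand-in for the
          participant's actual judgment)
    """
    rng = random.Random(seed)
    # Build one mixed sequence of (condition, stimulus) pairs.
    sequence = [(condition, stimulus)
                for condition, stimuli in stimuli_by_condition.items()
                for stimulus in stimuli]
    rng.shuffle(sequence)  # a different random order for each participant
    responses = {condition: [] for condition in stimuli_by_condition}
    for condition, stimulus in sequence:
        responses[condition].append(rate(stimulus))
    return {condition: mean(values) for condition, values in responses.items()}
```

In the defendant example, `stimuli_by_condition` might map `"attractive"` and `"unattractive"` to ten photographs each, with `rate` recording the participant's guilt judgment for each one.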

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog ) are recalled better than abstract nouns (e.g., truth ).
Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Enago Academy

Experimental Research Design — 6 mistakes you should never make!

From their school days onward, students perform scientific experiments whose results illustrate and test the laws and theorems of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.

What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach, using two sets of variables. The first set of variables is held constant and serves as a baseline against which differences in the second set are measured. Quantitative research is the best-known example of an experimental research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When the relationship between the cause and the effect is invariable or never-changing.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. An effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

Researchers use a pre-experimental research design when one or more groups are observed after factors presumed to cause change have been applied. The pre-experimental design helps researchers decide whether further investigation of the groups under observation is warranted.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random assignment of subjects to the groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “quasi” means “resembling.” A quasi-experimental design resembles a true experimental design, but the two differ in how participants are assigned to groups. In this research design, an independent variable is manipulated, but participants are not randomly assigned to groups. This type of research design is used in field settings where random assignment is impractical, unethical, or impossible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.

Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject area does not affect the effectiveness of experimental research; researchers in any field can implement it.
  • The results are specific.
  • After the results are analyzed, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design is not grounded in basic assumptions or postulates, it is fundamentally flawed, and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to obtain valid and sustainable evidence. Incorrect statistical analysis can therefore undermine the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that, you must set a framework for developing research questions that address the core problems.

5. Research Limitations

Every study has some limitations. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

Ethical issues are among the most important yet least discussed aspects of research design. Your design must include ways to minimize risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. However, it is not suitable for every research question: it demands substantial resources, time, and money, and it is difficult to conduct without a solid research foundation. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results of the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured in the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are three types of experimental research designs: pre-experimental research design, true experimental research design, and quasi-experimental research design.

The differences between an experimental and a quasi-experimental design are: 1. The assignment of the control group in quasi-experimental research is non-random, unlike in a true experimental design, where it is random. 2. Experimental research always has a control group, whereas quasi-experimental research may not.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.


Experimental design: Guide, steps, examples

Last updated: 27 April 2023

Reviewed by Miroslav Damyanov


Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 

What is experimental research design?

You can determine the relationship between each of the variables by:

  • Manipulating one or more independent variables (i.e., stimuli or treatments)
  • Applying the changes to one or more dependent variables (i.e., test groups or outcomes)

With the ability to analyze the relationship between variables and using measurable data, you can increase the accuracy of the result. 

What is a good experimental design?

A good experimental design requires:

  • Significant planning to ensure control over the testing environment
  • Sound experimental treatments
  • Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

Provide unbiased estimates of inputs and associated uncertainties

Enable the researcher to detect differences caused by independent variables

Include a plan for analysis and reporting of the results

Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory by manipulating an independent variable and measuring its effect on a dependent variable.

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

  • The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

Test the effectiveness of a new medication

Design better products for consumers

Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. The resulting conclusions can then inform real-world decisions about the treatment or product being studied.

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It uses statistical analysis to test a specific hypothesis.

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random assignment reduces the potential for bias, providing more reliable results.

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest introduces multiple ways to test subjects. For instance, if the control group also experiences a change, it reveals that taking the test twice changes the results.

Solomon four-group design

This structure divides subjects into four groups, two of which are control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest.

The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 
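As a concrete illustration, the Solomon four-group assignment can be sketched in a few lines of Python. This is a hypothetical sketch; the function name and group layout flags are not from the article.

```python
import random

def solomon_four_groups(subjects, seed=0):
    """Randomly split subjects into the four Solomon groups.

    Layout: group 1 = pretest + treatment, group 2 = pretest only
    (control), group 3 = treatment only, group 4 = neither (control).
    All four groups receive the posttest.
    """
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    quarter = len(pool) // 4  # any leftover subjects are dropped in this sketch
    plans = [
        {"pretest": True, "treatment": True},
        {"pretest": True, "treatment": False},
        {"pretest": False, "treatment": True},
        {"pretest": False, "treatment": False},
    ]
    return [
        {"members": pool[i * quarter:(i + 1) * quarter], **plan}
        for i, plan in enumerate(plans)
    ]

groups = solomon_four_groups(range(40))
assert sum(len(g["members"]) for g in groups) == 40
```

Comparing pretested with unpretested groups is what lets researchers detect whether the pretest itself changed the results.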

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn't assign participants randomly. Researchers typically divide the groups in this research by pre-existing differences.

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

  • 5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these groups, consider how you might control them in your experiment. 

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question . 

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 
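For a variable stimulus, deciding how finely to vary it often amounts to picking a set of evenly spaced levels. A minimal sketch (the function name and dose values are hypothetical, for illustration only):

```python
def treatment_levels(low, high, n_levels):
    """Return n_levels evenly spaced stimulus levels from low to high, inclusive."""
    if n_levels == 1:
        return [low]
    step = (high - low) / (n_levels - 1)
    return [low + i * step for i in range(n_levels)]

# e.g., five hypothetical drug doses between 0 mg (control) and 100 mg
assert treatment_levels(0, 100, 5) == [0.0, 25.0, 50.0, 75.0, 100.0]
```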

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 
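Random assignment itself is mechanical once the group count is chosen. A minimal Python sketch (the function name and group labels are hypothetical):

```python
import random

def assign_groups(subjects, n_groups=2, seed=42):
    """Shuffle subjects, then deal them round-robin into equal-sized groups.

    By convention here, group 0 serves as the control and the
    remaining groups receive treatments.
    """
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

control, treatment = assign_groups([f"subject_{i}" for i in range(20)])
assert len(control) == len(treatment) == 10
```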

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.
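Once the dependent variable is operationalized as numbers, the simplest summary compares group means. A hypothetical sketch, not a substitute for a proper statistical test:

```python
from statistics import mean, stdev

def treatment_effect(control_scores, treatment_scores):
    """Summarize a two-group outcome as the difference in group means,
    with each group's sample standard deviation for context."""
    return {
        "effect": mean(treatment_scores) - mean(control_scores),
        "control_sd": stdev(control_scores),
        "treatment_sd": stdev(treatment_scores),
    }

result = treatment_effect([4, 5, 6, 5], [7, 8, 6, 7])
assert result["effect"] == 2.0
```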

  • Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can duplicate results to promote the validity of the study .

Natural settings can be replicated quickly in the lab, so research can begin without waiting for conditions to arise in the field.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

  • Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines . 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

  • Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs , the company can assess which option most appeals to potential customers. 

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.


Experimentation in Scientific Research: Variables and controls in practice

by Anthony Carpi, Ph.D., Anne E. Egger, Ph.D.


Did you know that experimental design was developed more than a thousand years ago by a Middle Eastern scientist who studied light? All of us use a form of experimental research in our day to day lives when we try to find the spot with the best cell phone reception, try out new cooking recipes, and more. Scientific experiments are built on similar principles.

Experimentation is a research method in which one or more variables are consciously manipulated and the outcome or effect of that manipulation on other variables is observed.

Experimental designs often make use of controls that provide a measure of variability within a system and a check for sources of error.

Experimental methods are commonly applied to determine causal relationships or to quantify the magnitude of response of a variable.

Anyone who has used a cellular phone knows that certain situations require a bit of research: If you suddenly find yourself in an area with poor phone reception, you might move a bit to the left or right, walk a few steps forward or back, or even hold the phone over your head to get a better signal. While the actions of a cell phone user might seem obvious, the person seeking cell phone reception is actually performing a scientific experiment: consciously manipulating one component (the location of the cell phone) and observing the effect of that action on another component (the phone's reception). Scientific experiments are obviously a bit more complicated, and generally involve more rigorous use of controls , but they draw on the same type of reasoning that we use in many everyday situations. In fact, the earliest documented scientific experiments were devised to answer a very common everyday question: how vision works.

  • A brief history of experimental methods

Figure 1: Alhazen (965-ca.1039) as pictured on an Iraqi 10,000-dinar note

One of the first ideas regarding how human vision works came from the Greek philosopher Empedocles around 450 BCE . Empedocles reasoned that the Greek goddess Aphrodite had lit a fire in the human eye, and vision was possible because light rays from this fire emanated from the eye, illuminating objects around us. While a number of people challenged this proposal, the idea that light radiated from the human eye proved surprisingly persistent until around 1,000 CE , when a Middle Eastern scientist advanced our knowledge of the nature of light and, in so doing, developed a new and more rigorous approach to scientific research . Abū 'Alī al-Hasan ibn al-Hasan ibn al-Haytham, also known as Alhazen , was born in 965 CE in the Arabian city of Basra in what is present-day Iraq. He began his scientific studies in physics, mathematics, and other sciences after reading the works of several Greek philosophers. One of Alhazen's most significant contributions was a seven-volume work on optics titled Kitab al-Manazir (later translated to Latin as Opticae Thesaurus Alhazeni – Alhazen's Book of Optics ). Beyond the contributions this book made to the field of optics, it was a remarkable work in that it based conclusions on experimental evidence rather than abstract reasoning – the first major publication to do so. Alhazen's contributions have proved so significant that his likeness was immortalized on the 2003 10,000-dinar note issued by Iraq (Figure 1).

Alhazen invested significant time studying light , color, shadows, rainbows, and other optical phenomena. Among this work was a study in which he stood in a darkened room with a small hole in one wall. Outside of the room, he hung two lanterns at different heights. Alhazen observed that the light from each lantern illuminated a different spot in the room, and each lighted spot formed a direct line with the hole and one of the lanterns outside the room. He also found that covering a lantern caused the spot it illuminated to darken, and exposing the lantern caused the spot to reappear. Thus, Alhazen provided some of the first experimental evidence that light does not emanate from the human eye but rather is emitted by certain objects (like lanterns) and travels from these objects in straight lines. Alhazen's experiment may seem simplistic today, but his methodology was groundbreaking: He developed a hypothesis based on observations of physical relationships (that light comes from objects), and then designed an experiment to test that hypothesis. Despite the simplicity of the method , Alhazen's experiment was a critical step in refuting the long-standing theory that light emanated from the human eye, and it was a major event in the development of modern scientific research methodology.


  • Experimentation as a scientific research method

Experimentation is one scientific research method , perhaps the most recognizable, in a spectrum of methods that also includes description, comparison, and modeling (see our Description , Comparison , and Modeling modules). While all of these methods share in common a scientific approach, experimentation is unique in that it involves the conscious manipulation of certain aspects of a real system and the observation of the effects of that manipulation. You could solve a cell phone reception problem by walking around a neighborhood until you see a cell phone tower, observing other cell phone users to see where those people who get the best reception are standing, or looking on the web for a map of cell phone signal coverage. All of these methods could also provide answers, but by moving around and testing reception yourself, you are experimenting.

  • Variables: Independent and dependent

In the experimental method , a condition or a parameter , generally referred to as a variable , is consciously manipulated (often referred to as a treatment) and the outcome or effect of that manipulation is observed on other variables. Variables are given different names depending on whether they are the ones manipulated or the ones observed:

  • Independent variable refers to a condition within an experiment that is manipulated by the scientist.
  • Dependent variable refers to an event or outcome of an experiment that might be affected by the manipulation of the independent variable .

Scientific experimentation helps to determine the nature of the relationship between independent and dependent variables . While it is often difficult, or sometimes impossible, to manipulate a single variable in an experiment , scientists often work to minimize the number of variables being manipulated. For example, as we move from one location to another to get better cell reception, we likely change the orientation of our body, perhaps from south-facing to east-facing, or we hold the cell phone at a different angle. Which variable affected reception: location, orientation, or angle of the phone? It is critical that scientists understand which aspects of their experiment they are manipulating so that they can accurately determine the impacts of that manipulation . In order to constrain the possible outcomes of an experimental procedure, most scientific experiments use a system of controls .

  • Controls: Negative, positive, and placebos

In a controlled study, a scientist essentially runs two (or more) parallel and simultaneous experiments: a treatment group, in which the effect of an experimental manipulation is observed on a dependent variable , and a control group, which uses all of the same conditions as the first with the exception of the actual treatment. Controls can fall into one of two groups: negative controls and positive controls .

In a negative control , the control group is exposed to all of the experimental conditions except for the actual treatment . The need to match all experimental conditions exactly is so great that, for example, in a trial for a new drug, the negative control group will be given a pill or liquid that looks exactly like the drug, except that it will not contain the drug itself, a control often referred to as a placebo . Negative controls allow scientists to measure the natural variability of the dependent variable(s), provide a means of measuring error in the experiment , and also provide a baseline to measure against the experimental treatment.

Some experimental designs also make use of positive controls . A positive control is run as a parallel experiment and generally involves the use of an alternative treatment that the researcher knows will have an effect on the dependent variable . For example, when testing the effectiveness of a new drug for pain relief, a scientist might administer treatment placebo to one group of patients as a negative control , and a known treatment like aspirin to a separate group of individuals as a positive control since the pain-relieving aspects of aspirin are well documented. In both cases, the controls allow scientists to quantify background variability and reject alternative hypotheses that might otherwise explain the effect of the treatment on the dependent variable .
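The logic of negative and positive controls can be illustrated with a toy simulation of the three-arm pain-relief trial described above. All numbers and effect sizes here are invented for illustration; nothing is drawn from a real trial.

```python
import random

def simulate_pain_trial(n_per_group=50, seed=7):
    """Toy simulation of a three-arm pain-relief trial.

    Reported pain (0-10 scale) starts at a placebo baseline of 6;
    the positive control (aspirin) and the candidate drug lower it.
    The effect sizes are made up for this sketch.
    """
    rng = random.Random(seed)
    arm_effects = {"placebo": 0.0, "aspirin": -2.0, "new_drug": -4.0}
    means = {}
    for arm, effect in arm_effects.items():
        scores = [min(10, max(0, 6 + effect + rng.gauss(0, 1)))
                  for _ in range(n_per_group)]
        means[arm] = sum(scores) / len(scores)
    return means

means = simulate_pain_trial()
# The known-effective positive control should land between the placebo
# baseline and the (here, stronger) candidate drug.
assert means["new_drug"] < means["aspirin"] < means["placebo"]
```

The placebo arm measures background variability; the aspirin arm confirms the experiment is capable of detecting a real effect at all.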

  • Experimentation in practice: The case of Louis Pasteur

Well-controlled experiments generally provide strong evidence of causality, demonstrating whether the manipulation of one variable causes a response in another variable. For example, as early as the 6th century BCE, Anaximander, a Greek philosopher, speculated that life could be formed from a mixture of sea water, mud, and sunlight. The idea probably stemmed from the observation of worms, mosquitoes, and other insects "magically" appearing in mudflats and other shallow areas. While the suggestion was challenged on a number of occasions, the idea that living microorganisms could be spontaneously generated from air persisted until the middle of the 19th century.

In the 1750s, John Needham, an English clergyman and naturalist, claimed to have proved that spontaneous generation does occur when he showed that microorganisms flourished in certain foods such as soup broth, even after they had been briefly boiled and covered. Several years later, the Italian abbot and biologist Lazzaro Spallanzani boiled soup broth for over an hour and then placed bowls of this soup in different conditions, sealing some and leaving others exposed to air. Spallanzani found that microorganisms grew in the soup exposed to air but were absent from the sealed soup. He therefore challenged Needham's conclusions and hypothesized that microorganisms suspended in air settled onto the exposed soup but not the sealed soup, and rejected the idea of spontaneous generation.

Needham countered, arguing that the growth of bacteria in the soup was not due to microbes settling onto the soup from the air, but rather because spontaneous generation required contact with an intangible "life force" in the air itself. He proposed that Spallanzani's extensive boiling destroyed the "life force" present in the soup, preventing spontaneous generation in the sealed bowls but allowing air to replenish the life force in the open bowls. For several decades, scientists continued to debate the spontaneous generation theory of life, with support for the theory coming from several notable scientists including Félix Pouchet and Henry Bastian. Pouchet, Director of the Rouen Museum of Natural History in France, and Bastian, a well-known British bacteriologist, argued that living organisms could spontaneously arise from chemical processes such as fermentation and putrefaction. The debate became so heated that in 1860, the French Academy of Sciences established the Alhumbert Prize of 2,500 francs for the first person who could conclusively resolve the conflict. In 1864, Louis Pasteur achieved that result with a series of well-controlled experiments and in doing so claimed the Alhumbert Prize.

Pasteur prepared for his experiments by studying the work of others that came before him. In fact, in April 1861 Pasteur wrote to Pouchet to obtain a research description that Pouchet had published. In this letter, Pasteur writes:

Paris, April 3, 1861 Dear Colleague, The difference of our opinions on the famous question of spontaneous generation does not prevent me from esteeming highly your labor and praiseworthy efforts... The sincerity of these sentiments...permits me to have recourse to your obligingness in full confidence. I read with great care everything that you write on the subject that occupies both of us. Now, I cannot obtain a brochure that I understand you have just published.... I would be happy to have a copy of it because I am at present editing the totality of my observations, where naturally I criticize your assertions. L. Pasteur (Porter, 1961)

Pasteur received the brochure from Pouchet several days later and went on to conduct his own experiments . In these, he repeated Spallanzani's method of boiling soup broth, but he divided the broth into portions and exposed these portions to different controlled conditions. Some broth was placed in flasks that had straight necks that were open to the air, some broth was placed in sealed flasks that were not open to the air, and some broth was placed into a specially designed set of swan-necked flasks, in which the broth would be open to the air but the air would have to travel a curved path before reaching the broth, thus preventing anything that might be present in the air from simply settling onto the soup (Figure 2). Pasteur then observed the response of the dependent variable (the growth of microorganisms) in response to the independent variable (the design of the flask). Pasteur's experiments contained both positive controls (samples in the straight-necked flasks that he knew would become contaminated with microorganisms) and negative controls (samples in the sealed flasks that he knew would remain sterile). If spontaneous generation did indeed occur upon exposure to air, Pasteur hypothesized, microorganisms would be found in both the swan-neck flasks and the straight-necked flasks, but not in the sealed flasks. Instead, Pasteur found that microorganisms appeared in the straight-necked flasks, but not in the sealed flasks or the swan-necked flasks.

Figure 2: Pasteur's drawings of the flasks he used (Pasteur, 1861). Fig. 25 D, C, and B (top) show various sealed flasks (negative controls); Fig. 26 (bottom right) illustrates a straight-necked flask directly open to the atmosphere (positive control); and Fig. 25 A (bottom left) illustrates the specially designed swan-necked flask (treatment group).

By using controls and replicating his experiment (he used more than one of each type of flask), Pasteur was able to answer many of the questions that still surrounded the issue of spontaneous generation. Pasteur said of his experimental design, "I affirm with the most perfect sincerity that I have never had a single experiment, arranged as I have just explained, which gave me a doubtful result" (Porter, 1961). Pasteur's work helped refute the theory of spontaneous generation – his experiments showed that air alone was not the cause of bacterial growth in the flask, and his research supported the hypothesis that live microorganisms suspended in air could settle onto the broth in open-necked flasks via gravity .

  • Experimentation across disciplines

Experiments are used across all scientific disciplines to investigate a multitude of questions. In some cases, scientific experiments are used for exploratory purposes in which the scientist does not know what the dependent variable is. In this type of experiment, the scientist will manipulate an independent variable and observe what the effect of the manipulation is in order to identify a dependent variable (or variables). Exploratory experiments are sometimes used in nutritional biology when scientists probe the function and purpose of dietary nutrients . In one approach, a scientist will expose one group of animals to a normal diet, and a second group to a similar diet except that it is lacking a specific vitamin or nutrient. The researcher will then observe the two groups to see what specific physiological changes or medical problems arise in the group lacking the nutrient being studied.

Scientific experiments are also commonly used to quantify the magnitude of a relationship between two or more variables . For example, in the fields of pharmacology and toxicology, scientific experiments are used to determine the dose-response relationship of a new drug or chemical. In these approaches, researchers perform a series of experiments in which a population of organisms , such as laboratory mice, is separated into groups and each group is exposed to a different amount of the drug or chemical of interest. The analysis of the data that result from these experiments (see our Data Analysis and Interpretation module) involves comparing the degree of the organism's response to the dose of the substance administered.
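In its crudest form, the dose-response analysis described above amounts to fitting a line to response versus dose. A hypothetical sketch with made-up data (real studies typically fit sigmoidal curves, but the idea is the same):

```python
def fit_dose_response(doses, responses):
    """Ordinary least-squares slope and intercept for response vs. dose."""
    n = len(doses)
    mean_x = sum(doses) / n
    mean_y = sum(responses) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, responses))
    sxx = sum((x - mean_x) ** 2 for x in doses)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Invented data: percent of animals responding at increasing doses.
slope, intercept = fit_dose_response([5, 10, 20, 40], [10, 22, 38, 70])
assert slope > 0  # response rises with dose
```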

In this context, experiments can provide additional evidence to complement other research methods . For example, in the 1950s a great debate ensued over whether or not the chemicals in cigarette smoke cause cancer. Several researchers had conducted comparative studies (see our Comparison in Scientific Research module) that indicated that patients who smoked had a higher probability of developing lung cancer when compared to nonsmokers. Comparative studies differ slightly from experimental methods in that you do not consciously manipulate a variable ; rather you observe differences between two or more groups depending on whether or not they fall into a treatment or control group. Cigarette companies and lobbyists criticized these studies, suggesting that the relationship between smoking and lung cancer was coincidental. Several researchers noted the need for a clear dose-response study; however, the difficulties in getting cigarette smoke into the lungs of laboratory animals prevented this research. In the mid-1950s, Ernest Wynder and colleagues had an ingenious idea: They condensed the chemicals from cigarette smoke into a liquid and applied this in various doses to the skin of groups of mice. The researchers published data from a dose-response experiment of the effect of tobacco smoke condensate on mice (Wynder et al., 1957).

As seen in Figure 3, the researchers found a positive relationship between the amount of condensate applied to the skin of mice and the number of cancers that developed. The graph shows the results of a study in which different groups of mice were exposed to increasing amounts of cigarette tar. The black dots indicate the percentage of each sample group of mice that developed cancer for a given amount of cigarette smoke "condensate" applied to their skin. The vertical lines are error bars, showing the amount of uncertainty. The graph shows generally increasing cancer rates with greater exposure. This study was one of the first pieces of experimental evidence in the cigarette smoking debate, and it helped strengthen the case for cigarette smoke as the causative agent in lung cancer in smokers.
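
The error bars in a plot like Figure 3 typically represent the uncertainty in each estimated percentage. One common measure is the standard error of a proportion; a hedged sketch, with invented counts rather than Wynder's data:

```python
import math

def proportion_se(successes: int, n: int) -> float:
    """Standard error of an estimated proportion successes/n."""
    p = successes / n
    return math.sqrt(p * (1 - p) / n)

# e.g. if 22 of 60 mice in one dose group developed cancer:
p_hat = 22 / 60
se = proportion_se(22, 60)   # roughly 0.06, i.e. about 6 percentage points
```

Larger groups shrink this standard error, which is why the error bars in dose-response plots narrow as sample sizes grow.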

Figure 3: Percentage of mice with cancer versus the amount of cigarette smoke "condensate" applied to their skin (source: Wynder et al., 1957).

Sometimes experimental approaches and other research methods are not clearly distinct, or scientists may even use multiple research approaches in combination. For example, at 1:52 a.m. EDT on July 4, 2005, scientists with the National Aeronautics and Space Administration (NASA) conducted a study in which a 370 kg spacecraft named Deep Impact was purposely slammed into passing comet Tempel 1. A nearby spacecraft observed the impact and radioed data back to Earth. The research was partially descriptive in that it documented the chemical composition of the comet, but it was also partly experimental in that the effect of slamming the Deep Impact probe into the comet on the volatilization of previously undetected compounds , such as water, was assessed (A'Hearn et al., 2005). It is particularly common that experimentation and description overlap: Another example is Jane Goodall 's research on the behavior of chimpanzees, which can be read in our Description in Scientific Research module.

  • Limitations of experimental methods

Figure 4: An image of comet Tempel 1 67 seconds after collision with the Deep Impact impactor. Image credit: NASA/JPL-Caltech/UMD http://deepimpact.umd.edu/gallery/HRI_937_1.html

While scientific experiments provide invaluable data regarding causal relationships, they do have limitations. One criticism of experiments is that they do not necessarily represent real-world situations. In order to clearly identify the relationship between an independent variable and a dependent variable, experiments are designed so that many other contributing variables are fixed or eliminated. For example, in an experiment designed to quantify the effect of vitamin A dose on the metabolism of beta-carotene in humans, Shawna Lemke and colleagues had to precisely control the diet of their human volunteers (Lemke et al., 2003). They asked their participants to limit their intake of foods rich in vitamin A and further asked that they maintain a precise log of all foods eaten for 1 week prior to their study. At the time of their study, they controlled their participants' diet by feeding them all the same meals, described in the methods section of their research article in this way:

Meals were controlled for time and content on the dose administration day. Lunch was served at 5.5 h postdosing and consisted of a frozen dinner (Enchiladas, Amy's Kitchen, Petaluma, CA), a blueberry bagel with jelly, 1 apple and 1 banana, and a large chocolate chunk cookie (Pepperidge Farm). Dinner was served 10.5 h post dose and consisted of a frozen dinner (Chinese Stir Fry, Amy's Kitchen) plus the bagel and fruit taken for lunch.

While this is an important aspect of making an experiment manageable and informative, it is often not representative of the real world, in which many variables may change at once, including the foods you eat. Still, experimental research is an excellent way of determining relationships between variables that can be later validated in real world settings through descriptive or comparative studies.

Design is critical to the success or failure of an experiment . Slight variations in the experimental set-up could strongly affect the outcome being measured. For example, during the 1950s, a number of experiments were conducted to evaluate the toxicity in mammals of the metal molybdenum, using rats as experimental subjects . Unexpectedly, these experiments seemed to indicate that the type of cage the rats were housed in affected the toxicity of molybdenum. In response, G. Brinkman and Russell Miller set up an experiment to investigate this observation (Brinkman & Miller, 1961). Brinkman and Miller fed two groups of rats a normal diet that was supplemented with 200 parts per million (ppm) of molybdenum. One group of rats was housed in galvanized steel (steel coated with zinc to reduce corrosion) cages and the second group was housed in stainless steel cages. Rats housed in the galvanized steel cages suffered more from molybdenum toxicity than the other group: They had higher concentrations of molybdenum in their livers and lower blood hemoglobin levels. It was then shown that when the rats chewed on their cages, those housed in the galvanized metal cages absorbed zinc plated onto the metal bars, and zinc is now known to affect the toxicity of molybdenum. In order to control for zinc exposure, then, stainless steel cages needed to be used for all rats.

Scientists also have an obligation to adhere to ethical limits in designing and conducting experiments . During World War II, doctors working in Nazi Germany conducted many heinous experiments using human subjects . Among them was an experiment meant to identify effective treatments for hypothermia in humans, in which concentration camp prisoners were forced to sit in ice water or left naked outdoors in freezing temperatures and then re-warmed by various means. Many of the exposed victims froze to death or suffered permanent injuries. As a result of the Nazi experiments and other unethical research , strict scientific ethical standards have been adopted by the United States and other governments, and by the scientific community at large. Among other things, ethical standards (see our Scientific Ethics module) require that the benefits of research outweigh the risks to human subjects, and those who participate do so voluntarily and only after they have been made fully aware of all the risks posed by the research. These guidelines have far-reaching effects: While the clearest indication of causation in the cigarette smoke and lung cancer debate would have been to design an experiment in which one group of people was asked to take up smoking and another group was asked to refrain from smoking, it would be highly unethical for a scientist to purposefully expose a group of healthy people to a suspected cancer causing agent. As an alternative, comparative studies (see our Comparison in Scientific Research module) were initiated in humans, and experimental studies focused on animal subjects. The combination of these and other studies provided even stronger evidence of the link between smoking and lung cancer than either one method alone would have.

  • Experimentation in modern practice

Like all scientific research , the results of experiments are shared with the scientific community, are built upon, and inspire additional experiments and research. For example, once Alhazen established that light given off by objects enters the human eye, the natural question that was asked was "What is the nature of light that enters the human eye?" Two common theories about the nature of light were debated for many years. Sir Isaac Newton was among the principal proponents of a theory suggesting that light was made of small particles . The English naturalist Robert Hooke (who held the interesting title of Curator of Experiments at the Royal Society of London) supported a different theory stating that light was a type of wave, like sound waves . In 1801, Thomas Young conducted a now classic scientific experiment that helped resolve this controversy . Young, like Alhazen, worked in a darkened room and allowed light to enter only through a small hole in a window shade (Figure 5). Young refocused the beam of light with mirrors and split the beam with a paper-thin card. The split light beams were then projected onto a screen, and formed an alternating light and dark banding pattern – that was a sign that light was indeed a wave (see our Light I: Particle or Wave? module).

Figure 5: Young's split-light beam experiment helped clarify the wave nature of light.

Approximately 100 years later, in 1905, new experiments led Albert Einstein to conclude that light exhibits properties of both waves and particles . Einstein's dual wave-particle theory is now generally accepted by scientists.

Experiments continue to help refine our understanding of light even today. In addition to his wave-particle theory, Einstein also proposed that the speed of light was unchanging and absolute. Yet in 1998 a group of scientists led by Lene Hau showed that light could be slowed from its normal speed of 3 x 10^8 meters per second to a mere 17 meters per second with a special experimental apparatus (Hau et al., 1999). The series of experiments that began with Alhazen's work 1000 years ago has led to a progressively deeper understanding of the nature of light. Although the tools with which scientists conduct experiments may have become more complex, the principles behind controlled experiments are remarkably similar to those used by Pasteur and Alhazen hundreds of years ago.

How the Experimental Method Works in Psychology

The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.
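
The random-assignment step described above is mechanically simple. A sketch with hypothetical participant IDs (the seed is fixed only so the example is reproducible):

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical IDs

rng = random.Random(42)        # fixed seed for a reproducible example
shuffled = participants.copy()
rng.shuffle(shuffled)

control = shuffled[:10]        # first half -> control group
experimental = shuffled[10:]   # second half -> experimental group
```

Because each participant is equally likely to land in either group, differences between participants tend to average out across the two groups.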

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns may impact our perception. Or they might wonder whether certain actions can improve memory . Experiments are conducted on many behavioral topics, including:

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior .

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness .

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

Hypothesis

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.
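
Once the scores are collected, testing the sleep hypothesis above comes down to comparing the two groups' means relative to their variability. A minimal sketch using Welch's t statistic; the scores are invented for illustration:

```python
import math
from statistics import mean, stdev

# Hypothetical math-test scores the day after a full night's sleep
# versus after sleep deprivation.
rested = [88, 92, 85, 90, 95, 87, 91, 89]
deprived = [78, 83, 75, 80, 85, 77, 82, 79]

def welch_t(a, b):
    """Welch's t statistic for the difference between two sample means."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(var_a + var_b)

t = welch_t(rested, deprived)   # a large positive t favors the hypothesis
```

In practice the t value would be compared against a reference distribution to decide whether the difference is larger than chance alone would explain.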

Operational Definitions

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.

The Experimental Process

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)
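
The five steps above can be sketched as a single simulated run. Everything here is invented: a hypothetical treatment assumed to shift a normally distributed score upward by 5 points.

```python
import random
from statistics import mean

rng = random.Random(0)   # fixed seed so the simulation is reproducible

# Steps 1-2: problem and protocol -- does the treatment raise the score?
subjects = list(range(40))
rng.shuffle(subjects)
control, treated = subjects[:20], subjects[20:]

# Step 3: conduct -- simulate measured scores (baseline 50, sd 5);
# the treated group receives the assumed +5 effect.
control_scores = [rng.gauss(50, 5) for _ in control]
treated_scores = [rng.gauss(50, 5) + 5 for _ in treated]

# Step 4: analyze -- estimate the treatment effect.
effect = mean(treated_scores) - mean(control_scores)

# Step 5: share the finding.
print(f"estimated treatment effect: {effect:.1f} points")
```

The estimated effect hovers near the true +5 shift but varies from run to run, which is exactly the sampling noise a formal analysis has to account for.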

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists prove and disprove theories in this field.

Types of Experiments

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (birth order). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

Potential Pitfalls of the Experimental Method

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can impact the data collected. Results may not be reproducible, meaning they have low reliability .

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research .

American Psychological Association. Experimental psychology studies human and animals .

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor . Front Psychol . 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology .

Stanford University. Wilhelm Maximilian Wundt . Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner .

Britannica. Hermann von Helmholtz .

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: implications for the study of psychological phenomena today . Psychol Res . 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller .

McCambridge J, de Bruin M, Witton J.  The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review .  PLoS ONE . 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments . In: The Sage Encyclopedia of Communication Research Methods. Allen M, ed. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship — quasi-experimental designs . Infect Control Hosp Epidemiol . 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance . Educ Psychol . 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking . ISPRS Int J Geo-Inf . 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot . J Commun . 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise . Psychol Rep . 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works . Indoor Air . 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory . J Personal Social Psychol . 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

  • Experimental Research Designs: Types, Examples & Methods

busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research resembles a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and otherwise given the same treatment, we can conclude that sunlight aids growth in all similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research in which one or more independent variables are manipulated in order to measure their effect on one or more dependent variables. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during research, making this an example of a quantitative research method .

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of 3 types, namely: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In a pre-experimental research design, either one group or various dependent groups are observed for the effect of the application of an independent variable that is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental research falls short of the true-experimental criteria in several areas. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment presumed to cause change, making it a posttest-only study.

  • One-group Pretest-posttest Research Design: 

This research design combines pretest and posttest studies by carrying out a test on a single group both before the treatment is administered and after it is administered, with the pretest given at the beginning of treatment and the posttest at the end.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore bears a resemblance to true experimental research but is not the same. In quasi-experiments, the participants are not randomly assigned, and as such, quasi-experiments are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research design include the time series design, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to support or reject a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 randomly assigned groups of subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and random distribution of subjects. Classifications of true experimental design include:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
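
Assigning subjects to the four Solomon groups is again just a random split, this time into quarters. A sketch with hypothetical subject IDs:

```python
import random

subjects = [f"S{i:02d}" for i in range(24)]   # 24 hypothetical subjects
rng = random.Random(7)                        # fixed seed for reproducibility
rng.shuffle(subjects)

q = len(subjects) // 4
groups = {
    "pretest + treatment + posttest": subjects[:q],
    "pretest + posttest":             subjects[q:2 * q],
    "treatment + posttest":           subjects[2 * q:3 * q],
    "posttest only":                  subjects[3 * q:],
}
```

Comparing the pretested groups against the posttest-only groups lets the researcher check whether the pretest itself influenced the outcome.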

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students' exam performance is the dependent variable, while the lectures are the independent variable, i.e., the treatment applied to the subjects (the students).

Only one group of carefully selected subjects is considered in this research, making it an example of a pre-experimental research design. We will also notice that the test is carried out only at the end of the semester, not at the beginning, which makes it easy to conclude that this is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.
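
In a pretest-posttest design like this one, the quantity of interest is each employee's gain from before to after training. A minimal sketch with invented scores:

```python
from statistics import mean

# Hypothetical skill scores for five employees before and after training.
pre_scores  = [62, 70, 58, 75, 66]
post_scores = [71, 78, 64, 80, 73]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
avg_gain = mean(gains)   # average improvement observed after the training
```

Comparing these gains against those of an untreated control group is what lets the organization attribute the improvement to the training rather than to time on the job.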

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is better. Imagine a case where the students assigned to each teacher are hand-picked, perhaps at the request of parents or on the basis of behaviour and ability.

This is a non-equivalent group design example because the samples are not equal. By evaluating the effectiveness of each teacher's teaching method this way, we may draw a conclusion after a post-test has been carried out.

However, the result may be influenced by factors like the natural ability of a student. For example, a very able student will grasp the material more easily than his or her peers irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

  • Variables

Experimental research contains dependent, independent and extraneous variables. The dependent variables are the outcomes being measured, observed on the subjects of the research.

The independent variables are the experimental treatments exerted on the subjects to see their effect on the dependent variables. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

  • Setting

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where the extraneous variables can be tightly controlled or even eliminated.

Other experiments are carried out in a less controllable setting. The choice of setting depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design is used mainly in the physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions about a subject matter.

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop the proper treatment for diseases. In most cases, rather than directly using patients as the research subjects, researchers take a sample of bacteria from the patient's body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment's effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experiments, experimental research can also be used to improve the standard of an academic institution. This includes testing students' knowledge on different topics, coming up with better teaching methods, and implementing other programs that will aid student learning.
  • Human Behavior: Social scientists mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subjects of social interaction research, where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers are allowed to test the 2 samples and how the button positioning influences the user interaction is recorded.
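A tally for such a button-position test could be sketched as follows (the tester pool and the `interact` behavior below are hypothetical stand-ins, not part of the article): random product testers are split between the two layouts, and the interaction rate of each group is recorded.

```python
import random

def ab_test(testers, interact):
    """Randomly split testers between two button layouts, "A" and "B",
    and return the fraction of each group that interacted with the button."""
    shuffled = testers[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    assignment = {"A": shuffled[:half], "B": shuffled[half:]}
    return {layout: sum(interact(t, layout) for t in group) / len(group)
            for layout, group in assignment.items()}

# Hypothetical behavior: every tester taps the button only in layout "A".
rates = ab_test(list(range(50)), lambda t, layout: 1 if layout == "A" else 0)
# rates["A"] == 1.0, rates["B"] == 0.0
```

Random assignment matters here for the same reason it does in any experiment: it keeps tester characteristics from piling up in one layout group.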

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control, which may not be properly implemented. These errors could undermine the validity of the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations. Eliminating real-life variables can result in inaccurate conclusions. It may also tempt researchers to control the variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent manipulating the independent variables and waiting for their effect on the dependent variables to manifest.
  • It is expensive.
  • It can be risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient's death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Subjects may also introduce response bias into the results.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, for example, the subjects placed in 2 different environments are observed throughout the research. No matter what absurd behavior a subject exhibits during this period, their conditions will not be changed.

This may be very risky in medical cases because it may lead to death or a worsening medical condition.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.
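As a minimal illustration of simulation as a data collection method (a generic Monte Carlo sketch, not tied to any of the packages named above), the following Python snippet estimates pi by simulating random points in a square instead of measuring a physical circle:

```python
import random

def estimate_pi(trials, seed=42):
    """Monte Carlo simulation: sample random points in the unit square and
    count the fraction that falls inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(trials)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / trials

print(estimate_pi(100_000))  # close to 3.14159
```

The same pattern, replacing an expensive or dangerous real process with many cheap simulated trials, is what tools like Simulink or Simul8 do at a much larger scale.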

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subjects.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor (independent) variable. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. It is therefore more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect can be established in experimental research but not in non-experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research examines the cause-and-effect relationship between variables by comparing already existing groups that are affected differently by the independent variable.

For example, consider a study of how formal K-12 education affects child and teenage development. An experimental design would split the children into groups, some receiving formal K-12 education while others do not. This is not ethical, because every child has a right to education. So, instead, we would compare already existing groups of children who are receiving formal education with those who, due to their circumstances, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Experimental:   Strengths:  Strong evidence of causality through manipulation and random assignment.  Weaknesses:  Can be artificial, and random assignment is sometimes unethical or impractical.
  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, you are trying to establish the effect of heat on water, the temperature keeps changing (independent variable) and you see how it affects the water (dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.
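The distinction can be made concrete in code: a correlational analysis only computes how strongly two measured series move together, with no manipulation and no causal claim. Below is the standard Pearson correlation coefficient in Python; the heat and temperature readings are invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength and direction of the
    linear relationship between two variables (says nothing about cause)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

heat_applied = [10, 20, 30, 40, 50]         # invented readings
water_temp = [22, 31, 40, 49, 58]           # rises in step with the heat
print(pearson_r(heat_applied, water_temp))  # approximately 1.0
```

A coefficient near 1 or -1 indicates a strong linear relationship; only a controlled experiment can say whether one variable causes the other.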

Pros and Cons of Experimental vs Correlational Research

3. Experimental Research vs Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.

Pros and Cons of Experimental vs Descriptive Research

4. Experimental Research vs Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how long commutes affect workers' productivity. With experimental research, you would vary the length of the commute to see how the time affects work. But with action research, you would account for other factors such as weather, commute route, nutrition, etc. Also, experimental research helps you understand the relationship between commute time and productivity, while action research helps you look for ways to improve productivity.

Pros and Cons of Experimental vs Action Research

Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, subjects are randomly assigned to different treatments (i.e. levels of the independent variable manipulated by the researcher) and the results are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 

busayo.longe


Experimental Research

Experimental research is commonly used in sciences such as sociology and psychology, physics, chemistry, biology and medicine etc.

It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable.

The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:

  • There is time priority in a causal relationship ( cause precedes effect )
  • There is consistency in a causal relationship (a cause will always lead to the same effect)
  • The magnitude of the correlation is great.

(Reference: en.wikipedia.org)

The term "experimental research" has a range of definitions. In the strict sense, experimental research is what we call a true experiment.

This is an experiment where the researcher manipulates one variable, and control/randomizes the rest of the variables. It has a control group , the subjects have been randomly assigned between the groups, and the researcher only tests one effect at a time. It is also important to know what variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi experiment , is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall in between the strict and the wide definition.

A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.


Aims of Experimental Research

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation . Experimental research is important to society - it helps us to improve our everyday lives.


Identifying the Research Problem

After deciding the topic of interest, the researcher tries to define the research problem . This helps the researcher to focus on a narrower research area to be able to study it appropriately. Defining the research problem helps you to formulate a  research hypothesis , which is tested against the  null hypothesis .

The research problem is often operationalized , to define how to measure the research problem. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his/her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

Constructing the Experiment

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world, in the best possible way.

Sampling Groups to Study

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group , whilst others are tested under the experimental conditions.

Deciding the sample groups can be done using many different sampling techniques. Population samples may be chosen by a number of methods, such as randomization , "quasi-randomization" and pairing.

Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize the chance of random errors .

Here are some common sampling techniques :

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling

Creating the Design

The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design

  • Pretest-Posttest Design Check whether the groups are different before the manipulation starts, and measure the effect of the manipulation. Pretests sometimes influence the effect.
  • Control Group Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect . A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have 2 conditions, but rarely more than 3 conditions at the same time.
  • Randomized Controlled Trials Randomized sampling, comparison between an Experimental Group and a Control Group, and strict control/randomization of all other variables.
  • Solomon Four-Group Design With two control groups and two experimental groups. Half the groups have a pretest and half do not. This is to test both the effect itself and the effect of the pretest.
  • Between Subjects Design Grouping participants into different conditions.
  • Within Subject Design Participants take part in the different conditions - see also: Repeated Measures Design .
  • Counterbalanced Measures Design Testing the effect of the order of treatments when no control group is available/ethical.
  • Matched Subjects Design Matching participants to create similar experimental and control groups.
  • Double-Blind Experiment Neither the researcher nor the participants know which is the control group. The results can be affected if the researcher or participants know this.
  • Bayesian Probability Using Bayesian probability to "interact" with participants is a more "advanced" experimental design. It can be used for settings where there are many variables which are hard to isolate. The researcher starts with a set of initial beliefs, and tries to adjust them to how participants have responded.

Pilot Study

It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up right.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment.

If the experiment involves humans, a common strategy is to first run a pilot study with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s). Those two different pilots are likely to give the researcher good information about any problems in the experiment.

Conducting the Experiment

An experiment is typically carried out by manipulating a variable, called the independent variable , affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s) , is measured.

Identifying and controlling non-experimental factors which the researcher does not want to influence the effects, is crucial to drawing a valid conclusion. This is often done by controlling variables , if possible, or randomizing variables to minimize effects that can be traced back to third variables . Researchers only want to measure the effect of the independent variable(s) when conducting an experiment , allowing them to conclude that this was the reason for the effect.

Analysis and Conclusions

In quantitative research , the amount of data measured can be enormous. Data not prepared to be analyzed is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect in many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.
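Such a significance test can be illustrated with a standard Welch t statistic (the article does not prescribe a specific test, and the per-subject averages below are invented): each list holds one summary value, one line of output data, per subject.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for comparing the means of two groups of
    per-subject output data; a large absolute value suggests a real
    difference between the groups rather than random noise."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

experimental = [10.1, 11.4, 12.0, 13.3]  # invented per-subject averages
control = [1.2, 2.5, 3.1, 4.4]
print(welch_t(experimental, control))    # well above 2: likely a real effect
```

In practice the statistic is compared against a t distribution to obtain a p-value; the sketch only shows how raw per-subject summaries feed into the test.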

The aim of an analysis is to draw a conclusion , together with other observations. The researcher might generalize the results to a wider phenomenon, if there is no indication of confounding variables "polluting" the results.

If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Variables correlating are not proof that there is causation .

Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.

Examples of Experiments

This website contains many examples of experiments. Some are not true experiments , but involve some kind of manipulation to investigate a phenomenon. Others fulfill most or all criteria of true experiments.

Here are some examples of scientific experiments:

Social Psychology

  • Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
  • Asch Experiment - Will people conform to group behavior?
  • Stanford Prison Experiment - How do people react to roles? Will you behave differently?
  • Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior
Biology

  • Law Of Segregation - The Mendel Pea Plant Experiment
  • Transforming Principle - Griffith's Experiment about Genetics

Physics

  • Ben Franklin Kite Experiment - Struck by Lightning
  • J J Thomson Cathode Ray Experiment

Oskar Blakstad (Jul 10, 2008). Experimental Research. Retrieved Sep 14, 2024 from Explorable.com: https://explorable.com/experimental-research

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.

That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution).


Mastering Research: The Principles of Experimental Design

David Costello

In a world overflowing with information and data, how do we differentiate between mere observation and genuine knowledge? The answer lies in the realm of experimental design. At its core, experimental design is a structured method used to investigate the relationships between different variables. It's not merely about collecting data, but about ensuring that this data is reliable, valid, and can lead to meaningful conclusions.

The significance of a well-structured research process cannot be overstated. From medical studies determining the efficacy of a new drug, to businesses testing a new marketing strategy, or environmental scientists assessing the impact of climate change on a specific ecosystem – a robust experimental design serves as the backbone. Without it, we run the risk of drawing flawed conclusions or making decisions based on erroneous or biased information.

The beauty of experimental design is its universality. It's a tool that transcends disciplines, bringing rigor and credibility to investigations across fields. Whether you're in the world of biotechnology, finance, psychology, or countless other domains, understanding the tenets of experimental design will ensure that your inquiries are grounded in sound methodology, paving the way for discoveries that can shape industries and change lives.

  • Core principles
  • Types of experimental designs
  • Steps in designing an experiment
  • Pitfalls and challenges
  • Case studies
  • Tools and software
  • Future progress

How Experimental Design Has Evolved Over Time

Delving into the annals of scientific history, we find that experimental design, as a formalized discipline, is relatively young. However, the spirit of experimentation is ancient, woven deeply into the fabric of human curiosity. As early as Ancient Greece, rudimentary experimental methods were employed to understand natural phenomena. Yet, the structured approach we recognize today took centuries to develop.

The Renaissance era witnessed a surge in scientific curiosity and methodical investigation . This period marked a shift from reliance on anecdotal evidence and dogmatic beliefs to empirical observation. Notably, Sir Francis Bacon , during the early 17th century, championed the empirical method, emphasizing the need for systematic data collection and analysis.

But it was during the late 19th and early 20th centuries that the discipline truly began to crystallize. The burgeoning fields of psychology, agriculture, and biology demanded rigorous methods to validate their findings. The introduction of statistical methods and controlled experiments in agricultural research set a benchmark for research methodologies across various disciplines.

From its embryonic stages of simple observation to the sophisticated, statistically driven methodologies of today, experimental design has been shaped by the demands of the times and the relentless pursuit of truth by generations of researchers. It has evolved from mere intuition-based inquiries to a framework of control, randomization, and replication, ensuring that our conclusions stand up to the strictest scrutiny.

Key figures and their contributions

When charting the evolution of experimental design, certain luminaries stand tall, casting long shadows of influence that still shape the field today. Let's delve into a few of these groundbreaking figures:

  • Ronald A. Fisher. Contribution: Often heralded as the father of modern statistics, Fisher introduced many concepts that form the backbone of experimental design. His work in the 1920s and 1930s laid the groundwork for the design of experiments. Legacy: Fisher's introduction of the randomized controlled trial, analysis of variance ( ANOVA ), and the principle of maximum likelihood estimation revolutionized statistics and experimental methodology. His book, The Design of Experiments , remains a classic reference in the field.
  • Karl Pearson. Contribution: A prolific figure in the world of statistics, Pearson developed the method of moments , laying the foundation for many statistical tests. Legacy: Pearson's chi-squared test is one of the many techniques he introduced, which researchers still widely use today to test the independence of categorical variables.
  • Jerzy Neyman and Egon Pearson. Contribution: Together, they conceptualized the framework for the theory of hypothesis testing , which is a staple in modern experimental design. Legacy: Their delineation of Type I and Type II errors and the introduction of confidence intervals have become fundamental concepts in statistical inference.
  • Florence Nightingale. Contribution: While better known as a nursing pioneer, Nightingale was also a gifted statistician. She employed statistics and well-designed charts to advocate for better medical practices and hygiene during the Crimean War . Legacy: Nightingale's application of statistical methods to health underscores the importance of data in decision-making processes and set a precedent for evidence-based health policies.
  • George E. P. Box. Contribution: Box made significant strides in the areas of quality control and time series analysis. Legacy: The Box-Jenkins (or ARIMA) model for time series forecasting and the Box-Behnken designs for response surface methodology are testaments to his lasting influence in both experimental design and statistical forecasting.

These trailblazers, among many others, transformed experimental design from a nascent field of inquiry into a robust and mature discipline. Their innovations continue to guide researchers and inform methodologies, bridging the gap between curiosity and concrete understanding.

Randomization: ensuring each subject has an equal chance of being in any group

Randomization is the practice of allocating subjects or experimental units to different groups or conditions entirely by chance. This means each participant, or experimental unit, has an equal likelihood of being assigned to any specific group or condition.

Why is this method of assignment held in such high regard, and why is it so fundamental to the research process? Let's delve into the pivotal role randomization plays and its overarching importance in maintaining the rigor of experimental endeavors.

  • Eliminating Bias: By allocating subjects randomly, we prevent any unintentional bias in group assignments. This ensures that the groups are more likely to be comparable in all major respects. Without randomization, researchers might, even inadvertently, assign certain types of participants to one group over another, leading to skewed results.
  • Balancing Unknown Factors: There are always lurking variables that researchers might be unaware of or unable to control. Randomization helps ensure that these unobserved or uncontrolled variables are distributed evenly across groups, so that no single group is systematically burdened by factors the researchers never measured.
  • Foundation for Statistical Analysis: Randomization is the bedrock upon which much of statistical inference is built. It allows researchers to make probabilistic statements about the outcomes of their studies. Without randomization, many of the statistical tools employed in analyzing experimental results would be inappropriate or invalid.
  • Enhancing External Validity: A randomized study increases the chances that the results are generalizable to a broader population. Because group membership is determined by chance rather than by participant characteristics, findings from a representative sample can often be extrapolated to similar groups outside the study.

While randomization is a powerful tool, it's not without its challenges. For instance, in smaller samples, randomization might not always guarantee perfectly balanced groups. Moreover, in some contexts, like when studying the effects of a surgical technique, randomization might be ethically challenging.

Nevertheless, in the grand scheme of experimental design, randomization remains a gold standard. It's a bulwark against biases, both known and unknown, ensuring that research conclusions are drawn from a foundation of fairness and rigor.
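To make the idea concrete, here is a minimal sketch of simple randomization using Python's standard library (the subject labels, group count, and function name are purely illustrative):

```python
import random

def randomize(subjects, n_groups, seed=None):
    """Shuffle subjects and deal them into n_groups of (near-)equal size."""
    rng = random.Random(seed)   # seeded only so the example is reproducible
    pool = list(subjects)
    rng.shuffle(pool)           # every ordering is equally likely
    # Deal round-robin so group sizes differ by at most one.
    return [pool[i::n_groups] for i in range(n_groups)]

subjects = [f"S{i:02d}" for i in range(1, 21)]        # 20 hypothetical subjects
treatment, control = randomize(subjects, 2, seed=42)
print(len(treatment), len(control))                   # 10 10
```

Because assignment depends only on the shuffle, no characteristic of a subject can influence which group they land in, which is exactly the property the bullets above describe.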

Replication: repeating the experiment to ensure results are consistent

At its essence, replication involves conducting an experiment again, under the same conditions, to verify its results. It's like double-checking your math on a complex equation—reassuring yourself and others that the outcome is consistent and not just a random occurrence or due to unforeseen errors.

So, what makes this practice of repetition so indispensable to the research realm? Let's delve deeper into the role replication plays in solidifying and authenticating scientific insights.

  • Verifying Results: Even with the most rigorous experimental designs, errors can creep in, or unusual random events can skew results. Replicating an experiment helps confirm that the findings are genuine and not a result of such anomalies.
  • Reducing Uncertainty: Every experiment comes with a degree of uncertainty. By replicating the study, this uncertainty can be reduced, providing a clearer picture of the phenomenon under investigation.
  • Uncovering Variability: Results can vary due to numerous reasons—slight differences in conditions, experimental materials, or even the subjects themselves. Replication can help identify and quantify this variability, lending more depth to the understanding of results.
  • Building Scientific Consensus: Replication is fundamental in building trust within the scientific community. When multiple researchers, possibly across different labs or even countries, reproduce the same results, it strengthens the validity of the findings.
  • Enhancing Generalizability: Repeated experiments, especially when performed in different locations or with diverse groups, can ensure that the results apply more broadly and are not confined to specific conditions or populations.

While replication is a robust tool in the researcher's arsenal, it isn't always straightforward. Sometimes, especially in fields like psychology or medicine, replicating the exact conditions of the original study can be challenging. Furthermore, in our age of rapid publication, there might be a bias towards novel findings rather than repeated studies, potentially undervaluing the importance of replication.

In conclusion, replication stands as a sentinel of validity in experimental design. While one experiment can shed light on a phenomenon, it's the repeated and consistent results that truly illuminate our understanding, ensuring that what we believe is based not on fleeting chance but on reliable and consistent evidence.
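The points above can be illustrated with a small, purely hypothetical simulation: each simulated "experiment" estimates a treatment effect from noisy data, and repeating it many times shows both the pooled estimate and the run-to-run variability (every number below is invented for illustration):

```python
import random
import statistics

def run_experiment(true_effect=2.0, n=30, rng=None):
    """One simulated experiment: estimate the effect from noisy measurements."""
    rng = rng or random.Random()
    control = [rng.gauss(10.0, 3.0) for _ in range(n)]
    treated = [rng.gauss(10.0 + true_effect, 3.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

rng = random.Random(7)
estimates = [run_experiment(rng=rng) for _ in range(50)]   # 50 replications

print(round(statistics.mean(estimates), 2))   # pooled estimate, near the true effect
print(round(statistics.stdev(estimates), 2))  # variability across replications
```

A single run can land well away from the true effect; averaging over replications both verifies the result and quantifies the uncertainty around it.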

Control: keeping other variables constant while testing the variable of interest

In its simplest form, control means keeping all factors and conditions, save for the variable being studied, consistent and unchanged. It's akin to setting a stage where everything remains static, allowing the spotlight to shine solely on the lead actor: our variable of interest.

What exactly elevates this principle to such a paramount position in the scientific realm? Let's unpack the fundamental reasons that underscore the indispensability of control in experimental design.

  • Isolating the Variable of Interest: With numerous factors potentially influencing an experiment, it's crucial to ensure that the observed effects result solely from the variable being studied. Control aids in achieving this isolation, ensuring that extraneous variables don't cloud the results.
  • Eliminating Confounding Effects: Without proper control, other variables might interact with the variable of interest, leading to misleading or confounded outcomes. By keeping everything else constant, control ensures the purity of results.
  • Enhancing the Credibility of Results: When an experiment is well-controlled, its results become more trustworthy. It demonstrates that the researcher has accounted for potential disturbances, leading to a more precise understanding of the relationship between variables.
  • Facilitating Replication: A well-controlled experiment provides a consistent framework, making it easier for other researchers to replicate the study and validate its findings.
  • Aiding in Comparisons: By ensuring that all other variables remain constant, control allows for a clearer comparison between different experimental groups or conditions.

Maintaining strict control is not always feasible, especially in field experiments or when dealing with complex systems. In such cases, researchers often rely on statistical controls or randomization to account for the influence of extraneous variables.

In the grand tapestry of experimental research, control serves as the stabilizing thread, ensuring that the patterns we observe are genuine reflections of the variable under scrutiny. It's a testament to the meticulous nature of scientific inquiry, underscoring the need for precision and care in every step of the experimental journey.

Completely randomized design

The Completely Randomized Design (CRD) is an experimental setup where all the experimental units (e.g., participants, plants, animals) are allocated to different groups entirely by chance. There's no stratification, clustering, or blocking. In essence, every unit has an equal opportunity to be assigned to any group.

Here are the advantages that make it a favored choice for many researchers:

  • Simplicity: CRD is easy to understand and implement, making it suitable for experiments where the primary goal is to compare the effects of different conditions or interventions without considering other complicating factors.
  • Flexibility: Since the only criterion is random assignment, CRD can be employed in various experimental scenarios, irrespective of the number of conditions or experimental units.
  • Statistical Robustness: Due to its random nature, the CRD is amenable to many statistical analyses. When the assumptions of independence, normality, and equal variances are met, CRD allows for straightforward application of techniques like ANOVA to discern the effects of different conditions.

However, like any tool in the research toolkit, the Completely Randomized Design doesn't come without its caveats. It's crucial to acknowledge the limitations and considerations that accompany CRD, ensuring that its application is both judicious and informed.

  • Efficiency: In situations where there are recognizable subgroups or blocks within the experimental units, a CRD might not be the most efficient design. Variability within blocks could overshadow the effects of different conditions.
  • Environmental Factors: If the experimental units are spread across different environments or conditions, these uncontrolled variations might confound the effects being studied, leading to less precise or even misleading conclusions.
  • Size: In cases where the sample size is small, the sheer randomness of CRD might result in uneven group sizes, potentially reducing the power of the study.

The Completely Randomized Design stands as a testament to the power of randomness in experimental research. While it might not be the best fit for every scenario, especially when there are known sources of variability, it offers a robust and straightforward approach for many research questions. As with all experimental designs, the key is to understand its strengths and limitations, applying it judiciously based on the specifics of the research at hand.
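A sketch of what a CRD might look like in code, followed by a hand-rolled one-way ANOVA F statistic; the conditions, effect sizes, and sample counts are hypothetical:

```python
import random
import statistics

rng = random.Random(0)
units = list(range(24))                 # 24 hypothetical experimental units
rng.shuffle(units)                      # complete randomization: no blocking
conditions = {"A": units[0:8], "B": units[8:16], "C": units[16:24]}

# Simulated responses: condition "C" is given a higher true mean.
effect = {"A": 0.0, "B": 0.0, "C": 3.0}
data = {c: [rng.gauss(10 + effect[c], 2) for _ in ids]
        for c, ids in conditions.items()}

def one_way_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_values = [x for g in groups for x in g]
    grand = statistics.mean(all_values)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

print(round(one_way_f(list(data.values())), 2))  # large F suggests a real difference
```

In practice one would use a statistics library and check the usual ANOVA assumptions, but the sketch shows why CRD pairs so naturally with this analysis: randomization alone justifies treating the groups as exchangeable.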

Randomized block design

The Randomized Block Design (RBD) is an experimental configuration where units are first divided into blocks or groups based on some inherent characteristic or source of variability. Within these blocks, units are then randomly assigned to different conditions or categories. Essentially, it's a two-step process: first, grouping similar units, and then, randomizing assignments within these groups.

Here are the positive attributes of the Randomized Block Design that underscore its value in experimental research:

  • Control Over Variability: By grouping similar experimental units into blocks, RBD effectively reduces the variability that might otherwise confound the results. This enhances the experiment's power and precision.
  • More Accurate Comparisons: Since conditions are randomized within blocks of similar units, comparisons between different effects become more accurate and meaningful.
  • Flexibility: RBD can be employed in scenarios with any number of conditions and blocks. Its flexible nature makes it suitable for diverse experimental needs.

While the merits of the Randomized Block Design are widely recognized, understanding its potential limitations and considerations is paramount to ensure that research outcomes are both insightful and grounded in reality:

  • Complexity: Designing and analyzing an RBD can be more complex than simpler designs like CRD. It requires careful consideration of how to define blocks and how to randomize conditions within them.
  • Assumption of Homogeneity: RBD assumes that the variability within blocks is less than the variability between them. If this assumption is violated, the design might lose its efficiency.
  • Increased Sample Size: To maintain power, RBD might necessitate a larger sample size, especially if there are numerous blocks.

The Randomized Block Design stands as an exemplary method to combine the best of both worlds: the robustness of randomization and the sensitivity to inherent variability. While it might demand more meticulous planning and design, its capacity to deliver more refined insights makes it a valuable tool in the realm of experimental research.
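The two-step logic of blocking and then randomizing within blocks can be sketched as follows; the age bands and treatment names are illustrative assumptions:

```python
import random
from collections import defaultdict

rng = random.Random(1)

# Hypothetical subjects with a blocking characteristic (an age band).
subjects = [{"id": i, "age_band": band}
            for i, band in enumerate(["young", "middle", "older"] * 4)]

# Step 1: group subjects into blocks by the characteristic.
blocks = defaultdict(list)
for s in subjects:
    blocks[s["age_band"]].append(s)

# Step 2: randomize treatment assignment *within* each block.
treatments = ["control", "drug"]
assignment = {}
for band, members in blocks.items():
    rng.shuffle(members)
    for i, s in enumerate(members):
        assignment[s["id"]] = treatments[i % len(treatments)]

# Each block of 4 contributes 2 subjects to each arm.
print(sorted(assignment.values()).count("drug"))   # 6
```

Compare this with the completely randomized sketch: here every age band is guaranteed to be split evenly between the arms, which is precisely the variability-control advantage described above.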

Factorial design

A factorial design is an experimental setup where two or more independent variables, or factors, are simultaneously tested, not only for their individual effects but also for their combined or interactive effects. If you imagine an experiment where two factors are varied at two levels each, you would have a 2x2 factorial design, resulting in four unique experimental conditions.

Here are the advantages you should consider regarding this methodology:

  • Efficiency: Instead of conducting separate experiments for each factor, researchers can study multiple factors in a single experiment, conserving resources and time.
  • Comprehensive Insights: Factorial designs allow for the exploration of interactions between factors. This is crucial because in real-world situations, factors often don't operate in isolation.
  • Generalizability: By varying multiple factors simultaneously, the results tend to be more generalizable across a broader range of conditions.
  • Optimization: By revealing how factors interact, factorial designs can guide practitioners in optimizing conditions for desired outcomes.

No methodology is without its nuances, and while factorial designs boast numerous strengths, they come with their own set of limitations and considerations:

  • Complexity: As the number of factors or levels increases, the design can become complex, demanding more experimental units and potentially complicating data analysis.
  • Potential for Confounding: If not carefully designed, there's a risk that effects from one factor might be mistakenly attributed to another, especially in higher-order factorial designs.
  • Resource Intensive: While factorial designs can be efficient, they can also become resource-intensive as the number of conditions grows.

The factorial design stands out as an essential tool for researchers aiming to delve deep into the intricacies of multiple factors and their interactions. While it requires meticulous planning and interpretation, its capacity to provide a holistic understanding of complex scenarios renders it invaluable in experimental research.
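Enumerating the conditions of a small factorial design is mechanical: the full design is the Cartesian product of the factor levels. A sketch with two hypothetical factors at two levels each:

```python
import itertools

# Two hypothetical factors at two levels each -> a 2x2 factorial design.
factors = {
    "dose": ["low", "high"],
    "timing": ["morning", "evening"],
}

# The full design crosses every level of every factor.
conditions = list(itertools.product(*factors.values()))
print(len(conditions))   # 4 unique conditions
for cond in conditions:
    print(dict(zip(factors, cond)))
```

Adding a third two-level factor would double the count to 8, which is why the condition count (and resource demand) grows so quickly in higher-order designs.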

Matched pair design

A Matched Pair Design, also known simply as a paired design, is an experimental setup where participants are grouped into pairs based on one or more matching criteria, often a specific characteristic or trait. Once matched, one member of each pair is subjected to one condition while the other experiences a different condition or control. This design is particularly powerful when comparing just two conditions, as it reduces the variability between subjects.

As we explore the advantages of this design, it becomes evident why it's often the methodology of choice for certain investigative contexts:

  • Control Over Variability: By matching participants based on certain criteria, this design controls for variability due to those criteria, thereby increasing the experiment's sensitivity and reducing error.
  • Efficiency: With a paired approach, fewer subjects may be required compared to completely randomized designs, potentially making the study more time and resource-efficient.
  • Direct Comparisons: The design facilitates direct comparisons between conditions, as each pair acts as its own control.

As with any research methodology, the Matched Pair Design, despite its distinct advantages, comes with inherent limitations and critical considerations:

  • Matching Complexity: The process of matching participants can be complicated, demanding meticulous planning and potentially excluding subjects who don't fit pairing criteria.
  • Not Suitable for Multiple Conditions: This design is most effective when comparing two conditions. When there are more than two conditions to compare, other designs might be more appropriate.
  • Potential Dependency Issues: Since participants are paired, statistical analyses must account for potential dependencies between paired observations.

The Matched Pair Design stands as a great tool for experiments where controlling for specific characteristics is crucial. Its emphasis on paired precision can lead to more reliable results, but its effective implementation requires careful consideration of the matching criteria and statistical analyses. As with all designs, understanding its nuances is key to leveraging its strengths and mitigating potential challenges.
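One way to sketch the match-then-randomize logic, using a hypothetical baseline score as the matching criterion and an invented treatment effect:

```python
import random
import statistics

rng = random.Random(3)

# Hypothetical participants with a matching criterion (a baseline score).
participants = [{"id": i, "baseline": rng.gauss(50, 10)} for i in range(12)]

# Match: rank by the criterion and pair adjacent participants.
ranked = sorted(participants, key=lambda p: p["baseline"])
pairs = [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked), 2)]

# Within each pair, randomly assign one member to each condition.
diffs = []
for a, b in pairs:
    treated, control = rng.sample([a, b], 2)
    # Simulated outcomes: the treatment adds a hypothetical +4 on average.
    t_out = treated["baseline"] + 4 + rng.gauss(0, 2)
    c_out = control["baseline"] + rng.gauss(0, 2)
    diffs.append(t_out - c_out)

# Each pair acts as its own control: analyze the paired differences.
print(round(statistics.mean(diffs), 1))   # paired estimate of the effect
```

Note the dependency issue mentioned above: the analysis works on within-pair differences (as a paired t-test would), never on the two arms as if they were independent samples.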

Covariate design

A Covariate Design, also known as Analysis of Covariance (ANCOVA), is an experimental approach wherein the main effects of certain independent variables, as well as the effect of one or more covariates, are considered. Covariates are typically variables that are not of primary interest to the researcher but may influence the outcome variable. By including these covariates in the analysis, researchers can control for their effect, providing a clearer picture of the relationship between the primary independent variables and the outcome.

While many designs aim for clarity by isolating variables, the Covariate Design embraces and controls for the intricacies, presenting a series of compelling advantages. As we unpack these benefits, the appeal of incorporating covariates into experimental research becomes increasingly evident:

  • Increased Precision: By controlling for covariates, this design can lead to more precise estimates of the main effects of interest.
  • Efficiency: Including covariates can help explain more of the variability in the outcome, potentially leading to more statistically powerful results with smaller sample sizes.
  • Flexibility: The design offers the flexibility to account for and control multiple extraneous factors, allowing for more comprehensive analyses.

Every research approach, no matter how robust, comes with its own set of challenges and nuances. The Covariate Design is no exception to this rule:

  • Assumption Testing: Covariate Design requires certain assumptions to be met, such as linearity and homogeneity of regression slopes, which, if violated, can lead to misleading results.
  • Complexity: Incorporating covariates adds complexity to the experimental setup and the subsequent statistical analysis.
  • Risk of Overadjustment: If not chosen judiciously, covariates can lead to overadjustment, potentially masking true effects or leading to spurious findings.

The Covariate Design stands out for its ability to refine experimental results by accounting for potential confounding factors. This heightened precision, however, demands a keen understanding of the design's assumptions and the intricacies involved in its implementation. It serves as a powerful option in the researcher's arsenal, provided its complexities are navigated with knowledge and care.
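The adjustment idea can be sketched without a statistics library: estimate the pooled within-group slope of the outcome on the covariate, then compare group means shifted to a common covariate value. All effect sizes, variable names, and ranges below are invented for illustration:

```python
import random
import statistics

rng = random.Random(5)

# Simulated study: the outcome depends on the treatment AND a covariate (age).
def make_group(n, treatment_effect, age_range):
    ages = [rng.uniform(*age_range) for _ in range(n)]
    ys = [20 + treatment_effect + 0.5 * a + rng.gauss(0, 2) for a in ages]
    return ages, ys

x1, y1 = make_group(30, 0.0, (20, 40))   # control group, younger on average
x2, y2 = make_group(30, 5.0, (30, 50))   # treated group, older on average

def adjusted_means(groups):
    """ANCOVA-style adjustment via the pooled within-group covariate slope."""
    sxy = sxx = 0.0
    for x, y in groups:
        mx, my = statistics.mean(x), statistics.mean(y)
        sxy += sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx += sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                                    # pooled slope estimate
    grand_x = statistics.mean([xi for x, _ in groups for xi in x])
    # Shift each group mean to what it would be at the grand covariate mean.
    return [statistics.mean(y) - b * (statistics.mean(x) - grand_x)
            for x, y in groups]

raw_gap = statistics.mean(y2) - statistics.mean(y1)
adj = adjusted_means([(x1, y1), (x2, y2)])
print(round(raw_gap, 1))          # inflated by the age difference
print(round(adj[1] - adj[0], 1))  # closer to the simulated treatment effect
```

The raw group difference mixes the treatment effect with the age imbalance; the adjusted difference removes the covariate's contribution, which is the precision gain the bullets above describe.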

Designing an experiment requires careful planning, an understanding of the underlying scientific principles, and a keen attention to detail. The essence of a well-designed experiment lies in ensuring both the integrity of the research and the validity of the results it yields. The experimental design acts as the backbone of the research, laying the foundation upon which meaningful conclusions can be drawn. Given the importance of this phase, it's paramount for researchers to approach it methodically. To assist in this experimental setup, here's a step-by-step guide to help you navigate this crucial task with precision and clarity.

  • Identify the Research Question or Hypothesis: Before delving into the experimental process, it's crucial to have a clear understanding of what you're trying to investigate. This begins with defining a specific research question or formulating a hypothesis that predicts the outcome of your study. A well-defined research question or hypothesis serves as the foundation for the entire experimental process.
  • Choose the Appropriate Experimental Design: Depending on the nature of your research question and the specifics of your study, you'll need to choose the most suitable experimental design. Whether it's a Completely Randomized Design, a Randomized Block Design, or any other setup, your choice will influence how you conduct the experiment and analyze the data.
  • Select the Subjects/Participants: Determine who or what will be the subjects of your study. This could range from human participants to animal models or even plants, depending on your field of study. It's vital to ensure that the selected subjects are representative of the larger population you aim to generalize to.
  • Allocate Subjects to Different Groups: Once you've chosen your participants, you'll need to decide how to allocate them to different experimental groups. This could involve random assignment or other methodologies, ensuring that each group is comparable and that the effects of confounding variables are minimized.
  • Implement the Experiment and Gather Data: With everything in place, conduct the experiment according to your chosen design. This involves exposing each group to the relevant conditions and then gathering data based on the outcomes you're measuring.
  • Analyze the Data: Once you've collected your data, it's time to dive into the numbers. Using statistical tools and techniques, analyze the data to determine whether there are significant differences between your groups, and if your hypothesis is supported.
  • Interpret the Results and Draw Conclusions: Data analysis will provide you with statistical outcomes, but it's up to you to interpret what these numbers mean in the context of your research question. Draw conclusions based on your findings, and consider their implications for your field and future research endeavors.

By following these steps, you can ensure a structured and systematic approach to your experimental research, paving the way for insightful and valid results.

Confounding variables: external factors that might influence the outcome

One of the most common challenges faced in experimental design is the presence of confounding variables. These are external factors that unintentionally vary along with the factor you are investigating, potentially influencing the outcome of the experiment. The danger of confounding variables lies in their ability to provide alternative explanations for any observed effect, thereby muddying the waters of your results.

For instance, if you were investigating the effect of a new drug on blood pressure and failed to control for factors like caffeine intake or stress levels, you might mistakenly attribute changes in blood pressure to the drug when they were actually caused by these other uncontrolled factors.

Properly identifying and controlling for confounding variables is essential. Failure to do so can lead to false conclusions and misinterpretations of data. Addressing them either through the experimental design itself, like by using randomization or matched groups, or in the analysis phase, such as through statistical controls, ensures that the observed effects can be confidently attributed to the variable or condition being studied rather than to extraneous influences.

External validity: making sure results can be generalized to broader contexts

A paramount challenge in experimental design is guaranteeing external validity. This concept refers to the degree to which the findings of a study can be generalized to settings, populations, times, and measures different from those specifically used in the study.

The dilemma often arises in highly controlled environments, such as laboratories. While these settings allow for precise conditions and minimized confounding variables, they might not always reflect real-world scenarios. For instance, a study might find a specific teaching method effective in a quiet, one-on-one setting. However, if that same method doesn't perform as well in a busy classroom with 30 students, the study's external validity becomes questionable.

For researchers, the challenge is to strike a balance. While controlling for potential confounding variables is paramount, it's equally crucial to ensure the experimental conditions maintain a certain degree of real-world relevance. To enhance external validity, researchers may use strategies such as diversifying participant pools, varying experimental conditions, or even conducting field experiments. Regardless of the approach, the ultimate goal remains: to ensure the experiment's findings can be meaningfully applied in broader, real-world contexts.

Ethical considerations: ensuring the safety and rights of participants

Any experimental design undertaking must prioritize the well-being, dignity, and rights of participants. Upholding these values not only ensures the moral integrity of any study but also is crucial in ensuring the reliability and validity of the research.

All participants, whether human or animal, are entitled to respect and their safety should never be placed in jeopardy. For human subjects, it's imperative that they are adequately briefed about the research aims, potential risks, and benefits. This highlights the significance of informed consent, a process where participants acknowledge their comprehension of the study and willingly agree to participate.

Beyond the initiation of the experiment, ethical considerations continue to play a pivotal role. It's vital to maintain the privacy and confidentiality of the participants, ensuring that the collected data doesn't lead to harm or stigmatization. Extra caution is needed when experiments involve vulnerable groups, such as children or the elderly. Furthermore, researchers should be equipped to offer necessary support or point towards professional help should participants experience distress because of the experimental procedures. It's worth noting that many research institutions have ethical review boards to ensure all experiments uphold these principles, fortifying the credibility and authenticity of the research process.

The Stanford Prison Experiment (1971)

The Stanford Prison Experiment, conducted in 1971 by psychologist Philip Zimbardo at Stanford University, stands as one of the most infamous studies in the annals of psychology. The primary objective of the experiment was to investigate the inherent psychological mechanisms and behaviors that emerge when individuals are placed in positions of power and subordination. To this end, volunteer participants were randomly assigned to roles of either prison guards or inmates in a simulated prison environment.

Zimbardo's design sought to create an immersive environment, ensuring that participants genuinely felt the dynamics of their assigned roles. The mock prison was set up in the basement of Stanford's psychology building, complete with cells and guard quarters. Participants assigned to the role of guards were provided with uniforms, batons, and mirrored sunglasses to prevent eye contact. Those assigned as prisoners wore smocks and stocking caps, emphasizing their status. To enhance the realism, the "prisoners" were subjected to unannounced arrests at their homes by the local police department. Throughout the experiment, no physical violence was permitted; however, the guards were allowed to establish their own rules to maintain order and ensure the prisoners attended the daily counts.

Scheduled to run for two weeks, the experiment was terminated after only six days due to the extreme behavioral transformations observed. The guards rapidly became authoritarian, implementing degrading and abusive strategies to maintain control. In contrast, the prisoners exhibited signs of intense emotional distress, and some even demonstrated symptoms of depression. Zimbardo himself became deeply involved, initially overlooking the adverse effects on the participants. The study's findings highlighted the profound impact that situational dynamics and perceived roles can have on behavior. While it was severely criticized for ethical concerns, it underscored the depths to which human behavior could conform to assigned roles, leading to significant discussions on the ethics of research and the power dynamics inherent in institutional settings.

The Stanford Prison Experiment is particularly relevant to experimental design for these reasons:

  • Control vs. Realism: One of the challenging dilemmas in experimental design is striking a balance between controlling variables and maintaining ecological validity (how experimental conditions mimic real-world situations). Zimbardo's study attempted to create a highly controlled environment with the mock prison but also sought to maintain a sense of realism by arresting participants at their homes and immersing them in their roles. The consequences of this design, however, were unforeseen and extreme behavioral transformations.
  • Ethical Considerations: A cornerstone of experimental design involves ensuring the safety, rights, and well-being of participants. The Stanford Prison Experiment is often cited as an example of what can go wrong when these principles are not rigorously adhered to. The psychological distress faced by participants wasn't anticipated in the original design and wasn't adequately addressed during its execution. This oversight emphasizes the critical importance of periodic assessment of participants' well-being and the flexibility to adapt or terminate the study if adverse effects arise.
  • Role of the Researcher: Zimbardo's involvement and the manner in which he became part of the experiment highlight the potential biases and impacts a researcher can have on an experiment's outcome. In experimental design, it's crucial to consider the researcher's role and minimize any potential interference or influence they might have on the study's results.
  • Interpretation of Results: The aftermath of the experiment brought forth critical discussions on how results are interpreted and presented. It emphasized the importance of considering external influences, participant expectations, and other confounding variables when deriving conclusions from experimental data.

In essence, the Stanford Prison Experiment serves as a cautionary tale in experimental design. It underscores the importance of ethical considerations, participant safety, the potential pitfalls of high realism without safeguards, and the unintended consequences that can emerge even in well-planned experiments.

Meselson-Stahl Experiment (1958)

The Meselson-Stahl Experiment, conducted in 1958 by biologists Matthew Meselson and Franklin Stahl, holds a significant place in molecular biology. The duo set out to determine the mechanism by which DNA replicates, aiming to understand if it follows a conservative, semi-conservative, or dispersive model.

Utilizing Escherichia coli (E. coli) bacteria, Meselson and Stahl grew cultures in a medium containing a heavy isotope of nitrogen, ¹⁵N, allowing the bacteria's DNA to incorporate this heavy isotope. Subsequently, they transferred the bacteria to a medium with the more common ¹⁴N isotope and allowed it to replicate. By using ultracentrifugation, they separated DNA based on density, expecting distinct bands on a gradient depending on the replication model.

The observed patterns over successive bacterial generations revealed a single band that shifted from the heavy to light position, supporting the semi-conservative replication model. This meant that during DNA replication, each of the two strands of a DNA molecule serves as a template for a new strand, leading to two identical daughter molecules. The experiment's elegant design and conclusive results provided pivotal evidence for the molecular mechanism of DNA replication, reshaping our understanding of genetic continuity.

The Meselson-Stahl Experiment is particularly relevant to experimental design for these reasons:

  • Innovative Techniques: The use of isotopic labeling and density gradient ultracentrifugation was pioneering, showcasing the importance of utilizing and even developing novel techniques tailored to address specific scientific questions.
  • Controlled Variables: By methodically controlling the growth environment and the nitrogen sources, Meselson and Stahl ensured that any observed differences in DNA density were due to the replication mechanism itself, and not extraneous factors.
  • Direct Comparison: The experiment design allowed for direct comparison between the expected results of different replication models and the actual observed outcomes, facilitating a clear and decisive conclusion.
  • Clarity in Hypothesis: The researchers had clear expectations for the results of each potential replication model, which helped in accurately interpreting the outcomes.

Reflecting on the Meselson-Stahl Experiment, it serves as an exemplar in experimental biology. Their meticulous approach, combined with innovative techniques, answered a fundamental biological question with clarity. This experiment not only resolved a significant debate in molecular biology but also showcased the power of well-designed experimental methods in revealing nature's intricate processes.

The Hawthorne Studies (1920s-1930s)

The Hawthorne Studies , conducted between the 1920s and 1930s at Western Electric's Hawthorne plant in Chicago, represent a pivotal shift in organizational and industrial psychology. Initially intended to study the relationship between lighting conditions and worker productivity, the research evolved into a broader investigation of the various factors influencing worker output and morale. These studies have since shaped our understanding of human relations and the socio-psychological aspects of the workplace.

The Hawthorne Studies comprised several experiments, but the most notable were the "relay assembly tests" and the "bank wiring room studies." In the relay assembly tests, researchers made various manipulations to the working conditions of a small group of female workers, such as altering light levels, giving rest breaks, and changing the length of the workday. The intent was to identify which conditions led to the highest levels of productivity. The bank wiring room studies, in contrast, were observational in nature. Here, the researchers aimed to understand the group dynamics and social structures that emerged among male workers, without any experimental manipulations.

Surprisingly, in the relay assembly tests, almost every change—whether it was an improvement or a return to original conditions—led to increased worker productivity. Even when conditions were reverted to their initial state, worker output remained higher than before. This puzzling phenomenon led researchers to speculate that the mere act of being observed and the knowledge that one's performance was being monitored led to increased effort and productivity, a phenomenon now referred to as the Hawthorne Effect . The bank wiring room studies, on the other hand, shed light on how informal group norms and social relations could influence individual productivity, often more significantly than monetary incentives.

These studies challenged the then-dominant scientific management approach, which viewed workers primarily as mechanical entities whose productivity could be optimized through physical and environmental adjustments. Instead, the Hawthorne Studies highlighted the importance of psychological and social factors in the workplace, laying the foundation for the human relations movement in organizational management.

The Hawthorne Studies are particularly relevant to experimental design for these reasons:

  • Observer Effect: The Hawthorne Studies introduced the idea that the mere act of observation could alter participants' behavior. This has significant implications for experimental design, emphasizing the need to account for and minimize observer-induced changes in behavior.
  • Complexity of Human Behavior: While the initial focus was on physical conditions (like lighting), the results demonstrated that human behavior and performance are influenced by a myriad of interrelated factors. This underscores the importance of considering psychological, social, and environmental variables when designing experiments.
  • Unintended Outcomes: The unintended discovery of the Hawthorne Effect exemplifies that experimental outcomes can sometimes diverge from initial expectations. Researchers should remain open to such unexpected findings, as they can lead to new insights and directions.
  • Evolution of Experimental Focus: The shift from purely environmental manipulations to observational studies in the Hawthorne research highlights the flexibility required in experimental design. As new findings emerge, it's crucial for researchers to adapt their methodologies to better address evolving research questions.

In summary, the Hawthorne Studies serve as a testament to the evolving nature of experimental research and the profound effects that observation, social dynamics, and psychological factors can have on outcomes. They highlight the importance of adaptability, holistic understanding, and the acknowledgment of unexpected results in the realm of experimental design.

Michelson-Morley Experiment (1887)

The Michelson-Morley Experiment, conducted in 1887 by physicists Albert A. Michelson and Edward W. Morley, is considered one of the foundational experiments in the world of physics. The primary aim was to detect the relative motion of matter through the hypothetical luminiferous aether, a medium through which light was believed to propagate.

Michelson and Morley designed an apparatus known as the interferometer . This device split a beam of light so that it traveled in two perpendicular directions. After reflecting off mirrors, the two beams would recombine, and any interference patterns observed would indicate differences in their travel times. If the aether wind existed, the Earth's motion through the aether would cause such an interference pattern. The experiment was conducted at different times of the year, considering Earth's motion around the sun might influence the results.
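
The size of the effect the aether theory predicted can be estimated with a back-of-envelope calculation: the expected fringe shift is roughly n ≈ 2Lv²/(λc²). The numbers below are illustrative values close to the 1887 apparatus (an effective arm length of about 11 m was achieved through multiple reflections):

```python
# Fringe shift predicted by a stationary-aether model: n ≈ 2 L v^2 / (lam * c^2)
L = 11.0      # m, effective interferometer arm length (illustrative)
v = 3.0e4     # m/s, Earth's orbital speed around the Sun
lam = 5.5e-7  # m, wavelength of the light used
c = 3.0e8     # m/s, speed of light

n = 2 * L * v**2 / (lam * c**2)
print(f"predicted fringe shift ≈ {n:.2f}")  # ≈ 0.40 of a fringe
```

The apparatus could resolve shifts well below this predicted 0.4 fringe, yet essentially none appeared, which is what made the null result so compelling.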

Contrary to expectations, the experiment found no significant difference in the speed of light regardless of the direction of measurement or the time of year. This null result was groundbreaking. It cast serious doubt on the existence of the luminiferous aether and paved the way for the special theory of relativity introduced by Albert Einstein in 1905, which fundamentally changed our understanding of time and space.

The Michelson-Morley Experiment is particularly relevant to experimental design for these reasons:

  • Methodological Rigor: The precision and care with which the experiment was designed and conducted set a new standard for experimental physics.
  • Dealing with Null Results: Rather than being discarded, the absence of the expected result became the main discovery, emphasizing the importance of unexpected outcomes in scientific research.
  • Impact on Theoretical Foundations: The experiment's findings had profound implications, showing that experiments can challenge and even overturn prevailing theoretical frameworks.
  • Iterative Testing: The experiment was not just a one-off. Its repeated tests at different times underscore the value of replication and varied conditions in experimental design.

Through their meticulous approach and openness to unexpected results, Michelson and Morley didn't merely answer a question; they reshaped the very framework of understanding within physics. Their work underscores the essence of scientific inquiry: that true discovery often lies not just in confirming our hypotheses, but in uncovering the deeper truths that challenge our prevailing notions. As researchers and scientists continue to push the boundaries of knowledge, the lessons from this experiment serve as a beacon, reminding us of the potential that rigorous, well-designed experiments have in illuminating the mysteries of our universe.

Borlaug's Green Revolution (1940s-1960s)

The Green Revolution , spearheaded by agronomist Norman Borlaug between the 1940s and 1960s, represents a transformative period in agricultural history. Borlaug's work focused on addressing the pressing food shortages in developing countries. By implementing advanced breeding techniques, he aimed to produce high-yield, disease-resistant, and dwarf wheat varieties that would boost agricultural productivity substantially.

To achieve this, Borlaug and his team undertook extensive crossbreeding of wheat varieties. They employed shuttle breeding —a technique where crops are grown in two distinct locations with different planting seasons. This not only accelerated the breeding process but also ensured the new varieties were adaptable to varied conditions. Another innovation was to develop strains of wheat that were "dwarf," ensuring that the plants, when loaded with grains, didn't become too tall and topple over—a common problem with high-yielding varieties.

The resulting high-yield, semi-dwarf, disease-resistant wheat varieties revolutionized global agriculture. Countries like India and Pakistan, which were on the brink of mass famine, witnessed a dramatic increase in wheat production. This Green Revolution saved millions from starvation, earned Borlaug the Nobel Peace Prize in 1970, and altered the course of agricultural research and policy worldwide.

The Green Revolution is particularly relevant to experimental design for these reasons:

  • Iterative Testing: Borlaug's approach highlighted the significance of continual testing and refining. By iterating breeding processes, he was able to perfect the wheat varieties more efficiently.
  • Adaptability: The use of shuttle breeding showcased the importance of ensuring that experimental designs account for diverse real-world conditions, enhancing the global applicability of results.
  • Anticipating Challenges: By focusing on dwarf varieties, Borlaug preempted potential problems, demonstrating that foresight in experimental design can lead to more effective solutions.
  • Scalability: The work wasn't just about creating a solution, but one that could be scaled up to meet global demands, emphasizing the necessity of scalability considerations in design.

The Green Revolution exemplifies the profound impact well-designed experiments can have on society. Borlaug's strategies, which combined foresight with rigorous testing, reshaped global agriculture, underscoring the potential of scientific endeavors to address pressing global challenges when thoughtfully and innovatively approached.

Experimental design has undergone a transformation over the years. Modern technology plays an indispensable role in refining and streamlining experimental processes. Gone are the days when researchers solely depended on manual calculations, paper-based data recording, and rudimentary statistical tools. Today, advanced software and tools provide accurate, quick, and efficient means to design experiments, collect data, perform statistical analysis, and interpret results.

Several tools and software are at the forefront of this technological shift in experimental design:

  • Minitab: A popular statistical software offering tools for various experimental designs including factorials, response surface methodologies, and optimization techniques.
  • R: An open-source programming language and environment tailored for statistical computing and graphics. Its extensibility and comprehensive suite of statistical techniques make it a favorite among researchers.
  • JMP: Developed by SAS, it is known for its interactive and dynamic graphics. It provides a powerful suite for the design of experiments and statistical modeling.
  • Design-Expert: A software dedicated to experimental design and product optimization. It's particularly useful for response surface methods.
  • SPSS: A software package used for statistical analysis, it provides advanced statistics, machine learning algorithms, and text analysis for researchers of all levels.
  • Python (with libraries like SciPy and statsmodels): Python is a versatile programming language and, when combined with specific libraries, becomes a potent tool for statistical analysis and experimental design.
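
As a concrete illustration of the kind of computation these tools automate, here is a minimal sketch of Welch's two-sample t statistic in plain Python. The data are invented, and in practice one would call a library routine such as `scipy.stats.ttest_ind` rather than hand-rolling the formula:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

control = [48, 51, 50, 49, 52, 50]    # hypothetical control-group scores
treatment = [55, 54, 57, 53, 56, 55]  # hypothetical treatment-group scores
t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # t = 6.12
```

The library versions additionally return a p-value from the t distribution, which is where software truly saves time over manual tables.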

One of the primary advantages of using these software tools is their capability for advanced statistical analysis. They enable researchers to perform complex computations within seconds, something that would take hours or even days manually. Furthermore, the visual representation features in these tools assist in understanding intricate data patterns, correlations, and other crucial aspects of data. By aiding in statistical analysis and interpretation, software tools reduce human error, surface insights that might be overlooked in manual analysis, and significantly speed up the research process, allowing scientists and researchers to focus on drawing accurate conclusions and making informed decisions based on the data.

The world of experimental research is continually evolving, with each new development promising to reshape how we approach, conduct, and interpret experiments. The central tenets of experimental design—control, randomization, replication—though fundamental, are being complemented by sophisticated techniques that ensure richer insights and more robust conclusions.

One of the most transformative forces in experimental design's future landscape is the surge of artificial intelligence (AI) and machine learning (ML) technologies . Historically, the design and analysis of experiments have depended on human expertise for selecting factors to study, setting the levels of these factors, and deciding on the number and order of experimental runs. With AI and ML's advent, many of these tasks can be automated, leading to optimized experimental designs that might be too complex for manual formulation. For instance, machine learning algorithms can predict potential outcomes based on vast datasets, guiding researchers in choosing the most promising experimental conditions.

Moreover, AI-driven experimental platforms can dynamically adapt during the course of the experiment, tweaking conditions based on real-time results, thereby leading to adaptive experimental designs. These adaptive designs promise to be more efficient, as they can identify and focus on the most relevant regions of the experimental space, often requiring fewer experimental runs than traditional designs. By harnessing the power of AI and ML, researchers can uncover complex interactions and nonlinearities in their data that might have otherwise gone unnoticed.
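
As a toy illustration of such an adaptive design, the sketch below uses Thompson sampling to allocate subjects between two treatment arms with binary outcomes. The success rates, sample size, and function name are all invented for illustration:

```python
import random

def thompson_trial(p_arms, n_subjects, seed=1):
    """Adaptively assign each subject to the arm whose Beta posterior draw is highest."""
    random.seed(seed)
    wins, losses = [0, 0], [0, 0]
    for _ in range(n_subjects):
        # Draw a plausible success rate for each arm from its Beta(wins+1, losses+1) posterior
        draws = [random.betavariate(wins[i] + 1, losses[i] + 1) for i in (0, 1)]
        arm = draws.index(max(draws))
        if random.random() < p_arms[arm]:  # simulate the subject's binary outcome
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

wins, losses = thompson_trial(p_arms=[0.3, 0.6], n_subjects=500)
# As evidence accumulates, the allocation concentrates on the better arm (index 1)
```

A fixed randomized trial would send exactly half the subjects to each arm; the adaptive design instead shifts most subjects toward whichever arm the accumulating data favors.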

Furthermore, the convergence of AI and experimental design holds tremendous potential for areas like drug development and personalized medicine. By analyzing vast genetic datasets, AI algorithms can help design experiments that target very specific biological pathways or predict individual patients' responses to particular treatments. Such personalized experimental designs could dramatically reduce the time and cost of bringing new treatments to market and ensuring that they are effective for the intended patient populations.

In conclusion, the future of experimental design is bright, marked by rapid advancements and a fusion of traditional methods with cutting-edge technologies. As AI and machine learning continue to permeate this field, we can expect experimental research to become more efficient, accurate, and personalized, heralding a new era of discovery and innovation.

In the ever-evolving landscape of research and innovation, experimental design remains a cornerstone, guiding scholars and professionals towards meaningful insights and discoveries. As we reflect on its past and envision its future, it's clear that experimental design will continue to play an instrumental role in shaping the trajectory of numerous disciplines. It will be instrumental in harnessing the full potential of emerging technologies, driving forward scientific understanding, and solving some of the most pressing challenges of our time. With a rich history behind it and a promising horizon ahead, experimental design stands as a testament to the human spirit's quest for knowledge, understanding, and innovation.

Header image by Gorodenkoff .


What is experimental research: Definition, types & examples

Defne Çobanoğlu

Life and its secrets can only be proven right or wrong through experimentation. You can speculate and theorize all you wish, but as William Blake once said, “The true method of knowledge is experiment.”

Experimentation may be a long and time-consuming process, but it is rewarding like no other, and there are multiple methods of experimentation that can help shed light on a question. In this article, we explain the definition and types of experimental research, along with some examples. Let us get started with the definition!

  • What is experimental research?

Experimental research is the process of carrying out a study with a scientific approach using two or more variables. In other words, it is a study in which the researcher manipulates, compares, and tests these variables in controlled environments.

With experimental research, researchers can also collect detailed information about the participants by administering pre-tests and post-tests to learn even more about the process. With the results of this type of study, the researcher can make informed decisions.

The more control the researcher has over the internal and extraneous variables, the better it is for the results. Still, there are circumstances in which a fully controlled experiment is not possible to conduct. That is why there are different research designs to accommodate the needs of researchers.

  • 3 Types of experimental research designs

There is more than one dividing point in experimental research designs that differentiates them from one another. These differences are about whether or not there are pre-tests or post-tests done and how the participants are divided into groups. These differences decide which experimental research design is used.

Types of experimental research designs

1 - Pre-experimental design

This is the most basic method of experimental study. In pre-experimental research, the researcher evaluates a group on the dependent variable after changing the independent variable. The results of this method alone are not conclusive, and future studies are planned accordingly. Pre-experimental research can be divided into three types:

A. One shot case study research design

Only one group is considered in this one-shot case study design. The group receives the treatment, a single post-test is conducted afterwards, and the aim is to observe the changes brought about by the independent variable.

B. One group pre-test post-test research design

In this type of research, a single group is given a pre-test before a study is conducted and a post-test after the study is conducted. The aim of this one-group pre-test post-test research design is to combine and compare the data collected during these tests. 

C. Static-group comparison

In a static-group comparison, 2 or more groups are included in a study, where only one group of participants is subjected to a new treatment while the other group is held static. After the study is done, both groups take a post-test evaluation, and the differences between them are taken as the results.

2 - Quasi-experimental design

This research type is quite similar to the true experimental design; however, it differs in a few aspects. Quasi-experimental research is done when experimentation is needed for accurate data but a true experiment is not possible because of some limitations. Because you can not deliberately deprive someone of medical treatment or cause someone harm, some experiments are ethically impossible. In this experimentation method, the researcher can only manipulate some variables. There are three types of quasi-experimental design:

A. Nonequivalent group designs

A nonequivalent group design is used when participants can not be divided equally and randomly for ethical or practical reasons. Because of this, the groups may differ on more than just the independent variable, unlike in true experimental research.

B. Regression discontinuity

In this type of research design, the researcher does not divide a group into two to make a study; instead, they make use of a natural threshold or pre-existing dividing point. Only participants on one side of the threshold receive the treatment, and because participants just above and just below the cutoff are nearly alike, any jump in outcomes at the threshold can be attributed to the treatment.
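
A minimal sketch of the idea behind regression discontinuity: compare average outcomes in a narrow window just above and just below the threshold. The data, cutoff, and function name below are invented for illustration:

```python
def rdd_effect(scores, outcomes, cutoff, bandwidth):
    """Naive sharp-RDD estimate: mean outcome just above the cutoff minus just below."""
    above = [y for x, y in zip(scores, outcomes) if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in zip(scores, outcomes) if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Hypothetical data: students scoring 70 or above receive a scholarship
scores   = [62, 65, 68, 69, 71, 72, 75, 78]
outcomes = [3.0, 3.1, 3.0, 3.2, 3.6, 3.7, 3.6, 3.8]  # later GPA
effect = rdd_effect(scores, outcomes, cutoff=70, bandwidth=5)
print(f"estimated effect near the cutoff: {effect:.2f}")  # 0.55
```

Real RDD analyses fit separate regressions on each side of the cutoff rather than simple means, but the comparison logic is the same.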

C. Natural Experiments

In natural experiments, the control and study groups are formed by naturally occurring events or circumstances rather than by the researcher. Because the assignment happens in natural scenarios outside the researcher's control, natural experiments are based on observation and do not qualify as true experiments.

3 - True experimental design

In true experimental research, the variables, groups, and settings follow the textbook definition. Participants are divided into groups randomly, and controlled variables are chosen carefully. Every aspect of a true experiment should be carefully designed and carried out, and only the results of a true experiment can really be considered fully accurate. A true experimental design can be divided into 3 parts:

A. Post-test only control group design

In this experimental design, the participants are divided into two groups randomly. They are called experimental and control groups. Only the experimental group gets the treatment, while the other one does not. After the experiment and observation, both groups are given a post-test, and a conclusion is drawn from the results.

B. Pre-test post-test control group

In this method, the participants are divided into two groups once again. Also, only the experimental group gets the treatment. And this time, they are given both pre-tests and post-tests with multiple research methods. Thanks to these multiple tests, the researchers can make sure the changes in the experimental group are directly related to the treatment.

C. Solomon four-group design

This is the most comprehensive method of experimentation. The participants are randomly divided into 4 groups that cover all the relevant combinations: a treatment group and a control group that each take both a pre-test and a post-test, and a treatment group and a control group that take only the post-test. This method enhances the quality of the data.
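
A small sketch of how the four Solomon groups could be formed (the function name, seed, and group sizes are illustrative):

```python
import random

def solomon_groups(participants, seed=3):
    """Randomly split participants into the 4 Solomon groups:
    (pre-test: yes/no) x (treatment: yes/no)."""
    pool = list(participants)
    random.seed(seed)
    random.shuffle(pool)
    quarter = len(pool) // 4
    labels = [(pre, treat) for pre in (True, False) for treat in (True, False)]
    return {labels[i]: pool[i * quarter:(i + 1) * quarter] for i in range(4)}

groups = solomon_groups(range(40))  # four random groups of 10 participants each
```

Comparing the pre-tested and non-pre-tested groups lets the researcher check whether taking the pre-test itself changed participants' behavior.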

  • Advantages and disadvantages of experimental research

Just as with any other study, experimental research also has its positive and negative sides. It is up to the researchers to be mindful of these facts before starting their studies. Let us see some advantages and disadvantages of experimental research:

Advantages of experimental research:

  • All the variables are in the researchers’ control, and that means the researcher can influence the experiment according to the research question’s requirements.
  • As you can easily control the variables in the experiment, you can specify the results as much as possible.
  • The results of the study identify a cause-and-effect relation .
  • The results can be as specific as the researcher wants.
  • The result of an experimental design opens the doors for future related studies.

Disadvantages of experimental research:

  • Completing an experiment may take years and even decades, so the results will not be as immediate as some of the other research types.
  • As it involves many steps, participants, and researchers, it may be too expensive for some groups.
  • The possibility of researchers making mistakes and having a bias is high, so it is important to stay impartial.
  • Human behavior and responses can be difficult to measure unless it is specifically experimental research in psychology.
  • Examples of experimental research

When one does experimental research, that experiment can be about anything. As the variables and environments can be controlled by the researcher, it is possible to run experiments on pretty much any subject. Experimental research is especially valuable because it gives critical insight into the cause-and-effect relationships of various elements. Now let us see some important examples of experimental research:

An example of experimental research in science:

When scientists develop new medicines or come up with a new type of treatment, they have to test them thoroughly to make sure the results will be consistent and effective for every individual. To make sure of this, they can test the medicine on different people or animals, at different dosages and frequencies. They can then double-check all the outcomes and obtain crystal clear results.

An example of experimental research in marketing:

The ideal goal of a marketing product, advertisement, or campaign is to attract attention and create positive emotions in the target audience. Marketers can focus on different elements in different campaigns, change the packaging/outline, and have a different approach. Only then can they be sure about the effectiveness of their approaches. Some methods they can work with are A/B testing, online surveys , or focus groups .
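
For instance, an A/B test on two ad variants can be evaluated with a two-proportion z-test. A minimal sketch, where the click counts and the helper name are invented:

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z statistic for H0: variants A and B have the same click-through rate."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical campaign: 120/1000 clicks for variant A vs 150/1000 for variant B
z = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

Survey platforms and analytics tools run this kind of test behind the scenes when they report whether a variant's lift is "statistically significant."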

  • Frequently asked questions about experimental research

Is experimental research qualitative or quantitative?

Experimental research can be both qualitative and quantitative according to the nature of the study. Experimental research is quantitative when it provides numerical and provable data. The experiment is qualitative when it provides researchers with participants' experiences, attitudes, or the context in which the experiment is conducted.

What is the difference between quasi-experimental research and experimental research?

In true experimental research, the participants are divided into groups randomly and evenly so as to have an equal distinction. However, in quasi-experimental research, the participants can not be divided equally for ethical or practical reasons. They are chosen non-randomly or by using a pre-existing threshold.

  • Wrapping it up

The experimentation process can be long and time-consuming, but it is highly rewarding, as it provides valuable data, both qualitative and quantitative. It is a valuable part of research methods and gives insight into the subjects to let people make informed decisions.

In this article, we have gathered the definition of experimental research, its types, examples, and pros & cons to work as a guide for your next study. You can also run a successful experiment using pre-test and post-test methods and analyze the findings. For further information on different research types and for all your research needs, do not forget to visit our other articles!


Experimental vs Quasi-Experimental Design: Which to Choose?

Here is a summary of the similarities and differences between an experimental and a quasi-experimental study design:

  • Objective: Both designs evaluate the effect of an intervention or a treatment.
  • How participants get assigned to groups: Random assignment in an experimental study; non-random assignment in a quasi-experimental study (participants are assigned according to their own choosing or that of the researcher).
  • Is there a control group? Yes in an experimental study; not always in a quasi-experimental study (although, if present, a control group will provide better evidence for the study results).
  • Is there any room for confounding? No in an experimental study (although post-randomization confounding can still arise in randomized controlled trials); yes in a quasi-experimental study (however, statistical techniques can be used to study causal relationships in quasi-experiments).
  • Level of evidence: A randomized trial is at the highest level in the hierarchy of evidence; a quasi-experiment sits one level below it.
  • Advantages: An experimental study minimizes bias and confounding; a quasi-experimental study can be used where an experiment is not ethically or practically feasible, and can work with smaller sample sizes than randomized trials.
  • Limitations: An experimental study has a high cost (as it generally requires a large sample size), ethical limitations, generalizability issues, and is sometimes practically infeasible; a quasi-experimental study ranks lower in the hierarchy of evidence, as losing the power of randomization makes it more susceptible to bias and confounding.

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(For more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example.)

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group: where participants receive the new intervention whose effect we want to study.
  • A control or comparison group: where participants do not receive any intervention at all (or receive some standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to equalize the 2 groups, and therefore, any observed difference in the study outcome afterwards will only be attributed to the intervention – i.e. it removes confounding.
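
A tiny sketch of this idea (the participant ages and helper name are invented): randomly split participants into two groups and check that a covariate such as age is roughly balanced between them.

```python
import random

def randomize(participants, seed=7):
    """Shuffle participants and split them into two equal-sized groups."""
    pool = list(participants)
    random.seed(seed)
    random.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

ages = list(range(20, 60))  # 40 hypothetical participants, identified by age
treatment, control = randomize(ages)

# With random assignment, covariates such as age tend to balance out between groups
imbalance = abs(sum(treatment) / len(treatment) - sum(control) / len(control))
```

The same balancing logic applies to covariates the researcher never measured, which is exactly why randomization removes confounding.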

(For more information, I recommend my other article: Purpose and Limitations of Random Assignment.)

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study: when studying the effect of an intervention over time, one should consider natural changes, because these can be mistaken for changes in outcome caused by the intervention. Having a well-chosen control group helps deal with this issue.
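A small numeric sketch (with invented numbers) shows how a control group separates natural progression from the intervention effect:

```python
# Symptom scores improve naturally over the study period, with or without treatment.
natural_change = -10          # change due to natural progression of the condition
intervention_effect = -15     # true additional effect of the intervention

treated_before, control_before = 80, 80
treated_after = treated_before + natural_change + intervention_effect
control_after = control_before + natural_change

# A naive pre-post comparison in the treated group mixes both sources of change:
naive_estimate = treated_after - treated_before          # overstates the effect

# Subtracting the control group's change isolates the intervention effect:
adjusted_estimate = (treated_after - treated_before) - (control_after - control_before)
print(naive_estimate, adjusted_estimate)  # -25 -15
```

Without the control group, the naive estimate (-25) attributes the natural 10-point improvement to the intervention; the adjusted estimate recovers the true effect (-15).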

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomization is that it is not always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful to participants , either because the intervention itself is harmful (e.g. randomizing people to smoking), or because it has questionable efficacy, or, on the contrary, because it is believed to be so beneficial that it would be unethical to assign people to the control group (e.g. randomizing people to receiving an operation).
  • In cases where interventions act on a group of people in a given location , it becomes difficult to adequately randomize subjects (e.g. an intervention that reduces pollution in a given area).
  • When working with small sample sizes , as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).
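The small-sample point can be illustrated with a quick simulation (a hedged sketch; the confounder, group sizes, and seed are invented for illustration):

```python
import random
import statistics

random.seed(1)

def imbalance_after_randomization(n: int) -> float:
    """Randomly split n subjects into 2 groups; return the group difference in a confounder."""
    ages = [random.gauss(50, 10) for _ in range(n)]  # generation order is already random
    group_a, group_b = ages[: n // 2], ages[n // 2 :]
    return abs(statistics.mean(group_a) - statistics.mean(group_b))

# Average imbalance over many simulated trials: small trials are often badly
# imbalanced on the confounder, large trials rarely are.
results = {}
for n in (10, 100, 1000):
    results[n] = statistics.mean(imbalance_after_randomization(n) for _ in range(500))
    print(n, round(results[n], 2))
```

The average imbalance shrinks roughly with the square root of the sample size, which is why randomization only reliably "evens out" confounders in sufficiently large trials.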

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why

J Athl Train, v.45(1), Jan-Feb 2010

Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and multiple (and different) analyses of a single data set, data collection is very different from statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers, 1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style , 2 so I will use it here.

A study design is the architecture of an experimental study 3 and a description of how the study was conducted, 4 including all elements of how the data were obtained. 5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table ). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section


The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational study). 2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. Following is a sentence that names the levels of each factor: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
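The factorial shorthand maps directly onto the full set of design cells; enumerating them as a cross-product makes the road-map idea concrete (a minimal sketch using the factor names from the example above):

```python
from itertools import product

# Factors and levels from the example design statement: a 2 × 4 × 8 factorial.
sex = ["male", "female"]
training_program = ["walking", "running", "weight lifting", "plyometrics"]
time_weeks = [2, 4, 6, 8, 10, 15, 20, 30]

# Every combination of levels is one cell of the design.
conditions = list(product(sex, training_program, time_weeks))
print(len(conditions))  # 2 * 4 * 8 = 64 cells
print(conditions[0])    # ('male', 'walking', 2)
```

Reading "2 × 4 × 8" in a design statement tells the reader the number of factors, the number of levels of each, and the total number of cells before a single procedure is described.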

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the “Methods” section and last part of the “Study Design” section is the “Data Analysis” subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.
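As a simplified illustration of the analysis step (a one-way ANOVA rather than the mixed three-factor ANOVA described above, with invented strength scores), the F statistic compares between-group to within-group variability:

```python
import statistics

def one_way_anova_F(groups: list[list[float]]) -> float:
    """F statistic for a one-way ANOVA: between-group vs within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)

    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

    ms_between = ss_between / (k - 1)   # between-groups mean square
    ms_within = ss_within / (n - k)     # within-groups (error) mean square
    return ms_between / ms_within

# Hypothetical strength scores for three training programs:
groups = [[10, 12, 11, 13], [14, 15, 16, 15], [20, 19, 21, 22]]
print(round(one_way_anova_F(groups), 2))  # 61.75
```

In a real manuscript, as the text notes, the package and procedure used to compute such statistics should be stated, because implementations differ in their details.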

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design . Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted. 6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.” 3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.” 7 (pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.” 8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921) 9 , 10 and experimental design (in 1935). 11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards 12 ), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design , however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
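The reduction from the collected design to the statistical design can be sketched in code (the strength values are invented; only the structure mirrors the example above):

```python
# Data are COLLECTED in a 2 (time) × 2 (experience) × 3 (training) design, but
# ANALYZED in a 2 × 3 design after each pretest/posttest pair is collapsed into
# a single gain score. (Values below are hypothetical.)

collected = {
    # (experience, training): {"pre": strength, "post": strength}
    ("novice",   "isokinetic"): {"pre": 100, "post": 130},
    ("novice",   "isotonic"):   {"pre": 102, "post": 125},
    ("novice",   "isometric"):  {"pre":  98, "post": 118},
    ("advanced", "isokinetic"): {"pre": 150, "post": 165},
    ("advanced", "isotonic"):   {"pre": 148, "post": 160},
    ("advanced", "isometric"):  {"pre": 151, "post": 159},
}

# The derived variable (gain) enters the statistical design; "time" disappears as a factor.
analyzed = {cell: scores["post"] - scores["pre"] for cell, scores in collected.items()}
print(analyzed[("novice", "isokinetic")])  # gain of 30
```

A study design statement documents the first dictionary's structure; a statistical design statement documents the second's. Reporting only the latter hides how the data were actually collected.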

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the Hmax:Mmax ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature and Hmax:Mmax measurements.
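The two data manipulations described above (selecting a subset of the collected temperatures, and combining two measured variables into one ratio) can be sketched as follows (all values invented for illustration):

```python
# Muscle temperature recorded every minute for 50 minutes (the COLLECTED data);
# the cooling profile here is a made-up linear trend.
temperature = {minute: 34.0 - 0.1 * minute for minute in range(51)}

# Only times 0, 20, and 50 enter the statistical analysis (the ANALYZED data):
analyzed_temps = {t: temperature[t] for t in (0, 20, 50)}

# Hmax and Mmax measured before, immediately after, and 30 min after treatment:
h_max = {"pre": 2.0, "post": 1.2, "post30": 1.6}
m_max = {"pre": 8.0, "post": 7.5, "post30": 7.8}

# The single dependent variable is the ratio at each measurement time:
ratio = {t: h_max[t] / m_max[t] for t in h_max}
print(sorted(analyzed_temps), {t: round(r, 2) for t, r in ratio.items()})
```

Fifty-one collected temperatures become three analyzed ones, and six collected neural measurements become three ratios, which is exactly why separate design statements are needed for collection and analysis.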

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed. 3 , 6 , 7 , 13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  • Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.
  • Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.
  • A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.
  • Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.
  • Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.


An experimental approach to analyze aerosol and splatter formations due to a dental procedure

  • Research Article
  • Published: 18 September 2021
  • Volume 62 , article number  202 , ( 2021 )


  • E. A. Haffner   ORCID: orcid.org/0000-0002-8284-958X 1 ,
  • M. Bagheri 1 ,
  • J. E. Higham   ORCID: orcid.org/0000-0001-7577-0913 2 ,
  • L. Cooper   ORCID: orcid.org/0000-0002-6577-781X 3 ,
  • S. Rowan 3 ,
  • C. Stanford 3 ,
  • F. Mashayek   ORCID: orcid.org/0000-0003-1187-4937 1 &
  • P. Mirbod   ORCID: orcid.org/0000-0002-2627-1971 1  


Throughout 2020 and beyond, the entire world has observed a continuous increase in the infectious spread of the novel coronavirus (SARS-CoV-2), the cause of COVID-19. The high transmission of this airborne virus has raised countless concerns regarding safety measures in the working conditions of medical professionals, specifically those who perform treatment procedures that intrinsically create mists of fine airborne droplets, i.e., perfect vectors for this and other viruses to spread. The present study focuses on understanding the splatter produced by a common dentistry technique to remove plaque buildup on teeth. This technique uses a high-speed dentistry instrument, e.g., a Cavitron ultrasonic scaler, to scrape along the surface of a patient’s teeth. A detailed understanding of the velocity and trajectory of the droplets generated by the splatter will aid in the development of hygiene mechanisms to guarantee the safety of those performing these procedures and of other people in clinics or hospitals. An optical flow tracking velocimetry (OFTV) method was employed to obtain droplet velocity and trajectory in a two-dimensional plane. Multiple data collection planes were taken in different orientations around a model of adult mandibular teeth. This technique provided pseudo-three-dimensional velocity information for the droplets within the splatter developed from this high-speed dental instrument. The results indicated that within the three-dimensional splatter there were high velocities (1–2 m/s) directly below the intersection point between the front teeth and the scaler. The splatter formed a cone-shaped structure that propagated 10–15 mm away from the location of the scaler tip. From the droplet trajectories, it was observed that high-velocity isolated droplets propagate away from the bulk of the splatter. It is these droplets that are of concern for the health and safety of those performing the medical procedures. Using a shadowgraphy technique, we further characterize the individual droplets’ sizes and velocities. We then compare these results to previously published distributions. The obtained data can be used as a first step to further examine the flow and transport of droplets in clinics and dental offices.



1 Introduction

The worldwide emergence of the novel COVID-19 virus has required healthcare professionals to review existing safety protocols and rapidly implement prescriptive adjustments to address numerous concerns in different areas. One of the highest-risk areas of infection is within dental practices. This risk is due to the fact that high-speed dental instruments have the capacity to produce and liberally expel bio-aerosols (Harrel and Molinari 2004 ). As dentists return to the ‘new’ normal, dental practices are rolling out new protocols to mitigate the risk of COVID-19 transmission. To design appropriate safety tools, it is crucial to understand: (1) the size of the droplets created as a result of dental procedures, and (2) the residence times and travel distances of these droplets. These are important in order to determine the viral load captured and also to develop cleaning and social distancing measures, as previous literature has shown that droplet diameters can range between 3 and 100 μm (Coulthard 2020 ; Mirbod et al. 2021 ).

One of the primary sources of these potentially virally loaded droplets in a dental practice is the use of high-speed dental instruments. As a by-product of dental scaling procedures, the largest splattered droplets contain particles with diameters in the range of 50–100 μm, occurring up to 15–120 cm from the patient's oral cavity, while the aerosols created are composed of droplets < 50 μm in diameter (Raghunath et al. 2016 ). Furthermore, additional studies related to teeth drilling and grinding procedures have shown ultrafine particles with diameters in the 20–80 nm range (Liu et al. 2019 ). Review studies and guidelines have been issued by the World Health Organization (WHO), Centers for Disease Control and Prevention (CDC), and European Centre for Disease Prevention and Control (ECDC), suggesting that droplets can travel 2–8 m or more, so unmasked individuals should stay at least that far apart for their own safety. Bahl et al. ( 2020a , b ) and Poulain and Bourouiba ( 2019 ) have shown that the rationale behind the WHO and CDC decisions relates to larger droplets, which evaporate leaving airborne pathogens, and to smaller droplets, which can travel longer distances and spread contaminants further.

Recent physical mitigations suggested by Majidi and Club ( 2020 ) and Liu et al. ( 2019 ) show that the risk is reduced by protecting dental clinicians via droplet redirection and capture. Rajeev et al. ( 2020 ) and Yadav et al. ( 2015 ) show the benefits of methods based on ozonization, ionization, and air sterilization. Jeswin and Jam ( 2012 ) also showed that simple methods such as disinfecting the patient's mouth, the use of a rubber dam, eye protection, face masks, aspiration, and ventilation can all reduce the risk of transmission. However, to create more effective preventative strategies, we must have a full understanding of the splatter droplet size distributions as well as their potential travel and residence time. Related previous studies managed to experimentally determine both the volume and particle size distribution generated from human expulsions such as coughing and speaking (Bahl et al. 2020a , b ; Beggs 2020 ; Gralton et al. 2011 ; Scharfman et al. 2016 ). They have shown that particle sizes can range between 0.01 μm and 500 μm. Using both experimental and theoretical work, these studies have also shown that particles of less than 50 μm diameter can remain suspended in the cloud long enough for a cough to reach heights where ventilation systems can be contaminated (Bourouiba et al. 2014 ). Other similar studies have focused more on the ejection velocities of these particles using Eulerian-based particle image velocimetry (PIV), Lagrangian-based particle tracking velocimetry (PTV), or flow visualization techniques such as shadowgraphy (Cao et al. 2014 ; Chao et al. 2009 ; Mahajan et al. 1994 ; Tang et al. 2012 , 2013 ; VanSciver et al. 2011 ; Xie et al. 2009 ; Zhu et al. 2006 ).

To the best of the authors’ knowledge, while we currently have a good understanding of the droplet size distributions created by dental procedures, there is a lack of understanding of their kinematics. In this study, we use the quasi-Eulerian–Lagrangian method of optical flow tracking velocimetry (OFTV) to understand the kinematics of the droplet motion. Using a shadowgraphy technique, we further analyze the droplets’ sizes and velocities and compare these results for two different flow rates at which a Cavitron dental scaler operates. The outcomes of the presented research have the potential to refine the flow characteristics of a simulated flow, to model the spray patterns more accurately, and to propose methods to control the direction of flow to minimize possible contamination.

2 Experimental procedure

We focus our study on the Cavitron ultrasonic scaler (CUS) and simulate an adult mouth using a resin model of a mandibular set of teeth. To replicate a real-world scenario, we orient the scaler at two different angles with reference to the surface of the teeth. The first orientation analyzes the case where the point (tip) of the CUS scaler is flush against the surface of a tooth while the teeth model is at a 0° angle from the x -axis; the scaler point is therefore 90° from the x -axis. This situation mimics the procedure in which the point of the scaler is used to scrape against the surface/gum line of a tooth. This experimental setup is denoted “Case 1”, and a schematic of the experiment is shown in Fig.  1 (a). The second orientation simulates a more typical situation used in dental practice, with a patient sitting in a reclined position. For this purpose, the teeth model was positioned at a 45° angle to the x -axis and the point of the scaler was rotated 5° from its previous position with reference to the teeth model. This configuration mimics the case in which the lateral surface of the scaler is used to scrape the front of the tooth. This CUS/teeth orientation is used in the cases denoted “Case 2” and “Case 3”. A schematic of both experiments is shown in Fig.  1 (b, c).

figure 1

The experimental schematics for the three different experimental orientations. a Case 1: the teeth are at 0° and the point of the CUS is at 90° from the x -axis. b , c Cases 2 and 3: the teeth are rotated to be 45° from the x -axis, and the point of the CUS is rotated so that it is 5° with reference to the front of the teeth; b shows the configuration with a P 1 data collection plane and c shows the P 2 data collection plane. d , e Diagrams of the lower mandible teeth with the appropriate tooth numbers (black) and the coordinate system with reference to the scaler tip and the front of the central incisor teeth (red). The tip of the CUS rests firmly against the front of d tooth #24 in Case 1, and e tooth #25 in Cases 2 and 3. f An image of the scaler tip used with the CUS in this study's experiments. The water jet is located within the concave side of the CUS tip

As per typical patient usage, a specific tip was chosen for the scaler: a Powerline finger grip (30K FSI-PWR-1000) (Dentsply Sirona) with a tip diameter of 479.1 μm, which vibrates at a frequency of 25–30 kHz. The scaler is connected to a standard water tap with a pressure in the range of 20–40 psi, and the flow rate was measured by a standard flow measuring gauge. The average flow rate for the scaler in all cases examined through OFTV was measured to be around 31.5 ml/min. The dimensionless parameters for this experiment are reported in Table 1 . The Reynolds number is defined as \(Re = u_{o} \rho_{f} d/\mu_{f}\), where \(\rho_{f}\) is the density of the fluid, \(\mu_{f}\) is the viscosity of the fluid, and \(u_{o}\) is the velocity of the fluid as it leaves the point of the scaler. The gaseous Weber number, \(We_{G}\), which describes the interaction between the fluid and the air at the surface of the droplets and predicts the nature of the spray breakup (Lubarsky et al. 2010 ), is defined as \(We_{G} = \rho_{air} du_{o}^{2} /\sigma\), where \(\sigma\) is the surface tension of the fluid (Zigan et al. 2012 ). Herein, the calculated \(We_{G}\) is less than 0.1, meaning that the liquid jet is in the column-breakup regime, since the transition from column to bag breakup for a Newtonian liquid jet in a crossflow occurs at a gaseous \(We_{G}\) of 4 (Scharfman et al. 2016 ). The corresponding Ohnesorge number is defined as \(Oh = \mu_{f} /\sqrt {\rho_{f} d\sigma }\) (Scharfman et al. 2016 ). The working fluid in the experiments was water at room temperature (20 °C); the fluid constants are therefore \(\rho_{f} = 0.998\,g/cm^{3}\) , \(\sigma = 0.0729\,kg/s^{2}\) , and \(\mu_{f} = 0.001\,Pa\cdot s\) .
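The dimensionless numbers defined above can be checked numerically. In the sketch below, the fluid constants and tip diameter come from the text, while the air density and the jet velocity u_o are assumed illustrative values (u_o is not a measured value reported here):

```python
from math import sqrt

# Fluid constants from the text, converted to SI units, plus the tip diameter.
rho_f = 998.0       # water density, kg/m^3
mu_f = 0.001        # water viscosity, Pa·s
sigma = 0.0729      # surface tension, N/m (kg/s^2)
d = 479.1e-6        # scaler tip diameter, m
rho_air = 1.204     # air density at 20 °C, kg/m^3 (assumed)
u_o = 2.0           # jet velocity, m/s — an ASSUMED illustrative value

Re = u_o * rho_f * d / mu_f            # Reynolds number
We_G = rho_air * d * u_o**2 / sigma    # gaseous Weber number
Oh = mu_f / sqrt(rho_f * d * sigma)    # Ohnesorge number

print(round(Re), round(We_G, 3), round(Oh, 4))
```

For any velocity in the 1–2 m/s range reported for the droplets, We_G stays well below 0.1, consistent with the column-breakup regime described in the text.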

In order to interrogate the kinematic behavior of the spray created by the scaler, we use a thin light sheet (generated by a 527 nm Nd-YLF laser) to illuminate a single plane of droplets within the spray and capture images using a CMOS high-speed camera (Phantom) with a 60 mm focal-length lens. Not only does this allow us to take a detailed view of the droplets and splatter propagation, i.e., to investigate the particle size, it also allows us to track individual spray particles. In this study, we use multiple planes to gather measurements in the near and far field away from the scaler. Three different experimental setups were examined, each with a different plane/teeth orientation and two data collection planes. Cases 1 and 2, shown in Fig.  1 (a, b), have the teeth positioned at two different angles but share the same laser plane orientation. For these cases, the laser plane is positioned parallel to the tip of the CUS, and we denote this plane as P 1 . This creates an x – y 2D plane according to the coordinate axis in Fig.  1 (a, b), and for these cases the \(u\) and \(v\) components of velocity are measured. We then moved the P 1 plane in the +  z direction to obtain multiple parallel data collection planes moving away from the location where the CUS tip is placed on the tooth. In the third case, Case 3, shown in Fig.  1 (c), the teeth orientation is the same as in Case 2, but the laser plane is rotated 90° so that it is perpendicular to the CUS tip. This provides a y – z 2D plane, denoted P 2 , shown by the coordinate axis in Fig.  1 (c). For the P 2 plane, the \(v\) and \(w\) components of velocity are measured. Similar to the P 1 plane, multiple P 2 planes are taken in the +  x direction moving away from the front surface of the teeth model. For each experimental case, multiple P 1 or P 2 planes were thus taken at varying distances from the tip of the scaler or the front of the teeth, respectively.

Figure  1 (d, e) presents diagrams of the front adult mandibular teeth, including the tooth numbers and the coordinate system used in this study, and Fig.  1 (f) shows the scaler used with the CUS. As described, the lateral surface of the CUS rests firmly on the front teeth shown in Fig.  1 (d, e). The origin of the coordinate system is located at the tip of the CUS. Figure 2 (a, b) shows the Cavitron ultrasonic scaler (CUS) and its location in reference to the resin teeth. The lateral surface of the scaler is placed against the surface of either the lower left central incisor (tooth #24) for Case 1 or the lower right central incisor (tooth #25) for Cases 2 and 3, with the point (tip) of the scaler directed toward the gum line. Figure  2 (c, d) shows raw images collected from P 1 and P 2 , respectively. To characterize the kinematics of the individual particles, we use the OFTV tracking technique.

Fig. 2 A top-down view of the Cavitron and the teeth showing the locations of the 1 mm-thick laser sheet for the two data collection planes. a The data collection plane parallel to the Cavitron tip, P 1 , and b the data collection plane perpendicular to the Cavitron tip, P 2 . Raw OFTV images obtained from the two laser plane locations, c P 1 and d P 2 , with the CUS and teeth at 0°

2.1 Optical flow tracking velocimetry (OFTV)

To analyze the kinematics of individual droplets within the splatter produced by the scaler, we use OFTV techniques. This specific technique has been commonly used in multiple fluid mechanics applications (Fullmer et al. 2020 ; Lucas and Kanade 1981 , 1985 ; Mella et al. 2019 ; Settles 2012 ). For each case presented here, we use 3000 images, resolving more than 100 integral time scales.

The OFTV analysis method for calculating the droplet tracks is based on solving sets of linear equations (i.e., the optical flow equations). The two main steps in this approach are first identifying the droplets to track and then tracking them across frames. We used the commercially available Flow On The Go software, which uses eigen features to determine the "features" in each frame from the image gradients, which highlight the locations of the droplets. The eigen features are then determined by constructing a correlation matrix, summed over a small window W around each pixel, defined as

\({\mathbf{M}} = \sum\nolimits_{W} {\begin{bmatrix} {\Psi_{x}^{2} } & {\Psi_{x} \Psi_{y} } \\ {\Psi_{x} \Psi_{y} } & {\Psi_{y}^{2} } \end{bmatrix}}\)

where \({\Psi }\left( {{\text{x}},{\text{y}};{\text{t}}} \right)\) is the pixel intensity and \({\Psi }_{x}\) and \({\Psi }_{y}\) are the intensity gradients in the x- and y-directions, respectively. A Gaussian smoothing kernel five pixels wide is applied to the raw images, and the intensity gradients are then extracted from these pre-processed images. The correlation matrix \({\mathbf{M}}\) is then computed, and a response value, \(R\) , is calculated as the minimum eigenvalue of \({\mathbf{M}}\) .
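To make this feature-detection step concrete, the following is a minimal numpy sketch (not the Flow On The Go implementation): it builds the gradient correlation matrix over a local window and returns the minimum-eigenvalue response. For simplicity it sums over a plain box window rather than the Gaussian-smoothed field described above.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def min_eig_response(img, win=5):
    """Minimum eigenvalue of the local intensity-gradient correlation
    matrix M, used as the feature response R (a simplified sketch)."""
    img = img.astype(float)
    # intensity gradients Psi_y, Psi_x via central differences
    gy, gx = np.gradient(img)

    def box(a):
        # sum of a over a win x win neighborhood around each pixel
        pad = win // 2
        ap = np.pad(a, pad)
        return sliding_window_view(ap, (win, win)).sum(axis=(2, 3))

    # entries of the 2x2 correlation matrix M at every pixel
    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    # closed-form minimum eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
```

Pixels where this response exceeds a threshold (R > 0.01 in the software) are retained as droplet features.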

Within the Flow On The Go software, features are retained in regions where \(R > 0.01\). It is assumed that the physical displacements of the droplets between frames are sufficiently small that \({\Psi }\left( {{\text{x}},{\text{y}};{\text{t}}} \right)\) can be expressed as

\({\Psi }\left( {x,y;t} \right) = {\Psi }\left( {x + \Delta x, y + \Delta y; t + \Delta t} \right)\)

Following a first-order Taylor series expansion, the above expression can be rearranged to give the optical flow equation

\({\Psi }_{x} u + {\Psi }_{y} v + {\Psi }_{t} = 0\)

where \({\Psi }_{t}\) is the partial derivative of the pixel intensity with respect to time between image pairs and \(u\) and \(v\) are the velocities in the x- and y-directions, respectively.

The resulting optical flow equation contains two unknowns, \(u\) and \(v\) , so it cannot be solved at a single pixel. We therefore used the Lucas-Kanade method (Lucas and Kanade 1981 , 1985 ) to solve the optical flow equations within the Flow On The Go software. The Lucas-Kanade approach assumes that the velocity in one area is the same as that in its neighboring regions, making the velocity gradients small. This allows the optical flow equation to be written at each feature as

\({\Psi }_{x} \left( {x + i, y + j} \right) u + {\Psi }_{y} \left( {x + i, y + j} \right) v = - {\Psi }_{t} \left( {x + i, y + j} \right)\)

where i and j define the neighborhood around the feature at pixel (x, y). When applied to the droplets within the splatter of the scaler, we used a neighborhood of 11 pixels, i.e., i and j ranged from -5 to + 5. This over-determined system is solved using a least-squares method to determine \(u\) and \(v\) by

\(\left[ {u, v} \right]^{\mathrm{T}} = \left( {{\mathbf{A}}^{\mathrm{T}} {\mathbf{A}}} \right)^{-1} {\mathbf{A}}^{\mathrm{T}} {\mathbf{b}}\)

where the rows of \({\mathbf{A}}\) contain \(\left[ {{\Psi }_{x} \; {\Psi }_{y} } \right]\) and \({\mathbf{b}}\) contains \(- {\Psi }_{t}\), each evaluated at the neighborhood pixels.

From this equation, the \(u\) and \(v\) values are obtained and used to create Lagrangian streamlines. A gridded interpolator is used to create a velocity field, scaled using a calibration plate (Higham and Brevis 2019 ), and any outliers are removed using the PODDEM algorithm (Higham et al. 2016 ).
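The tracking step above can be sketched as a minimal Lucas-Kanade least-squares solve (again a sketch under simplifying assumptions, not the FlowOnTheGo code): the spatial gradients and the temporal difference between an image pair are assembled into the system \(\mathbf{A}[u,v]^{\mathrm{T}} = \mathbf{b}\) over an 11×11 neighborhood and solved with `numpy.linalg.lstsq`.

```python
import numpy as np

def lucas_kanade_uv(I0, I1, x, y, half=5):
    """Least-squares solution of Psi_x*u + Psi_y*v = -Psi_t over a
    (2*half+1)^2 neighborhood around the feature at pixel (x, y)."""
    I0, I1 = I0.astype(float), I1.astype(float)
    gy, gx = np.gradient(I0)   # spatial intensity gradients
    gt = I1 - I0               # temporal derivative between the image pair
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    # rows of A are [Psi_x, Psi_y]; b holds -Psi_t at each neighborhood pixel
    A = np.column_stack([gx[sl].ravel(), gy[sl].ravel()])
    b = -gt[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For a smooth intensity pattern translated by about one pixel between frames, the recovered (u, v) approximates that displacement, consistent with the small-displacement assumption stated above.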

3.1 Case 1: Plane P 1 , Cavitron at 0°

As mentioned before, the first case analyzed the condition where the mandibular teeth are placed at a \(0^\circ\) angle to the horizontal axis, with the scaler placed against the lower central tooth (tooth #24). Figure  3 (a–c) shows the \(v\) (y-direction) and \(u\) (x-direction) components of the velocity and the velocity magnitude, \(\left| U \right| = \sqrt {u^{2} + v^{2} }\) , for the P 1 data collection plane positioned 3 mm from where the CUS is placed against the front tooth. Figure  3 (d, e) shows the velocity magnitude in the P 1 planes located 6 mm and 9 mm from the CUS tip. The general shape of the splatter is reminiscent of a cone, with part of the splatter moving over the teeth and the rest moving down the front of the teeth surface. The \(v\) component of velocity shows a maximum value of around 1.5 m/s, and the \(u\) component is almost zero, resulting in a velocity vector magnitude of 1.8 m/s. These results show that while the maximum velocity occurs near the scaler’s tip, as the droplets move away, not only does their speed decrease, but they also evaporate due to the humidity and temperature variations in the environment. At a location 6 mm from the point of the scaler, the velocity magnitude of the droplets decreases to 1.2 m/s. Moving the data collection plane further away from the scaler (i.e., 9 mm), the velocity magnitude \(\left| U \right|\) of the droplets reduces to 0.6 m/s. At this location, the velocity within the core of the splatter in front of the teeth surface also decreases; however, the overall width of the splatter in the 9 mm plane does not change compared with the 3 mm plane. For clarity, the \(v\) and \(u\) velocity contours for the P 1 planes at 6 mm and 9 mm are shown in Appendix Fig. 10 . Note that Ou et al. ( 2021 ) studied the splatter produced from an ultrasonic scaler using a technique similar to that presented in this study; however, their data were collected only within a parallel plane corresponding to the P 1 plane of our Cases 1 and 2. They used laser sheet imaging (LSI) to capture the far-field splatter in a 14 cm × 14 cm field of view, in conjunction with different evacuation methods, and found that the majority of the splatter produced from a scaling procedure on tooth #25 or #24 traveled at less than 2 m/s (Ou et al. 2021 ).

Fig. 3 The velocity measurements for Case 1: P 1 plane with the teeth model and scaler point at 0° and 90° angles, respectively. a The \(v\) ( y -direction) component of velocity, b the \(u\) ( x -direction) component of velocity, and c the magnitude of the velocity vector with the laser sheet 3 mm away from the point of the CUS. The velocity magnitude for P 1 planes positioned d 6 mm and e 9 mm away from the tip of the CUS

Figure  4 shows the velocity magnitude distribution for the P 1 data collection plane taken at 15 mm (Fig.  4 (a)) and 20 mm (Fig.  4 (b)) from the scaler tip. The maximum velocity magnitude for the P 1 plane positioned 15 mm from the scaler point is 0.1 m/s, and within the collection plane 20 mm away it is 0.05 m/s, as shown in Fig.  4 (a, b), respectively. This is a 97.2% decrease in the velocity magnitude from the closest P 1 plane at 3 mm to the farthest P 1 plane at 20 mm from the scaler. Even though the velocity is reduced, the spray cone widens from around 10 mm to approximately 15 mm, meaning that as the droplets move away from the teeth, they spread out, evaporate, and decrease in size. In Fig.  4 (a), the velocity vectors (indicated by the white arrows) more than 4 mm away from the surface of the teeth begin to deflect and point away from the surface of the teeth. This differs from what was observed in the P 1 data collection planes positioned closer to the point, as shown in Fig.  3 . In Fig.  4 (b), however, the velocity vectors rotate to point perpendicular to the surface of the teeth. These results suggest that there are regions where the droplets move in opposite directions, indicating chaotic motion within the splatter cone. It should be noted that both the \(u\) and \(v\) components of velocity are reduced compared with Fig.  3 , as shown in Appendix Fig. 10 (a, b). Recently, Han et al. ( 2021 ) used fluorescent dye to color the liquid and illuminate, within a laser sheet, the splatter generated by an ultrasonic scaler, triplex syringe, high-speed handpiece, and low-speed handpiece. Although, unlike our work, they did not measure velocity, they showed the splatter formation at various locations around the procedure site by illuminating planes at different orientations to the scaler around a mock patient’s mouth. The flow rate of the ultrasonic scaler they used, 40 ml/min, was slightly higher than in the current study, and it still showed chaotic droplet splatter at various locations around the procedure site. Very recently, Li et al. (2021) used PIV to examine the flow field surrounding a patient during an ultrasonic scaling procedure within a plane parallel to the scaler’s tip, similar to our Cases 1 and 2. They also used a flow rate higher than that in this study (i.e., 50 ml/min) and measured splatter velocities between 0.01 m/s and 6.39 m/s, substantially higher than what was measured in the current study; we believe this is due to the higher flow rate used in their experiments.
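The quoted 97.2% reduction follows directly from the peak velocity magnitudes reported above (1.8 m/s in the 3 mm plane versus 0.05 m/s in the 20 mm plane), as this small check shows:

```python
# Percent decrease in peak splatter velocity between P 1 planes,
# using the values quoted in the text.
v_near = 1.8   # m/s, maximum |U| at 3 mm from the scaler tip
v_far = 0.05   # m/s, maximum |U| at 20 mm from the scaler tip
decrease = 100 * (1 - v_far / v_near)
print(f"{decrease:.1f}%")  # ≈ 97.2%, matching the reduction reported above
```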

Fig. 4 Case 1: The far-field velocity magnitudes for the P 1 plane at a location a 15 mm and b 20 mm from the tip of the CUS

The locations of the droplets within the plane located 20 mm from the point of the scaler can be seen in Fig.  5 (a). These droplets are likely candidates for carrying viral loads. Figure  5 (b) demonstrates that multiple droplets within the P 1 data collection plane do not follow the average motion of the droplets near the gumline of the teeth model. The droplets near the gumline show very low velocities compared with those farther from it; the dark blue color corresponds to droplets with very small velocity. Droplets farther from the teeth have higher velocities and are able to move out and away from the mouth. It should be noted that Fig.  5 (b) shows a limited number of high-velocity droplet trajectories because it displays the trajectories of only 20% of the detected droplets; this reduction was applied so that individual trajectories could be easily tracked and observed. It is therefore reasonable to assume that more high-velocity droplets propagate away from the teeth within the plane 20 mm from the CUS point. These droplets will eventually evaporate and can seed the atmosphere with viral particles. Clearly, as the droplets move away from the mouth, their trajectories are dictated by ambient air flows.

Fig. 5 a The droplet locations at one instant in time for Case 1, P 1 plane with the scaler at a 0° angle, with the laser sheet 20 mm from the point of the Cavitron. b The particle trajectories for 20% of the droplets identified at the same laser sheet location. The color bars correspond to the velocity magnitude of the detected individual droplets

To further characterize the droplets’ sizes and velocities and to understand how the droplets propagate into the environment, we applied a shadowgraphy technique combined with an eigen-based particle characterization method (Higham et al. 2019 ), using the scaler/teeth setup of Case 1 (P 1 plane orientation with the CUS at 0° from the x -axis). We considered the flow rate reported in Mirbod et al. ( 2021 ) (29.5 ml/min), which is close to the flow rate used in this study’s OFTV experiments, and compared these results with a lower flow rate of 16.2 ml/min obtained in the shadowgraphy experiments. Two different flow control mechanisms on the CUS were manipulated to achieve the different flow rates; the lower flow rate is closer to what is typically used in dental practice with a CUS. The shadowgraphy procedure has been discussed in detail in our previous publications (Haffner and Mirbod 2020 ; Mirbod et al. 2021 ; Wu and Mirbod 2018 ). From the raw shadowgraphy images, an in-house detection code determines the size and location of each droplet. The code first binarizes each raw image using an adaptive threshold, then determines circular regions, i.e., droplets, using an adaptive Hough transform (Illingworth and Kittler 1987 ), and finally computes the droplet velocities with the OFTV method; however, instead of using the eigen features for droplet detection, we employ the centroids determined by the Hough transform.

Figure  6 (a, b) shows the mass fraction of the detected droplets and the corresponding velocities at the onset of the scaler for these two flow rates. The dimensionless parameters for 16.2 ml/min are summarized in Table 2 ; the equations for the Reynolds, gaseous Weber, and Ohnesorge numbers are reported in Sect.  2 . We further calculated the relaxation time, \(\overline{\tau }_{o} = \rho_{f} \overline{d}_{p}^{2} /18\mu_{air}\) , with the average droplet diameter \(\overline{d}_{p}\) , which was calculated to be 70 μm for the distribution of particle sizes ranging from 23.4 μm to 254.2 μm. Here, \(\rho_{f}\) is the density of water, and \(\mu_{air}\) is the air viscosity. The Stokes number describing the settling behavior of the droplets is reported as \(St = \overline{\tau }_{o} u_{o} /d_{p}\) (van der Voort et al. 2018) using the minimum \(d_{p}\) value of 23.4 μm and the maximum value of 254.2 μm. Since the calculated Stokes number is greater than 1, the measured droplets are considered inertial, i.e., they retain their initial trajectories while falling under gravity (Fouxon 2012).
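For illustration, the relaxation-time and Stokes-number definitions above can be evaluated for the mean droplet diameter; the fluid properties and the characteristic speed \(u_o\) below are nominal assumptions (water drops in air, near-tip speed of order 1.5 m/s), not values taken from Table 2.

```python
# Droplet relaxation time and Stokes number per the definitions in the text.
# Property values and u_o are nominal assumptions for illustration.
rho_f = 998.0      # water density, kg/m^3 (assumed)
mu_air = 1.81e-5   # air dynamic viscosity, Pa*s (assumed)
d_p = 70e-6        # mean droplet diameter from the shadowgraphy data, m
u_o = 1.5          # characteristic droplet speed near the tip, m/s (assumed)

tau_o = rho_f * d_p**2 / (18 * mu_air)  # relaxation time, tau = rho*d^2/(18*mu)
St = tau_o * u_o / d_p                  # Stokes number, St = tau*u_o/d_p
print(tau_o, St)  # St >> 1: the measured droplets behave inertially
```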

Fig. 6 a The mass fraction distribution of the measured droplets for the 16.2 ml/min case compared to the 29.5 ml/min case at the onset of the CUS. b The velocity measurements at the various droplet diameters detected for both flow rates of 16.2 ml/min and 29.5 ml/min. The error bars correspond to the standard deviation of the measured velocity for the 0.15 μs of data gathered

Note that the Stokes number largely depends on the particle size, while the particle size depends on both aerosolization and evaporation. For the flow rate of 16.2 ml/min, the detected droplet diameters ranged from 23.4 μm to 254.2 μm. The majority of the droplets measured after the initial large-droplet development fell between 70.9 μm and 127.5 μm, while the maximum diameter measured at this flow rate is less than half of the maximum diameter detected for the 29.5 ml/min case, i.e., 596.7 μm. We expect that while large droplets settle along the positive y -axis, the aerosols generated during this dental procedure contain significant concentrations of low-Stokes-number droplets, potentially including viral particles, that travel within the clinic/office atmosphere.

The mass fraction, shown in Fig.  6 (a), is the ratio of the mass of the droplets detected at a specific size to the total mass of droplets detected. Figure  6 (b) shows that the individual droplet velocities at the lower flow rate are much lower than those at the higher flow rate. For the flow rate of 16.2 ml/min, the smaller droplets have slightly lower velocities, between 0.4 m/s and 0.46 m/s. The two largest droplet diameter ranges, with average sizes of 188.9 μm and 209.2 μm, have higher velocities of approximately 0.56 m/s. While larger droplets may settle quickly, moderate and small droplets can evaporate; these smaller droplets might carry infectious viruses that remain airborne after the droplet evaporates. The resolution of this experimental approach is set by the individual pixels of each camera image, and the resulting data have sub-pixel accuracy; because of this, the relative error within the data set is less than 1%. It is worth noting that the scaler used for the shadowgraphy analysis is slightly different in shape from the one used in the OFTV experiments, although our analysis showed that both produce almost the same droplet sizes and operate in approximately the same manner. Very recently, Ou et al. ( 2021 ) used digital inline holography (DIH) to measure the droplet sizes within a splatter produced by an ultrasonic scaler positioned against different teeth. They determined that 99% of the measured droplets were between 12 μm and 200 μm, which is similar to our findings (Ou et al. 2021 ). Han et al. ( 2021 ) also positioned filter papers around mock dental procedures using ultrasonic scalers. The fluid used in their experiments contained a fluorescent tracer, so they were able to measure the size of the droplets collected on the filter papers. These filter papers were placed 29 cm to 120 cm from the procedure location, and the droplets collected from the scaler were on average 1.38 μm (Han et al. 2021 ). These results reaffirm our observations of the velocity distribution moving away from the CUS and support the conclusion that, due to the aerosolization and evaporation of large droplets, a significant concentration of droplets travels into the room atmosphere.
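The mass-fraction-by-size quantity plotted in Fig. 6(a) can be sketched as follows, assuming spherical droplets of uniform density (so each droplet's mass scales with d³) and an illustrative choice of diameter bins:

```python
import numpy as np

def mass_fraction_by_bin(diameters_um, bins_um):
    """Mass fraction per diameter bin: droplet mass taken proportional to
    d^3 (spherical drops, uniform density), normalized by the total mass.
    A sketch of the quantity in Fig. 6(a); the bin edges are assumptions."""
    d = np.asarray(diameters_um, dtype=float)
    mass = d ** 3                  # proportional mass of each droplet
    idx = np.digitize(d, bins_um)  # assign each droplet to a diameter bin
    frac = np.array([mass[idx == i].sum() for i in range(1, len(bins_um))])
    return frac / mass.sum()
```

Because mass scales with d³, a few large droplets dominate the mass fraction even when small droplets dominate by count, which is why the number-weighted and mass-weighted size distributions differ.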

3.2 Case 2: Plane P 1 , the teeth model at 45° from the horizontal

To better simulate the scenario in a typical dental office, the teeth model is positioned at a \(45^\circ\) angle from the \(x\) -axis with the incisor teeth pointed upwards, and the point of the CUS is rotated to a \(5^\circ\) angle from the surface of the tooth (central incisor #25). We then analyze the P 1 plane to observe how the new position of the teeth/scaler changes the splatter motion.

The velocity distribution within the P 1 data collection plane is depicted at three locations: 3 mm, 6 mm, and 9 mm. Figure  7 (a–c) shows the average \(v\) and \(u\) components of the velocity along with the velocity magnitude for the data collection plane positioned 3 mm from the point of the CUS. The most notable difference in the splatter cone between this case and Case 1 (see Fig.  3 ) is that there is no detectable splatter over the top of the incisor teeth: the splatter is forced entirely down the front surface of the teeth. Figure  7 (c) shows a maximum velocity magnitude of 1 m/s with a splatter spreading length of ~ 10 mm along the \(x\) -direction at an angle of around 45°; compared with the 0° case, the maximum velocity is reduced from 1.8 m/s to 1 m/s, although the length of the cone formed by the splatter remains 10 mm. For all these data collection planes, the \(u\) component of the velocity is always greater than the \(v\) component, with maximum values of 0.9 m/s and 0.4 m/s, respectively. Figure  7 (d, e) corresponds to P 1 data planes located 6 mm and 9 mm from the point of the scaler, respectively. These figures show that the maximum velocity in the splatter is reduced but still contains 5 mm-long regions of maximum velocity. Since the teeth model reflects the light from the laser sheet, part of the splatter data close to the surface of the teeth is washed out. For the P 1 plane located 9 mm from the scaler point, the maximum \(v\) component of velocity is 0.6 m/s and the maximum \(u\) component is 0.23 m/s; these contours are shown in Appendix Fig. 11 . In Fig.  7 (e), the splatter is observed to expand in length to 20 mm; the velocity magnitude there is 0.33 m/s, with the velocity vectors oriented parallel to the teeth surface. The splatter in this view also spreads out to 20 mm in the \(x\) -direction from the surface of the teeth; however, in this region, the velocity magnitude decays to 0.15 m/s.

Fig. 7 The velocity measurements for Case 2: P 1 plane with the teeth model at a 45° angle from the x -axis and the scaler 5° from the surface of the tooth. The a \(v\) component of velocity, b the \(u\) component of velocity, and c the magnitude of the velocity measured in a P 1 plane that is 3 mm from the CUS tip. The velocity magnitude in P 1 planes d 6 mm and e 9 mm away from the CUS tip

To further scrutinize how the splatter moves away from the scaler, we plot the velocity magnitudes in P 1 planes 15 mm and 20 mm from the scaler point in Fig.  8 . The velocity magnitude of the splatter in Fig.  8 (a) appears slightly lower than that in Fig.  7 (e); however, the splatter still extends about 15 mm from the front of the teeth. The velocity magnitude for the plane 20 mm from the scaler is shown in Fig.  8 (b). For all planes closer to the scaler, the velocity vectors (indicated by the white arrows) pointed toward the teeth, the main direction of the water droplets leaving the scaler. In Fig.  8 (b), however, there is a region very close to the surface of the teeth where the velocity vectors point away from the teeth surface with values of 0.05 m/s; these could be droplets reflected from the surface of the teeth. Both the \(v\) and \(u\) velocities are reduced below 0.25 m/s and 0.03 m/s, respectively, within the P 1 plane 15 mm from the scaler point, as shown in Appendix Fig. 13 (a, b).

Fig. 8 The far-field velocity magnitudes for the P 1 plane at a location a 15 mm and b 20 mm from the tip of the CUS. These maps correspond to Case 2

3.3 Case 3: Plane P 2 , the teeth model at 45° from the horizontal

To provide more information on the propagation of the droplets within the splatter, the data collection plane was rotated 90° to create a perpendicular plane, a configuration that has not been studied extensively. The P 1 data collection plane only provides information in a 2D plane parallel to the scaler point orientation. To fully visualize the cone shape of the splatter, the P 2 plane is also examined in the orientation shown in Fig.  1 (c). Previous literature has only shown visualization of a parallel plane or a plane at an angle with respect to the procedure location (Ou et al. 2021 ; Han et al. 2021 ).

Figure  9 (a–c) shows the average velocities for the P 2 plane located 3 mm from the surface of the teeth. In Fig.  9 (a), a high \(v\) component of velocity can be seen close to the orifice of the scaler. The \(w\) component of velocity also has a high value near the orifice, with a maximum of \(\pm\) 0.4 m/s. This likely correlates with the generation of a water spray originating from the ultrasonic scaler, which is not visible in the other cases studied here. Unlike in the other cases, we observe opposing \(w\) components of velocity on either side of the scaler point. The velocity magnitude in this data collection plane, Fig.  9 (c), shows a maximum of 2 m/s near the scaler, which decays to 0.67 m/s as the spray moves out around the teeth model.

Fig. 9 The velocity measurements for Case 3: P 2 plane with the teeth model at a 45° angle from the x -axis and the scaler 5° from the surface of the tooth. The a \(v\) component of velocity, b \(w\) component of velocity, and c the magnitude of the velocity vector with the laser sheet 3 mm away from the front of the teeth. The same average values for d - f the laser sheet 6 mm away from the front of the teeth and for g - i the laser sheet 9 mm away from the teeth’s surface

The velocity components for the P 2 plane located 6 mm from the front of the teeth can be seen in Fig.  9 (d–f). The splatter cone shape and velocity vectors are very similar to those of the data collection plane located 3 mm from the teeth. The \(v\) component of velocity, shown in Fig.  9 (d), has even larger regions at the maximum velocity, 1.9 m/s, than the 3 mm data collection plane. The same holds for the \(w\) component of velocity, with larger areas at the maximum of \(\pm\) 0.4 m/s. It should be noted that as the data collection plane moves farther from the surface of the teeth, it moves closer to the physical location of the orifice on the scaler; the water spray originating from that location thus causes the higher droplet velocities seen in Fig.  9 .

Figure  9 (g–i) shows the velocity vectors for a P 2 data collection plane located 9 mm from the front of the teeth. Both the \(v\) and \(w\) components of velocity have decayed significantly, to a maximum value of 0.63 m/s, and the opposing \(w\) components on either side of the scaler are no longer observed. The width of the splatter shown in Fig.  9 (i) is still 15 mm on either side of the scaler, comparable to the closer data collection planes. In short, these figures show how the splatter splits and moves around the front of the teeth. In this configuration, the majority of velocity vectors within the splatter point in the negative \(y\) direction, suggesting that most of the splatter is contained within the mouth, although a wide splatter cone is still observed in the data collection plane located 9 mm from the front of the teeth. As the droplets move farther from the teeth or the scaler, they have a higher probability of being affected by the ambient airflow.

4 Conclusions

In this work, we explore the cone-shaped splatter pattern created around a mandibular teeth model by a Cavitron ultrasonic scaler in a setting common to dental clinics. We carried out a series of experiments using state-of-the-art techniques, namely OFTV, to measure the global velocity of the droplets in different orientations and planes around a model of adult mandibular teeth. We present quasi-two-dimensional velocity measurements specifically surrounding the front of the teeth; due to experimental limitations, we only examine this field of view, which corresponds to the field of view of the shadowgraphy experiments. The droplets within the splatter were observed to spread as far as 20 mm away from the teeth. Within these regions, the smaller droplets have relatively low velocity, and they are expected to be transported by the ambient air flow within the dental office/clinic. These droplets are the most concerning for the safety of those performing the dental procedure because of their trajectories and their rate of evaporation, which could introduce infective bioaerosols into the receiving atmosphere. Proper safety protocols need to be applied in these regions to remove possible bacteria and viruses from the splatter produced by high-speed dental instruments. While the position of the point of the scaler on the teeth changes the entire splatter pattern, this study only considered two positions based on the orientation of the teeth and the scaler. Also, the shape of patients’ mouths and the variability in the physical and chemical characteristics of different mouths could influence the splatter droplets from this instrument in ways not considered in these results.

It should be noted that the configurations described in this study mimic the scenario in a dental office. The lateral side of the CUS and the point of the tool are typically used to scrape the surface of the teeth to remove built-up plaque and infected tissue. We oriented the CUS and the teeth model in the way the tool is commonly used in actual practice, with the lateral surface of the CUS on the surface of the tooth and the point directed down toward the gumline. The projected splatter outside of the mouth will change with the tooth/CUS orientation. We selected the front tooth because it is the most exposed tooth and would produce the most splatter outside the mouth.

Using the shadowgraphy technique outlined in detail in Mirbod et al. ( 2021 ), individual droplet sizes and velocities were also examined and compared for two different flow rates at which the Cavitron ultrasonic scaler typically operates. The lower flow rate of 16.2 ml/min used in this study is more consistent with conditions typical of dental practice. At this flow rate, the bulk of the droplets are between approximately 71 μm and 128 μm. This corresponds to the results of Ou et al. ( 2021 ), who used DIH and measured droplets within a range of 12 μm to 200 μm. The velocity of these droplets was on average 0.22 m/s; however, the larger droplets (around 200 μm) had higher velocities of 0.28 m/s. Kun-Szabó et al. (2021) also studied the aerosol generated by an ultrasonic scaler used in conjunction with an aerosol-preventing method (a high-volume evacuator or an aerosol exhauster); they measured droplets ranging from 60 μm to 384 μm, with no real distinguishing effects between the two aerosol-preventing mechanisms.

These findings provide a novel understanding of the spray formation created by a scaler using state-of-the-art fluid mechanics experiments, namely the OFTV and shadowgraphy techniques. In practice, dental tools are used to remove the extra fluid produced within the mouth during the procedure, and other tools are used externally to reduce the aerosols produced; the use of these tools and their impacts on splatter propagation have been examined by Peng et al. ( 2020 ) and recently by Kun-Szabó et al. (2021). The goal of our study, however, was to provide insight into the kinematics and dynamics of the droplets and the splatter so as to possibly improve the safety equipment already used in dental procedures. With the onset of the COVID-19 pandemic, extra safety precautions are necessary to protect dental employees. During this investigation, droplets of the splatter were detected as far as 30 mm from the tip of the CUS, where they could evade common dental suction tools; we were unable to analyze the trajectories within this plane due to the sporadic motion of the droplets and the reduced number of droplets detected within that data collection plane. Examining the splatter droplets, their velocities, and their trajectories can help develop safety procedures and serves as a first step toward further characterizing droplet motion and flow transport inside dental offices/clinics. For instance, the measured average velocity and average droplet size for the 16.2 ml/min flow rate have already been used as initial conditions for the computational analysis of aerosol spread discussed in our recent work (Komperda et al. 2021 ). The ultrasonic scaler can produce the highest concentration of droplets throughout its use on a patient (Bennett et al. 2000 ). Holliday et al. ( 2021 ) used fluorescently dyed fluid to examine the splatter caused by drilling in conjunction with a suction tool within an open dental clinic, specifically examining how the droplets propagated through the clinic while various suction tools were applied. Additionally, Liu et al. ( 2019 ) utilized PIV to study the velocity of the splatter droplets of an ultrasonic scaler without a suction tool, and Plog et al. ( 2020 ) investigated the reduction of aerosol propagation using viscoelastic fluids, with background illumination to determine droplet sizes for different fluids. These studies provide insights to improve safety conditions during dental procedures; however, they do not discuss the droplets’ size, location, and velocity simultaneously and comprehensively, as presented in this study.

Any of the droplets produced could contain harmful pathogens. This study also shows how droplets propagate, leading to harmful aerosols or settling on surfaces around the procedure site (Peng et al. 2020 ). Other research groups are working to develop innovative ways to stop this propagation of splatter (Gandolfi et al. 2020 ), and we believe our results provide insight into the nature of the droplets within the splatter to aid in the design of such safety equipment. Future experimental work will examine various fields of view above and below the one shown in this study.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Bahl P, de Silva CM, Chughtai AA, MacIntyre CR, Doolan C (2020a) An experimental framework to capture the flow dynamics of droplets expelled by a sneeze. Exp Fluids 61:1–9


Bahl P, Doolan C, de Silva C, Chughtai AA, Bourouiba L, MacIntyre CR (2020) Airborne or droplet precautions for health workers treating COVID-19? J Infect Diseas. https://doi.org/10.1093/infdis/jiaa189

Beggs CB (2020) Is there an airborne component to the transmission of COVID-19?: a quantitative analysis study. medRxiv

Bennett A, Fulford M, Walker J, Bradshaw D, Martin M, Marsh P (2000) Microbial aerosols in general dental practice. Br Dent J 189:664–667

CDC Best practices for environmental cleaning in healthcare facilities in resource-limited settings. US Department of Health and Human Services, Atlanta, GA; Infection Control Africa Network, Cape Town, South Africa

Bourouiba L, Dehandschoewercker E, Bush JW (2014) Violent expiratory events: on coughing and sneezing. J Fluid Mech 745:537–563

Cao X, Liu J, Jiang N, Chen Q (2014) Particle image velocimetry measurement of indoor airflow field: A review of the technologies and applications. Energy and Buildings 69:367–380

Chao CYH, Wan MP, Morawska L et al (2009) Characterization of expiration air jets and droplet size distributions immediately at the mouth opening. J Aerosol Sci 40:122–133

Coulthard P (2020) Dentistry and coronavirus (COVID-19)-moral decision-making. Br Dent J 228:503–505

Fullmer WD, Higham JE, LaMarche CQ, Issangya A, Cocco R, Hrenya CM (2020) Comparison of velocimetry methods for horizontal air jets in a semicircular fluidized bed of Geldart Group D particles. Powder Technol 359:323–330

Gandolfi MG, Zamparini F, Spinelli A, Sambri V, Prati C (2020) Risks of aerosol contamination in dental procedures during the second wave of COVID-19—experience and proposals of innovative IPC in dental practice. Int J Environ Res Public Health 17:8954

Gralton J, Tovey E, McLaws M-L, Rawlinson WD (2011) The role of particle size in aerosolised pathogen transmission: a review. J Infect 62:1–13

Haffner EA, Mirbod P (2020) Velocity measurements of dilute particulate suspension over and through a porous medium model. Physics of Fluids 32:083608

Han P, Li H, Walsh LJ, Ivanovski S (2021) Splatters and aerosols contamination in dental aerosol generating procedures. Appl Sci 11:1914

Harrel SK, Molinari J (2004) Aerosols and splatter in dentistry: a brief review of the literature and infection control implications. J Am Dent Assoc 135:429–437

Higham J, Brevis W (2019) When, what and how image transformation techniques should be used to reduce error in Particle Image Velocimetry data? Flow Meas Instrum 66:79–85

Higham JE, Brevis W, Keylock CJ (2016) A rapid non-iterative proper orthogonal decomposition based outlier detection and correction for PIV data. Meas Sci Technol 27(12):125303–125310. https://doi.org/10.1088/0957-0233/27/12/125303

Higham JE, Vaidheeswaran A, Benavides K, Shepley P (2019) Eigenparticles: characterizing particles using eigenfaces. Granul Matter 21(3). https://doi.org/10.1007/s10035-019-0900-z

Holliday R, Allison JR, Currie CC et al (2021) Evaluating contaminated dental aerosol and splatter in an open plan clinic environment: Implications for the COVID-19 pandemic. J Dentistry 105:103565

Illingworth J, Kittler J (1987) The adaptive Hough transform. IEEE Trans Pattern Anal Mach Intell PAMI-9:690–698

Jeswin J, Jam H (2012) Aerosol: A silent killer in dental practice. Ann Essences Dent 4:55–59


Komperda J, Peyvan A, Li D et al (2021) Computer simulation of the SARS-CoV-2 contamination risk in a large dental clinic. Phys Fluids 33:033328

Liu M-H, Chen C-T, Chuang L-C, Lin W-M, Wan G-H (2019) Removal efficiency of central vacuum system and protective masks to suspended particles from dental treatment. PloS one 14:e0225644

Lubarsky E, Reichel JR, Zinn BT, McAmis R (2010) Spray in crossflow: Dependence on Weber number. J Eng Gas Turbines Power 132(2):021501

Lucas BD, Kanade T (1981) An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), pp 674–679

Lucas BD, Kanade T (1985) Optical navigation by the method of differences. In: Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI), pp 981–984

Mahajan R, Singh P, Murty G, Aitkenhead A (1994) Relationship between expired lung volume, peak flow rate and peak velocity time during a voluntary cough manoeuvre. Br J Anaesth 72:298–301

Majidi K, Club HD (2020) Dental Clinic Aerosol Management with Aero-Shield.

Mella D, Brevis W, Higham J, Racic V, Susmel L (2019) Image-based tracking technique assessment and application to a fluid–structure interaction experiment. Proc Inst Mech Eng C J Mech Eng Sci 233:5724–5734

Mirbod P, Haffner EA, Bagheri M, Higham JE (2021) Aerosol formation due to a dental procedure: insights leading to the transmission of diseases to the environment. J R Soc Interface 18:20200967

Ou Q, Placucci RG, Danielson J et al (2021) Characterization and mitigation of aerosols and splatters from ultrasonic scalers. medRxiv

Peng X, Xu X, Li Y, Cheng L, Zhou X, Ren B (2020) Transmission routes of 2019-nCoV and controls in dental practice. Int J Oral Sci 12:1–6

Plog J, Wu J, Dias YJ, Mashayek F, Cooper LF, Yarin AL (2020) Reopening dentistry after COVID-19: Complete suppression of aerosolization in dental procedures by viscoelastic Medusa Gorgo. Phys Fluids 32:083111

Poulain S, Bourouiba L (2019) Disease transmission via drops and bubbles. Phys Today 72(5):70

Raghunath N, Meenakshi S, Sreeshyla H, Priyanka N (2016) Aerosols in dental practice-A neglected infectious vector. Microbiology Research Journal International:1–8

Rajeev K, Kuthiala P, Ahmad FN et al (2020) Aerosol Suction Device: Mandatory Armamentarium in Dentistry Post Lock Down. J Adv Med Dental Sci Res 8(4):81–83

Scharfman B, Techet A, Bush J, Bourouiba L (2016) Visualization of sneeze ejecta: steps of fluid fragmentation leading to respiratory droplets. Exp Fluids 57:24

Settles GS (2012) Schlieren and shadowgraph techniques: visualizing phenomena in transparent media. Springer, Berlin


Tang JW, Nicolle A, Pantelic J et al (2012) Airflow dynamics of coughing in healthy human volunteers by shadowgraph imaging: an aid to aerosol infection control. PLoS One 7:e34818

Tang JW, Nicolle AD, Klettner CA et al (2013) Airflow dynamics of human jets: sneezing and breathing-potential sources of infectious aerosols. PLoS One 8:e59970

VanSciver M, Miller S, Hertzberg J (2011) Particle image velocimetry of human cough. Aerosol Sci Technol 45:415–422

WHO Decontamination and reprocessing of medical devices for health-care facilities. World Health Organization, Geneva

Wu Z, Mirbod P (2018) Experimental analysis of the flow near the boundary of random porous media. Physics of Fluids 30:047103

Xie X, Li Y, Sun H, Liu L (2009) Exhaled droplets due to talking and coughing. J R Soc Interface 6:S703–S714

Yadav N, Agrawal B, Maheshwari C (2015) Role of high-efficiency particulate arrestor filters in control of air borne infections in dental clinics. SRM J Res Dental Sci 6:240

Zhu S, Kato S, Yang J-H (2006) Study on transport characteristics of saliva droplets produced by coughing in a calm indoor environment. Build Environ 41:1691–1702

Zigan L, Schmitz I, Wensing M, Leipertz A (2012) Reynolds number effects on atomization and cyclic spray fluctuations under gasoline direct injection conditions. In: Fuel systems for IC engines. Elsevier, Amsterdam


Acknowledgements

This work was funded by the University of Illinois at Chicago (UIC) – College of Dentistry (Grant No. 200258-323152).

Author information

Authors and affiliations

Department of Mechanical and Industrial Engineering, University of Illinois At Chicago, Chicago, IL, USA

E. A. Haffner, M. Bagheri, F. Mashayek & P. Mirbod

School of Environmental Sciences, University of Liverpool, Liverpool, UK

J. E. Higham

College of Dentistry, University of Illinois At Chicago, Chicago, IL, USA

L. Cooper, S. Rowan & C. Stanford


Contributions

PM designed the research; EAH, MB, and JEH performed the research; EAH, PM, and JEH analyzed the data; and EAH, PM, MB, and JEH wrote the manuscript. FM, LC, SR, and CS provided the Cavitron ultrasonic scaler and contributed resources and insights on the procedure and the problem observed.

Corresponding author

Correspondence to P. Mirbod .

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Figures 10, 11, 12, and 13

figure 10

Case 1 setup: \(v\) (left column) and \(u\) (right column) components of velocity at plane P 1 , ( a , b ) 6 mm and ( c , d ) 9 mm away from the tip of the CUS

figure 11

Case 1 setup: \(v\) (left column) and \(u\) (right column) components of velocity at plane P 1 , ( a , b ) 15 mm and ( c , d ) 20 mm away from the tip of the CUS

figure 12

Case 2 setup: \(v\) (left column) and \(u\) (right column) components of velocity at plane P 1 , ( a , b ) 6 mm and ( c , d ) 9 mm away from the tip of the CUS

figure 13

Case 2 setup: \(v\) (left column) and \(u\) (right column) components of velocity at plane P 1 , ( a , b ) 15 mm and ( c , d ) 20 mm away from the tip of the CUS


About this article

Haffner, E.A., Bagheri, M., Higham, J.E. et al. An experimental approach to analyze aerosol and splatter formations due to a dental procedure. Exp Fluids 62 , 202 (2021). https://doi.org/10.1007/s00348-021-03289-2

Download citation

Received : 09 April 2021

Revised : 20 August 2021

Accepted : 22 August 2021

Published : 18 September 2021

DOI : https://doi.org/10.1007/s00348-021-03289-2


  • Open access
  • Published: 14 September 2024

Damage source localisation in complex geometries using acoustic emission and acousto-ultrasonic techniques: an experimental study on clear aligners

  • Claudia Barile 1 ,
  • Claudia Cianci 1 ,
  • Vimalathithan Paramsamy Kannan 1 ,
  • Giovanni Pappalettera 1 ,
  • Carmine Pappalettere 1 ,
  • Caterina Casavola 1 ,
  • Michele Laurenziello 2 &
  • Domenico Ciavarella 2  

Scientific Reports volume  14 , Article number:  21467 ( 2024 ) Cite this article


  • Biomedical engineering
  • Mechanical engineering

Passive non-destructive evaluation tools such as acoustic emission (AE) testing and the acousto-ultrasonic (AU) approach face a complex problem in damage localisation within complex, non-homogeneous geometries. A novel AU-guided AE frequency interpretation approach is proposed in this research work to overcome this limitation. For the experimental evaluation, the damage sources in geometrically complex clear dental aligners, tested under cyclic compression load, are evaluated and their origins identified. Despite the rapid worldwide diffusion of clear aligners, their mechanical behaviour is poorly investigated. In this work, the frequency characteristics of artificially simulated stress waves, generated from different dental positions of the clear aligners, are studied using the AU approach. These frequency characteristics are then used to analyse the AE signals generated by the aligners when subjected to cyclic compressive loading. In addition, the time-domain characteristics of the AE signals are studied using their Time of Arrival (ToA), estimated with the Akaike Information Criterion (AIC). These frequency and time-domain characteristics of the AE signals are used to estimate the local damage origin in the clear dental aligners, which will help in identifying localised damage sources during the usage period of the aligners. Experimental results revealed significant damage at the left maxillary premolar and the right maxillary third molar of the aligners.


Introduction

The wide availability of data processing and signal processing algorithms, supported by powerful hardware, has made the Acoustic Emission (AE) technique one of the most formidable Non-Destructive Evaluation (NDE) tools. In particular, the growth of the AE technique over the last decade has been exponential 1 , 2 , 3 . Despite this tremendous growth, research into the propagation of acoustic waves in complex shapes and structures is very limited. Modelling acoustic wave propagation in a non-homogeneous medium of complex geometry is quite challenging. In addition, the classification of damage sources based on source location is complex 4 .

Over the years, several methods have been developed to classify damage sources and localise the sources of acoustic emission. Among them, the triangulation principle 5 , the Akaike Information Criterion (AIC) 6 , 7 to estimate the arrival time, and the sideband peak frequency 8 have proven successful in source localisation. Signal-based approaches such as time–frequency analysis and frequency-based analysis 9 , 10 , 11 , and parameter-based approaches such as data classification and clustering, are used for damage source classification 12 , 13 , 14 . Some research aims to bridge the gap between the signal-based and parameter-based approaches by using information-theoretic parameters such as entropy and complexity indices for this purpose 11 , 13 . Extensive research has been carried out over the years to improve the applicability of these techniques to inhomogeneous materials 8 , 15 . Nevertheless, their applications are limited to structures with geometrical regularity. This is due to the lack of information on the characteristics of the AE signals from damage sources in a geometrically irregular structure.

Acousto-ultrasonics (AU) is one of the simplest yet most efficient approaches to characterising the propagation of acoustic waves. It can be described as an AE simulation with an ultrasonic source. Acoustic waves are typically associated with the spontaneously released stress waves that accompany plastic deformation or crack growth in a material. The AU approach differs in that the stress waves are induced in the material artificially: a piezoelectric crystal is excited with a voltage burst, and the resulting stress waves propagate through the material and are recorded using an AE sensor (typically a piezoelectric sensor). The recorded acoustic waves reveal information about the propagation path. Based on this approach, qualitative information about the propagating medium and its influence on the characteristics of the acoustic waves can be extracted 16 , 17 . Thus, in this research work, the AU approach is used together with AE testing to characterise the damage sources in a complex geometry.

The test material in this study is one of the fastest-growing orthodontic devices, the clear dental aligner. Recent research has shown that a significant number of adult patients are reluctant to undergo conventional orthodontic treatment for malocclusion because it is less aesthetically pleasing 18 , 19 . Clear dental aligners are designed based on the dental alignment of the patient, obtained from a cast or an intra-oral scanner. Their design is pre-programmed to move the misaligned tooth or group of teeth into the desired position in small increments 20 , 21 . These aligners are used for a short period of time (usually between 7 and 14 days) and are replaced regularly. The pre-designed increments distribute the forces in constant, gradual steps that realign the teeth 22 , 23 . The mechanical performance of the thermoformed aligners is susceptible to degradation due to the thermal stress induced during the manufacturing process and to the occlusal forces during use. Although there has been some research into the various mechanical properties of aligner materials such as polyurethane (PU), polyethylene terephthalate (PET), PET-glycol modified (PET-g), polycarbonate, polyethylene, and polypropylene, very few research works have reported on the actual mechanical performance of the aligner itself 24 , 25 , 26 , 27 , 28 . The available mechanical performance studies are mostly on test specimens in the form of thin plates, dog-bone specimens, or thin discs. It is therefore essential to understand the mechanical behaviour of these aligners under load and to identify the origin of the damage sources.

The objective of this research work is to use the acousto-ultrasonic and acoustic emission techniques to classify the damage sources in thermoformed clear dental aligners. The dental aligners are cyclically loaded for 22,500 cycles, and their mechanical performance and damage sources are analysed using AE testing. AE testing is based on the analysis of stress waves generated by the damage sources of a material or structure under load, so it is essential to have prior information on the frequency components of the stress waves that may be generated by the dental aligners. Since there are no previous studies available on this particular topic and modelling the propagation of acoustic waves in complex geometry is quite difficult, the AU approach is used in this study.

First, the frequency characteristics of the acoustic waves propagating in the complex geometry of the aligners are studied using the AU approach based on Hsu–Nielsen tests. Then, using the information from the AU approach, the damage sources are classified using the AE signals generated by the clear dental aligners under cyclic loading. The characteristics of the AE signals are studied in their frequency domain using the Fast Fourier Transform (FFT) and in their time domain using the Akaike Information Criterion (AIC). Finally, the results are validated using microscopic analysis of the tested aligners. In short, this work uses an integrated acoustic emission and acousto-ultrasonic test method to evaluate the damage source origin in an otherwise complex component.
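To make the ToA estimation concrete, the sketch below implements the standard two-window AIC picker in Python. This is not the authors' code; the waveform, sample rate, and onset sample used in the demonstration are synthetic, chosen only to mirror the acquisition settings described later (2048 samples at 2 MHz).

```python
import numpy as np

def aic_onset(x):
    """Akaike Information Criterion picker: AIC(k) is minimal at the
    sample k that best splits the waveform into a low-variance noise
    segment x[:k] and a high-variance signal segment x[k:]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        # guard against log(0) on perfectly flat segments
        aic[k] = k * np.log(max(v1, 1e-20)) + (n - k - 1) * np.log(max(v2, 1e-20))
    return int(np.argmin(aic))

# Synthetic AE-like record: noise followed by a decaying 150 kHz burst
# that arrives at sample 500.
rng = np.random.default_rng(0)
fs = 2.0e6
t = np.arange(2048) / fs
x = 0.01 * rng.standard_normal(2048)
x[500:] += np.sin(2 * np.pi * 150e3 * t[500:]) * np.exp(-5e3 * (t[500:] - t[500]))
onset = aic_onset(x)            # index of the estimated onset sample
toa_seconds = onset / fs        # Time of Arrival in seconds
```

The picker is deliberately brute-force (one variance pair per candidate split) to keep the formulation visible; production implementations vectorise the cumulative variances.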

Materials and methods

Preparation of clear dental aligners

The dental alignment of a patient is reconstructed in 3D using 3Shape OrthoAnalyzer® software with a permissible accuracy of 6.9 μm. (Note: no experiments were conducted on the patient; written consent was signed by the patient to acquire their dental record. This study is reported following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for observational studies 29 . All the procedures of this research protocol adhered to the Declaration of Helsinki of 1975, as revised in 2008.) A solid cast of the dental alignment is then produced on a Liquid Crystal HR2 3D printer (Photocentric Ltd.) using daylight hard resin.

The clear dental aligners are prepared from this cast by thermoforming in an Erkodent® Erkoform 3D vacuum machine. Medical-grade polyurethane (PU) and polyethylene terephthalate glycol-modified (PET-g) sheets of 0.75 mm thickness from two suppliers, Bart Medicals Ltd. (commercial name Ghost Aligner®) and Dentsply Sirona Inc. (commercial name Essix ACE plastic), respectively, are used as the base materials. These thermoplastic sheets are thermoformed over the hard resin cast by first heating them to 160 °C at a pressure of 0.8 bar with a medium-wave infrared heater and then applying a vacuum. The prepared aligners are named after their base materials as TPET-G and TPU (T refers to the thermoformed thermoplastic).

Denominations for the tooth positions

Since the objective of the study is to locate the damage sources at the dental positions of the clear aligners, a standard denomination system is used for the different tooth positions of the thermoformed aligners. The tooth positions are named according to the dental notations of the ISO 3950 standard (Dentistry: Designation system for teeth and areas of the oral cavity). The tooth numbers and their positions in the oral cavity of the upper and lower jaws are presented in Table 1 . For the convenience of the readers, a figure representing the different dental denominations of the maxillary arch is given in Sect. S1 of the Supplementary file.

The tooth numbers in the table can be read as follows: 11 and 21 are the maxillary central incisors in the left and right jaws, respectively, and 18 and 28 are the maxillary third molars in the left and right jaws, respectively.

Acousto-ultrasonic test

As explained briefly in the Introduction, the acousto-ultrasonic test is based on inducing stress waves in the material and recording them with a piezoelectric sensor. A Hsu–Nielsen source (a mechanical pencil with brittle graphite lead) is used to simulate stress waves in the aligner, since it produces a burst-like signal that resembles the signals of growing cracks in thin plates 30 . Moreover, this source generates a wide frequency response, which is useful for the analysis 31 , 32 , 33 . The piezoelectric sensor used in this study is a lightweight miniature PICO sensor (Physical Acoustics) with an operating frequency up to 750 kHz and resonant frequencies at 250 kHz and 550 kHz. The sensor signals are amplified by 40 dB and registered in a PAC PCI-2 data acquisition system at a sample rate of 2 MHz. The waveforms are recorded with a length of 2048 samples and a pre-trigger of 512 samples. The pre-trigger tells the data acquisition system how much of the continuous waveform to record before the sample that exceeds the threshold; the time-domain parameters extracted from the waveforms are therefore pre-trigger dependent. In this study, since the waveforms are recorded at a sample rate of 2 MHz, waveforms of length \(1.024\times {10}^{-3}\) s are recorded with a pre-trigger of \(0.256\times {10}^{-3}\) s.
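The record length and pre-trigger duration quoted above follow directly from the sample rate; a minimal sanity check of the arithmetic:

```python
# Acquisition settings reported above: 2 MHz sample rate,
# 2048-sample waveforms, 512-sample pre-trigger.
fs = 2.0e6              # samples per second
n_samples = 2048
n_pretrigger = 512

record_length = n_samples / fs        # 1.024e-3 s of waveform per record
pretrigger_time = n_pretrigger / fs   # 0.256e-3 s recorded before the trigger
```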

The sensor is held firmly onto the cast by a mechanical clip. A thin layer of silicone grease is applied between the sensor surface and the cast to ensure good acoustic coupling. The position of the sensor is close to the third molar position (see Supplementary S1 for tooth positions and the position of the sensor). This position is selected since it is the flattest position on the dental cast where the sensor makes full contact with the surface.

The cast and the aligners are mounted on a universal testing machine, an INSTRON 3344 with a 1 kN load cell, in order to simulate the same conditions as the cyclic loading described in the subsequent sections. The upper and lower casts are held firmly at a constant pressure of 10 MPa. The test is carried out in two modes, without the aligners (propagation test on the casts) and with the aligners, in order to understand the attenuation and dispersion of the acoustic waves propagating from the aligners through the casts. First, the casts are brought into contact with a minimum compressive load of approximately 5 N to ensure that there are no vibrations or external interferences during the AU test. Then, for the second mode, the clear dental aligner is mounted on the upper cast and the same procedure is followed as before.

The AU test is performed on six different tooth positions, teeth 18, 14, 11, 21, 24 and 28: the maxillary third molars, the maxillary first premolars, and the maxillary central incisors. The acoustic events are generated from the flat mid position of each tooth (refer to the Supplementary S1 section for more information). The AU test is repeated 3 times for each tooth position, simulating 3 single acoustic events each time.

Cyclic compression tests

The mechanical performance of the dental aligners is studied under cyclic loading. The cyclic test is designed to simulate the occlusal forces acting on the aligner during swallowing over the period of aligner usage. The tests are conducted in a load-controlled mode, with the cyclic compression load applied at a frequency of f = 0.25 cycles/s. The loading stages of a single cycle are as follows:

Stage 1 Compression load ramped up from 0 to 50 N in 1 s.

Stage 2 Dwell time of 1 s with the compression load 50 N.

Stage 3 Compression load ramped down from 50 to 0 N in 1 s.

Stage 4 Dwell time of 1 s at 0 N.

For the convenience of the readers, a schematic representation of the loading cycles is presented in the Supplementary file under the Sect. S2 .

It is reported that the maximum occlusal force exerted on the jaws of a human during a swallowing action is 50 N 34 , 35 , 36 . Similarly, the occlusal contact during swallowing lasts for a maximum of 1 s 34 . Based on these factors, the compression load and dwell time for the cyclic compression loads are selected. The test is run for 22,500 cycles, which corresponds to the average swallowing action in a two-week period, the average usage time of the clear aligners 25 , 37 . There is no established median for the number of swallows, even in a healthy human population; both early and recent studies report that a healthy human swallows between 203 and 1008 times a day 38 , 39 . To obtain favourable testing conditions that evaluate the mechanical performance under realistic usage conditions, in the present study the number of swallows is set to 1500 per day for an aligner usage time of 15 days.
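The four stages above define a simple piecewise-linear (trapezoidal) load profile. The sketch below is an illustration of that profile, not the machine's control program; the function name and parameters are ours.

```python
def load_cycle(t, peak=50.0, period=4.0):
    """Compression load (N) at time t (s) for the trapezoidal cycle:
    1 s ramp up to `peak`, 1 s dwell, 1 s ramp down, 1 s dwell at zero
    (f = 0.25 cycles/s)."""
    tau = t % period
    if tau < 1.0:
        return peak * tau            # stage 1: ramp 0 -> 50 N
    elif tau < 2.0:
        return peak                  # stage 2: dwell at 50 N
    elif tau < 3.0:
        return peak * (3.0 - tau)    # stage 3: ramp 50 -> 0 N
    return 0.0                       # stage 4: dwell at 0 N

# 22,500 cycles of 4 s each: 25 h of testing in total,
# i.e. 1500 simulated swallows/day for 15 days of usage.
total_hours = 22_500 * 4.0 / 3600
```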

To simulate the oral environment during the cyclic test, the aligner is kept in contact with artificial saliva solution for the entirety of the test (the chemical composition of the artificial saliva solution is provided in Supplementary S3 ). For that, a sponge is impregnated with the artificial saliva solution and placed so that it does not impede the loading while staying in contact with the aligner. The entire setup is enclosed in cellulose hydrate film. The test setup of the dental aligner on the cast and the final test setup are presented in Fig.  1 a,b, respectively.

figure 1

( a ) Test setup showing the casts and the aligner. ( b ) Final test setup including sponge, envelope for saliva, and AE sensor.

The energy absorbed by the aligners at different cycles and their respective stiffness values are calculated from the load–displacement hysteresis curves. The energy absorbed is the total area enclosed by the hysteresis loop, while the stiffness is calculated from the slope of the curve during the loading stage.
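On sampled load–displacement data, these two quantities reduce to a discrete loop integral (shoelace formula) and a straight-line fit. The sketch below is a hypothetical illustration, checked against a synthetic elliptical loop whose enclosed area is known analytically; the function names and the fit window are our choices.

```python
import numpy as np

def hysteresis_energy(load, disp):
    """Energy absorbed per cycle: area enclosed by the closed
    load-displacement loop, via the discrete shoelace formula."""
    load = np.asarray(load, dtype=float)
    disp = np.asarray(disp, dtype=float)
    return 0.5 * abs(np.dot(disp, np.roll(load, -1))
                     - np.dot(load, np.roll(disp, -1)))

def loading_stiffness(load, disp, n_fit=50):
    """Stiffness: slope of a straight-line fit to the first n_fit
    points of the loading branch."""
    slope, _ = np.polyfit(disp[:n_fit], load[:n_fit], 1)
    return slope

# Synthetic loop: linear spring (100 N/mm) with a phase lag; the
# resulting ellipse encloses an area of pi * 100 * sin(0.1) N*mm.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
disp = 1.0 + np.cos(theta)                    # mm
load = 100.0 * (1.0 + np.cos(theta - 0.1))    # N
energy = hysteresis_energy(load, disp)

# Stiffness check on an ideal linear loading branch (100 N/mm)
d = np.linspace(0.0, 1.0, 100)
k = loading_stiffness(100.0 * d, d)
```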

Optical microscopy

For the fractographic analysis of the tested aligners, the damaged surfaces are observed under an optical microscope (NIKON SMZ800) with a maximum magnification of ×6.3. The aligners are cleaned with acetone and dried before observation under the microscope, with a white light source used for illumination.

Propagation of acoustic waves through the dental aligners

Representative AE waveforms in their time domain propagated through the dental cast (without the dental aligner) and their respective frequency domain characteristics (FFT results) are presented in Fig.  2 a,b, respectively.

figure 2

( a ) Acoustic waves propagated through the hard dental cast, represented in their time-domain and ( b ) their frequency characteristics in the FFT spectrum.

The sensor is placed close to the third molar (tooth 18), and the peak amplitudes of the simulated AE signals from this position are higher than those from the other positions. In particular, the AE signals from the incisors (teeth 11 and 21) are very weak. The greater attenuation in signal propagation as the distance between the source and the sensor increases is expected 4 , 40 .

The frequency components of the propagated signals are centred in the 100–200 kHz band. The peak frequencies of the signals from teeth 18 and 28 are 123 kHz and 185 kHz, respectively. A trend can be seen in the frequency domain: the peak frequency shifts towards higher frequencies at different tooth positions; for example, the peak frequency of the AE signals from tooth 24 is 169 kHz. On the other hand, the signals from the incisors (teeth 11 and 21) show a large amount of dispersion. Comparing the results of Fig.  2 a,b, it can be observed that, apart from the attenuation due to the propagation distance of the AE signals, a large amount of dispersion also occurs when the AE source is far from the sensor. This can be attributed both to the inhomogeneity of the propagating medium and to the complex propagation path 4 , 16 , 17 . Nevertheless, it can be ascertained that the central frequencies of the propagated AE waves lie between 100 and 200 kHz.
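Peak frequencies such as the 123 kHz and 185 kHz values above come from the magnitude of the waveform's FFT. A minimal sketch, with a synthetic burst standing in for a recorded waveform (the function and signal are ours, not the authors' processing code):

```python
import numpy as np

def peak_frequency(waveform, fs=2.0e6):
    """Dominant frequency (Hz) of a waveform from the magnitude of its
    real FFT, ignoring the DC bin."""
    spectrum = np.abs(np.fft.rfft(waveform))
    spectrum[0] = 0.0                      # discard the DC component
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic 123 kHz burst sampled at 2 MHz over 2048 samples,
# mimicking the acquisition settings of the AU test.
fs = 2.0e6
t = np.arange(2048) / fs
burst = np.sin(2.0 * np.pi * 123e3 * t) * np.exp(-t / 2e-4)
f_peak = peak_frequency(burst, fs)   # close to 123 kHz (bin width ~977 Hz)
```

With a 2048-sample record the frequency resolution is fs/N ≈ 977 Hz, so the recovered peak lands on the FFT bin nearest the true burst frequency.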

The AU test is continued with the aligners, TPET-G and TPU, positioned on the upper cast. The frequency characteristics of the acoustic waves propagated through these aligners are compared with those propagated through the dental cast alone (without the aligners in place). This gives a clear indication of how the propagation of the acoustic waves is affected when these aligners are mounted.

The frequency characteristics of the AE waves simulated from the maxillary third molars (teeth 18 and 28) are presented in Fig.  3 a,b, maxillary first premolars (teeth 14 and 24) in Fig.  3 c,d, and central incisors (teeth 11 and 21) in Fig.  3 e,f, respectively.

figure 3

Frequency characteristics of the acoustic waves propagated through the aligners compared with the ones propagated through the Dental Cast; simulated from ( a ) Tooth 18, ( b ) Tooth 28, ( c ) Tooth 14, ( d ) Tooth 24, ( e ) Tooth 11, and ( f ) Tooth 21 Positions.

The difference in amplitude of the propagated waves from symmetrical tooth positions such as 18/28, 14/24 and 11/21 arises because the sensor is placed in a non-axisymmetric position, as explained in the previous section. Therefore, this difference in amplitudes is not considered in the discussion.

When the acoustic waves are propagated through the dental aligners, a large amount of attenuation can be observed in both cases, teeth 18 and 28. The spectral amplitude of the signals drops from \(8.5\times {10}^{-3}\) a.u. to \(3.2 \times {10}^{-3}\) a.u. in TPET-G and \(2.7\times {10}^{-3}\) a.u. in TPU (see Fig.  2 a). However, there is little to no dispersion in the low frequency components; in fact, the peak frequencies are still around 125 kHz. On the other hand, the high frequency components vanish completely when the acoustic waves are simulated through the aligners: the spectral amplitudes of the frequency components above 250 kHz disappear entirely. This is not simply attenuation of low-magnitude components, because the low-magnitude/low-frequency components (even below 50 kHz) are preserved. Similar observations can be made in Fig.  2 b, where the acoustic waves are generated from tooth 28: the high frequency components above 250 kHz vanish entirely, while large attenuation and little to no dispersion are observed in the frequency components between 100 and 200 kHz.

Similar to the propagation characteristics from the maxillary third molars, the frequency components above 250 kHz vanish when the acoustic waves are propagated from teeth 14 and 24. A significant amount of attenuation between the waves propagated through the cast and through the aligners can also be observed in Fig.  3 c,d, with little to no dispersion. In this case too, the significant frequency components of the propagated waves lie between 100 and 200 kHz.

A large amount of attenuation in the propagated acoustic waves is observed even in the absence of the aligners in Fig.  2 a,b, owing to the propagation distance between the source location and the sensor. Although the major frequency components lie between 100 and 200 kHz, a definite frequency peak cannot be identified as in the other two cases. The waves propagated through the aligners are attenuated even further, particularly in TPET-G (see Fig.  3 e,f). It can be surmised that, when the source is located far from the sensor, as is the case for the central incisors, the acoustic waves propagated through the dental aligners cannot be relied upon for analysis. This means that, if damage sources are located at the central incisors during mechanical loading of the aligners, a large amount of attenuation and dispersion can be expected in the propagated acoustic waves, which may even vanish entirely if their amplitude is very low.

This could, of course, be rectified by placing multiple sensors at different dental positions. However, the complex geometry of the cast and its irregular surface do not permit the positioning of multiple sensors.

Cyclic test results

The cyclic test results are presented and discussed in terms of the energy absorption and stiffness changes in the aligners over all the loading cycles. For each cycle, the load–displacement hysteresis curve is extracted. The energy absorbed by the aligner is calculated as the area within the hysteresis loop, and the stiffness is calculated as the first slope of the bilinear stiffening in the loading phase of the hysteresis curve. For the sake of brevity, the hysteresis curves are not presented here; they can be found in the Supplementary file under Sect. S4 . For detailed information about the mechanical performance of the aligners, readers are referred to the authors’ previous article 41 .
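
The two descriptors defined here can be computed directly from each cycle's load–displacement data. The following is a minimal sketch (illustrative functions, not the authors' implementation; the loading-branch fraction used for the stiffness fit is an assumption):

```python
import numpy as np

def absorbed_energy(displacement, load):
    """Energy dissipated in one cycle (N*mm): the area enclosed by the
    load-displacement hysteresis loop, via the shoelace formula."""
    x, y = np.asarray(displacement), np.asarray(load)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def initial_stiffness(displacement, load, frac=0.3):
    """Stiffness (N/mm): slope of a linear fit over the first `frac` of
    the loading branch, approximating the first slope of the bilinear
    stiffening described in the text."""
    n = max(2, int(len(displacement) * frac))
    return np.polyfit(displacement[:n], load[:n], 1)[0]

# Synthetic elliptical loop with stiffness 100 N/mm; enclosed area = pi*A*B
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
x = 0.5 * np.sin(t)                       # displacement (mm), A = 0.5
y = 100.0 * x + 2.0 * np.cos(t)           # load (N), B = 2.0
print(absorbed_energy(x, y))              # ~ pi (shear term preserves area)

# Stiffness from a purely monotonic loading branch
xs = np.linspace(0.0, 0.5, 100)
print(initial_stiffness(xs, 100.0 * xs))  # ~ 100.0
```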

The energy absorbed by the aligners at different loading stages and the respective variation in their stiffness are calculated from the hysteresis curves and the results are presented in Fig.  4 a,b.

figure 4

Mechanical performance of the clear dental aligners: ( a ) energy absorbed and ( b ) stiffness during the cyclic compression test.

In Fig.  4 a, the energy absorbed by TPET-G is lower than that absorbed by TPU. The energy absorbed by TPET-G increases from 3.6 Nmm at cycle 100 to 4.52 Nmm at cycle 4000. Beyond that, the average energy absorbed by TPET-G is around 4.6 Nmm, and it remains almost stable until the end of the test (22,500 cycles). In TPU, the energy absorbed drops sharply after the first 200 cycles, from 6.8 to 5.6 Nmm, and remains relatively stable until cycle 9000 (average energy absorbed of 5.95 Nmm). After cycle 9000, the energy absorbed increases slightly to 6.4 Nmm, followed by a steep decrease. The energy absorbed at cycle 17,000 is in fact 2.6 Nmm, which is lower than the average energy absorbed by TPET-G.

In Fig.  4 b, the stiffness of TPET-G remains relatively stable, whereas the stiffness of TPU increases steeply up to cycle 17,000 and then stabilises until the end of the test.

Fractographic results using optical microscopy

The inner surfaces of the clear dental aligners are examined after the cyclic compression tests under an optical microscope (NIKON SMZ800). The fractographic analysis is carried out at different dental positions of the aligners; the tooth positions that show significant damage are presented in this section.

Several small cracks of length less than 1 mm and some strain-hardened regions are found in the aligners; these could have arisen during the thermoforming process. In Supplementary S5 , a microscopic image of an untested aligner is presented to show the effects of the thermoforming process.

The fractographic images of TPET-G and TPU aligners at different tooth positions are presented in Figs.  5 and 6 , respectively.

figure 5

Microscopic images of TPET-G taken from different dental positions. ( a ) Tooth 14; ( b ) Tooth 18; ( c ) Tooth 26 and ( d ) Tooth 27.

figure 6

Microscopic images of TPU taken from different dental positions. ( a ) Tooth 17; ( b ) Tooth 18; ( c ) Tooth 21 and ( d ) Tooth 27.

Acoustic emission results from the cyclic compression tests

The AE signals from the cyclic compression tests on the aligners are recorded using the same piezoelectric sensor used in the AU test. The signals are acquired in burst mode, with each ‘hit’ recorded when the signal crosses the detection threshold of 26 dB. The signals are registered at a sampling rate of 2 MHz, as in the AU test, and are recorded for a length of 1024 µs with a pre-trigger of 256 µs.
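
The burst-mode acquisition described here can be sketched as follows. This is an illustrative reconstruction, not the acquisition system's code: the threshold is converted from dB (conventionally referred to 1 µV at the preamplifier) to microvolts, and a fixed-length window with a pre-trigger is cut around the first crossing.

```python
import numpy as np

def detect_hit(signal_uv, threshold_db=26.0, fs=2e6,
               record_us=1024, pretrigger_us=256):
    """Return the first AE 'hit' window, or None if the threshold is
    never crossed. Real systems re-arm after a hit-definition time;
    this sketch extracts only the first hit."""
    thr_uv = 10.0 ** (threshold_db / 20.0)   # 26 dB re 1 uV ~ 20 uV
    n_rec = int(record_us * 1e-6 * fs)       # 2048 samples at 2 MHz
    n_pre = int(pretrigger_us * 1e-6 * fs)   # 512 samples of pre-trigger
    idx = np.flatnonzero(np.abs(signal_uv) >= thr_uv)
    if idx.size == 0:
        return None
    start = max(0, idx[0] - n_pre)
    return signal_uv[start:start + n_rec]

# A quiet stream with one 50 uV burst starting at sample 1500
stream = np.zeros(4096)
stream[1500:1600] = 50.0
hit = detect_hit(stream)
print(len(hit))                              # 2048 samples (1024 us)
```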

Based on the frequency information obtained from the AU results presented above, it can be assumed that the significant frequency components of the recorded AE signals may lie between 100 and 200 kHz.

The AE analysis is carried out in two different modes: in the frequency domain, using the peak frequency of the signal FFT, and in the time domain, using the Time of Arrival (ToA) picked by the Akaike Information Criterion (AIC). AIC is widely used for estimating the ToA of signals (and of non-stationary data in general), and the procedure has been explained by several authors; the detailed procedure is reported by Kitagawa and Akaike 7 . A brief description of the ToA calculation procedure, with an example signal from the TPU dental aligner, is presented in Supplementary Sect. S6 . It should be noted that the ToAs calculated in this study are pre-trigger dependent, like any other time-domain descriptor. However, this does not affect the damage source analysis, since the ToA is not used as a standalone descriptor: the damage source analysis is performed by integrating the time-domain and frequency-domain descriptors with the AU test results.
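
The AIC picker can be written compactly. A common formulation (a sketch consistent with Kitagawa and Akaike's criterion, not necessarily the authors' exact implementation) evaluates, for each candidate split point k, the variances of the samples before and after k; the global minimum of the criterion marks the noise-to-signal transition:

```python
import numpy as np

def aic_onset(x):
    """Akaike Information Criterion onset picker.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))
    The global minimum of AIC marks the transition from noise to
    signal, i.e. the time-of-arrival sample index.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Deterministic example: low-level background, then a burst at sample 300
n = np.arange(1000)
x = 0.01 * np.sin(0.7 * n)                 # pre-onset background
x[300:] += np.sin(0.2 * n[300:])           # arriving wave
k = aic_onset(x)                           # near 300
fs = 2e6
toa_s = k / fs                             # ToA in s (pre-trigger dependent)
```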

The characteristics of the AE signals generated from TPET-G and TPU are compared with their mechanical characteristics (energy absorbed) and are presented respectively in Figs.  7 and 8 .

figure 7

Energy absorbed by TPET-G during the cyclic compression test compared with the AE signal characteristics: ( a ) peak frequency and ( b ) ToA.

figure 8

Energy absorbed by TPU during the cyclic compression test compared with the AE signal characteristics: ( a ) peak frequency and ( b ) ToA.

The frequency characteristics of the AE signals generated during the cyclic compression test can be estimated approximately from the results of the AU test. The frequency characteristics of the acoustic waves depend not only on the propagating medium but also on the damage source and on the characteristics of the sensor used. The Hsu–Nielsen source is often used to emulate the acoustic waves generated by damage sources such as crack growth in polymer-based materials. On this basis, and from the results of the AU test, it can be surmised that the damage sources in dental aligners might generate acoustic waves with significant frequency components centred between 100 and 200 kHz. Moreover, if the damage sources are located at the central incisors (or, generally, far from the sensor location), the acoustic waves generated from these sources may go unaccounted for. Despite this shortcoming, the expected frequency components of the acoustic waves generated from the damage sources are approximated using the AU approach.

However, in Figs.  7 a and 8 a, there are several AE signals with frequencies above 400 kHz. In TPET-G, these signals appear in two distinct zones with varying frequencies: one around cycle 450 and the other around cycle 22,500 (at the end of the test). Their time- and frequency-domain characteristics are analysed, and the results show that the signals with frequencies above 400 kHz are generally high-frequency noise; these results are therefore presented in Supplementary Sect. S7 . Similar behaviour is observed in the AE data collected from TPU, where most of these noise signals appear at the end of the test (although a few appear sporadically in other cycles).

Turning to the mechanical test results, which are the source of the AE signals, the TPET-G and TPU aligners differ significantly in their energy absorption characteristics and stiffness behaviour.

Generally, in thermoplastic composites under cyclic loading, the energy absorbed during the initial cycles is due to large (strain) deformation. This can explain the initial increase in the energy absorbed by both aligners, observed in Fig.  4 a. The large strain deformation in these thermoplastics is typically followed by strain hardening 42 , 43 , 44 , 45 , 46 . During the strain hardening stage, the energy absorbed by the aligners and their stiffness are expected to remain stable. This is observed in TPET-G, where the average stiffness after cycle 8000 (until the end of the test at cycle 22,500) is 162.2 ± 4.3 N/mm (see Fig.  4 b). This behaviour is not observed in TPU, however: the energy absorbed by the TPU aligner decreases steeply after 17,000 cycles, while the stiffness increases exponentially between cycles 5000 and 17,000 and remains more or less stable thereafter. Why does the energy absorbed decrease steeply at cycle 17,000 while the stiffness increases exponentially up to the same cycle? Possibly, the energy absorbed by this aligner is released by a local failure, and the strain hardening is localised to a small region in the vicinity of this failure; therefore, the stiffness increases exponentially up to cycle 17,000. It can be assumed that the energy absorbed by the TPU is released rapidly by some major damage (such as crack nucleation or crack growth). To validate these observations, the fractographic results of the tested aligners must be discussed.

Minor cracks and small chip formation can be found in teeth 26 and 27, respectively (see Fig.  5 c,d). There are two possible sources for the minor cracks and the chip formation in the molar region: one is the cracks formed by the thermal stress induced during the thermoforming process, and the other is the sliding occlusal contacts made by the hard dental cast on the mandibular arch 47 . The sliding occlusal contact could be the source of the chip formation on tooth 27 in Fig.  5 d. Strain-hardened regions (whitening) are found on tooth 18 (Fig.  5 b); the thermal stress induced during the thermoforming process may have produced these local strain-hardened regions. Only one major crack, of length greater than 1 mm, is observed in this aligner, at tooth 14 (see Fig.  5 a). This crack is located adjacent to some other minor cracks. Despite the formation of this one major crack, however, the energy absorbed by this aligner and its stiffness remain largely unaffected.

In TPU, however, a large crack of length exceeding 3.50 mm is observed in tooth 27 (Fig.  6 d). This crack is adjacent to three other cracks, which originate from the apex of the crown of the tooth and propagate along the stretch marks of the strain-hardened regions. This possibly resulted from the sliding occlusal contacts between the hard dental cast and the aligner. The presence of this crack clearly shows that the energy absorbed by this aligner during its initial loading stages is released during the crack growth. Possibly, at cycle 17,000, the amount of energy released during this crack growth is quite high, resulting in the steep decrease in the energy absorbed by this aligner. The presence of a localised crack and of strain-hardened regions in the vicinity of the cracks confirms the assumptions made earlier in this section regarding the damage sources in TPU dental aligners.

Apart from this major crack and similar to TPET-G, several minor cracks and strain-hardened regions are also observed in TPU, particularly in teeth 17 (Fig.  6 a), 18 (Fig.  6 b) and 21 (Fig.  6 c). The comparison between the mechanical results and the fractographic analysis explains the occurrence of the damages and their dental locations. However, it remains unclear at which cycle these cracks initiate. They probably originated from the minor cracks formed during the thermoforming process, but at which cycle they began to nucleate remains unanswered. To address this, the acoustic waves propagated from these aligners are analysed and discussed next.

The number of AE events generated from TPET-G is much lower than that from TPU. It is evident from the microscopic results that TPU suffered the most damage, with a crack of length greater than 3.5 mm and other minor cracks propagating towards the strain-hardened region (see Fig.  6 ). TPET-G, on the other hand, suffered less damage and consequently generated fewer AE signals. The stability of its energy absorption characteristics concurs with these observations. The distribution of the peak frequencies of the AE signals from TPET-G is presented in Fig.  7 a. The signals fall into four specific regions: one between cycles 5000 and 10,000, another from cycle 17,000 to the end of the test, and two others highly localised at cycles 450 and 22,500, respectively. As mentioned earlier, the latter two localised distributions are most likely noise; the discussion is therefore limited to the former two regions. The first region, between cycles 5000 and 10,000, signifies that a crack begins to propagate at cycle 5000 and continues until cycle 10,000, where it stops growing. It begins to grow again at cycle 17,000 and continues until the end of the test. How can it be ascertained that these signals are generated by the same crack? Comparing the peak frequency of these AE signals with the results of the AU tests, it can be observed that the stress waves generated from different dental positions have different frequencies, owing to dispersion. Since the peak frequencies are localised around 125 kHz, it can be assumed that they are generated by the same damage source. A few sample FFT results of the AE signals from these two regions are presented in Supplementary Sect. S7 ; their frequency characteristics resemble those of the AE signals propagated through the dental cast. Moreover, the ToA of the AE signals in Fig.  7 b also shows similar values: the average ToA of the AE signals recorded from TPET-G is \(3.09\pm 0.07\times {10}^{-4}\) s. Taken together, these results support the conclusion that these signals are generated from the same source.
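
The integration of frequency-domain and time-domain descriptors used here can be expressed as a simple joint test: a hit is a same-source candidate only if both its peak frequency lies in the AU-predicted band and its ToA matches the reference. The band centre, ToA reference, and tolerances below are illustrative values chosen for the sketch, not thresholds taken from the paper:

```python
import numpy as np

def same_source_candidates(peak_freq_hz, toa_s,
                           f_center=125e3, f_tol=15e3,
                           toa_ref=3.09e-4, toa_tol=2e-5):
    """Boolean mask flagging AE hits whose peak frequency lies near the
    AU-predicted band for a given tooth position AND whose ToA matches
    the reference within tolerance; both criteria must agree before a
    common source is inferred."""
    pf = np.asarray(peak_freq_hz)
    toa = np.asarray(toa_s)
    return (np.abs(pf - f_center) <= f_tol) & (np.abs(toa - toa_ref) <= toa_tol)

# Three hits near 125 kHz with matching ToA, one high-frequency outlier
pf = np.array([124e3, 131e3, 450e3, 126e3])
toa = np.array([3.10e-4, 3.05e-4, 1.2e-4, 3.08e-4])
print(same_source_candidates(pf, toa))   # [ True  True False  True]
```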

The only major damage source observed in TPET-G in the fractographic analysis is at tooth 14 (left maxillary premolar). The peak frequency of the stress wave generated from tooth 14 during the AU test (see Fig.  3 c) is around 120 kHz, similar to the peak frequencies of the AE signals generated during the cyclic loading of TPET-G. It is therefore safe to assume that the damage source in TPET-G is the left maxillary premolar (tooth 14), which generated AE signals at frequencies around 125 kHz during two different periods of the cyclic tests: between cycles 5000–10,000 and between cycles 17,000–22,500.

In TPU, the presence of several cracks results in a large number of AE signals. Between cycles 1000 and 14,000, a significant number of AE signals with peak frequencies around 125 kHz are recorded (see Fig.  8 a). Another group of AE signals, with peak frequencies between 150 and 200 kHz, is recorded from cycle 7500 up to cycle 17,000. After cycle 17,000, there is an idle period of about 2500 cycles in which no AE signals are recorded (see Fig.  8 a). The signals with peak frequencies around 125 kHz can be associated with cracks in the left maxillary premolars, as in TPET-G. The second group of signals, from cycle 7500, could be associated with the crack observed in tooth 27. This is established on the basis of several factors: first, by comparing the frequency characteristics of the stress waves generated from the right maxillary third molar (tooth 28) in the AU test (see Fig.  3 b); second, by the fractographic analysis, which shows the presence of multiple large cracks in tooth 27 (see Fig.  6 d); third, by the ToA of these signals in Fig.  8 b, whose average of \(3.1\pm 0.08\times {10}^{-4}\) s suggests that they could be generated from the same source (similar to the observations made in TPET-G); and finally, by the sudden drop in the energy absorbed by TPU at cycle 17,000. These conclusions are not drawn from any single result but by integrating all the aforementioned observations. They indicate that the crack in tooth 27 of TPU began to grow steadily from cycle 7500 and terminated at cycle 17,000 by releasing a large amount of energy, followed by an idle period in which no AE signals are generated. The FFT results of these signals are also presented in Supplementary Sect. S7 for verification.

The ToA and the frequency bands of the AE signals thus aid in identifying two distinct damage sources in TPU. Other signals, such as those around 50 kHz and some above 200 kHz, could possibly be generated by friction between the cracked TPU elements of the aligner. The FFT of these signals is presented in Supplementary Sect. S7 , confirming that their frequency features do not resemble those of the signals propagated in the AU test. At this stage, it can only be assumed that they are not generated by crack growth events; further analysis is required to understand their source.

The damage sources in two different clear dental aligners, TPET-G and TPU, are localised using the AE technique and the AU approach. It is generally very difficult to evaluate damage sources in a complex geometry using AE testing. In this work, however, the characteristics of stress wave propagation in the aligners are first studied using the AU approach, and the AE signal analysis is designed on the basis of these results. The frequency characteristics of the AE signals generated during the cyclic loading of the aligners, together with their ToA, make it possible to locate the damage sources at different dental positions. These results are validated by the fractographic analysis of the aligners. Nevertheless, the conclusions are drawn from results obtained using a single sensor; in future work, this approach could be extended to accommodate multiple sensors and a more thorough damage analysis.

Data availability

The datasets generated during and/or analysed during the current study are not publicly available due to ongoing investigations but are available from the corresponding author on reasonable request.

Liptai, R. G., Harris, D. O. & Tatro, C. A. An introduction to acoustic emission. Acoust. Emission 505 , 1 (1972).

Hamstad, M. A. Thirty years of advances and some remaining challenges in the application of acoustic emission to composite materials. In Acoustic Emission Beyond the Millennium (eds Kishi, T. et al. ) 77–91 (Elsevier, 2000).

Gillis, P. P. Dislocation motions and acoustic emissions. In Acoustic Emission (ed. Gillis, P. P.) (ASTM International, 1972).

Surgeon, M. & Wevers, M. Modal analysis of acoustic emission signals from CFRP laminates. NDT E Int. 32 , 311–322 (1999).

Akhtar, A. et al. Acoustic emission testing of steel cylinders for the storage of natural gas on vehicles. NDT E Int. 25 , 115–125 (1992).

Sedlak, P., Hirose, Y., Enoki, M. & Sikula, J. Arrival time detection in thin multilayer plates on the basis of Akaike information criterion. J. Acoust. Emission 26 , 182–188 (2008).

Kitagawa, G. & Akaike, H. A procedure for the modeling of non-stationary time series. Ann. Inst. Stat. Math. 30 , 351–363 (1978).

Kundu, T., Nakatani, H. & Takeda, N. Acoustic source localization in anisotropic plates. Ultrasonics 52 , 740–746 (2012).

Ni, Q.-Q. & Iwamoto, M. Wavelet transform of acoustic emission signals in failure of model composites. Eng. Fract. Mech. 69 , 717–728 (2002).

Qi, G., Barhorst, A., Hashemi, J. & Kamala, G. Discrete wavelet decomposition of acoustic emission signals from carbon-fiber-reinforced composites. Compos. Sci. Technol. 57 , 389–403 (1997).

Barile, C., Casavola, C., Pappalettera, G. & Paramsamy Kannan, V. Acoustic emission waveforms for damage monitoring in composite materials: Shifting in spectral density, entropy and wavelet packet transform. Struct. Health Monit. 21 , 1768–1789 (2022).

Marec, A., Thomas, J.-H. & El Guerjouma, R. Damage characterization of polymer-based composite materials: Multivariable analysis and wavelet transform for clustering acoustic emission data. Mech. Syst. Signal Process 22 , 1441–1464 (2008).

Karimian, S. F. & Modarres, M. Acoustic emission signal clustering in CFRP laminates using a new feature set based on waveform analysis and information entropy analysis. Compos. Struct. 268 , 113987 (2021).

Barile, C., Casavola, C., Pappalettera, G. & Kannan, V. P. Laplacian score and K-means data clustering for damage characterization of adhesively bonded CFRP composites by means of acoustic emission technique. Appl. Acoust. 185 , 108425 (2022).

Barile, C., Casavola, C., Pappalettera, G. & Kannan, V. P. Application of different acoustic emission descriptors in damage assessment of fiber reinforced plastics: A comprehensive review. Eng. Fract. Mech. 235 , 107083 (2020).

Vary, A. The acousto-ultrasonic approach. In Acousto-ultrasonics: Theory and Application (ed. Vary, A.) 1–21 (Springer, 1988).

Moon, S. M., Jerina, K. L. & Hahn, H. T. Acousto-ultrasonic wave propagation in composite laminates. In Acousto-ultrasonics: Theory and Application (eds Moon, S. M. et al. ) 111–125 (Springer, 1988).

Buttke, T. M. & Proffit, W. R. Referring adult patients for orthodontic treatment. J. Am. Dent. Assoc. 130 , 73–79 (1999).

Eliades, T. & Bourauel, C. Intraoral aging of orthodontic materials: The picture we miss and its clinical relevance. Am. J. Orthod. Dentofac. Orthop. 127 , 403–412 (2005).

Zhang, N., Bai, Y., Ding, X. & Zhang, Y. Preparation and characterization of thermoplastic materials for invisible orthodontics. Dent. Mater. J. 30 , 954–959 (2011).

Barone, S., Paoli, A., Neri, P., Razionale, A. V. & Giannese, M. Mechanical and Geometrical properties assessment of thermoplastic materials for biomedical application. In Advances on Mechanics, Design Engineering and Manufacturing: Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing (JCM 2016), 14–16 September, 2016, Catania, Italy 437–446 (Springer, 2017).

Elkholy, F., Schmidt, S., Schmidt, F., Amirkhani, M. & Lapatki, B. G. Force decay of polyethylene terephthalate glycol aligner materials during simulation of typical clinical loading/unloading scenarios. J. Orofac. Orthop. 84 , 189 (2021).

Rossini, G., Parrini, S., Castroflorio, T., Deregibus, A. & Debernardi, C. L. Efficacy of clear aligners in controlling orthodontic tooth movement: A systematic review. Angle Orthod. 85 , 881–889 (2015).

Srinivasan, B., Padmanabhan, S. & Srinivasan, S. Comparative evaluation of physical and mechanical properties of clear aligners—A systematic review. Evid. Based Dent. 25 , 1–7 (2023).

Cianci, C. et al. Mechanical behavior of PET-G tooth aligners under cyclic loading. Front. Mater. 7 , 104 (2020).

Albilali, A. T., Baras, B. H. & Aldosari, M. A. Evaluation of mechanical properties of different thermoplastic orthodontic retainer materials after thermoforming and thermocycling. Polymers 15 , 1610 (2023).

Kohda, N. et al. Effects of mechanical properties of thermoplastic materials on the initial force of thermoplastic appliances. Angle Orthod. 83 , 476–483 (2013).

Kravitz, N. D., Kusnoto, B., BeGole, E., Obrez, A. & Agran, B. How well does Invisalign work? A prospective clinical study evaluating the efficacy of tooth movement with Invisalign. Am. J. Orthod. Dentofac. Orthop. 135 , 27–35 (2009).

Von Elm, E. et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: Guidelines for reporting observational studies. Int. J. Surg. 12 , 1495–1499 (2014).

Dunegan, H. L. An alternative to pencil lead breaks for simulation of acoustic emission signal sources. The DECI Report (2000).

Hamstad, M. A. Acoustic emission signals generated by monopole (pencil lead break) versus dipole sources: Finite element modeling and experiments. J. Acoust. Emission 25 , 92–106 (2007).

Sause, M. G. R. Investigation of Pencil-Lead Breaks as Acoustic Emission Sources (2011).

Boczar, T. & Lorenc, M. Time-frequency analysis of the calibrating signals generated in the Hsu–Nielsen system. Phys. Chem. Solid State 7 , 585–588 (2006).

Duyck, J. et al. Magnitude and distribution of occlusal forces on oral implants supporting fixed prostheses: An in vivo study. Clin. Oral Implants Res. 11 , 465–475 (2000).

Tak, S., Jeong, Y., Kim, J.-E., Kim, J.-H. & Lee, H. A comprehensive study on the mechanical effects of implant-supported prostheses under multi-directional loading and different occlusal contact points. BMC Oral Health 23 , 338 (2023).

Gibbs, C. H. et al. Occlusal forces during chewing and swallowing as measured by sound transmission. J. Prosthet. Dent. 46 , 443–449 (1981).

Ciavarella, D. et al. Comparison of the stress strain capacity between different clear aligners. Open Dent. J. 13 , 1 (2019).

Bulmer, J. M., Ewers, C., Drinnan, M. J. & Ewan, V. C. Evaluation of spontaneous swallow frequency in healthy people and those with, or at risk of developing, dysphagia: A review. Gerontol. Geriatr. Med. 7 , 23337214211041800 (2021).

Lear, C. S. C., Flanagan, J. B. Jr. & Moorrees, C. F. A. The frequency of deglutition in man. Arch. Oral Biol. 10 , 83 (1965).

Wevers, M. Listening to the sound of materials: Acoustic emission for the analysis of material behaviour. NDT E Int. 30 , 99–106 (1997).

Barile, C. et al. Thermoplastic clear dental aligners under cyclic compression loading: A mechanical performance analysis using acoustic emission technique. J. Mech. Behav. Biomed. Mater. 152 , 106451 (2024).

Gunatillake, P. A., Martin, D. J., Meijs, G. F., McCarthy, S. J. & Adhikari, R. Designing biostable polyurethane elastomers for biomedical implants. Aust. J. Chem. 56 , 545–557 (2003).

Wang, C. et al. Fretting behavior of thermoplastic polyurethanes. Lubricants 7 , 73 (2019).

Qi, H. J. & Boyce, M. C. Stress–strain behavior of thermoplastic polyurethanes. Mech. Mater. 37 , 817–839 (2005).

Pinchuk, L. A review of the biostability and carcinogenicity of polyurethanes in medicine and the new generation of ‘biostable’ polyurethanes. J. Biomater. Sci. Polym. Ed. 6 , 225–267 (1995).

Scetta, G. et al. Strain induced strengthening of soft thermoplastic polyurethanes under cyclic deformation. J. Polym. Sci. 59 , 685–696 (2021).

Rosentritt, M., Behr, M., Scharnagl, P., Handel, G. & Kolbeck, C. Influence of resilient support of abutment teeth on fracture resistance of all-ceramic fixed partial dentures: An in vitro study. Int. J. Prosthod. 24 , 5 (2011).

Acknowledgements

One of the Authors (Vimalathithan Paramsamy Kannan) acknowledges the support of the following: Funder: Project funded under the National Recovery and Resilience Plan (PNRR), Mission 4 Component 2 Investment 1.4—Call for tender No. 3138 of December 16, 2021 of Italian Ministry of University and Research funded by the European Union—NextGenerationEU. Award Number: CNMS denominato MOST, Concession Decree No. 1033 of June 17, 2022 adopted by the Italian Ministry of University and Research, CUP: D93C22000410001, Spoke 14 “Hydrogen and New Fuels”. One of the Authors (Claudia Cianci) acknowledges that this work was partly supported by the Italian Ministry of University and Research under the Programme “Department of Excellence” Legge 232/2016 (Grant No. CUP: D93C23000100001).

Author information

Authors and Affiliations

Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, Bari, Italy

Claudia Barile, Claudia Cianci, Vimalathithan Paramsamy Kannan, Giovanni Pappalettera, Carmine Pappalettere & Caterina Casavola

Dipartimento di Medicina Sperimentale e Clinica, Università di Foggia, Foggia, Italy

Michele Laurenziello & Domenico Ciavarella

Contributions

Claudia Cianci—Methodology, Validation, Investigation, Formal Analysis, Data Curation, Writing—Original Draft, Writing—Review and Editing; Claudia Barile—Methodology, Validation, Investigation, Formal Analysis, Writing—Review and Editing; Vimalathithan Paramsamy Kannan—Conceptualization, Methodology, Validation, Investigation, Software, Formal Analysis, Data Curation, Writing—Original Draft, Writing—Review and Editing; Giovanni Pappalettera—Conceptualization, Methodology, Validation, Investigation, Formal Analysis, Writing—Review and Editing, Supervision; Caterina Casavola—Resources; Carmine Pappalettere—Conceptualization; Domenico Ciavarella—Conceptualization, Methodology, Resources, Writing—Review and Editing, Supervision; Michele Laurenziello—Conceptualization, Methodology, Resources, Writing—Review and Editing.

Corresponding author

Correspondence to Giovanni Pappalettera .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Barile, C., Cianci, C., Paramsamy Kannan, V. et al. Damage source localisation in complex geometries using acoustic emission and acousto-ultrasonic techniques: an experimental study on clear aligners. Sci Rep 14 , 21467 (2024). https://doi.org/10.1038/s41598-024-72553-2

Download citation

Received : 25 January 2024

Accepted : 09 September 2024

Published : 14 September 2024

DOI : https://doi.org/10.1038/s41598-024-72553-2


Keywords

  • Acoustic emission
  • Acousto-ultrasonics
  • Fractographic analysis
  • Clear dental aligners
  • Frequency analysis
  • Time of arrival


Technique and Tectonic Concepts as Theoretical Tools in Object and Space Production: An Experimental Approach to Building Technologies I and II Courses


1. Introduction

2. Materials—Methods and Results

The Concept of Making as the Theoretical Focus of BT Courses

3. Conceptual Foundations and Differences of the First Stage of the BT I Course and the Second Stage of the BT II Course in the Context of the Concept of Making

3.1. The First Stage of the Building Technologies Courses

3.1.1. Building Technologies I Course: Technique as a Concept of Making Objects

3.1.2. Methodology of Building Technologies I Course: Practices for Technique

3.2. The Second Stage of Building Technologies Courses

3.2.1. Building Technologies II Course: Tectonics as a Concept of Architectural Object Production

3.2.2. Building Technologies II Course Methodology: Tectonic Practices

4. Discussion

5. Conclusions

Data Availability Statement

Acknowledgments

Conflicts of Interest



The Concept of Making

Building Technologies I Course (Technique)
  Theory: consciousness, knowledge, imagination (construction of thought); purpose, requirements (construction of reality); possibilities, choices, personalisation; transformations, customisations
  Practice (materials): sensed things
  Practice (methods): overlapping, attaching side by side, fitting, interweaving, knitting, bending, piling up, reducing
  Outcome: production of the object

Building Technologies II Course (Tectonics)
  Theory (context): action, size, perception, action and inaction, form
  Practice (materials): grasped things
  Practice (technique and technology): framing, ground/mound, roof, enclosure
  Outcome: production of space
Building Technologies I Course: 12-Week Syllabus

Stage 1 (Weeks 1–4): Objective ✓, Form X, Material X
Content: discovery of sub-concepts of the object (such as movement, sound, smell, size, texture, colour, hardness)
Practice: Lebineria Bird (2022), XQ-6 Creature (2021), Manduri Beetle (2020), Patunia Flower (2019), 23rd Tree (2018), Vooo Game Character (2017), Pereia Meatball (2016), Lindur Spider II (2015), Gundela Porridge (2014), A Creature (2013), Your Own Circle (2012)

Stage 2 (Weeks 5–8): Objective ✓, Form X, Material X
Content: discussion of making methods (such as overlapping, folding, intertwining, bending, piling, reducing, knitting)
Practice: Your head and neck at 1/1 scale (2022), Torso and upper part of your own body (2021), Your own body at 1/2 scale (2020), Wrist, elbow, and shoulder (2019), Wearable arm (2018), Your arm (2017), A trap for the creature (2016), A shelter for the creature (2015), A shelter for the Lindur spider (2014), Your head (2013), A body (2012)

Stage 3 (Weeks 9–12): Objective ✓, Form ✓, Material X
Content: development and customisation of making methods concerning the material
Practice: Design of the Other (2022), 1/1 A Peacock (2021), 1/1 Your own body (2020), Second Skin (2019), 1/1 A Grasshopper or Flamingo (2018), 50/1 A Centipede (2017), 1/3 A Giraffe (2016), 1/2 An Elephant (2015), 1/1 Your own body (2014), Learn from nature and make yourself a shelter (2013), Make the shelter of the body you design (2012)
[Figure panels: student work samples for Stages 1–3, labelled by project, year, and student code (e.g., Lebineria Bird 2022, XQ-6 Creature 2021, Manduri Beetle 2020, Patunya Flower 2019, 23rd Tree 2018; Your head and neck 2022, Your own body 2021 and 2020, Torso and upper part of your own body 2019, Wearable arm 2018; Design of the Other 2022, Peacock 2021, Second Skin 2019, Flamingo 2018); images not reproduced here.]
Material, Form, Object

S-1, S-3 (2022): material: wooden popsicle sticks; secondary material: filament; method: knitting, punching, lacing
S-36, S-37 (2021): material: 1 × 1 cm wooden lath; secondary material: flexible wire; method: overlapping, binding
S-30 (2020): material: metal fly wire; secondary material: metal wire; method: knitting, wrapping
S-31, S-32 (2019): material: plastic pipette, 0.4 cm diameter; secondary material: wooden skewer stick; method: nesting
S-13 (2018): material: 10 mm × 10 mm wooden lath; secondary material: wire; method: binding, placing side by side
Building Technologies II Course: 12-Week Syllabus

Stage 1 (Weeks 1–5): Content: discussion of constraints such as climate, topography, sensation, time, and action. Practice: constructing and producing the context
Stage 2 (Weeks 6–10): Content: discussion of structural elements such as floor, cover, and wall, and transformation of design genes to establish holistic construction. Practice: creation of architectural space according to context and structural elements
Stage 3 (Weeks 11–12): Content: the production of space. Practice: analogue and digital reproduction of context and construction integrity
S-1, S-3 (2022): context: a place on the slope of a forested hill and by the water; action: sitting, sun protection; models: 1/500, 1/50
S-35, S-36 (2021): context: a place on the cliffs by the sea, in rainy weather; action: taking a break during a nature walk, watching the scenery, sitting; models: 1/500, 1/50
S-30 (2020): context: a rocky hill in the middle of the sea; action: sitting, viewing the landscape; models: 1/500, 1/50
S-31, S-32 (2019): context: a place in the desert, on top of a hill; action: taking a break, watching, standing in the shade, drinking water; models: 1/500, 1/50
S-13 (2018): context: a cave on a rocky hill by the sea; action: swimming in the sea, mooring the boat, sunbathing; models: 1/500, 1/50
[Figure panels for S-1, S-3 (2022); S-36, S-37 (2021); S-30 (2020); S-31, S-32 (2019); S-13 (2018), each showing (A) representation of context, (B) gene transfer, and (C) construction; images not reproduced here.]
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Sönmez, M. Technique and Tectonic Concepts as Theoretical Tools in Object and Space Production: An Experimental Approach to Building Technologies I and II Courses. Buildings 2024 , 14 , 2866. https://doi.org/10.3390/buildings14092866


