Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments whose results illustrate and verify the laws and theorems of science. These experiments rest on the strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. The first set acts as a constant, against which the differences in the second set are measured. Quantitative research is the best example of an experimental research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design is the foundation on which to build the study. An effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, researchers also give themselves time to organize the research, set relevant boundaries for the study, and increase the reliability of the results; these efforts also help avoid inconclusive results. If any part of the research design is flawed, the flaw will be reflected in the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A pre-experimental research design is used when one or more groups are kept under observation after the factors of cause and effect have been applied. It helps researchers determine whether further investigation of the observed groups is warranted.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.
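As a minimal illustration of these three factors, the Python sketch below (with hypothetical subject IDs) randomly splits a pool of subjects into a control group and an experimental group. It is a sketch of the assignment step only, not of any particular study.

```python
import random

# Hypothetical pool of 20 subjects.
subjects = [f"subject_{i}" for i in range(20)]

random.seed(42)       # fixed seed so the assignment is reproducible
random.shuffle(subjects)
mid = len(subjects) // 2

control_group = subjects[:mid]        # not subjected to changes
experimental_group = subjects[mid:]   # receives the manipulated variable

print("Control:", control_group)
print("Experimental:", experimental_group)
```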

3. Quasi-experimental Research Design

The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design; the difference lies in how the control group is assigned. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of design is used in field settings where random assignment is irrelevant or not feasible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject area does not limit the effectiveness of experimental research; researchers in any field can use the method.
  • The results are specific.
  • After analysis, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often neglect to check whether their hypothesis is logically testable. If your research design is not built on basic assumptions or postulates, it is fundamentally flawed, and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to obtain valid and sustainable evidence; incorrect statistical analysis therefore undermines the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that, you must set the framework for developing research questions that address the core problems.

5. Research Limitations

Every study has limitations. You should anticipate them and incorporate them into your conclusion as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing the experiment and drawing your conclusions.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity can be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
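As a hedged sketch of that comparison step, assume the biochemical test yields one numeric measurement per plant (the numbers below are made up for illustration). A two-sample t-test can then check whether the sunlight and dark groups differ:

```python
from scipy import stats

# Hypothetical biochemical measurements (e.g., sugar content) per plant.
sunlight_group = [4.8, 5.1, 5.4, 4.9, 5.2, 5.0]
dark_group = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]

# Two-sample t-test: is the difference between the group means significant?
t_stat, p_value = stats.ttest_ind(sunlight_group, dark_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports attributing the difference to sunlight,
# given that all other variables were controlled.
```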

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. However, it is not suited to every research question: it demands substantial resources, time, and money, and it is difficult to conduct without a solid research foundation. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Q: Why is randomization important in experimental research?
A: Randomization ensures unbiased experimental results and helps isolate the cause-effect relationship within the particular group of interest.

Q: What is the importance of experimental research design?
A: Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

Q: How many types of experimental research designs are there?
A: There are three: pre-experimental, true experimental, and quasi-experimental research designs.

Q: How does a quasi-experimental design differ from a true experimental design?
A: 1. The control group in quasi-experimental research is assigned non-randomly, unlike in a true experimental design, where assignment is random. 2. A true experiment always has a control group, whereas a quasi-experiment may not.

Q: How does experimental research differ from descriptive research?
A: Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental and control groups. Descriptive research, in contrast, describes a study or topic by defining its variables and answering the questions related to them.



Experimental Research Designs: Types, Examples & Methods

busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking two samples of the same plant, exposing one to sunlight and keeping the other away from it. Call the plant exposed to sunlight sample A and the other sample B.

If, after the duration of the research, sample A grows while sample B dies, even though both are regularly watered and otherwise given the same treatment, we can conclude that sunlight aids growth in similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during the research, making experimental research an example of a quantitative research method.

What are The Types of Experimental Research Design?

The type of experimental research design is determined by the way the researcher assigns subjects to different conditions and groups. There are three types: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In a pre-experimental research design, either one group or several dependent groups are observed for the effect of an independent variable that is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental research falls short of several criteria of true experiments. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after a treatment presumed to cause change has been applied, making it a posttest-only study.

  • One-group Pretest-posttest Research Design:

This research design combines pretest and posttest studies: a single group is tested before the treatment is administered and tested again after the treatment ends.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore resembles true experimental research but is not the same. In quasi-experiments, the participants are not randomly assigned, so these designs are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research designs include the time series design, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to prove or disprove a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least two randomly assigned subjects.

The true experimental research design must contain a control group, a variable that the researcher can manipulate, and random distribution of subjects. The classification of true experimental designs includes:

  • The Posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the two groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between them.
  • The Pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the two groups; both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon Four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into four groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
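A minimal sketch of the Solomon four-group assignment (Python, with a hypothetical pool of 40 subjects) might look like this:

```python
import random

subjects = [f"s{i}" for i in range(40)]  # hypothetical subject pool
random.seed(1)
random.shuffle(subjects)

q = len(subjects) // 4
groups = {
    # posttest-only pair: one treated group, one control group
    "treated_posttest_only": subjects[:q],
    "control_posttest_only": subjects[q:2 * q],
    # pretest-posttest pair: one treated group, one control group
    "treated_pretest_posttest": subjects[2 * q:3 * q],
    "control_pretest_posttest": subjects[3 * q:],
}

for name, members in groups.items():
    print(f"{name}: {len(members)} subjects")
```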

Examples of Experimental Research

Examples of experimental research differ depending on the type of experimental research design under consideration. The most basic example is the laboratory experiment, which varies in nature depending on the subject of the research.

Administering Exams at the End of the Semester

During the semester, students in a class are lectured on particular courses, and an exam is administered at the end of the semester. In this case, the students are the subjects (the dependent variables), while the lectures are the independent variables administered to the subjects.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. Notice also that the test is carried out only at the end of the semester, not at the beginning, which makes it easy to conclude that this is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subjects are the employees, while the treatment is the training conducted. Since the same group is tested before and after the training, this is a one-group pretest-posttest experimental research example.

Evaluation of Teaching Method

Consider an academic institution that wants to evaluate the teaching methods of two teachers to determine which is better. Imagine a case where the students assigned to each teacher are carefully selected, perhaps at the personal request of parents or on the basis of ability and behavior.

This is a nonequivalent group design example because the samples are not equivalent. We can evaluate the effectiveness of each teacher's method by comparing the groups after a posttest has been carried out.

However, the results may be influenced by factors such as a student's natural aptitude. For example, a very smart student will grasp the material more easily than his or her peers, irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

Experimental research involves dependent, independent, and extraneous variables. The dependent variables are the variables on which the treatment acts; they are sometimes called the subject of the research.

The independent variables are the experimental treatments exerted on the dependent variables. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design is used mainly in the physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions about a subject.

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop proper treatments for diseases. In most cases, rather than using patients directly as research subjects, researchers take a sample of bacteria from the patient's body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment's effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like chemistry and physics, which involve teaching students how to perform experiments, experimental research can also be used to improve the standards of an academic institution. This includes testing students' knowledge of different topics, developing better teaching methods, and implementing other programs that aid student learning.
  • Human Behavior: Social scientists most often use experimental research to test human behavior. For example, consider two people randomly chosen as the subjects of social-interaction research, where one person is placed in a room without human interaction for a year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the product team's major aims is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers are allowed to test the 2 samples and how the button positioning influences the user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error because it depends on variable control, which may not be properly implemented. Such errors can invalidate the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations, and eliminating real-life variables can lead to inaccurate conclusions. Researchers may also end up controlling the variables to suit their personal preferences.
  • It is a time-consuming process: much time is spent testing subjects and waiting for the effects of the manipulation to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient's death or deteriorating health.
  • Experimental research results are not descriptive.
  • Subjects can introduce response bias.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects placed in the two different environments are observed throughout the research. No matter what behavior a subject exhibits during this period, that subject's condition is not changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate the possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool. It is very impractical for a lot of laboratory-based research that involves chemical processes.
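As a toy illustration of simulation as a data collection method (deliberately not tied to the packages named above), the Python sketch below simulates a single-server queue under assumed arrival and service rates to estimate the average waiting time:

```python
import random

random.seed(0)
# Assumed parameters (hypothetical): 10 arrivals/hour, 12 services/hour.
ARRIVAL_RATE, SERVICE_RATE, N_CUSTOMERS = 10.0, 12.0, 10_000

clock = server_free_at = total_wait = 0.0
for _ in range(N_CUSTOMERS):
    clock += random.expovariate(ARRIVAL_RATE)   # next customer arrives
    start = max(clock, server_free_at)          # wait if the server is busy
    total_wait += start - clock
    server_free_at = start + random.expovariate(SERVICE_RATE)

print(f"Average wait: {total_wait / N_CUSTOMERS:.3f} hours")
```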

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. A cause-and-effect relationship cannot be established in non-experimental research, while it can be in experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research establishes cause-and-effect relationships by comparing already-existing groups that the independent variable has affected differently.

For example, consider a study of how K-12 education affects child and teenage development. An experimental approach would split children into groups, giving some formal K-12 education and withholding it from others. This is not ethical, because every child has a right to education. Instead, we compare already-existing groups: children who receive formal education and those who, due to their circumstances, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative: Strengths: more realistic than experiments; can be conducted in real-world settings. Weaknesses: claims of causality are weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, you are trying to establish the effect of heat on water, the temperature keeps changing (independent variable) and you see how it affects the water (dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.


3. Experimental Research vs Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.


4. Experimental Research vs Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how commute length affects workers' productivity. With experimental research, you would vary the length of the commute to see how the time affects work. With action research, you would account for other factors such as weather, commute route, nutrition, etc. Experimental research also tells you the relationship between commute time and productivity, while action research helps you look for ways to improve productivity.


Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, one or more subjects or dependent variables are randomly assigned to different treatments (i.e., independent variables manipulated by the researcher), and the results are observed and conclusions drawn. One unique strength of experimental research is its ability to control the effects of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 



Statistics By Jim


Experimental Design: Definition and Types

By Jim Frost

What is Experimental Design?

An experimental design is a detailed plan for collecting and using data to identify causal relationships. Through careful planning, the design of experiments allows your data collection efforts to have a reasonable chance of detecting effects and testing hypotheses that answer your research questions.

An experiment is a data collection procedure that occurs in controlled conditions to identify and understand causal relationships between variables. Researchers can use many potential designs. The ultimate choice depends on their research question, resources, goals, and constraints. In some fields of study, researchers refer to experimental design as the design of experiments (DOE). Both terms are synonymous.


Ultimately, the design of experiments helps ensure that your procedures and data will evaluate your research question effectively. Without an experimental design, you might waste your efforts in a process that, for many potential reasons, can’t answer your research question. In short, it helps you trust your results.

Learn more about Independent and Dependent Variables.

Design of Experiments: Goals & Settings

Experiments occur in many settings, including psychology, the social sciences, medicine, physics, engineering, and the industrial and service sectors. Typically, experimental goals are to discover a previously unknown effect, confirm a known effect, or test a hypothesis.

Effects represent causal relationships between variables. For example, in a medical experiment, does the new medicine cause an improvement in health outcomes? If so, the medicine has a causal effect on the outcome.

An experimental design’s focus depends on the subject area and can include the following goals:

  • Understanding the relationships between variables.
  • Identifying the variables that have the largest impact on the outcomes.
  • Finding the input variable settings that produce an optimal result.

For example, psychologists have conducted experiments to understand how conformity affects decision-making. Sociologists have performed experiments to determine whether ethnicity affects the public reaction to staged bike thefts. These experiments map out the causal relationships between variables, and their primary goal is to understand the role of various factors.

Conversely, in a manufacturing environment, the researchers might use an experimental design to find the factors that most effectively improve their product’s strength, identify the optimal manufacturing settings, and do all that while accounting for various constraints. In short, a manufacturer’s goal is often to use experiments to improve their products cost-effectively.

In a medical experiment, the goal might be to quantify the medicine’s effect and find the optimum dosage.

Developing an Experimental Design

Developing an experimental design involves planning that maximizes the potential to collect data that is both trustworthy and able to detect causal relationships. Specifically, these studies aim to see effects when they exist in the population the researchers are studying, preferentially favor causal effects, isolate each factor’s true effect from potential confounders, and produce conclusions that you can generalize to the real world.

To accomplish these goals, experimental designs carefully manage data validity and reliability , and internal and external experimental validity. When your experiment is valid and reliable, you can expect your procedures and data to produce trustworthy results.

An excellent experimental design involves the following:

  • Lots of preplanning.
  • Developing experimental treatments.
  • Determining how to assign subjects to treatment groups.

The remainder of this article focuses on how experimental designs incorporate these essential items to accomplish their research goals.

Learn more about Data Reliability vs. Validity and Internal and External Experimental Validity.

Preplanning, Defining, and Operationalizing for Design of Experiments

A literature review is crucial for the design of experiments.

This phase of the design of experiments helps you identify critical variables, know how to measure them while ensuring reliability and validity, and understand the relationships between them. The review can also help you find ways to reduce sources of variability, which increases your ability to detect treatment effects. Notably, the literature review allows you to learn how similar studies designed their experiments and the challenges they faced.

Operationalizing a study involves taking your research question, using the background information you gathered, and formulating an actionable plan.

This process should produce a specific and testable hypothesis using data that you can reasonably collect given the resources available to the experiment. For example, for a study of whether a jumping exercise intervention affects bone density:

  • Null hypothesis: The jumping exercise intervention does not affect bone density.
  • Alternative hypothesis: The jumping exercise intervention affects bone density.
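One part of operationalizing can be sketched in code: a power calculation that estimates how many subjects are needed to detect a hypothesized effect. The sketch below uses Python's statsmodels; the effect size, alpha, and power values are illustrative assumptions, not values from the bone density study.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed planning inputs (hypothetical): a medium standardized effect
# size (Cohen's d = 0.5), 5% significance level, and 80% desired power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 64
```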

To learn more about this early phase, read Five Steps for Conducting Scientific Studies with Statistical Analyses.

Formulating Treatments in Experimental Designs

In an experimental design, treatments are variables that the researchers control. They are the primary independent variables of interest. Researchers administer the treatment to the subjects or items in the experiment and want to know whether it causes changes in the outcome.

As the name implies, a treatment can be medical in nature, such as a new medicine or vaccine. But it’s a general term that applies to other things such as training programs, manufacturing settings, teaching methods, and types of fertilizers. I helped run an experiment where the treatment was a jumping exercise intervention that we hoped would increase bone density. All these treatment examples are things that potentially influence a measurable outcome.

Even when you know your treatment generally, you must carefully consider the amount. How large of a dose? If you’re comparing three different temperatures in a manufacturing process, how far apart are they? For my bone mineral density study, we had to determine how frequently the exercise sessions would occur and how long each lasted.

How you define the treatments in the design of experiments can affect your findings and the generalizability of your results.

Assigning Subjects to Experimental Groups

A crucial decision for all experimental designs is determining how researchers assign subjects to the experimental conditions—the treatment and control groups. The control group is often, but not always, the lack of a treatment. It serves as a basis for comparison by showing outcomes for subjects who don’t receive a treatment. Learn more about Control Groups .

How your experimental design assigns subjects to the groups affects how confident you can be that the findings represent true causal effects rather than mere correlation caused by confounders. Indeed, the assignment method influences how you control for confounding variables. This is the difference between correlation and causation.

Imagine a study finds that vitamin consumption correlates with better health outcomes. As a researcher, you want to be able to say that vitamin consumption causes the improvements. However, with the wrong experimental design, you might only be able to say there is an association. A confounder, and not the vitamins, might actually cause the health benefits.

Let’s explore some of the ways to assign subjects in design of experiments.

Completely Randomized Designs

A completely randomized experimental design randomly assigns all subjects to the treatment and control groups. You simply take each participant and use a random process to determine their group assignment. You can flip coins, roll a die, or use a computer. Randomized experiments must be prospective studies because they need to be able to control group assignment.

Random assignment in the design of experiments helps ensure that the groups are roughly equivalent at the beginning of the study. This equivalence at the start increases your confidence that any differences you see at the end were caused by the treatments. The randomization tends to equalize confounders between the experimental groups and, thereby, cancels out their effects, leaving only the treatment effects.

For example, in a vitamin study, the researchers can randomly assign participants to either the control or vitamin group. Because the groups are approximately equal when the experiment starts, if the health outcomes are different at the end of the study, the researchers can be confident that the vitamins caused those improvements.

Statisticians consider randomized experimental designs to be the best for identifying causal relationships.

If you can’t randomly assign subjects but want to draw causal conclusions about an intervention, consider using a quasi-experimental design.

Learn more about Randomized Controlled Trials and Random Assignment in Experiments.

Randomized Block Designs

Nuisance factors are variables that can affect the outcome, but they are not the researcher’s primary interest. Unfortunately, they can hide or distort the treatment results. When experimenters know about specific nuisance factors, they can use a randomized block design to minimize their impact.

This experimental design takes subjects with a shared “nuisance” characteristic and groups them into blocks. The participants in each block are then randomly assigned to the experimental groups. This process allows the experiment to control for known nuisance factors.

Blocking in the design of experiments reduces the impact of nuisance factors on experimental error. The analysis assesses the effects of the treatment within each block, which removes the variability between blocks. The result is that blocked experimental designs can reduce the impact of nuisance variables, increasing the ability to detect treatment effects accurately.

Suppose you’re testing various teaching methods. Because grade level likely affects educational outcomes, you might use grade level as a blocking factor. To use a randomized block design for this scenario, divide the participants by grade level and then randomly assign the members of each grade level to the experimental groups.
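A minimal sketch of this divide-then-randomize procedure in Python (participant names and grade levels are hypothetical):

```python
import random
from collections import defaultdict

random.seed(7)
# Hypothetical participants as (name, grade level) pairs.
participants = [(f"p{i}", random.choice([3, 4, 5])) for i in range(30)]

# Step 1: group participants into blocks by the nuisance factor (grade).
blocks = defaultdict(list)
for name, grade in participants:
    blocks[grade].append(name)

# Step 2: within each block, randomly assign members to the groups.
assignment = {}
for grade, members in blocks.items():
    random.shuffle(members)
    for i, name in enumerate(members):
        assignment[name] = ("method_A", "method_B")[i % 2]

print(assignment)
```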

A standard guideline for an experimental design is to “Block what you can, randomize what you cannot.” Use blocking for a few primary nuisance factors. Then use random assignment to distribute the unblocked nuisance factors equally between the experimental conditions.

You can also use covariates to control nuisance factors. Learn about Covariates: Definition and Uses.

Observational Studies

In some experimental designs, randomly assigning subjects to the experimental conditions is impossible or unethical. The researchers simply can’t assign participants to the experimental groups. However, they can observe them in their natural groupings, measure the essential variables, and look for correlations. These observational studies are also known as quasi-experimental designs. Retrospective studies must be observational in nature because they look back at past events.

Imagine you’re studying the effects of depression on an activity. Clearly, you can’t randomly assign participants to the depression and control groups. But you can observe participants with and without depression and see how their task performance differs.

Observational studies let you perform research when you can’t control the treatment. However, quasi-experimental designs increase the problem of confounding variables. For this design of experiments, correlation does not necessarily imply causation. While special procedures can help control confounders in an observational study, you’re ultimately less confident that the results represent causal findings.

Learn more about Observational Studies.

For a good comparison, learn about the differences and tradeoffs between Observational Studies and Randomized Experiments.

Between-Subjects vs. Within-Subjects Experimental Designs

When you think of the design of experiments, you probably picture a treatment and control group. Researchers assign participants to only one of these groups, so each group contains entirely different subjects than the other groups. Analysts compare the groups at the end of the experiment. Statisticians refer to this method as a between-subjects, or independent measures, experimental design.

In a between-subjects design, you can have more than one treatment group, but each subject is exposed to only one condition, the control group or one of the treatment groups.

A potential downside to this approach is that differences between groups at the beginning can affect the results at the end. As you’ve read earlier, random assignment can reduce those differences, but it is imperfect. There will always be some variability between the groups.

In a within-subjects experimental design, also known as repeated measures, subjects experience all treatment conditions and are measured for each. Each subject acts as their own control, which reduces variability and increases the statistical power to detect effects.

In this experimental design, you minimize pre-existing differences between the experimental conditions because they all contain the same subjects. However, the order of treatments can affect the results. Beware of practice and fatigue effects. Learn more about Repeated Measures Designs.

  • Between-subjects: each subject is assigned to one experimental condition; requires more subjects; differences between subjects in the groups can affect the results; no order-of-treatment effects.
  • Within-subjects: each subject participates in all experimental conditions; requires fewer subjects; uses the same subjects in all conditions; the order of treatments can affect the results.

Design of Experiments Examples

For example, a bone density study has three experimental groups—a control group, a stretching exercise group, and a jumping exercise group.

In a between-subjects experimental design, scientists randomly assign each participant to one of the three groups.

In a within-subjects design, all subjects experience the three conditions sequentially while the researchers measure bone density repeatedly. The procedure can switch the order of treatments for the participants to help reduce order effects.
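One simple way to implement that switching is to cycle participants through the possible orderings of the three conditions. A sketch in Python (participant IDs are hypothetical):

```python
from itertools import permutations

conditions = ["control", "stretching", "jumping"]
orders = list(permutations(conditions))  # all 6 possible orderings

# Counterbalancing: each participant gets the next ordering in the cycle.
participants = [f"p{i}" for i in range(12)]
for i, participant in enumerate(participants):
    print(participant, "->", orders[i % len(orders)])
```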

Matched Pairs Experimental Design

A matched pairs experimental design is a between-subjects study that uses pairs of similar subjects. Researchers use this approach to reduce pre-existing differences between experimental groups. It’s yet another design of experiments method for reducing sources of variability.

Researchers identify variables likely to affect the outcome, such as demographics. When they pick a subject with a set of characteristics, they try to locate another participant with similar attributes to create a matched pair. Scientists randomly assign one member of a pair to the treatment group and the other to the control group.
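A simple way to implement this, sketched below under the assumption that subjects are matched on a single covariate such as age, is to sort by that covariate, pair adjacent subjects, and randomize within each pair:

```python
import random

random.seed(3)
# Hypothetical subjects with a matching covariate (age).
subjects = [("s1", 23), ("s2", 45), ("s3", 24), ("s4", 44), ("s5", 31), ("s6", 30)]

# Sort by the covariate so similar subjects sit next to each other, then pair them.
subjects.sort(key=lambda s: s[1])
for i in range(0, len(subjects) - 1, 2):
    pair = [subjects[i][0], subjects[i + 1][0]]
    random.shuffle(pair)  # randomly choose which member of the pair is treated
    print(f"treatment: {pair[0]}, control: {pair[1]}")
```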

On the plus side, this process creates two similar groups and avoids treatment order effects. While matched pairs do not produce the perfectly matched groups of a within-subjects design (which uses the same subjects in all conditions), the approach aims to reduce variability between groups relative to a between-subjects study.

On the downside, finding matched pairs is very time-consuming. Additionally, if one member of a matched pair drops out, the other subject must leave the study too.

Learn more about Matched Pairs Design: Uses & Examples.

Another consideration is whether you’ll use a cross-sectional design (one point in time) or use a longitudinal study to track changes over time.

A case study is a research method that often serves as a precursor to a more rigorous experimental design by identifying research questions, variables, and hypotheses to test. Learn more about What is a Case Study? Definition & Examples.

In conclusion, the design of experiments is extremely sensitive to subject area concerns and the time and resources available to the researchers. Developing a suitable experimental design requires balancing a multitude of considerations. A successful design is necessary to obtain trustworthy answers to your research question and to have a reasonable chance of detecting treatment effects when they exist.


Experimental design: Guide, steps, examples

Last updated 27 April 2023 · Reviewed by Miroslav Damyanov

Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 


What is experimental research design?

You can determine the relationship between the variables by:

  • Manipulating one or more independent variables (i.e., stimuli or treatments)
  • Applying the changes to one or more dependent variables (i.e., test groups or outcomes)

With the ability to analyze the relationship between variables using measurable data, you can increase the accuracy of the result.

What is a good experimental design?

A good experimental design requires: 

  • Significant planning to ensure control over the testing environment
  • Sound experimental treatments
  • Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

  • Provide unbiased estimates of inputs and associated uncertainties
  • Enable the researcher to detect differences caused by independent variables
  • Include a plan for analysis and reporting of the results
  • Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory and can control the relevant variables.

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

  • Test the effectiveness of a new medication
  • Design better products for consumers
  • Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It involves statistical analysis to prove or disprove a specific hypothesis.

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random selection removes any potential for bias, providing more reliable results. 

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest adds another point of comparison. For instance, if the control group also changes between the two tests, that reveals that simply taking the test twice affects the results.

Solomon four-group design

This structure divides subjects into four groups, two of which serve as control groups. Researchers assign the first control group a posttest only and the second control group both a pretest and a posttest.

The two treatment groups mirror the control groups, but researchers expose them to the stimulus. The ability to compare groups in multiple ways provides researchers with more testing approaches for data-based conclusions.
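To make the four-group structure concrete, here is a hypothetical Python sketch that splits a participant pool into the four Solomon groups; the group labels and seed are illustrative only.

```python
import random

def solomon_groups(participants, seed=7):
    """Split participants into the four Solomon groups.

    Returns a dict mapping a group label to (gets_pretest, gets_treatment, members).
    """
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    quarter = len(pool) // 4
    chunks = [pool[i * quarter:(i + 1) * quarter] for i in range(4)]
    return {
        "treatment: pretest + posttest": (True, True, chunks[0]),
        "control: pretest + posttest": (True, False, chunks[1]),
        "treatment: posttest only": (False, True, chunks[2]),
        "control: posttest only": (False, False, chunks[3]),
    }

for label, (_, _, members) in solomon_groups(list(range(20))).items():
    print(label, "->", members)
```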

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these variables, consider how you might control them in your experiment.

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question.

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.
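For instance, a subjective rating must be mapped onto a numeric scale before it can be analyzed. The sketch below shows one hypothetical way to operationalize rubric labels in Python; the rubric itself is invented for the example.

```python
# Hypothetical rubric mapping a rater's qualitative judgment to a numeric score
RUBRIC = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}

def operationalize(ratings):
    """Turn qualitative labels into numbers so they can be analyzed statistically."""
    return [RUBRIC[label] for label in ratings]

scores = operationalize(["good", "excellent", "fair", "good"])
print(sum(scores) / len(scores))  # mean score for this group of subjects
```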

Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can replicate results to support the validity of the study.

Researchers can recreate natural settings quickly, allowing studies to begin without long delays.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines.

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs, the company can assess which option most appeals to potential customers.

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.



Experimental Research Design

Stefan Hunziker & Michael Blankenagel
This chapter addresses the peculiarities, characteristics, and major fallacies of experimental research designs. Experiments have a long and important history in the social, natural, and medical sciences. Unfortunately, in business and management the picture looks different. This is surprising, as experiments are well suited to analyzing cause-and-effect relationships: a true experiment is an excellent method for finding out whether one element really causes another. Readers will also find relevant information on how to write an experimental research design paper and learn about typical methodologies used for this research design. The chapter closes by referring to overlapping and adjacent research designs.

Source: Hunziker, S., & Blankenagel, M. (2021). Experimental Research Design. In Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-34357-6_12


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (the control group). Here, the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-group experimental designs


Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.

Figure 10.1: Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.
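As an illustration, the following Python sketch runs a one-way ANOVA on gain scores (posttest minus pretest) for a treatment and a control group using SciPy; all scores are invented for the example.

```python
from scipy.stats import f_oneway

# Invented pretest/posttest scores for a treatment and a control group
treat_pre, treat_post = [52, 48, 55, 60, 47], [68, 63, 70, 74, 61]
ctrl_pre, ctrl_post = [51, 50, 54, 58, 49], [55, 52, 57, 60, 50]

# Analyze gain scores (posttest minus pretest) for each group
treat_gain = [post - pre for pre, post in zip(treat_pre, treat_post)]
ctrl_gain = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

f_stat, p_value = f_oneway(treat_gain, ctrl_gain)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```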

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Figure 10.2: Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance design. This is a variation of the posttest-only design in which a pretest measure of an extraneous variable (a covariate) is recorded and statistically controlled for in the analysis.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups as:

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the control of covariates. Covariance designs can also be extended to the pretest-posttest control group design.
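A minimal ANCOVA sketch in Python using the statsmodels formula API; the data frame and scores are invented, with the pretest entering the model as a covariate rather than as a baseline measurement.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: the pretest acts as a covariate, not a baseline measure
df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "pretest": [52, 48, 55, 60, 47, 51, 50, 54, 58, 49],
    "posttest": [68, 63, 70, 74, 61, 55, 52, 57, 60, 50],
})

# ANCOVA: posttest explained by group, adjusting for the pretest covariate
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```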

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The most basic factorial design is a 2 × 2 design, for example one that crosses instructional type with instructional time (1.5 versus 3 hours/week), yielding four groups that each receive one combination of the two treatments.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects dominate and render main effects irrelevant: it is not meaningful to interpret main effects if interaction effects are significant.
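A hypothetical two-way factorial analysis in Python with statsmodels; the data are invented, and the C(itype) * C(hours) formula term expands to both main effects plus their interaction.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented 2x2 factorial data: instructional type x weekly instructional time
df = pd.DataFrame({
    "itype": ["online"] * 6 + ["classroom"] * 6,
    "hours": [1.5, 1.5, 1.5, 3.0, 3.0, 3.0] * 2,
    "score": [61, 64, 59, 70, 73, 69, 66, 63, 67, 88, 85, 90],
})

# '*' yields main effects for each factor plus their interaction term
model = smf.ols("score ~ C(itype) * C(hours)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```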

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Figure 10.5: Randomised blocks design
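A minimal sketch of blocked randomization in Python, assuming two illustrative blocks; each block is split into treatment and control separately so that block-level differences cannot leak into the group assignment.

```python
import random

def blocked_assignment(blocks, seed=3):
    """Randomize to treatment/control separately within each homogeneous block."""
    rng = random.Random(seed)
    assignment = {}
    for block, members in blocks.items():
        pool = list(members)
        rng.shuffle(pool)
        half = len(pool) // 2
        assignment[block] = {"treatment": pool[:half], "control": pool[half:]}
    return assignment

blocks = {"students": ["S1", "S2", "S3", "S4"],
          "professionals": ["W1", "W2", "W3", "W4"]}
print(blocked_assignment(blocks))
```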

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Figure 10.6: Solomon four-group design

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Figure 10.7: Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lacking one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Non-equivalent groups design (NEGD). Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment; the quasi-experimental counterpart of the pretest-posttest control group design, with intact (non-randomised) groups, is known as the non-equivalent groups design (NEGD).

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
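A simple parametric sketch of an RD analysis in Python: with simulated data, the discontinuity appears as the coefficient on the treatment indicator after controlling for the (centered) assignment score. All numbers below are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: assignment to a remedial program by a cut-off on a pre-score
cutoff = 50
pre = rng.uniform(20, 80, 200)
treated = (pre < cutoff).astype(int)  # low scorers receive the program
post = 0.8 * pre + 10 * treated + rng.normal(0, 5, 200)  # true jump of 10 at the cut-off

df = pd.DataFrame({"post": post, "treated": treated, "centered": pre - cutoff})

# The discontinuity at the cut-off shows up as the coefficient on 'treated'
model = smf.ols("post ~ treated + centered", data=df).fit()
print(model.params["treated"])
```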

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Figure 10.11: Proxy pretest design

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Figure 10.12: Separate pretest-posttest samples design

Non-equivalent dependent variable (NEDV) design. In its basic form, this is a single-group pretest-posttest design with two outcome measures, one that is theoretically expected to be influenced by the treatment and another that is not. An interesting variation of the NEDV design is a pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and, if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.



Designing and Conducting Experimental and Quasi-Experimental Research

You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships being sought aren't as clear as in the stereo knob-turning examples. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants: one treated with a fertilizer named MegaGro, another treated with a fertilizer named Plant!, and a third that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects in the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
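This averaging-out can be demonstrated with a small simulation: repeatedly splitting the same sample at random and comparing group means of a covariate (here, age) yields differences that centre on zero. The sketch below is illustrative only.

```python
import random
import statistics

random.seed(1)
ages = [random.gauss(30, 8) for _ in range(100)]  # a roughly bell-shaped covariate

diffs = []
for _ in range(1000):  # many hypothetical randomizations of the same sample
    random.shuffle(ages)
    group1, group2 = ages[:50], ages[50:]
    diffs.append(statistics.mean(group1) - statistics.mean(group2))

# Mean-age differences between the random groups centre on zero:
# randomization balances the groups on average
print(round(statistics.mean(diffs), 3), round(statistics.stdev(diffs), 3))
```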

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might distort the experiment and prevent it from displaying the causal relationship; and
  • to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data is quite straightforward, but other measures, such as level of self-confidence in writing ability, increase in creativity, or increase in reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjectivity, such as rating data, testing, surveying, and content analysis.

Rating essentially means developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Covariance) tests to measure differences between control and experimental groups, as well as different correlations between groups.

Since we're mentioning the subject of statistics, note that experimental or quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure in statistical analysis and in experimental research.
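One common way to estimate that probability is a permutation test: repeatedly reshuffle the pooled scores into two arbitrary groups and count how often a difference at least as large as the observed one appears by chance. A minimal sketch with invented scores:

```python
import random
import statistics

treatment = [14.1, 15.2, 13.8, 16.0, 15.5]  # invented outcome scores
control = [12.9, 13.1, 14.0, 12.5, 13.4]

observed = statistics.mean(treatment) - statistics.mean(control)

pooled, n = treatment + control, len(treatment)
random.seed(0)
extreme = 0
for _ in range(10_000):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        extreme += 1

# Estimated probability of seeing a difference this large by chance alone
print(extreme / 10_000)
```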

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by getting a plant to go with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, using no fertilizer at all on it, to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same kind of plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one receives more shade than the other and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and say your methods were a success. You must show how successful. You must devise a scale by which you will evaluate the data you receive; therefore, you must decide what indicators will be, and will not be, important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract : Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction : Set the context of the experiment.
  • Review of Literature : Provide a review of the literature in the specific area of study to show what work has been done. Should lead directly to the author's purpose for the study.
  • Statement of Purpose : Present the problem to be studied.
  • Participants : Describe in detail participants involved in the study; e.g., how many, etc. Provide as much information as possible.
  • Materials and Procedures : Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned them, how they were conducted, how data were evaluated, etc.
  • Results : Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
  • Discussion : After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
  • Limitations : Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
  • Conclusion : Synthesize all of the above sections.
  • References : Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism is rigor. Rigor is established through close attention to randomizing groups, to the time spent on a study, and to questioning techniques. This allows the standards of quantitative research to be applied more effectively to qualitative research.

Often, teachers cannot wait for piles of experimentation data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English studies that might encourage use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.

Transferability-Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending on experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading results of experiments with a critical eye, ultimately decide if results will be implemented and how. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These replications will either strengthen the original study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider whether a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues by combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue which can be explored through experimentation and a look at causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through the scientific method is free of human inconsistencies. But, since the scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit an ability to be reflective. An ethical researcher thinks critically about results and reports those results after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of more precisely determining the individual effects of each variable. Determining the interactions between variables also becomes more feasible.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so that the experiment measures what researchers want to examine; therefore, the results are merely contrived products and have no bearing on material reality. Artificial results are difficult to apply in practical situations, making generalizing from the results of a controlled study questionable. Experimental research essentially first decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may be difficult to replicate.

Moreover, groups in an experiment may not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class which meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural-sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.
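
As a rough illustration of the small-sample concern raised above, a standard power calculation shows how many participants per group a simple two-group comparison would need. This is a hedged sketch using statsmodels; the medium effect size (Cohen's d = 0.5) is an assumption chosen purely for illustration.

```python
# Sketch: how many students per group are needed to detect a medium
# effect (Cohen's d = 0.5) with 80% power at alpha = 0.05?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05, power=0.8)
print(f"~{n_per_group:.0f} participants per group")
# Roughly 64 -- often more than a single class section can supply.
```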

When a human population is involved, experimental research raises the question of whether behavior can be predicted or studied with validity. Human response can be difficult to measure. Human behavior is dependent on individual responses. Rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see if this behavior will result in fewer cavities. We are relying on previous experimentation, and we are transferring that experimentation to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Other qualitative methods, such as case study, ethnography, observational research, and interviews, can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally, who screamed "I love writing!" ten times before she wrote her essay and produced a quality paper, so that all the other faculty members, having heard this anecdote, conclude that all other students should employ a similar technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. Readers of these results may not be aware of such biases and should approach experimentation with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

| Advantages | Disadvantages |
| --- | --- |
| Gain insight into methods of instruction | Subject to human error |
| Intuitive practice shaped by research | Personal bias of researcher may intrude |
| Teachers have bias but can be reflective | Sample may not be representative |
| Researcher can have control over variables | Can produce artificial results |
| Humans perform experiments anyway | Results may apply to only one situation and may be difficult to replicate |
| Can be combined with other research methods for rigor | Groups may not be comparable |
| Can be used to determine what is best for a population | Human response can be difficult to measure |
| Provides for greater transferability than anecdotal research | Political pressure may skew results |

Ethical Concerns

Experimental research may be manipulated at both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research, faced with naive readers of that research, encounter ethical concerns. While creating an experiment, researchers may let certain objectives and intended uses of the results drive and skew it: looking for specific results, they may ask questions and examine data that support only the desired conclusions, dismissing conflicting research findings.

Editors and journals, for their part, do not publish only trouble-free material. And as readers of experiments, members of the press might report selected and isolated parts of a study to the public, essentially transferring that data to the general population in a way the researcher may not have intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context. The actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers. And readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project to produce no significant results, it may be tempting to manipulate the data to show significant results in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain validity by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise if researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, in that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. Experimental researchers who hope to make an impact on the community of professionals in their field must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web page presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. Includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

This paper argues that the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: An Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation, with an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminacies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. Daniels examines Eisenberg's ideas on indeterminacy, methods, and evidence, what he opposes, and how we should regard his claims.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danziger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Researchers Dudley-Marling and Rhodes address some problems they encountered in their experimental approach to a study of reading comprehension. The article discusses the limitations of experimental research and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. Floden places high value on teacher discretion and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14 , 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

Aims of classroom-centered research on second-language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.

Hillocks conducted a study using three treatments: observational or data-collecting activities prior to writing, use of revisions or absence of same, and either brief or lengthy teacher comments, to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.

Researchers looked at one teacher candidate who participated in a class in which students designed their own research projects around questions they wanted answered in the teaching world. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of Sisyphus? Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanist. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are achieving the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.). Reading empirical research studies: The rhetoric of research . (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.

The lack of research in written expression is addressed, and an application of the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: becoming qualitative researchers and reflective practitioners. Teaching Education, 8 , 109-19.

An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals, but ultimately was rewarded with excitement about research and a recognized connection to practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26 .

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology . (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.

In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then goes on to outline the practicum in the Instructional Systems Program at Florida State University which includes: 1) Planning and conducting an experimental research study; 2) writing the manuscript describing the study; 3) giving an oral presentation in which they describe their research findings.

Report on education research . (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This is an independent bi-weekly newsletter on research in education and learning. It has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests that the concept of "scientific" should not be regarded in absolute terms, and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two methods of classroom instruction chosen by the teacher is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.

The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates the main features of an ongoing debate between advocates for using vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate between traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduate students.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78 , 356-363.

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (March 1969). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math , 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers, and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics; bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Barnes, Luann, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, & Mike Palmquist. (2005). Experimental and Quasi-Experimental Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=64

Experimental Design – Types, Methods, Guide

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.
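
Below is a minimal sketch of how such an assignment might be generated for a simple two-condition study; the participant IDs are hypothetical placeholders.

```python
# Completely randomized design: shuffle the participant pool, then
# split it evenly between the two conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs
random.shuffle(participants)
half = len(participants) // 2
groups = {"treatment": participants[:half],
          "control": participants[half:]}
print(groups)
```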

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
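
The sketch below illustrates the idea under simple assumptions: participants (hypothetical IDs) are first grouped into blocks by age band, then randomized to conditions within each block.

```python
# Randomized block design: randomize within each block so both
# conditions are represented evenly across the blocking characteristic.
import random
from collections import defaultdict

participants = [("P01", "18-25"), ("P02", "18-25"), ("P03", "26-40"),
                ("P04", "26-40"), ("P05", "18-25"), ("P06", "26-40")]

blocks = defaultdict(list)
for pid, age_band in participants:
    blocks[age_band].append(pid)

assignment = {}
for members in blocks.values():
    random.shuffle(members)
    for i, pid in enumerate(members):
        assignment[pid] = "treatment" if i % 2 == 0 else "control"
print(assignment)
```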

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
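
For a small number of treatments, complete counterbalancing simply rotates participants through every possible treatment order, as in this sketch; the treatment labels and participant IDs are placeholders.

```python
# Complete counterbalancing: generate all possible treatment orders
# and rotate participants through them so order effects cancel out.
from itertools import permutations

treatments = ["A", "B", "C"]
orders = list(permutations(treatments))  # 3! = 6 possible orders

participants = [f"P{i:02d}" for i in range(1, 13)]
schedule = {pid: orders[i % len(orders)]
            for i, pid in enumerate(participants)}
for pid, order in schedule.items():
    print(pid, "->", " then ".join(order))
```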

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Methods

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Methods

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
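
For example, Python's standard library can compute each of these measures directly; the scores below are hypothetical.

```python
# Descriptive statistics over a hypothetical set of scores.
import statistics

scores = [70, 72, 68, 75, 72, 80, 66, 72]
print("mean:  ", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode:  ", statistics.mode(scores))
print("range: ", max(scores) - min(scores))
print("stdev: ", statistics.stdev(scores))
```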

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
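
A minimal one-way ANOVA can be run in a few lines with scipy, as sketched below; the three groups and their values are hypothetical.

```python
# One-way ANOVA: do the means of three hypothetical groups differ?
from scipy import stats

group_a = [23, 25, 21, 24, 26]
group_b = [28, 30, 27, 29, 31]
group_c = [22, 20, 24, 23, 21]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```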

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
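
As a sketch of the simplest case, a linear regression of one variable on another can be fitted with scipy; the paired measurements here are hypothetical.

```python
# Simple linear regression: strength and direction of a linear
# relationship between two hypothetical variables.
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 14.2, 15.9]

result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, "
      f"intercept = {result.intercept:.2f}, "
      f"r^2 = {result.rvalue ** 2:.3f}")
```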

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.
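
A common instance is k-means clustering, sketched here with scikit-learn on hypothetical two-feature observations.

```python
# k-means cluster analysis: group similar observations together.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.8], [4.9, 5.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster labels:", kmeans.labels_)
print("cluster centers:\n", kmeans.cluster_centers_)
```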

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.
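
One hedged sketch of such a model, assuming hypothetical student scores nested within schools, fits a mixed-effects model with a random intercept per school via statsmodels; the data frame and column names are invented for illustration.

```python
# Multilevel (mixed-effects) model: student scores nested within
# schools, with a random intercept for each school. Data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":   [70, 75, 72, 80, 78, 82, 65, 68, 70],
    "treated": [0, 0, 1, 0, 1, 1, 0, 0, 1],
    "school":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

model = smf.mixedlm("score ~ treated", df, groups=df["school"])
result = model.fit()
print(result.summary())
```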

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture : Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology : Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering : Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education : Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing : Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, you can treat it as supported by the evidence; if they do not, the hypothesis is rejected. Strictly speaking, statistical tests never "prove" a hypothesis; they only indicate whether the data are consistent with it.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
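
To make these steps concrete, here is a minimal Python sketch of a hypothetical between-subjects experiment, from random assignment through analysis. Every name and number is invented for illustration, and the independent-samples t-test shown is only one of many analyses that could be appropriate.

```python
# A minimal, hypothetical walk-through of the steps above (Python).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Step 5: randomly assign 40 hypothetical participants to two groups.
shuffled = rng.permutation(np.arange(40))
treatment_ids, control_ids = shuffled[:20], shuffled[20:]

# Steps 6-7: in a real study you would now run the manipulation and
# measure the dependent variable; here we simulate the measurements.
treatment_scores = rng.normal(loc=75, scale=10, size=20)
control_scores = rng.normal(loc=70, scale=10, size=20)

# Step 8: test whether the group means differ significantly.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Step 9: draw a conclusion at the conventional alpha = .05 level.
print("Supported" if p_value < 0.05 else "Not supported")
```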

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. It allows researchers to systematically investigate, and ultimately establish, cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world conditions. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.



Experimental Research: Definition, Types, Design, Examples


Experimental research is a cornerstone of scientific inquiry, providing a systematic approach to understanding cause-and-effect relationships and advancing knowledge in various fields. At its core, experimental research involves manipulating variables, observing outcomes, and drawing conclusions based on empirical evidence. By controlling factors that could influence the outcome, researchers can isolate the effects of specific variables and make reliable inferences about their impact. This guide offers a step-by-step exploration of experimental research, covering key elements such as research design, data collection, analysis, and ethical considerations. Whether you're a novice researcher seeking to understand the basics or an experienced scientist looking to refine your experimental techniques, this guide will equip you with the knowledge and tools needed to conduct rigorous and insightful research.

What is Experimental Research?

Experimental research is a systematic approach to scientific inquiry that aims to investigate cause-and-effect relationships by manipulating independent variables and observing their effects on dependent variables. Experimental research primarily aims to test hypotheses, make predictions, and draw conclusions based on empirical evidence.

By controlling extraneous variables and randomizing participant assignment, researchers can isolate the effects of specific variables and establish causal relationships. Experimental research is characterized by its rigorous methodology, emphasis on objectivity, and reliance on empirical data to support conclusions.

Importance of Experimental Research

  • Establishing Cause-and-Effect Relationships: Experimental research allows researchers to establish causal relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. This provides valuable insights into the underlying mechanisms driving phenomena and informs theory development.
  • Testing Hypotheses and Making Predictions: Experimental research provides a structured framework for testing hypotheses and predicting the relationship between variables. By systematically manipulating variables and controlling for confounding factors, researchers can empirically test the validity of their hypotheses and refine theoretical models.
  • Informing Evidence-Based Practice: Experimental research generates empirical evidence that informs evidence-based practice in various fields, including healthcare, education, and business. Experimental research contributes to improving outcomes and informing decision-making in real-world settings by identifying effective interventions, treatments, and strategies.
  • Driving Innovation and Advancement: Experimental research drives innovation and advancement by uncovering new insights, challenging existing assumptions, and pushing the boundaries of knowledge. Through rigorous experimentation and empirical validation, researchers can develop novel solutions to complex problems and contribute to the advancement of science and technology.
  • Enhancing Research Rigor and Validity: Experimental research upholds high research rigor and validity standards by employing systematic methods, controlling for confounding variables, and ensuring replicability of findings. By adhering to rigorous methodology and ethical principles, experimental research produces reliable and credible evidence that withstands scrutiny and contributes to the cumulative body of knowledge.

Experimental research plays a pivotal role in advancing scientific understanding, informing evidence-based practice, and driving innovation across various disciplines. By systematically testing hypotheses, establishing causal relationships, and generating empirical evidence, experimental research contributes to the collective pursuit of knowledge and the improvement of society.

Understanding Experimental Design

Experimental design serves as the blueprint for your study, outlining how you'll manipulate variables and control factors to draw valid conclusions.

Experimental Design Components

Experimental design comprises several essential elements:

  • Independent Variable (IV): This is the variable manipulated by the researcher. It's what you change to observe its effect on the dependent variable. For example, in a study testing the impact of different study techniques on exam scores, the independent variable might be the study method (e.g., flashcards, reading, or practice quizzes).
  • Dependent Variable (DV): The dependent variable is what you measure to assess the effect of the independent variable. It's the outcome variable affected by the manipulation of the independent variable. In our study example, the dependent variable would be the exam scores.
  • Control Variables: These factors could influence the outcome but are kept constant or controlled to isolate the effect of the independent variable. Controlling variables helps ensure that any observed changes in the dependent variable can be attributed to manipulating the independent variable rather than other factors.
  • Experimental Group: This group receives the treatment or intervention being tested. It's exposed to the manipulated independent variable. In contrast, the control group does not receive the treatment and serves as a baseline for comparison.

Types of Experimental Designs

Experimental designs can vary based on the research question, the nature of the variables, and the desired level of control. Here are some common types:

  • Between-Subjects Design: In this design, different groups of participants are exposed to varying levels of the independent variable. Each group represents a different experimental condition, and participants are only exposed to one condition. For instance, in a study comparing the effectiveness of two teaching methods, one group of students would use Method A, while another would use Method B.
  • Within-Subjects Design: Also known as a repeated measures design, this approach involves exposing the same group of participants to all levels of the independent variable. Participants serve as their own controls, and the order of conditions is typically counterbalanced to control for order effects. For example, participants might be tested on their reaction times under different lighting conditions, with the order of conditions randomized to eliminate any research bias (see the sketch after this list).
  • Mixed Designs: Mixed designs combine elements of both between-subjects and within-subjects designs. This allows researchers to examine both between-group differences and within-group changes over time. Mixed designs help study complex phenomena that involve multiple variables and temporal dynamics.
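
As a quick illustration of the within-subjects logic, the sketch below compares reaction times for the same simulated participants under two lighting conditions using a paired t-test. The data and condition names are invented.

```python
# A minimal sketch of a within-subjects (repeated measures) comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

# Reaction times (ms) for the same 30 simulated participants under
# two conditions; each person contributes a score to both.
bright_light = rng.normal(300, 40, size=30)
dim_light = bright_light + rng.normal(15, 20, size=30)  # simulated slowing

# Paired t-test: each participant serves as their own control.
t_stat, p_value = stats.ttest_rel(dim_light, bright_light)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```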

Factors Influencing Experimental Design Choices

Several factors influence the selection of an appropriate experimental design:

  • Research Question: The nature of your research question will guide your choice of experimental design. Some questions may be better suited to between-subjects designs, while others may require a within-subjects approach.
  • Variables: Consider the number and type of variables involved in your study. A factorial design might be appropriate if you're interested in exploring multiple factors simultaneously. Conversely, if you're focused on investigating the effects of a single variable, a simpler design may suffice.
  • Practical Considerations: Practical constraints such as time, resources, and access to participants can impact your choice of experimental design. Depending on your study's specific requirements, some designs may be more feasible or cost-effective than others.
  • Ethical Considerations: Ethical concerns, such as the potential risks to participants or the need to minimize harm, should also inform your experimental design choices. Ensure that your design adheres to ethical guidelines and safeguards the rights and well-being of participants.

By carefully considering these factors and selecting an appropriate experimental design, you can ensure that your study is well-designed and capable of yielding meaningful insights.

Experimental Research Elements

When conducting experimental research, understanding the key elements is crucial for designing and executing a robust study. Let's explore each of these elements in detail to ensure your experiment is well-planned and executed effectively.

Independent and Dependent Variables

In experimental research, the independent variable (IV) is the factor that the researcher manipulates or controls, while the dependent variable (DV) is the measured outcome or response. The independent variable is what you change in the experiment to observe its effect on the dependent variable.

For example, in a study investigating the effect of different fertilizers on plant growth, the type of fertilizer used would be the independent variable, while the plant growth (height, number of leaves, etc.) would be the dependent variable.
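
A minimal sketch of how this fertilizer example might be analyzed, assuming three fertilizer types and simulated plant heights; a one-way ANOVA asks whether the mean of the dependent variable differs across levels of the independent variable.

```python
# IV: fertilizer type (three hypothetical levels). DV: plant height (cm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated heights for 15 plants per fertilizer; values are invented.
fertilizer_a = rng.normal(30, 4, size=15)
fertilizer_b = rng.normal(33, 4, size=15)
fertilizer_c = rng.normal(29, 4, size=15)

# One-way ANOVA across the three levels of the independent variable.
f_stat, p_value = stats.f_oneway(fertilizer_a, fertilizer_b, fertilizer_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```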

Control Groups and Experimental Groups

Control groups and experimental groups are essential components of experimental design. The control group serves as a baseline for comparison and does not receive the treatment or intervention being studied. Its purpose is to provide a reference point to assess the effects of the independent variable.

In contrast, the experimental group receives the treatment or intervention and is used to measure the impact of the independent variable. For example, in a drug trial, the control group would receive a placebo, while the experimental group would receive the actual medication.

Randomization and Random Sampling

Randomization is the process of randomly assigning participants to different experimental conditions to minimize biases and ensure that each participant has an equal chance of being assigned to any condition. Randomization helps control for extraneous variables and increases the study's internal validity.

Random sampling, on the other hand, involves selecting a representative sample from the population of interest to generalize the findings to the broader population. Random sampling ensures that each member of the population has an equal chance of being included in the sample, reducing the risk of sampling bias.
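
The distinction is easy to see in code. In this minimal sketch with a hypothetical population list, random sampling decides who enters the study, while randomization decides which condition each sampled participant receives.

```python
import random

random.seed(7)

population = [f"person_{i}" for i in range(1000)]  # hypothetical population

# Random sampling: draw a representative sample from the population.
sample = random.sample(population, k=60)

# Randomization: shuffle the sample and split it into two conditions,
# so every sampled participant has an equal chance of either assignment.
random.shuffle(sample)
treatment_group, control_group = sample[:30], sample[30:]
print(len(treatment_group), len(control_group))  # 30 30
```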

Replication and Reliability

Replication involves repeating the experiment to confirm the results and assess the reliability of the findings. It is essential for ensuring the validity of scientific findings and building confidence in the robustness of the results. A study that can be replicated consistently across different settings and by various researchers is considered more reliable. Researchers should strive to design experiments that are easily replicable and transparently report their methods to facilitate replication by others.

Validity: Internal, External, Construct, and Statistical Conclusion Validity

Validity refers to the degree to which an experiment measures what it intends to measure and the extent to which the results can be generalized to other populations or contexts. There are several types of validity that researchers should consider:

  • Internal Validity: Internal validity refers to the extent to which the study accurately assesses the causal relationship between variables. Internal validity is threatened by factors such as confounding variables, selection bias, and experimenter effects. Researchers can enhance internal validity through careful experimental design and control procedures.
  • External Validity: External validity refers to the extent to which the study's findings can be generalized to other populations or settings. External validity is influenced by factors such as the representativeness of the sample and the ecological validity of the experimental conditions. Researchers should consider the relevance and applicability of their findings to real-world situations.
  • Construct Validity: Construct validity refers to the degree to which the study accurately measures the theoretical constructs of interest. Construct validity is concerned with whether the operational definitions of the variables align with the underlying theoretical concepts. Researchers can establish construct validity through careful measurement selection and validation procedures.
  • Statistical Conclusion Validity: Statistical conclusion validity refers to the accuracy of the statistical analyses and conclusions drawn from the data. It ensures that the statistical tests used are appropriate for the data and that the conclusions drawn are warranted. Researchers should use robust statistical methods and report effect sizes and confidence intervals to enhance statistical conclusion validity.
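
Following the last point, reporting an effect size and a confidence interval alongside the p-value strengthens statistical conclusion validity. The sketch below computes Cohen's d and an approximate 95% confidence interval for a mean difference on simulated data; the pooled degrees of freedom are a simplification for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
group_a = rng.normal(52, 8, size=40)  # simulated scores
group_b = rng.normal(48, 8, size=40)

# Effect size: Cohen's d using the pooled standard deviation.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Approximate 95% CI for the difference in means.
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / 40 + group_b.var(ddof=1) / 40)
low, high = stats.t.interval(0.95, df=78, loc=diff, scale=se)

print(f"d = {cohens_d:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```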

By addressing these elements of experimental research and ensuring the validity and reliability of your study, you can conduct research that contributes meaningfully to the advancement of knowledge in your field.

How to Conduct Experimental Research?

Embarking on an experimental research journey involves a series of well-defined phases, each crucial for the success of your study. Let's explore the pre-experimental, experimental, and post-experimental phases to ensure you're equipped to conduct rigorous and insightful research.

Pre-Experimental Phase

The pre-experimental phase lays the foundation for your study, setting the stage for what's to come. Here's what you need to do:

  • Formulating Research Questions and Hypotheses: Start by clearly defining your research questions and formulating testable hypotheses. Your research questions should be specific, relevant, and aligned with your research objectives. Hypotheses provide a framework for testing the relationships between variables and making predictions about the outcomes of your study.
  • Reviewing Literature and Establishing Theoretical Framework: Dive into existing literature relevant to your research topic and establish a solid theoretical framework. A literature review helps you understand the current state of knowledge, identify research gaps, and build upon existing theories. A well-defined theoretical framework provides a conceptual basis for your study and guides your research design and analysis.

Experimental Phase

The experimental phase is where the magic happens – it's time to put your hypotheses to the test and gather data. Here's what you need to consider:

  • Participant Recruitment and Sampling Techniques: Carefully recruit participants for your study using appropriate sampling techniques. The sample should be representative of the population you're studying to ensure the generalizability of your findings. Consider factors such as sample size, demographics, and inclusion criteria when recruiting participants.
  • Implementing Experimental Procedures: Once you've recruited participants, it's time to implement your experimental procedures. Clearly outline the experimental protocol, including instructions for participants, procedures for administering treatments or interventions, and measures for controlling extraneous variables. Standardize your procedures to ensure consistency across participants and minimize sources of bias.
  • Data Collection and Measurement: Collect data using reliable and valid measurement instruments. Depending on your research questions and variables of interest, data collection methods may include surveys, observations, physiological measurements, or experimental tasks. Ensure that your data collection procedures are ethical, respectful of participants' rights, and designed to minimize errors and biases.

Post-Experimental Phase

In the post-experimental phase, you make sense of your data, draw conclusions, and communicate your findings to the world. Here's what you need to do:

  • Data Analysis Techniques: Analyze your data using appropriate statistical techniques. Choose methods that are aligned with your research design and hypotheses. Standard statistical analyses include descriptive statistics, inferential statistics (e.g., t-tests, ANOVA), regression analysis, and correlation analysis. Interpret your findings in the context of your research questions and theoretical framework (a minimal example follows this list).
  • Interpreting Results and Drawing Conclusions: Once you've analyzed your data, interpret the results and draw conclusions. Discuss the implications of your findings, including any theoretical, practical, or real-world implications. Consider alternative explanations and limitations of your study and propose avenues for future research. Be transparent about the strengths and weaknesses of your study to enhance the credibility of your conclusions.
  • Reporting Findings: Finally, communicate your findings through research reports, academic papers, or presentations. Follow standard formatting guidelines and adhere to ethical standards for research reporting. Clearly articulate your research objectives, methods, results, and conclusions. Consider your target audience and choose appropriate channels for disseminating your findings to maximize impact and reach.
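
As one concrete instance of the inferential statistics listed above, here is a minimal sketch of a chi-square test of independence, which suits designs where both the condition and the outcome are categorical. The counts are hypothetical.

```python
from scipy import stats

# Rows: condition (treatment, control); columns: outcome (improved, not).
observed = [[30, 10],
            [18, 22]]

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```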


By meticulously planning and executing each experimental research phase, you can generate valuable insights, advance knowledge in your field, and contribute to scientific progress.


Experimental Research Examples

Understanding how experimental research is applied in various contexts can provide valuable insights into its practical significance and effectiveness. Here are some examples illustrating the application of experimental research in different domains:

Market Research

Experimental studies are crucial in market research for testing hypotheses, evaluating marketing strategies, and understanding consumer behavior. For example, a company may conduct an experiment to determine the most effective advertising message for a new product. Participants could be exposed to different versions of an advertisement, each emphasizing different product features or appeals.

By measuring variables such as brand recall, purchase intent, and brand perception, researchers can assess the impact of each advertising message and identify the most persuasive approach.

Software as a Service (SaaS)

In the SaaS industry, experimental research is often used to optimize user interfaces, features, and pricing models to enhance user experience and drive engagement. For instance, a SaaS company may conduct A/B tests to compare two versions of its software interface, each with a different layout or navigation structure.

Researchers can identify design elements that lead to higher user satisfaction and retention by tracking user interactions, conversion rates, and customer feedback. Experimental research also enables SaaS companies to test new product features or pricing strategies before full-scale implementation, minimizing risks and maximizing return on investment.
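
A minimal sketch of how such an A/B test might be analyzed, assuming a simple conversion metric and hypothetical counts: a two-proportion z-test via statsmodels compares the conversion rates of the two variants.

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 95]    # hypothetical converters in variants A and B
visitors = [1000, 1000]    # hypothetical users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```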

Business Management

Experimental research is increasingly utilized in business management to inform decision-making, improve organizational processes, and drive innovation. For example, a business may conduct an experiment to evaluate the effectiveness of a new training program on employee productivity. Participants could be randomly assigned to either receive the training or serve as a control group.

By measuring performance metrics such as sales revenue, customer satisfaction, and employee turnover, researchers can assess the training program's impact and determine its return on investment. Experimental research in business management provides empirical evidence to support strategic initiatives and optimize resource allocation.

Healthcare

In healthcare, experimental research is instrumental in testing new treatments, interventions, and healthcare delivery models to improve patient outcomes and quality of care. For instance, a clinical trial may be conducted to evaluate the efficacy of a new drug in treating a specific medical condition. Participants are randomly assigned to either receive the experimental drug or a placebo, and their health outcomes are monitored over time.

By comparing the effectiveness of the treatment and placebo groups, researchers can determine the drug's efficacy, safety profile, and potential side effects. Experimental research in healthcare informs evidence-based practice and drives advancements in medical science and patient care.

These examples illustrate the versatility and applicability of experimental research across diverse domains, demonstrating its value in generating actionable insights, informing decision-making, and driving innovation. Whether in market research or healthcare, experimental research provides a rigorous and systematic approach to testing hypotheses, evaluating interventions, and advancing knowledge.

Experimental Research Challenges

Even with careful planning and execution, experimental research can present various challenges. Understanding these challenges and implementing effective solutions is crucial for ensuring the validity and reliability of your study. Here are some common challenges and strategies for addressing them.

Sample Size and Statistical Power

Challenge: Inadequate sample size can limit your study's generalizability and statistical power, making it difficult to detect meaningful effects. Small sample sizes increase the risk of Type II errors (false negatives) and reduce the reliability of your findings.

Solution: Increase your sample size to improve statistical power and enhance the robustness of your results. Conduct a power analysis before starting your study to determine the minimum sample size required to detect the effects of interest with sufficient power. Consider factors such as effect size, alpha level, and desired power when calculating sample size requirements. Additionally, consider resampling techniques such as bootstrapping to better characterize the uncertainty of estimates from small samples.
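
A minimal sketch of such a power analysis, assuming an independent-samples t-test design: statsmodels solves for the per-group sample size needed to detect a hypothetical medium effect (d = 0.5) with 80% power at alpha = .05.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the one unknown (sample size per group).
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized effect (Cohen's d)
    power=0.8,        # desired power
    alpha=0.05,       # significance level
)
print(f"Required sample size per group: about {n_per_group:.0f}")
```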


Confounding Variables and Bias

Challenge: Confounding variables are extraneous factors that co-vary with the independent variable and can distort the relationship between the independent and dependent variables. Confounding variables threaten the internal validity of your study and can lead to erroneous conclusions.

Solution: Implement control measures to minimize the influence of confounding variables on your results. Random assignment of participants to experimental conditions helps distribute confounding variables evenly across groups, reducing their impact on the dependent variable. Additionally, consider using matching or blocking techniques to ensure that groups are comparable on relevant variables (see the sketch below). Conduct sensitivity analyses to assess the robustness of your findings to potential confounders and explore alternative explanations for your results.
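
One concrete version of the blocking technique is block (stratified) randomization: randomize to conditions separately within each level of a known confounder so that it ends up balanced across groups. In this minimal sketch, the age-group variable and all values are hypothetical.

```python
import random
from collections import defaultdict

random.seed(11)

# 40 hypothetical participants with a known potential confounder.
participants = [
    {"id": i, "age_group": random.choice(["young", "older"])}
    for i in range(40)
]

# Group participants by the stratification variable...
strata = defaultdict(list)
for p in participants:
    strata[p["age_group"]].append(p)

# ...then randomize to conditions separately within each stratum,
# so the confounder is balanced across treatment and control.
assignments = {}
for stratum in strata.values():
    random.shuffle(stratum)
    half = len(stratum) // 2
    for p in stratum[:half]:
        assignments[p["id"]] = "treatment"
    for p in stratum[half:]:
        assignments[p["id"]] = "control"
```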

Researcher Effects and Experimenter Bias

Challenge: Researcher effects and experimenter bias occur when the experimenter's expectations or actions inadvertently influence the study's outcomes. This bias can manifest through subtle cues, unintentional behaviors, or unconscious biases, leading to invalid conclusions.

Solution: Implement double-blind procedures whenever possible to mitigate researcher effects and experimenter bias. Double-blind designs conceal information about the experimental conditions from both the participants and the experimenters, minimizing the potential for bias. Standardize experimental procedures and instructions to ensure consistency across conditions and minimize experimenter variability. Additionally, consider using objective outcome measures or automated data collection procedures to reduce the influence of experimenter bias on subjective assessments.

External Validity and Generalizability

Challenge: External validity refers to the extent to which your study's findings can be generalized to other populations, settings, or conditions. Limited external validity restricts the applicability of your results and may hinder their relevance to real-world contexts.

Solution: Enhance external validity by designing studies closely resembling real-world conditions and populations of interest. Consider using diverse samples that represent the target population's demographic, cultural, and ecological variability. Conduct replication studies in different contexts or with different populations to assess the robustness and generalizability of your findings. Additionally, consider conducting meta-analyses or systematic reviews to synthesize evidence from multiple studies and enhance the external validity of your conclusions.

By proactively addressing these challenges and implementing effective solutions, you can strengthen the validity, reliability, and impact of your experimental research. Remember to remain vigilant for potential pitfalls throughout the research process and adapt your strategies as needed to ensure the integrity of your findings.

Advanced Topics in Experimental Research

As you delve deeper into experimental research, you'll encounter advanced topics and methodologies that offer greater complexity and nuance.

Quasi-Experimental Designs

Quasi-experimental designs resemble true experiments but lack random assignment to experimental conditions. They are often used when random assignment is impractical, unethical, or impossible. Quasi-experimental designs allow researchers to investigate cause-and-effect relationships in real-world settings where strict experimental control is challenging. Common examples include:

  • Non-Equivalent Groups Design: This design compares two or more groups that were not created through random assignment. While similar to between-subjects designs, non-equivalent group designs lack the random assignment of participants, increasing the risk of confounding variables.
  • Interrupted Time Series Design: In this design, multiple measurements are taken over time before and after an intervention is introduced. Changes in the dependent variable are assessed over time, allowing researchers to infer the impact of the intervention.
  • Regression Discontinuity Design: This design involves assigning participants to different groups based on a cutoff score on a continuous variable. Participants just above and below the cutoff are treated as if they were randomly assigned to different conditions, allowing researchers to estimate causal effects.

Quasi-experimental designs offer valuable insights into real-world phenomena but require careful consideration of potential confounding variables and limitations inherent to non-random assignment.

Factorial Designs

Factorial designs involve manipulating two or more independent variables simultaneously to examine their main effects and interactions. By systematically varying multiple factors, factorial designs allow researchers to explore complex relationships between variables and identify how they interact to influence outcomes. Common types of factorial designs include:

  • 2x2 Factorial Design: This design manipulates two independent variables, each with two levels. It allows researchers to examine the main effects of each variable as well as any interaction between them.
  • Mixed Factorial Design: In this design, one independent variable is manipulated between subjects, while another is manipulated within subjects. Mixed factorial designs enable researchers to investigate both between-subjects and within-subjects effects simultaneously.

Factorial designs provide a comprehensive understanding of how multiple factors contribute to outcomes and offer greater statistical efficiency compared to studying variables in isolation.
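
A minimal sketch of analyzing a 2x2 factorial design, assuming a linear model with two main effects and their interaction: simulated data are fit with statsmodels and summarized in a two-way ANOVA table. All effect sizes here are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(seed=5)
n = 20  # observations per cell of the 2x2 design

data = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 2 * n),
    "B": np.tile(np.repeat(["b1", "b2"], n), 2),
})
# Simulate an outcome with two main effects and a small interaction.
data["y"] = (
    rng.normal(50, 5, size=4 * n)
    + (data["A"] == "a2") * 3.0
    + (data["B"] == "b2") * 2.0
    + ((data["A"] == "a2") & (data["B"] == "b2")) * 1.5
)

# Fit main effects plus the A x B interaction, then summarize.
model = smf.ols("y ~ A * B", data=data).fit()
print(anova_lm(model, typ=2))
```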

Longitudinal and Cross-Sectional Studies

Longitudinal studies involve collecting data from the same participants over an extended period, allowing researchers to observe changes and trajectories over time. Cross-sectional studies, on the other hand, involve collecting data from different participants at a single point in time, providing a snapshot of the population at that moment. Both longitudinal and cross-sectional studies offer unique advantages and challenges:

  • Longitudinal Studies: Longitudinal designs allow researchers to examine developmental processes, track changes over time, and identify causal relationships. However, longitudinal studies require long-term commitment, are susceptible to attrition and dropout, and may be subject to practice effects and cohort effects.
  • Cross-Sectional Studies: Cross-sectional designs are relatively quick and cost-effective, provide a snapshot of population characteristics, and allow for comparisons across different groups. However, cross-sectional studies cannot assess changes over time or establish causal relationships between variables.

Researchers should carefully consider the research question, objectives, and constraints when choosing between longitudinal and cross-sectional designs.

Meta-Analysis and Systematic Reviews

Meta-analysis and systematic reviews are quantitative methods used to synthesize findings from multiple studies and draw robust conclusions. These methods offer several advantages:

  • Meta-Analysis: Meta-analysis combines the results of multiple studies using statistical techniques to estimate overall effect sizes and assess the consistency of findings across studies. Meta-analysis increases statistical power, enhances generalizability, and provides more precise estimates of effect sizes.
  • Systematic Reviews: Systematic reviews involve systematically searching, appraising, and synthesizing existing literature on a specific topic. Systematic reviews provide a comprehensive summary of the evidence, identify gaps and inconsistencies in the literature, and inform future research directions.

Meta-analysis and systematic reviews are valuable tools for evidence-based practice, guiding policy decisions, and advancing scientific knowledge by aggregating and synthesizing empirical evidence from diverse sources.
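
As a minimal illustration of the inverse-variance logic behind meta-analysis, the sketch below pools effect sizes from four hypothetical studies under a fixed-effect model. Real meta-analyses typically also assess heterogeneity and often use random-effects models.

```python
import numpy as np

effects = np.array([0.42, 0.31, 0.55, 0.20])    # hypothetical study effects
variances = np.array([0.04, 0.09, 0.06, 0.05])  # their sampling variances

# Fixed-effect pooling: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```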

By exploring these advanced topics in experimental research, you can expand your methodological toolkit, tackle more complex research questions, and contribute to deeper insights and understanding in your field.

Experimental Research Ethical Considerations

When conducting experimental research, it's imperative to uphold ethical standards and prioritize the well-being and rights of participants. Here are some key ethical considerations to keep in mind throughout the research process:

  • Informed Consent: Obtain informed consent from participants before they participate in your study. Ensure that participants understand the purpose of the study, the procedures involved, any potential risks or benefits, and their right to withdraw from the study at any time without penalty.
  • Protection of Participants' Rights: Respect participants' autonomy, privacy, and confidentiality throughout the research process. Safeguard sensitive information and ensure that participants' identities are protected. Be transparent about how their data will be used and stored.
  • Minimizing Harm and Risks: Take steps to mitigate any potential physical or psychological harm to participants. Conduct a risk assessment before starting your study and implement appropriate measures to reduce risks. Provide support services and resources for participants who may experience distress or adverse effects as a result of their participation.
  • Confidentiality and Data Security: Protect participants' privacy and ensure the security of their data. Use encryption and secure storage methods to prevent unauthorized access to sensitive information. Anonymize data whenever possible to minimize the risk of data breaches or privacy violations.
  • Avoiding Deception: Minimize the use of deception in your research and ensure that any deception is justified by the scientific objectives of the study. If deception is necessary, debrief participants fully at the end of the study and provide them with an opportunity to withdraw their data if they wish.
  • Respecting Diversity and Cultural Sensitivity: Be mindful of participants' diverse backgrounds, cultural norms, and values. Avoid imposing your own cultural biases on participants and ensure that your research is conducted in a culturally sensitive manner. Seek input from diverse stakeholders to ensure your research is inclusive and respectful.
  • Compliance with Ethical Guidelines: Familiarize yourself with relevant ethical guidelines and regulations governing research with human participants, such as those outlined by institutional review boards (IRBs) or ethics committees. Ensure that your research adheres to these guidelines and that any potential ethical concerns are addressed appropriately.
  • Transparency and Openness: Be transparent about your research methods, procedures, and findings. Clearly communicate the purpose of your study, any potential risks or limitations, and how participants' data will be used. Share your research findings openly and responsibly, contributing to the collective body of knowledge in your field.

By prioritizing ethical considerations in your experimental research, you demonstrate integrity, respect, and responsibility as a researcher, fostering trust and credibility in the scientific community.

Conclusion for Experimental Research

Experimental research is a powerful tool for uncovering causal relationships and expanding our understanding of the world around us. By carefully designing experiments, collecting data, and analyzing results, researchers can make meaningful contributions to their fields and address pressing questions.

However, conducting experimental research comes with responsibilities. Ethical considerations are paramount to ensure the well-being and rights of participants, as well as the integrity of the research process. Researchers can build trust and credibility in their work by upholding ethical standards and prioritizing participant safety and autonomy.

Furthermore, as you continue to explore and innovate in experimental research, you must remain open to new ideas and methodologies. Embracing diversity in perspectives and approaches fosters creativity and innovation, leading to breakthrough discoveries and scientific advancements. By promoting collaboration and sharing findings openly, we can collectively push the boundaries of knowledge and tackle some of society's most pressing challenges.



Mastering Research: The Principles of Experimental Design


In a world overflowing with information and data, how do we differentiate between mere observation and genuine knowledge? The answer lies in the realm of experimental design. At its core, experimental design is a structured method used to investigate the relationships between different variables. It's not merely about collecting data, but about ensuring that this data is reliable, valid, and can lead to meaningful conclusions.

The significance of a well-structured research process cannot be overstated. From medical studies determining the efficacy of a new drug, to businesses testing a new marketing strategy, or environmental scientists assessing the impact of climate change on a specific ecosystem – a robust experimental design serves as the backbone. Without it, we run the risk of drawing flawed conclusions or making decisions based on erroneous or biased information.

The beauty of experimental design is its universality. It's a tool that transcends disciplines, bringing rigor and credibility to investigations across fields. Whether you're in the world of biotechnology, finance, psychology, or countless other domains, understanding the tenets of experimental design will ensure that your inquiries are grounded in sound methodology, paving the way for discoveries that can shape industries and change lives.

How experimental design has evolved over time

Delving into the annals of scientific history, we find that experimental design, as a formalized discipline, is relatively young. However, the spirit of experimentation is ancient, woven deeply into the fabric of human curiosity. As early as Ancient Greece, rudimentary experimental methods were employed to understand natural phenomena. Yet, the structured approach we recognize today took centuries to develop.

The Renaissance era witnessed a surge in scientific curiosity and methodical investigation. This period marked a shift from reliance on anecdotal evidence and dogmatic beliefs to empirical observation. Notably, Sir Francis Bacon, during the early 17th century, championed the empirical method, emphasizing the need for systematic data collection and analysis.

But it was during the late 19th and early 20th centuries that the discipline truly began to crystallize. The burgeoning fields of psychology, agriculture, and biology demanded rigorous methods to validate their findings. The introduction of statistical methods and controlled experiments in agricultural research set a benchmark for research methodologies across various disciplines.

From its embryonic stages of simple observation to the sophisticated, statistically driven methodologies of today, experimental design has been shaped by the demands of the times and the relentless pursuit of truth by generations of researchers. It has evolved from mere intuition-based inquiries to a framework of control, randomization, and replication, ensuring that our conclusions stand up to the strictest scrutiny.

Key figures and their contributions

When charting the evolution of experimental design, certain luminaries stand tall, casting long shadows of influence that still shape the field today. Let's delve into a few of these groundbreaking figures:

  • Ronald A. Fisher. Contribution: Often heralded as the father of modern statistics, Fisher introduced many concepts that form the backbone of experimental design. His work in the 1920s and 1930s laid the groundwork for the design of experiments. Legacy: Fisher's introduction of the randomized controlled trial, analysis of variance (ANOVA), and the principle of maximum likelihood estimation revolutionized statistics and experimental methodology. His book, The Design of Experiments, remains a classic reference in the field.
  • Karl Pearson. Contribution: A prolific figure in the world of statistics, Pearson developed the method of moments, laying the foundation for many statistical tests. Legacy: Pearson's chi-squared test is one of the many techniques he introduced, which researchers still widely use today to test the independence of categorical variables.
  • Jerzy Neyman and Egon Pearson. Contribution: Together, they conceptualized the framework for the theory of hypothesis testing, which is a staple in modern experimental design. Legacy: Their delineation of Type I and Type II errors and the introduction of confidence intervals have become fundamental concepts in statistical inference.
  • Florence Nightingale. Contribution: While better known as a nursing pioneer, Nightingale was also a gifted statistician. She employed statistics and well-designed charts to advocate for better medical practices and hygiene during the Crimean War. Legacy: Nightingale's application of statistical methods to health underscores the importance of data in decision-making processes and set a precedent for evidence-based health policies.
  • George E. P. Box. Contribution: Box made significant strides in the areas of quality control and time series analysis. Legacy: The Box-Jenkins (or ARIMA) model for time series forecasting and the Box-Behnken designs for response surface methodology are testaments to his lasting influence in both experimental design and statistical forecasting.

These trailblazers, among many others, transformed experimental design from a nascent field of inquiry into a robust and mature discipline. Their innovations continue to guide researchers and inform methodologies, bridging the gap between curiosity and concrete understanding.

Randomization: ensuring each subject has an equal chance of being in any group

Randomization is the practice of allocating subjects or experimental units to different groups or conditions entirely by chance. This means each participant, or experimental unit, has an equal likelihood of being assigned to any specific group or condition.

Why is this method of assignment held in such high regard, and why is it so fundamental to the research process? Let's delve into the pivotal role randomization plays and its overarching importance in maintaining the rigor of experimental endeavors.

  • Eliminating Bias: By allocating subjects randomly, we prevent any unintentional bias in group assignments. This ensures that the groups are more likely to be comparable in all major respects. Without randomization, researchers might, even inadvertently, assign certain types of participants to one group over another, leading to skewed results.
  • Balancing Unknown Factors: There are always lurking variables that researchers might be unaware of or unable to control. Randomization helps ensure that these unobserved or uncontrolled variables are equally distributed across groups, keeping their influence roughly even across conditions.
  • Foundation for Statistical Analysis: Randomization is the bedrock upon which much of statistical inference is built. It allows researchers to make probabilistic statements about the outcomes of their studies. Without randomization, many of the statistical tools employed in analyzing experimental results would be inappropriate or invalid.
  • Enhancing External Validity: A randomized study increases the chances that the results are generalizable to a broader population. Strictly, this benefit comes from random selection of participants rather than random assignment; when the two are combined, the findings can often be extrapolated to similar groups outside the study.

While randomization is a powerful tool, it's not without its challenges. For instance, in smaller samples, randomization might not always guarantee perfectly balanced groups. Moreover, in some contexts, like when studying the effects of a surgical technique, randomization might be ethically challenging.

Nevertheless, in the grand scheme of experimental design, randomization remains a gold standard. It's a bulwark against biases, both known and unknown, ensuring that research conclusions are drawn from a foundation of fairness and rigor.
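
Randomization's role as the foundation for statistical analysis can be made tangible with a permutation test, which re-randomizes the group labels to ask how often a difference as large as the observed one would arise by chance alone. This is a minimal sketch on simulated data.

```python
import numpy as np

rng = np.random.default_rng(seed=8)
treatment = rng.normal(12, 3, size=25)  # simulated outcomes
control = rng.normal(10, 3, size=25)

observed_diff = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])

# Re-randomize the labels many times; count differences at least as
# extreme as the one actually observed.
extreme = 0
n_permutations = 10_000
for _ in range(n_permutations):
    rng.shuffle(pooled)
    diff = pooled[:25].mean() - pooled[25:].mean()
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"Permutation p-value: {extreme / n_permutations:.4f}")
```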

Replication: repeating the experiment to ensure results are consistent

At its essence, replication involves conducting an experiment again, under the same conditions, to verify its results. It's like double-checking your math on a complex equation—reassuring yourself and others that the outcome is consistent and not just a random occurrence or due to unforeseen errors.

So, what makes this practice of repetition so indispensable to the research realm? Let's delve deeper into the role replication plays in solidifying and authenticating scientific insights.

  • Verifying Results: Even with the most rigorous experimental designs, errors can creep in, or unusual random events can skew results. Replicating an experiment helps confirm that the findings are genuine and not a result of such anomalies.
  • Reducing Uncertainty: Every experiment comes with a degree of uncertainty. By replicating the study, this uncertainty can be reduced, providing a clearer picture of the phenomenon under investigation.
  • Uncovering Variability: Results can vary due to numerous reasons—slight differences in conditions, experimental materials, or even the subjects themselves. Replication can help identify and quantify this variability, lending more depth to the understanding of results.
  • Building Scientific Consensus: Replication is fundamental in building trust within the scientific community. When multiple researchers, possibly across different labs or even countries, reproduce the same results, it strengthens the validity of the findings.
  • Enhancing Generalizability: Repeated experiments, especially when performed in different locations or with diverse groups, can ensure that the results apply more broadly and are not confined to specific conditions or populations.

While replication is a robust tool in the researcher's arsenal, it isn't always straightforward. Sometimes, especially in fields like psychology or medicine, replicating the exact conditions of the original study can be challenging. Furthermore, in our age of rapid publication, there might be a bias towards novel findings rather than repeated studies, potentially undervaluing the importance of replication.

In conclusion, replication stands as a sentinel of validity in experimental design. While one experiment can shed light on a phenomenon, it's the repeated and consistent results that truly illuminate our understanding, ensuring that what we believe is based not on fleeting chance but on reliable and consistent evidence.
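As a rough illustration of how replication narrows uncertainty, the following sketch simulates a small two-group experiment many times over; the effect size, sample size, and noise level are all invented for the example.

```python
import random
import statistics

random.seed(7)

def run_experiment(true_effect=0.5, n=30):
    """Simulate one experiment and return the observed group difference."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# A single run can land far from the true effect; 100 replications reveal
# both the central tendency and the run-to-run variability.
estimates = [run_experiment() for _ in range(100)]
print(f"Single run:               {run_experiment():.3f}")
print(f"Mean of 100 replications: {statistics.mean(estimates):.3f}")
print(f"Spread (std. dev.):       {statistics.stdev(estimates):.3f}")
```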

Control: keeping other variables constant while testing the variable of interest

In its simplest form, control means keeping all factors and conditions, save for the variable being studied, consistent and unchanged. It's akin to setting a stage where everything remains static, allowing the spotlight to shine solely on the lead actor: our variable of interest.

What exactly elevates this principle to such a paramount position in the scientific realm? Let's unpack the fundamental reasons that underscore the indispensability of control in experimental design.

  • Isolating the Variable of Interest: With numerous factors potentially influencing an experiment, it's crucial to ensure that the observed effects result solely from the variable being studied. Control aids in achieving this isolation, ensuring that extraneous variables don't cloud the results.
  • Eliminating Confounding Effects: Without proper control, other variables might interact with the variable of interest, leading to misleading or confounded outcomes. By keeping everything else constant, control ensures the purity of results.
  • Enhancing the Credibility of Results: When an experiment is well-controlled, its results become more trustworthy. It demonstrates that the researcher has accounted for potential disturbances, leading to a more precise understanding of the relationship between variables.
  • Facilitating Replication: A well-controlled experiment provides a consistent framework, making it easier for other researchers to replicate the study and validate its findings.
  • Aiding in Comparisons: By ensuring that all other variables remain constant, control allows for a clearer comparison between different experimental groups or conditions.

Maintaining strict control is not always feasible, especially in field experiments or when dealing with complex systems. In such cases, researchers often rely on statistical controls or randomization to account for the influence of extraneous variables.

In the grand tapestry of experimental research, control serves as the stabilizing thread, ensuring that the patterns we observe are genuine reflections of the variable under scrutiny. It's a testament to the meticulous nature of scientific inquiry, underscoring the need for precision and care in every step of the experimental journey.

Completely randomized design

The Completely Randomized Design (CRD) is an experimental setup where all the experimental units (e.g., participants, plants, animals) are allocated to different groups entirely by chance. There's no stratification, clustering, or blocking. In essence, every unit has an equal opportunity to be assigned to any group.

Here are the advantages that make it a favored choice for many researchers:

  • Simplicity: CRD is easy to understand and implement, making it suitable for experiments where the primary goal is to compare the effects of different conditions or interventions without considering other complicating factors.
  • Flexibility: Since the only criterion is random assignment, CRD can be employed in various experimental scenarios, irrespective of the number of conditions or experimental units.
  • Statistical Robustness: Due to its random nature, the CRD is amenable to many statistical analyses. When the assumptions of independence, normality, and equal variances are met, CRD allows for straightforward application of techniques like ANOVA to discern the effects of different conditions.

However, like any tool in the research toolkit, the Completely Randomized Design doesn't come without its caveats. It's crucial to acknowledge the limitations and considerations that accompany CRD, ensuring that its application is both judicious and informed.

  • Efficiency: In situations where there are recognizable subgroups or blocks within the experimental units, a CRD might not be the most efficient design. Variability within blocks could overshadow the effects of different conditions.
  • Environmental Factors: If the experimental units are spread across different environments or conditions, these uncontrolled variations might confound the effects being studied, leading to less precise or even misleading conclusions.
  • Size: In cases where the sample size is small, simple randomization can yield groups that are unbalanced in size or composition, potentially reducing the power of the study.

The Completely Randomized Design stands as a testament to the power of randomness in experimental research. While it might not be the best fit for every scenario, especially when there are known sources of variability, it offers a robust and straightforward approach for many research questions. As with all experimental designs, the key is to understand its strengths and limitations, applying it judiciously based on the specifics of the research at hand.
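A minimal sketch of a CRD analysis, assuming three hypothetical treatments and simulated responses, might look like the following; the one-way ANOVA step uses SciPy's `f_oneway`, in line with the ANOVA application mentioned above.

```python
import random
from scipy.stats import f_oneway

random.seed(1)

# 24 hypothetical experimental units assigned to 3 groups entirely by chance.
units = list(range(24))
random.shuffle(units)
groups = [units[i::3] for i in range(3)]

# Simulated responses: each treatment shifts the group mean by a set amount.
true_effects = [0.0, 0.5, 1.0]
responses = [[random.gauss(true_effects[g], 1) for _ in group]
             for g, group in enumerate(groups)]

# One-way ANOVA asks whether the observed group means differ more than
# chance alone would explain.
f_stat, p_value = f_oneway(*responses)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```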

Randomized block design

The Randomized Block Design (RBD) is an experimental configuration where units are first divided into blocks or groups based on some inherent characteristic or source of variability. Within these blocks, units are then randomly assigned to different conditions or categories. Essentially, it's a two-step process: first, grouping similar units, and then, randomizing assignments within these groups.

Here are the positive attributes of the Randomized Block Design that underscore its value in experimental research:

  • Control Over Variability: By grouping similar experimental units into blocks, RBD effectively reduces the variability that might otherwise confound the results. This enhances the experiment's power and precision.
  • More Accurate Comparisons: Since conditions are randomized within blocks of similar units, comparisons between different effects become more accurate and meaningful.
  • Flexibility: RBD can be employed in scenarios with any number of conditions and blocks. Its flexible nature makes it suitable for diverse experimental needs.

While the merits of the Randomized Block Design are widely recognized, understanding its potential limitations and considerations is paramount to ensure that research outcomes are both insightful and grounded in reality:

  • Complexity: Designing and analyzing an RBD can be more complex than simpler designs like CRD. It requires careful consideration of how to define blocks and how to randomize conditions within them.
  • Assumption of Homogeneity: RBD assumes that the variability within blocks is less than the variability between them. If this assumption is violated, the design might lose its efficiency.
  • Increased Sample Size: To maintain power, RBD might necessitate a larger sample size, especially if there are numerous blocks.

The Randomized Block Design stands as an exemplary method to combine the best of both worlds: the robustness of randomization and the sensitivity to inherent variability. While it might demand more meticulous planning and design, its capacity to deliver more refined insights makes it a valuable tool in the realm of experimental research.
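The two-step logic of blocking and then randomizing within blocks can be sketched in a few lines of Python; the block and unit names here are hypothetical placeholders.

```python
import random

random.seed(11)
treatments = ["A", "B", "C"]

# Step 1: group similar units into blocks (e.g., plots in the same field).
blocks = {
    "field_1": ["plot_1", "plot_2", "plot_3"],
    "field_2": ["plot_4", "plot_5", "plot_6"],
    "field_3": ["plot_7", "plot_8", "plot_9"],
}

# Step 2: within each block, assign every treatment in a random order.
for block, units in blocks.items():
    order = treatments[:]
    random.shuffle(order)
    for unit, treatment in zip(units, order):
        print(f"{block} / {unit}: treatment {treatment}")
```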

Factorial design

A factorial design is an experimental setup where two or more independent variables, or factors, are simultaneously tested, not only for their individual effects but also for their combined or interactive effects. If you imagine an experiment where two factors are varied at two levels each, you would have a 2x2 factorial design, resulting in four unique experimental conditions.
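To see how the condition count arises, here is a tiny sketch that enumerates the cells of a hypothetical 2x2 design; the factor names are invented for illustration.

```python
from itertools import product

# Two hypothetical factors, each at two levels: a 2x2 factorial design.
dose = ["low", "high"]
timing = ["morning", "evening"]

for i, (d, t) in enumerate(product(dose, timing), start=1):
    print(f"Condition {i}: dose={d}, timing={t}")
# Prints four unique conditions, matching the 2x2 count described above.
```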

Here are the advantages you should consider regarding this methodology:

  • Efficiency: Instead of conducting separate experiments for each factor, researchers can study multiple factors in a single experiment, conserving resources and time.
  • Comprehensive Insights: Factorial designs allow for the exploration of interactions between factors. This is crucial because in real-world situations, factors often don't operate in isolation.
  • Generalizability: By varying multiple factors simultaneously, the results tend to be more generalizable across a broader range of conditions.
  • Optimization: By revealing how factors interact, factorial designs can guide practitioners in optimizing conditions for desired outcomes.

No methodology is without its nuances, and while factorial designs boast numerous strengths, they come with their own set of limitations and considerations:

  • Complexity: As the number of factors or levels increases, the design can become complex, demanding more experimental units and potentially complicating data analysis.
  • Potential for Confounding: If not carefully designed, there's a risk that effects from one factor might be mistakenly attributed to another, especially in higher-order factorial designs.
  • Resource Intensive: While factorial designs can be efficient, they can also become resource-intensive as the number of conditions grows.

The factorial design stands out as an essential tool for researchers aiming to delve deep into the intricacies of multiple factors and their interactions. While it requires meticulous planning and interpretation, its capacity to provide a holistic understanding of complex scenarios renders it invaluable in experimental research.

Matched pair design

A Matched Pair Design, also known simply as a paired design, is an experimental setup where participants are grouped into pairs based on one or more matching criteria, often a specific characteristic or trait. Once matched, one member of each pair is subjected to one condition while the other experiences a different condition or control. This design is particularly powerful when comparing just two conditions, as it reduces the variability between subjects.

As we explore the advantages of this design, it becomes evident why it's often the methodology of choice for certain investigative contexts:

  • Control Over Variability: By matching participants based on certain criteria, this design controls for variability due to those criteria, thereby increasing the experiment's sensitivity and reducing error.
  • Efficiency: With a paired approach, fewer subjects may be required compared to completely randomized designs, potentially making the study more time and resource-efficient.
  • Direct Comparisons: The design facilitates direct comparisons between conditions, as each pair acts as its own control.

As with any research methodology, the Matched Pair Design, despite its distinct advantages, comes with inherent limitations and critical considerations:

  • Matching Complexity: The process of matching participants can be complicated, demanding meticulous planning and potentially excluding subjects who don't fit pairing criteria.
  • Not Suitable for Multiple Conditions: This design is most effective when comparing two conditions. When there are more than two conditions to compare, other designs might be more appropriate.
  • Potential Dependency Issues: Since participants are paired, statistical analyses must account for potential dependencies between paired observations.

The Matched Pair Design stands as a great tool for experiments where controlling for specific characteristics is crucial. Its emphasis on paired precision can lead to more reliable results, but its effective implementation requires careful consideration of the matching criteria and statistical analyses. As with all designs, understanding its nuances is key to leveraging its strengths and mitigating potential challenges.
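Because each pair contributes one observation per condition, the natural analysis works on within-pair differences. A minimal sketch, assuming hypothetical outcome scores aligned by pair, uses SciPy's paired t-test:

```python
from scipy.stats import ttest_rel

# Hypothetical outcome scores; index i in each list is the same matched pair.
treatment_scores = [12.1, 14.3, 11.8, 15.0, 13.2, 12.9]
control_scores = [11.0, 13.1, 11.9, 13.5, 12.0, 12.2]

# ttest_rel tests the within-pair differences, respecting the dependency
# between matched observations noted above.
t_stat, p_value = ttest_rel(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```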

Covariate design

A Covariate Design, also known as Analysis of Covariance (ANCOVA), is an experimental approach wherein the main effects of certain independent variables, as well as the effect of one or more covariates, are considered. Covariates are typically variables that are not of primary interest to the researcher but may influence the outcome variable. By including these covariates in the analysis, researchers can control for their effect, providing a clearer picture of the relationship between the primary independent variables and the outcome.

While many designs aim for clarity by isolating variables, the Covariate Design embraces and controls for the intricacies, presenting a series of compelling advantages. As we unpack these benefits, the appeal of incorporating covariates into experimental research becomes increasingly evident:

  • Increased Precision: By controlling for covariates, this design can lead to more precise estimates of the main effects of interest.
  • Efficiency: Including covariates can help explain more of the variability in the outcome, potentially leading to more statistically powerful results with smaller sample sizes.
  • Flexibility: The design offers the flexibility to account for and control multiple extraneous factors, allowing for more comprehensive analyses.

Every research approach, no matter how robust, comes with its own set of challenges and nuances. The Covariate Design is no exception to this rule:

  • Assumption Testing: Covariate Design requires certain assumptions to be met, such as linearity and homogeneity of regression slopes, which, if violated, can lead to misleading results.
  • Complexity: Incorporating covariates adds complexity to the experimental setup and the subsequent statistical analysis.
  • Risk of Overadjustment: If not chosen judiciously, covariates can lead to overadjustment, potentially masking true effects or leading to spurious findings.

The Covariate Design stands out for its ability to refine experimental results by accounting for potential confounding factors. This heightened precision, however, demands a keen understanding of the design's assumptions and the intricacies involved in its implementation. It serves as a powerful option in the researcher's arsenal, provided its complexities are navigated with knowledge and care.
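In practice, ANCOVA is commonly fit as a linear model with the group indicator and the covariate as predictors. A minimal sketch with invented data, using the statsmodels formula interface, might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a two-group study with a baseline covariate.
df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "baseline": [10, 12, 9, 11, 13, 10, 12, 9, 11, 13],
    "outcome": [15, 18, 13, 16, 19, 11, 13, 10, 12, 14],
})

# The group effect is estimated while adjusting for the baseline covariate.
model = smf.ols("outcome ~ C(group) + baseline", data=df).fit()
print(model.params)  # adjusted group difference and covariate slope
```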

Designing an experiment requires careful planning, an understanding of the underlying scientific principles, and a keen attention to detail. The essence of a well-designed experiment lies in ensuring both the integrity of the research and the validity of the results it yields. The experimental design acts as the backbone of the research, laying the foundation upon which meaningful conclusions can be drawn. Given the importance of this phase, it's paramount for researchers to approach it methodically. To assist in this experimental setup, here's a step-by-step guide to help you navigate this crucial task with precision and clarity.

  • Identify the Research Question or Hypothesis: Before delving into the experimental process, it's crucial to have a clear understanding of what you're trying to investigate. This begins with defining a specific research question or formulating a hypothesis that predicts the outcome of your study. A well-defined research question or hypothesis serves as the foundation for the entire experimental process.
  • Choose the Appropriate Experimental Design: Depending on the nature of your research question and the specifics of your study, you'll need to choose the most suitable experimental design. Whether it's a Completely Randomized Design, a Randomized Block Design, or any other setup, your choice will influence how you conduct the experiment and analyze the data.
  • Select the Subjects/Participants: Determine who or what will be the subjects of your study. This could range from human participants to animal models or even plants, depending on your field of study. It's vital to ensure that the selected subjects are representative of the larger population you aim to generalize to.
  • Allocate Subjects to Different Groups: Once you've chosen your participants, you'll need to decide how to allocate them to different experimental groups. This could involve random assignment or other methodologies, ensuring that each group is comparable and that the effects of confounding variables are minimized.
  • Implement the Experiment and Gather Data: With everything in place, conduct the experiment according to your chosen design. This involves exposing each group to the relevant conditions and then gathering data based on the outcomes you're measuring.
  • Analyze the Data: Once you've collected your data, it's time to dive into the numbers. Using statistical tools and techniques, analyze the data to determine whether there are significant differences between your groups, and if your hypothesis is supported.
  • Interpret the Results and Draw Conclusions: Data analysis will provide you with statistical outcomes, but it's up to you to interpret what these numbers mean in the context of your research question. Draw conclusions based on your findings, and consider their implications for your field and future research endeavors.

By following these steps, you can ensure a structured and systematic approach to your experimental research, paving the way for insightful and valid results.

Confounding variables: external factors that might influence the outcome

One of the most common challenges faced in experimental design is the presence of confounding variables. These are external factors that unintentionally vary along with the factor you are investigating, potentially influencing the outcome of the experiment. The danger of confounding variables lies in their ability to provide alternative explanations for any observed effect, thereby muddying the waters of your results.

For instance, if you were investigating the effect of a new drug on blood pressure and failed to control for factors like caffeine intake or stress levels, you might mistakenly attribute changes in blood pressure to the drug when they were actually caused by these other uncontrolled factors.

Properly identifying and controlling for confounding variables is essential. Failure to do so can lead to false conclusions and misinterpretations of data. Addressing them either through the experimental design itself, like by using randomization or matched groups, or in the analysis phase, such as through statistical controls, ensures that the observed effects can be confidently attributed to the variable or condition being studied rather than to extraneous influences.
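The blood-pressure example above can be made concrete with a small simulation: in the sketch below the drug has no true effect, yet a naive group comparison suggests one, because caffeine (an invented confounder here) drives both drug-taking and blood pressure.

```python
import random
import statistics

random.seed(3)
records = []
for _ in range(1000):
    caffeine = random.random()                     # daily intake, scaled 0-1
    on_drug = random.random() < caffeine           # heavier users take the drug more
    bp = 120 + 10 * caffeine + random.gauss(0, 2)  # the drug itself does nothing
    records.append((on_drug, bp))

drug_bp = [bp for on_drug, bp in records if on_drug]
no_drug_bp = [bp for on_drug, bp in records if not on_drug]

# The uncontrolled comparison shows a "drug effect" that is entirely caffeine.
print(f"Drug group mean BP:    {statistics.mean(drug_bp):.1f}")
print(f"No-drug group mean BP: {statistics.mean(no_drug_bp):.1f}")
```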

External validity: making sure results can be generalized to broader contexts

A paramount challenge in experimental design is guaranteeing external validity. This concept refers to the degree to which the findings of a study can be generalized to settings, populations, times, and measures different from those specifically used in the study.

The dilemma often arises in highly controlled environments, such as laboratories. While these settings allow for precise conditions and minimized confounding variables, they might not always reflect real-world scenarios. For instance, a study might find a specific teaching method effective in a quiet, one-on-one setting. However, if that same method doesn't perform as well in a busy classroom with 30 students, the study's external validity becomes questionable.

For researchers, the challenge is to strike a balance. While controlling for potential confounding variables is paramount, it's equally crucial to ensure the experimental conditions maintain a certain degree of real-world relevance. To enhance external validity, researchers may use strategies such as diversifying participant pools, varying experimental conditions, or even conducting field experiments. Regardless of the approach, the ultimate goal remains: to ensure the experiment's findings can be meaningfully applied in broader, real-world contexts.

Ethical considerations: ensuring the safety and rights of participants

Any experimental research undertaking must prioritize the well-being, dignity, and rights of participants. Upholding these values not only ensures the moral integrity of any study but is also crucial to the reliability and validity of the research.

All participants, whether human or animal, are entitled to respect and their safety should never be placed in jeopardy. For human subjects, it's imperative that they are adequately briefed about the research aims, potential risks, and benefits. This highlights the significance of informed consent, a process where participants acknowledge their comprehension of the study and willingly agree to participate.

Beyond the initiation of the experiment, ethical considerations continue to play a pivotal role. It's vital to maintain the privacy and confidentiality of the participants, ensuring that the collected data doesn't lead to harm or stigmatization. Extra caution is needed when experiments involve vulnerable groups, such as children or the elderly. Furthermore, researchers should be equipped to offer necessary support or point towards professional help should participants experience distress because of the experimental procedures. It's worth noting that many research institutions have ethical review boards to ensure all experiments uphold these principles, fortifying the credibility and authenticity of the research process.

The Stanford Prison Experiment (1971)

The Stanford Prison Experiment, conducted in 1971 by psychologist Philip Zimbardo at Stanford University, stands as one of the most infamous studies in the annals of psychology. The primary objective of the experiment was to investigate the inherent psychological mechanisms and behaviors that emerge when individuals are placed in positions of power and subordination. To this end, volunteer participants were randomly assigned to roles of either prison guards or inmates in a simulated prison environment.

Zimbardo's design sought to create an immersive environment, ensuring that participants genuinely felt the dynamics of their assigned roles. The mock prison was set up in the basement of Stanford's psychology building, complete with cells and guard quarters. Participants assigned to the role of guards were provided with uniforms, batons, and mirrored sunglasses to prevent eye contact. Those assigned as prisoners wore smocks and stocking caps, emphasizing their status. To enhance the realism, the "prisoners" were subjected to unannounced "arrests" at their homes by the local police department. Throughout the experiment, no physical violence was permitted; however, the guards were allowed to establish their own rules to maintain order and ensure the prisoners attended the daily counts.

Scheduled to run for two weeks, the experiment was terminated after only six days due to the extreme behavioral transformations observed. The guards rapidly became authoritarian, implementing degrading and abusive strategies to maintain control. In contrast, the prisoners exhibited signs of intense emotional distress, and some even demonstrated symptoms of depression. Zimbardo himself became deeply involved, initially overlooking the adverse effects on the participants. The study's findings highlighted the profound impact that situational dynamics and perceived roles can have on behavior. While it was severely criticized for ethical concerns, it underscored the depths to which human behavior could conform to assigned roles, leading to significant discussions on the ethics of research and the power dynamics inherent in institutional settings.

The Stanford Prison Experiment is particularly relevant to experimental design for these reasons:

  • Control vs. Realism: One of the challenging dilemmas in experimental design is striking a balance between controlling variables and maintaining ecological validity (how experimental conditions mimic real-world situations). Zimbardo's study attempted to create a highly controlled environment with the mock prison but also sought to maintain a sense of realism by arresting participants at their homes and immersing them in their roles. The consequences of this design, however, were unforeseen and extreme behavioral transformations.
  • Ethical Considerations: A cornerstone of experimental design involves ensuring the safety, rights, and well-being of participants. The Stanford Prison Experiment is often cited as an example of what can go wrong when these principles are not rigorously adhered to. The psychological distress faced by participants wasn't anticipated in the original design and wasn't adequately addressed during its execution. This oversight emphasizes the critical importance of periodic assessment of participants' well-being and the flexibility to adapt or terminate the study if adverse effects arise.
  • Role of the Researcher: Zimbardo's involvement and the manner in which he became part of the experiment highlight the potential biases and impacts a researcher can have on an experiment's outcome. In experimental design, it's crucial to consider the researcher's role and minimize any potential interference or influence they might have on the study's results.
  • Interpretation of Results: The aftermath of the experiment brought forth critical discussions on how results are interpreted and presented. It emphasized the importance of considering external influences, participant expectations, and other confounding variables when deriving conclusions from experimental data.

In essence, the Stanford Prison Experiment serves as a cautionary tale in experimental design. It underscores the importance of ethical considerations, participant safety, the potential pitfalls of high realism without safeguards, and the unintended consequences that can emerge even in well-planned experiments.

Meselson-Stahl Experiment (1958)

The Meselson-Stahl Experiment, conducted in 1958 by biologists Matthew Meselson and Franklin Stahl, holds a significant place in molecular biology. The duo set out to determine the mechanism by which DNA replicates, aiming to understand if it follows a conservative, semi-conservative, or dispersive model.

Utilizing Escherichia coli (E. coli) bacteria, Meselson and Stahl grew cultures in a medium containing a heavy isotope of nitrogen, ¹⁵N, allowing the bacteria's DNA to incorporate this heavy isotope. Subsequently, they transferred the bacteria to a medium with the more common ¹⁴N isotope and allowed it to replicate. By using ultracentrifugation, they separated DNA based on density, expecting distinct bands on a gradient depending on the replication model.

The observed patterns over successive bacterial generations revealed a single band that shifted from the heavy to light position, supporting the semi-conservative replication model. This meant that during DNA replication, each of the two strands of a DNA molecule serves as a template for a new strand, leading to two identical daughter molecules. The experiment's elegant design and conclusive results provided pivotal evidence for the molecular mechanism of DNA replication, reshaping our understanding of genetic continuity.

The Meselson-Stahl Experiment is particularly relevant to experimental design for these reasons:

  • Innovative Techniques: The use of isotopic labeling and density gradient ultracentrifugation was pioneering, showcasing the importance of utilizing and even developing novel techniques tailored to address specific scientific questions.
  • Controlled Variables: By methodically controlling the growth environment and the nitrogen sources, Meselson and Stahl ensured that any observed differences in DNA density were due to the replication mechanism itself, and not extraneous factors.
  • Direct Comparison: The experimental design allowed for direct comparison between the expected results of different replication models and the actual observed outcomes, facilitating a clear and decisive conclusion.
  • Clarity in Hypothesis: The researchers had clear expectations for the results of each potential replication model, which helped in accurately interpreting the outcomes.

Reflecting on the Meselson-Stahl Experiment, it serves as an exemplar in experimental biology. Their meticulous approach, combined with innovative techniques, answered a fundamental biological question with clarity. This experiment not only resolved a significant debate in molecular biology but also showcased the power of well-designed experimental methods in revealing nature's intricate processes.

The Hawthorne Studies (1920s-1930s)

The Hawthorne Studies, conducted between the 1920s and 1930s at Western Electric's Hawthorne plant in Chicago, represent a pivotal shift in organizational and industrial psychology. Initially intended to study the relationship between lighting conditions and worker productivity, the research evolved into a broader investigation of the various factors influencing worker output and morale. These studies have since shaped our understanding of human relations and the socio-psychological aspects of the workplace.

The Hawthorne Studies comprised several experiments, but the most notable were the "relay assembly tests" and the "bank wiring room studies." In the relay assembly tests, researchers made various manipulations to the working conditions of a small group of female workers, such as altering light levels, giving rest breaks, and changing the length of the workday. The intent was to identify which conditions led to the highest levels of productivity. Conversely, the bank wiring room studies were observational in nature. Here, the researchers aimed to understand the group dynamics and social structures that emerged among male workers, without any experimental manipulations.

Surprisingly, in the relay assembly tests, almost every change—whether it was an improvement or a return to original conditions—led to increased worker productivity. Even when conditions were reverted to their initial state, worker output remained higher than before. This puzzling phenomenon led researchers to speculate that the mere act of being observed and the knowledge that one's performance was being monitored led to increased effort and productivity, a phenomenon now referred to as the Hawthorne Effect. The bank wiring room studies, on the other hand, shed light on how informal group norms and social relations could influence individual productivity, often more significantly than monetary incentives.

These studies challenged the then-dominant scientific management approach, which viewed workers primarily as mechanical entities whose productivity could be optimized through physical and environmental adjustments. Instead, the Hawthorne Studies highlighted the importance of psychological and social factors in the workplace, laying the foundation for the human relations movement in organizational management.

The Hawthorne Studies are particularly relevant to experimental design for these reasons:

  • Observer Effect: The Hawthorne Studies introduced the idea that the mere act of observation could alter participants' behavior. This has significant implications for experimental design, emphasizing the need to account for and minimize observer-induced changes in behavior.
  • Complexity of Human Behavior: While the initial focus was on physical conditions (like lighting), the results demonstrated that human behavior and performance are influenced by a myriad of interrelated factors. This underscores the importance of considering psychological, social, and environmental variables when designing experiments.
  • Unintended Outcomes: The unintended discovery of the Hawthorne Effect exemplifies that experimental outcomes can sometimes diverge from initial expectations. Researchers should remain open to such unexpected findings, as they can lead to new insights and directions.
  • Evolution of Experimental Focus: The shift from purely environmental manipulations to observational studies in the Hawthorne research highlights the flexibility required in experimental design. As new findings emerge, it's crucial for researchers to adapt their methodologies to better address evolving research questions.

In summary, the Hawthorne Studies serve as a testament to the evolving nature of experimental research and the profound effects that observation, social dynamics, and psychological factors can have on outcomes. They highlight the importance of adaptability, holistic understanding, and the acknowledgment of unexpected results in the realm of experimental design.

Michelson-Morley Experiment (1887)

The Michelson-Morley Experiment, conducted in 1887 by physicists Albert A. Michelson and Edward W. Morley, is considered one of the foundational experiments in the world of physics. The primary aim was to detect the relative motion of matter through the hypothetical luminiferous aether, a medium through which light was believed to propagate.

Michelson and Morley designed an apparatus known as the interferometer. This device split a beam of light so that it traveled in two perpendicular directions. After reflecting off mirrors, the two beams would recombine, and any interference patterns observed would indicate differences in their travel times. If the aether wind existed, the Earth's motion through the aether would cause such an interference pattern. The experiment was conducted at different times of the year, considering Earth's motion around the sun might influence the results.

Contrary to expectations, the experiment found no significant difference in the speed of light regardless of the direction of measurement or the time of year. This null result was groundbreaking. It effectively disproved the existence of the luminiferous aether and paved the way for the theory of relativity introduced by Albert Einstein in 1905, which fundamentally changed our understanding of time and space.

The Michelson-Morley Experiment is particularly relevant to experimental design for these reasons:

  • Methodological Rigor: The precision and care with which the experiment was designed and conducted set a new standard for experimental physics.
  • Dealing with Null Results: Rather than being discarded, the absence of the expected result became the main discovery, emphasizing the importance of unexpected outcomes in scientific research.
  • Impact on Theoretical Foundations: The experiment's findings had profound implications, showing that experiments can challenge and even overturn prevailing theoretical frameworks.
  • Iterative Testing: The experiment was not just a one-off. Its repeated tests at different times underscore the value of replication and varied conditions in experimental design.

Through their meticulous approach and openness to unexpected results, Michelson and Morley didn't merely answer a question; they reshaped the very framework of understanding within physics. Their work underscores the essence of scientific inquiry: that true discovery often lies not just in confirming our hypotheses, but in uncovering the deeper truths that challenge our prevailing notions. As researchers and scientists continue to push the boundaries of knowledge, the lessons from this experiment serve as a beacon, reminding us of the potential that rigorous, well-designed experiments have in illuminating the mysteries of our universe.

Borlaug's Green Revolution (1940s-1960s)

The Green Revolution, spearheaded by agronomist Norman Borlaug between the 1940s and 1960s, represents a transformative period in agricultural history. Borlaug's work focused on addressing the pressing food shortages in developing countries. By implementing advanced breeding techniques, he aimed to produce high-yield, disease-resistant, and dwarf wheat varieties that would boost agricultural productivity substantially.

To achieve this, Borlaug and his team undertook extensive crossbreeding of wheat varieties. They employed shuttle breeding—a technique where crops are grown in two distinct locations with different planting seasons. This not only accelerated the breeding process but also ensured the new varieties were adaptable to varied conditions. Another innovation was to develop strains of wheat that were "dwarf," ensuring that the plants, when loaded with grains, didn't become too tall and topple over—a common problem with high-yielding varieties.

The resulting high-yield, semi-dwarf, disease-resistant wheat varieties revolutionized global agriculture. Countries like India and Pakistan, which were on the brink of mass famine, witnessed a dramatic increase in wheat production. This Green Revolution saved millions from starvation, earned Borlaug the Nobel Peace Prize in 1970, and altered the course of agricultural research and policy worldwide.

The Green Revolution is particularly relevant to experimental design for these reasons:

  • Iterative Testing: Borlaug's approach highlighted the significance of continual testing and refining. By iterating breeding processes, he was able to perfect the wheat varieties more efficiently.
  • Adaptability: The use of shuttle breeding showcased the importance of ensuring that experimental designs account for diverse real-world conditions, enhancing the global applicability of results.
  • Anticipating Challenges: By focusing on dwarf varieties, Borlaug preempted potential problems, demonstrating that foresight in experimental design can lead to more effective solutions.
  • Scalability: The work wasn't just about creating a solution, but one that could be scaled up to meet global demands, emphasizing the necessity of scalability considerations in design.

The Green Revolution exemplifies the profound impact well-designed experiments can have on society. Borlaug's strategies, which combined foresight with rigorous testing, reshaped global agriculture, underscoring the potential of scientific endeavors to address pressing global challenges when thoughtfully and innovatively approached.

Experimental design has undergone a transformation over the years. Modern technology plays an indispensable role in refining and streamlining experimental processes. Gone are the days when researchers solely depended on manual calculations, paper-based data recording, and rudimentary statistical tools. Today, advanced software and tools provide accurate, quick, and efficient means to design experiments, collect data, perform statistical analysis, and interpret results.

Several tools and software are at the forefront of this technological shift in experimental design:

  • Minitab: A popular statistical software offering tools for various experimental designs including factorials, response surface methodologies, and optimization techniques.
  • R: An open-source programming language and environment tailored for statistical computing and graphics. Its extensibility and comprehensive suite of statistical techniques make it a favorite among researchers.
  • JMP: Developed by SAS, it is known for its interactive and dynamic graphics. It provides a powerful suite for design of experiments and statistical modeling.
  • Design-Expert: A software dedicated to experimental design and product optimization. It's particularly useful for response surface methods.
  • SPSS: A software package used for statistical analysis, it provides advanced statistics, machine learning algorithms, and text analysis for researchers of all levels.
  • Python (with libraries like SciPy and statsmodels): Python is a versatile programming language and, when combined with specific libraries, becomes a potent tool for statistical analysis and experimental design.

One of the primary advantages of using these software tools is their capability for advanced statistical analysis. They enable researchers to perform complex computations within seconds, something that would take hours or even days manually. Furthermore, the visual representation features in these tools assist in understanding intricate data patterns, correlations, and other crucial aspects of data. By aiding in statistical analysis and interpretation, software tools eliminate human errors, provide insights that might be overlooked in manual analysis, and significantly speed up the research process, allowing scientists and researchers to focus on drawing accurate conclusions and making informed decisions based on the data.

The world of experimental research is continually evolving, with each new development promising to reshape how we approach, conduct, and interpret experiments. The central tenets of experimental design—control, randomization, replication—though fundamental, are being complemented by sophisticated techniques that ensure richer insights and more robust conclusions.

One of the most transformative forces in experimental design's future landscape is the surge of artificial intelligence (AI) and machine learning (ML) technologies . Historically, the design and analysis of experiments have depended on human expertise for selecting factors to study, setting the levels of these factors, and deciding on the number and order of experimental runs. With AI and ML's advent, many of these tasks can be automated, leading to optimized experimental designs that might be too complex for manual formulation. For instance, machine learning algorithms can predict potential outcomes based on vast datasets, guiding researchers in choosing the most promising experimental conditions.

Moreover, AI-driven experimental platforms can dynamically adapt during the course of the experiment, tweaking conditions based on real-time results, thereby leading to adaptive experimental designs. These adaptive designs promise to be more efficient, as they can identify and focus on the most relevant regions of the experimental space, often requiring fewer experimental runs than traditional designs. By harnessing the power of AI and ML, researchers can uncover complex interactions and nonlinearities in their data that might have otherwise gone unnoticed.

Furthermore, the convergence of AI and experimental design holds tremendous potential for areas like drug development and personalized medicine. By analyzing vast genetic datasets, AI algorithms can help design experiments that target very specific biological pathways or predict individual patients' responses to particular treatments. Such personalized experimental designs could dramatically reduce the time and cost of bringing new treatments to market and ensuring that they are effective for the intended patient populations.

In conclusion, the future of experimental design is bright, marked by rapid advancements and a fusion of traditional methods with cutting-edge technologies. As AI and machine learning continue to permeate this field, we can expect experimental research to become more efficient, accurate, and personalized, heralding a new era of discovery and innovation.

In the ever-evolving landscape of research and innovation, experimental design remains a cornerstone, guiding scholars and professionals towards meaningful insights and discoveries. As we reflect on its past and envision its future, it's clear that experimental design will continue to play an instrumental role in shaping the trajectory of numerous disciplines. It will be instrumental in harnessing the full potential of emerging technologies, driving forward scientific understanding, and solving some of the most pressing challenges of our time. With a rich history behind it and a promising horizon ahead, experimental design stands as a testament to the human spirit's quest for knowledge, understanding, and innovation.


Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas. Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Quasi-experimental design vs. experimental design


There are several common differences between true and quasi-experimental designs.

  • Assignment to treatment: In a true experiment, the researcher randomly assigns subjects to control and treatment groups. In a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment: In a true experiment, the researcher usually designs and administers the treatment. In a quasi-experiment, the researcher often does not control the treatment, but instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups: A true experiment requires the use of control and treatment groups. In a quasi-experiment, control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

Suppose you want to test whether a new psychotherapy reduces symptoms more effectively than the standard course of treatment at a mental health clinic. However, for ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

Perhaps some therapists at the clinic have already adopted the new therapy while others continue to offer the standard course of treatment. You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.



Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In nonequivalent group design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment , the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups .

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, imagine a study of whether attending a selective school improves later academic outcomes, where admission is granted only to students who score above a cutoff on an entrance exam. Since the exact cutoff score is arbitrary, the students near the threshold—those who just barely pass the exam and those who fail by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any outcome differences between these two groups must come from the school they attended.
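A bare-bones sketch of the comparison, with invented exam scores and outcomes and an arbitrary cutoff of 60, looks like this:

```python
import statistics

# Hypothetical (exam_score, later_outcome) records; admission cutoff is 60.
students = [(52, 61.0), (55, 63.5), (58, 64.0), (59, 65.2),
            (60, 70.1), (61, 71.0), (63, 69.5), (64, 72.3)]
cutoff, bandwidth = 60, 5

below = [y for x, y in students if cutoff - bandwidth <= x < cutoff]
above = [y for x, y in students if cutoff <= x < cutoff + bandwidth]

# The jump in outcomes at the threshold is a rough estimate of the effect.
print(f"Just below cutoff: {statistics.mean(below):.1f}")
print(f"Just above cutoff: {statistics.mean(above):.1f}")
```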

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though assignment in some natural experiments is random or random-like, they are not considered true experiments because they are observational in nature: the researcher does not control the assignment.

Although the researchers have no control over the independent variable, they can exploit this event after the fact to study the effect of the treatment.

The Oregon Health Study, described below, is one example: because the state could not afford to cover everyone it deemed eligible for the program, it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity, you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

Advantages:

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.

Disadvantages:

  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete or difficult to access.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment.

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity, as they can use real-world interventions instead of artificial laboratory settings.



14.3 Quasi-experimental designs

Learning Objectives

Learners will be able to…

  • Describe a quasi-experimental design in social work research
  • Understand the different types of quasi-experimental designs
  • Determine what kinds of research questions quasi-experimental designs are suited for
  • Discuss advantages and disadvantages of quasi-experimental designs

Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don't do as good a job of mitigating threats to internal validity, they still allow us to establish temporality, which is a criterion for establishing nomothetic causality. The prefix quasi means "resembling," so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper attention, quasi-experiments can still provide rigorous and useful results.

The primary difference between quasi-experimental research and true experimental research is that quasi-experimental research does not involve random assignment to control and experimental groups. Instead, we talk about comparison groups in quasi-experimental research. As a result, these types of experiments don't control for extraneous variables as well as true experiments do, and there are larger threats to internal validity in quasi-experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. Realistically, our example of the CBT-social anxiety project is likely to be a quasi-experiment, based on the resources and participant pool we're likely to have available. There are different kinds of quasi-experiments, and we will discuss the main types below: nonequivalent comparison group designs, static-group designs, ex post facto comparison group designs, and time series designs.

Nonequivalent comparison group design

This type of design looks very similar to the classical experimental design that we discussed in section 14.2. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. Researchers using this design will try to select a comparison group that's as similar to the experimental group as possible on the factors relevant to the study.

A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 14.6).

[Figure 14.6: Diagram of the nonequivalent comparison group design]

This kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.

What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)

Static-group design

Another type of quasi-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at an agency.

Ex post facto comparison group design

Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.

In most clinical agency environments, social workers conduct both initial and exit assessments, so pretest and post-test measures are usually already available. We also typically collect demographic information about our clients, which allows us to attempt some form of matching to construct comparison and experimental groups.

In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar so we can control for these extraneous variables.

Propensity Score Matching

There are more advanced ways to match participants in the experimental and comparison groups based on statistical analyses. Researchers using a quasi-experimental design may consider using a matching algorithm to select people for the experimental and comparison groups based on their similarity on key variables (or "covariates"). This allows the assignment to be considered "as good as random" after conditioning on the covariates.

Propensity Score Matching (PSM; Rosenbaum & Rubin, 1983) [1] is one such algorithm, in which the probability of being assigned to the treatment group is modeled as a function of several covariates using logistic regression. However, PSM requires a relatively large initial sample, because unmatched cases are discarded during the statistical matching process and the final sample shrinks. This requirement means PSM may not be feasible for all projects.
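As an illustration only, here is a minimal PSM sketch in Python using scikit-learn's logistic regression. The simulated data, the covariate names (age, baseline_anxiety), and the greedy one-to-one matching rule are all assumptions made for the example; real applications usually add refinements such as a caliper (a maximum allowable score distance) and balance diagnostics.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulate a hypothetical dataset: two covariates and a non-random
# treatment assignment that depends on them.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.normal(35, 10, n),
    "baseline_anxiety": rng.normal(50, 15, n),
})
logit = -4 + 0.05 * df["age"] + 0.03 * df["baseline_anxiety"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: model the probability of treatment as a function of the covariates.
X = df[["age", "baseline_anxiety"]]
model = LogisticRegression(max_iter=1000).fit(X, df["treated"])
df["pscore"] = model.predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score,
# without replacement. Unmatched cases are dropped, which is why PSM
# reduces the final sample size.
treated = df[df["treated"]]
control = df[~df["treated"]]
pairs = []
for idx, row in treated.iterrows():
    if control.empty:
        break
    nearest = (control["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((idx, nearest))
    control = control.drop(nearest)

print(f"{len(pairs)} matched pairs retained from {n} original cases")
```

After matching, outcomes are compared across the matched pairs as if assignment had been random, conditional on the covariates.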

Time series design

Another type of quasi-experimental design is a time series design. Unlike other quasi-experimental designs, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 14.7). A proper time series design includes at least three measurement points before the intervention and at least three after it. While there are a few types of time series designs, we're going to focus on the most common: the interrupted time series design.

[Figure 14.7: A time series design, showing measurements taken at regular intervals before and after the intervention.]

But why use this method? Here's an example. Let's think about elementary student behavior throughout the school year. As any parent or teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. Around those times of year, then, there may be more reports of disruptive behavior in classrooms. What if we took our one and only measurement in mid-December? We'd likely see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement falls at a time of year when students are in a less excitable frame of mind. When we instead take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.

Suppose we want to test the effect of extended recess time on reports of disruptive behavior in elementary classrooms. When students come back from winter break, the school extends recess by 10 minutes each day (the intervention), and we resume tracking the monthly reports of disruptive behavior. These post-intervention reports are subject to the same seasonal fluctuations as the pre-intervention reports, so we once again take multiple measurements over time to control for those fluctuations.

This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.
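One common way to analyze an interrupted time series is segmented regression, which separates the pre-intervention trend from the immediate level change at the intervention and any change in slope afterward. The sketch below uses invented monthly report counts purely for illustration:

```python
import numpy as np

# Hypothetical monthly counts of disruptive-behavior reports: six months
# before and six months after the extended-recess intervention.
reports = np.array([42, 45, 44, 47, 46, 48,   # pre-intervention
                    41, 39, 40, 38, 37, 36])  # post-intervention
month = np.arange(len(reports), dtype=float)
post = (month >= 6).astype(float)             # 1 once the intervention starts
time_since = np.where(post == 1, month - 6, 0.0)

# Segmented regression: reports = b0 + b1*month + b2*post + b3*time_since
X = np.column_stack([np.ones_like(month), month, post, time_since])
(b0, b1, b2, b3), *_ = np.linalg.lstsq(X, reports, rcond=None)

print(f"pre-intervention slope:       {b1:+.2f} reports/month")
print(f"level change at intervention: {b2:+.2f} reports")
print(f"slope change after:           {b3:+.2f} reports/month")
```

A stable pre-intervention trend followed by a drop in level or slope is exactly the kind of "stable condition that changes after the treatment" described above.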

Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.

Key Takeaways

  • Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
  • In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
  • Nonequivalent comparison group design is nearly identical to the pretest/post-test experimental design, but participants are not randomly assigned to the experimental and comparison groups. As a result, this design provides somewhat less robust evidence for causality.
  • Time series designs have no comparison group; instead, they compare participants' condition before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.

Categories that we use that are determined ahead of time, based on existing literature/knowledge.

a summary of the main points of an article

whether you can actually reach people or documents needed to complete your project

The idea that researchers are responsible for conducting research that is ethical, honest, and following accepted research practices.

In a measure, when people say yes to whatever the researcher asks, even when doing so contradicts previous answers.

research that is conducted for the purpose of creating social change

Research methodologies that center and affirm African cultures, knowledge, beliefs, and values.

In nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups.

what a researcher hopes to accomplish with their study

A type of reliability in which multiple forms of a tool yield the same results from the same participants.

the process of writing notes on an article

The identity of the person providing data cannot be connected to the data provided at any time in the research process, by anyone.

A statistical method to examine how a dependent variable changes as the value of a categorical independent variable changes

The potential for qualitative research findings to be applicable to other situations or with other people outside of the research study itself.

a statement about what you think is true backed up by evidence and critical thinking

Artifacts are a source of data for qualitative researcher that exist in some form already, without the research having to create it. They represent a very broad category that can range from print media, to clothing, to tools, to art, to live performances.

Comparable to informed consent for BUT this is for someone (e.g. child, teen, or someone with a cognitive impairment) who can’t legally give full consent but can determine if they are willing to participant. May or may not require researchers to collect an assent form, this could also be done verbally.

The characteristics we assume about our data, like that it is normally distributed, that makes it suitable for certain types of statistical tests

The characteristics that make up a variable

An audit trail is a system of documenting in qualitative research analysis that allows you to link your final results with your original raw data. Using an audit trail, an independent researcher should be able to start with your results and trace the research process backwards to the raw data. This helps to strengthen the trustworthiness of the research.

For the purposes of research, authenticity means that we do not misrepresent ourselves, our interests or our research; we are genuine in our interactions with participants and other colleagues.

also called convenience sampling; researcher gathers data from whatever cases happen to be convenient or available

Axial coding is phase of qualitative analysis in which the research will revisit the open codes and identify connections between codes, thereby beginning to group codes that share a relationship.

assumptions about the role of values in research

The stage in single-subjects design in which a baseline level or pattern of the dependent variable is established

One of the three values indicated in the Belmont report. An obligation to protect people from harm by maximizing benefits and minimizing risks.

Biases are conscious or subconscious preferences that lead us to favor some things over others.

A distribution with two distinct peaks when represented on a histogram.

A rating scale in which a respondent selects their alignment of choices between two opposite poles such as disagreement and agreement (e.g., strongly disagree, disagree, agree, strongly agree).

a group of statistical techniques that examines the relationship between two variables

A Boolean search is a structured system that uses modifying terms (AND, OR, NOT) and symbols such as quotation marks and asterisks to modify, broaden, or restrict the search results

A qualitative research technique where the researcher attempts to capture and track their subjective assumptions during the research process. * note, there are other definitions of bracketing, but this is the most widely used.

An acronym, BRUSO for writing questions in survey research. The letters stand for: “brief,” “relevant,” “unambiguous,” “specific,” and “objective.”

Case studies are a type of qualitative research design that focus on a defined case and gathers data to provide a very rich, full understanding of that case. It usually involves gathering data from multiple different sources to get a well-rounded case description.

variables whose values are organized into mutually exclusive groups but whose numerical values cannot be used in mathematical operations.

the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

A census is a study of every element in a population (as opposed of taking a sample of the population)

a statistical test to determine whether there is a significant relationship between two categorical variables

questions in which the researcher provides all of the response options

a sampling approach that begins by sampling groups (or clusters) of population elements and then selects elements from within those groups

A code is a label that we place on segment of data that seems to represent the main idea of that segment.

A document that we use to keep track of and define the codes that we have identified (or are using) in our qualitative data analysis.

Part of the qualitative data analysis process where we begin to interpret and assign meaning to the data.

When a participant faces undue or excess pressure to participate by either favorable or unfavorable means, this is known as coercion and must be avoided by researchers

predictable flaws in thinking

A type of longitudinal design where participants are selected because of a defining characteristic that the researcher is interested in studying. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest.

Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time.

Someone who has the formal or informal authority to grant permission or access to a particular community.

the group of participants in our study who do not receive the intervention we are researching in experiments without random assignment

measurements of variables based on more than one one indicator

These are software tools that can aid qualitative researchers in managing, organizing and manipulating/analyzing their data.

A mental image that summarizes a set of similar observations, feelings, or ideas

developing clear, concise definitions for the key concepts in a research question

A type of criterion validity. Examines how well a tool provides the same scores as an already existing tool administered at the same point in time.

The different levels of the independent variable in an experimental design.

a range of values in which the true value is likely to be, to provide a more accurate description of their data

For research purposes, confidentiality means that only members of the research team have access potentially identifiable information that could be associated with participant data. According to confidentiality, it is the research team's responsibility to restrict access to this information by other parties, including the public.

observing and analyzing information in a way that agrees with what you already think is true and excludes other alternatives

Conflicting allegiances.

a variable whose influence makes it difficult to understand the relationship between an independent and dependent variable

Consistency is the idea that we use a systematic (and potentially repeatable) process when conducting our research.

a characteristic that does not change in a study

Constant comparison reflects the motion that takes place in some qualitative analysis approaches whereby the researcher moves back and forth between the data and the emerging categories and evolving understanding they have in their results. They are continually checking what they believed to be the results against the raw data they are working with.

"when the construct measured is not identical across cultures or when behaviors that characterize the construct are not identical across cultures" (Meiring et al., 2005, p. 2)

Constructivist research is a qualitative design that seeks to develop a deep understanding of the meaning that people attach to events, experiences, or phenomena.

Conditions that are not directly observable and represent states of being, experiences, and ideas.

Content is the substance of the artifact (e.g. the words, picture, scene). It is what can actually be observed.

An approach to data analysis that seeks to identify patterns, trends, or ideas across qualitative data through processes of coding and categorization.

The extent to which a measure “covers” the construct of interest, i.e., it's comprehensiveness to measure the construct.

Context is the circumstances surrounding an artifact, event, or experience.

unintended influences on respondents’ answers because they are not related to the content of the item but to the context in which the item appears.

Research findings are applicable to the group of people who contributed to the knowledge building and the situation in which it took place.

a visual representation of across-tabulation of categorical variables to demonstrate all the possible occurrences of categories

required courses clinical practitioners must take in order to remain current with licensure

variables whose values are mutually exclusive and can be used in mathematical operations

In research design and statistics, a series of methods that allow researchers to minimize the effect of an extraneous variable on the dependent variable in their project.

the group of participants in our study who do not receive the intervention we are researching in experiments with random assignment

a confounding variable whose effects are accounted for mathematically in quantitative analysis to isolate the relationship between an independent and dependent variable

also called availability sampling; researcher gathers data from whatever cases happen to be convenient or available

a relationship between two variables in which their values change together.

a statistically derived value between -1 and 1 that tells us the magnitude and direction of the relationship between two variables

when the values of two variables change at the same time

In qualitative data, coverage refers to the amount of data that can be categorized or sorted using the code structure that we are using (or have developed) in our study. With qualitative research, our aim is to have good coverage with our code structure.

The extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with.

a theory and practice that critiques the ways in which systems of power shape the creation, distribution, and reception of information

a paradigm in social science research focused on power, inequality, and social change

Statistical measure used to asses the internal consistency of an instrument.

When a researcher collects data only once from participants using a questionnaire

spurious covariance between your independent and dependent variables that is in fact caused by systematic error introduced by culturally insensitive or incompetent research practices

the concept that scores obtained from a measure are similar when employed in different cultural populations

Research that portrays groups of people or communities as flawed, surrounded by problems, or incapable of producing change.

An ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step-by-step, that you plan to run to answer your research question.

A plan that is developed by a researcher, prior to commencing a research project, that details how data will be collected, stored and managed during the research project.

This is the document where you list your variable names, what the variables actually measure or represent, what each of the values of the variable mean if the meaning isn't obvious.

A data matrix is a tool used by researchers to track and organize data and findings during qualitative analysis.

Including data from multiple sources to help enhance your understanding of a topic

a searchable collection of information

A statement at the end of data collection (e.g. at the end of a survey or interview) that generally thanks participants and reminds them what the research was about, what it's purpose is, resources available to them if they need them, and contact information for the researcher if they have questions or concerns.

A decision-rule provides information on how the researcher determines what code should be placed on an item, especially when codes may be similar in nature.

Research methods that reclaim control over indigenous ways of knowing and being.

The act of breaking piece of qualitative data apart during the analysis process to discern meaning and ultimately, the results of the study.

The type of research in which a specific expectation is deduced from a general premise and then tested

An approach to data analysis in which the researchers begins their analysis using a theory to see if their data fits within this theoretical framework (tests the theory).

starts by reading existing theories, then testing hypotheses and revising or confirming the theory

a variable that depends on changes in the independent variable

research that describes or defines a particular phenomenon

A technique for summarizing and presenting data.

Participants are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree.

Having the ability to make decisions for yourself limited

Occurs when two variables move together in the same direction - as one increases, so does the other, or, as one decreases, so does the other

an academic field, like social work

Variables with finite value choices.

The extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

“a planned process that involves consideration of target audiences and the settings in which research findings are to be received and, where appropriate, communicating and interacting with wider policy and…service audiences in ways that will facilitate research uptake in decision-making processes and practice” (Wilson, Petticrew, Calnan, & Natareth, 2010, p. 91)

how you plan to share your research findings

How you plan to share your research findings

the way the scores are distributed across the levels of that variable.

The analysis of documents (or other existing artifacts) as a source of data.

a question that asks more than one thing at a time, making it difficult to respond accurately

A combination of two people or objects

The performance of an intervention under "real-world" conditions that are not closely controlled and ideal

performance of an intervention under ideal and controlled circumstances, such as in a lab or delivered by trained researcher-interventionists

individual units of a population

Emergent design is the idea that some decision in our research design will be dynamic and change as our understanding of the research question evolves as we go through the research process. This is (often) evident in qualitative research, but rare in quantitative research.

in mixed methods research, this refers to the order in which each method is used, either concurrently or sequentially

report the results of a quantitative or qualitative data analysis conducted by the author

information about the social world gathered and analyzed through scientific observation or experimentation

research questions that can be answered by systematically observing the real world

when someone is treated unfairly in their capacity to know something or describe their experience of the world

assumptions about how we come to know what is real and true

A general approach to research that is conscientious of the dynamics of power and control created by the act of research and attempts to actively address these dynamics through the process and outcomes of research.

Often the end result of a phenomological study, this is a description of the lived experience of the phenomenon being studied.

unsuitable research questions which are not answerable by systematic observation of the real world but instead rely on moral or philosophical opinions

Ethnography is a qualitative research design that is used when we are attempting to learn about a culture by observing people in their natural environment.

research that evaluates the outcomes of a policy or program

a process composed of "four equally weighted parts: 1) current client needs and situation, (2) the best relevant research evidence, (3) client values and preferences, and (4) the clinician’s expertise" (Drisko & Grady, 2015, p. 275)

After the fact

characteristics that disqualify a person from being included in a sample

Exempt review is the lowest level of review. Studies that are considered exempt expose participants to the least potential for harm and often involve little participation by human subjects.

Exhaustive categories are options for closed ended questions that allow for every possible response (no one should feel like they can't find the answer for them).

Expanded field notes represents the field notes that we have taken during data collection after we have had time to sit down and add details to them that we were not able to capture immediately at the point of collection.

Expedited review is the middle level of review. Studies considered under expedited review do not have to go before the full IRB board because they expose participants to minimal risk. However, the studies must be thoroughly reviewed by a member of the IRB committee.

an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.

treatment, intervention, or experience that is being tested in an experiment (the independent variable) that is received by the experimental group and not by the control group.

Refers to research that is designed specifically to answer the question of whether there is a causal relationship between two variables.

in experimental design, the group of participants in our study who do receive the intervention we are researching

explains why particular phenomena work in the way that they do; answers “why” questions

conducted during the early stages of a project, usually when a researcher wants to test the feasibility of conducting a more extensive study or if the topic has not been studied in the past

Having an objective person, someone not connected to your study, try to start with your findings and trace them back to your raw data using your audit trail. A tool to help demonstrate rigor in qualitative research.

This is a synonymous term for generalizability - the ability to apply the findings of a study beyond the sample to a broader population.

variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing.

A purposive sampling strategy that selects a case(s) that represent extreme or underrepresented perspectives. It is a way of intentionally focusing on or representing voices that may not often be heard or given emphasis.

The extent to which a measurement method appears “on its face” to measure the construct of interest

when a measure does not indicate the presence of a phenomenon, when in reality it is present

when a measure indicates the presence of a phenomenon, when in reality it is not present

whether you can practically and ethically complete the research project you propose

Research methods in this tradition seek to, "remove the power imbalance between research and subject; (are) politically motivated in that (they) seeks to change social inequality; and (they) begin with the standpoints and experiences of women".[footnote]PAR-L. (2010). Introduction to feminist research. [Webpage]. https://www2.unb.ca/parl/research.htm#:~:text=Methodologically%2C%20feminist%20research%20differs%20from,standpoints%20and%20experiences%20of%20women.[/footnote]

respondents to a survey who choose neutral response options, even if they have an opinion

Notes that are taken by the researcher while we are in the field, gathering data.

Questions that screen out/identify a certain type of respondent, usually to direct them to a certain part of the survey.

items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample

respondents to a survey who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion

Type of interview where participants answer questions in a group.

A document that will outline the instructions for conducting your focus group, including the questions you will ask participants. It often concludes with a debriefing statement for the group, as well.

A form of data gathering where researchers ask a group of participants to respond to a series of (mostly open-ended) questions.

Deliberate actions taken to impact a research project. For example deliberately falsifying data, plagiarism, not being truthful about the methodology, etc.

A table that lays out how many cases fall into each level of a variable.

A full board review will involve multiple members of the IRB evaluating your proposal. When researchers submit a proposal under full board review, the full IRB board will meet, discuss any questions or concerns with the study, invite the researcher to answer questions and defend their proposal, and vote to approve the study or send it back for revision. Full board proposals pose greater than minimal risk to participants. They may also involve the participation of vulnerable populations, or people who need additional protection from the IRB.

the people or organizations who control access to the population you want to study

The ability to apply research findings beyond the study sample to some broader population,

Findings form a research study that apply to larger group of people (beyond the sample). Producing generalizable findings requires starting with a representative sample.

(as in generalization) to make claims about a large population based on a smaller sample of people or items

research reports released by non-commercial publishers, such as government agencies, policy organizations, and think-tanks

A type of research design that is often used to study a process or identify a theory about how something works.

A form of qualitative analysis that aims to develop a theory or understanding of how some event or series of events occurs by closely examining participant knowledge and experience of that event(s).

A composite scale using a series of items arranged in increasing order of intensity of the construct of interest, from least intense to most intense.

The quality of or the amount of difference or variation in data or research participants.

a graphical display of a distribution.

The quality of or the amount of similarity or consistency in data or research participants.

As researchers in the social science, we ourselves are the main tool for conducting our studies.

The US Department of Health and Human Services (USDHHS) defines a human subject as “a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information ” (USDHHS, 1993, para. 1). [2]

a statement describing a researcher’s expectation regarding what they anticipate finding

A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct, testing that prediction, refining the theory in light of the findings, and using that refined theory to develop new hypotheses, and so on.

attempts to explain or describe your phenomenon exhaustively, based on the subjective understandings of your participants

A rich, deep, detailed understanding of a unique person, small group, and/or set of circumstances.

Tthe long-term condition that occurs at the end of a defined time period after an intervention.

The scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services.

the impact your study will have on participants, communities, scientific knowledge, and social justice

Inclusion criteria are general requirements a person must possess to be a part of your sample.

causes a change in the dependent variable

a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas

Clues that demonstrate the presence, intensity, or other aspects of a concept in the real world

things that require subtle and complex observations to measure, perhaps we must use existing knowledge and intuition to define.

In nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group.

inductive reasoning draws conclusions from individual observations

An approach to data analysis in which we gather our data first and then generate a theory about its meaning through our analysis.

when a researcher starts with a set of observations and then moves from particular experiences to a more general set of propositions about those experiences

"a set of abilities requiring individuals to 'recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (American Library Association, 2020)

the accumulation of special rights and advantages not available to others in the area of information access

A process through which the researcher explains the research process, procedures, risks and benefits to a potential participant, usually through a written document, which the participant than signs, as evidence of their agreement to participate.

an administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated

The consistency of people’s responses across the items on a multiple-item measure. Responses about the same underlying construct should be correlated, though not perfectly.

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

a paradigm based on the idea that social context and interaction frame our realities

The extent to which different observers are consistent in their assessment or rating of a particular characteristic or item.

the various aspects or dimensions that come together in forming our identity

A level of measurement that is continuous, can be rank ordered, is exhaustive and mutually exclusive, and for which the distance between attributes is known to be equal. But for which there is no zero point.

An interview guide is a document that outlines the flow of information during your interview, including a greeting and introduction to orient your participant to the topic, your questions and any probes, and any debriefing statement you might include. If you are part of a research team, your interview guide may also include instructions for the interviewer if certain things are brought up in the interview or as general guidance.

A questionnaire that is read to respondents

any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options

A form of data gathering where researchers ask individual participants to respond to a series of (mostly open-ended) questions.

Type of reliability in which a rater rates something the same way on two different occasions.

a statistic ranging from 0 to 1 that measures how much outcomes (1) within a cluster are likely to be similar or (2) between different clusters are likely to be different

a “gut feeling” about what to do based on previous experience or knowledge

yer gut feelin'

occurs when two variables change in opposite directions - one goes up, the other goes down and vice versa; also called negative association

when the order in which the items are presented affects people’s responses

An iterative approach means that after planning and once we begin collecting data, we begin analyzing as data as it is coming in.  This early analysis of our (incomplete) data, then impacts our planning, ongoing data gathering and future analysis as it progresses.

a nonlinear process in which the original product is revised over and over again to improve it

One of the three ethical principles in the Belmont Report. States that benefits and burdens of research should be distributed fairly.

Someone who is especially knowledgeable about a topic being studied.

the words or phrases in your search query

when a participant's answer to a question is altered due to the way in which a question is written. In essence, the question leads the participant to answer in a specific way.

The level that describes how data for variables are recorded. The level of measurement defines the type of operations can be conducted with your data. There are four levels: nominal, ordinal, interval, and ratio.

measuring people’s attitude toward something by assessing their level of agreement with several statements about it

A research process where you create a plan, you gather your data, you analyze your data and each step is completed before you proceed to the next.

a statistical technique that can be used to predict how an independent variable affects a dependent variable in the context of other variables.

A science that deals with the principles and criteria of validity of inference and demonstration: the science of the formal principles of reasoning.

A graphic depiction (road map) that presents the shared relationships among the resources, activities, outputs, outcomes, and impact for your program

Researcher collects data from participants at multiple points over an extended period of time using a questionnaire.

examining social structures and institutions

The strength of a correlation, determined by the absolute value of a correlation coefficient

a type of survey question that lists a set of questions for which the response options are all the same in a grid layout

A purposive sampling strategy where you choose cases because they represent a range of very different perspectives on a topic

Also called the average, the mean is calculated by adding all your cases and dividing the total by the number of cases.

One number that can give you an idea about the distribution of your data.

The process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena under investigation in a research study.

The differerence between that value that we get when we measure something and the true value

Instrument or tool that operationalizes (measures) the concept that you are studying.

The value in the middle when all our values are placed in numerical order. Also called the 50th percentile.

Variables that refer to the mechanisms by which an independent variable might affect a dependent variable.

Member checking involves taking your results back to participants to see if we "got it right" in our analysis. While our findings bring together many different peoples' data into one set of findings, participants should still be able to recognize their input and feel like their ideas and experiences have been captured adequately.

approach to recruitment where participants are members of an organization or social group with identified membership

Memoing is the act of recording your thoughts, reactions, quandaries as you are reviewing the data you are gathering.

A written agreement between parties that want to participate in a collaborative project.

level of interaction or activity that exists between groups and within communities

a study that combines raw data from multiple quantitative studies and analyzes the pooled data using statistics

a study that combines primary data from multiple qualitative sources and analyzes the pooled data

an explanation of why you chose the specific design of your study; why do your chosen methods fit with the aim of your research

A description of how research is conducted.

level of interaction or activity that exists at the smallest level, usually among individuals

Usually unintentional. Very broad category that covers things such as not using the proper statistics for analysis, injecting bias into your study and in interpreting results, being careless with your research methodology

when researchers use both quantitative and qualitative methods in a project

The most commonly occurring value of a variable.

A variable that affects the strength and/or direction of the relationship between the independent and dependent variables.

concepts that are comprised of multiple elements

An empirical structure for measuring items or indicators of the multiple dimensions of a concept.

A group of statistical techniques that examines the relationship between at least three variables

Mutually exclusive categories are options for closed ended questions that do not overlap, so people only fit into one category or another, not both.

Those stories that we compose as human beings that allow us to make meaning of our experiences and the world around us

US legislation passed In 1974, which created the National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research, which went on to produce The Belmont Report.

collecting data in the field where it naturally/normally occurs

Making qualitative observations that attempt to capture the subjects of the observation as unobtrusively as possible and with limited structure to the observation.

Including data that contrasts, contradicts, or challenges the majority of evidence that we have found or expect to find

occurs when two variables change in opposite directions - one goes up, the other goes down and vice versa

ensuring that we have correctly captured and reflected an accurate understanding in our findings by clarifying and verifying our findings with our participants

The idea that qualitative researchers attempt to limit or at the very least account for their own biases, motivations, interests and opinions during the research process.

The lowest level of measurement; categories cannot be mathematically ranked, though they are exhaustive and mutually exclusive

causal explanations that can be universally applied to groups, such as scientific laws or universal truths

provides a more general, sweeping explanation that is universally true for all people

sampling approaches for which a person’s likelihood of being selected for membership in the sample is unknown

Referring to data analysis that doesn't examine how variables relate to each other.

If the majority of the targeted respondents fail to respond to a survey, then a legitimate concern is whether non-respondents are not responding due to a systematic reason, which may raise questions about the validity of the study’s results, especially as this relates to the representativeness of the sample.

The bias that occurs when those who respond to your request to participate in a study are different from those who do not respond to you request to participate in a study.

an association between two variables that is NOT caused by a third variable

the assumption that no relationship exists between the variables in question

The Nuremberg Code is a 10-point set of research principles designed to guide doctors and scientists who conduct research on human subjects, crafted in response to the atrocities committed during the Holocaust.

a single truth, observed without bias, that is universally applicable

Observation is a tool for data gathering where researchers rely on their own senses (e.g. sight, sound) to gather information on a topic.

In measurement, conditions that are easy to identify and verify through direct observation.

The rows in your data set. In social work, these are often your study participants (people), but can be anything from census tracts to black bears to trains.

including more than one member of your research team to aid in analyzing the data

The federal government agency that oversees IRBs.

a statistical procedure to compare the means of a variable across three or more groups

assumptions about what is real and true

journal articles that are made freely available by the publisher

An initial phase of coding that involves reviewing the data to determine the preliminary ideas that seem important and potential labels that reflect their significance.

sharing one's data and methods for the purposes of replication, verifiability, and collaboration of findings

Questions for which the researcher does not include response options, allowing for respondents to answer the question in their own words

According to the APA Dictionary of Psychology, an operational definition is "a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization."

process by which researchers spell out precisely how a concept will be measured in their study

Oral histories are a type of qualitative research design that offers a detailed accounting of a person's life, some event, or experience. This story(ies) is aimed at answering a specific research question.

verbal presentation of research findings to a conference audience

Level of measurement that follows nominal level. Has mutually exclusive categories and a hierarchy (rank order), but we cannot calculate a mathematical distance between attributes.

Extreme values in your data.

summarizes the incompatibility between a particular set of data and a proposed model for the data, usually the null hypothesis. The lower the p-value, the more inconsistent the data are with the null hypothesis, indicating that the relationship is statistically significant.

group presentations that feature experts on a given issue, with time for audience question-and-answer

A type of longitudinal design where the researchers gather data at multiple points in time and the same people participate in the survey each time it is administered.

Those who are asked to contribute data in a research study; sometimes called respondents or subjects.

An approach to research that more intentionally attempts to involve community members throughout the research process compared to more traditional research methods. In addition, participatory approaches often seek some concrete, tangible change for the benefit of the community (often defined by the community).

when a publisher prevents access to reading content unless the user pays money

A qualitative research tool for enhancing rigor by partnering with a peer researcher who is not connected with your project (therefore more objective), to discuss project details, your decision, perhaps your reflexive journal, as a means of helping to reduce researcher bias and maintain consistency and transparency in the research process.

a formal process in which other esteemed researchers and experts ensure your work meets the standards and expectations of the professional field

trade publications, magazines, and newspapers

the tendency for a pattern to occur at regular intervals

A qualitative research design that aims to capture and describe the lived experience of some event or "phenomenon" for a group of people.

Photovoice is a technique that merges pictures with narrative (word or voice data that helps that interpret the meaning or significance of the visual artifact. It is often used as a tool in CBPR.

Testing out your research materials in advance on people who are not included as participants in your study.

as a criteria for causal relationship, the relationship must make logical sense and seem possible

A purposive sampling strategy that focuses on selecting cases that are important in representing a contemporary politicized issue.

the larger group of people you want to be able to make conclusions about based on the conclusions you draw from the people in your sample

A statement about the researchers worldview and life experiences, specifically in respect to the research topic they are studying. It helps to demonstrate the subjective connection(s) the researcher has to the topic and is a way to encourage transparency in research.

a paradigm guided by the principles of objectivity, knowability, and deductive logic

A measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.

an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment

presentations that use a poster to visually represent the elements of the study

the odds you will detect a significant relationship between variables when one is truly present in your sample

describe “how things are done” or comment on pressing issues in practice (Wallace & Wray, 2016, p. 20)

How well your findings can be translated and used in the "real world." For example, you may have a statistically significant correlation; however, the relationship may be very weak. This limits your abiltiy to use these data for real world change.

improvements in cognitive assessments due to exposure to the instrument

knowledge gained through “learning by doing” that guides social work intervention and increases over time

a research paradigm that suspends questions of philosophical ‘truth’ and focuses more on how different philosophies, theories, and methods can be used strategically to resolve a problem or question within the researcher's unique context

A type of criterion validity that examines how well your tool predicts a future criterion.

A measure of a participant's condition before they receive an intervention or treatment.

a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments

Data you have collected yourself.

in a literature review, a source that describes primary data collected and analyzed by the author, rather than only reviewing what other researchers have found

This means that one scientist could repeat another’s study with relative ease. By replicating a study, we may become more (or less) confident in the original study’s findings.

a type of cluster sampling, in which clusters are given different chances of being selected based on their size so that each element across all of the clusters has an equal chance of being selected

sampling approaches for which a person’s likelihood of being selected from the sampling frame is known

Probes a brief prompts or follow up questions that are used in qualitative interviewing to help draw out additional information on a particular question or idea.

An analysis of how well a program runs

the "uptake of formal and informal learning opportunities that deepen and extend...professional competence, including knowledge, beliefs, motivation, and self-regulatory skills" (Richter, Kunter, Klusmann, Lüdtke, & Baumert, 2014)

The systematic process by which we determine if social programs are meeting their goals, how well the program runs, whether the program had the desired effect, and whether the program has merit according to stakeholders (including in terms of the monetary costs and benefits)

As researchers, this means we are extensively spending time with participants or are in the community we are studying.

In prospective studies, individuals are followed over time and data about them is collected as their characteristics or circumstances change.

a person who completes a survey on behalf of another person

Fake names assigned in research to protect the identity of participants.

claims about the world that appear scientific but are incompatible with the values and practices of science

The science of measurement. Involves using theory to assess measurement procedures and tools.

approach to recruitment where participants are sought in public spaces

In a purposive sample, participants are intentionally or hand-selected because of their specific expertise or experience.

data derived from analysis of texts. Usually, this is word data (like a conversation or journal entry) but can also include performances, pictures, and other means of expressing ideas.

qualitative methods interpret language and behavior to understand the world from the perspectives of other people

Research that involves the use of data that represents human expression through words, pictures, movies, performance and other artifacts.

numerical data

when a researcher administers a questionnaire verbally to participants

quantitative methods examine numerical data to precisely describe and predict elements of the social world

a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups

Research methods using this approach aim to question, challenge and/or reject knowledge that is commonly accepted and privileged in society and elevate and empower knowledge and perspectives that are often perceived as non-normative.

search terms used in a database to find sources of information, like articles or webpages

A research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner

A quota sample involves the researcher identifying a subgroups within a population that they want to make sure to include in their sample, and then identifies a quota or target number to recruit that represent each of these subgroups.

using a random process to decide which participants are tested in which conditions

Unpredictable error that does not result in scores that are consistently higher or lower on a given measure but are nevertheless inaccurate.

Errors lack any perceptable pattern.

an experiment that involves random assignment to a control and experimental group to evaluate the impact of an intervention or stimulus

An approach to sampling where all elements or people in a sampling frame have an equal chance of being selected for inclusion in a study's sample.

The difference between the highest and lowest scores in the distribution.

An ordered set of responses that participants must choose from.

The highest level of measurement. Denoted by mutually exclusive categories, a hierarchy (order), values can be added, subtracted, multiplied, and divided, and the presence of an absolute zero.

unprocessed data that researchers can analyze using quantitative and qualitative methods (e.g., responses to a survey or interview transcripts)

When respondents have difficult providing accurate answers to questions due to the passage of time.

Concept advanced by Albert Bandura that human behavior both shapes and is shaped by their environment.

The act of putting the deconstructed qualitative back together during the analysis process in the search for meaning and ultimately the results of the study.

the process by which the researcher informs potential participants about the study and attempts to get them to participate

A research journal that helps the researcher to reflect on and consider their thoughts and reactions to the research process and how it may be shaping the study

How we understand and account for our influence, as researchers, on the research process.

the process of considering something abstract to be a concrete object or thing; the fallacy of reification is assuming that abstract concepts exist in some concrete, tangible way

The degree to which an instrument reflects the true score rather than error.  In statistical terms, reliability is the portion of observed variability in the sample that is accounted for by the true variability, not by error. Note : Reliability is necessary, but not sufficient, for measurement validity.

a sample that looks like the population from which it was selected in all respects that are potentially relevant to the study

How closely your sample resembles the population from which it was drawn.

a systematic investigation, including development, testing, and. evaluation, designed to develop or contribute to generalizable knowledge

These are sites where contributing researchers can house data that other researchers can view and request permission to use

the methods researchers use to examine empirical data

a set of common philosophical (ontological, epistemological, and axiological) assumptions that inform research (e.g., Post-positivism, Constructivism, Pragmatic, Critical)

a document produced by researchers that reviews the literature relevant to their topic and describes the methods they will use to conduct their study

The details/steps outlining how a study will be carried out.

The unintended influence that the researcher may have on the research process.

One of the three ethical principles espoused in the Belmont Report. Treating people as autonomous beings who have the right to make their own decisions. Acknowledging participants' personal dignity.

the answers researchers provide to participants to choose from when completing a questionnaire

Similar to other longitudinal studies, these surveys deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past.

journal articles that summarize the findings of other researchers and establish the state of the literature in a given topic area

Rigor is the process through which we demonstrate, to the best of our ability, that our research is empirically sound and reflects a scientific approach to knowledge building.

facilitated discussions on a topic, often to generate new ideas

the group of people you successfully recruit from your sampling frame to participate in your study

The number of cases found in your final sample.

Sampling bias is present when our sampling process results in a sample that does not represent our population in some way.

the set of all possible samples you could draw for your study

The difference in the statistical characteristics of the population (i.e., the population parameters ) and those in the sample (i.e., the sample statistics ); the error caused by observing characteristics of a sample rather than the entire population
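
A small simulation can make this concrete. The sketch below (synthetic income data, standard library only) compares one sample statistic to its population parameter:

```python
import random
import statistics

# Hypothetical population of 10,000 incomes
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]
population_mean = statistics.mean(population)   # the population parameter

sample = random.sample(population, k=100)
sample_mean = statistics.mean(sample)           # the sample statistic

sampling_error = sample_mean - population_mean  # differs from run to run
```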

the list of people from which a researcher will draw her sample

used in systematic random sampling; the distance between the elements in the sampling frame selected for the sample; determined by dividing the total sampling frame by the desired sample size

The point where gathering more data doesn't offer any new ideas or perspectives on the issue you are studying.  Reaching saturation is an indication that we can stop qualitative data collection.

A graphical representation of data where the y-axis (the vertical one along the side) is your variable's value and the x-axis (the horizontal one along the bottom) represents the individual instance in your data.

Visual representations of the relationship between two interval/ratio variables that usually use dots to represent data points
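
A minimal matplotlib sketch, using invented values for two interval/ratio variables:

```python
import matplotlib.pyplot as plt

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]       # hypothetical data
exam_score = [55, 60, 62, 70, 74, 80, 83, 90]

plt.scatter(hours_studied, exam_score)  # each dot is one case
plt.xlabel("Hours studied")
plt.ylabel("Exam score")
plt.show()
```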

a way of knowing that attempts to systematically collect and categorize facts or truths

Data someone else has collected that you have permission to use in your research.

analyzing data that has been collected by another person or research group

interpret, discuss, and summarize primary sources

the degree to which people in my sample differ from the overall population

Selective or theoretical coding is part of a qualitative analysis process that seeks to determine how important concepts and their relationships to each other come together, providing a theory that describes the focus of the study. It often results in an overarching or unifying idea tying these concepts together.

A questionnaire that is distributed to participants (in person, by mail, virtually) to complete independently.

a participant answers questions about themselves

Composite (multi-item) scales in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites.

An interview that has a general framework for the questions that will be asked, but there is more flexibility to pursue related topics that are brought up by participants than is found in a structured interview approach.

“a classic work of research literature that is more than 5 years old and is marked by its uniqueness and contribution to professional knowledge” (Houser, 2018, p. 112)

in mixed methods research, this refers to the order each method is used

the words used to identify the organization and structure of your literature review to your reader

selecting elements from a list using randomly generated numbers
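
In code this is a single draw; the sketch below assumes a hypothetical sampling frame of 500 names:

```python
import random

sampling_frame = [f"person_{i}" for i in range(1, 501)]  # hypothetical frame

# Every element has an equal chance of being selected
sample = random.sample(sampling_frame, k=50)
```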

A distribution where cases are clustered on one or the other side of the median.

For a snowball sample, a few initial participants are recruited, and then we rely on those initial (and successive) participants to help identify additional people to recruit. We thus rely on participants' connections and knowledge of the population to aid our recruitment.

When a participant answers in a way that they believe is socially the most acceptable answer.

Social desirability bias occurs when we create questions that lead respondents to answer in ways that don't reflect their genuine thoughts or feelings to avoid being perceived negatively.

the science of humanity, social interactions, and social structures

A reliability evaluation that examines the internal consistency of a measurement tool. This process involves comparing one half of a tool to the other half of the same tool and evaluating the results.
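
A rough illustration of the procedure in Python; the item responses here are random numbers, so the resulting coefficient is only a demonstration, and the Spearman-Brown formula at the end is the usual correction for halving the test length:

```python
import numpy as np

# Hypothetical item-response matrix: 100 respondents x 10 items scored 1-5
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(100, 10)).astype(float)

odd_half = items[:, 0::2].sum(axis=1)   # scores on items 1, 3, 5, ...
even_half = items[:, 1::2].sum(axis=1)  # scores on items 2, 4, 6, ...

r = np.corrcoef(odd_half, even_half)[0, 1]  # correlation between the halves
split_half_reliability = (2 * r) / (1 + r)  # Spearman-Brown adjustment
```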

when an association between two variables appears to be causal but can in fact be explained by the influence of a third variable

The people and organizations that have some interest in or will be affected by our program.

The ability to fail to accept the null hypothesis (i.e., to actually find what you are seeking)

"Assuming that the null hypothesis is true and the study is repeated an infinite number times by drawing random samples from the same populations(s), less than 5% of these results will be more extreme than the current result" (Cassidy et al., 2019, p. 233).

the characteristic by which the sample is divided in stratified random sampling

dividing the study population into subgroups based on a characteristic (or strata) and then drawing a sample from each subgroup
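
A minimal sketch with a hypothetical two-stratum population, drawing a proportional (10%) random sample from each subgroup:

```python
import random

# Hypothetical population tagged with a stratification characteristic
population = ([("freshman", i) for i in range(400)]
              + [("senior", i) for i in range(100)])

strata = {}
for person in population:
    strata.setdefault(person[0], []).append(person)

# Draw a proportional random sample from each subgroup
sample = []
for group in strata.values():
    sample.extend(random.sample(group, k=len(group) // 10))
```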

Interview that uses a very prescribed or structured approach, with a rigid set of questions that are asked very consistently each time, with little to no deviation

Numbers or a series of numbers, symbols, and letters assigned in research both to organize data as it is collected and to protect the identity of participants.

the subset of the target population available for study

one truth among many, bound within a social and cultural context

The use of questionnaires to gather data from multiple participants.

A distribution with a roughly equal number of cases on either side of the median.

(also known as bias) refers to when a measure consistently outputs incorrect data, usually in one direction and due to an identifiable process

Errors that are generally predictable.

a probability sampling approach that begins by selecting a random start on a sampling frame and then selects every kth element from your sampling frame for the sample
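
Combining this with the sampling interval defined earlier, a sketch with a hypothetical frame of 1,000 elements:

```python
import random

sampling_frame = [f"person_{i}" for i in range(1, 1001)]  # hypothetical frame
desired_n = 100

k = len(sampling_frame) // desired_n   # sampling interval (here, 10)
start = random.randrange(k)            # random start within the first interval
sample = sampling_frame[start::k]      # every kth element thereafter

print(len(sample))                     # 100
```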

journal articles that identify, appraise, and synthesize all relevant studies on a particular topic (Uman, 2011, p.57)

a quick, condensed summary of the report’s key findings arranged by row and column

knowledge that is difficult to express in words and may be conveyed more through intuition or feelings

the group of people whom your study addresses

approach to recruitment where participants are selected based on some personal characteristic or group association

as a criterion for a causal relationship, the cause must come before the effect

any findings that follow from constructivist studies are not inherently applicable to other people or situations, as their realities may be quite different

review primary and secondary sources

The extent to which scores obtained on a scale or other measure are consistent across time

The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself

The Belmont Report is a document outlining basic ethical principles for research on human subjects in the United States and is the foundation of work conducted by IRBs in carrying out their task of overseeing protection of human subjects in research (National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research, 1979).

published works that document a scholarly conversation on a specific topic within and between disciplines

Thematic analysis is an approach to qualitative analysis, in which the researcher attempts to identify themes or patterns across their data to better understand the topic being studied.

A visual representation of how each individual category fits with the others when using thematic analysis to analyze your qualitative data.

a network of linked concepts that together provide a rationale for a research project or analysis; theoretical frameworks are based in theory and empirical literature

a set of concepts and relationships scientists use to explain the social world

A thick description is a very complete, detailed, and illustrative account of the subject being described.

Biases or circumstances that can reduce or limit the internal validity of a study

circumstances or events that may affect the outcome of an experiment, resulting in changes in the research participants that are not a result of the intervention, treatment, or experimental condition being tested

A demonstration that a change occurred after an intervention. An important criterion for establishing causality.

a set of measurements taken at intervals over a period of time

periodicals directed to members of a specific profession which often include information about industry trends and practical information for people working in the field

To type out the text of a recorded interview or focus group.

The process of research is recorded and described in such a way that the steps the researcher took throughout the research process are clear.

ensuring that everyone receives the same, or close to the same, treatment as possible

The stage in single subjects research design in which the treatment or intervention is delivered

A type of longitudinal survey where the researchers gather data at multiple times, but each time they ask different people from the group they are studying because their concern is capturing the sentiment of the group, not the individual people they survey.

Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.

An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed

Trustworthiness is a quality reflected by qualitative research that is conducted in a credible way; a way that should produce confidence in its findings.

Data that accurately portrays information that was shared in or by the original source.

The level of confidence that research is obtained through a systematic and scientific process and that findings can be clearly connected to the data they are based on (and not some fabrication or falsification of that data).

a statistical procedure to compare the means of a variable across groups using multiple independent variables to distinguish among groups
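
One common way to run such an analysis in Python is statsmodels' formula interface; the data frame below is entirely made up and the column names are placeholders:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical outcome with two categorical independent variables
df = pd.DataFrame({
    "score":   [4, 5, 6, 7, 3, 4, 8, 9, 5, 6, 7, 8],
    "gender":  ["m", "m", "f", "f", "m", "m", "f", "f", "m", "f", "m", "f"],
    "program": ["a", "b", "a", "b", "a", "b", "a", "b", "a", "b", "a", "b"],
})

model = ols("score ~ C(gender) + C(program) + C(gender):C(program)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # two-way ANOVA table
```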

A purposive sampling strategy where you select cases that represent the most common or a commonly held perspective.

concepts that are expected to have a single underlying dimension

A distribution with one distinct peak when represented on a histogram.

A rating scale where the magnitude of a single trait is being tested

entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations)

discrete segments of data

Univariate data analysis is a quantitative method in which a variable is examined individually to determine its distribution.
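
With pandas, a single variable's distribution can be summarized in a couple of calls (the ages below are invented):

```python
import pandas as pd

age = pd.Series([19, 22, 22, 24, 25, 25, 25, 31, 40])  # hypothetical variable

print(age.describe())      # central tendency and spread
print(age.value_counts())  # frequency distribution
```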

Interviews that contain a very open-ended talking prompt that we want participants to respond to, with much flexibility to follow the conversation where it leads.

The extent to which the scores from a measure represent the variable they are intended to measure.

The extent to which the levels of a variable vary around their central tendency (the mean, median, or mode).
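
A standard-library sketch; variance and standard deviation are the usual summaries of spread around the mean:

```python
import statistics

scores = [4, 8, 15, 16, 23, 42]          # hypothetical values

mean = statistics.mean(scores)           # central tendency
variance = statistics.variance(scores)   # mean squared distance from the mean
std_dev = statistics.stdev(scores)       # spread in the variable's own units
```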

“a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population” (Gillespie & Wagner, 2018, p. 9)

The name of your variable.

People who are at risk of undue influence or coercion. Examples are children, prisoners, parolees, and persons with impaired mental capabilities. Additional groups may be vulnerable if they are deemed to be unable to give consent.

According to the APA Dictionary of Psychology : an experimental design in which the treatment or other intervention is removed during one or more periods. A typical withdrawal design consists of three phases: an initial condition for obtaining a baseline, a condition in which the treatment is applied, and another baseline condition in which the treatment has been withdrawn. Often, the baseline condition is represented by the letter A and the treatment condition by the letter B, such that this type of withdrawal design is known as an A-B-A design. A fourth phase of reapplying the intervention may be added, as well as a fifth phase of removing the intervention, to determine whether the effect of the intervention can be reproduced.

interactive presentations in which presenters work hands-on with audience members to teach them new skills

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

  • Think back to the experiment you considered for your research project in Section 14.3. Now that you know more about quasi-experimental designs, do you still think it's a true experiment? Why or why not?
  • What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?

TRACK 2 (IF YOU AREN'T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

Imagine you are interested in studying child welfare practice. You are interested in learning more about community-based programs aimed at preventing child maltreatment and out-of-home placement for children.

  • Now that you know more about quasi-experimental designs, do you think the research design you proposed in the previous section is still a true experiment? Why or why not?

