Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas. Revised on January 22, 2024.

Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable.

However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Frequently asked questions about quasi-experimental designs

Differences between quasi-experiments and true experiments

There are several common differences between true and quasi-experimental designs.

  • Assignment to treatment: In a true experiment, the researcher randomly assigns subjects to control and treatment groups. In a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment: In a true experiment, the researcher usually designs the treatment. In a quasi-experiment, the researcher often does not design the treatment but instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups: A true experiment requires the use of control and treatment groups. In a quasi-experiment, control groups are not required, although they are commonly used.

Example of a true experiment vs a quasi-experiment

Suppose you want to study the effectiveness of a new psychotherapy for treating depression by comparing patients at a mental health clinic who receive it with patients who receive the standard treatment. However, for ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.

Types of quasi-experimental designs

Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In a nonequivalent groups design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment, the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups.

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

For example, suppose students are admitted to a selective school based on whether they pass an entrance exam. Since the exact cutoff score is arbitrary, the students near the threshold—those who just barely pass the exam and those who fail by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any outcome differences must come from the school they attended.

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some natural experiments involve a form of random assignment, they are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable, they can exploit this event after the fact to study the effect of the treatment.

For example, in the Oregon Health Study discussed below, the state of Oregon wanted to expand Medicaid coverage to low-income adults. However, as the state could not afford to cover everyone deemed eligible for the program, it instead allocated spots in the program based on a random lottery.

When to use quasi-experimental design

Although true experiments have higher internal validity, you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Advantages and disadvantages

Quasi-experimental designs have various pros and cons compared to other types of studies.

Advantages:

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.

Disadvantages:

  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete, or difficult to access.

Frequently asked questions about quasi-experimental designs

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.

Quasi Experimental Design Overview & Examples

By Jim Frost

What is a Quasi Experimental Design?

A quasi experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.

Quasi-experimental research is a design that closely resembles experimental research but is different. The term “quasi” means “resembling,” so you can think of it as a cousin to actual experiments. In these studies, researchers can manipulate an independent variable — that is, they change one factor to see what effect it has. However, unlike true experimental research, participants are not randomly assigned to different groups.

When to Use Quasi-Experimental Design

Researchers typically use a quasi-experimental design because they can’t randomize due to practical or ethical concerns. For example:

  • Practical Constraints : A school interested in testing a new teaching method can only implement it in preexisting classes and cannot randomly assign students.
  • Ethical Concerns : A medical study might not be able to randomly assign participants to a treatment group for an experimental medication when they are already taking a proven drug.

Quasi-experimental designs also come in handy when researchers want to study the effects of naturally occurring events, like policy changes or environmental shifts, where they can’t control who is exposed to the treatment.

Quasi-experimental designs occupy a unique position in the spectrum of research methodologies, sitting between observational studies and true experiments. This middle ground offers a blend of both worlds, addressing some limitations of purely observational studies while navigating the constraints often accompanying true experiments.

A significant advantage of quasi-experimental research over purely observational studies and correlational research is that it addresses the issue of directionality, determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and the researchers record outcomes before and after it, increasing the confidence that it causes the observed changes.

However, it’s crucial to recognize its limitations as well. Controlling confounding variables is a larger concern for a quasi-experimental design than a true experiment because it lacks random assignment.

In sum, quasi-experimental designs offer a valuable research approach when random assignment is not feasible, providing a more structured and controlled framework than observational studies while acknowledging and attempting to address potential confounders.

Types of Quasi-Experimental Designs and Examples

Quasi-experimental studies use various methods, depending on the scenario.

Natural Experiments

This design uses naturally occurring events or changes to create the treatment and control groups. Researchers compare outcomes between those whom the event affected and those it did not affect. Analysts use statistical controls to account for confounders that the researchers must also measure.

Natural experiments are related to observational studies, but they allow for a clearer causality inference because the external event or policy change provides both a form of quasi-random group assignment and a definite start date for the intervention.

For example, in a natural experiment utilizing a quasi-experimental design, researchers study the impact of a significant economic policy change on small business growth. The policy is implemented in one state but not in neighboring states. This scenario creates an unplanned experimental setup, where the state with the new policy serves as the treatment group, and the neighboring states act as the control group.

Researchers are primarily interested in small business growth rates but need to record various confounders that can impact growth rates. Hence, they record state economic indicators, investment levels, and employment figures. By recording these metrics across the states, they can include them in the model as covariates and control them statistically. This method allows researchers to estimate differences in small business growth due to the policy itself, separate from the various confounders.
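
To make the covariate-adjustment step concrete, here is a minimal sketch in Python using the statsmodels and pandas libraries. The column names and synthetic data are invented for illustration; a real analysis would use the recorded state metrics described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # hypothetical county-level observations across the states

# Synthetic data: 'policy' marks observations from the state that adopted
# the new policy; the other columns are the measured confounders.
df = pd.DataFrame({
    "policy": rng.integers(0, 2, n),
    "econ_index": rng.normal(100, 10, n),
    "investment": rng.normal(50, 5, n),
    "employment": rng.normal(60, 8, n),
})
# Assume the policy adds about 2 points of growth on top of covariate effects.
df["growth"] = (
    2.0 * df["policy"]
    + 0.05 * df["econ_index"]
    + 0.10 * df["investment"]
    + rng.normal(0, 1, n)
)

# Regress growth on the policy indicator while controlling for the measured
# confounders; the 'policy' coefficient estimates the policy's effect.
model = smf.ols("growth ~ policy + econ_index + investment + employment",
                data=df).fit()
print(model.params["policy"])  # recovers roughly 2.0
```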

Nonequivalent Groups Design

This method involves matching existing groups that are similar but not identical. Researchers attempt to find groups that are as equivalent as possible, particularly for factors likely to affect the outcome.

For instance, researchers use a nonequivalent groups quasi-experimental design to evaluate the effectiveness of a new teaching method in improving students’ mathematics performance. A school district considering the teaching method is planning the study. Students are already divided into schools, preventing random assignment.

The researchers matched two schools with similar demographics, baseline academic performance, and resources. The school using the traditional methodology is the control, while the other uses the new approach. Researchers are evaluating differences in educational outcomes between the two methods.

They perform a pretest to identify differences between the schools that might affect the outcome and include them as covariates to control for confounding. They also record outcomes before and after the intervention to have a larger context for the changes they observe.
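
A rough sketch of this pretest-adjusted comparison (an ANCOVA-style model) in Python, again with hypothetical data and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: pretest and posttest math scores for students in the
# control school (new_method=0) and the treatment school (new_method=1).
df = pd.DataFrame({
    "new_method": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    "pretest":  [62, 70, 55, 68, 74, 60, 72, 58, 66, 75],
    "posttest": [65, 74, 57, 70, 78, 68, 80, 66, 75, 84],
})

# ANCOVA-style model: adjust posttest scores for pretest differences between
# the nonequivalent groups, then estimate the teaching method's effect.
model = smf.ols("posttest ~ new_method + pretest", data=df).fit()
print(model.params["new_method"])  # adjusted difference between schools
```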

Regression Discontinuity

This process assigns subjects to a treatment or control group based on a predetermined cutoff point (e.g., a test score). The analysis primarily focuses on participants near the cutoff point, as they are likely similar except for the treatment received. By comparing participants just above and below the cutoff, the design controls for confounders that vary smoothly around the cutoff.

For example, in a regression discontinuity quasi-experimental design focusing on a new medical treatment for depression, researchers use depression scores as the cutoff point. Individuals with depression scores just above a certain threshold are assigned to receive the latest treatment, while those just below the threshold do not receive it. This method creates two closely matched groups: one that barely qualifies for treatment and one that barely misses out.

By comparing the mental health outcomes of these two groups over time, researchers can assess the effectiveness of the new treatment. The assumption is that the only significant difference between the groups is whether they received the treatment, thereby isolating its impact on depression outcomes.
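
Here is a minimal sketch of a sharp regression discontinuity estimate in Python with statsmodels, using simulated data in place of real depression scores; the cutoff, bandwidth, and effect size are all invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
score = rng.uniform(0, 40, n)            # hypothetical depression score
cutoff = 20
treated = (score >= cutoff).astype(int)  # new treatment above the cutoff
# Outcome improves smoothly with score; treatment adds a jump of ~3 points.
outcome = 0.2 * score + 3.0 * treated + rng.normal(0, 1, n)

df = pd.DataFrame({"score": score, "treated": treated, "outcome": outcome})
df["centered"] = df["score"] - cutoff

# Keep only observations within a narrow bandwidth of the cutoff, where the
# groups are plausibly comparable, and allow separate slopes on each side.
bw = 5
local = df[df["centered"].abs() <= bw]
model = smf.ols("outcome ~ treated * centered", data=local).fit()
print(model.params["treated"])  # estimated jump at the cutoff, ~3.0
```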

Controlling Confounders in a Quasi-Experimental Design

Accounting for confounding variables is a challenging but essential task for a quasi-experimental design.

In a true experiment, the random assignment process equalizes confounders across the groups to nullify their overall effect. It’s the gold standard because it works on all confounders, known and unknown.

Unfortunately, the lack of random assignment can allow differences between the groups to exist before the intervention. These confounding factors might ultimately explain the results rather than the intervention.

Consequently, researchers must use other methods to equalize the groups roughly using matching and cutoff values or statistically adjust for preexisting differences they measure to reduce the impact of confounders.

A key strength of quasi-experiments is their frequent use of “pre-post testing.” This approach involves testing participants before the intervention begins to check for preexisting differences between groups that could impact the study’s outcome. By identifying these variables early on and including them as covariates, researchers can more effectively control potential confounders in their statistical analysis.

Additionally, researchers frequently track outcomes before and after the intervention to better understand the context for changes they observe.

Statisticians consider these methods to be less effective than randomization. Hence, quasi-experiments fall somewhere in the middle when it comes to internal validity , or how well the study can identify causal relationships versus mere correlation . They’re more conclusive than correlational studies but not as solid as true experiments.

In conclusion, quasi-experimental designs offer researchers a versatile and practical approach when random assignment is not feasible. This methodology bridges the gap between controlled experiments and observational studies, providing a valuable tool for investigating cause-and-effect relationships in real-world settings. Researchers can address ethical and logistical constraints by understanding and leveraging the different types of quasi-experimental designs while still obtaining insightful and meaningful results.

Cook, T. D., & Campbell, D. T. (1979).  Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin

Quasi-Experimental Research Design – Types, Methods

Quasi-Experimental Design

Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design.

In a quasi-experimental design, the researcher uses an existing group of participants that is not randomly assigned to the experimental and control groups. Instead, the groups are selected based on pre-existing characteristics or conditions, such as age, gender, or the presence of a certain medical condition.

Types of Quasi-Experimental Design

There are several types of quasi-experimental designs that researchers use to study causal relationships between variables. Here are some of the most common types:

Non-Equivalent Control Group Design

This design involves selecting two groups of participants that are similar in every way except for the independent variable(s) that the researcher is testing. One group receives the treatment or intervention being studied, while the other group does not. The two groups are then compared to see if there are any significant differences in the outcomes.

Interrupted Time-Series Design

This design involves collecting data on the dependent variable(s) over a period of time, both before and after an intervention or event. The researcher can then determine whether there was a significant change in the dependent variable(s) following the intervention or event.

Pretest-Posttest Design

This design involves measuring the dependent variable(s) before and after an intervention or event, but without a control group. This design can be useful for determining whether the intervention or event had an effect, but it does not allow for control over other factors that may have influenced the outcomes.

Regression Discontinuity Design

This design involves selecting participants based on a specific cutoff point on a continuous variable, such as a test score. Participants on either side of the cutoff point are then compared to determine whether the intervention or event had an effect.

Natural Experiments

This design involves studying the effects of an intervention or event that occurs naturally, without the researcher’s intervention. For example, a researcher might study the effects of a new law or policy that affects certain groups of people. This design is useful when true experiments are not feasible or ethical.

Data Analysis Methods

Here are some data analysis methods that are commonly used in quasi-experimental designs:

Descriptive Statistics

This method involves summarizing the data collected during a study using measures such as mean, median, mode, range, and standard deviation. Descriptive statistics can help researchers identify trends or patterns in the data, and can also be useful for identifying outliers or anomalies.
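
For instance, a minimal sketch with pandas (the group labels and scores are hypothetical):

```python
import pandas as pd

# Hypothetical outcome scores for a treatment and a comparison group.
df = pd.DataFrame({
    "group": ["treatment"] * 4 + ["control"] * 4,
    "score": [78, 85, 80, 90, 70, 72, 75, 68],
})

# Per-group summary measures named above: central tendency and spread.
print(df.groupby("group")["score"].agg(["mean", "median", "std", "min", "max"]))
```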

Inferential Statistics

This method involves using statistical tests to determine whether the results of a study are statistically significant. Inferential statistics can help researchers make generalizations about a population based on the sample data collected during the study. Common statistical tests used in quasi-experimental designs include t-tests, ANOVA, and regression analysis.
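
As a small illustration, a Welch's t-test with SciPy on hypothetical group outcomes (Welch's version avoids assuming equal variances, which is often safer with nonequivalent groups):

```python
from scipy import stats

# Hypothetical outcome measurements for a treatment and a comparison group.
treatment = [14.1, 15.3, 13.8, 16.0, 14.9, 15.5]
control = [12.7, 13.1, 12.9, 13.8, 12.4, 13.3]

# Welch's t-test (no equal-variance assumption) for a group difference.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```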

Propensity Score Matching

This method is used to reduce bias in quasi-experimental designs by matching participants in the intervention group with participants in the control group who have similar characteristics. This can help to reduce the impact of confounding variables that may affect the study’s results.
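
A simplified sketch of one-to-one propensity score matching in Python with scikit-learn; the covariates, data-generating process, and matching scheme are invented for illustration, and real applications typically add caliper constraints and balance checks:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 300
# Hypothetical covariates and a non-random treatment assignment that depends
# on them (older, sicker patients are more likely to be treated).
age = rng.normal(50, 10, n)
severity = rng.normal(5, 2, n)
p_treat = 1 / (1 + np.exp(-(-6 + 0.08 * age + 0.3 * severity)))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 0.1 * age + 0.5 * severity + rng.normal(0, 1, n)

X = np.column_stack([age, severity])
# Step 1: estimate each unit's propensity score (probability of treatment).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the closest score.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[matches.ravel()]

# Step 3: compare outcomes across the matched pairs.
att = outcome[t_idx].mean() - outcome[matched_controls].mean()
print(f"Estimated treatment effect on the treated: {att:.2f}")  # ~2.0
```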

Difference-in-differences Analysis

This method is used to compare the difference in outcomes between two groups over time. Researchers can use this method to determine whether a particular intervention has had an impact on the target population over time.
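
A minimal difference-in-differences sketch with statsmodels, using a hypothetical two-group, two-period dataset; the coefficient on the interaction term is the DiD estimate (the treated group's change minus the control group's change):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: outcomes for a treated and an untreated region,
# measured before (post=0) and after (post=1) the intervention.
df = pd.DataFrame({
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
    "outcome": [10.1, 9.8, 14.2, 13.9, 10.0, 10.3, 11.1, 10.8],
})

# 'treated * post' expands to both main effects plus their interaction.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # the difference-in-differences estimate
```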

Interrupted Time Series Analysis

This method is used to examine the impact of an intervention or treatment over time by comparing data collected before and after the intervention or treatment. This method can help researchers determine whether an intervention had a significant impact on the target population.
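
One common way to implement this is segmented regression. A minimal sketch with simulated monthly data (the series, intervention point, and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly outcome series: 12 months before and 12 months after
# an intervention that begins at month 12.
months = np.arange(24)
post = (months >= 12).astype(int)
rng = np.random.default_rng(7)
outcome = 50 + 0.5 * months + 8 * post + rng.normal(0, 1, 24)

df = pd.DataFrame({
    "t": months,
    "post": post,
    "t_since": np.clip(months - 12, 0, None),  # time since intervention
})
df["outcome"] = outcome

# Segmented regression: 'post' captures the immediate level change and
# 't_since' captures any change in trend after the intervention.
model = smf.ols("outcome ~ t + post + t_since", data=df).fit()
print(model.params[["post", "t_since"]])
```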

Regression Discontinuity Analysis

This method is used to compare the outcomes of participants who fall on either side of a predetermined cutoff point. This method can help researchers determine whether an intervention had a significant impact on the target population.
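
In its simplest form, this is a comparison of mean outcomes within a narrow band around the cutoff. A sketch with simulated data (the cutoff, bandwidth, and effect are invented for illustration; compare the regression-based version shown earlier):

```python
import numpy as np

rng = np.random.default_rng(5)
score = rng.uniform(0, 100, 1_000)  # hypothetical running variable
treated = score >= 60               # treatment given above the cutoff
outcome = 0.1 * score + 4.0 * treated + rng.normal(0, 2, 1_000)

# Naive RD estimate: compare mean outcomes just above vs. just below the
# cutoff, where participants should be similar apart from the treatment.
band = np.abs(score - 60) <= 5
effect = outcome[band & treated].mean() - outcome[band & ~treated].mean()
print(f"Estimated effect at the cutoff: {effect:.2f}")  # roughly 4
```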

Steps in Quasi-Experimental Design

Here are the general steps involved in conducting a quasi-experimental design:

  • Identify the research question: Determine the research question and the variables that will be investigated.
  • Choose the design: Choose the appropriate quasi-experimental design to address the research question. Examples include the pretest-posttest design, non-equivalent control group design, regression discontinuity design, and interrupted time series design.
  • Select the participants: Select the participants who will be included in the study. Participants should be selected based on specific criteria relevant to the research question.
  • Measure the variables: Measure the variables that are relevant to the research question. This may involve using surveys, questionnaires, tests, or other measures.
  • Implement the intervention or treatment: Deliver the intervention or treatment to the participants in the intervention group. This may involve training, education, counseling, or other interventions.
  • Collect data: Collect data on the dependent variable(s) before and after the intervention. Data collection may also include collecting data on other variables that may impact the dependent variable(s).
  • Analyze the data: Analyze the data collected to determine whether the intervention had a significant impact on the dependent variable(s).
  • Draw conclusions: Draw conclusions about the relationship between the independent and dependent variables. If the results suggest a causal relationship, then appropriate recommendations may be made based on the findings.

Quasi-Experimental Design Examples

Here are some examples of quasi-experimental designs in practice:

  • Evaluating the impact of a new teaching method: In this study, a group of students are taught using a new teaching method, while another group is taught using the traditional method. The test scores of both groups are compared before and after the intervention to determine whether the new teaching method had a significant impact on student performance.
  • Assessing the effectiveness of a public health campaign: In this study, a public health campaign is launched to promote healthy eating habits among a targeted population. The behavior of the population is compared before and after the campaign to determine whether the intervention had a significant impact on the target behavior.
  • Examining the impact of a new medication: In this study, a group of patients is given a new medication, while another group is given a placebo. The outcomes of both groups are compared to determine whether the new medication had a significant impact on the targeted health condition.
  • Evaluating the effectiveness of a job training program : In this study, a group of unemployed individuals is enrolled in a job training program, while another group is not enrolled in any program. The employment rates of both groups are compared before and after the intervention to determine whether the training program had a significant impact on the employment rates of the participants.
  • Assessing the impact of a new policy : In this study, a new policy is implemented in a particular area, while another area does not have the new policy. The outcomes of both areas are compared before and after the intervention to determine whether the new policy had a significant impact on the targeted behavior or outcome.

Applications of Quasi-Experimental Design

Here are some applications of quasi-experimental design:

  • Educational research: Quasi-experimental designs are used to evaluate the effectiveness of educational interventions, such as new teaching methods, technology-based learning, or educational policies.
  • Health research: Quasi-experimental designs are used to evaluate the effectiveness of health interventions, such as new medications, public health campaigns, or health policies.
  • Social science research: Quasi-experimental designs are used to investigate the impact of social interventions, such as job training programs, welfare policies, or criminal justice programs.
  • Business research: Quasi-experimental designs are used to evaluate the impact of business interventions, such as marketing campaigns, new products, or pricing strategies.
  • Environmental research: Quasi-experimental designs are used to evaluate the impact of environmental interventions, such as conservation programs, pollution control policies, or renewable energy initiatives.

When to use Quasi-Experimental Design

Here are some situations where quasi-experimental designs may be appropriate:

  • When the research question involves investigating the effectiveness of an intervention, policy, or program : In situations where it is not feasible or ethical to randomly assign participants to intervention and control groups, quasi-experimental designs can be used to evaluate the impact of the intervention on the targeted outcome.
  • When the sample size is small: In situations where the sample size is small, it may be difficult to randomly assign participants to intervention and control groups. Quasi-experimental designs can be used to investigate the impact of an intervention without requiring a large sample size.
  • When the research question involves investigating a naturally occurring event : In some situations, researchers may be interested in investigating the impact of a naturally occurring event, such as a natural disaster or a major policy change. Quasi-experimental designs can be used to evaluate the impact of the event on the targeted outcome.
  • When the research question involves investigating a long-term intervention: In situations where the intervention or program is long-term, it may be difficult to randomly assign participants to intervention and control groups for the entire duration of the intervention. Quasi-experimental designs can be used to evaluate the impact of the intervention over time.
  • When the research question involves investigating the impact of a variable that cannot be manipulated : In some situations, it may not be possible or ethical to manipulate a variable of interest. Quasi-experimental designs can be used to investigate the relationship between the variable and the targeted outcome.

Purpose of Quasi-Experimental Design

The purpose of quasi-experimental design is to investigate the causal relationship between two or more variables when it is not feasible or ethical to conduct a randomized controlled trial (RCT). Quasi-experimental designs attempt to emulate the randomized controlled trial by constructing comparison and intervention groups that are as similar as possible.

The key purpose of quasi-experimental design is to evaluate the impact of an intervention, policy, or program on a targeted outcome while controlling for potential confounding factors that may affect the outcome. Quasi-experimental designs aim to answer questions such as: Did the intervention cause the change in the outcome? Would the outcome have changed without the intervention? And was the intervention effective in achieving its intended goals?

Quasi-experimental designs are useful in situations where randomized controlled trials are not feasible or ethical. They provide researchers with an alternative method to evaluate the effectiveness of interventions, policies, and programs in real-life settings. Quasi-experimental designs can also help inform policy and practice by providing valuable insights into the causal relationships between variables.

Overall, the purpose of quasi-experimental design is to provide a rigorous method for evaluating the impact of interventions, policies, and programs while controlling for potential confounding factors that may affect the outcome.

Advantages of Quasi-Experimental Design

Quasi-experimental designs have several advantages over other research designs, such as:

  • Greater external validity : Quasi-experimental designs are more likely to have greater external validity than laboratory experiments because they are conducted in naturalistic settings. This means that the results are more likely to generalize to real-world situations.
  • Ethical considerations: Quasi-experimental designs often involve naturally occurring events, such as natural disasters or policy changes. This means that researchers do not need to manipulate variables, which can raise ethical concerns.
  • More practical: Quasi-experimental designs are often more practical than experimental designs because they are less expensive and easier to conduct. They can also be used to evaluate programs or policies that have already been implemented, which can save time and resources.
  • No random assignment: Quasi-experimental designs do not require random assignment, which can be difficult or impossible in some cases, such as when studying the effects of a natural disaster. This means that researchers can still make causal inferences, although they must use statistical techniques to control for potential confounding variables.
  • Greater generalizability : Quasi-experimental designs are often more generalizable than experimental designs because they include a wider range of participants and conditions. This can make the results more applicable to different populations and settings.

Limitations of Quasi-Experimental Design

There are several limitations associated with quasi-experimental designs, which include:

  • Lack of Randomization: Quasi-experimental designs do not involve randomization of participants into groups, which means that the groups being studied may differ in important ways that could affect the outcome of the study. This can lead to problems with internal validity and limit the ability to make causal inferences.
  • Selection Bias: Quasi-experimental designs may suffer from selection bias because participants are not randomly assigned to groups. Participants may self-select into groups or be assigned based on pre-existing characteristics, which may introduce bias into the study.
  • History and Maturation: Quasi-experimental designs are susceptible to history and maturation effects, where the passage of time or other events may influence the outcome of the study.
  • Lack of Control: Quasi-experimental designs may lack control over extraneous variables that could influence the outcome of the study. This can limit the ability to draw causal inferences from the study.
  • Limited Generalizability: Quasi-experimental designs may have limited generalizability because the results may only apply to the specific population and context being studied.

7.3 Quasi-Experimental Research

Learning Objectives

  • Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
  • Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one.

The prefix quasi means “resembling.” Thus quasi-experimental research is research that resembles experimental research but is not true experimental research. Although the independent variable is manipulated, participants are not randomly assigned to conditions or orders of conditions (Cook & Campbell, 1979). Because the independent variable is manipulated before the dependent variable is measured, quasi-experimental research eliminates the directionality problem. But because participants are not randomly assigned—making it likely that there are other differences between conditions—quasi-experimental research does not eliminate the problem of confounding variables. In terms of internal validity, therefore, quasi-experiments are generally somewhere between correlational studies and true experiments.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. There are many different kinds of quasi-experiments, but we will discuss just a few of the most common ones here.

Nonequivalent Groups Design

Recall that when participants in a between-subjects experiment are randomly assigned to conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are likely to be dissimilar in some ways. For this reason, researchers consider them to be nonequivalent. A nonequivalent groups design , then, is a between-subjects design in which participants have not been randomly assigned to conditions.

Imagine, for example, a researcher who wants to evaluate a new method of teaching fractions to third graders. One way would be to conduct a study with a treatment group consisting of one class of third-grade students and a control group consisting of another class of third-grade students. This would be a nonequivalent groups design because the students are not randomly assigned to classes by the researcher, which means there could be important differences between them. For example, the parents of higher achieving or more motivated students might have been more likely to request that their children be assigned to Ms. Williams’s class. Or the principal might have assigned the “troublemakers” to Mr. Jones’s class because he is a stronger disciplinarian. Of course, the teachers’ styles, and even the classroom environments, might be very different and might cause different levels of achievement or motivation among the students. If at the end of the study there was a difference in the two classes’ knowledge of fractions, it might have been caused by the difference between the teaching methods—but it might have been caused by any of these confounding variables.

Of course, researchers using a nonequivalent groups design can take steps to ensure that their groups are as similar as possible. In the present example, the researcher could try to select two classes at the same school, where the students in the two classes have similar scores on a standardized math test and the teachers are the same sex, are close in age, and have similar teaching styles. Taking such steps would increase the internal validity of the study because it would eliminate some of the most important confounding variables. But without true random assignment of the students to conditions, there remains the possibility of other important confounding variables that the researcher was not able to control.

Pretest-Posttest Design

In a pretest-posttest design , the dependent variable is measured once before the treatment is implemented and once after it is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an antidrug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the antidrug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.

If the average posttest score is better than the average pretest score, then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores are better. One category of alternative explanations goes under the name of history . Other things might have happened between the pretest and the posttest. Perhaps an antidrug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it. Another category of alternative explanations goes under the name of maturation . Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a yearlong program, participants might become less impulsive or better reasoners and this might be responsible for the change.

Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean . This refers to the statistical fact that an individual who scores extremely on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially low on a test of fractions are given a special training program and then retested. Regression to the mean all but guarantees that their scores will be higher even if the training program has no effect. A closely related concept—and an extremely important one in psychological research—is spontaneous remission . This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001). Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
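
Regression to the mean is easy to demonstrate by simulation. The sketch below (with invented numbers) models each test score as a stable level plus luck, selects the lowest scorers on the first test, and shows their average rising on a retest with no intervention at all:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
true_skill = rng.normal(100, 10, n)        # each person's stable level
test1 = true_skill + rng.normal(0, 10, n)  # observed score = skill + luck
test2 = true_skill + rng.normal(0, 10, n)  # fresh luck on the retest

# Select only the lowest scorers on the first test (as in the fractions
# example) and look at their average change with no treatment at all.
lowest = test1 < np.percentile(test1, 10)
print(test1[lowest].mean())  # very low first-test average
print(test2[lowest].mean())  # noticeably higher on retest, by chance alone
```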

Does Psychotherapy Work?

Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952). But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:

http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm

Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980). They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.

[Figure: portrait of Hans Eysenck. In a classic 1952 article, researcher Hans Eysenck pointed out the shortcomings of the simple pretest-posttest design for evaluating the effectiveness of psychotherapy. Image: Wikimedia Commons, CC BY-SA 3.0.]

Interrupted Time Series Design

A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979). Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.

Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 7.5 “A Hypothetical Interrupted Time-Series Design” shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.

Figure 7.5 A Hypothetical Interrupted Time-Series Design. The top panel shows data that suggest that the treatment caused a reduction in absences; the bottom panel shows data that suggest that it did not.

Combination Designs

A type of quasi-experimental design that is generally better than either the nonequivalent groups design or the pretest-posttest design is one that combines elements of both. There is a treatment group that is given a pretest, receives a treatment, and then is given a posttest. But at the same time there is a control group that is given a pretest, does not receive the treatment, and then is given a posttest. The question, then, is not simply whether participants who receive the treatment improve but whether they improve more than participants who do not receive the treatment.

Imagine, for example, that students in one school are given a pretest on their attitudes toward drugs, then are exposed to an antidrug program, and finally are given a posttest. Students in a similar school are given the pretest, not exposed to an antidrug program, and finally are given a posttest. Again, if students in the treatment condition become more negative toward drugs, this could be an effect of the treatment, but it could also be a matter of history or maturation. If it really is an effect of the treatment, then students in the treatment condition should become more negative than students in the control condition. But if it is a matter of history (e.g., news of a celebrity drug overdose) or maturation (e.g., improved reasoning), then students in the two conditions would be likely to show similar amounts of change. This type of design does not completely eliminate the possibility of confounding variables, however. Something could occur at one of the schools but not the other (e.g., a student drug overdose), so students at the first school would be affected by it while students at the other school would not.

Finally, if participants in this kind of design are randomly assigned to conditions, it becomes a true experiment rather than a quasi-experiment. In fact, it is the kind of experiment that Eysenck called for—and that has now been conducted many times—to demonstrate the effectiveness of psychotherapy.

Key Takeaways

  • Quasi-experimental research involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions. Among the important types are nonequivalent groups designs, pretest-posttest, and interrupted time-series designs.
  • Quasi-experimental research eliminates the directionality problem because it involves the manipulation of the independent variable. It does not eliminate the problem of confounding variables, however, because it does not involve random assignment to conditions. For these reasons, quasi-experimental research is generally higher in internal validity than correlational studies but lower than true experiments.
  • Practice: Imagine that two college professors decide to test the effect of giving daily quizzes on student performance in a statistics course. They decide that Professor A will give quizzes but Professor B will not. They will then compare the performance of students in their two sections on a common final exam. List five other variables that might differ between the two sections that could affect the results.

Discussion: Imagine that a group of obese children is recruited for a study in which their weight is measured, then they participate for 3 months in a program that encourages them to be more active, and finally their weight is measured again. Explain how each of the following might affect the results:

  • regression to the mean
  • spontaneous remission

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin.

Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16 , 319–324.

Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.

Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Quasi-Experimental Design

Quasi-experimental design is a unique research methodology because it is characterized partly by what it lacks. For example, Abraham & MacDonald (2011) state:

" Quasi-experimental research is similar to experimental research in that there is manipulation of an independent variable. It differs from experimental research because either there is no control group, no random selection, no random assignment, and/or no active manipulation. "

This type of research is often performed in cases where a control group cannot be created or random selection cannot be performed. This is often the case in certain medical and psychological studies. 

For more information on quasi-experimental design, review the resources below: 

Where to Start

Below are listed a few tools and online guides that can help you start your Quasi-experimental research. These include free online resources and resources available only through ISU Library.

  • Quasi-Experimental Research Designs by Bruce A. Thyer This pocket guide describes the logic, design, and conduct of the range of quasi-experimental designs, encompassing pre-experiments, quasi-experiments making use of a control or comparison group, and time-series designs. An introductory chapter describes the valuable role these types of studies have played in social work, from the 1930s to the present. Subsequent chapters delve into each design type's major features, the kinds of questions it is capable of answering, and its strengths and limitations.
  • Experimental and Quasi-Experimental Designs for Research by Donald T. Campbell; Julian C. Stanley. Call Number: Q175 C152e. Written in 1967 but still heavily used today, this book examines research designs for experimental and quasi-experimental research, with examples and judgments about each design's validity.

Online Resources

  • Quasi-Experimental Design From the Web Center for Social Research Methods, this is a very good overview of quasi-experimental design.
  • Experimental and Quasi-Experimental Research From Colorado State University.
  • Quasi-experimental design--Wikipedia, the free encyclopedia Wikipedia can be a useful place to start your research- check the citations at the bottom of the article for more information.

Quasi-experimental Research: What It Is, Types & Examples

Much like an actual experiment, quasi-experimental research tries to demonstrate a cause-and-effect link between a dependent and an independent variable. A quasi-experiment, on the other hand, does not depend on random assignment, unlike an actual experiment. The subjects are sorted into groups based on non-random variables.

What is Quasi-Experimental Research?

“Resemblance” is the definition of “quasi.” In quasi-experimental research, individuals are not randomly allocated to conditions or orders of conditions, even though the independent variable is manipulated. As a result, quasi-experimental research is research that appears to be experimental but is not.

The directionality problem is avoided in quasi-experimental research since the independent variable is manipulated before the dependent variable is measured. However, because individuals are not assigned to conditions at random, there are likely to be additional disparities across conditions in quasi-experimental research.

As a result, in terms of internal validity, quasi-experiments fall somewhere between correlational research and actual experiments.

The key component of a true experiment is randomly allocated groups. This means that each person has an equivalent chance of being assigned to the experimental group or the control group, depending on whether they are manipulated or not.

Simply put, a quasi-experiment is not a real experiment. A quasi-experiment does not feature randomly allocated groups since the main component of a real experiment is randomly assigned groups. Why is it so crucial to have randomly allocated groups, given that they constitute the only distinction between quasi-experimental and actual  experimental research ?

Let’s use an example to illustrate our point. Assume we want to discover how a new psychological therapy affects depressed patients. In a genuine trial, you’d split the psych ward into two groups, with half getting the new psychotherapy and the other half receiving standard depression treatment.

The physicians then compare the outcomes of this treatment to the results of standard treatments to see if it is more effective. Doctors, on the other hand, are unlikely to agree to this genuine experiment, since they believe it is unethical to treat one group while leaving another untreated.

A quasi-experimental study is useful in this case. Instead of allocating patients at random, you identify pre-existing psychotherapist groups in the hospital: some counselors will be eager to try the new therapy, while others will prefer to stick to the standard approach.

Even though the groups were not chosen at random, these pre-existing groups can be used to compare the symptom development of individuals who received the novel therapy with that of individuals who received the standard course of treatment.

If the remaining differences between the groups can be accounted for, you can be reasonably confident that any difference in outcomes is attributable to the treatment rather than to other extraneous variables.

As mentioned above, quasi-experimental research entails manipulating an independent variable without randomly assigning people to conditions or orders of conditions. Nonequivalent groups designs, pretest-posttest designs, and regression discontinuity designs are a few of the essential types.

What are quasi-experimental research designs?

Quasi-experimental research designs are similar to true experimental designs, but they do not give the researcher full control over the independent variable(s) in the way true experimental designs do.

In a quasi-experimental design, the researcher manipulates or observes an independent variable, but participants are not assigned to groups at random. Instead, people are grouped according to characteristics they already have, such as their age, their gender, or how many times they have seen a certain stimulus.

Because assignment is not random, it is harder to draw conclusions about cause and effect than in a true experiment. However, quasi-experimental designs are still useful when randomization is not possible or ethical.

The true experimental design may be impossible to accomplish or just too expensive, especially for researchers with few resources. Quasi-experimental designs enable you to investigate an issue by utilizing data that has already been paid for or gathered by others (often the government). 

Quasi-experiments often have higher external validity than most true experiments, because they frequently use real-world interventions and populations, while their ability to control for confounding variables gives them higher internal validity than other non-experimental research (though lower than true experiments).

Is quasi-experimental research quantitative or qualitative?

Quasi-experimental research is a quantitative research method. It involves numerical data collection and statistical analysis. Quasi-experimental research compares groups with different circumstances or treatments to find cause-and-effect links. 

It draws statistical conclusions from quantitative data. Qualitative data can enhance quasi-experimental research by revealing participants’ experiences and opinions, but quantitative data is the method’s foundation.
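As a minimal illustration of this quantitative core, the sketch below compares outcome scores between two pre-existing (non-randomized) groups with an independent-samples t-test. The data and variable names are synthetic and purely illustrative; this is not a feature of any particular survey platform.

```python
# Illustrative sketch: comparing post-treatment symptom scores between two
# pre-existing (non-randomized) groups. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
new_therapy = rng.normal(loc=9.0, scale=3.0, size=60)    # hypothetical scores
standard_care = rng.normal(loc=11.0, scale=3.0, size=60)

t_stat, p_value = stats.ttest_ind(new_therapy, standard_care)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Caution: without randomization, a significant difference may still reflect
# pre-existing group differences (confounding), not the treatment itself.
```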

Quasi-experimental research types

There are many different types of quasi-experimental designs. Among the most common are nonequivalent groups designs, regression discontinuity designs, and natural experiments.

Natural Experiments

In one such study, a program did not have the funds to cover everyone who qualified, so it used a random lottery to distribute slots.

Researchers were then able to investigate the program’s impact by using the enrolled people as a treatment group and those who were qualified but were not selected in the lottery as a control group.

How does QuestionPro help in quasi-experimental research?

QuestionPro can be a useful tool in quasi-experimental research because it includes features that can assist you in designing and analyzing your research study. Here are some ways in which QuestionPro can help in quasi-experimental research:

  • Design surveys
  • Randomize participants
  • Collect data over time
  • Analyze data
  • Collaborate with your team

With QuestionPro, you have access to a mature market research platform that helps you collect and analyze the insights that matter most. InsightsHub, the unified hub for data management, lets you organize, explore, search, and discover your research data in one organized repository.

Optimize your quasi-experimental research with QuestionPro. Get started now!


Experimental and Quasi-Experimental Designs in Implementation Research

Christopher J. Miller

a VA Boston Healthcare System, Center for Healthcare Organization and Implementation Research (CHOIR), United States Department of Veterans Affairs, Boston, MA, USA

b Department of Psychiatry, Harvard Medical School, Boston, MA, USA

Shawna N. Smith

c Department of Psychiatry, University of Michigan Medical School, Ann Arbor, MI, USA

d Survey Research Center, Institute for Social Research, University of Michigan, Ann Arbor, MI, USA

Marianne Pugatch

Implementation science is focused on maximizing the adoption, appropriate use, and sustainability of effective clinical practices in real world clinical settings. Many implementation science questions can be feasibly answered by fully experimental designs, typically in the form of randomized controlled trials (RCTs). Implementation-focused RCTs, however, usually differ from traditional efficacy- or effectiveness-oriented RCTs on key parameters. Other implementation science questions are more suited to quasi-experimental designs, which are intended to estimate the effect of an intervention in the absence of randomization. These designs include pre-post designs with a non-equivalent control group, interrupted time series (ITS), and stepped wedges, the last of which require all participants to receive the intervention, but in a staggered fashion. In this article we review the use of experimental designs in implementation science, including recent methodological advances for implementation studies. We also review the use of quasi-experimental designs in implementation science, and discuss the strengths and weaknesses of these approaches. This article is therefore meant to be a practical guide for researchers who are interested in selecting the most appropriate study design to answer relevant implementation science questions, and thereby increase the rate at which effective clinical practices are adopted, spread, and sustained.

1. Background

The first documented clinical trial was conducted in 1747 by James Lind, a Royal Navy physician, who tested the hypothesis that citrus fruit could cure scurvy. Since then, based on foundational work by Fisher and others (1935), the randomized controlled trial (RCT) has emerged as the gold standard for testing the efficacy of a treatment versus a control condition for individual patients. Randomization of patients is seen as crucial to reducing the impact of measured or unmeasured confounding variables, in turn allowing researchers to draw conclusions regarding causality in clinical trials.

As described elsewhere in this special issue, implementation science is ultimately focused on maximizing the adoption, appropriate use, and sustainability of effective clinical practices in real world clinical settings. As such, some implementation science questions may be addressed by experimental designs. For our purposes here, we use the term “experimental” to refer to designs that feature two essential ingredients: first, manipulation of an independent variable; and second, random assignment of subjects. This corresponds to the definition of randomized experiments originally championed by Fisher (1925). From this perspective, experimental designs usually take the form of RCTs—but implementation-oriented RCTs typically differ in important ways from traditional efficacy- or effectiveness-oriented RCTs. Other implementation science questions require different methodologies entirely: specifically, several forms of quasi-experimental designs may be used for implementation research in situations where an RCT would be inappropriate. These designs are intended to estimate the effect of an intervention despite a lack of randomization. Quasi-experimental designs include pre-post designs with a nonequivalent control group, interrupted time series (ITS), and stepped wedge designs. Stepped wedges are studies in which all participants receive the intervention, but in a staggered fashion. It is important to note that quasi-experimental designs are not unique to implementation science. As we will discuss below, however, each of them has strengths that make them particularly useful in certain implementation science contexts.

Our goal for this manuscript is two-fold. First, we will summarize the use of experimental designs in implementation science. This will include discussion of ways that implementation-focused RCTs may differ from efficacy- or effectiveness-oriented RCTs. Second, we will summarize the use of quasi-experimental designs in implementation research. This will include discussion of the strengths and weaknesses of these types of approaches in answering implementation research questions. For both experimental and quasi-experimental designs, we will discuss a recent implementation study as an illustrative example of one approach.

2. Experimental Designs in Implementation Science

RCTs in implementation science share the same basic structure as efficacy- or effectiveness-oriented RCTs, but typically feature important distinctions. In this section we will start by reviewing key factors that separate implementation RCTs from more traditional efficacy- or effectiveness-oriented RCTs. We will then discuss optimization trials, which are a type of experimental design that is especially useful for certain implementation science questions. We will then briefly turn our attention to single subject experimental designs (SSEDs) and on-off-on (ABA) designs.

The first common difference that sets apart implementation RCTs from more traditional clinical trials is the primary research question they aim to address. For most implementation trials, the primary research question is not the extent to which a particular treatment or evidence-based practice is more effective than a comparison condition, but instead the extent to which a given implementation strategy is more effective than a comparison condition. For more detail on this pivotal issue, see Drs. Bauer and Kirchner in this special issue.

Second, as a corollary of this point, implementation RCTs typically feature different outcome measures than efficacy or effectiveness RCTs, with an emphasis on the extent to which a health intervention was successfully implemented rather than an evaluation of the health effects of that intervention ( Proctor et al., 2011 ). For example, typical implementation outcomes might include the number of patients who receive the intervention, or the number of providers who administer the intervention as intended. A variety of evaluation-oriented implementation frameworks may guide the choices of such measures (e.g. RE-AIM; Gaglio et al., 2013 ; Glasgow et al., 1999 ). Hybrid implementation-effectiveness studies attend to both effectiveness and implementation outcomes ( Curran et al., 2012 ); these designs are also covered in more detail elsewhere in this issue (Landes, this issue).
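To make these outcome measures concrete, the following hypothetical sketch computes per-site reach and fidelity from a patient-level table. The column names and data are invented for illustration; they are not a standard from the implementation outcomes literature.

```python
# Hypothetical sketch: simple implementation outcomes (reach, fidelity) per
# site, computed from a patient-level table. Columns are assumptions.
import pandas as pd

records = pd.DataFrame({
    "site":         ["A", "A", "A", "B", "B", "B"],
    "received":     [True, True, False, True, False, False],  # got intervention
    "per_protocol": [True, False, False, True, False, False], # as intended
})

outcomes = records.groupby("site").agg(
    n_patients=("received", "size"),
    reach=("received", "mean"),          # share of patients receiving it
    fidelity=("per_protocol", "mean"),   # share receiving it as intended
)
print(outcomes)
```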

Third, given their focus, implementation RCTs are frequently cluster-randomized (i.e. with sites or clinics as the unit of randomization, and patients nested within those sites or clinics). For example, consider a hypothetical RCT that aims to evaluate the implementation of a training program for cognitive behavioral therapy (CBT) in community clinics. Randomizing at the patient level for such a trial would be inappropriate due to the risk of contamination, as providers trained in CBT might reasonably be expected to incorporate CBT principles into their treatment even with patients assigned to the control condition. Randomizing at the provider level would also risk contamination, as providers trained in CBT might discuss this treatment approach with their colleagues. Thus, many implementation trials are cluster randomized at the site or clinic level. While such clustering minimizes the risk of contamination, it can unfortunately create commensurate problems with confounding, especially for trials with very few sites to randomize. Stratification may be used to at least partially address confounding issues in cluster-randomized and more traditional trials alike, by ensuring that intervention and control groups are broadly similar on certain key variables. Furthermore, such allocation schemes typically require analytic models that account for this clustering and the resulting correlations among error structures (e.g., generalized estimating equations [GEE] or mixed-effects models; Schildcrout et al., 2018).
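As a sketch of one such analysis, the code below fits a random-intercept mixed-effects model (one of the options named above) to synthetic site-clustered data. All numbers are simulated; this is an illustration of the modeling idea, not any specific trial's analysis plan.

```python
# Sketch: accounting for site-level clustering with a mixed-effects model.
# Synthetic data: sites randomized to treatment, patients nested in sites.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, patients_per_site = 20, 30
site = np.repeat(np.arange(n_sites), patients_per_site)
treated = np.repeat(rng.integers(0, 2, n_sites), patients_per_site)
site_effect = np.repeat(rng.normal(0, 1.0, n_sites), patients_per_site)
outcome = 0.5 * treated + site_effect + rng.normal(0, 2.0, len(site))

df = pd.DataFrame({"site": site, "treated": treated, "outcome": outcome})

# The random intercept for site captures within-site correlation.
model = smf.mixedlm("outcome ~ treated", df, groups=df["site"]).fit()
print(model.summary())
```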

2.1. Optimization trials

Key research questions in implementation science often involve determining which implementation strategies to provide, to whom, and when, to achieve optimal implementation success. As such, trials designed to evaluate comparative effectiveness, or to optimize provision of different types or intensities of implementation strategies, may be more appealing than traditional effectiveness trials. The methods described in this section are not unique to implementation science, but their application in the context of implementation trials may be particularly useful for informing implementation strategies.

While two-arm RCTs can be used to evaluate comparative effectiveness, trials focused on optimizing implementation support may use alternative experimental designs ( Collins et al., 2005 ; Collins et al., 2007 ). For example, in certain clinical contexts, multi-component “bundles” of implementation strategies may be warranted (e.g. a bundle consisting of clinician training, technical assistance, and audit/feedback to encourage clinicians to use a new evidence-based practice). In these situations, implementation researchers might consider using factorial or fractional-factorial designs. In the context of implementation science, these designs randomize participants (e.g. sites or providers) to different combinations of implementation strategies, and can be used to evaluate the effectiveness of each strategy individually to inform an optimal combination (e.g. Coulton et al., 2009 ; Pellegrini et al., 2014 ; Wyrick, et al., 2014 ). Such designs can be particularly useful in informing multi-component implementation strategies that are not redundant or overly burdensome ( Collins et al., 2014a ; Collins et al., 2009 ; Collins et al., 2007 ).
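As a toy illustration of the combinatorics involved, the following sketch enumerates the eight cells of a 2×2×2 factorial design over a hypothetical three-strategy bundle (echoing the training, technical assistance, and audit/feedback example above) and assigns sites to cells. The site names and cell counts are arbitrary.

```python
# Toy sketch: enumerating a 2^3 factorial design over three implementation
# strategies and assigning sites to the resulting cells. Names are illustrative.
import itertools
import random

strategies = ["training", "technical_assistance", "audit_feedback"]
conditions = list(itertools.product([0, 1], repeat=len(strategies)))  # 8 cells

sites = [f"site_{i:02d}" for i in range(16)]
random.seed(1)
random.shuffle(sites)

# Two sites per cell in this toy example (16 sites / 8 cells).
assignment = {site: dict(zip(strategies, cell))
              for site, cell in zip(sites, conditions * 2)}
for site, cell in sorted(assignment.items()):
    print(site, cell)
```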

Researchers interested in optimizing sequences of implementation strategies that adapt to ongoing needs over time may be interested in a variant of factorial designs known as the sequential, multiple-assignment randomized trial (SMART; Almirall et al., 2012 ; Collins et al., 2014b ; Kilbourne et al., 2014b ; Lei et al., 2012 ; Nahum-Shani et al., 2012 ; NeCamp et al., 2017 ). SMARTs are multistage randomized trials in which some or all participants are randomized more than once, often based on ongoing information (e.g., treatment response). In implementation research, SMARTs can inform optimal sequences of implementation strategies to maximize downstream clinical outcomes. Thus, such designs are well-suited to answering questions about what implementation strategies should be used, in what order, to achieve the best outcomes in a given context.

One example of an implementation SMART is the Adaptive Implementation of Effective Program Trial (ADEPT; Kilbourne et al., 2014a ). ADEPT was a clustered SMART ( NeCamp et al., 2017 ) designed to inform an adaptive sequence of implementation strategies for implementing an evidence-based collaborative chronic care model, Life Goals ( Kilbourne et al., 2014c ; Kilbourne et al., 2012a ), into community-based practices. Life Goals, the clinical intervention being implemented, has proven effective at improving physical and mental health outcomes for patients with unipolar and bipolar depression by encouraging providers to instruct patients in self-management, and improving clinical information systems and care management across physical and mental health providers ( Bauer et al., 2006 ; Kilbourne et al., 2012a ; Kilbourne et al., 2008 ; Simon et al., 2006 ). However, in spite of its established clinical effectiveness, community-based clinics experienced a number of barriers in trying to implement the Life Goals model, and there were questions about how best to efficiently and effectively augment implementation strategies for clinics that struggled with implementation.

The ADEPT study was thus designed to determine the best sequence of implementation strategies to offer sites interested in implementing Life Goals. The ADEPT study involved use of three different implementation strategies. First, all sites received implementation support based on Replicating Effective Programs (REP), which offered an implementation manual, brief training, and low-level technical support (Kilbourne et al., 2007; Kilbourne et al., 2012b; Neumann and Sogolow, 2000). REP implementation support had been previously found to be low-cost and readily scalable, but also insufficient for uptake for many community-based settings (Kilbourne et al., 2015). For sites that failed to implement Life Goals under REP, two additional implementation strategies were considered as augmentations to REP: External Facilitation (EF; Kilbourne et al., 2014b; Stetler et al., 2006), consisting of phone-based mentoring in strategic skills from a study team member; and Internal Facilitation (IF; Kirchner et al., 2014), which supported protected time for a site employee to address barriers to program adoption.

The ADEPT study was designed to evaluate the best way to augment support for these sites that were not able to implement Life Goals under REP, specifically querying whether it was better to augment REP with EF only or the more intensive EF/IF, and whether augmentations should be provided all at once, or staged. Intervention assignments are mapped in Figure 1. Seventy-nine community-based clinics across Michigan and Colorado were provided with initial implementation support under REP. After six months, implementation of the clinical intervention, Life Goals, was evaluated at all sites. Sites that had failed to reach an adequate level of delivery (defined as those sites enrolling fewer than ten patients in Life Goals, or those at which fewer than 50% of enrolled patients had received at least three Life Goals sessions) were considered non-responsive to REP and randomized to receive additional support through either EF or combined EF/IF. After six further months, Life Goals implementation at these sites was again evaluated. Sites surpassing the implementation response benchmark had their EF or EF/IF support discontinued. EF/IF sites that remained non-responsive continued to receive EF/IF for an additional six months. EF sites that remained non-responsive were randomized a second time to either continue with EF or further augment with IF. This design thus allowed for comparison of three different adaptive sequences of implementation support for sites that were initially non-responsive under REP (a toy sketch of this assignment flow appears after the list below):

[Figure 1. SMART design from the ADEPT trial.]

  • Provide EF for 6 months; continue EF for a further six months for sites that remain nonresponsive; discontinue EF for sites that are responsive;
  • Provide EF/IF for 6 months; continue EF/IF for a further six months for sites that remain non-responsive; discontinue EF/IF for sites that are responsive; and
  • Provide EF for 6 months; step up to EF/IF for a further six months for sites that remain non-responsive; discontinue EF for sites that are responsive.
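The following toy simulation sketches this assignment flow. The response benchmark mirrors the one quoted above, but all enrollment numbers are synthetic, and the code is a rough illustration rather than the ADEPT protocol itself.

```python
# Toy simulation of an ADEPT-style SMART: sites start under REP; non-responders
# are randomized to EF or EF/IF; EF sites still non-responsive are re-randomized.
import random

random.seed(7)

def responsive(enrolled, with_3_sessions):
    """Benchmark from the text: >=10 enrolled and >=50% with >=3 sessions."""
    return enrolled >= 10 and with_3_sessions / max(enrolled, 1) >= 0.5

sites = {f"site_{i}": "REP" for i in range(10)}
for site in list(sites):
    # Stage 1: evaluate after 6 months of REP (synthetic numbers).
    enrolled = random.randint(0, 20)
    sessions = random.randint(0, enrolled)
    if responsive(enrolled, sessions):
        continue                                            # stay on REP alone
    sites[site] = random.choice(["REP+EF", "REP+EF/IF"])    # first randomization

    # Stage 2: after 6 more months, EF non-responders are re-randomized.
    if sites[site] == "REP+EF":
        enrolled2 = random.randint(0, 20)
        sessions2 = random.randint(0, enrolled2)
        if not responsive(enrolled2, sessions2):
            sites[site] = random.choice(
                ["REP+EF (continued)", "REP+EF/IF (stepped up)"])

print(sites)
```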

While analyses of this study are still ongoing, including the comparison of these three adaptive sequences of implementation strategies, results have shown that patients at sites that were randomized to receive EF as the initial augmentation to REP saw more improvement in clinical outcomes (SF-12 mental health quality of life and PHQ-9 depression scores) after 12 months than patients at sites that were randomized to receive the more intensive EF/IF augmentation.

2.2. Single Subject Experimental Designs and On-Off-On (ABA) Designs

We also note that there are a variety of Single Subject Experimental Designs (SSEDs; Byiers et al., 2012 ), including withdrawal designs and alternating treatment designs, that can be used in testing evidence-based practices. Similarly, an implementation strategy may be used to encourage the use of a specific treatment at a particular site, followed by that strategy’s withdrawal and subsequent reinstatement, with data collection throughout the process (on-off-on or ABA design). A weakness of these approaches in the context of implementation science, however, is that they usually require reversibility of the intervention (i.e. that the withdrawal of implementation support truly allows the healthcare system to revert to its pre-implementation state). When this is not the case—for example, if a hypothetical study is focused on training to encourage use of an evidence-based psychotherapy—then these designs may be less useful.

3. Quasi-Experimental Designs in Implementation Science

In some implementation science contexts, policy-makers or administrators may not be willing to have a subset of participating patients or sites randomized to a control condition, especially for high-profile or high-urgency clinical issues. Quasi-experimental designs allow implementation scientists to conduct rigorous studies in these contexts, albeit with certain limitations. We briefly review the characteristics of these designs here; other recent review articles are available for the interested reader (e.g. Handley et al., 2018 ).

3.1. Pre-Post with Non-Equivalent Control Group

The pre-post with non-equivalent control group uses a control group in the absence of randomization. Ideally, the control group is chosen to be as similar to the intervention group as possible (e.g. by matching on factors such as clinic type, patient population, geographic region, etc.). Theoretically, both groups are exposed to the same trends in the environment, making it possible to determine whether the intervention had an effect. Measurement of both treatment and control conditions classically occurs pre- and post-intervention, with differential improvement between the groups attributed to the intervention. This design is popular due to its practicality, especially if data collection points can be kept to a minimum. It may be especially useful for capitalizing on naturally occurring experiments such as may occur in the context of certain policy initiatives or rollouts—specifically, rollouts in which it is plausible that a control group can be identified. For example, Kirchner and colleagues (2014) used this type of design to evaluate the integration of mental health services into primary care clinics at seven US Department of Veterans Affairs (VA) medical centers and seven matched controls.

One overarching drawback of this design is that it is especially vulnerable to threats to internal validity ( Shadish, 2002 ), because pre-existing differences between the treatment and control group could erroneously be attributed to the intervention. While unmeasured differences between treatment and control groups are always a possibility in healthcare research, such differences are especially likely to occur in the context of these designs due to the lack of randomization. Similarly, this design is particularly sensitive to secular trends that may differentially affect the treatment and control groups ( Cousins et al., 2014 ; Pape et al., 2013 ), as well as regression to the mean confounding study results ( Morton and Torgerson, 2003 ). For example, if a study site is selected for the experimental condition precisely because it is underperforming in some way, then regression to the mean would suggest that the site will show improvement regardless of any intervention; in the context of a pre-post with non-equivalent control group study, however, this improvement would erroneously be attributed to the intervention itself (Type I error).

There are, however, various ways that implementation scientists can mitigate these weaknesses. First, as mentioned briefly above, it is important to select a control group that is as similar as possible to the intervention site(s), which can include matching at both the health care network and clinic level (e.g. Kirchner et al., 2014 ). Second, propensity score weighting (e.g. Morgan, 2018 ) can statistically mitigate internal validity concerns, although this approach may be of limited utility when comparing secular trends between different study cohorts ( Dimick and Ryan, 2014 ). More broadly, qualitative methods (e.g. periodic interviews with staff at intervention and control sites) can help uncover key contextual factors that may be affecting study results above and beyond the intervention itself.
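To illustrate the propensity-score idea, here is a minimal inverse-probability-of-treatment weighting sketch on synthetic data with a single hypothetical confounder (clinic_size). Weighting is one common variant of the propensity-score approach; this is not the method of any particular study cited here.

```python
# Sketch: inverse-probability-of-treatment weighting (IPTW) to mitigate
# observed confounding in a non-randomized comparison. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
clinic_size = rng.normal(0, 1, n)                       # observed confounder
treated = (rng.random(n) < 1 / (1 + np.exp(-clinic_size))).astype(int)
outcome = 1.0 * treated + 0.8 * clinic_size + rng.normal(0, 1, n)
df = pd.DataFrame({"treated": treated, "clinic_size": clinic_size,
                   "outcome": outcome})

# 1) Model treatment assignment from observed covariates.
ps = smf.logit("treated ~ clinic_size", df).fit(disp=0).predict(df)

# 2) Weight each unit by the inverse probability of its observed assignment.
df["w"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted outcome model; the 'treated' coefficient approximates the effect.
print(smf.wls("outcome ~ treated", df, weights=df["w"]).fit().params)
```

Note the usual caveat: weighting can only adjust for confounders that were actually measured.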

3.2. Interrupted Time Series

Interrupted time series (ITS; Shadish, 2002 ; Taljaard et al., 2014 ; Wagner et al., 2002 ) designs represent one of the most robust categories of quasi-experimental designs. Rather than relying on a non-equivalent control group, ITS designs rely on repeated data collections from intervention sites to determine whether a particular intervention is associated with improvement on a given metric relative to the pre-intervention secular trend. They are particularly useful in cases where a comparable control group cannot be identified—for example, following widespread implementation of policy mandates, quality improvement initiatives, or dissemination campaigns ( Eccles et al., 2003 ). In ITS designs, data are collected at multiple time points both before and after an intervention (e.g., policy change, implementation effort), and analyses explore whether the intervention was associated with the outcome beyond any pre-existing secular trend. More formally, ITS evaluations focus on identifying whether there is discontinuity in the trend (change in slope or level) after the intervention relative to before the intervention, using segmented regression to model pre- and post-intervention trends ( Gebski et al., 2012 ; Penfold and Zhang, 2013 ; Taljaard et al., 2014 ; Wagner et al., 2002 ). A number of recent implementation studies have used ITS designs, including an evaluation of implementation of a comprehensive smoke-free policy in a large UK mental health organization to reduce physical assaults ( Robson et al., 2017 ); the impact of a national policy limiting alcohol availability on suicide mortality in Slovenia ( Pridemore and Snowden, 2009 ); and the effect of delivery of a tailored intervention for primary care providers to increase psychological referrals for women with mild to moderate postnatal depression ( Hanbury et al., 2013 ).
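A minimal segmented-regression sketch on synthetic monthly data follows. The level-change (post) and slope-change (time_since) terms follow the general approach described above, though real ITS analyses typically also address autocorrelation and seasonality.

```python
# Sketch: segmented regression for an interrupted time series, with terms for
# the pre-existing trend, a level change, and a slope change. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
months = np.arange(24)                      # 12 pre- and 12 post-intervention
post = (months >= 12).astype(int)
time_since = np.where(post == 1, months - 12, 0)

# True process: mild secular trend, then a level drop and slope change.
y = 50 + 0.5 * months - 6 * post - 0.8 * time_since + rng.normal(0, 2, 24)

df = pd.DataFrame({"y": y, "time": months, "post": post,
                   "time_since": time_since})
fit = smf.ols("y ~ time + post + time_since", df).fit()
print(fit.params)   # 'post' = level change; 'time_since' = slope change
```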

ITS designs are appealing in implementation work for several reasons. Relative to uncontrolled pre-post analyses, ITS analyses reduce the chances that intervention effects are confounded by secular trends ( Bernal et al., 2017 ; Eccles et al., 2003 ). Time-varying confounders, such as seasonality, can also be adjusted for, provided adequate data ( Bernal et al., 2017 ). Indeed, recent work has confirmed that ITS designs can yield effect estimates similar to those derived from cluster-randomized RCTs ( Fretheim et al., 2013 ; Fretheim et al., 2015 ). Relative to an RCT, ITS designs can also allow for a more comprehensive assessment of the longitudinal effects of an intervention (positive or negative), as effects can be traced over all included time points ( Bernal et al., 2017 ; Penfold and Zhang, 2013 ).

ITS designs also present a number of challenges. First, the segmented regression approach requires clear delineation between pre- and post-intervention periods; interventions with indeterminate implementation periods are likely not good candidates for ITS. While ITS designs that include multiple ‘interruptions’ (e.g. introductions of new treatment components) are possible, they will require collection of enough time points between interruptions to ensure that each intervention’s effects can be ascertained individually ( Bernal et al., 2017 ). Second, collecting data from sufficient time points across all sites of interest, especially for the pre-intervention period, can be challenging ( Eccles et al., 2003 ): a common recommendation is at least eight time points both pre- and post-intervention ( Penfold and Zhang, 2013 ). This may be onerous, particularly if the data are not routinely collected by the health system(s) under study. Third, ITS cannot protect against confounding effects from other interventions that begin contemporaneously and may impact similar outcomes ( Eccles et al., 2003 ).

3.3. Stepped Wedge Designs

Stepped wedge trials are another type of quasi-experimental design. In a stepped wedge, all participants receive the intervention, but are assigned to the timing of the intervention in a staggered fashion ( Betran et al., 2018 ; Brown and Lilford, 2006 ; Hussey and Hughes, 2007 ), typically at the site or cluster level. Stepped wedge designs have their analytic roots in balanced incomplete block designs, in which all pairs of treatments occur an equal number of times within each block ( Hanani, 1961 ). Traditionally, all sites in stepped wedge trials have outcome measures assessed at all time points, thus allowing sites that receive the intervention later in the trial to essentially serve as controls for early intervention sites. A recent special issue of the journal Trials includes more detail on these designs ( Davey et al., 2015 ), which may be ideal for situations in which it is important for all participating patients or sites to receive the intervention during the trial. Stepped wedge trials may also be useful when resources are scarce enough that intervening at all sites at once (or even half of the sites as in a standard treatment-versus-control RCT) would not be feasible. If desired, the administration of the intervention to sites in waves allows for lessons learned in early sites to be applied to later sites (via formative evaluation; see Elwy et al., this issue).
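To show the core structure of such a design, here is a toy sketch that randomizes nine hypothetical sites to three waves and prints each site's control/intervention status across study periods. The wave sizes, period counts, and crossover times are arbitrary choices for illustration.

```python
# Toy sketch of a stepped wedge schedule: every site eventually crosses over
# from control to intervention, with crossover staggered by wave.
import random

sites = [f"site_{i}" for i in range(9)]
random.seed(5)
random.shuffle(sites)                      # simple randomization to waves
waves = [sites[i::3] for i in range(3)]    # 3 waves of 3 sites each

n_periods = 8
crossover = {site: 2 * (w + 1)             # waves cross over at periods 2, 4, 6
             for w, wave in enumerate(waves) for site in wave}

for site in sorted(crossover):
    schedule = ["control" if t < crossover[site] else "intervention"
                for t in range(n_periods)]
    print(site, schedule)
```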

The Behavioral Health Interdisciplinary Program (BHIP) Enhancement Project is a recent example of a stepped-wedge implementation trial ( Bauer et al., 2016 ; Bauer et al., 2019 ). This study involved using blended facilitation (including internal and external facilitators; Kirchner et al., 2014 ) to implement care practices consistent with the collaborative chronic care model (CCM; Bodenheimer et al., 2002a , b ; Wagner et al., 1996 ) in nine outpatient mental health teams in VA medical centers. Figure 2 illustrates the implementation and stepdown periods for that trial, with black dots representing primary data collection points.

[Figure 2. BHIP Enhancement Project stepped wedge (adapted from Bauer et al., 2019).]

The BHIP Enhancement Project was conducted as a stepped wedge for several reasons. First, the stepped wedge design allowed the trial to reach nine sites despite limited implementation resources (i.e. intervening at all nine sites simultaneously would not have been feasible given study funding). Second, the stepped wedge design aided in recruitment and retention, as all participating sites were certain to receive implementation support during the trial: at worst, sites that were randomized to later-phase implementation had to endure waiting periods totaling about eight months before implementation began. This was seen as a major strength of the design by its operational partner, the VA Office of Mental Health and Suicide Prevention. To keep sites engaged during the waiting period, the BHIP Enhancement Project offered a guiding workbook and monthly technical support conference calls.

Three additional features of the BHIP Enhancement Project deserve special attention. First, data collection for late-implementing sites did not begin until immediately before the onset of implementation support (see Figure 2 ). While this reduced statistical power, it also significantly reduced data collection burden on the study team. Second, onset of implementation support was staggered such that wave 2 began at the end of month 4 rather than month 6. This had two benefits: first, this compressed the overall amount of time required for implementation during the trial. Second, it meant that the study team only had to collect data from one site at a time, with data collection periods coming every 2–4 months. More traditional stepped wedge approaches typically have data collection across sites temporally aligned (e.g. Betran et al., 2018 ). Third, the BHIP Enhancement Project used a balancing algorithm ( Lew et al., 2019 ) to assign sites to waves, retaining some of the benefits of randomization while ensuring balance on key site characteristics (e.g. size, geographic region).

Despite their utility, stepped wedges have some important limitations. First, because they feature delayed implementation at some sites, stepped wedges typically take longer than similarly-sized parallel group RCTs. This increases the chances that secular trends, policy changes, or other external forces impact study results. Second, as with RCTs, imbalanced site assignment can confound results. This may occur deliberately in some cases—for example, if sites that develop their implementation plans first are assigned to earlier waves. Even if sites are randomized, however, early and late wave sites may still differ on important characteristics such as size, rurality, and case mix. The resulting confounding between site assignment and time can threaten the internal validity of the study—although, as above, balancing algorithms can reduce this risk. Third, the use of formative evaluation (Elwy, this issue), while useful for maximizing the utility of implementation efforts in a stepped wedge, can mean that late-wave sites receive different implementation strategies than early-wave sites. Similarly, formative evaluation may inform midstream adaptations to the clinical innovation being implemented. In either case, these changes may again threaten internal validity. Overall, then, stepped wedges represent useful tools for evaluating the impact of health interventions that (as with all designs) are subject to certain weaknesses and limitations.

4. Conclusions and Future Directions

Implementation science is focused on maximizing the extent to which effective healthcare practices are adopted, used, and sustained by clinicians, hospitals, and systems. Answering questions in these domains frequently requires different research methods than those employed in traditional efficacy- or effectiveness-oriented randomized clinical trials (RCTs). Implementation-oriented RCTs typically feature cluster or site-level randomization, and emphasize implementation outcomes (e.g. the number of patients receiving the new treatment as intended) rather than traditional clinical outcomes. Hybrid implementation-effectiveness designs incorporate both types of outcomes; more details on these approaches can be found elsewhere in this special issue (Landes, this issue). Other methodological innovations, such as factorial designs or sequential, multiple-assignment randomized trials (SMARTs), can address questions about multi-component or adaptive interventions, still under the umbrella of experimental designs. These types of trials may be especially important for demystifying the “black box” of implementation—that is, determining what components of an implementation strategy are most strongly associated with implementation success. In contrast, pre-post designs with non-equivalent control groups, interrupted time series (ITS), and stepped wedge designs are all examples of quasi-experimental designs that may serve implementation researchers when experimental designs would be inappropriate. A major theme cutting across each of these designs is that there are relative strengths and weaknesses associated with any study design decision. Determining what design to use ultimately will need to be informed by the primary research question to be answered, while simultaneously balancing the need for internal validity, external validity, feasibility, and ethics.

New innovations in study design are constantly being developed and refined. Several such innovations are covered in other articles within this special issue (e.g. Kim et al., this issue). One future direction relevant to the study designs presented in this article is the potential for adaptive trial designs, which allow information gleaned during the trial to inform the adaptation of components like treatment allocation, sample size, or study recruitment in the later phases of the same trial ( Pallmann et al., 2018 ). These designs are becoming increasingly popular in clinical treatment ( Bhatt and Mehta, 2016 ) but could also hold promise for implementation scientists, especially as interest grows in rapid-cycle testing of implementation strategies or efforts. Adaptive designs could potentially be incorporated into both SMART designs and stepped wedge studies, as well as traditional RCTs to further advance implementation science ( Cheung et al., 2015 ). Ideally, these and other innovations will provide researchers with increasingly robust and useful methodologies for answering timely implementation science questions.

  • Many implementation science questions can be addressed by fully experimental designs (e.g. randomized controlled trials [RCTs]).
  • Implementation trials differ in important ways, however, from more traditional efficacy- or effectiveness-oriented RCTs.
  • Adaptive designs represent a recent innovation to determine optimal implementation strategies within a fully experimental framework.
  • Quasi-experimental designs can be used to answer implementation science questions in the absence of randomization.
  • The choice of study designs in implementation science requires careful consideration of scientific, pragmatic, and ethical issues.

Acknowledgments

This work was supported by Department of Veterans Affairs grants QUE 15–289 (PI: Bauer) and CIN 13403 and National Institutes of Health grant R01 MH 099898 (PI: Kilbourne).


  • Almirall D, Compton SN, Gunlicks-Stoessel M, Duan N, Murphy SA, 2012. Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Stat Med 31(17), 1887–1902.
  • Bauer MS, McBride L, Williford WO, Glick H, Kinosian B, Altshuler L, Beresford T, Kilbourne AM, Sajatovic M, Cooperative Studies Program 430 Study Team, 2006. Collaborative care for bipolar disorder: Part II. Impact on clinical outcome, function, and costs. Psychiatr Serv 57(7), 937–945.
  • Bauer MS, Miller C, Kim B, Lew R, Weaver K, Coldwell C, Henderson K, Holmes S, Seibert MN, Stolzmann K, Elwy AR, Kirchner J, 2016. Partnering with health system operations leadership to develop a controlled implementation trial. Implement Sci 11, 22.
  • Bauer MS, Miller CJ, Kim B, Lew R, Stolzmann K, Sullivan J, Riendeau R, Pitcock J, Williamson A, Connolly S, Elwy AR, Weaver K, 2019. Effectiveness of Implementing a Collaborative Chronic Care Model for Clinician Teams on Patient Outcomes and Health Status in Mental Health: A Randomized Clinical Trial. JAMA Netw Open 2(3), e190230.
  • Bernal JL, Cummins S, Gasparrini A, 2017. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol 46(1), 348–355.
  • Betran AP, Bergel E, Griffin S, Melo A, Nguyen MH, Carbonell A, Mondlane S, Merialdi M, Temmerman M, Gulmezoglu AM, 2018. Provision of medical supply kits to improve quality of antenatal care in Mozambique: a stepped-wedge cluster randomised trial. Lancet Glob Health 6(1), e57–e65.
  • Bhatt DL, Mehta C, 2016. Adaptive Designs for Clinical Trials. N Engl J Med 375(1), 65–74.
  • Bodenheimer T, Wagner EH, Grumbach K, 2002a. Improving primary care for patients with chronic illness. JAMA 288(14), 1775–1779.
  • Bodenheimer T, Wagner EH, Grumbach K, 2002b. Improving primary care for patients with chronic illness: the chronic care model, Part 2. JAMA 288(15), 1909–1914.
  • Brown CA, Lilford RJ, 2006. The stepped wedge trial design: a systematic review. BMC Med Res Methodol 6(1), 54.
  • Byiers BJ, Reichle J, Symons FJ, 2012. Single-subject experimental design for evidence-based practice. Am J Speech Lang Pathol 21(4), 397–414.
  • Cheung YK, Chakraborty B, Davidson KW, 2015. Sequential multiple assignment randomized trial (SMART) with adaptive randomization for quality improvement in depression treatment program. Biometrics 71(2), 450–459.
  • Collins LM, Dziak JJ, Kugler KC, Trail JB, 2014a. Factorial experiments: efficient tools for evaluation of intervention components. Am J Prev Med 47(4), 498–504.
  • Collins LM, Dziak JJ, Li R, 2009. Design of experiments with multiple independent variables: a resource management perspective on complete and reduced factorial designs. Psychol Methods 14(3), 202–224.
  • Collins LM, Murphy SA, Bierman KL, 2004. A conceptual framework for adaptive preventive interventions. Prev Sci 5(3), 185–196.
  • Collins LM, Murphy SA, Nair VN, Strecher VJ, 2005. A strategy for optimizing and evaluating behavioral interventions. Ann Behav Med 30(1), 65–73.
  • Collins LM, Murphy SA, Strecher V, 2007. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med 32(5 Suppl), S112–118.
  • Collins LM, Nahum-Shani I, Almirall D, 2014b. Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clin Trials 11(4), 426–434.
  • Coulton S, Perryman K, Bland M, Cassidy P, Crawford M, Deluca P, Drummond C, Gilvarry E, Godfrey C, Heather N, Kaner E, Myles J, Newbury-Birch D, Oyefeso A, Parrott S, Phillips T, Shenker D, Shepherd J, 2009. Screening and brief interventions for hazardous alcohol use in accident and emergency departments: a randomised controlled trial protocol. BMC Health Serv Res 9, 114.
  • Cousins K, Connor JL, Kypri K, 2014. Effects of the Campus Watch intervention on alcohol consumption and related harm in a university population. Drug Alcohol Depend 143, 120–126.
  • Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C, 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 50(3), 217–226.
  • Davey C, Hargreaves J, Thompson JA, Copas AJ, Beard E, Lewis JJ, Fielding KL, 2015. Analysis and reporting of stepped wedge randomised controlled trials: synthesis and critical appraisal of published studies, 2010 to 2014. Trials 16(1), 358.
  • Dimick JB, Ryan AM, 2014. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA 312(22), 2401–2402.
  • Eccles M, Grimshaw J, Campbell M, Ramsay C, 2003. Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care 12(1), 47–52.
  • Fisher RA, 1925. Theory of statistical estimation. Mathematical Proceedings of the Cambridge Philosophical Society 22(5), 700–725.
  • Fisher RA, 1935. The design of experiments. Oliver and Boyd, Edinburgh.
  • Fretheim A, Soumerai SB, Zhang F, Oxman AD, Ross-Degnan D, 2013. Interrupted time-series analysis yielded an effect estimate concordant with the cluster-randomized controlled trial result. J Clin Epidemiol 66(8), 883–887.
  • Fretheim A, Zhang F, Ross-Degnan D, Oxman AD, Cheyne H, Foy R, Goodacre S, Herrin J, Kerse N, McKinlay RJ, Wright A, Soumerai SB, 2015. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol 68(3), 324–333.
  • Gaglio B, Shoup JA, Glasgow RE, 2013. The RE-AIM framework: a systematic review of use over time. Am J Public Health 103(6), e38–46.
  • Gebski V, Ellingson K, Edwards J, Jernigan J, Kleinbaum D, 2012. Modelling interrupted time series to evaluate prevention and control of infection in healthcare. Epidemiol Infect 140(12), 2131–2141.
  • Glasgow RE, Vogt TM, Boles SM, 1999. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 89(9), 1322–1327.
  • Hanani H, 1961. The existence and construction of balanced incomplete block designs. The Annals of Mathematical Statistics 32(2), 361–386.
  • Hanbury A, Farley K, Thompson C, Wilson PM, Chambers D, Holmes H, 2013. Immediate versus sustained effects: interrupted time series analysis of a tailored intervention. Implement Sci 8, 130.
  • Handley MA, Lyles CR, McCulloch C, Cattamanchi A, 2018. Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research. Annu Rev Public Health 39, 5–25.
  • Hussey MA, Hughes JP, 2007. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials 28(2), 182–191.
  • Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, Kirchner JE, Solberg LI, Main D, Bauer MS, Kyle J, Murphy SA, Nord KM, Thomas MR, 2014a. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci 9, 132.
  • Kilbourne AM, Almirall D, Goodrich DE, Lai Z, Abraham KM, Nord KM, Bowersox NW, 2014b. Enhancing outreach for persons with serious mental illness: 12-month results from a cluster randomized trial of an adaptive implementation strategy. Implement Sci 9, 163.
  • Kilbourne AM, Bramlet M, Barbaresso MM, Nord KM, Goodrich DE, Lai Z, Post EP, Almirall D, Verchinina L, Duffy SA, Bauer MS, 2014c. SMI life goals: description of a randomized trial of a collaborative care model to improve outcomes for persons with serious mental illness. Contemp Clin Trials 39(1), 74–85.
  • Kilbourne AM, Goodrich DE, Lai Z, Clogston J, Waxmonsky J, Bauer MS, 2012a. Life Goals Collaborative Care for patients with bipolar disorder and cardiovascular disease risk. Psychiatr Serv 63(12), 1234–1238.
  • Kilbourne AM, Goodrich DE, Nord KM, Van Poppelen C, Kyle J, Bauer MS, Waxmonsky JA, Lai Z, Kim HM, Eisenberg D, Thomas MR, 2015. Long-Term Clinical Outcomes from a Randomized Controlled Trial of Two Implementation Strategies to Promote Collaborative Care Attendance in Community Practices. Adm Policy Ment Health 42(5), 642–653.
  • Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R, 2007. Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci 2, 42.
  • Kilbourne AM, Neumann MS, Waxmonsky J, Bauer MS, Kim HM, Pincus HA, Thomas M, 2012b. Public-academic partnerships: evidence-based implementation: the role of sustained community-based practice and research partnerships. Psychiatr Serv 63(3), 205–207.
  • Kilbourne AM, Post EP, Nossek A, Drill L, Cooley S, Bauer MS, 2008. Improving medical and psychiatric outcomes among individuals with bipolar disorder: a randomized controlled trial. Psychiatr Serv 59(7), 760–768.
  • Kirchner JE, Ritchie MJ, Pitcock JA, Parker LE, Curran GM, Fortney JC, 2014. Outcomes of a partnered facilitation strategy to implement primary care-mental health. J Gen Intern Med 29 Suppl 4, 904–912.
  • Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA, 2012. A “SMART” design for building individualized treatment sequences. Annu Rev Clin Psychol 8, 21–48.
  • Lew RA, Miller CJ, Kim B, Wu H, Stolzmann K, Bauer MS, 2019. A robust method to reduce imbalance for site-level randomized controlled implementation trial designs. Implement Sci 14, 46.
  • Morgan CJ, 2018. Reducing bias using propensity score matching. J Nucl Cardiol 25(2), 404–406.
  • Morton V, Torgerson DJ, 2003. Effect of regression to the mean on decision making in health care. BMJ 326(7398), 1083–1084.
  • Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, Waxmonsky JG, Yu J, Murphy SA, 2012. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods 17(4), 457–477.
  • NeCamp T, Kilbourne A, Almirall D, 2017. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations. Stat Methods Med Res 26(4), 1572–1589.
  • Neumann MS, Sogolow ED, 2000. Replicating effective programs: HIV/AIDS prevention technology transfer. AIDS Educ Prev 12(5 Suppl), 35–48.
  • Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, Holmes J, Mander AP, Odondi L, Sydes MR, Villar SS, Wason JMS, Weir CJ, Wheeler GM, Yap C, Jaki T, 2018. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med 16(1), 29.
  • Pape UJ, Millett C, Lee JT, Car J, Majeed A, 2013. Disentangling secular trends and policy impacts in health studies: use of interrupted time series analysis. J R Soc Med 106(4), 124–129.
  • Pellegrini CA, Hoffman SA, Collins LM, Spring B, 2014. Optimization of remotely delivered intensive lifestyle treatment for obesity using the Multiphase Optimization Strategy: Opt-IN study protocol. Contemp Clin Trials 38(2), 251–259.
  • Penfold RB, Zhang F, 2013. Use of Interrupted Time Series Analysis in Evaluating Health Care Quality Improvements. Acad Pediatr 13(6 Suppl), S38–S44.
  • Pridemore WA, Snowden AJ, 2009. Reduction in suicide mortality following a new national alcohol policy in Slovenia: an interrupted time-series analysis. Am J Public Health 99(5), 915–920.
  • Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M, 2011. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 38(2), 65–76.
  • Robson D, Spaducci G, McNeill A, Stewart D, Craig TJK, Yates M, Szatkowski L, 2017. Effect of implementation of a smoke-free policy on physical violence in a psychiatric inpatient setting: an interrupted time series analysis. Lancet Psychiatry 4(7), 540–546.
  • Schildcrout JS, Schisterman EF, Mercaldo ND, Rathouz PJ, Heagerty PJ, 2018. Extending the Case-Control Design to Longitudinal Data: Stratified Sampling Based on Repeated Binary Outcomes. Epidemiology 29(1), 67–75.
  • Shadish WR, Cook TD, Campbell DT, 2002. Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin Company, Boston, MA.
  • Simon GE, Ludman EJ, Bauer MS, Unutzer J, Operskalski B, 2006. Long-term effectiveness and cost of a systematic care program for bipolar disorder. Arch Gen Psychiatry 63(5), 500–508.
  • Stetler CB, Legro MW, Rycroft-Malone J, Bowman C, Curran G, Guihan M, Hagedorn H, Pineros S, Wallace CM, 2006. Role of “external facilitation” in implementation of research findings: a qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implement Sci 1, 23.
  • Taljaard M, McKenzie JE, Ramsay CR, Grimshaw JM, 2014. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care. Implement Sci 9, 77.
  • Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D, 2002. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther 27(4), 299–309.
  • Wagner EH, Austin BT, Von Korff M, 1996. Organizing care for patients with chronic illness. Milbank Q 74(4), 511–544.
  • Wyrick DL, Rulison KL, Fearnow-Kenney M, Milroy JJ, Collins LM, 2014. Moving beyond the treatment package approach to developing behavioral interventions: addressing questions that arose during an application of the Multiphase Optimization Strategy (MOST). Transl Behav Med 4(3), 252–259.

Experimental vs Quasi-Experimental Design: Which to Choose?

Here’s a table that summarizes the similarities and differences between an experimental and a quasi-experimental study design:

Experimental Study (a.k.a. Randomized Controlled Trial) vs. Quasi-Experimental Study

  • Objective — Experimental: evaluate the effect of an intervention or a treatment. Quasi-experimental: evaluate the effect of an intervention or a treatment.
  • How participants get assigned to groups — Experimental: random assignment. Quasi-experimental: non-random assignment (participants are assigned according to their own choosing or that of the researcher).
  • Is there a control group? — Experimental: yes. Quasi-experimental: not always (although, if present, a control group provides better evidence for the study results).
  • Is there any room for confounding? — Experimental: no (though see the discussion of post-randomization confounding in randomized controlled trials). Quasi-experimental: yes (however, statistical techniques can be used to study causal relationships in quasi-experiments).
  • Level of evidence — Experimental: a randomized trial is at the highest level in the hierarchy of evidence. Quasi-experimental: a quasi-experiment is one level below the experimental study in the hierarchy of evidence.
  • Advantages — Experimental: minimizes bias and confounding. Quasi-experimental: can be used in situations where an experiment is not ethically or practically feasible, and can work with smaller sample sizes than randomized trials.
  • Limitations — Experimental: high cost (as it generally requires a large sample size), ethical limitations, generalizability issues, and sometimes practical infeasibility. Quasi-experimental: lower ranking in the hierarchy of evidence, as losing the power of randomization makes the study more susceptible to bias and confounding.

What is a quasi-experimental design?

A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention. The intervention can be a training program, a policy change or a medical treatment.

Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized. Instead, the intervention is assigned to participants based on their own choice, the researcher’s judgment, or some other non-random method.

Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.

(For more information, I recommend my other article: Understand Quasi-Experimental Design Through an Example.)

Examples of quasi-experimental designs include:

  • One-Group Posttest Only Design
  • Static-Group Comparison Design
  • One-Group Pretest-Posttest Design
  • Separate-Sample Pretest-Posttest Design

What is an experimental design?

An experimental design is a randomized study design used to evaluate the effect of an intervention. In its simplest form, the participants will be randomly divided into 2 groups:

  • A treatment group, where participants receive the new intervention whose effect we want to study.
  • A control or comparison group, where participants receive no intervention at all (or a standard intervention).

Randomization ensures that each participant has the same chance of receiving the intervention. Its objective is to make the 2 groups comparable, so that any difference observed in the study outcome afterwards can be attributed to the intervention alone – i.e. it removes confounding.

(For more information, I recommend my other article: Purpose and Limitations of Random Assignment.)
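To make the mechanics concrete, here is a minimal sketch of simple random assignment in Python. The roster and group sizes are hypothetical, not taken from any particular study:

```python
import random

# Hypothetical roster of 20 study participants.
participants = [f"participant_{i}" for i in range(1, 21)]

random.shuffle(participants)        # every ordering is equally likely
half = len(participants) // 2
treatment = participants[:half]     # will receive the intervention
control = participants[half:]       # will receive the standard course (or nothing)

print("Treatment group:", treatment)
print("Control group:", control)
```

Because each participant is equally likely to land in either group, the two groups are balanced in expectation on both measured and unmeasured characteristics.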

Examples of experimental designs include:

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Solomon Four-Group Design
  • Matched Pairs Design
  • Randomized Block Design

When to choose an experimental design over a quasi-experimental design?

Although many statistical techniques can be used to deal with confounding in a quasi-experimental study, in practice, randomization is still the best tool we have to study causal relationships.

Another problem with quasi-experiments is the natural progression of the disease or condition under study: when studying the effect of an intervention over time, one should account for natural changes, as these can be mistaken for outcome changes caused by the intervention. Having a well-chosen control group helps deal with this issue.

So, if losing the element of randomness seems like an unwise step down in the hierarchy of evidence, why would we ever want to do it?

This is what we’re going to discuss next.

When to choose a quasi-experimental design over a true experiment?

The issue with randomization is that it is not always achievable.

So here are some cases where using a quasi-experimental design makes more sense than using an experimental one:

  • If being in one group is believed to be harmful to the participants, either because the intervention is harmful (e.g. randomizing people to smoking), because its efficacy is questionable, or, on the contrary, because it is believed to be so beneficial that it would be unethical to withhold it from a control group (e.g. randomizing people to receive an operation).
  • In cases where interventions act on a group of people in a given location, it becomes difficult to adequately randomize subjects (e.g. an intervention that reduces pollution in a given area).
  • When working with small sample sizes, as randomized controlled trials require a large sample size to account for heterogeneity among subjects (i.e. to evenly distribute confounding variables between the intervention and control groups).

Further reading

  • Statistical Software Popularity in 40,582 Research Papers
  • Checking the Popularity of 125 Statistical Tests and Models
  • Objectives of Epidemiology (With Examples)
  • 12 Famous Epidemiologists and Why

Explaining Quasi-Experimental Design And Its Various Methods

September 27, 2021

 As you strive to uncover causal (cause-and-effect) relationships between variables, you may often encounter ethical or practical constraints while conducting controlled experiments. 

Quasi-experimental design steps in as a powerful alternative that helps you overcome these challenges and offer valuable insights. 

In this blog, we’ll look into its characteristics, examples, types, and how it differs from true-experimental research design. The purpose of this blog is to understand how this research methodology bridges the gap between a fully controlled experiment and a purely observational study.

What Is Quasi-Experimental Design?

A quasi-experimental design differs from an experimental design in several respects, even though both aim to establish a cause-and-effect relationship between the independent and dependent variables.

So, how is quasi-experimental design different? 

Well, unlike experimental design, quasi-experiments do not include random assignment of participants; instead, the participants are placed in the experimental groups based on non-random criteria. Let us take a deeper look at how quasi-experimental design works.


Experimental design has three characteristics:

1. Manipulation

Manipulation simply means evaluating the effect of the independent variable on the dependent variable. 

Example: A chocolate and a crying child.

  • Independent variable: the chocolate.
  • Dependent variable: the child’s crying.

So manipulation means studying the effect of the independent variable (the chocolate) on the dependent variable (the child’s crying). In short, you apply an outside change and observe its effect on the dependent variable: after getting the chocolate (independent variable), the child stops crying (dependent variable).

2. Randomization

Randomization means selection by chance, without any predetermined plan. Example: a lottery system. The winning numbers are announced at random, so everyone who buys a ticket has an equal chance. Likewise, you select the sample without any plan, and everyone has an equal chance of getting into any one of the experimental groups.

3. Control

Control means using a control group in the experiment. In this group, researchers keep the independent variable constant. The control group is then compared to a treatment group, where the researchers have changed the independent variable. Naturally, researchers are more interested in the treatment group, as that is where the dependent variable has scope to change.

Example: You want to find out whether the workers work more efficiently if there is a pay raise. 

Here, you will put certain workers in the treatment group and some in the control group.

  • Treatment group: You pay more to the workers
  • Control group: You don’t pay any extra to the workers, and things remain the same. 

By comparing these two groups, you can determine whether the workers who got paid more worked more efficiently than those who didn’t.
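As a minimal sketch of how such a comparison might be analyzed, here is an independent two-sample t-test in Python; the efficiency scores are made-up numbers used purely for illustration:

```python
from scipy import stats

# Hypothetical efficiency scores (e.g., tasks completed per day).
treatment = [16, 18, 17, 19, 15, 18, 20, 17]  # workers who received the pay raise
control = [14, 13, 15, 14, 13, 15, 14, 12]    # workers whose pay stayed the same

# Independent two-sample t-test: a small p-value suggests the mean
# efficiency genuinely differs between the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```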

As for the quasi-experimental design, the manipulation characteristic of the true experiment remains the same. However, randomization or control (or both) may be missing.

Hence, these experiments are conducted where random selection is difficult or even impossible. The quasi-experiment still manipulates the independent variable before measuring the dependent variable, but it does not include random assignment.


What are the types of quasi-experimental design?

Amongst all the various types of quasi-experimental design, let us first get to know two main types of quasi-experimental design:

  • Non-equivalent group design (NEGD)
  • Regression discontinuity design

1. Non-Equivalent Group Design (NEGD)

You can picture the non-equivalent groups design as a mixture of true experimental and quasi-experimental design, because it borrows qualities from both. Like a true experiment, NEGD uses treatment and control groups; however, these are pre-existing groups that we judge to be similar, and the design lacks the randomization characteristic of a true experiment.

While grouping, researchers try to ensure that the comparison is not influenced by third (confounding) variables, so the groups are chosen to be as similar as possible. For example, in a political study, we might select groups that closely resemble each other.

Let us understand it with an example:

Take the previous example where you studied whether the workers work more efficiently if there is a pay rise. 

You give a pre-test to the workers in one company while their pay is normal. Then you put them under the treatment group where they work and their pay is being increased. After the experiment, you take their post-test about their experience and attitude towards their work. 

Later, you give the same pre-test to the workers from a similar company and put them in a control group where their pay is not raised, and then conduct a post-test. 

Hence, the non-equivalent groups design has a name that reminds us the groups are not equivalent and are not randomly assigned.
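Here is a minimal sketch of how the pre/post scores of two nonequivalent groups might be compared, using a difference-in-differences-style calculation; all numbers are hypothetical:

```python
# Hypothetical mean attitude scores (1-10) from the pre- and post-tests.
treatment_pre, treatment_post = 6.1, 7.8   # company that raised pay
control_pre, control_post = 6.0, 6.4       # similar company, no raise

# Change within each group, then the difference between those changes;
# the control group's change stands in for what would have happened anyway.
treatment_change = treatment_post - treatment_pre   # 1.7
control_change = control_post - control_pre         # 0.4
estimated_effect = treatment_change - control_change

print(f"Estimated effect of the pay raise: {estimated_effect:.1f} points")  # 1.3
```

Using the control company's change as the baseline helps separate the pay raise's effect from changes that would have happened anyway, though it cannot rule out confounding the way randomization does.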

2. Regression discontinuity design or RDD

Regression discontinuity design, or RDD, is a quasi-experimental design technique that computes the influence of a treatment or intervention. It does so by using a mechanism that assigns the treatment based on eligibility, known as a “cut-off”.

So the participants above the cut-off get to be in the treatment group, and those below the cut-off do not. Near the cut-off, however, the difference between these two groups is negligible.

Let’s take a look at an example:

A school wants to grant a $50 scholarship to students, based on an independent test taken to measure their intellect and household situation.

Those who pass the test get the scholarship. The students just below the cut-off and those just above it can be considered similar: the differences in their scores occurred mostly by random chance. Hence, you can keep studying both groups to measure long-term outcomes.
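A minimal sketch of an RDD-style comparison near the cut-off is shown below; the test scores, cut-off, and outcome model are all simulated assumptions, not data from any real scholarship program:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test scores for 500 students; cut-off and bandwidth are made up.
scores = rng.uniform(40, 100, 500)
cutoff, bandwidth = 70, 5
treated = scores >= cutoff  # scholarship recipients

# Simulated later outcome (GPA): a gentle trend in score plus a +0.3 jump
# at the cut-off representing the scholarship's true effect.
gpa = 2.0 + 0.02 * scores + 0.3 * treated + rng.normal(0, 0.2, 500)

# Compare students just above vs. just below the cut-off.
near = np.abs(scores - cutoff) <= bandwidth
effect = gpa[near & treated].mean() - gpa[near & ~treated].mean()
print(f"Naive RDD estimate near the cut-off: {effect:.2f}")  # ~0.3 plus small trend bias
```

In practice, analysts usually fit local regression lines on each side of the cut-off rather than comparing raw means, which removes the small trend bias this naive estimate carries.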


What are the advantages of a quasi-experimental design?

The quasi-experimental design tends to offer strong external validity, which can make it well suited to determining what is best for the population. Let’s look at some advantages of this type of research methodology.

  • It gives researchers some power over the variables by being able to control them.
  • The quasi-experimental method can be combined with other experimental methods.
  • It provides transferability to a greater extent.
  • It is an intuitive process that researchers can shape to the study’s needs.
  • It involves real-world problems and solutions rather than artificial ones.
  • It offers some control over the third variable, known as the confounding variable, which influences the cause-and-effect relationship.

What are the disadvantages of a quasi-experimental design?

As a research design, it is bound to have some limitations, let’s look at some of the disadvantages you should consider when selecting the design for your research. 

  • It provides lower internal validity than true experiments.
  • Without randomization, you cannot be sure that the confounding (third) variable has been eliminated.
  • It has scope for human error.
  • It can allow the researcher’s personal bias to get involved.
  • Human responses are difficult to measure, so there is a chance that the results will be artificial.
  • Using old or outdated data can be inaccurate and inadequate for the study.


Other Quasi-Experimental Designs

Apart from the above-mentioned types, there are other equally important quasi-experimental designs that have different applications depending on their characteristics and their respective design notations . 

Let’s take a look at all of them in detail:

1. The Proxy Pre-Test Design

The proxy pre-test design works much like a typical pre-test and post-test design, except that the pre-test here is conducted AFTER the treatment is given. Confused? How can it be a pre-test if it is conducted after? Well, the keyword here is “proxy”: proxy variables tell us where the groups would have been at the pre-test.

You ask the group after their program how they would have answered the same questions before their treatment. However, this technique is not very reliable, as we cannot expect participants to remember how they felt long ago, and we surely cannot tell whether they are faking their answers.

As this design is generally not recommended, reserve it for unavoidable circumstances, such as when the treatment has already begun and you could not administer a pre-test. In such cases, this approach helps rather than depending totally on the post-test.


Example: You want to study the workers’ performance after the pay rise, but you were called in to do the pre-test after the program had started. In that case, you will have to take the post-test and study a proxy variable, such as productivity, from before and after the program.

2. The Separate Pre-Post Samples Design

This technique also works on the pre-test and post-test designs. The difference is that the participants you used for the pre-test won’t be the same for the post-test. 


Example: You want to study the client satisfaction of two similar companies. You take one for the treatment and the other for the control. Let’s say you conduct a pre-test in both companies at the same time and then begin your experiment.

After a while, when the program is complete, you conduct a post-test. Now, the set of clients you take in for the post-test is going to be different from the pre-test ones, because the client base changes over time.

In this case, you cannot derive one-to-one results, but you can tell the average client satisfaction in both companies. 

3. The Double Pre-Test Design

The double pre-test design is a very robust quasi-experimental design built to rule out the internal validity problem we had with the non-equivalent groups design. It administers two pre-tests before the program; the change from pre-test 1 to pre-test 2 reveals whether the two groups are already progressing at different paces before the treatment.

Thanks to the two pre-tests, you can also assess the null case: whether the difference between pre-test and post-test scores is due merely to random chance.
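A tiny numeric sketch of this logic, with hypothetical group means:

```python
# Hypothetical group means across the three waves of a double pre-test design.
treat = {"pre1": 5.0, "pre2": 5.4, "post": 6.6}
ctrl = {"pre1": 5.1, "pre2": 5.5, "post": 5.9}

# The pre1 -> pre2 interval has no treatment, so it reveals each group's
# pre-existing trend (maturation). Here both groups drift upward by 0.4.
treat_trend = treat["pre2"] - treat["pre1"]
ctrl_trend = ctrl["pre2"] - ctrl["pre1"]

# Post-treatment change beyond each group's established trend.
treat_gain = (treat["post"] - treat["pre2"]) - treat_trend  # 1.2 - 0.4 = 0.8
ctrl_gain = (ctrl["post"] - ctrl["pre2"]) - ctrl_trend      # 0.4 - 0.4 = 0.0

print(f"Change attributable to the program: {treat_gain - ctrl_gain:.1f}")  # 0.8
```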

4. The Switching Replications Design

In the switching replications design, as the name suggests, the role of the group is switched. It follows the same treatment-control group pattern, except it has two phases.

Phase 1: Both the groups are pre-tested, then they undergo their respective program. Later they are post-tested.

Phase 2: In this phase, the original treatment group becomes a control group, and the original control group becomes a treatment group.


The main benefit of this design is that it is strong with respect to both internal and external validity. Because the two parallel implementations of the program allow all the participants to experience it, the design is ethically strong as well.

5. The Non-equivalent Dependent Variables (NEDV) Design

NEDV design, in its simplest form, is not the most reliable one and does not work wonders against internal validity either. But then, what is the use of NEDV? 

Well, sometimes the treatment group may be affected by some external factor. Hence, two pre- and post-tests are applied to the participants: one regarding the treatment itself and the other regarding that external variable.


Wait, how about we take an example to understand this?

Let us say you started a program to test history-teaching techniques. You design standard tests for history (the treatment) and show historical movies (the external variable). Later, in the post-tests, you find out that along with the history scores, students’ interest in historical movies has also increased, suggesting that showing historical movies influenced students to study the subject.

6. The Regression Point Displacement (RPD) Design

The RPD design is used when measures for already-existing groups are available and can be compared with those for the treatment group. The treatment group is the only group present, and both pre-tests and post-tests are conducted.

This method is widely beneficial for larger groups, communities, and companies. RPD works by comparing a single program unit with a larger comparison unit.


Consider a community-based COVID awareness program that has been started in one particular town of a vast district. The representatives track the active cases in that town and use the remaining towns as the comparison. Rather than comparing against the average of the other towns’ COVID case counts, the treated town is compared against the regression line fitted to those comparison towns.
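A minimal sketch of the RPD comparison, assuming hypothetical town-level case counts:

```python
import numpy as np

# Hypothetical case counts: pre- and post-programme, for the comparison
# towns that did not receive the awareness programme.
pre = np.array([120, 90, 150, 60, 200, 80, 110])
post = np.array([118, 93, 149, 62, 196, 83, 108])

# Regression line describing how comparison towns' post counts track pre counts.
slope, intercept = np.polyfit(pre, post, 1)

# The single treated town: where does it sit relative to that line?
town_pre, town_post = 130, 110
expected_post = slope * town_pre + intercept
displacement = town_post - expected_post  # negative = fewer cases than expected

print(f"Displacement of the treated town: {displacement:.1f} cases")
```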

When to use a quasi-experimental design?

After all that studying, shouldn’t you also know when to use quasi-experiments? Now that we are near the end, let us discuss when to use quasi-experiments and for what reasons.

1. For ethical reasons

Remember when we discussed the “willingness” of obese people to participate in the experiment? That is where ethics start to matter: you cannot put participants under treatments at random, as you do with true experiments, especially when the treatment directly affects the participants’ lives. One well-known example is the Oregon Health Study, in which health insurance was given to certain people while others went without it.

2. For practical reasons

True experiments, despite having higher internal validity, can be expensive, and they require enough participants for the experiment to be justified. In a quasi-experiment, by contrast, you can use already-gathered data.

The data may have been collected and paid for by some large entity, say the government, and you use that data to study your questions.

Well, that concludes our guide.

Also read: Experimental Research .


Differences between quasi-experiments and true experiments

Was the above description overwhelming? Don’t worry. Here is a direct comparison between quasi-experiments and true experiments so that you can see how the two differ.

  • True experiment: Participants are assigned randomly to the experimental groups. Quasi-experiment: Participants are not randomly assigned to the experimental groups.
  • True experiment: Participants have an equal chance of getting into any of the experimental groups. Quasi-experiment: Participants are categorized first and then put into a respective experimental group.
  • True experiment: Researchers design the treatment participants will go through. Quasi-experiment: Researchers do not design a treatment.
  • True experiment: There are no pre-existing groups of treatments. Quasi-experiment: Researchers study existing groups that have already received different treatments.
  • True experiment: Includes control groups and treatment groups. Quasi-experiment: Does not necessarily require a control group, although one is generally used.
  • True experiment: Does not typically include a pre-test. Quasi-experiment: Typically includes a pre-test.

Example of true-experimental design:

While starting the true experiment, you randomly assign some participants to the treatment group, where they are fed only junk food, while the other half of the participants go to the control group, where they keep their regular ongoing diet (the standard course).

You decide to take the participants’ reports every day after their meals, noting down their health and any discomfort.

However, some participants assigned to the treatment group may not want to change their diet to complete junk food for personal reasons. In this case, you cannot conduct a true experiment against their will. This is when the quasi-experiment comes in.

Example of quasi-experimental design:

While talking to the participants, you find out that some of them want to try the junk-food diet while the others don’t want to experiment with their diet and choose to stick with a regular one.

You can now assign participants to the already-existing groups according to their choices and study how the regular consumption of junk food affects the obese participants in that group.

Here, you did not randomly assign participants to groups, so you cannot be as confident that any difference you observe is due solely to the conducted experiment.


Quasi-experimental design offers a unique approach that allows you to uncover causal relationships between variables when controlled experiments are not feasible or ethical. While it may not possess the level of control and randomization of a true experiment, quasi-experimental research enables you to make meaningful contributions by providing valuable insights across various fields.


Quasi-experimental methods


This document is part of the ‘Better Evidence in Action’ toolkit.

Quasi-experimental methods are designed to explore the causal effects of an intervention, treatment or stimulus on a unit of study. Although these methods have many attributes associated with scientific experiments, they lack the benefits of the random assignment of treatments across a population that is often necessary for broad generalisability. Yet purposive sampling also has its benefits, especially when assessing small sub-groups that random sampling can miss. Researchers using these methods typically conduct tests in one of two ways: over time (pre-test, post-test) or over space (one-time comparisons), by establishing near-equivalence in factors that influence primary outcomes across treatment and control groups.


Open access | Published: 31 August 2024

Effects of pecha kucha presentation pedagogy on nursing students’ presentation skills: a quasi-experimental study in Tanzania

Setberth Jonas Haramba (1), Walter C. Millanzi (1) & Saada A. Seif (2)

BMC Medical Education, volume 24, Article number: 952 (2024)


Introduction

Ineffective and non-interactive learning among nursing students limits opportunities to develop classroom presentation skills, creativity, and innovation by the time they complete their classroom learning activities. Pecha Kucha presentation is a new, promising pedagogy that engages students in learning and improves their speaking skills and other survival skills. It involves the use of 20 slides, each shown for 20 seconds of the presentation. The current study examined the effect of Pecha Kucha presentation pedagogy on presentation skills among nursing students in Tanzania.

The aim of this study was to compare nursing students’ presentation skills after exposure to traditional PowerPoint presentations versus Pecha Kucha presentations.

The study employed an uncontrolled quasi-experimental (pre-post) design with a quantitative research approach among 230 randomly selected nursing students at the respective training institution. An interviewer-administered structured questionnaire, adapted from previous studies, was used to measure presentation skills between June and July 2023. The study involved the training of research assistants, pre-assessment of presentation skills, training of participants, assigning topics to participants, classroom presentations, and post-intervention assessment. A linear regression model was used to determine the effect of the intervention on nursing students’ presentation skills using the Statistical Package for the Social Sciences (SPSS) version 26, set at a 95% confidence interval and a 5% significance level.

Findings revealed that 163 (70.87%) participants were aged ≤ 23 years, while 151 (65.65%) and 189 (82.17%) of them were males and undergraduate students, respectively. Post-test findings showed a significant change in participants’ mean presentation skills score between baseline (M = 4.07, SD = 0.56) and end-line (M = 4.54, SD = 0.59), a mean score change of 0.4717 ± 0.7793 (p < .0001, 95% CI), with a medium effect size of 0.78. An increase in participants’ knowledge of Pecha Kucha presentation was associated with a 0.0239 (p < .0001) increase in presentation skills.
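For illustration only, the sketch below shows how such a pre/post mean change and a paired effect size can be computed; the data are simulated to loosely match the reported means and will not reproduce the paper's exact figures (the study itself used a linear regression model in SPSS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired pre/post skill scores for 230 students, loosely matching
# the reported means (about 4.07 pre, 4.54 post) purely for illustration.
pre = rng.normal(4.07, 0.56, 230)
post = pre + rng.normal(0.47, 0.78, 230)

diff = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)      # paired t-test
cohens_d = diff.mean() / diff.std(ddof=1)         # effect size for paired data

print(f"Mean change: {diff.mean():.4f} +/- {diff.std(ddof=1):.4f}")
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, Cohen's d = {cohens_d:.2f}")
```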

Pecha Kucha presentations have a significant effect on nursing students’ presentation skills as they enhance inquiry and mastery of their learning content before classroom presentations. The pedagogical approach appeared to enhance nursing students’ confidence during the classroom presentation. Therefore, there is a need to incorporate Pecha Kucha presentation pedagogy into nursing curricula and nursing education at large to promote student-centered teaching and learning activities and the development of survival skills.

Trial registration

Not applicable, as this was a quasi-experimental study.


Nursing students need to acquire a range of skills during the learning process to enable them to provide quality nursing care and management in society [ 1 ]. The nursing care and management practices referred to include identifying, analyzing, synthesizing, and communicating effectively within and between healthcare professions [ 1 ]. Given an increasingly global economy and international competition for jobs and opportunities, current traditional classroom learning methods are insufficient to meet such 21st-century challenges and demands [ 2 ]. The integration of presentation skills, creativity, innovation, collaboration, and information and media literacy skills helps students overcome the noted challenges [ 2 , 3 , 4 ]. The skills in question constitute the survival skills that help students not only with career development and success but also with their personal, social, and public quality of life, as they enable students to overcome 21st-century challenges upon graduation [ 2 ].

To enhance nursing students’ participation in learning and to stimulate their presentation skills, critical thinking, creativity, and innovation, a combination of teaching and learning pedagogies should be employed [ 5 , 6 , 7 , 8 ]. Among others, classroom presentations, group discussions, problem-based learning, demonstrations, reflection, and role-play are commonly used for these purposes [ 5 ]. However, ineffective and non-interactive learning, which contributes to limited presentation skills, creativity, and innovation, has been reported by several scholars [ 9 , 10 , 11 ]. For example, poor use and design of student PowerPoint presentations led to confusing graphics due to too much text on the slides and to presenters reading through as many as 80 slides [ 12 , 13 , 14 ]. Indeed, such non-interactive learning becomes boring and tiresome for learners, as evidenced by glazed eyes, long yawns, occasional snoring, phone use, and frequent trips to the bathroom [ 12 , 14 ].

With an increasing number of nursing students in higher education institutions in Tanzania, traditional student presentation pedagogies are insufficient to stimulate presentation skills. They limit nursing students’ innovation, creativity, critical thinking, and meaningful learning in attempts to solve health challenges [ 15 , 16 ]. These shortcomings hinder nursing students’ ability to communicate effectively by demonstrating their knowledge and mastery of learning content [ 17 , 18 ]. Furthermore, they affect students’ future careers by leaving them unable to demonstrate and express their expertise clearly in a variety of workplace settings, such as presenting at scientific conferences, participating in job interviews, giving clinical case reports, giving handover reports, and giving feedback to clients [ 17 , 18 , 19 ].

Pecha Kucha presentation is a new promising approach for students’ learning in the classroom context as it motivates learners’ self-directed and collaborative learning, learner creativity, and presentation skills [ 20 , 21 , 22 ]. It encourages students to read more materials, enhances cooperative learning among learners, and is interesting and enjoyable among students [ 23 ].

Pecha Kucha presentation takes its name from a Japanese term meaning “chit-chat,” and it represents a fast-paced presentation style used in different fields, including teaching, marketing, advertising, and design [ 24 , 25 , 26 ]. It involves 20 slides, each shown for 20 s, making a total of 6 min and 40 s for the whole presentation [ 22 ]. For effective learning through Pecha Kucha presentations, the design and format should be meaningfully limited to 20 slides at 20 s each, rich in the content of the presented topic, and should use high-quality images or pictures attuned to the content knowledge and the message to be delivered to the target audience [ 14 , 16 ]. Each slide should contain one primordial message with well-balanced information. In other words, the message should be simple: each slide should contain only one concept or idea, with neither too much nor too little information, thus making it easy for the audience to grasp [ 14 , 17 , 19 ].

The “true spirit” of Pecha Kucha is that it consists mostly of powerful images and meaningful, specific text rather than text that the presenter reads from the slides; an image and short phrases should communicate the core idea while the speaker offers well-rehearsed, elaborated commentary [ 22 , 28 ]. The presenter should master the subject matter and incorporate the necessary information from classwork [ 14 , 20 ]. Audience engagement in learning, measured by paying attention and actively listening, was higher during Pecha Kucha presentations than during traditional PowerPoint presentations [ 29 ]. The creativity and collaboration involved in designing and selecting appropriate images and content, rehearsing before the presentation, and discussing after each presentation made students enjoy Pecha Kucha presentations more than traditional ones [ 21 , 22 ]. Time management and students’ self-regulation also benefited from the Pecha Kucha format, as students and teachers or instructors could appropriately plan the time for classroom instruction [ 22 , 23 ].

However, little is known about Pecha Kucha presentation in nursing education in Sub-Saharan African countries, including Tanzania, since little published research describes its effects on enhancing students’ presentation skills. Thus, this study assessed the effect of Pecha Kucha presentation pedagogy on enhancing presentation skills among nursing students. In particular, the study focused largely on nursing students’ presentation skills during the preparation and presentation of assignments, project work, case reports, or field reports.

The study tested the null hypothesis (H0), which stated that there is no significant difference in nursing students’ classroom presentation skills scores between the baseline and end-line assessments. The association between nursing students’ presentation skills and participants’ sociodemographic characteristics was formulated and analyzed before and after the intervention. This study forms the basis for developing a new presentation pedagogy among nursing students in order to stimulate effective learning, the development of presentation skills during the teaching and learning process, and the acquisition of 21st-century skills demanded by an increasingly competitive, knowledge-based society shaped by rapid technological change.

The current study also forms the basis for redefining classroom practices in an attempt to enhance and transform nursing students’ learning experiences. This will cultivate graduate nurses who can share their expertise and practical skills within the healthcare team by attending scientific conferences, giving clinical case presentations, and participating in job interviews in the global health market. To achieve this, the study determined nursing students’ baseline and end-line presentation skills during the preparation and presentation of classroom assignments using the traditional PowerPoint presentation and Pecha Kucha presentation formats.

Methods and materials

This study was conducted in health training institutions in Tanzania. Tanzania has a total of 47 registered public and private universities and university colleges that offer health programs ranging from certificates to doctorate degrees [ 24 , 25 ]. Seven (7) of the 47 universities offer a Bachelor of Science in Nursing, and four (4) offer master’s to doctorate degree programs in nursing and midwifery sciences [ 24 , 26 ]. To enhance the representation of nursing students in Tanzania, this study was conducted in Dodoma Municipal Council, in one of Tanzania’s 30 administrative regions [ 33 ]. Dodoma Region has two (2) universities that offer nursing programs at the diploma and degree levels [ 34 ]. These universities host a large number of nursing students compared to the other five (5) universities in Tanzania, with traditional student presentation approaches predominating in nursing students’ teaching and learning processes [ 7 , 32 , 35 ].

The two universities under study are the University of Dodoma and St. John’s University of Tanzania, both located in Dodoma Urban District. The University of Dodoma is a public university that provides 142 training programs at the diploma, bachelor’s, and master’s degree levels, with about 28,225 undergraduate and 724 postgraduate students [ 26 , 27 ]. The University of Dodoma also has 1,031 nursing students pursuing a Bachelor of Science in Nursing and 335 pursuing a Diploma in Nursing in the 2022–2023 academic year [ 33 ]. St. John’s University of Tanzania is a non-profit private university legally connected with the Christian-Anglican Church [ 36 ]. It has a student enrollment of 5,000 to 5,999 and provides training programs leading to higher education awards in a variety of fields, including diplomas, bachelor’s degrees, and master’s degrees [ 37 ]. It hosts 766 nursing students pursuing a Bachelor of Science in Nursing and 113 pursuing a Diploma in Nursing in the 2022–2023 academic year [ 30 , 31 ].

Study design and approach

An uncontrolled quasi-experimental design with a quantitative research approach was used to establish quantifiable data on the participants’ socio-demographic profiles and outcome variables under study. The design involved pre- and post-tests to determine the effects of the intervention on the aforementioned outcome variable. The design involved three phases, namely the baseline data collection process (pre-test via a cross-sectional survey), implementation of the intervention (process), and end-line assessment (post-test), as shown in Fig.  1 [ 7 ].

Figure 1. A flow pattern of the study design and approach

Target population

The study involved nursing students pursuing a Diploma in Nursing or a Bachelor of Science in Nursing in Tanzania. This population was expected to demonstrate competence and mastery of different survival and life skills to enable them to work independently at various levels of health facilities within and outside Tanzania. This cohort of undergraduate nursing students also comprised adult learners who can set goals, develop strategies to achieve them, and hence achieve positive professional behavioral outcomes [ 7 ]. Moreover, as per annual data, the number of graduating nursing students from all colleges and universities in the country averages 3,500 to 4,000 [ 38 ].

Study population

The study involved first- and third-year nursing students pursuing a Diploma in Nursing and first-, second-, and third-year nursing students pursuing a Bachelor of Science in Nursing at the University of Dodoma. This population had a large number of enrolled undergraduate nursing students, making it an ideal population for the intervention, and it served as a reasonable approximate representation of the universities offering nursing programs [ 11 , 29 ].

Inclusion criteria

The study included male and female nursing students pursuing a Diploma in Nursing or a Bachelor of Science in Nursing at the University of Dodoma who were registered at the university during the time of the study. Such students lived on or off campus and had not been exposed to Pecha Kucha (PK) training despite having regular classroom attendance. These criteria enhanced the enrollment of an adequate study sample from each study program, the monitoring of the study intervention, and the control of confounders.

Exclusion criteria

All students recruited into the study were assessed at baseline, exposed to the training package, and assessed for their post-intervention learning experience. None of the study participants dropped out of the study or failed to meet the recruitment criteria.

Sample size determination

A quasi-experimental study on Pecha Kucha as an alternative to traditional PowerPoint presentations at Worcester University, United States of America, reported significantly higher student engagement during Pecha Kucha presentations than during traditional PowerPoint presentations [ 29 ]. The mean engagement score for the classroom with the traditional PowerPoint presentation was 2.63, while that for the Pecha Kucha presentation was 4.08. This study adopted the formula used to calculate the required sample size for an uncontrolled quasi-experimental study among preschoolers [ 39 ].

Where: Zα was set at 1.96 from the normal distribution table;

Zβ corresponds to the power of the study, set at 0.80;

Mean zero (π0) was the mean audience-engagement score using the PowerPoint presentation = 2.63;

Mean one (π1) was the mean audience-engagement score using the Pecha Kucha presentation = 4.08.

Sampling technique

Given the availability of higher-training institutions in the study area that offer undergraduate nursing programs, a simple random sampling technique was used. Two cards, one labelled “University of Dodoma” and the other labelled “St. John’s University of Tanzania,” were prepared and put in the first pot. Two other cards, one labelled “Yes” to represent selection as the study setting and the other labelled “No” to represent non-selection, were put in the second pot. Two research assistants were asked to select a card from each pot, and consequently the University of Dodoma was selected as the study setting.

To obtain the target population, the study employed purposive sampling to select the School of Nursing and Public Health at the University of Dodoma. Upon arriving at the School of Nursing and Public Health, convenience sampling was employed to obtain the classes of undergraduate nursing students pursuing a Diploma in Nursing or a Bachelor of Science in Nursing. The study sample comprised the students who were available at the time of the study. A total of five (5) classes were obtained: Diploma in Nursing (first and third years) and Bachelor of Science in Nursing (first, second, and third years).

To establish a minimum representative sample from each class, the number of students by sex was obtained from each class list using proportionate stratified sampling (sample size / population size × stratum size), as recommended by scholars [ 40 ]. To recruit the required sample from each class by gender, simple random sampling through the lottery method was employed for each stratum. During this phase, the student lists by gender were obtained from each class, and cards with code numbers, mixed with empty cards according to the stratum size, were allocated for each class and stratum. Both labeled and empty cards were put into different pots, labeled appropriately with their class and stratum names. Upon arriving at the specific classroom, and after the introduction, the research assistant asked each nursing student to pick one card from the respective stratum pot. Those who selected cards with code numbers were recruited into the study, with their code numbers as their participation identity numbers. The process continued for each class until the required sample size was obtained.
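The sketch below illustrates the proportionate allocation and lottery steps described above; the class and gender strata counts are hypothetical, not the study's actual enrollment figures:

```python
import random

def proportionate_allocation(total_sample, strata_sizes):
    """Allocate a total sample across strata via
    sample size / population size * stratum size (rounded)."""
    population = sum(strata_sizes.values())
    return {name: round(total_sample / population * size)
            for name, size in strata_sizes.items()}

# Hypothetical class-by-gender strata counts, for illustration only.
strata = {"BScN1-male": 120, "BScN1-female": 180,
          "DipN1-male": 60, "DipN1-female": 90}
quotas = proportionate_allocation(230, strata)

# Lottery method: simple random sampling within each stratum.
rosters = {name: [f"{name}-{i}" for i in range(1, size + 1)]
           for name, size in strata.items()}
sample = {name: random.sample(rosters[name], quotas[name]) for name in strata}

print(quotas)  # note: rounding can make quotas sum slightly off the target
```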

To ensure the effective participation of nursing students in the study, the research assistant worked hand in hand with the facilitators and lecturers of the respective classrooms, the head of department, and class representatives. The importance, advantages, and disadvantages of participating in the study were explained to participants during recruitment in order to create awareness and remove possible fears. During the intervention, participants were given pens and notebooks to enable them to take notes. Moreover, snacks were provided during the training sessions. The number of participants from each classroom and the sampling process are shown in Fig. 2 [ 7 ].

Figure 2. Flow pattern of participant sampling procedures

Data collection tools

The study adapted and modified the students’ questionnaire on presentation skills from previous scholars [ 20 , 23 , 26 , 27 , 28 , 29 ]. The modification involved rephrasing question statements, breaking items down into more specific questions, deleting repeated items found to measure the same variables, and improving the language to match the literacy level and cultural norms of the study participants.

The data collection tool consisted of 68 question items assessing the socio-demographic characteristics of the study participants and 33 question items rated on a five-point Likert scale (5 = strongly agree, 4 = agree, 3 = not sure, 2 = disagree, 1 = strongly disagree). The tool was used to assess the students’ skills during the preparation and presentation of assignments using the traditional PowerPoint and Pecha Kucha presentation formats.

The assessment specifically focused on the students' ability to prepare the presentation content, master the learning content, share presentation materials, and communicate their understanding to audiences in the classroom context.

Validity and reliability of research instruments

Validity of a research instrument refers to whether the instrument measures the behaviors or qualities it is intended to measure, and it is a measure of how well the instrument performs its function [41]. The structured questionnaire, which was intended to assess the participants' presentation skills, was validated for face and content validity. The principal investigator initially adapted the question items for the different domains of students' learning involved in preparing and presenting an assignment in the classroom.

The items were shared with and discussed by two (2) educationists, two (2) research experts, one (1) statistician, and the supervisors in order to ensure the clarity, appropriateness, adequacy, and coverage of presentation skills under the Pecha Kucha presentation format. Content validation continued until saturation of the experts' opinions and inputs was achieved. An inter-observer rating scale on a five-point Likert scale, ranging from 5 = very relevant to 1 = not relevant, was also used.

The process involved adding, deleting, correcting, and editing items for the relevance, appropriateness, and scope of the content for the study participants. Some question items were broken down into more specific questions, and new domains evolved. Question items found to measure the same variables were deleted to ease data collection and analysis. Moreover, grammar and language issues were resolved for clarity based on the literacy level of the study participants.

Reliability of a research instrument refers to its ability to provide similar and consistent results when applied at different times and under different circumstances [41]. This study adapted the tools and question items used by different scholars to assess the impact of Pecha Kucha presentations (PKP) on student learning [12, 15, 18].

To ensure the reliability of the tools, a pilot study was conducted in one of the nursing training institutions in order to assess the complexity, readability, clarity, completeness, length, and duration of the tool. Ambiguous and difficult (left unanswered) items were modified or deleted based on the consensus reached with the consulted experts and supervisor before the questionnaires were subjected to a pre-test.

The pilot study involved 10% of undergraduate nursing students from an independent geographical location. The pilot findings were subjected to exploratory factor analysis (factor loadings set at ≥ 0.3) and scale analysis to determine the internal consistency of the tools, with a Cronbach's alpha of ≥ 0.7 considered reliable [42, 43, 44]. Furthermore, after data collection, scale analysis was computed to assess internal consistency using SPSS version 26, whereby the Cronbach's alpha for the question items assessing the participants' presentation skills was 0.965.
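For illustration, Cronbach's alpha can be computed directly from a respondents-by-items response matrix. The sketch below uses simulated Likert data, so the resulting alpha value is meaningless in itself; only the formula mirrors the scale analysis described above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = question items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
# Simulated 5-point Likert responses: 230 respondents x 33 items.
# Random data yields alpha near zero; real scale data would be used here.
data = rng.integers(1, 6, size=(230, 33)).astype(float)
print(round(cronbach_alpha(data), 3))  # >= 0.7 would be considered reliable
```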

Data collection method

The study used a researcher-administered questionnaire to collect the participants' socio-demographic information, co-related factors, and presentation skills as the nursing students prepared and presented their assignments in the classroom. This enhanced clarity and the participants' understanding of all question items before they provided their responses. The data were collected by the research assistants in the classroom, with the study participants seated apart to ensure privacy, confidentiality, and the quality of the information provided. The research assistant guided the study participants in answering the questions and filling in the questionnaire for each section, domain, and question item. The research assistant also collected the baseline information (pre-test) before the intervention, which was later compared with the post-intervention information. This was done in the first week of June 2023, after the research assistants had been trained and oriented on the data collection tools and the study participants had been recruited.

Using the researcher-administered questionnaire, the research assistant also collected the participants' information related to presentation skills as they prepared and presented their given assignments after the intervention, during the second week of July 2023. The participants submitted their presentations to the principal investigator and research assistant for assessment of organization, visual appeal and creativity, content knowledge, and adherence to Pecha Kucha presentation requirements. Furthermore, the participants' ability to share and communicate the given assignment was observed during the classroom presentation using the Pecha Kucha presentation format.

Definitions of variables

Pecha Kucha presentation

This refers to a specific style of presentation in which the presenter delivers the content using 20 slides dominated by images, pictures, tables, or figures. Each slide is displayed for 20 s, making a total of 400 s (6 min and 40 s) for the whole presentation.

Presentation skills in this study

This involved the students' ability to plan, prepare, master the learning content, create presentation materials, and share them with peers or the audience in the classroom. These constitute the learning activities that stimulate creativity, innovation, critical thinking, and problem-solving skills.

Measurement of Pecha Kucha preparation and presentation skills

The students' presentation skills were measured across four (4) learning domains. The first domain comprised the students' ability to plan and prepare the presentation content. It consisted of 17 question items that assessed the students' ability to gather and select information, search for the specific content to be presented in the classroom, draw the learning content from different resources, and search for literature materials when preparing the assignment under the traditional PowerPoint and Pecha Kucha formats. It also aimed to ascertain a deeper understanding of the content or topic, learning ownership, motivation to learn the topics with clear understanding, and the ability to identify the relevant audience and to segregate and remove unnecessary content under the Pecha Kucha format.

The second domain comprised the students' mastery of learning during the preparation and presentation of the assignment before the audience in the classroom. It consisted of six (6) question items that measured the students' ability to read the content several times, rehearse before the classroom presentation, and practice the assignment and presentation intensively. It also measured the students' ability to evaluate and revise the selected information and content before the actual presentation under the Pecha Kucha format.

The third domain comprised the students' ability to prepare the presentation materials. It consisted of six (6) question items that measured the students' ability to organize the information and content, prepare the classroom presentation, revise and edit the presentation resources, materials, and content, and consider the audience and classroom design. The fourth domain comprised the students' ability to share their learning. It consisted of four (4) question items that measured the students' ability to communicate their learning to the audience, present new understanding, transfer the learning to the audience, and answer questions about the topic or assignment given. Each item was measured on a 5-point Likert scale. Average scores were computed for each domain, and an overall mean score was calculated across all domains. Additionally, an encompassing skills score was derived from the cumulative scores of all four domains, providing a comprehensive evaluation of the overall skill level.
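The scoring just described (per-domain means, an overall mean across domains, and a cumulative skills score) can be expressed compactly. This is a minimal sketch with invented column names and ratings; only the four-domain structure and the 5-point scale follow the text.

```python
import pandas as pd

# Hypothetical responses: each cell is a 1-5 Likert rating.
df = pd.DataFrame({
    "prep_q1": [4, 5, 3], "prep_q2": [5, 4, 4],        # planning/preparation items
    "mastery_q1": [4, 4, 5], "mastery_q2": [5, 5, 4],  # mastery-of-content items
})
domains = {
    "preparation": ["prep_q1", "prep_q2"],
    "mastery": ["mastery_q1", "mastery_q2"],
}
for name, cols in domains.items():
    df[f"{name}_mean"] = df[cols].mean(axis=1)       # average score per domain
mean_cols = [f"{name}_mean" for name in domains]
df["overall_mean"] = df[mean_cols].mean(axis=1)      # overall mean across domains
df["skills_score"] = df[mean_cols].sum(axis=1)       # cumulative (encompassing) score
print(df[["overall_mean", "skills_score"]])
```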

Implementation of intervention

The implementation of the study involved training the research assistants, sampling the study participants, setting the venue, pre-assessing the students' presentation skills using traditional PowerPoint presentations, training and demonstrating Pecha Kucha presentations to study participants, and assigning topics to study participants. It also involved the participants' submission of their assignments to the principal investigator for evaluation, the participants' presentation of their assigned topics using the Pecha Kucha format, the post-intervention assessment of the students' presentation skills, data analysis, and reporting [7]. The intervention involved the principal investigator and two (2) trained research assistants and was based on the multimedia theory of cognitive learning (MTCL) for enhancing effective learning in the 21st century.

Training of research assistants

Two research assistants were trained on the principles, characteristics, and format of Pecha Kucha presentations using the curriculum from the official Pecha Kucha website. The research assistants were also oriented on the data collection tools and methods to guarantee the relevant and appropriate collection of the participants' information.

Schedule and duration of training among research assistants

The principal investigator (PI) prepared the training schedule and venue after negotiation and consensus with the research assistants. Moreover, the PI trained the research assistants to assess the learning, collect the data using the questionnaire, and maintain the privacy and confidentiality of the study participants.

Descriptions of interventions

The intervention was conducted among nursing students at the University of Dodoma, located in the Dodoma Region of mainland Tanzania, after obtaining their consent. The participants were trained on the concepts, principles, and characteristics of Pecha Kucha presentations and on how to prepare and present their assignments using the Pecha Kucha presentation format. The study participants were also trained on the advantages and disadvantages of Pecha Kucha presentations. The training was accompanied by one example of an ideal Pecha Kucha presentation on the concept of pressure ulcers. The teaching methods included lecturing, brainstorming, and small group discussion. After the training session, an evaluation was conducted to assess the participants' understanding of the Pecha Kucha concept, its characteristics, and its principles.

Each participant was given a topic as an assignment drawn from the fundamentals of nursing, medical nursing, surgical nursing, community health nursing, mental health nursing, emergency and critical care, pediatric, reproductive, and child health, midwifery, communicable diseases, non-communicable diseases, orthopedics, and cross-cutting issues in nursing, as recommended by scholars [21, 38]. The study participants were given 14 days to prepare, rehearse their presentations in the Pecha Kucha format, and submit the prepared slides to the research assistant and principal investigator for evaluation and arrangement before the actual classroom presentation. The evaluation of the participants' assignments considered the number of slides, the quality of the images used, the number of words, the organization of the content and messages to be delivered, slide transitions, the duration of the presentation, and the flow and organization of the slides.

Afterwards, each participant was given 6 min and 40 s for the presentation and 5 to 10 min for answering questions on the topic presented, as raised by other participants. On average, four participants were able to present their assignments in the classroom every hour. After the completion of all presentations, the research assistants assessed the participants' presentation skills using the researcher-administered questionnaire. The collected data were entered into SPSS version 26 and analyzed to compare the mean score of the participants' presentation skills with the baseline mean score. The intervention sessions were conducted in selected classrooms able to accommodate all participants, at times arranged by the participants' coordinators, institution administrators, and subject facilitators of the University of Dodoma, as described in Table 1 [7].

Evaluation of intervention

During the classroom presentation, 5 to 10 min were allocated for classroom discussion and reflection on the content presented, guided by the research assistant. During this time, the participants were given the opportunity to ask questions, seek clarification from the presenter, and provide their opinions on how the instructional messages were presented, the content coverage, areas of strength and weakness for improvement, and academic growth. After the completion of the presentation sessions, the research assistant administered the questionnaire to participants in order to determine their presentation skills during the preparation of their assignments and classroom presentations using the Pecha Kucha presentation format.

Data analysis

The findings from this study were analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 26. Percentages, frequencies, frequency distributions, means, standard deviations, skewness, and kurtosis were calculated, and the results were presented using figures, tables, and graphs. Descriptive statistical analysis was used to analyze the demographic information of the participants and to determine the frequencies, percentages, and mean scores of their distributions. A paired sample t-test was used to compare the mean score differences in presentation skills within the group before and after the intervention. The mean score differences were determined by comparing the baseline scores against the post-intervention scores in order to establish any change in presentation skills among the study participants.
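A minimal sketch of this within-group comparison, using simulated pre- and post-intervention scores: the paired t-test comes from SciPy, and the effect size is computed as the mean change divided by the standard deviation of the change (Cohen's d for paired samples), which is consistent with how the effect sizes reported later relate to the mean changes. All numbers here are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(4.07, 0.56, size=230)         # baseline skills (illustrative)
post = pre + rng.normal(0.47, 0.60, size=230)  # change chosen so d is near 0.78

t_stat, p_value = stats.ttest_rel(post, pre)   # paired sample t-test
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)      # mean change / SD of change
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```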

The association between the Pecha Kucha presentation and the development of participants' presentation skills was established using linear regression analysis at a 95% confidence interval and a 5% (≤ 0.05) significance level in order to accept or reject the null hypothesis.

In addition, N-1 dummy variables were formed for the categorical independent variables in order to run the linear regression for the factors associated with presentation skills. The linear regression equation with dummy variables is presented as follows:

Y = β0 + β1X1,1 + β2X1,2 + … + βk-1X1,k-1 + βkX2 + βk+1X3 + ε

where:

β0 is the intercept;

β1, β2, …, βk-1 are the coefficients corresponding to the dummy variables representing the levels of X1;

βk is the coefficient corresponding to the dummy variable representing the levels of X2;

βk+1 is the coefficient corresponding to the continuous predictor X3;

X1,1, X1,2, …, X1,k-1 are the dummy variables corresponding to the different levels of X1;

ε represents the error term.

The coefficients β1, β2, …, βk indicate the change in the expected value of Y for each category relative to the reference category. A positive beta estimate for a categorical (dummy) variable means that the corresponding covariate has a positive effect on the outcome variable compared to the reference category. A positive beta estimate for a continuous covariate means that the covariate has a directly proportional effect on the outcome variable.
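The N-1 dummy-coding scheme and the coefficient interpretation above can be reproduced with standard tooling. The sketch below uses invented variable names and simulated data; it illustrates the coding scheme only and is not the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "program": rng.choice(["diploma", "bsc"], size=200),         # categorical X1
    "year": rng.choice(["first", "second", "third"], size=200),  # categorical X2
    "knowledge": rng.normal(50, 10, size=200),                   # continuous X3
})
df["skills"] = rng.normal(4.0, 0.6, size=200)                    # outcome Y (illustrative)

# drop_first=True creates N-1 dummies per factor, leaving a reference category
X = pd.get_dummies(df[["program", "year"]], drop_first=True, dtype=float)
X["knowledge"] = df["knowledge"]
X = sm.add_constant(X)  # adds the intercept term beta_0

model = sm.OLS(df["skills"], X).fit()
print(model.summary())  # each beta is the change relative to its reference category
```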

The distribution of the outcome variables was approximately normal, satisfying the normality requirement for parametric analysis. A paired t-test was then performed to compare the presentation skills of nursing students before and after the intervention.

Socio-demographic characteristics of the study participants

The study involved a total of 230 nursing students, of whom 151 (65.65%) were male and the rest were female. The mean age of the study participants was 23.03 ± 2.69 years, with a minimum age of 19 and a maximum age of 37. A total of 163 (70.87%) students, comprising the largest proportion of respondents, were aged 23 or younger; 215 (93.48%) participants were living on campus; and 216 (93.91%) participants were exposed to social media.

A large number of the study participants (82.17%) were pursuing a Bachelor of Science in Nursing, with the largest share being first-year students (30.87%). A total of 213 (92.61%) study participants had Form Six education as their entry qualification, and 176 (76.52%) were products of public secondary schools and interested in the nursing profession. Lastly, 121 (52.61%) study participants had never been exposed to any presentation training, 215 (93.48%) had access to individual classroom presentations, and 227 (98.70%) had access to group presentations during their learning process. The detailed findings on the participants' socio-demographic information are indicated in Table 2 [46].

Baseline nursing students' presentation skills using traditional PowerPoint presentations

The current study assessed the participants' presentation skills when preparing and presenting materials before an audience using traditional PowerPoint presentations. The study revealed that the overall mean score of the participants' presentation skills was 4.07 ± 0.56, including a mean score of 3.98 ± 0.62 for the participants' skills in preparing the presentation content before the classroom presentation and a mean score of 4.18 ± 0.78 for the participants' mastery of the learning content before the classroom presentation. Moreover, the study revealed a mean score of 4.07 ± 0.71 for the participants' ability to prepare presentation materials for classroom presentations and a mean score of 4.04 ± 0.76 for the participants' ability to share the presentation materials in the classroom, as indicated in Table 3 [46].

Factors associated with participants' presentation skills through traditional PowerPoint presentation

The current study revealed that the participants' study program had a significant effect on their presentation skills, whereby pursuing the Bachelor of Science in Nursing was associated with a 0.37561 increase in presentation skills (p = 0.027). The year of study also had a significant effect, whereby being a second-year bachelor student was associated with a 0.34771 increase in presentation skills (p = 0.0022) compared to first-year bachelor students and diploma students. Depending on loans as a source of student income was associated with a 0.24663 decrease in presentation skills (p = 0.0272) compared to not depending on loans. Furthermore, exposure to individual presentations had a significant effect, whereby having the opportunity for individual presentations was associated with a 0.33732 increase in presentation skills (p = 0.0272) through traditional PowerPoint presentations, as shown in Table 4 [46].

Nursing students' presentation skills through Pecha Kucha presentations

The current study assessed the participants' presentation skills when preparing and presenting materials before an audience using Pecha Kucha presentations. The study revealed that the overall mean score of the participants' presentation skills using the Pecha Kucha presentation format was 4.54 ± 0.59, including a mean score of 4.49 ± 0.66 for the participants' skills in preparing the content before the classroom presentation and a mean score of 4.58 ± 0.65 for the participants' mastery of the learning content before the classroom presentation. Moreover, the study revealed a mean score of 4.58 ± 0.67 for the participants' ability to prepare the presentation materials for the classroom presentation and a mean score of 4.51 ± 0.72 for the participants' ability to share the presentation materials in the classroom using the Pecha Kucha presentation format, as indicated in Table 5 [46].

Comparing mean scores of participants' presentation skills between traditional PowerPoint presentation and Pecha Kucha presentation

The current study computed a paired t-test to compare and determine the mean change, effect size, and significance associated with the participants' presentation skills when using the traditional PowerPoint and Pecha Kucha presentation formats. The study revealed that the mean score of the participants' presentation skills through the Pecha Kucha presentation was 4.54 ± 0.59, compared to 4.07 ± 0.56 through the traditional PowerPoint presentation (p < 0.0001), with an effect size of 0.78. With regard to the skills in preparing the presentation content before the classroom presentation, the mean score was 4.49 ± 0.66 using the Pecha Kucha presentation, compared to 3.98 ± 0.62 using the traditional PowerPoint presentation; the mean change was 0.51 ± 0.84 (p < 0.0001), with an effect size of 0.61.

Regarding the participants' mastery of the learning content before the classroom presentation, the mean score was 4.58 ± 0.65 when using the Pecha Kucha presentation format, compared to 4.18 ± 0.78 when using the traditional PowerPoint presentation; the mean change was 0.40 ± 0.27 (p < 0.0001), with an effect size of 1.48. Regarding the participants' ability to prepare the presentation materials for classroom presentations, the mean score was 4.58 ± 0.67 when using the Pecha Kucha presentation format, compared to 4.07 ± 0.71 when using the traditional PowerPoint presentation; the mean change was 0.51 ± 0.96 (p < 0.0001), with an effect size of 0.53.

Regarding the participants' skills in sharing the presentation material in the classroom, the mean score was 4.51 ± 0.72 when using the Pecha Kucha presentation format, compared to 4.04 ± 0.76 when using the traditional PowerPoint presentation; the mean change was 0.47 ± 0.10, with a large effect size of 4.7. Therefore, the Pecha Kucha presentation pedagogy had a more significant effect on the participants' presentation skills than the traditional PowerPoint presentation, as shown in Table 6 [46].
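As an arithmetic check, these effect sizes are consistent with the paired-samples formula in which the effect size is the mean change divided by the standard deviation of the change; for the sharing domain above:

$$d = \frac{\bar{d}}{s_d} = \frac{0.47}{0.10} = 4.7$$

The same relation appears to reproduce the other domains' values (for example, 0.40 / 0.27 ≈ 1.48).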

Factors associated with presentation skills among nursing students through Pecha Kucha presentation

The current study revealed that the participants' presentation skills using the Pecha Kucha presentation format were significantly associated with knowledge of the Pecha Kucha presentation format, whereby each unit increase in knowledge was associated with a 0.0239 increase in presentation skills (p < 0.0001). Moreover, unlike with the traditional PowerPoint presentation, the year of study did not favor presentation skills under the Pecha Kucha format, whereby being a second-year student was associated with a 0.23093 decrease in presentation skills (p = 0.039). Other factors are shown in Table 7 [46].

Socio-demographic characteristic profiles of participants

The proportion of male participants was larger than that of female participants in the current study. This was attributable to the sex distribution of nursing students at the university under study, where more male than female nursing students were enrolled. This demonstrates the high rate of enrolment of male nursing students in higher training institutions to pursue nursing and midwifery education programs. Unlike in previous years, when nursing training institutions were predominantly comprised of female students and female nurses in different settings, this significant increase in male nursing student enrollment predicts a significant increase in the male nursing workforce in different settings in the future.

These findings differ from those of an experimental study of Pecha Kucha as an alternative to PowerPoint presentations among English language students in Massachusetts, where the proportion of female participants was larger than that of male participants [29]. They also differ from the results of a randomized controlled study among nursing students in Ankara, Turkey, where a large proportion of participants were female nursing students [47]. This difference in participants' sex may be associated with differences in the socio-cultural beliefs of the study settings and in each country's socio-economic status, which influence participants to join the nursing profession on the basis of securing employment easily, obtaining an opportunity abroad, or pressure from peers and parents. Nevertheless, such differences reflect decreased stereotypes towards male nurses in the community and the better performance of male students in science subjects compared to female students in the country.

The study participants were predominantly young adults with advanced secondary education. Their ages reflect adherence to the national education policy regarding the appropriate age of enrollment of pupils in primary and secondary schools, which feed students into higher training institutions. This age range suits the cognitive capability expected of the participants to demonstrate different survival and life skills by setting learning goals and developing strategies to achieve them, according to Jean Piaget's theory of cognitive learning [41, 42].

Similar age groups were noted in a randomized controlled study among nursing students in Ankara, Turkey, where the average age was 19.05 ± 0.2 [47]. A similar age group was also found in a randomized controlled study among liberal arts students on differences in instructor, presenter, and audience ratings of Pecha Kucha and traditional student presentations, where the participants' ages ranged between 19 and 22 years [49].

Lastly, a large proportion of the study participants had opportunities for individual and group presentations in the classroom despite not having been exposed to any presentation training before. This implies that the teaching and learning process in the nursing education program is participatory and student-centered, giving students the opportunity to interact with learning content, peers, experts, webpages, and other learning resources to become knowledgeable. These findings fit the principle of guiding and facilitating students' learning from peers and teachers according to the constructivist theory of learning by Lev Vygotsky [48].

Effects of Pecha Kucha presentation pedagogy on participants' presentation skills

The participants' presentation skills were higher with Pecha Kucha presentations than with traditional PowerPoint presentations. The Pecha Kucha presentation style enables nursing students to prepare the learning content, master it before the classroom presentation, create good presentation materials, and present those materials before an audience in the classroom. This finding is similar to that from Padang State University, Indonesia, among first-year English and literature students, where the Pecha Kucha presentation format helped students improve their presentation skills [20]. Pecha Kucha was also found to facilitate careful selection of the topic, organization and outlining of the students' ideas, selection of appropriate images, preparation of presentations, rehearsing, and delivery of the presentations before an audience in a qualitative study among English language students at a private university in Manila, Philippines [23].

The current study found that Pecha Kucha presentations enable students to perform literature searches across different webpages, journals, and books to identify the specific content for classroom presentations to a greater extent than traditional PowerPoint presentations. This is triggered by the format's demand that students filter the relevant, specific information to be included in the presentation and search for appropriate images, pictures, or figures to present before the audience. Pecha Kucha presentations were likewise found to increase the ability to perform literature searches before classroom presentations, compared to traditional PowerPoint presentations, in an experimental study among English language students at Worcester State University [29].

The current study revealed that Pecha Kucha presentations enable students to create a well-structured classroom presentation by designing 20 meaningful, content-rich slides containing 20 images, pictures, or figures with a transitional flow of 20 s per slide, in contrast to the traditional PowerPoint presentation with an unlimited number of slides containing text-heavy bullet points. Similarly, in a cross-sectional study of medical students in India, Pecha Kucha presentations were found to help first-year undergraduate medical students learn how to organize knowledge in a sequential fashion [26].

The current study revealed that Pecha Kucha presentations enhance sound mastery of the learning content and presentation materials before the classroom presentation, compared with traditional PowerPoint presentations. This is hastened by the absence of slide reading during a classroom Pecha Kucha presentation, which forces students to read the content several times, rehearse, and practice the presentation materials intensively beforehand. Similarly, the Pecha Kucha presentation required first-year English and literature students to practice extensively before their classroom presentations in a descriptive qualitative study at Padang State University, Indonesia [20].

The current study revealed that the participants became more confident in answering questions about the topic during classroom presentations using the Pecha Kucha presentation style than during presentations using the traditional PowerPoint format. This is precipitated by the level of mastery of the presentation content and materials achieved through rehearsal, re-reading, and synthesis of the materials before the classroom presentation. Moreover, Pecha Kucha was found to significantly increase students' confidence during classroom presentation and preparation in a qualitative study among English language students at a private university in Manila, Philippines [23].

Hence, there was enough evidence to reject the null hypothesis that there was no significant difference in nursing students' presentation skills between the baseline and the endline. The Pecha Kucha presentation format has a significant effect on nursing students' classroom presentation skills, as it enables them to prepare the learning content, master it well, create presentation materials, and confidently share their learning with the audience in the classroom.

The current study's findings complement the available evidence on the effects of Pecha Kucha presentations on students' learning and on the development of survival life skills in the 21st century. Pecha Kucha presentations have more significant effects on students' presentation skills than traditional PowerPoint presentations. They enable students to select the topic carefully, organize and outline the presentation ideas, select appropriate images, create presentations, rehearse them, and deliver them confidently before an audience. They also enable students to select and organize the learning content for classroom presentations better than traditional PowerPoint presentations.

Pecha Kucha presentations enhance mastery of the learning content by encouraging students to read the content several times, rehearse, and practice hard before the actual classroom presentation. They increase students' ability to perform literature searches before the classroom presentation, compared to traditional PowerPoint presentations, and enable students to create well-structured classroom presentations more effectively. Furthermore, Pecha Kucha presentations make students confident when presenting their assignments and project work before an audience and when answering questions.

Lastly, Pecha Kucha presentations enhance creativity among students by giving them the opportunity to decide on the learning content to be presented. Specifically, students are able to select the learning content and appropriate images, pictures, or figures; organize and structure the presentation slides into a meaningful, transitional flow of ideas; and rehearse and practice individually before the actual classroom presentation.

Strengths of the study

This study addressed a pedagogical gap in nursing training and education by providing new insights into an innovative student presentation format that engages students actively in their learning to bring about meaningful and effective learning. It also managed to recruit, assess, and provide the intended intervention to 230 nursing students without dropout.

Study limitations

The current study has pointed out some of the strengths of Pecha Kucha presentations for students' presentation skills over traditional student presentations. However, the study had the following limitations. It involved one group of nursing students from one public training institution in Tanzania. The use of one university may obscure the interpretation of the size of the intervention's effects on the outcome variables of interest, thus limiting the generalization of the study findings to all training institutions in Tanzania. Likewise, the use of one group of nursing students from one university to explore their learning experience through different presentation formats may limit the generalization of the findings to all nursing students in the country, given differences in socio-demographic characteristics, learning environments, and teaching and learning approaches. The findings from this study therefore need to be interpreted with these limitations in mind.

Suggestions for future research

Future research should address the current study's limitations and extend the areas assessed to different study settings and different characteristics of nursing students in Tanzania, as follows. To rigorously test the effects of Pecha Kucha presentations in enhancing nursing students' learning, future studies should involve nursing students from several health training institutions rather than one. Future studies should also use control groups by randomly allocating nursing students or training institutions to intervention and control groups in order to compare students' learning experiences with Pecha Kucha presentations and PowerPoint presentations, respectively. Lastly, future studies should focus on nursing students' mastery of content knowledge and classroom performance when the Pecha Kucha presentation format is used in the teaching and learning process.

Data availability

The datasets generated and analyzed by this study can be obtained from the corresponding author on reasonable request through [email protected] & [email protected].

Abbreviations

Doctor (PhD)

MTCL: Multimedia Theory of Cognitive Learning

NACTVET: National Council for Technical and Vocational Education and Training

PI: Principal Investigator

PKP: Pecha Kucha Presentation

SPSS: Statistical Package for Social Sciences

TCU: Tanzania Commission for Universities

WHO: World Health Organization

References

1. International Council of Nurses. Nursing Care Continuum Framework and Competencies. 2008.

2. Partnership for 21st Century Skills. 21st Century Skills, Education & Competitiveness: A Resource and Policy Guide. 2008. https://files.eric.ed.gov/fulltext/ED519337.pdf

3. Partnership for 21st Century Skills. 21st Century Knowledge and Skills in Educator Preparation. 2010. https://files.eric.ed.gov/fulltext/ED519336.pdf

4. Partnership for 21st Century Skills. A State Leader's Action Guide to 21st Century Skills: A New Vision for Education. 2006. http://apcrsi.pt/website/wp-content/uploads/20170317_Partnership_for_21st_Century_Learning.pdf

5. World Health Organization. Four-Year Integrated Nursing and Midwifery Competency-Based Prototype Curriculum for the African Region. WHO Regional Office for Africa; 2016. https://apps.who.int/iris/bitstream/handle/10665/331471/9789290232612-eng.pdf?sequence=1&isAllowed=y

6. World Health Organization. Three-Year Regional Prototype Pre-Service Competency-Based Nursing Curriculum. 2016. https://apps.who.int/iris/bitstream/handle/10665/331657/9789290232629-eng.pdf?sequence=1&isAllowed=y

7. Haramba SJ, Millanzi WC, Seif SA. Enhancing nursing student presentation competences using facilitatory Pecha Kucha presentation pedagogy: a quasi-experimental study protocol in Tanzania. BMC Med Educ. 2023;23(1):628. https://doi.org/10.1186/s12909-023-04628-z

8. Millanzi WC, Osaki KM, Kibusi SM. Non-cognitive skills for safe sexual behavior: an exploration of baseline abstinence skills, condom use negotiation, self-esteem, and assertiveness skills from a controlled problem-based learning intervention among adolescents in Tanzania. Glob J Med Res. 2020;20(10):1–18.

9. Millanzi WC, Herman PZ, Hussein MR. The impact of facilitation in a problem-based pedagogy on self-directed learning readiness among nursing students: a quasi-experimental study in Tanzania. BMC Nurs. 2021;20(242):1–11.

10. Millanzi WC, Kibusi SM. Exploring the effect of problem-based facilitatory teaching approach on metacognition in nursing education: a quasi-experimental study of nurse students in Tanzania. Nurs Open. 2020;7:1431–45.

11. Millanzi WC, Kibusi SM. Exploring the effect of problem based facilitatory teaching approach on motivation to learn: a quasi-experimental study of nursing students in Tanzania. BMC Nurs. 2021;20(1):3. https://doi.org/10.1186/s12912-020-00509-8

12. Hadiyanti KMW, Widya W. Analyzing the values and effects of PowerPoint presentations. LLT J: J Lang Lang Teach. 2018;21(Suppl):87–95.

13. Nichani A. Life after death by PowerPoint: PechaKucha to the rescue? J Indian Soc Periodontol. 2014;18(2):127. http://www.jisponline.com/text.asp?2014/18/2/127/131292

14. Uzun AM, Kilis S. Impressions of pre-service teachers about use of PowerPoint slides by their instructors and its effects on their learning. Int J Contemp Educ Res. 2019.

15. UNESCO National Commission of the United Republic of Tanzania. UNESCO National Commission Country Report Template: Higher Education Report. 2022.

16. TCU. VitalStats on University Education in Tanzania 2020. 2021. https://www.tcu.go.tz/sites/default/files/VitalStats 2020.pdf

17. Kwame A, Petrucka PM. A literature-based study of patient-centered care and communication in nurse-patient interactions: barriers, facilitators, and the way forward. BMC Nurs. 2021;20(1):158. https://doi.org/10.1186/s12912-021-00684-2

18. Kourkouta L, Papathanasiou I. Communication in nursing practice. Mater Socio Medica. 2014;26(1):65. http://www.scopemed.org/fulltextpdf.php?mno=153817

19. Foulkes M. Presentation skills for nurses. Nurs Stand. 2015;29(25):52–8. https://doi.org/10.7748/ns.29.25.52.e9488

20. Solusia C, Kher DF, Rani YA. The use of the Pecha Kucha presentation method in the speaking for informal interaction class. 2020;411(ICOELT 2019):190–4.

21. Sen G. What is PechaKucha in teaching and how does it work? Clear facts about PechaKucha in classroom. Asian College of Teachers; 2016. https://www.asiancollegeofteachers.com/blogs/452-What-is-PechaKucha-in-Teaching-and-How-Does-It-Work-Clear-Facts-About-PechaKucha-in-Classroom-blog.php

22. Pecha Kucha website. Pecha Kucha School. 2022. https://www.pechakucha.com/schools

23. Mabuan RA. Developing ESL/EFL learners' public speaking skills through Pecha Kucha presentations. Engl Rev J Engl Educ. 2017;6(1):1.

24. Laieb M, Cherbal A. Improving speaking performance through Pecha Kucha presentations among Algerian EFL learners: the case of secondary school students. Jijel: University of Mohammed Seddik Ben Yahia; 2021.

25. Angelina P. Improving Indonesian EFL students' speaking skill through Pecha Kucha. LLT J: J Lang Lang Teach. 2019;22(1):86–97. https://e-journal.usd.ac.id/index.php/LLT/article/view/1789

26. Abraham RR, Torke S, Gonsalves J, Narayanan SN, Kamath MG, Prakash J, et al. Modified directed self-learning sessions in physiology with prereading assignments and Pecha Kucha talks: perceptions of students. Adv Physiol Educ. 2018;42(1):26–31.

27. Coskun A. The effect of Pecha Kucha presentations on students' English public speaking anxiety. Profile Issues Teach Prof Dev. 2017;19(Suppl 1):11–22. https://revistas.unal.edu.co/index.php/profile/article/view/68495

28. González Ruiz C. Student perceptions of the use of PechaKucha presentations in Spanish as a foreign language. 2016:7504–12. http://library.iated.org/view/GONZALEZRUIZ2016STU

29. Warmuth KA. PechaKucha as an alternative to traditional student presentations. Curr Teach Learn Acad J. 2021. https://www.researchgate.net/publication/350189239

30. Hayashi PMJ, Holland SJ. Pecha Kucha: transforming student presentations. Transform Lang Educ. 2017. https://jalt-publications.org/files/pdf-article/jalt2016-pcp-039.pdf

31. Solmaz O. Developing EFL learners' speaking and oral presentation skills through the Pecha Kucha presentation technique. 2019;10(4):542–65.

32. Tanzania Commission for Universities. University institutions operating in Tanzania. The United Republic of Tanzania; 2021.

33. The University of Dodoma. About us. 2022. https://www.udom.ac.tz/about

34. NACTVET. Registered institutions. The United Republic of Tanzania; 2022. https://www.nacte.go.tz/?s=HEALTH

35. TCU. University education in Tanzania 2021. VitalStats. 2022. https://www.tcu.go.tz/sites/default/files/VitalStats 2021.pdf

36. St. John University of Tanzania. About St. John University. 2022. https://sjut.ac.tz/our-university/

37. TopUniversitieslist. St John's University of Tanzania ranking. World University Rankings & Reviews. 2023. https://topuniversitieslist.com/st-johns-university-of-tanzania/

38. Tanzania Nursing and Midwifery Council. The Registration and Licensure Examination Guideline for Nurses and Midwives in Tanzania, revised version. 2020. https://www.tnmc.go.tz/downloads/

39. Salim MA, Gabrieli P, Millanzi WC. Enhancing pre-school teachers' competence in managing pediatric injuries in Pemba Island, Zanzibar. BMC Pediatr. 2022;22(1):1–13.

40. Iliyasu R, Etikan I. Comparison of quota sampling and stratified random sampling. Biometrics Biostat Int J. 2021;10(1):24–7. https://medcraveonline.com/BBIJ/comparison-of-quota-sampling-and-stratified-random-sampling.html

41. Surucu L, Ahmet M. Validity and reliability in quantitative research. Bus Manag Stud Int J. 2020;8(3):2694–726. https://bmij.org/index.php/1/article/view/1540

42. Lima E, de Barreto P, Assunção SM. Factor structure, internal consistency and reliability of the posttraumatic stress disorder checklist (PCL): an exploratory study. Trends Psychiatry Psychother. 2012;34(4):215–22.

43. Taber KS. The use of Cronbach's alpha when developing and reporting research instruments in science education. Res Sci Educ. 2018;48(6):1273–96.

44. Tavakol M, Dennick R. Making sense of Cronbach's alpha. Int J Med Educ. 2011;2:53–5.

45. Madar P, London W. Assessing the student: PechaKucha. 2013;3(2):4–10.

46. Haramba SJ, Millanzi WC, Seif SA. Enhancing nursing student presentation competencies using facilitatory Pecha Kucha presentation pedagogy: a quasi-experimental study protocol in Tanzania. BMC Med Educ. 2023;23(1):628. https://doi.org/10.1186/s12909-023-04628-z

47. Bakcek O, Tastan S, Iyigun E, Kurtoglu P, Tastan B. Comparison of PechaKucha and traditional PowerPoint presentations in nursing education: a randomized controlled study. Nurse Educ Pract. 2020;42:102695.

48. Mcleod G. Learning theory and instructional design. Learn Matters. 2001;2:35–43.

49. Warmuth KA, Caple AH. Differences in instructor, presenter, and audience ratings of PechaKucha and traditional student presentations. Teach Psychol. 2022;49(3):224–35. https://doi.org/10.1177/00986283211006389


Acknowledgements

The supervisors at the University of Dodoma, statisticians, my employer, family members, research assistants, and postgraduate colleagues are acknowledged for their support in facilitating the development and completion of this manuscript.

Funding

The source of funds for this study was the Registrar of the Tanzania Nursing and Midwifery Council (TNMC), the employer of the corresponding author. The funds supported protocol development, printing of the questionnaires, and communication during data collection, data analysis, and manuscript preparation.

Author information

Authors and Affiliations

Department of Nursing Management and Education, The University of Dodoma, Dodoma, United Republic of Tanzania

Setberth Jonas Haramba & Walter C. Millanzi

Department of Public and Community Health Nursing, The University of Dodoma, Dodoma, United Republic of Tanzania

Saada A. Seif


Contributions

S.J.H.: conceptualization, proposal development, data collection, data entry, data cleaning and analysis, and writing the original draft of the manuscript. W.C.M.: conceptualization, supervision, review, and editing of the proposal and the final manuscript. S.S.A.: conceptualization, supervision, review, and editing of the proposal and the final manuscript.

Corresponding author

Correspondence to Setberth Jonas Haramba.

Ethics declarations

Ethics approval and consent to participate

All methods were carried out in accordance with the relevant guidelines and regulations. Since the study involved the manipulation of human behaviors and practices and the exploration of human internal learning experiences, ethical clearance and permission to conduct the study were obtained from the University of Dodoma (UDOM) Institutional Research Review Ethics Committee (IRREC). Written informed consent was obtained from all participants after explaining to them the purpose of the study, the importance of participating, the significance of the findings for students' learning, and the confidentiality and privacy of the information provided. The nursing students who participated in this study benefited from knowledge of the Pecha Kucha presentation format and of how to prepare and present their assignments using it.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Haramba, S.J., Millanzi, W.C. & Seif, S.A. Effects of pecha kucha presentation pedagogy on nursing students' presentation skills: a quasi-experimental study in Tanzania. BMC Med Educ 24, 952 (2024). https://doi.org/10.1186/s12909-024-05920-2


Received: 16 October 2023

Accepted: 16 August 2024

Published: 31 August 2024

DOI: https://doi.org/10.1186/s12909-024-05920-2

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords
  • Nursing students
  • Pecha Kucha presentation pedagogy and presentation skills

BMC Medical Education

ISSN: 1472-6920


Speaker 1: Welcome to this overview of quantitative research methods. This tutorial will give you the big picture of quantitative research and introduce key concepts that will help you determine if quantitative methods are appropriate for your project study. First, what is educational research? Educational research is a process of scholarly inquiry designed to investigate the process of instruction and learning, the behaviors, perceptions, and attributes of students and teachers, the impact of institutional processes and policies, and all other areas of the educational process. The research design may be quantitative, qualitative, or a mixed methods design. The focus of this overview is quantitative methods. The general purpose of quantitative research is to explain, predict, investigate relationships, describe current conditions, or to examine possible impacts or influences on designated outcomes. Quantitative research differs from qualitative research in several ways. It works to achieve different goals and uses different methods and design. This table illustrates some of the key differences. Qualitative research generally uses a small sample to explore and describe experiences through the use of thick, rich descriptions of detailed data in an attempt to understand and interpret human perspectives. It is less interested in generalizing to the population as a whole. For example, when studying bullying, a qualitative researcher might learn about the experience of the victims and the experience of the bully by interviewing both bullies and victims and observing them on the playground. Quantitative studies generally use large samples to test numerical data by comparing or finding correlations among sample attributes so that the findings can be generalized to the population. If quantitative researchers were studying bullying, they might measure the effects of a bully on the victim by comparing students who are victims and students who are not victims of bullying using an attitudinal survey. In conducting quantitative research, the researcher first identifies the problem. For Ed.D. research, this problem represents a gap in practice. For Ph.D. research, this problem represents a gap in the literature. In either case, the problem needs to be of importance in the professional field. Next, the researcher establishes the purpose of the study. Why do you want to do the study, and what do you intend to accomplish? This is followed by research questions which help to focus the study. Once the study is focused, the researcher needs to review both seminal works and current peer-reviewed primary sources. Based on the research question and on a review of prior research, a hypothesis is created that predicts the relationship between the study's variables. Next, the researcher chooses a study design and methods to test the hypothesis. These choices should be informed by a review of methodological approaches used to address similar questions in prior research. Finally, appropriate analytical methods are used to analyze the data, allowing the researcher to draw conclusions and inferences about the data, and answer the research question that was originally posed. In quantitative research, research questions are typically descriptive, relational, or causal. Descriptive questions constrain the researcher to describing what currently exists. With a descriptive research question, one can examine perceptions or attitudes as well as more concrete variables such as achievement. 
For example, one might describe a population of learners by gathering data on their age, gender, socioeconomic status, and attributes towards their learning experiences. Relational questions examine the relationship between two or more variables. The X variable has some linear relationship to the Y variable. Causal inferences cannot be made from this type of research. For example, one could study the relationship between students' study habits and achievements. One might find that students using certain kinds of study strategies demonstrate greater learning, but one could not state conclusively that using certain study strategies will lead to or cause higher achievement. Causal questions, on the other hand, are designed to allow the researcher to draw a causal inference. A causal question seeks to determine if a treatment variable in a program had an effect on one or more outcome variables. In other words, the X variable influences the Y variable. For example, one could design a study that answered the question of whether a particular instructional approach caused students to learn more. The research question serves as a basis for posing a hypothesis, a predicted answer to the research question that incorporates operational definitions of the study's variables and is rooted in the literature. An operational definition matches a concept with a method of measurement, identifying how the concept will be quantified. For example, in a study of instructional strategies, the hypothesis might be that students of teachers who use Strategy X will exhibit greater learning than students of teachers who do not. In this study, one would need to operationalize learning by identifying a test or instrument that would measure learning. This approach allows the researcher to create a testable hypothesis. Relational and causal research relies on the creation of a null hypothesis, a version of the research hypothesis that predicts no relationship between variables or no effect of one variable on another. When writing the hypothesis for a quantitative question, the null hypothesis and the research or alternative hypothesis use parallel sentence structure. In this example, the null hypothesis states that there will be no statistical difference between groups, while the research or alternative hypothesis states that there will be a statistical difference between groups. Note also that both hypothesis statements operationalize the critical thinking skills variable by identifying the measurement instrument to be used. Once the research questions and hypotheses are solidified, the researcher must select a design that will create a situation in which the hypotheses can be tested and the research questions answered. Ideally, the research design will isolate the study's variables and control for intervening variables so that one can be certain of the relationships being tested. In educational research, however, it is extremely difficult to establish sufficient controls in the complex social settings being studied. In our example of investigating the impact of a certain instructional strategy in the classroom on student achievement, each day the teacher uses a specific instructional strategy. After school, some of the students in her class receive tutoring. Other students have parents that are very involved in their child's academic progress and provide learning experiences in the home. These students may do better because they received extra help, not because the teacher's instructional strategy is more effective. 
Unless the researcher can control for the intervening variable of extra help, it will be impossible to effectively test the study's hypothesis.

Quantitative research designs fall into two broad categories: experimental and quasi-experimental. Classic experimental designs randomly assign subjects to either a control group or a treatment group. The researcher can then compare the treatment group to the control group to test for an intervention's effect, known as a between-subjects design. It is important to note that the control group may receive a standard treatment or may receive no treatment at all. Quasi-experimental designs do not randomly assign subjects to groups, but rather take advantage of existing groups. A researcher can still have a control group and a comparison group, but assignment to the groups is not random. The use of a control group is not required; for example, the researcher may choose a design in which a single group is pre- and post-tested, known as a within-subjects design, or a single group may receive only a post-test. Since quasi-experimental designs lack random assignment, the researcher should be aware of the threats to validity.

Educational research often attempts to measure abstract variables such as attitudes, beliefs, and feelings. Surveys can capture data about these hard-to-measure variables, as well as other self-reported information such as demographic factors. A survey is an instrument used to collect verifiable information from a sample population. In quantitative research, surveys typically include questions that ask respondents to choose a rating from a scale, select one or more items from a list, or give other responses that result in numerical data.

Studies that use surveys or tests need to include strategies that establish the validity of the instrument used. There are many types of validity that need to be addressed. Face validity: does the test appear, at face value, to measure what it is supposed to measure? Content validity: this includes both item validity and sampling validity. Item validity ensures that the individual test items deal only with the subject being addressed; sampling validity ensures that the range of item topics is appropriate to the subject being studied. For example, item validity might be high, but if all the items deal with only one aspect of the subject, then sampling validity is low. Content validity can be established by having experts in the field review the test. Concurrent validity: does a new test correlate with an older, established test that measures the same thing? Predictive validity: does the test predict performance on a related future measure? For example, GRE tests are used at many colleges because these schools believe that a good score on this test increases the probability that the student will do well at the college. Linear regression can establish the predictive validity of a test. Construct validity: does the test measure the construct it is intended to measure? Establishing construct validity can be difficult when the constructs being measured are abstract, but it can be done by conducting a number of studies in which you test hypotheses regarding the construct, or by completing a factor analysis to confirm that you have the number of constructs that you say you have.

In addition to ensuring the validity of instruments, the quantitative researcher needs to establish their reliability as well. There are several strategies for establishing reliability.
Test-retest reliability correlates scores from two different administrations of the same test. Alternate-forms reliability correlates scores from administrations of two different forms of the same test. Split-half reliability treats each half of one test or survey as a separate administration and correlates the results from each. Internal consistency uses Cronbach's coefficient alpha to calculate the average of all possible split halves.
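As a minimal sketch of that internal-consistency calculation (assuming Python with NumPy; the survey responses below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's coefficient alpha for a (respondents x items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # summed variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Five respondents rating four survey items on a 1-5 scale
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # values near 1 indicate high consistency
```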
Quantitative research almost always relies on a sample that is intended to be representative of a larger population. There are two basic sampling strategies, random and non-random, and a number of specific strategies within each of these approaches. This table provides examples of each of the major strategies.

The next section of this tutorial provides an overview of the procedures for conducting quantitative data analysis. There are specific procedures for conducting the data collection, preparing for and analyzing the data, presenting the findings, and connecting to the body of existing research. This process ensures that the research is conducted as a systematic investigation that leads to credible results.

Data come in various sizes and shapes, and it is important to know about these so that the proper analysis can be used on the data. In 1946, S.S. Stevens first described the properties of measurement systems that allowed decisions about the type of measurement and about the attributes of objects that are preserved in numbers. These four types of data are referred to as nominal, ordinal, interval, and ratio.

First, let's examine nominal data. With nominal data, there is no number value that indicates quantity. Instead, a number has been assigned to represent a certain attribute, like the number 1 to represent male and the number 2 to represent female. In other words, the number is just a label. You could also assign numbers to represent race, religion, or any other categorical information. Nominal data only denote group membership.

With ordinal data, there is again no indication of quantity. Rather, a number is assigned for ranking order. For example, satisfaction surveys often ask respondents to rank order their level of satisfaction with services or programs.

The next level of measurement is interval data. With interval data, there are equal distances between two values, but there is no natural zero. A common example is the Fahrenheit temperature scale. Differences between temperature measurements make sense, but ratios do not: 20 degrees Fahrenheit is not twice as hot as 10 degrees Fahrenheit. You can add and subtract interval-level data, but they cannot be divided or multiplied.

Finally, we have ratio data. Ratio data have the properties of interval data, but because zero has a logical meaning, indicating the absence of the quantity, ratios, means, averages, and other numerical formulas are all possible and make sense. Examples of ratio data are height, weight, speed, or any quantities based on a scale with a natural zero.

In summary, nominal data can only be counted. Ordinal data can be counted and ranked. Interval data can also be added and subtracted, and ratio data can also be used in ratios and other calculations. Determining what type of data you have is one of the most important aspects of quantitative analysis.

Depending on the research question, hypotheses, and research design, the researcher may choose to use descriptive and/or inferential statistics to begin to analyze the data. Descriptive statistics are familiar from America's pastimes: sports, weather, the economy, the stock market, and even our retirement portfolios are all presented using descriptive analysis. The basic terminology of descriptive statistics includes terms we already know well: frequency, mean, median, mode, range, variance, and standard deviation. Simply put, you are describing the data. Some of the most common graphic representations of data are bar graphs, pie graphs, histograms, and box-and-whisker plots. Attempting to reach conclusions and make causal inferences beyond graphic representations or descriptive analyses is referred to as inferential statistics. For example, examining the college enrollment of the past decade in a certain geographical region would assist in estimating what the enrollment for the next year might be.

Frequently in education, the means of two or more groups are compared. When comparing means to help answer a research question, one can use a within-group, between-groups, or mixed-subjects design. In a within-group design, the researcher compares measures of the same subjects across time (hence "within group") or under different treatment conditions; this can also be referred to as a dependent-groups design. The most basic example of this type of quasi-experimental design would be a researcher conducting a pretest of a group of students, subjecting them to a treatment, and then conducting a post-test; the group has been measured at different points in time. In a between-groups design, subjects are assigned to one of two or more groups, for example, control, treatment 1, and treatment 2. Ideally, the sampling and assignment to groups would be random, which would make this an experimental design. The researcher can then compare the means of the treatment group to the control group; when comparing two groups, the researcher can gain insight into the effects of the treatment. In a mixed-subjects design, the researcher tests for significant differences between two or more independent groups while subjecting them to repeated measures.

Choosing a statistical test to compare groups depends on the number of groups, whether the data are nominal, ordinal, or interval, and whether the data meet the assumptions for parametric tests. Nonparametric tests are typically used with nominal and ordinal data, while parametric tests use interval- and ratio-level data. In addition, parametric tests assume that the data are normally distributed in the population, that participant selection is independent (the selection of one person does not determine the selection of another), and that the variances of the groups being compared are equal. The assumption of independent participant selection cannot be violated, but the others are more flexible.

The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and it is especially appropriate as the method of analysis for a quasi-experimental design. When choosing a t-test, the assumption is that the data are parametric. The analysis of variance, or ANOVA, assesses whether the means of more than two groups are statistically different from each other; again, the assumption is that the data are parametric. The chi-square test can be used when you have non-parametric data and want to compare differences between groups, and the Kruskal-Wallis test can be used when there are more than two groups and the data are non-parametric.
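A minimal sketch of these four group-comparison tests, assuming SciPy; the scores below are simulated placeholders, not real study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(70, 10, 30)   # simulated post-test scores
treat_a = rng.normal(76, 10, 30)
treat_b = rng.normal(73, 10, 30)

# Two groups, parametric data: independent-samples t-test
t_stat, p = stats.ttest_ind(treat_a, control)

# More than two groups, parametric data: one-way ANOVA
f_stat, p = stats.f_oneway(control, treat_a, treat_b)

# Nominal data: chi-square test on a contingency table
# (e.g., counts of victims / non-victims in two schools)
observed = np.array([[30, 10],
                     [22, 18]])
chi2, p, dof, expected = stats.chi2_contingency(observed)

# More than two groups, non-parametric data: Kruskal-Wallis
h_stat, p = stats.kruskal(control, treat_a, treat_b)
```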
Correlation analysis is a set of statistical tests used to determine whether there are linear relationships between two or more sets of variables from the same list of items or individuals, for example, the achievement and performance of students. The tests provide a statistical yes or no as to whether a significant relationship, or correlation, exists between the variables. A correlation test consists of calculating a correlation coefficient between two variables; again, there are parametric and non-parametric choices based on the assumptions of the data. The Pearson r correlation is widely used in statistics to measure the strength of the relationship between linearly related variables. The Spearman rank correlation is a non-parametric test used to measure the degree of association between two variables; it makes no assumptions about the distribution and is used when the Pearson test would give misleading results. Kendall's tau is often also included in this list of non-parametric correlation tests, to examine the strength of the relationship when there are fewer than 20 rankings.

Linear regression and correlation are similar and often confused, and sometimes your methodologist will encourage you to examine both calculations. Calculate a linear correlation if you measured both variables, x and y. Use the Pearson parametric correlation coefficient if you are certain you are not violating the test's assumptions; otherwise, choose the Spearman non-parametric correlation coefficient. If either variable has been manipulated using an intervention, do not calculate a correlation. While linear regression indicates the nature of the relationship between two variables, like correlation, it can also be used to make predictions, because one variable is considered explanatory while the other is considered a dependent variable.
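Here is a minimal sketch of those correlation and regression choices, assuming SciPy; the variables x and y are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(50, 10, 40)          # e.g., hours of study
y = 0.5 * x + rng.normal(0, 5, 40)  # e.g., achievement score

r, p = stats.pearsonr(x, y)      # parametric; assumes linearity and normality
rho, p = stats.spearmanr(x, y)   # non-parametric, rank-based alternative
tau, p = stats.kendalltau(x, y)  # non-parametric, for small sets of rankings

# Regression adds prediction: fit y as a function of the explanatory variable x
result = stats.linregress(x, y)
predicted = result.intercept + result.slope * 55  # predicted y when x = 55
```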
Establishing validity is a critical part of quantitative research. As with the rest of quantitative research, there is a defined process for establishing validity; this also supports the transferability of the findings. For a study to be valid, the evidence must support the interpretations of the data, the data must be accurate, and their use in drawing conclusions must be logical and appropriate. Construct validity concerns whether the operationalization of your variables relates to the theoretical concepts you are trying to measure: whether what you did for the program was what you wanted to do, whether what you observed was what you wanted to observe, and, in short, whether you are actually measuring what you want to measure. Internal validity means that you have evidence that what you did in the study (the program) caused what you observed (the outcome). Conclusion validity is the degree to which conclusions drawn about relationships in the data are reasonable. External validity concerns the process of generalizing, or the degree to which the conclusions in your study would hold for other persons in other places and at other times. Establishing the reliability and validity of your study is one of the most critical elements of the research process.

Once you have decided to embark upon the process of conducting a quantitative study, use the following steps to get started. First, review research studies that have been conducted on your topic to determine what methods were used, and consider the strengths and weaknesses of the various data collection and analysis methods. Next, review the literature on quantitative research methods. Every aspect of your research has a body of literature associated with it; just as you would not confine yourself to your course textbooks for your review of research on your topic, you should not limit yourself to your course texts for your review of methodological literature. Read broadly and deeply from the scholarly literature to gain expertise in quantitative research. Additional self-paced tutorials have been developed on different methodologies and techniques associated with quantitative research; make sure that you complete all of them and review them as often as needed. You will then be prepared to complete a literature review of the specific methodologies and techniques that you will use in your study. Thank you for watching.
