Vittana.org

16 Advantages and Disadvantages of Experimental Research

How do you make sure that a new product, theory, or idea has validity? There are multiple ways to test them, with one of the most common being the use of experimental research. When one variable is kept under complete control, the other variables can be manipulated to determine the value or validity of what has been proposed.

Then, through a process of monitoring and administration, the true effects of what is being studied can be determined. This creates an accurate outcome, so conclusions can be drawn about the final value potential. It is an efficient process, but one that can also be easily manipulated to meet specific metrics if oversight is not properly performed.

Here are the advantages and disadvantages of experimental research to consider.

What Are the Advantages of Experimental Research?

1. It provides researchers with a high level of control. By being able to isolate specific variables, it becomes possible to determine if a potential outcome is viable. Each variable can be controlled on its own or in different combinations to study what possible outcomes are available for a product, theory, or idea. This provides a tremendous advantage in the ability to find accurate results.

2. There is no limit to the subject matter or industry involved. Experimental research is not limited to a specific industry or type of idea. It can be used in a wide variety of situations. Teachers might use experimental research to determine if a new method of teaching or a new curriculum is better than an older system. Pharmaceutical companies use experimental research to determine the viability of a new product.

3. Experimental research provides conclusions that are specific. Because experimental research provides such a high level of control, it can produce results that are specific and relevant with consistency. It is possible to determine success or failure, making it possible to understand the validity of a product, theory, or idea in a much shorter amount of time compared to other verification methods. You know the outcome of the research because you bring the variable to its conclusion.

4. The results of experimental research can be duplicated. Experimental research is a straightforward, basic form of research that allows for its duplication when the same variables are controlled by others. This helps to promote the validity of a concept for products, ideas, and theories. It also allows anyone to check and verify published results, which often allows for better results to be achieved, because the exact steps can produce the exact results.

5. Natural settings can be replicated with faster speeds. When conducting research within a laboratory environment, it becomes possible to replicate conditions that would otherwise take a long time to occur naturally, so that the variables can be tested appropriately. This gives researchers greater control over the extraneous variables which may exist, limiting the unpredictability of nature as each variable is being carefully studied.

6. Experimental research allows cause and effect to be determined. The manipulation of variables allows researchers to look at various cause-and-effect relationships that a product, theory, or idea can produce. It is a process which allows researchers to dig deeper into what is possible, showing how the various variable relationships can provide specific benefits. In return, a greater understanding of the specifics within the research can be gained, even if an understanding of why that relationship is present isn't available to the researcher.

7. It can be combined with other research methods. This allows experimental research to be able to provide the scientific rigor that may be needed for the results to stand on their own. It provides the possibility of determining what may be best for a specific demographic or population while also offering a better transference than anecdotal research can typically provide.

What Are the Disadvantages of Experimental Research?

1. Results are highly subjective due to the possibility of human error. Because experimental research requires specific levels of variable control, it is at a high risk of experiencing human error at some point during the research. Any error, whether it is systematic or random, can reveal information about the other variables, and that would eliminate the validity of the experiment and the research being conducted.

2. Experimental research can create situations that are not realistic. The variables of a product, theory, or idea are under such tight controls that the data being produced can be corrupted or inaccurate, but still seem like it is authentic. This can work in two negative ways for the researcher. First, the variables can be controlled in such a way that it skews the data toward a favorable or desired result. Secondly, the data can be corrupted to seem like it is positive, but because the real-life environment is so different from the controlled environment, the positive results could never be achieved outside of the experimental research.

3. It is a time-consuming process. For it to be done properly, experimental research must isolate each variable and conduct testing on it. Then combinations of variables must also be considered. This process can be lengthy and require a large amount of financial and personnel resources. Those costs may never be offset by consumer sales if the product or idea never makes it to market. If what is being tested is a theory, it can lead to a false sense of validity that may change how others approach their own research.

4. There may be ethical or practical problems with variable control. It might seem like a good idea to test new pharmaceuticals on animals before humans to see if they will work, but what happens if the animal dies because of the experimental research? Or what about human trials that fail and cause injury or death? Experimental research might be effective, but sometimes the approach has ethical or practical complications that cannot be ignored. Sometimes there are variables that cannot be manipulated as they should be so that results can be obtained.

5. Experimental research does not provide an actual explanation. Experimental research is an opportunity to answer a Yes or No question. It will either show you that it will work or it will not work as intended. One could argue that partial results could be achieved, but that would still fit into the “No” category because the desired results were not fully achieved. The answer is nice to have, but there is no explanation as to how you got to that answer. Experimental research is unable to answer the question of “Why” when looking at outcomes.

6. Extraneous variables cannot always be controlled. Although laboratory settings can control extraneous variables, natural environments provide certain challenges. Some studies need to be completed in a natural setting to be accurate. It may not always be possible to control the extraneous variables because of the unpredictability of Mother Nature. Even if the variables are controlled, the outcome may ensure internal validity, but do so at the expense of external validity. In either scenario, applying the results to the general population can be quite challenging.

7. Participants can be influenced by their current situation. Human error isn't just confined to the researchers. Participants in an experimental research study can also be influenced by extraneous variables. There could be something in the environment, such as an allergy, that creates a distraction. In a conversation with a researcher, there may be a physical attraction that changes the responses of the participant. Even internal triggers, such as a fear of enclosed spaces, could influence the results that are obtained. It is also very common for participants to "go along" with what they think a researcher wants to see instead of providing an honest response.

8. Manipulating variables isn't necessarily objective. For research to be effective, it must be objective. Being able to manipulate variables reduces that objectivity. Although there are benefits to observing the consequences of such manipulation, those benefits may not provide realistic results that can be used in the future. Taking a sample is reflective of that sample only, and the results may not translate over to the general population.

9. Human responses in experimental research can be difficult to measure. There are many pressures that can be placed on people, from political to personal, and everything in-between. Different life experiences can cause people to react to the same situation in different ways. Not only does this mean that groups may not be comparable in experimental research, but it also makes it difficult to measure the human responses that are obtained or observed.

The advantages and disadvantages of experimental research show that it is a useful system to use, but it must be tightly controlled in order to be beneficial. It produces results that can be replicated, but it can also be easily swayed by internal or external influences that may alter the outcomes being achieved. By taking these key points into account, it will become possible to see if this research process is appropriate for your next product, theory, or idea.

7 Advantages and Disadvantages of Experimental Research

There are multiple ways to test and do research on new ideas, products, or theories. One of these is experimental research. This is when the researcher keeps complete control over one variable and manipulates the others. A good example of this is pharmaceutical research. Researchers will administer the new drug to one group of subjects, and not to the other, while monitoring them both. This way, they can tell the true effects of the drug by comparing the treated group to people who are not taking it. With this type of research design, only one variable can be tested at a time, which may make it more time consuming and open to error. However, if done properly, it is known as one of the most efficient and accurate ways to reach a conclusion. There are other things that go into the decision of whether or not to use experimental research, some bad and some good. Let's take a look at both.

The Advantages of Experimental Research

1. A High Level Of Control. With experimental research groups, the people conducting the research have a very high level of control over their variables. By isolating and determining what they are looking for, they have a great advantage in finding accurate results.

2. Can Span Across Nearly All Fields Of Research. Another great benefit of this type of research design is that it can be used in many different types of situations. Just like pharmaceutical companies can utilize it, so can teachers who want to test a new method of teaching. It is a basic, but efficient type of research.

3. Clear Cut Conclusions. Since there is such a high level of control, and only one specific variable is being tested at a time, the results are much more relevant than those of some other forms of research. You can clearly see the success, failure, or effects when analyzing the data collected.

4. Many Variations Can Be Utilized. There is a very wide variety of this type of research. Each variation can provide different benefits, depending on what is being explored. The investigator has the ability to tailor-make the experiment for their own unique situation, while still remaining within the validity of the experimental research design.

The Disadvantages of Experimental Research

1. Largely Subject To Human Error. Just like anything, errors can occur. This is especially true when it comes to research and experiments. Any form of error, whether a systematic error (a flaw in the experiment itself), a random error (uncontrolled or unpredictable), or a human error such as revealing who the control group is, can completely destroy the validity of the experiment.

2. Can Create Artificial Situations. By having such deep control over the variables being tested, it is very possible that the data can be skewed or corrupted to fit whatever outcome the researcher needs. This is especially true if it is being done for a business or market study.

3. Can Take An Extensive Amount of Time To Do Full Research. With experimental testing, individual experiments have to be done in order to fully research each variable. This can cause the testing to take a very long time and use a large amount of resources and finances. These costs could transfer onto the company, which could inflate costs for consumers.

Important Facts About Experimental Research

  • Experimental research is most often used in medical research, frequently with animal subjects.
  • Every new medicine or drug is tested using this research design.
  • There are countless variations of experimental research, and the sampling approaches used with it include probability, sequential, snowball, and quota sampling.


FutureofWorking.com

8 Advantages and Disadvantages of Experimental Research

Experimental research has become an important part of human life. Babies conduct their own rudimentary experiments (such as putting objects in their mouth) to learn about the world around them, while older children and teens conduct experiments at school to learn more science. Ancient scientists used experimental research to prove their hypotheses correct; Galileo Galilei and Antoine Lavoisier, for instance, did various experiments to uncover key concepts in physics and chemistry, respectively. The same goes for modern experts, who utilize this scientific method to see if new drugs are effective, discover treatments for illnesses, and create new electronic gadgets (among others).

Experimental research clearly has its advantages, but is it really a perfect way to verify and validate scientific concepts? Many people point out that it has several disadvantages and might even be harmful to subjects in some cases. To learn more about these, let’s take a look into the pros and cons of this type of procedure.

List of Advantages of Experimental Research

1. It gives researchers a high level of control. When people conduct experimental research, they can manipulate the variables so they can create a setting that lets them observe the phenomena they want. They can remove or control other factors that may affect the overall results, which means they can narrow their focus and concentrate solely on two or three variables.

In the pharmaceutical industry, for example, scientists conduct studies in which they give a new kind of drug to a group of subjects and a placebo drug to another group. They then give the same kind of food to the subjects and even house them in the same area to ensure that they won't be exposed to other factors that may affect how the drugs work. At the end of the study, the researchers analyze the results to see how the new drug affects the subjects and identify its side effects and adverse reactions.
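The placebo-controlled setup described above can be sketched in a few lines of code. This is a minimal illustration using simulated data; the group sizes, score scale, and the assumed ten-point drug effect are invented for the example, not taken from any real trial.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Randomly split 100 hypothetical subjects into drug and placebo groups.
subjects = list(range(100))
random.shuffle(subjects)
drug_group, placebo_group = subjects[:50], subjects[50:]

def simulated_symptom_score(on_drug):
    """Lower is better; the assumed drug effect (-10) is purely illustrative."""
    baseline = random.gauss(60, 8)
    return baseline - 10 if on_drug else baseline

drug_scores = [simulated_symptom_score(True) for _ in drug_group]
placebo_scores = [simulated_symptom_score(False) for _ in placebo_group]

# With random assignment, the difference in group means estimates the drug effect.
effect = statistics.mean(placebo_scores) - statistics.mean(drug_scores)
print(f"Estimated effect: {effect:.1f} points")
```

Because assignment is random, a systematic difference between the two groups' scores can be attributed to the drug rather than to who happened to receive it.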

2. It allows researchers to utilize many variations. As mentioned above, researchers have almost full control when they conduct experimental research studies. This lets them manipulate variables and use as many (or as few) variations as they want to create an environment where they can test their hypotheses — without destroying the validity of the research design. In the example above, the researchers can opt to add a third group of subjects (in addition to the new drug group and the placebo group), who would be given a well-known and widely available drug that has been used by many people for years. This way, they can compare how the new drug performs compared to the placebo drug as well as the widely used drug.

3. It can lead to excellent results. The very nature of experimental research allows researchers to easily understand the relationships between the variables, the subjects, and the environment and identify the causes and effects in whatever phenomena they’re studying. Experimental studies can also be easily replicated, which means the researchers themselves or other scientists can repeat their studies to confirm the results or test other variables.

4. It can be used in different fields. Experimental research is usually utilized in the medical and pharmaceutical industries to assess the effects of various treatments and drugs. It’s also used in other fields like chemistry, biology, physics, engineering, electronics, agriculture, social science, and even economics.

List of Disadvantages of Experimental Research

1. It can lead to artificial situations. In many scenarios, experimental researchers manipulate variables in an attempt to replicate real-world scenarios to understand the function of drugs, gadgets, treatments, and other new discoveries. This works most of the time, but there are cases when researchers over-manipulate their variables and end up creating an artificial environment that's vastly different from the real world. The researchers can also skew the study to fit whatever outcome they want (intentionally or unintentionally) and compromise the results of the research.

2. It can take a lot of time and money. Experimental research can be costly and time-consuming, especially if the researchers have to conduct numerous studies to test each variable. If the studies are supported by the government, they can consume millions or even billions of taxpayer dollars, which could otherwise have been spent on other community projects such as education, housing, and healthcare. If the studies are privately funded, they can be a huge burden on the companies involved, who, in turn, would pass the costs on to customers. As a result, consumers have to spend a large amount if they want to take advantage of these new treatments, gadgets, and other innovations.

3. It can be affected by errors. Just like any kind of research, experimental research isn’t always perfect. There might be blunders in the research design or in the methodology as well as random mistakes that can’t be controlled or predicted, which can seriously affect the outcome of the study and require the researchers to start all over again.

There might also be human errors; for instance, the researchers may allow their personal biases to affect the study. If they're conducting a double-blind study (in which neither the researchers nor the subjects know which group is the control group), the researchers might be made aware of which subjects belong to the control group, destroying the validity of the research. The subjects may also make mistakes. There have been cases (particularly in social experiments) in which the subjects give answers that they think the researchers want to hear instead of truthfully saying what's on their mind.

4. It might not be feasible in some situations. There are times when the variables simply can't be manipulated or when the researchers need an impossibly large amount of money to conduct the study. There are also cases when the study would infringe on the subjects' human rights and/or would give rise to ethical issues. In these scenarios, it's better to choose another kind of research design (such as review, meta-analysis, descriptive, or correlational research) instead of insisting on using the experimental research method.

Experimental research has become an important part of the history of the world and has led to numerous discoveries that have made people’s lives better, longer, and more comfortable. However, it can’t be denied that it also has its disadvantages, so it’s up to scientists and researchers to find a balance between the benefits it provides and the drawbacks it presents.


How the Experimental Method Works in Psychology



The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.
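The random-assignment step described above is simple to express in code. A minimal sketch, assuming twenty hypothetical participants and an even split between the two conditions:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical people

# Random assignment: shuffle, then split evenly between conditions.
random.shuffle(participants)
half = len(participants) // 2
experimental_group = participants[:half]  # independent variable manipulated
control_group = participants[half:]       # no manipulation; serves as baseline

print(len(experimental_group), len(control_group))
```

Shuffling before splitting is what makes the assignment random: each participant is equally likely to land in either condition, so pre-existing differences tend to balance out across the groups.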

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns may impact our perception. Or they might wonder whether certain actions can improve memory. Experiments are conducted on many behavioral topics.

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior.

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness.

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

Hypothesis

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.

Operational Definitions

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists support and refute theories in this field.

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (birth order). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can impact the data collected. Results may not be reproducible, meaning they have low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research.

American Psychological Association. Experimental psychology studies humans and animals.

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor. Front Psychol. 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology.

Stanford University. Wilhelm Maximilian Wundt. Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner.

Britannica. Hermann von Helmholtz.

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: Implications for the study of psychological phenomena today. Psychol Res. 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller.

McCambridge J, de Bruin M, Witton J. The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review. PLoS ONE. 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments. In: Allen M, ed. The SAGE Encyclopedia of Communication Research Methods. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship — quasi-experimental designs. Infect Control Hosp Epidemiol. 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance. Educ Psychol. 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking. ISPRS Int J Geo-Inf. 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J Commun. 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise. Psychol Rep. 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works. Indoor Air. 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory. J Pers Soc Psychol. 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

17 Advantages and Disadvantages of Experimental Research Method in Psychology

There are numerous research methods used to determine whether theories, ideas, or even products have validity in a market or community. One of the most common options today is experimental research. It is popular because it allows complete control over a single variable while the research is conducted. With that control in place, the other variables involved can be manipulated to determine the validity of an idea or the value of what is being proposed.

Outcomes through experimental research come through a process of administration and monitoring. This structure makes it possible for researchers to determine the genuine impact of what is under observation. It is a process which creates outcomes with a high degree of accuracy in almost any field.

The conclusion can then offer a final value potential to consider, making it possible to know if a continued pursuit of the information is profitable in some way.

The pros and cons of experimental research show that this process is highly efficient, creating data points for evaluation with speed and regularity. It is also an option that can be manipulated easily when researchers want their work to draw specific conclusions.

List of the Pros of Experimental Research

1. Experimental research offers the highest levels of control. The procedures involved with experimental research make it possible to isolate specific variables within virtually any topic. This advantage makes it possible to determine if outcomes are viable. Variables are controllable on their own or in combination with others to determine what can happen when each scenario is brought to a conclusion. It is a benefit which applies to ideas, theories, and products, offering a significant advantage when accurate results or metrics are necessary for progress.

2. Experimental research is useful in every industry and subject. Because experimental research offers higher levels of control than other available methods, its results carry greater relevance and specificity, and they come with superior consistency. The method applies to a wide variety of situations, helping everyone involved see the value of their work before they must commit to a full implementation.

3. Experimental research replicates natural settings with significant speed benefits. This form of research makes it possible to replicate specific environmental settings within the controls of a laboratory setting. This structure makes it possible for the experiments to replicate variables that would require a significant time investment otherwise. It is a process which gives the researchers involved an opportunity to seize significant control over the extraneous variables which may occur, creating limits on the unpredictability of elements that are unknown or unexpected when driving toward results.

4. Experimental research offers results which can occur repetitively. The reason that experimental research is such an effective tool is that it produces a specific set of results from documented steps that anyone can follow. Researchers can duplicate the variables used during the work, then control the variables in the same way to create an exact outcome that duplicates the first one. This process makes it possible to validate scientific discoveries, understand the effectiveness of a program, or provide evidence that products address consumer pain points in beneficial ways.

5. Experimental research offers conclusions which are specific. Thanks to the high levels of control available through experimental research, the results it produces are usually relevant and specific. Researchers can determine failure, success, or some other specific outcome because of the data points their work generates. That is why it is easier to take an idea of any type to the next level with the information this process makes available. The outcome must always be brought to its natural conclusion during variable manipulation to collect the desired data.

6. Experimental research works with other methods too. You can use experimental research with other methods to ensure that the data received from this process is as accurate as possible. The results that researchers obtain must be able to stand on their own for verification to have findings which are valid. This combination of factors makes it possible to become ultra-specific with the information being received through these studies while offering new ideas to other research formats simultaneously.

7. Experimental research allows for the determination of cause-and-effect. Because researchers can manipulate variables when performing experimental research, it becomes possible to look for the different cause-and-effect relationships which may exist when pursuing a new thought. This process allows the parties involved to dig deeply into the possibilities which are present, demonstrating whatever specific benefits are possible when outcomes are reached. It is a structure which seeks to understand the specific details of each situation as a way to create results.

List of the Cons of Experimental Research

1. Experimental research suffers from the potential of human error. Experimental research requires those involved to maintain specific levels of variable control to create meaningful results, a process that carries a higher risk of error at some stage than other available options. When such an error goes unnoticed and the results are transferred onward, the data will reflect a misunderstanding of the issue under observation. This disadvantage can eliminate the value of any information that develops from the process.

2. Experimental research is a time-consuming process to endure. Experimental research must isolate each possible variable when a subject matter is being studied. Then it must conduct testing on each element under consideration until a resolution becomes possible, which then requires data collection to occur. This process must continue to repeat itself for any findings to be valid from the effort. Then combinations of variables must go through evaluation in the same manner. It is a field of research that sometimes costs more than the potential benefits or profits that are achievable when a favorable outcome is eventually reached.

3. Experimental research creates unrealistic situations that still receive validity. The controls necessary for experimental research increase the risk of the data becoming inaccurate or corrupted over time. The data will still seem authentic to the researchers involved because they may not recognize that a variable represents an unrealistic situation. The variables can also skew in a particular direction through the efforts of the researchers themselves. Finally, the research environment can differ dramatically from real-life circumstances, which can invalidate the value of the findings.

4. Experimental research struggles to measure human responses. People experience stress in uncountable ways during the average day. Personal drama, political arguments, and workplace deadlines can influence the data that researchers collect when measuring human response tendencies. What happens inside of a controlled situation is not always what happens in real-life scenarios. That is why this method is not the correct choice to use in group or individual settings where a human response requires measurement.

5. Experimental research does not always create an objective view. Objective research is necessary for it to provide effective results. When researchers have permission to manipulate variables in whatever way they choose, then the process increases the risk of a personal bias, unconscious or otherwise, influencing the results which are eventually obtained. People can shift their focus because they become uncomfortable, are aroused by the event, or want to manipulate the results for their personal agenda. Data samples are therefore only a reflection of that one group instead of offering data across an entire demographic.

6. Experimental research can experience influences from real-time events. The issue with human error in experimental research often involves the researchers conducting the work, but it can also impact the people being studied as well. Numerous outside variables can impact responses or outcomes without the knowledge of researchers. External triggers, such as the environment, political stress, or physical attraction can alter a person’s regular perspective without it being apparent. Internal triggers, such as claustrophobia or social interactions, can alter responses as well. It is challenging to know if the data collected through this process offers an element of honesty.

7. Experimental research cannot always control all of the variables. Although experimental research attempts to control every variable or combination of variables, laboratory settings cannot achieve this level of control in every circumstance. If data must be collected in a natural setting, the risk of inaccurate information rises. Some research efforts emphasize one set of variables over another because of a perceived level of importance. That is why it becomes virtually impossible in some situations to apply the obtained results to the overall population. Groups are not always comparable, even though this process provides greater transferability than other methods of research.

8. Experimental research does not always seek to find explanations. The goal of experimental research is to answer the questions people have when evaluating specific data points; no attention is given to why specific outcomes occur. An experiment operates in a black-and-white world where something either works or it does not, yet many shades of gray lie between those extremes where additional information waits to be discovered. This method ignores that information, settling for whatever answers are found at the extremes instead.

9. Experimental research does not make exceptions for ethical or moral violations. One of the most significant disadvantages of experimental research is that it offers no built-in safeguard against the ethical or moral violations that manipulating some variables may create. Some variables cannot be manipulated in ways that are safe for people, the environment, or society as a whole. When researchers encounter this situation, they must either transfer their data points to another method, continue on and produce incomplete results, fabricate results, or set their personal convictions aside and work on the variable anyway.

10. Experimental research may offer results which apply to only one situation. Although one of the advantages of experimental research is that it allows for duplication by others to obtain the same results, this is not always the case in every situation. There are results that this method can find which may only apply to that specific situation. If this process is used to determine highly detailed data points which require unique circumstances to obtain, then future researchers may find that result replication is challenging to obtain.

These experimental research pros and cons offer a useful system that can help determine the validity of an idea in any industry. The only way to achieve this advantage is to place tight controls over the process, and then reduce any potential for bias within the system to appear. This makes it possible to determine if a new idea of any type offers current or future value.

Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments that produce results which define and test the laws and theorems of science. These experiments rest on the strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.

Table of Contents

What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. The first set of variables acts as a constant, used to measure the differences in the second set. Quantitative research is the best-known example of an experimental research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between cause and effect.
  • When there is invariable, never-changing behavior between cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation of the research study. An effective research design also helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

Researchers use a pre-experimental research design when one group, or several groups, are observed after the cause-and-effect factors of the research have been applied. The pre-experimental design helps researchers understand whether further investigation of the observed groups is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.
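As a brief illustration, the random-distribution requirement of a true experiment can be sketched in a few lines of Python. The function name, seed, and group labels here are illustrative only, not part of any standard:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into
    control and treatment groups (random assignment)."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {"control": shuffled[:midpoint],
            "treatment": shuffled[midpoint:]}

# Assign 20 participant IDs to two groups of 10.
groups = randomly_assign(range(1, 21), seed=42)
print(len(groups["control"]), len(groups["treatment"]))  # 10 10
```

Because every participant has an equal chance of landing in either group, pre-existing differences tend to balance out across groups, which is what allows a true experiment to attribute outcome differences to the manipulated variable rather than to the participants themselves.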

3. Quasi-experimental Research Design

The prefix "quasi" means "resembling." A quasi-experimental design is similar to a true experimental design; the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of design is used in field settings where random assignment is either irrelevant or not possible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often miss checking whether their hypothesis is logically testable. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive review of the research literature, it is difficult to identify and fill knowledge and information gaps. Furthermore, you need to state clearly how your research will contribute to the field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence, so incorrect statistical analysis undermines the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

The most important yet less talked about topic is the ethical issue. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage the ethical norms along with your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
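Under invented assumptions (hypothetical outcome scores in which the sunlight plants trend higher than the dark-box plants; the real study's biochemical measurements would differ), the comparison step can be sketched in Python using Welch's t statistic:

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic: the difference in sample means divided
    by its standard error, without assuming equal variances."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

rng = random.Random(0)
# Hypothetical outcome scores for 30 plants per group.
sunlight = [rng.gauss(50, 5) for _ in range(30)]
dark_box = [rng.gauss(35, 5) for _ in range(30)]

print(round(welch_t(sunlight, dark_box), 2))  # a large positive t
```

A t value far from zero indicates that the mean difference between groups is large relative to sampling noise; because everything except light exposure was held constant and assignment was random, that difference can be attributed to the sunlight.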

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and it is difficult to conduct without an existing foundation of research. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results of the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also supports measuring the cause-effect relationship within the group of interest.

An experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental.

The differences between a true experimental and a quasi-experimental design are: 1. The control group in quasi-experimental research is assigned non-randomly, unlike in a true experimental design, where assignment is random. 2. A true experimental design always has a control group, whereas a quasi-experimental design may not.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a topic by defining its variables and answering questions about them.

Observational vs. Experimental Study: A Comprehensive Guide

Explore the fundamental differences between experimental and observational studies in this comprehensive guide by Santos Research Center, Corp. It covers the concepts that shape these methodologies, including control groups, random samples, cohort studies, and response and explanatory variables, and it discusses the significance of randomized controlled trials and case-control studies in examining causal relationships between dependent and independent variables.

The guide also walks through the scientific study process, from surveys and systematic reviews to statistical analysis. It examines the balance between control-group and treatment-group dynamics, showing how researchers assign variables and analyze statistical patterns to discern meaningful insights. From investigating lung cancer to understanding sleep patterns, it emphasizes the precision of controlled experiments and trials, where variables are isolated and scrutinized, paving the way for a deeper understanding of the world through empirical research.

Introduction to Observational and Experimental Studies

These two studies are the cornerstones of scientific inquiry, each offering a distinct approach to unraveling the mysteries of the natural world.

Observational studies allow us to observe, document, and gather data without direct intervention. They provide a means to explore real-world scenarios and trends, making them valuable when manipulating variables is not feasible or ethical. From surveys to meticulous observations, these studies shed light on existing conditions and relationships.

Experimental studies , in contrast, put researchers in the driver's seat. They involve the deliberate manipulation of variables to understand their impact on specific outcomes. By controlling the conditions, experimental studies establish causal relationships, answering questions of causality with precision. This approach is pivotal for hypothesis testing and informed decision-making.

At Santos Research Center, Corp., we recognize the importance of both observational and experimental studies. We employ these methodologies in our diverse research projects to ensure the highest quality of scientific investigation and to answer a wide range of research questions.

Observational Studies: A Closer Look

In our exploration of research methodologies, let's zoom in on observational research studies—an essential facet of scientific inquiry that we at Santos Research Center, Corp., expertly employ in our diverse research projects.

What is an Observational Study?

Observational research studies involve the passive observation of subjects without any intervention or manipulation by researchers. These studies are designed to scrutinize the relationships between variables and test subjects, uncover patterns, and draw conclusions grounded in real-world data.

Researchers refrain from interfering with the natural course of events. Instead, they meticulously gather data by observing and documenting information about the test subjects and their surroundings. This approach permits the examination of variables that cannot be ethically or feasibly manipulated, making it particularly valuable in certain research scenarios.

Types of Observational Studies

Now, let's delve into the various forms that observational studies can take, each with its distinct characteristics and applications.

Cohort Studies:  A cohort study is a type of observational study that tracks one group of individuals over an extended period. Its primary goal is to identify potential causes or risk factors for specific outcomes. Cohort studies provide valuable insights into how conditions or diseases develop and the factors that influence them.

Case-Control Studies:  Case-control studies, on the other hand, involve the comparison of individuals with a particular condition or outcome to those without it (the control group). These studies aim to discern potential causal factors or associations that may have contributed to the development of the condition under investigation.

Cross-Sectional Studies:  Cross-sectional studies take a snapshot of a diverse group of individuals at a single point in time. By collecting data from this snapshot, researchers gain insights into the prevalence of a specific condition or the relationships between variables at that moment. Cross-sectional studies are often used to assess the health status of different groups within a population or to explore the interplay between various factors.

Advantages and Limitations of Observational Studies

Observational studies, as we've explored, are a vital pillar of scientific research, offering unique insights into real-world phenomena. In this section, we will dissect the advantages and limitations that characterize these studies, shedding light on the intricacies that researchers grapple with when employing this methodology.

Advantages: One of the paramount advantages of observational studies lies in their utilization of real-world data. Unlike controlled experiments that operate in artificial settings, observational studies embrace the complexities of the natural world. This approach enables researchers to capture genuine behaviors, patterns, and occurrences as they unfold. As a result, the data collected reflects the intricacies of real-life scenarios, making it highly relevant and applicable to diverse settings and populations.

Observational studies also excel in their capacity to examine long-term trends. By observing subjects over extended periods, researchers gain the ability to track developments, trends, and shifts in behavior or outcomes. This longitudinal perspective is invaluable when studying phenomena that evolve gradually, such as chronic diseases, societal changes, or environmental shifts. It allows for the detection of subtle nuances that may be missed in shorter-term investigations.

Limitations: However, like any research methodology, observational studies are not without their limitations. One significant challenge lies in the potential for biases. Since researchers do not intervene in the subjects' experiences, various biases can creep into the data collection process. These biases may arise from participant self-reporting, observer bias, or selection bias, among others. Careful design and rigorous data analysis are crucial for mitigating these biases.

Another limitation is the presence of confounding variables. In observational studies, it can be challenging to isolate the effect of a specific variable from the myriad of other factors at play. These confounding variables can obscure the true relationship between the variables of interest, making it difficult to establish causation definitively. Researchers must employ statistical techniques to control or adjust for these confounding variables.

Additionally, observational studies face constraints in their ability to establish causation. While they can identify associations and correlations between variables, they cannot prove causation. Establishing causation typically requires controlled experiments where researchers can manipulate independent variables systematically. In observational studies, researchers can only infer potential causation based on the observed associations.

Experimental Studies: Delving Deeper

In the intricate landscape of scientific research, we now turn our gaze toward experimental studies—a dynamic and powerful method that Santos Research Center, Corp. skillfully employs in our pursuit of knowledge.

What is an Experimental Study?

While some studies observe and gather data passively, experimental studies take a more proactive approach. Here, researchers actively introduce an intervention or treatment to a group and study its effects on one or more variables. This methodology empowers researchers to manipulate independent variables deliberately and examine their direct impact on dependent variables.

Experimental studies are distinguished by their exceptional ability to establish cause-and-effect relationships. This invaluable characteristic allows researchers to unlock the mysteries of how one variable influences another, offering profound insights into the scientific questions at hand. Within the controlled environment of an experimental study, researchers can systematically test hypotheses, shedding light on complex phenomena.

Key Features of Experimental Studies

Central to the rigor and reliability of experimental studies are several key features that ensure the validity of their findings.

Randomized Controlled Trials:  Randomization is a critical element in experimental studies, as it ensures that subjects are assigned to groups at random. This allocation minimizes the risk of unintentional biases and confounding variables, strengthening the credibility of the study's outcomes.
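As a minimal sketch of what random assignment involves (the subject names and group sizes here are hypothetical, not from any particular trial), randomization can be as simple as shuffling the subject list and splitting it in half:

```python
import random

def random_assignment(subjects, seed=None):
    """Shuffle the subjects, then split them into treatment and control groups."""
    rng = random.Random(seed)  # a fixed seed makes the illustration reproducible
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

subjects = [f"subject_{i}" for i in range(20)]
treatment, control = random_assignment(subjects, seed=0)
print(len(treatment), len(control))  # 10 10
```

With enough subjects, characteristics that could bias the outcome tend to balance out across the two groups, since every subject has the same chance of landing in either one.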

Control Groups:  Control groups play a pivotal role in experimental studies by serving as a baseline for comparison. They enable researchers to assess the true impact of the intervention being studied. By comparing the outcomes of the intervention group to those of the control group, researchers can discern whether the intervention caused the observed changes.

Blinding:  Both single-blind and double-blind techniques are employed in experimental studies to prevent biases from influencing the outcomes. Single-blind studies keep either the subjects or the researchers unaware of certain aspects of the study, while double-blind studies extend this blindness to both parties, enhancing the objectivity of the study.

These key features work in concert to uphold the integrity and trustworthiness of the results generated through experimental studies.

Advantages and Limitations of Experimental Studies

As with any research methodology, this one comes with its unique set of advantages and limitations.

Advantages:  These studies offer the distinct advantage of establishing causal relationships between variables. The controlled environment allows researchers to exert control over variables, ensuring that changes in the dependent variable can be attributed to the independent variable. This meticulous control results in high-quality, reliable data that can significantly contribute to scientific knowledge.

Limitations:  However, experimental studies are not without their challenges. They may raise ethical concerns, particularly when the interventions involve potential risks to subjects. Additionally, their controlled nature can limit their real-world applicability, as the conditions in experiments may not accurately mirror those in the natural world. Moreover, executing an experimental study, particularly a randomized controlled trial, often demands substantial resources, including time, funding, and personnel.

Observational vs Experimental: A Side-by-Side Comparison

Having previously examined observational and experimental studies individually, we now embark on a side-by-side comparison to illuminate the key distinctions and commonalities between these foundational research approaches.

Key Differences and Notable Similarities

Methodologies

  • Observational Studies : Characterized by passive observation, where researchers collect data without direct intervention, allowing the natural course of events to unfold.
  • Experimental Studies : Involve active intervention, where researchers deliberately manipulate variables to discern their impact on specific outcomes, ensuring control over the experimental conditions.

Objectives

  • Observational Studies : Designed to identify patterns, correlations, and associations within existing data, shedding light on relationships within real-world settings.
  • Experimental Studies : Geared toward establishing causality by determining the cause-and-effect relationships between variables, often in controlled laboratory environments.

Data

  • Observational Studies : Yield real-world data, reflecting the complexities and nuances of natural phenomena.
  • Experimental Studies : Generate controlled data, allowing for precise analysis and the establishment of clear causal connections.

Observational studies excel at exploring associations and uncovering patterns within the intricacies of real-world settings, while experimental studies shine as the gold standard for discerning cause-and-effect relationships through meticulous control and manipulation in controlled environments. Understanding these differences and similarities empowers researchers to choose the most appropriate method for their specific research objectives.

When to Use Which: Practical Applications

The decision to employ either observational or experimental studies hinges on the research objectives at hand and the available resources. Observational studies prove invaluable when variable manipulation is impractical or ethically challenging, making them ideal for delving into long-term trends and uncovering intricate associations between variables. On the other hand, experimental studies emerge as indispensable tools when the aim is to definitively establish causation and methodically control variables.

At Santos Research Center, Corp., our approach to research methodology is characterized by meticulous consideration of the specific research goals. We recognize that the quality of outcomes hinges on selecting the most appropriate research method. Our unwavering commitment to employing both observational and experimental studies further underscores our dedication to advancing scientific knowledge across diverse domains.

Conclusion: The Synergy of Experimental and Observational Studies in Research

In conclusion, both observational and experimental studies are integral to scientific research, offering complementary approaches with unique strengths and limitations. At Santos Research Center, Corp., we leverage these methodologies to contribute meaningfully to the scientific community.

Explore our projects and initiatives at Santos Research Center, Corp. by visiting our website or contacting us at (813) 249-9100, where our unwavering commitment to rigorous research practices and advancing scientific knowledge awaits.


Santos Research Center, Corp. is a research facility conducting paid clinical trials, in partnership with major pharmaceutical companies & CROs. We work with patients from across the Tampa Bay area.


Experimental and Quasi-Experimental Research


You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning examples. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants -- one that is treated with a fertilizer named MegaGro, another group treated with a fertilizer named Plant!, and yet another that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects to the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.
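A sketch of how matching might work for the plant example (the plant data and the single matching characteristic are invented for illustration): sort the plants by the characteristic, then send one member of each adjacent pair to each group.

```python
def matched_assignment(plants, key="height_cm"):
    """Sort plants by one characteristic, then split each adjacent pair
    across the two groups so the groups stay comparable on that trait.
    Assumes an even number of plants."""
    ordered = sorted(plants, key=lambda p: p[key])
    group_a = ordered[0::2]  # every other plant, starting with the first
    group_b = ordered[1::2]  # its near-twin goes to the other group
    return group_a, group_b

plants = [{"id": i, "height_cm": h} for i, h in enumerate([12, 30, 14, 29, 18, 22])]
group_a, group_b = matched_assignment(plants)
```

As the guide warns, this equates the groups only on the characteristics you thought to match; unmeasured variables can still differ between groups.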

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
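The claim that chance differences average out as group size grows can be checked with a short simulation (the population and the trait here are invented):

```python
import random
import statistics

rng = random.Random(42)
# A roughly normally distributed trait, e.g. a growth score
population = [rng.gauss(100, 15) for _ in range(10_000)]

def mean_gap(group_size, trials=200):
    """Average absolute difference between the means of two random groups."""
    gaps = []
    for _ in range(trials):
        drawn = rng.sample(population, 2 * group_size)
        gaps.append(abs(statistics.mean(drawn[:group_size]) -
                        statistics.mean(drawn[group_size:])))
    return statistics.mean(gaps)

print(mean_gap(5) > mean_gap(500))  # larger groups are more comparable
```

The gap between small groups is consistently larger, which is why randomization needs adequately sized groups before the "differences average out" principle can be trusted.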

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might mess up the experiment and prevent displaying the causal relationship; and
  • to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability, increase in creativity, or increase in reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjectivity, such as rating data, testing, surveying, and content analysis.

Rating essentially is developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Co-Variance) tests to measure differences between control and experimental groups, as well as different correlations between groups.
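As a sketch of what an ANOVA test actually measures, the one-way F statistic can be computed directly: it compares the variation between group means to the variation within groups. The growth figures below are invented, not from any real study.

```python
import statistics

def one_way_f(*groups):
    """One-way ANOVA F statistic: between-group mean square / within-group mean square."""
    values = [x for g in groups for x in g]
    grand_mean = statistics.mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

megagro = [12.1, 13.4, 12.8, 13.0]   # growth in cm, hypothetical
plant_  = [11.0, 11.5, 10.8, 11.2]   # the competitor fertilizer
control = [9.1, 9.6, 9.3, 8.9]       # no fertilizer
f_stat = one_way_f(megagro, plant_, control)
print(f_stat > 4.26)  # 4.26 is the 5% critical value for (2, 9) degrees of freedom
```

A large F relative to the critical value suggests the group means differ more than within-group chance variation would explain; statistical packages compute this same quantity along with an exact p-value.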

Since we're mentioning the subject of statistics, note that experimental and quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure in statistical analysis and experimental research.
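One way to see what "probability due to random chance" means is a permutation test (a technique the guide does not name, used here purely as an illustration; the scores are invented): relabel subjects at random many times and count how often chance alone produces a difference as large as the observed one.

```python
import random
import statistics

def permutation_p(treated, control, trials=2000, seed=1):
    """Fraction of random relabelings whose mean difference is at least as
    extreme as the observed one -- an approximate two-sided p-value."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(treated) - statistics.mean(control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # pretend group labels were assigned by chance
        diff = abs(statistics.mean(pooled[:len(treated)]) -
                   statistics.mean(pooled[len(treated):]))
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p([14, 15, 13, 16, 15], [11, 12, 10, 12, 11])
print(p < 0.05)  # the observed gap would rarely arise by chance alone
```

Here the gap between the groups almost never appears under random shuffling, so the estimated probability that chance produced it is small; a large p-value would mean the opposite.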

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by selecting a plant to treat with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, using no fertilizer at all on it, to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that both groups use the same kind of plant; that both are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one receives more shade than the other and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and say your methods were a success. You must show how successful. You must devise a scale by which you will evaluate the data you receive, therefore you must decide what indicators will be, and will not be, important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.
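A sketch of the kind of scale this step describes (the indicators, weights, and scores below are all invented for illustration): each essay's measured indicators are combined into a single comparable score, and the group means are then compared.

```python
import statistics

# Illustrative weights: sentence complexity and length count for an essay,
# grammatical errors count against it. These choices are research judgments.
WEIGHTS = {"complexity": 2.0, "errors": -1.0, "length_words": 0.01}

def score(essay):
    """Collapse an essay's indicators into one rubric score."""
    return sum(WEIGHTS[k] * essay[k] for k in WEIGHTS)

workshop_group = [{"complexity": 4, "errors": 3, "length_words": 600},
                  {"complexity": 5, "errors": 2, "length_words": 650}]
control_group  = [{"complexity": 3, "errors": 6, "length_words": 550},
                  {"complexity": 3, "errors": 5, "length_words": 500}]

gain = (statistics.mean(score(e) for e in workshop_group) -
        statistics.mean(score(e) for e in control_group))
print(gain > 0)  # on this rubric, the workshop essays scored higher
```

The choice of indicators and weights is itself part of the research design, which is exactly the guide's point: the researcher decides what counts as improved writing before the data can say anything.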

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or not included, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract : Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction : Set the context of the experiment.
  • Review of Literature : Provide a review of the literature in the specific area of study to show what work has been done. Should lead directly to the author's purpose for the study.
  • Statement of Purpose : Present the problem to be studied.
  • Participants : Describe in detail the participants involved in the study (e.g., how many). Provide as much information as possible.
  • Materials and Procedures : Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned them, how they were conducted, how data were evaluated, etc.
  • Results : Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
  • Discussion : After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
  • Limitations : Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
  • Conclusion : Synthesize all of the above sections.
  • References : Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.

Often, teachers cannot wait for piles of experimental data to be analyzed before using a teaching method (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

There are situations in English Studies that might encourage the use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom, and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, suppose a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze the results and determine whether this particular variable alone causes increased participation.
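The pseudonym example can be sketched in code. The student names and post counts below are entirely hypothetical; the point is only the shape of the design: random assignment to treatment and control, then a comparison of outcomes.

```python
import random
import statistics

def assign_groups(students, seed=0):
    """Randomly split a roster into treatment (pseudonym) and control groups."""
    rng = random.Random(seed)
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_difference(treatment_outcomes, control_outcomes):
    """Difference in mean newsgroup posts between the two groups."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

students = [f"student_{i}" for i in range(20)]
treatment, control = assign_groups(students)

# Hypothetical post counts recorded at the end of the study period.
treatment_posts = [7, 9, 6, 8, 10, 7, 9, 8, 6, 9]  # wrote under pseudonyms
control_posts = [4, 5, 6, 3, 5, 4, 6, 5, 4, 5]     # wrote under real names

print(round(mean_difference(treatment_posts, control_posts), 1))  # prints 3.2
```

A real study would follow this with a significance test and, as the discussion here notes, careful attention to whether the two groups were comparable to begin with.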

Transferability: Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending upon experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide if and how results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. Such replications will either strengthen the original findings or discredit them.

Concerns English Scholars Express about Experiments

Researchers should carefully consider whether a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues by combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue which can be explored through experimentation and the examination of causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through scientific method is free of human inconsistencies. But, since scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue is compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit an ability to be reflective. An ethical researcher thinks critically about results and reports those results after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, or studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of determining the individual effects of each variable more precisely. Determining interactions between variables also becomes more possible.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products and have no bearing in material reality. Artificial results are difficult to apply in practical situations, making generalizing from the results of a controlled study questionable. Experimental research essentially first decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may be difficult to replicate.

Moreover, groups in an experiment may not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class which meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although often the same hypotheses are tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.
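The small-sample concern can be illustrated with a short simulation (all numbers hypothetical): randomly assign members of a population to two groups and measure how far apart the groups remain, on average, on a single confounding variable such as weekly hours of paid work.

```python
import random
import statistics

def average_imbalance(confound, group_size, trials=2000, seed=1):
    """Average absolute gap in a confounding variable (e.g. weekly work hours)
    between two groups drawn and assigned completely at random."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        sample = rng.sample(confound, 2 * group_size)
        a, b = sample[:group_size], sample[group_size:]
        gaps.append(abs(statistics.mean(a) - statistics.mean(b)))
    return statistics.mean(gaps)

# Hypothetical population: weekly work hours for 500 students, 0-40.
hours = random.Random(42).choices(range(41), k=500)

small = average_imbalance(hours, group_size=10)   # one small class
large = average_imbalance(hours, group_size=100)  # a much larger pool
print(small, large)  # the small groups stay far more unbalanced on average
```

Because the expected gap shrinks roughly with the square root of group size, a single small class rarely lets extraneous variables cancel out, while a larger pool fares much better.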

When a human population is involved, experimental research becomes concerned with whether behavior can be predicted or studied with validity. Human response can be difficult to measure. Human behavior is dependent on individual responses. Rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see if this behavior will result in fewer cavities. We are relying on previous experimentation, and we are transferring the experimentation to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Other qualitative methods, such as case study, ethnography, observational research, and interviews, can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally? Sally screamed, "I love writing!" ten times before she wrote her essay and produced a quality paper. Should all the other faculty members hear this anecdote and conclude that all other students should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. Readers of these results may not be aware of such biases and should approach experimentation with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Advantages:

  • gain insight into methods of instruction
  • intuitive practice shaped by research
  • teachers have bias but can be reflective
  • researcher can have control over variables
  • humans perform experiments anyway
  • can be combined with other research methods for rigor
  • can be used to determine what is best for a population
  • provides for greater transferability than anecdotal research

Disadvantages:

  • subject to human error
  • personal bias of researcher may intrude
  • sample may not be representative
  • can produce artificial results
  • results may only apply to one situation and may be difficult to replicate
  • groups may not be comparable
  • human response can be difficult to measure
  • political pressure may skew results

Ethical Concerns

Experimental research may be manipulated at both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research, faced with naive readers, encounter ethical concerns. While creating an experiment, they may let certain objectives and intended uses of the results drive and skew it. Looking for specific results, they may ask questions and examine data that support only the desired conclusions, dismissing conflicting research findings. Similarly, researchers seeking support for a particular plan may look only at findings which support that goal.

Editors and journals, for their part, do not publish only trouble-free material. And as readers of experiments, members of the press might report selected and isolated parts of a study to the public, essentially transferring that data to the general population in ways the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces cholesterol. But that bit of information was taken out of context. The actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers, and readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project that produces no significant results, a researcher may be tempted to manipulate the data to show significant results in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain validity by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise when researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, in that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. If experimental researchers hope to make an impact on the community of professionals in their field, they must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web site presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. Includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation, with an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminancies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. Daniels examines Eisenberg's ideas on indeterminacy, methods, and evidence, what he opposes, and how we should regard what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danzinger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Researchers, Dudley-Marling and Rhodes, address some problems they met in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research, and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point in trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. Floden places high value on teacher discretion and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14, 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

The aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.

Hillocks conducted a study using three treatments: observational or data-collecting activities prior to writing, use of revisions or absence of same, and either brief or lengthy teacher comments, to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.

Researchers looked at one teacher candidate who participated in a class in which students designed their own research projects, each tied to a question they wanted answered about the teaching world. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of Sisyphus? Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer. J.M. & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on the major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanities. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are having the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.), Reading empirical research studies: The rhetoric of research (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.

The lack of research in written expression is addressed and an application on the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: Becoming qualitative researchers and reflective practitioners. Teaching Education, 8, 109-19.

An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals but was ultimately rewarded with excitement toward research and a recognized connection between research and practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26.

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.

Reiser begins by stating the importance of research in advancing the field of education and points out that graduate students in instructional design often lack the skills needed to conduct research. The paper then outlines the practicum in the Instructional Systems Program at Florida State University, which includes: 1) planning and conducting an experimental research study; 2) writing a manuscript describing the study; and 3) giving an oral presentation of the research findings.

Report on education research (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This independent bi-weekly newsletter on research in education and learning has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests that the concept of "scientific" should not be regarded in absolute terms, and he recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two teacher-chosen methods of classroom instruction is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.

The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates main features of an ongoing debate between advocates for using vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate between traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduate students.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal, 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78, 356-363.

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both. (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14.

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (1969, March). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math, 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics. Bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors . Some material displayed on this site is used with permission.

J Athl Train, 45(1), Jan-Feb 2010

Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

Context:

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style . At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers, 1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style , 2 so I will use it here.

A study design is the architecture of an experimental study 3 and a description of how the study was conducted, 4 including all elements of how the data were obtained. 5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table ). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section


The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational study). 2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. Following is a sentence that names the levels of each factor: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
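The factor-and-levels notation described above can be made concrete with a short sketch (Python, using the illustrative factor names from the example design statement; this is a minimal enumeration of the design cells, not code from the article):

```python
from itertools import product

# Factors and levels from the example 2 x 4 x 8 design statement.
factors = {
    "sex": ["male", "female"],
    "training": ["walking", "running", "weight lifting", "plyometrics"],
    "time_weeks": [2, 4, 6, 8, 10, 15, 20, 30],
}

# Each cell of the factorial design is one combination of factor levels.
cells = list(product(*factors.values()))

print(len(cells))  # 2 * 4 * 8 = 64 cells
```

Listing the cells this way makes it obvious why the shorthand "2 × 4 × 8 factorial" fully determines how many conditions guided data collection.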

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the “Methods” section and last part of the “Study Design” section is the “Data Analysis” subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design . Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted. 6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.” 3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.” 7 (pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.” 8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921) 9 , 10 and experimental design (in 1935). 11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards 12 ), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design , however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
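The collapse from the 3-factor study design to the 2-factor statistical design can be sketched with hypothetical numbers (the strength scores below are invented for illustration only):

```python
# Hypothetical strength scores (kg) collected under the 2 x 2 x 3 study design:
# time (pre, post) x experience (novice, advanced) x training (3 programs).
pre = {("novice", "isokinetic"): 50, ("novice", "isotonic"): 52,
       ("novice", "isometric"): 51, ("advanced", "isokinetic"): 80,
       ("advanced", "isotonic"): 78, ("advanced", "isometric"): 82}
post = {("novice", "isokinetic"): 62, ("novice", "isotonic"): 60,
        ("novice", "isometric"): 58, ("advanced", "isokinetic"): 88,
        ("advanced", "isotonic"): 85, ("advanced", "isometric"): 90}

# The analysis variable is strength gain (post - pre), which removes the
# time factor: what remains is the 2 x 3 statistical design
# (experience x training) with gain as the dependent variable.
gain = {cell: post[cell] - pre[cell] for cell in pre}

print(gain[("novice", "isokinetic")])  # 12
```

The computed `gain` table has only experience and training as factors, which is exactly why a statistical-design statement alone would not reveal that strength was actually measured twice.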

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables of Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the Hmax:Mmax ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature and Hmax:Mmax measurements.
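The gap between what is collected and what is analyzed in this example can be sketched as follows (all readings are hypothetical values invented for illustration):

```python
# Hypothetical muscle temperatures (deg C), one reading per minute for 50 min.
temps = [35.0 + 0.05 * minute for minute in range(51)]

# All 51 readings could be graphed, but only 3 time points (minute 0, 20,
# and 50) enter the statistical analysis.
analyzed_temps = [temps[0], temps[20], temps[50]]

# Hmax and Mmax are each measured 3 times; the single analysis variable
# is their ratio at each measurement time.
h_max = [2.0, 1.4, 1.8]
m_max = [8.0, 8.1, 7.9]
ratios = [h / m for h, m in zip(h_max, m_max)]

print(len(analyzed_temps), round(ratios[0], 2))  # 3 0.25
```

Fifty-one temperatures are collected but only three are analyzed, and the analyzed neural-inhibition variable is computed rather than measured, so neither variable's collection can be inferred from the statistical design alone.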

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed. 3 , 6 , 7 , 13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  • Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.
  • Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.
  • A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.
  • Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.
  • Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.

Frequently asked questions

What’s the difference between correlational and experimental research?

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.
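In practice, both subtypes are often checked with simple correlations. The sketch below uses invented scores and test names purely for illustration: a new scale should correlate strongly with an established same-construct test (convergent) and weakly with an unrelated measure (discriminant).

```python
def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from 6 participants on three measures.
anxiety_test_a = [10, 12, 15, 18, 20, 25]  # new anxiety scale
anxiety_test_b = [11, 13, 14, 19, 21, 24]  # established anxiety scale
shoe_size      = [7, 11, 8, 10, 9, 8]      # unrelated construct

# Convergent: correlation with the same-construct test should be high.
# Discriminant: correlation with the unrelated measure should be low.
convergent = pearson(anxiety_test_a, anxiety_test_b)
discriminant = pearson(anxiety_test_a, shoe_size)

print(convergent > 0.8 and abs(discriminant) < 0.5)  # True
```

The thresholds (0.8 and 0.5) are arbitrary illustrations, not standard cutoffs; in real validation work, expected correlation patterns are specified in advance.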

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
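The referral mechanism described above can be sketched as a wave-by-wave traversal of a hypothetical acquaintance network (the names and referral links are invented for illustration):

```python
# Hypothetical acquaintance network: each person refers the people listed.
referrals = {
    "seed": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3", "p4"],
    "p3": [],
    "p4": ["p5"],
    "p5": [],
}

# Snowball sampling: start from a seed participant and follow referrals
# wave by wave, adding each newly referred person to the sample.
sample, wave = [], ["seed"]
while wave:
    next_wave = []
    for person in wave:
        if person not in sample:
            sample.append(person)
            next_wave.extend(referrals[person])
    wave = next_wave

print(sample)  # anyone outside the referral chains can never be sampled
```

The sketch makes the non-random character visible: inclusion depends entirely on who the seed and subsequent participants happen to know, which is exactly the source of sampling bias mentioned above.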

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
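The stratified-versus-quota contrast can be sketched with a hypothetical population (group names, sizes, and the 10% sampling fraction are invented for illustration):

```python
import random

random.seed(0)  # fixed seed so the random draw is repeatable

# Hypothetical population split into subgroups (strata).
population = {
    "undergrad": [f"u{i}" for i in range(80)],
    "grad": [f"g{i}" for i in range(20)],
}

# Stratified sampling: a RANDOM draw from each subgroup
# (probability sampling), here 10% of each stratum.
stratified = {group: random.sample(members, k=len(members) // 10)
              for group, members in population.items()}

# Quota sampling: fill each quota with whoever is reached first
# (non-random; here, simply the first members listed).
quota = {group: members[: len(members) // 10]
         for group, members in population.items()}

print(len(stratified["undergrad"]), len(quota["undergrad"]))  # 8 8
```

Both approaches yield the same subgroup sizes; the difference is purely in how units enter the sample, which is why only the stratified draw supports probability-based inference.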

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves selecting whoever happens to be available (for example, stopping passers-by), which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population. It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous, so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous, as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population.
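A minimal sketch of the difference, using made-up classrooms as groups (all names and sizes are hypothetical):

```python
import random

random.seed(0)

# Hypothetical population: 6 classrooms ("clusters"/strata) of 5 students each.
groups = {g: [f"{g}-{i}" for i in range(5)] for g in ["c1", "c2", "c3", "c4", "c5", "c6"]}

# Cluster sampling: randomly select entire groups and keep ALL of their units.
chosen_clusters = random.sample(sorted(groups), 2)
cluster_sample = [u for g in chosen_clusters for u in groups[g]]

# Stratified sampling: select SOME units at random from EVERY group.
stratified_sample = [u for g in sorted(groups) for u in random.sample(groups[g], 2)]

print(len(cluster_sample))     # 2 clusters x 5 units = 10
print(len(stratified_sample))  # 6 groups x 2 units = 12
```

Notice that the cluster sample touches only two groups, while the stratified sample includes every group.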

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments apply some sort of treatment condition to at least some participants by random assignment.

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment, an observational study may be a good choice. In an observational study, there is no interference with or manipulation of the research subjects, and no control or treatment groups.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods, the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity.
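A correlation check of this kind might look like the following sketch, where the scores are simulated and the `pearson_r` helper is a hand-rolled Pearson correlation, not from any particular statistics package:

```python
import random
import statistics

random.seed(1)

def pearson_r(x, y):
    """Pearson product-moment correlation, computed from first principles."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Simulated scores: a new test, an established measure of a RELATED construct,
# and a measure of a DISTINCT construct.
new_test  = [random.gauss(50, 10) for _ in range(200)]
related   = [s + random.gauss(0, 5) for s in new_test]
unrelated = [random.gauss(50, 10) for _ in range(200)]

print(round(pearson_r(new_test, related), 2))    # high: convergent evidence
print(round(pearson_r(new_test, unrelated), 2))  # near zero: discriminant evidence
```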

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity; the other three are content validity, face validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity: The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity, and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control, ethical considerations, and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real-world settings. You avoid interfering with or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions, which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature. They are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by carefully writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research.

In research, you might have come across something called the hypothetico-deductive method. It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning, where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization: You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research, you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation:

  • Data triangulation: Using data from different times, spaces, and people
  • Investigator triangulation: Involving multiple researchers in collecting or analyzing data
  • Theory triangulation: Using varying theoretical perspectives in your research
  • Methodological triangulation: Using different methodologies to approach the same topic

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Because little may be known about the topic at that point, this type of research is often one of the first stages in the research process, serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design, inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data, but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted entries, or irrelevant data. You’ll start by screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
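A toy illustration of that sequence, where the measurements and the outlier rule (a median-absolute-deviation cutoff) are invented for the example; real projects typically use dedicated tools:

```python
import statistics

# Invented weight measurements; 950.0 is a likely data-entry error.
raw = [72.5, 70.1, None, 70.1, 68.9, 71.3, 950.0]

# 1. Screening: drop duplicate values while preserving order.
seen, deduped = set(), []
for v in raw:
    if v not in seen:
        deduped.append(v)
        seen.add(v)

# 2. Handle missing values (here, simply remove them).
complete = [v for v in deduped if v is not None]

# 3. Diagnose outliers with a robust rule: flag values far from the median
#    relative to the median absolute deviation (cutoff of 5 is arbitrary).
med = statistics.median(complete)
mad = statistics.median([abs(v - med) for v in complete])
clean = [v for v in complete if abs(v - med) <= 5 * mad]

print(clean)  # the entry error is removed; plausible values survive
```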

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors, but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

In multistage sampling, you can use probability or non-probability sampling methods.

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling, systematic sampling, or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples.

These are four of the most common mixed methods designs:

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions.
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data are collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data are collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research, but it’s also commonly applied in quantitative research. Mixed methods research always uses triangulation.

In multistage sampling, or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
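A minimal sketch of those hierarchical stages, with made-up place names and probability (random) selection at every stage:

```python
import random

random.seed(7)

# Hypothetical hierarchical frame: state -> city -> neighborhood.
frame = {
    "State A": {"City 1": ["N1", "N2", "N3"], "City 2": ["N4", "N5"]},
    "State B": {"City 3": ["N6", "N7"], "City 4": ["N8", "N9", "N10"]},
}

state = random.choice(sorted(frame))                  # stage 1: pick a state at random
city = random.choice(sorted(frame[state]))            # stage 2: pick a city within it
neighborhoods = random.sample(frame[state][city], 1)  # stage 3: pick neighborhood(s)

print(state, city, neighborhoods)
```

Because every stage uses random selection, the result is a probability sample, and no complete national list of neighborhoods is ever needed.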

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis.
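A small demonstration with invented numbers: scaling one variable changes the regression slope tenfold but leaves the correlation untouched (`pearson_r` and `slope` are hand-rolled helpers, not from any particular package):

```python
import statistics

x  = [1, 2, 3, 4, 5]
y1 = [2.0, 4.1, 5.9, 8.2, 9.8]   # roughly y = 2x
y2 = [v * 10 for v in y1]        # ten times steeper, same tightness of fit

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def slope(x, y):
    """Least-squares regression slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

print(round(pearson_r(x, y1), 3), round(pearson_r(x, y2), 3))  # identical r
print(round(slope(x, y1), 2), round(slope(x, y2), 2))          # slopes differ tenfold
```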

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r:

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships.

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation.

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables: when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research.

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.
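These three cases can be checked with toy numbers; the data and the hand-rolled `pearson_r` helper below are purely illustrative:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))  # 1.0: perfect positive correlation
print(pearson_r(x, [10, 8, 6, 4, 2]))  # -1.0: perfect negative correlation
print(pearson_r(x, [3, 5, 3, 5, 3]))   # ~0: no linear relationship
```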

Random error is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables.

You can avoid systematic error through careful design of your sampling, data collection, and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample, the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions (Type I and II errors) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
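A quick simulation of the difference, with invented numbers: random error averages out over many measurements, while a systematic offset (a miscalibrated scale) survives averaging.

```python
import random
import statistics

random.seed(3)

true_weight = 70.0  # the value we are trying to measure (invented)
n = 10_000

# Random error only: each reading is the true value plus zero-mean noise.
random_only = [true_weight + random.gauss(0, 0.5) for _ in range(n)]

# Systematic + random error: the scale consistently reads 1.5 units high.
systematic = [true_weight + 1.5 + random.gauss(0, 0.5) for _ in range(n)]

print(round(statistics.fmean(random_only), 2))  # close to 70.0: noise cancels out
print(round(statistics.fmean(systematic), 2))   # close to 71.5: the bias remains
```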

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
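
The lottery procedure above can be sketched in Python. This is a minimal illustration with invented participant IDs; the seed is only there to make the example repeatable:

```python
import random

# Step 1: assign a unique number to every member of the sample.
participants = list(range(1, 21))        # 20 numbered participants

# Step 2: randomize the order, then split into two equal groups.
random.seed(7)
random.shuffle(participants)
control, treatment = participants[:10], participants[10:]

print(len(control), len(treatment))      # 10 10
print(set(control) & set(treatment))     # set() -> nobody lands in both groups
```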

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
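
One lightweight way to see this idea at work is a first-order partial correlation, which removes a control variable's effect from both variables of interest. This is a simplified stand-in for the regression/ANCOVA modeling described above, using invented data in which z drives both x (roughly 2z) and y (roughly 3z):

```python
from math import sqrt
from statistics import correlation

# Invented toy data: z is a control variable influencing both x and y.
z = [1, 2, 3, 4, 5, 6, 7, 8]
x = [2.1, 3.9, 6.1, 7.9, 10.1, 11.9, 14.1, 15.9]
y = [3.1, 6.1, 8.9, 11.9, 15.1, 18.1, 20.9, 23.9]

r_xy = correlation(x, y)
r_xz = correlation(x, z)
r_zy = correlation(z, y)

# First-order partial correlation: the x-y association with z's effect removed.
r_xy_given_z = (r_xy - r_xz * r_zy) / sqrt((1 - r_xz**2) * (1 - r_zy**2))

print(r_xy > 0.99)              # True: x and y look almost perfectly related...
print(abs(r_xy_given_z) < 0.3)  # True: ...but the link nearly vanishes given z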

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is lower than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population by your target sample size.
  • Choose every k th member of the population as your sample.
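
The three steps might be sketched like this, using a hypothetical 100-person population. Choosing a random start within the first interval is a common refinement not stated above:

```python
import random

# Step 1: define and list the population (no cyclical or periodic ordering).
population = [f"person_{i}" for i in range(1, 101)]   # N = 100

# Step 2: choose a sample size and compute the interval k = N / n.
sample_size = 20
k = len(population) // sample_size                    # k = 100 // 20 = 5

# Step 3: pick a random start in the first interval, then take every kth member.
random.seed(3)
start = random.randrange(k)
sample = population[start::k]

print(len(sample))   # 20
```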

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling.

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.
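
The subgroup multiplication in this example can be checked with `itertools.product`, which enumerates every combination of the two characteristics:

```python
from itertools import product

locations = ["urban", "rural", "suburban"]
marital_status = ["single", "divorced", "widowed", "married", "partnered"]

# Each subgroup is one (location, marital status) combination.
subgroups = list(product(locations, marital_status))

print(len(subgroups))   # 15
print(subgroups[0])     # ('urban', 'single')
```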

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
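
A minimal sketch of these two steps, with an invented population and a roughly 10% simple random sample drawn within each stratum:

```python
import random
from collections import defaultdict

random.seed(0)

# Invented population of (person, stratum) pairs.
population = [(f"p{i}", random.choice(["urban", "rural", "suburban"]))
              for i in range(300)]

# Step 1: divide subjects into strata by the characteristic they share.
strata = defaultdict(list)
for person, location in population:
    strata[location].append(person)

# Step 2: randomly sample within each stratum via simple random sampling.
sample = []
for members in strata.values():
    sample.extend(random.sample(members, max(1, len(members) // 10)))

print(len(strata))   # 3 strata
```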

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
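
The single-stage and double-stage variants can be sketched as follows, with hypothetical schools as clusters; multi-stage sampling would simply repeat the within-cluster step:

```python
import random

random.seed(1)

# Invented example: 10 schools (clusters), each with 30 students.
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}

# First, randomly select 3 clusters for the sample.
chosen = random.sample(sorted(clusters), 3)

# Single-stage: collect data from every unit in the selected clusters.
single_stage = [s for school in chosen for s in clusters[school]]

# Double-stage: instead draw a random sample of units within each cluster.
double_stage = [s for school in chosen for s in random.sample(clusters[school], 10)]

print(len(single_stage))   # 90 (3 clusters x 30 students)
print(len(double_stage))   # 30 (3 clusters x 10 sampled students)
```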

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
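
A minimal sketch using Python's `random.sample`, which draws without replacement so each of the 1,000 invented members has an equal chance of selection:

```python
import random

# Invented population: every member has an equal chance of being selected.
population = list(range(1, 1001))        # 1,000 numbered members
sample = random.sample(population, 50)   # simple random sample, no replacement

print(len(sample))        # 50
print(len(set(sample)))   # 50 -> no member appears twice
```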

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
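
As an illustration, combining four hypothetical 5-point item responses into an overall scale score might look like the snippet below. The reverse-coding step for a negatively worded item is a common survey practice, not something stated above:

```python
# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree) to four
# Likert-type items measuring one attitude; item 3 is negatively worded.
raw = [4, 5, 2, 4]
reverse_worded = {2}   # zero-based index of the reverse-worded item
max_point = 5

# Reverse-code negatively worded items, then sum into an overall scale score.
scored = [max_point + 1 - r if i in reverse_worded else r
          for i, r in enumerate(raw)]
scale_score = sum(scored)

print(scale_score)   # 4 + 5 + (6 - 2) + 4 = 17
```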

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations
  • Observes the same sample multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different samples (a “cross-section”) in the population
  • Provides a snapshot of society at a given point

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The independent variable is the amount of nutrients added to the crop field.
  • The dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Experimental Research: Meaning And Examples Of Experimental Research

Ever wondered why scientists across the world are being lauded for discovering the Covid-19 vaccine so early? It’s because every government knows that vaccines are a result of experimental research design and it takes years of collected data to make one. It takes a lot of time to compare formulas and combinations with an array of possibilities across different age groups, genders and physical conditions. With their efficiency and meticulousness, scientists redefined the meaning of experimental research when they discovered a vaccine in less than a year.

What Is Experimental Research?


Experimental research is a scientific method of conducting research using two variables: independent and dependent. Independent variables can be manipulated to apply to dependent variables and the effect is measured. This measurement usually happens over a significant period of time to establish conditions and conclusions about the relationship between these two variables.

Experimental research is widely implemented in education, psychology, social sciences and physical sciences. Experimental research is based on observation, calculation, comparison and logic. Researchers collect quantitative data and perform statistical analyses of two sets of variables. This method collects necessary data to focus on facts and support sound decisions. It’s a helpful approach when time is a factor in establishing cause-and-effect relationships or when an invariable behavior is seen between the two.  

Now that we know the meaning of experimental research, let’s look at its characteristics, types and advantages.

The hypothesis is at the core of an experimental research design. Researchers propose a tentative answer after defining the problem and then test the hypothesis to either confirm or disregard it. Here are a few characteristics of experimental research:

  • Independent variables are manipulated and applied to dependent variables as an experimental treatment, and the resulting change in the dependent variables is measured. Extraneous variables are variables generated from other factors that can affect the experiment and contribute to change. Researchers have to exercise control to reduce the influence of these variables by randomization, making homogeneous groups and applying statistical analysis techniques.
  • Researchers deliberately operate independent variables on the subject of the experiment. This is known as manipulation.
  • Once a variable is manipulated, researchers observe the effect an independent variable has on a dependent variable. This is key for interpreting results.
  • A researcher may want multiple comparisons between different groups with equivalent subjects. They may replicate the process by conducting sub-experiments within the framework of the experimental design.

Experimental research is equally effective in non-laboratory settings as it is in labs. It helps in predicting events in an experimental setting. It generalizes variable relationships so that they can be implemented outside the experiment and applied to a wider interest group.
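Randomization, mentioned above as a way to control extraneous variables, can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure — the participant IDs and group names are invented for the example:

```python
import random

def randomly_assign(participants, groups, seed=None):
    """Shuffle participants and deal them into groups round-robin,
    so that extraneous participant differences spread evenly by chance."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, p in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(p)
    return assignment

# Hypothetical participant pool split into control and treatment groups
pool = [f"P{n:02d}" for n in range(1, 21)]
groups = randomly_assign(pool, ["control", "treatment"], seed=42)
```

Because assignment is random rather than chosen, neither group is systematically biased by participant characteristics.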

The way a researcher assigns subjects to different groups determines the types of experimental research design .

Pre-experimental Research Design

In a pre-experimental research design, researchers observe one or more groups to see the effect an independent variable has on the dependent variable. As the simplest form of experimental research, it has no control group. It’s further divided into three categories:

  • A one-shot case study research design is a study where one dependent variable is considered. It’s a posttest study as it’s carried out after treating what presumably caused the change.
  • One-group pretest-posttest design is a study that combines both pretest and posttest studies by testing a single group before and after administering the treatment.
  • Static-group comparison involves studying two groups by subjecting one to treatment while the other remains static. After post-testing all groups the differences are observed.

This design is practical but falls short of the criteria for a true experiment.

True Experimental Research Design

This design depends on statistical analysis to confirm or reject a hypothesis. It’s an accurate design that can be conducted with or without a pretest on a minimum of two randomly assigned groups of subjects. It is further classified into three types:

  • The posttest-only control group design involves randomly selecting and assigning subjects to two groups: experimental and control. Only the experimental group is treated, while both groups are observed and post-tested to draw a conclusion from the difference between the groups.
  • In a pretest-posttest control group design, subjects are randomly assigned to two groups. Both groups are pretested, the experimental group is treated and both groups are post-tested to measure how much change happened in each group.
  • Solomon four-group design is a combination of the previous two methods. Subjects are randomly selected and assigned to four groups. Two groups are tested using each of the previous methods.

True experimental research design should have a variable to manipulate, a control group and random distribution.
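To illustrate how a pretest-posttest control group design is often analysed, one common approach is to compare how much each group changed from pretest to posttest. The scores below are invented purely for the sketch:

```python
from statistics import mean

def mean_change(pre, post):
    """Average improvement from pretest to posttest for one group."""
    return mean(b - a for a, b in zip(pre, post))

# Illustrative scores: only the experimental group received the treatment
control_pre, control_post = [52, 48, 50, 55], [53, 49, 51, 54]
treat_pre, treat_post = [51, 47, 53, 49], [60, 58, 61, 57]

# The treatment effect estimate is the difference in average change
effect = mean_change(treat_pre, treat_post) - mean_change(control_pre, control_post)
```

Subtracting the control group’s change removes improvement that would have happened anyway (practice effects, maturation), which is exactly why the control group is pretested too.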

With experimental research, we can test ideas in a controlled environment before marketing. It acts as the best method to test a theory as it can help in making predictions about a subject and drawing conclusions. Let’s look at some of the advantages that make experimental research useful:

  • It gives researchers tight control over variables and the ability to collect precise results.
  • Results are usually specific.
  • It can be applied to virtually any subject matter or industry.
  • Findings from the results usually apply to similar situations and ideas.
  • Cause and effect of a hypothesis can be identified, which can be further analyzed for in-depth ideas.
  • It’s the ideal starting point to collect data and lay a foundation for conducting further research and building more ideas.
  • Medical researchers can develop medicines and vaccines to treat diseases by collecting samples from patients and testing them under multiple conditions.
  • It can be used to improve the standard of academics across institutions by testing student knowledge and teaching methods before analyzing the result to implement programs.
  • Social scientists often use experimental research design to study and test behavior in humans and animals.
  • Software development and testing heavily depend on experimental research to test programs by letting subjects use a beta version and analyzing their feedback.

Even though it’s a scientific method, it has a few drawbacks. Here are a few disadvantages of this research method:

  • Human error is a concern because the method depends on controlling variables. Improper implementation nullifies the validity of the research and conclusion.
  • Eliminating extraneous variables strips away real-life context, which can lead to conclusions that do not hold outside the experiment.
  • The process is time-consuming and expensive.
  • In medical research, it can have ethical implications by affecting patients’ well-being.
  • Results are not descriptive and subjects can contribute to response bias.

Experimental research design is a sophisticated method that investigates relationships or occurrences among people or phenomena under a controlled environment and identifies the conditions responsible for such relationships or occurrences.

Experimental research can be used in any industry to anticipate responses, changes, causes and effects. Here are some examples of experimental research :

  • This research method can be used to evaluate employees’ skills. Organizations ask candidates to take tests before filling a post. It is used to screen qualified candidates from a pool of applicants. This allows organizations to identify skills at the time of employment. After training employees on the job, organizations further evaluate them to test impact and improvement. This is a pretest-posttest control group research example where employees are ‘subjects’ and the training is ‘treatment’.
  • Educational institutions follow the pre-experimental research design when they administer exams and evaluate students at the end of a semester. Students’ performance is the dependent variable and the lectures are the independent variable. Since exams are conducted at the end and not the beginning of a semester, this is a one-shot case study.
  • To evaluate the teaching methods of two teachers, each can be assigned a student group. After they teach their respective groups the same topic, a posttest can determine which group scored better and who is better at teaching. This method can have drawbacks, as certain human factors, such as students’ attitudes and ability to grasp a subject, may negatively influence results.

Experimental research is considered a standard method that uses observations, simulations and surveys to collect data. One of its unique features is the ability to control extraneous variables and their effects. It’s a suitable method for those looking to examine the relationship between cause and effect in a field setting or in a laboratory. Although experimental research design is a scientific approach, research is not entirely a scientific process. As much as managers need to know what is experimental research , they have to apply the correct research method, depending on the aim of the study.


Green Garage

8 Main Advantages and Disadvantages of Experimental Research

Commonly used in sciences such as sociology, psychology, physics, chemistry, biology and medicine, experimental research is a collection of research designs which make use of manipulation and controlled testing in order to understand causal processes. To determine the effect on a dependent variable, one or more variables need to be manipulated.

Experimental research is used where:

  • there is time priority in a causal relationship (the cause precedes the effect).
  • there is consistency in a causal relationship.
  • the magnitude of the correlation is great.

In the strictest sense, experimental research is called a true experiment. This is where a researcher manipulates one variable and controls or randomizes the rest of the variables. The study involves a control group, and the subjects are randomly assigned between groups. A researcher only tests one effect at a time. The variables that need to be tested and measured should be known beforehand as well.

Another way experimental research can be defined is as a quasi-experiment, where scientists actively influence something in order to observe the consequences.

The aim of experimental research is to predict phenomena. In most cases, an experiment is constructed so that some kind of causation can be explained. Experimental research is helpful for society as it helps improve everyday life.

When a researcher decides on a topic of interest, they try to define the research problem, which narrows the research area and makes it easier to study appropriately. Once the research problem is defined, the researcher formulates a research hypothesis, which is then tested against the null hypothesis.

In experimental research, sampling groups play a huge part and should therefore be chosen correctly, especially if there is more than one condition involved in the experiment. One of the sample groups usually serves as the control group while the others are used for the experimental conditions. Determination of sampling groups is done through a variety of ways, and these include:

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling

Being able to reduce sampling errors is important when researchers want to get valid results from their experiments. As such, researchers often make adjustments to the sample size to lessen the chances of random errors.
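Two of the schemes listed above — simple random sampling and stratified sampling — can be sketched directly in code. The population and age strata here are hypothetical, chosen only to make the illustration concrete:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Every member has an equal chance of selection."""
    return random.Random(seed).sample(population, n)

def stratified_sample(strata, n_per_stratum, seed=None):
    """Draw a fixed number of subjects from each stratum so every
    subgroup is represented by design rather than by chance."""
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

# Hypothetical population stratified by age band
strata = {
    "18-30": [f"A{i}" for i in range(100)],
    "31-50": [f"B{i}" for i in range(100)],
    "51+":   [f"C{i}" for i in range(100)],
}
sample = stratified_sample(strata, n_per_stratum=10, seed=1)
```

Stratifying guarantees that no subgroup is missed, which is one way of reducing the sampling errors mentioned above.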

All this said, what are the popular examples of experimental research?

Stanley Milgram Experiment – Conducted to determine whether people obey orders, even when doing so is clearly dangerous. It was created to explain why so many people were slaughtered by the Nazis during World War II: the killings were carried out after orders were given, and many accused war criminals claimed they were just following orders and were therefore not responsible for their actions.

Law of Segregation – based on the Mendel Pea Plant Experiment, performed in the 19th century. Gregor Mendel was an Austrian monk who studied at the University of Vienna. He knew nothing about the mechanisms behind inheritance, but discovered rules about how characteristics are passed down through generations. Mendel was able to generate testable rather than merely observational data.

Ben Franklin Kite Experiment – it is believed that Benjamin Franklin discovered electricity by flying his kite into a storm cloud and thereby receiving an electric shock. This isn’t necessarily true, but the kite experiment was a major contribution to physics as it increased our knowledge of natural phenomena.

But just like any other type of research, there are certain sides who are in support of this method and others who are on the opposing side. Here’s why that’s the case:

List of Advantages of Experimental Research

1. Control over variables. This kind of research controls independent variables so that extraneous and unwanted variables are removed.

2. Determination of cause-and-effect relationships is easy. Because of its design, this kind of research manipulates variables so that a cause-and-effect relationship can be easily determined.

3. Provides better results. When performing experimental research, there are specific control set-ups as well as strict conditions to adhere to. With these two in place, better results can be achieved. With this kind of research, the experiments can be repeated and the results checked again. Getting better results also gives a researcher a boost of confidence.

Other advantages of experimental research include gaining insights into instruction methods, combining methods for added rigor, determining what works best for the people involved and providing great transferability.

List of Disadvantages of Experimental Research

1. Can’t always do experiments. Several issues, such as ethical or practical concerns, can hinder an experiment from ever getting started. For one, not every variable that can be manipulated should be.

2. Creates artificial situations. Experimental research also means controlling irrelevant variables on certain occasions. As such, this creates a situation that is somewhat artificial.

3. Subject to human error. Researchers are human too, and they can make mistakes. However, whether the error was made by machine or man, one thing remains certain: it will affect the results of a study.

Other issues cited as disadvantages include personal biases, unreliable samples, results that can only be applied in one situation and the difficulty in measuring the human experience.

Also cited as a disadvantage is that the results of the research can’t be generalized to real-life situations. In addition, experimental research takes a lot of time and can be really expensive.

4. Participants can be influenced by the environment. Those who participate in trials may be influenced by the environment around them. As such, they might give answers not based on how they truly feel but on what they think the researcher wants to hear. Rather than thinking through what they feel and think about a subject, a participant may just go along with what they believe the researcher is trying to achieve.

5. Manipulation of variables isn’t seen as completely objective. Experimental research mainly involves the manipulation of variables, a practice that isn’t seen as being objective. As mentioned earlier, researchers are actively trying to influence variables so that they can observe the consequences.

Experimental Method

A key aim of psychology is to learn and understand more about psychological phenomena. This is usually done through a process called the experimental method . The experimental method in psychology research attempts to investigate the cause-and-effect relationship between variables . The crucial aspect of the experimental method is that it follows a 'scientific routine' to increase the chances of establishing valid and reliable results. As you can expect with all kinds of research, there are many advantages and disadvantages of the experimental method in psychology research. 



The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables .

  • To begin our learning of the experimental method, we will start with a quick recap covering the elements that make up research.
  • We will then move on to discuss the experimental method in psychology research.
  • To finish off, we will discuss the advantages and disadvantages of the experimental method in psychology.


Experimental Method of Research

Before we get into the experimental method in psychology research, let's take a quick look at the basic components that make up scientific research.

Hypotheses and variables

The hypothesis is an important component of research. The hypothesis is formed at the start of an experiment with the purpose of stating what the researcher expects to find in their study. The hypothesis is important because it is used to identify if the results support or negate psychological theories.

The hypothesis is a specific, testable statement about the expected outcomes after comparing two (or more) variables.

The hypothesis needs to state the variables being investigated in the research.

The independent variable (IV) is the variable that the researcher manipulates/changes in their study. The researcher believes that changes in the IV will cause a change in the dependent variable (DV).

The DV is the variable that is being observed and measured. The DV is thought of as the effect that is caused by the changes in the IV.

There are other types of variables, such as extraneous variables, participant variables and situational variables. These are variables that may cause changes in the DV. Ideally, psychology research that follows the scientific method should control for these types of variables. However, it is next to impossible to control for every potential variable that might affect the DV.

Stages of the experimental method in psychology research

The experimental method has a standardised procedure with several fixed steps, usually carried out in a lab setting.

  • Identify the topic of interest/research and form a hypothesis.
  • Identify the IV(s) and DV(s), determine the design and type of experiment, and determine how to measure the IV and DV, e.g. self-report measure, observations, etc.
  • Prepare the materials needed in the study and recruit participants through an appropriate sampling method.
  • Conduct the experiment in a carefully planned scientific manner, collect the data and statistically analyse the results.
  • Write up the lab report, evaluate the study and give suggestions for further research.

Experimental Method Example

A hypothetical study has been described below to show how the experimental method is used in psychology research.

  • The researchers researched previously published work on the effects of caffeine on reaction times. The researchers hypothesised that drinking caffeine would affect reaction times based on the previous findings.
  • The researchers identified reaction time as the DV and caffeine as the IV; they decided to carry out the study in a lab setting.
  • The next stage involved preparing a test that measured reaction times, and participants were randomly assigned to three groups (drink with high levels of caffeine, drink with low levels of caffeine and no caffeine).
  • The study was then carried out in a manner to prevent the reliability and validity of the study from being lowered.
  • The results of the study were written up in the correct psychology format.
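A first-pass analysis of the hypothetical caffeine study might simply summarise each condition's mean reaction time; a real study would follow up with a formal statistical test such as a one-way ANOVA. The data below are invented for illustration only:

```python
from statistics import mean, stdev

# Invented reaction times in milliseconds for the three conditions
reaction_times = {
    "high_caffeine": [240, 255, 248, 252, 245],
    "low_caffeine":  [265, 270, 262, 268, 266],
    "no_caffeine":   [290, 285, 295, 288, 292],
}

# Mean and standard deviation per condition, rounded for reporting
summary = {cond: (round(mean(ts), 1), round(stdev(ts), 1))
           for cond, ts in reaction_times.items()}
```

If the group means differ more than their spread would suggest by chance, that is evidence that the IV (caffeine) affected the DV (reaction time).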


The Features of the Experimental Method

There are three essential requirements of research that follows the experimental method.

We will now discuss each of these and identify how researchers can try and make sure that their research meets these requirements.

Research needs to be considered empirical. Empirical research means that the findings should be reflective of objective facts that the researcher has observed rather than their subjective opinion.

The next requirement, reliability, is important as it makes sure that research findings are consistent across time, in different situations, settings and when applied to other people. When research is found to be reliable, then it is thought that the research findings are representative of the population and can be applied to real-life settings.

Reliability refers to how consistent the results of an experiment are. If the results are similar when the same procedure has been carried out on different occasions, settings or using different participants, then the findings will be considered reliable.

Rigorously re-running the same study using the same methodology, but on different days, in different settings, at different times or with different samples, is used to identify whether a study is reliable.
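Test-retest reliability of this kind is commonly quantified with a correlation coefficient between the two sets of scores. A from-scratch sketch with invented scores (a coefficient near 1 suggests the results are consistent across occasions):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Invented scores from the same test given to the same people twice
day1 = [10, 14, 8, 12, 16]
day2 = [11, 15, 9, 11, 17]
reliability = pearson_r(day1, day2)
```

A low coefficient would suggest the measure is inconsistent, so its findings could not be trusted to generalise across occasions.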

The third requirement of research that follows the experimental method is validity.

Validity is how well a test measures what it intends to.

Validity is important because if the researcher is not in fact measuring what they claim they are measuring, then the results are not accurate and cannot be accurately interpreted or applied. For example, if a test claims it measures personality type but instead measures emotion level, it cannot be a valid test.

The researcher should ensure that their subjective opinion does not influence the research methodology and analysis to ensure research is valid. Researchers can try and combat this through:

Random allocation: Participants are randomly assigned to the experimental or control group; this is used to ensure that individual differences do not cause the results.

Single/Double-blind technique: The researcher is unaware of which experimental condition the participants are in. This prevents the researcher from giving subconscious hints that may influence the participants' behaviour.

Studies that do not use this may measure participants' artificial responses, so the results may not be considered valid.


Types of Experimental Method in Psychology: Experimental Designs

The allocation of participants to experimental/control conditions is important to ensure that a study is valid. The experimental design is the way the participants are split into the different conditions/groups of the IV. There are different types of experimental designs.

The independent groups design (IGD)

The IGD is when different participants are assigned to each condition.

When investigating the effect of sleep on reaction times, if using an IGD you would have one group with less sleep (4 hours) and one group with more sleep (11 hours), and the results between the two groups would be compared.

The advantages of this design are that it is less time-consuming than the alternative methods. As different participants are used for each condition, there is less chance of participants guessing the hypothesis and altering their behaviour, and order effects are not an issue.

However, the disadvantages of this design are that the researcher needs to recruit more participants compared to the other designs. Moreover, there is an increased chance of individual differences influencing the results.

The repeated measures design (RMD)

The RMD is when the same participants are used in all of the conditions.

RMD may be used when investigating if participants are better at memorising information from educational videos or from reading books. The study would involve testing memory after watching an educational video and after reading a book. Each participant would be tested in both conditions.

The advantages of this design are that individual differences will not influence the results of the study as each participant is tested in both conditions and fewer participants may be required to be recruited in comparison to IGD.

In contrast, a disadvantage of this design is that there is a higher risk of order effects influencing the results. This is the idea that the order of conditions tested may influence the study's results.
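Order effects are typically handled by counterbalancing: varying the order in which participants meet the conditions so that no single order dominates. A minimal sketch (the participant and condition names are illustrative):

```python
from itertools import permutations

def counterbalanced_orders(conditions, participants):
    """Cycle through every ordering of the conditions across participants,
    so order effects average out rather than biasing one condition."""
    orders = list(permutations(conditions))
    return {p: orders[i % len(orders)]
            for i, p in enumerate(participants)}

# Hypothetical repeated measures study: video vs book condition
schedule = counterbalanced_orders(["video", "book"],
                                  [f"P{n}" for n in range(1, 7)])
```

With two conditions there are only two possible orders, so half the participants start with the video and half with the book.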

The matched pairs design (MPD)

The MPD is when participants in each condition are matched on specific variables relevant to the study, e.g. gender, age, IQ, etc.

The advantages of the MPD are that there is no chance of order effects since each condition has a different set of participants and there is less risk of individual differences affecting the results since participants have been matched on such variables.

The disadvantages of this design are that matching participants may be a difficult, costly and time-consuming process.

Advantages and Disadvantages of the Experimental Method in Psychology

Let's move on to discuss the advantages and disadvantages of the experimental method as a whole.

Advantages of the experimental method

  • The experimental method gives researchers a high level of control since they choose the IVs and DVs, how to measure them, and the procedure of the study. This means the studies are likely to be high in validity.
  • Because of the standardised procedures, experiments can be replicated and their reliability can be tested.

The experimental method allows cause and effect relationships to be determined, which is the goal of psychological research.

The conclusions of these experiments allow useful applications to the real world.

Disadvantages of the experimental method

  • Results are subject to human error and subjectivity, e.g. researcher bias, social desirability bias, order effects, etc., and so it can be difficult to strictly adhere to the experimental method.
  • The procedure of the experimental method can be time-consuming and costly.
  • Can include practical problems, e.g. some variables may be hard to manipulate or measure.
  • Extraneous variables sometimes can't be controlled, which can lower the validity of a study and its results.
  • Participants' behaviour can be influenced by the researcher or the conditions of the experiment.

The Experimental Method - Key Takeaways

  • The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables.
  • There are three important requirements of scientific research that follow the experimental method; these are that research should be empirical, reliable and valid.
  • The experimental designs used in psychology research are the independent measures design, repeated measures design and matched pairs design.
  • There are advantages and disadvantages of the experimental method in psychology.


Frequently Asked Questions about Experimental Method

What are the five steps in the experimental method?

The five steps of the experimental method are:

  • Identify the topic of interest and form a hypothesis.
  • Identify the IV(s) and DV(s), and determine the design and measures of the study.
  • Prepare the materials and recruit participants through an appropriate sampling method.
  • Conduct the experiment, collect the data and statistically analyse the results.
  • Write up the report, evaluate the study and give suggestions for further research.

Who used the experimental method?

Famous examples of the experimental method in psychology research include Loftus and Palmer's (1974) experiment on the accuracy of eyewitness testimony, Asch's (1951) conformity study, and Milgram's (1963) obedience experiment.

What is the quasi-experimental method?

The quasi-experimental method is similar to the experimental method in that it tests how changes in the independent variable affect the dependent variable. 

The difference between the two is that quasi-experimental methods do not randomly assign participants to control and experimental groups, whereas the experimental method does.

What is the experimental method of psychology?

The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables. 

What are the main advantages of the experimental method?

The main advantages of the experimental method are:

  • The experimental method gives researchers a high level of control, which means the studies are likely to be high in validity.
  • Because of the standardised procedures, experiments can be replicated, and their reliability can be tested.
  • The experimental method allows cause and effect relationships to be determined.
  • The conclusions of these experiments allow useful applications to the real world.


Chapter 1: Introduction to Lifespan Development

Experimental Research

The goal of the experimental method is to provide more definitive conclusions about the causal relationships among the variables in a research hypothesis than what is available from correlational research. Experiments are designed to test hypotheses, or specific statements about the relationship between variables. Experiments are conducted in a controlled setting in an effort to explain how certain factors or events produce outcomes. A variable is anything that changes in value. In the experimental research design, the variables of interest are called the independent variable and the dependent variable. The independent variable in an experiment is the causing variable that is created or manipulated by the experimenter. The dependent variable in an experiment is a measured variable that is expected to be influenced by the experimental manipulation.

A good experiment randomly assigns participants to at least two groups that are compared. The experimental group receives the treatment under investigation, while the control group does not receive the treatment the experimenter is studying, serving as a comparison. For instance, to assess whether violent TV affects aggressive behavior, the experimental group might view a violent television show, while the control group watches a non-violent show. Additionally, experimental designs control for extraneous variables, or variables that are not part of the experiment but could inadvertently affect either the experimental or control group, thus distorting the results.

Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more importantly, some of the most interesting and key social variables cannot be experimentally manipulated because of ethical concerns. If we want to study the influence of abuse on children’s development of depression, these relationships must be assessed using correlational designs because it is simply not ethical to experimentally manipulate these variables. Characteristics of descriptive, correlational, and experimental research designs can be found in Table 1.5.

Table 1.5. Characteristics of descriptive, correlational, and experimental research designs

  • Descriptive
    Goal: To create a snapshot of the current state of affairs.
    Strengths: Provides a relatively complete picture of what is occurring at a given time. Allows the development of questions for further study.
    Limitations: Does not assess relationships among variables. May be unethical if participants do not know they are being observed.

  • Correlational
    Goal: To assess the relationships between and among two or more variables.
    Strengths: Allows testing of expected relationships between and among variables and the making of predictions. Can assess these relationships in everyday life events.
    Limitations: Cannot be used to draw inferences about the causal relationships between and among the variables.

  • Experimental
    Goal: To assess the causal impact of one or more experimental manipulations on a dependent variable.
    Strengths: Allows drawing of conclusions about the causal relationships among variables.
    Limitations: Cannot experimentally manipulate many important variables. May be expensive and time-consuming.

Source: Stangor, C. (2011). (4th ed.). Mountain View, CA: Cengage.

  • Authored by : Martha Lally and Suzanne Valentine-French. Provided by : College of Lake County Foundation. Located at : http://dept.clcillinois.edu/psy/LifespanDevelopment.pdf . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Rigor and Reproducibility in Experimental Design: Common Flaws

Overview (teaching: 20 min, exercises: 15 min)

Questions

  • What are some common features of poor experimental design?
  • What are some consequences of poor experimental design?

Objectives

  • Differentiate between technical and biological replicates.
  • Describe what could happen to an experiment if technical replicates are used in place of biological replicates.
  • Define confounding factors and describe their impact on a study.

Costs of poor design

Scientific research advances when scientists can corroborate others’ results instead of pursuing false leads. Too often, though, published studies can’t be reproduced or replicated, and the self-correcting nature of science falters. Some of these problems are entirely preventable through thoughtful and well-informed study design.

Time and money

According to Chalmers and Glasziou, more than 85% of the dollars invested in research are lost annually to avoidable problems. Poorly designed studies are responsible for some of this waste. Given limited research funding, it is imperative that experimental design, study quality, and reproducibility be prioritized so that research findings help to build stable and reliable scientific knowledge.

Ethical considerations

In animal studies, poor design often uses too many animals and is wasteful, or uses too few animals to obtain meaningful results. Poorly designed preclinical trials can lead to clinical trials involving humans with a shaky foundation of research findings. This is related to statistical power and inadequate sample sizes.

Underpowered studies

Studies that lack power (sensitivity) lack the ability to detect experimental effects. The lowest power we should accept is an 80% chance of detecting an effect.
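The 80% power rule of thumb can be turned into a rough sample-size estimate. Below is a minimal sketch, not part of the original lesson, using the standard normal approximation for a two-sided, two-sample comparison of means; the function name `n_per_group` and the effect sizes shown are illustrative.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (d = 0.5) at 80% power needs ~63 per group;
# larger effects need fewer subjects.
print(n_per_group(0.5))
print(n_per_group(0.8))
```

Note how quickly the required sample size grows as the effect size shrinks; underpowered studies usually come from optimistic effect-size guesses.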

Confounding factors

Confounding factors, or confounders, are a third factor influencing the relationship between independent and dependent variables. A confounder is a variable not accounted for in the design, yet one that exerts either a small or large effect on the dependent (response) variable. Such variables increase variance and bias in the study. For example, one design for the Salk polio vaccine trials considered vaccinating only those children whose parents consented to the vaccination, and leaving children from non-consenting parents as an unvaccinated control group. However, wealthier parents were more likely to consent to vaccinating their children, so socioeconomic status would have been introduced as a confounding factor. The effect of socioeconomic status would have confounded or “mixed up” the effect of the treatment (the vaccination). In another example, a study that only investigates the effect of activity level (active versus sedentary) on the weight of mice excludes several factors that are known to affect weight. These missing factors (such as age and sex) are confounding variables. Age and sex both have an effect on weight that is unaccounted for by the study; thus, the variation attributed to activity level cannot be accurately measured, because the variance known to be associated with the missing (confounding) variables is never accounted for.
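The mouse-weight example above can be simulated. The sketch below is hypothetical (all numbers and effect sizes are invented for illustration): sex both raises weight and makes mice more likely to be sedentary, so a naive active-versus-sedentary comparison overstates the small true effect of activity level, while stratifying by the confounder recovers it.

```python
import random

random.seed(0)

# Hypothetical population: males are heavier AND more likely to be sedentary,
# so sex confounds the activity/weight relationship.
mice = []
for _ in range(10_000):
    male = random.random() < 0.5
    sedentary = random.random() < (0.8 if male else 0.2)  # sex drives exposure
    weight = (30 + (8 if male else 0)                      # sex effect: +8 g
              + (1 if sedentary else 0)                    # true effect: +1 g
              + random.gauss(0, 1))
    mice.append((male, sedentary, weight))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison ignores sex, so the sex effect leaks into the estimate.
naive = (mean([w for _, s, w in mice if s])
         - mean([w for _, s, w in mice if not s]))

# Stratifying by the confounder recovers an estimate near the true +1 g.
within_sex = mean([
    mean([w for m, s, w in mice if m == g and s])
    - mean([w for m, s, w in mice if m == g and not s])
    for g in (True, False)
])
print(f"naive: {naive:.1f} g, stratified by sex: {within_sex:.1f} g")
```

The naive estimate mixes the sex effect into the activity effect, exactly the "mixing up" described above.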

Incorrect randomization

A well-designed experiment avoids confounding from known and unknown influences. Randomization creates comparable groups, which are alike in all characteristics except for the treatment under study. Randomization eliminates selection bias, balances the groups, and forms the basis for statistical tests. Poor randomization introduces confounding variables and frustrates attempts to quantify the effect of a treatment. If treatment groups differ with respect to factors other than the treatment under study, the results will be biased. In a clinical trial, for example, if younger participants were assigned the treatment and older participants were in the control group, there would be no way to determine whether the treatment had an effect or if the participants’ age had an effect. In a study involving mice, if all the males were treated by one technician and all the females by another, it would be difficult to disambiguate the effect of the treatment from the effect of sex or the effect of the technician.
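Random assignment itself is simple to implement. The following is a minimal illustrative sketch (the function name and group labels are invented): shuffling the subject list before dealing it out round-robin gives unpredictable assignments with (near-)equal group sizes.

```python
import random

def randomize(subject_ids, groups=("treatment", "control"), seed=42):
    """Randomly assign subjects to groups in (near-)equal numbers.

    Shuffling then dealing round-robin balances group sizes while keeping
    the assignment of any particular subject unpredictable.
    """
    ids = list(subject_ids)
    rng = random.Random(seed)   # fixed seed so the allocation is auditable
    rng.shuffle(ids)
    return {g: ids[i::len(groups)] for i, g in enumerate(groups)}

allocation = randomize(range(1, 21))
print(allocation)
```

Because every subject has the same chance of landing in either group, known and unknown characteristics (age, sex, technician, and so on) are balanced in expectation rather than systematically confounded with the treatment.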

Failure to blind wherever possible

Blinding ensures that neither the investigator nor the staff knows what treatment a specific subject has received. Since investigators and staff have a stake in the outcome of experiments, a robust design keeps the treatment hidden so that outcomes can't be influenced. Double-blind trials blind both the investigator and the participants to the treatment, so that no one knows who is and is not receiving it. Double-blind clinical trials also help control for the placebo effect. Failure to blind leads to biased and unreliable results.

Pseudoreplication

Pseudoreplication occurs when researchers artificially inflate the number of replicates by repeatedly taking measurements from the same subject or sample. For example, repeatedly measuring the blood pressure of participants in a hypertension study will yield very similar results for each individual, because the measurements are dependent on one another, specifically on the overall health, genetics, and baseline blood pressure of each participant. Because measurements from the same participant are not independent, they might produce differences that appear statistically significant but in fact are not. A simple approach to correct this is to average the measurements for each individual, and to use the average as a single data point. It’s important to have the same number of measurements for each individual, however, so that the averages are comparable.
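The averaging fix described above can be sketched as follows (the blood-pressure readings are invented for illustration): collapse each participant's repeated readings into one value, so the effective sample size reflects independent subjects rather than dependent measurements.

```python
from statistics import mean

# Hypothetical repeated blood-pressure readings (mmHg); readings within a
# participant are dependent, so each participant contributes ONE data point.
readings = {
    "p1": [142, 145, 141],
    "p2": [118, 121, 119],
    "p3": [133, 130, 134],
}

# Collapse repeated measurements to a single value per subject.
per_subject = {pid: mean(vals) for pid, vals in readings.items()}
print(per_subject)        # three independent data points, not nine
print(len(per_subject))   # sample size for analysis: 3, not 9
```

Treating all nine readings as independent would triple the apparent sample size and understate the standard error, which is exactly the pseudoreplication error.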

Technical versus Biological Replicates

Technical replicates are measurements taken on the same sample. Biological replicates are measurements taken on different samples (one per sample). Technical replicates do not convey biological variation in the data, as the difference between technical replicates in a sample measures “technical” variation, such as instrument settings, technician skill, and environmental effects. Biological replicates differ from technical replicates in that differences seen between samples tend to be mostly biological. If, for example, different technicians worked on measuring the biological samples, it is possible that a technician effect can be accounted for in the model by evaluating a technician batch effect. The key to understanding replicates is to identify the source of the variation that you are attempting to measure. Are you attempting to quantify the accuracy of the measuring tool or procedure from one measurement to the next? If so, then this is a technical replicate. Are you attempting to quantify the difference between one mouse and another? If so, this is a biological replicate.

A defining difference between biological and technical replicates is whether a particular measurement is taken once or multiple times on an individual sample. A biological replicate is a single measurement, whereas technical replicates are taken in multiples. For example, a blood pressure measurement on a drug-treated mouse is a biological measurement; it is taken with the intent of identifying a difference between that sample and other samples from different sample groups (e.g., blood pressure measurements between males and females; each individual’s blood pressure measurement is a biological replicate). If the blood pressure measurement was done repeatedly on the same mouse (at the same time), then the measurements are referred to as technical replicates. Those measurements are done on the same biological unit (mouse) and are not biologically different; rather, they differ only due to technical variation (e.g., instrument error, which could cause slight changes to blood pressure measurements). Technical replicates convey how consistent repeated measurements on a particular mouse are. Measurements taken on the same mouse but at different times (a longitudinal study) are considered technical replicates, as they are done on the same mouse, yet are used to measure the effect of time (and are analyzed via specific algorithms that account for this unique experimental design).

As an example, if I were to weigh myself on a bathroom scale, record the measurement, then repeatedly weigh myself and record the measurement each time, the measurements might differ from one instance to the next. I could determine the variation of the bathroom scale by averaging all technical replicates and finding the difference of each measurement from this average. Manufacturing of measurement instruments like bathroom scales is never perfect, so there will be technical variation in measurements. In contrast, if I were to measure my own weight and a friend did the same, my weight and my friend’s weight are independent of one another. This would be an example of a biological replicate.


Proper use of Technical Replicates

When working with technical replicates, the model should reflect their presence, because each replicate contributes to the overall error in the model. Technical replicates are not independent biological replicates; if they are treated as biological replicates, the degrees of freedom are inflated and the standard error is deflated. Such a mistake distorts the fundamental statistics used in regression analysis and leads to inaccurate results. To account for this type of error, the subject (or sample number/ID) can be used as a random model term, or, alternatively, the technical replicates can be collapsed (averaged). If you treat the biological subject as a random effect, the mixed-model tests for all treatments and other effects are identical to what you get if you average the technical replicates.

Some notable retractions

Re-analyses of published works have become much more common, resulting in more paper retractions. Retraction Watch catalogs retractions in the scientific literature. Retractions can happen in the most high profile journals and from some of the most esteemed investigators. Often the reasons for retraction are lack of reproducibility. Here are a few examples, along with excerpts from author retraction statements.

Huang, W et al. DDX5 and its associated lncRNA Rmrp modulate TH17 cell effector functions. Nature 528.7583 (2015): 517.

An excerpt from the last author’s retraction notice:

“In follow-up experiments to this Article, we have been unable to replicate key aspects of the original results. Most importantly, an RNA-dependent physical association of RORγt and DDX5 cannot be reproduced and is not substantiated upon further analysis of the original data. The authors therefore wish to retract the Article. We deeply regret this error and apologize to our scientific colleagues.”

2009 Nobel Laureate Jack W. Szostak retracted a 2016 paper in Nature Chemistry that explored the origins of life on earth, after discovering the main conclusions were not correct. A member of Szostak’s lab, Tivoli Olsen, could not reproduce the 2016 findings. Szostak told Retraction Watch:

“In retrospect, we were totally blinded by our belief [in our findings]…we were not as careful or rigorous as we should have been (and as Tivoli was) in interpreting these experiments…The only saving grace is that we are the ones who discovered and corrected our own errors, and figured out what was going on. ”

Harvard stem cell biologist Douglas Melton retracted a 2013 paper in Cell that had garnered significant attention after other researchers attempted and failed to replicate his results. Dr. Melton told Retraction Watch that “more attention to the statistical strength is a lesson that I’ve learned … When we repeated our original experiments with a larger number of mice, we also failed to observe β-cell expansion upon Angptl-8/betatrophin overexpression and reported these results in a Correspondence (Cell, 2014, 159, 467–468). We have subsequently repeated a series of blinded experiments with the Kushner lab and have now determined conclusively that our conclusion that Angptl-8/betatrophin causes specific β-cell replication is wrong and cannot be supported (PLoS One, 2016, 11, e0159276). Therefore, the most appropriate course of action is to retract the paper. We regret and apologize for this mistake.”

Discussion: Harsh consequences

In the examples above, what might the authors have done to avoid an embarrassing and difficult retraction? Which features of poor experimental design likely caused the retractions?

Key Points

  • When designing an experiment, use biological replicates.
  • Choose a single representative value (the mean, median, or mode) for technical replicates.
  • Poor study design can lead to waste and insignificant results.


A key aim of psychology is to learn and understand more about psychological phenomena. This is usually done through a process called the experimental method. The experimental method in psychology research attempts to investigate the cause-and-effect relationship between variables. The crucial aspect of the experimental method is that it follows a 'scientific routine' to increase the chances of establishing valid and reliable results. As you can expect with all kinds of research, there are many advantages and disadvantages of the experimental method in psychology research.


The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables .

  • To begin our learning of the experimental method, we will start with a quick recap covering the elements that make up research.
  • We will then move on to discuss the experimental method in psychology research.
  • To finish off, we will discuss the advantages and disadvantages of the experimental method in psychology.


Experimental Method of Research

Before we get into the experimental method in psychology research, let's take a quick look at the basic components that make up scientific research.

Hypotheses and variables

The hypothesis is an important component of research. The hypothesis is formed at the start of an experiment with the purpose of stating what the researcher expects to find in their study. The hypothesis is important because it is used to identify if the results support or negate psychological theories.

The hypothesis is a specific, testable statement about the expected outcomes after comparing two (or more) variables.

The hypothesis needs to state the variables being investigated in the research.

The independent variable (IV) is the variable that the researcher manipulates/changes in their study. The researcher believes that changes in the IV will cause a change in the dependent variable (DV).

The DV is the variable that is being observed and measured. The DV is thought of as the effect that is caused by the changes in the IV.

There are other types of variables, such as extraneous variables, participant variables and situational variables. These are variables that may cause unwanted changes in the DV. Ideally, psychology research that follows the scientific method should control these variables; however, it is next to impossible to control for every potential variable that could affect the DV.

Stages of the experimental method in psychology research

The experimental method follows a standardised procedure with several fixed steps, usually carried out in a lab setting.

  • Identify the topic of interest/research and form a hypothesis.
  • Identify the IV(s) and DV(s), determine the design and type of experiment, and determine how to measure the IV and DV, e.g. self-report measure, observations, etc.
  • Prepare the materials needed in the study and recruit participants through an appropriate sampling method.
  • Conduct the experiment in a carefully planned scientific manner, collect the data and statistically analyse the results.
  • Write up the lab report, evaluate the study and give suggestions for further research.

Experimental Method Example

A hypothetical study has been described below to show how the experimental method is used in psychology research.

  • The researchers reviewed previously published work on the effects of caffeine on reaction times. Based on the previous findings, they hypothesised that drinking caffeine would affect reaction times.
  • The researchers identified reaction time as the DV and caffeine as the IV; they decided to carry out the study in a lab setting.
  • The next stage involved preparing a test that measured reaction times, and participants were randomly assigned to three groups (drink with high levels of caffeine, drink with low levels of caffeine and no caffeine).
  • The study was then carried out carefully so that the reliability and validity of the study were not compromised.
  • The results of the study were written up in the correct psychology format.


The Features of the Experimental Method

There are three essential requirements of research that follows the experimental method.

We will now discuss each of these and identify how researchers can try and make sure that their research meets these requirements.

Research needs to be considered empirical. Empirical research means that the findings should be reflective of objective facts that the researcher has observed rather than their subjective opinion.

The next requirement, reliability, is important as it makes sure that research findings are consistent across time, in different situations, settings and when applied to other people. When research is found to be reliable, then it is thought that the research findings are representative of the population and can be applied to real-life settings.

Reliability refers to how consistent the results of an experiment are. If the results are similar when the same procedure has been carried out on different occasions, settings or using different participants, then the findings will be considered reliable.

Testing the same study rigorously using the same methodology, but on different days, in different settings, at different times or with different samples, identifies whether a study is reliable.

The third requirement of research that follows the experimental method is validity.

Validity is how well a test measures what it intends to.

Validity is important because if the researcher is not in fact measuring what they claim they are measuring, then the results are not accurate and cannot be accurately interpreted or applied. For example, if a test claims it measures personality type but instead measures emotion level, it cannot be a valid test.

The researcher should ensure that their subjective opinion does not influence the research methodology and analysis to ensure research is valid. Researchers can try and combat this through:

Random allocation : Participants are randomly assigned to the experimental or control group; this is used to ensure that individual differences do not cause the results.

Single/double-blind techniques: In a single-blind study, participants do not know which experimental condition they are in; in a double-blind study, the researcher administering the experiment is also unaware of which condition participants are in. This prevents the researcher from giving subconscious hints that may influence the participants' behaviour.

Studies that do not use this may measure participants' artificial responses, so the results may not be considered valid.
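One practical way to implement blinding is to hide treatments behind neutral subject codes, with the unblinding key stored separately. The sketch below is hypothetical (the function `blind_codes` and the labels are invented for illustration):

```python
import random

def blind_codes(n_subjects, treatments=("drug", "placebo"), seed=3):
    """Assign coded labels so that neither staff nor participants can infer
    the treatment from the label (double-blind sketch).

    Returns the public codes and the unblinding key, which would be kept
    sealed until the analysis is complete.
    """
    rng = random.Random(seed)
    # Equal numbers of each treatment, then shuffled so order reveals nothing.
    assigned = [treatments[i % len(treatments)] for i in range(n_subjects)]
    rng.shuffle(assigned)
    codes = [f"S{i:03d}" for i in range(1, n_subjects + 1)]
    key = dict(zip(codes, assigned))   # unblinding key, stored separately
    return codes, key

codes, key = blind_codes(6)
print(codes)
```

Everyone running the study works only with the codes; the key is consulted only after data collection, so neither the researcher nor the participants can bias the results.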


Types of Experimental Method in Psychology: Experimental Designs

The allocation of participants to experimental/control conditions is important to ensure that a study is valid. The experimental design is the way participants are split into the different conditions/groups of the IV. There are different types of experimental designs.

The independent groups design (IGD)

The IGD is when different participants are assigned to each condition.

When investigating the effect of sleep on reaction times, if using an IGD you would have one group with less sleep (4 hours) and one group with more sleep (11 hours), and the results between the two groups would be compared.

The advantages of this design are that it is less time-consuming than the alternative methods. As different participants are used for each condition, there is less chance of participants guessing the hypothesis and altering their behaviour, and order effects are not an issue.

However, the disadvantages of this design are that the researcher needs to recruit more participants compared to the other designs. Moreover, there is an increased chance of individual differences influencing the results.

The repeated measures design (RMD)

The RMD is when the same participants are used in all of the conditions.

RMD may be used when investigating if participants are better at memorising information from educational videos or from reading books. The study would involve testing memory after watching an educational video and after reading a book. Each participant would be tested in both conditions.

The advantages of this design are that individual differences will not influence the results, since each participant is tested in both conditions, and fewer participants need to be recruited than in an IGD.

In contrast, a disadvantage of this design is a higher risk of order effects influencing the results, i.e. the order in which the conditions are completed may itself affect performance.
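One standard way to reduce order effects in an RMD is counterbalancing: half the participants complete the conditions in one order, and half in the reverse order, so the effect of order is balanced across the sample. A minimal sketch using the video/book example above (the function name and participant labels are invented for illustration):

```python
def counterbalance(participants, conditions=("video", "book")):
    """Alternate the order of conditions across participants so that
    any order effect cancels out across the sample."""
    schedule = {}
    for i, participant in enumerate(participants):
        # Even-indexed participants get the original order,
        # odd-indexed participants get the reversed order.
        schedule[participant] = conditions if i % 2 == 0 else conditions[::-1]
    return schedule

schedule = counterbalance(["p1", "p2", "p3", "p4"])
# p1 and p3 watch the video first; p2 and p4 read the book first.
```

Counterbalancing does not remove order effects for any individual participant, but it stops them from systematically favouring one condition in the group results.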

The matched pairs design (MPD)

The MPD is when participants in each condition are matched on specific variables relevant to the study, e.g. gender, age, IQ, etc.

The advantages of the MPD are that there is no chance of order effects, since each condition has a different set of participants, and there is less risk of individual differences affecting the results, since participants have been matched on relevant variables.

The main disadvantage of this design is that matching participants can be a difficult, costly and time-consuming process.
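As a sketch of how matching might work in practice (illustrative only; the names, IQ scores, and function are invented), participants can be ranked on the matching variable, paired with their nearest neighbour, and the two members of each pair split across the two conditions:

```python
def matched_pairs(participants):
    """Rank by the matching variable (IQ here), pair adjacent
    participants, and put one member of each pair in each condition.
    (A real study would also randomise which member goes where.)"""
    ranked = sorted(participants, key=lambda p: p["iq"])
    condition_a, condition_b = [], []
    for first, second in zip(ranked[::2], ranked[1::2]):
        condition_a.append(first["name"])
        condition_b.append(second["name"])
    return condition_a, condition_b

people = [{"name": "A", "iq": 112}, {"name": "B", "iq": 98},
          {"name": "C", "iq": 101}, {"name": "D", "iq": 110}]
group_a, group_b = matched_pairs(people)
# Each group now mirrors the other on IQ, so IQ differences are
# unlikely to explain any difference between the conditions.
```

The cost mentioned above is visible even in this toy version: every matching variable the researcher cares about (age, gender, IQ, etc.) requires measuring and ranking all participants before the study can begin.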

Advantages and Disadvantages of the Experimental Method in Psychology

Let's move on to discuss the advantages and disadvantages of the experimental method as a whole.

Advantages of the experimental method

  • The experimental method gives researchers a high level of control since they choose the IVs and DVs, how to measure them, and the procedure of the study. This means the studies are likely to be high in validity.
  • Because of the standardised procedures, experiments can be replicated and their reliability can be tested.

  • The experimental method allows cause and effect relationships to be determined, which is a central goal of psychological research.
  • The conclusions of these experiments can be usefully applied to the real world.

Disadvantages of the experimental method

  • Results are subject to human error and subjectivity (e.g. researcher bias, social desirability bias, order effects), so it can be difficult to adhere strictly to the experimental method.
  • The procedure of the experimental method can be time-consuming and costly.
  • Can include practical problems, e.g. some variables may be hard to manipulate or measure.
  • Extraneous variables sometimes can't be controlled, which can lower the validity of a study and its results.
  • Participants' behaviour can be influenced by the researcher or the conditions of the experiment.

The Experimental Method - Key Takeaways

  • The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables.
  • There are three important requirements of scientific research using the experimental method: research should be empirical, reliable and valid.
  • The experimental designs used in psychology research are the independent groups design, repeated measures design and matched pairs design.
  • There are advantages and disadvantages of the experimental method in psychology.


Frequently Asked Questions about Experimental Method

What are the five steps in the experimental method?

The five steps of the experimental method are:

Who used the experimental method?

Some famous experiments that used the experimental method in psychology are Loftus and Palmer's (1974) study on the accuracy of eyewitness testimony, Asch's (1951) conformity study, and Milgram's (1963) obedience experiment.

What is the quasi-experimental method?

The quasi-experimental method is similar to the experimental method in that it tests how changes in the independent variable affect the dependent variable. 

The difference between the two is that quasi-experimental methods do not randomly assign participants to the control and experimental groups, whereas the true experimental method does.

What is the experimental method of psychology?

The experimental method is a research process that involves following scientific guidelines to test hypotheses and establish causal relationships between variables. 

What are the main advantages of the experimental method?

The main advantages of the experimental method are:

  • It gives researchers a high level of control over the variables, so studies are likely to be high in validity.
  • Because of the standardised procedures, experiments can be replicated, and their reliability can be tested.
  • It allows cause and effect relationships to be determined.

