True, Natural and Field Experiments

An easy lesson idea for learning about experiments.

Travis Dixon September 29, 2016 Research Methodology



There is a difference between a “true experiment,” a “field experiment” and a “natural experiment”. These experimental methods are commonly used in psychological research, and each has its own strengths and limitations.

True Experiments


Berry’s classic study compared two cultures in order to understand how economics, parenting and cultural values can influence behaviour. But what type of method would we call this?

A true experiment is one where:

  • the researcher manipulates the independent variable (IV) and measures its effect on the dependent variable (DV) in a controlled environment,
  • participants have been randomly assigned to a condition (if using independent samples).

Repeated measures designs don’t need random allocation because all participants complete both conditions, so there is no allocation to be done.

One potential issue in laboratory experiments is that they are conducted in environments that are not natural for the participants, so the behaviour might not reflect what happens in real life.

Field Experiments

A field experiment is one where:

  • the researcher conducts an experiment by manipulating an IV,
  • …and measuring the effects on the DV in a natural environment.

They still try to minimize the effects of other variables and to control for these, but it’s just happening in a natural environment: the field.

Natural Experiments

A natural experiment is one where:

  • the independent variable is naturally occurring, i.e., it hasn’t been manipulated by the researcher.

There are many instances where naturally occurring events or phenomena may interest researchers. The issue with natural experiments is that it can’t be guaranteed that it is the independent variable, rather than some other factor, that is having an effect on the dependent variable.


Activity Idea

Students can work with a partner to decide if the following are true, field or natural experiments.

If you can’t decide, what other information do you need?

  • Berry’s cross-cultural study on conformity (Key Study: Conformity Across Cultures (Berry, 1967))
  • Bandura’s Bobo doll study (Key Study: Bandura’s Bobo Doll (1963))
  • Rosenzweig’s rat study (Key Study: Animal research on neuroplasticity (Rosenzweig and Bennett, 1961))

Let’s make it a bit trickier:

  • Key Study: London Taxi Drivers vs. Bus Drivers (Maguire, 2006)
  • Key Study: Evolution of Gender Differences in Sexual Behaviour (Clark and Hatfield, 1989)
  • Key Study: Serotonin, tryptophan and the brain (Passamonti et al., 2012)
  • Saint Helena Study: television was introduced on the island of Saint Helena in the Atlantic Ocean, and the researchers measured the behaviour of the kids before and after TV was introduced.
  • Light Therapy: the researchers randomly assigned patients with depression to three different groups. The three groups received different forms of light therapy to treat depression (red light, bright light, soft light). The lights were installed in the participants’ bedrooms and were timed to come on naturally. The effects on depression were measured via interviews.

What are the strengths and limitations of:

  • True Experiment
  • Field Experiment
  • Natural Experiment

Travis Dixon

Travis Dixon is an IB Psychology teacher, author, workshop leader, examiner and IA moderator.

Experimental Method In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups.

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective: the researcher’s views and opinions should not affect a study’s results. This makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

  • Strength: It is easier to replicate (i.e., copy) a laboratory experiment because a standardized procedure is used.
  • Strength: They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation: The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation: Demand characteristics or experimenter effects may bias the results and become confounding variables.

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength: Behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength: Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation: There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.
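Since the researchers only observe the naturally occurring grouping, the analysis reduces to comparing the means of groups they did not create. Here is a minimal sketch of that fictional example in Python, with invented numbers purely for illustration:

```python
# Hypothetical records: (birth_year, achievement_score). The (fictional)
# funding policy takes effect for students born in 2010 or later.
students = [
    (2008, 61), (2008, 64), (2009, 63), (2009, 60),   # before the policy change
    (2010, 70), (2010, 68), (2011, 72), (2011, 69),   # after the policy change
]

POLICY_YEAR = 2010  # the researcher observes, but does not set, this cutoff

before = [score for year, score in students if year < POLICY_YEAR]
after = [score for year, score in students if year >= POLICY_YEAR]

difference = sum(after) / len(after) - sum(before) / len(before)
# Without random assignment, this difference may reflect other changes
# between cohorts, not just the policy (a confound the researcher cannot rule out).
```

The final comment is the key limitation of natural experiments: the grouping variable is given by the world, so the comparison is only as good as the assumption that the cohorts differ in no other systematic way.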

  • Strength: Behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength: Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength: It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress.
  • Limitation: They may be more expensive and time-consuming than lab experiments.
  • Limitation: There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity.

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
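As a hypothetical illustration of this principle (not from the original article), random allocation for an independent-samples design can be sketched in a few lines of Python:

```python
import random

# Hypothetical pool of 20 participants for an independent-samples design.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)        # every participant now has an equal chance
mid = len(participants) // 2        # of ending up in either condition
control_group = participants[:mid]
experimental_group = participants[mid:]
```

Shuffling before splitting is what removes allocation bias: neither the researcher nor the participants influence who ends up in which condition, so participant variables should be spread evenly across the groups.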

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.




Types of Experiment: Overview

Last updated 6 Sept 2022


Different types of methods are used in research, which loosely fall into one of two categories: experimental (laboratory, field and natural) and non-experimental (correlations, observations, interviews, questionnaires and case studies).

All three types of experiment have characteristics in common. They all have:

  • an independent variable (I.V.) which is manipulated, or a naturally occurring variable
  • a dependent variable (D.V.) which is measured
  • at least two conditions in which participants produce data.

Note – natural and quasi experiments are often used synonymously but are not strictly the same: in quasi experiments participants cannot be randomly assigned, so rather than being allocated to a condition, participants already belong to one (e.g., groups based on age or gender).

Laboratory Experiments

These are conducted under controlled conditions, in which the researcher deliberately changes something (I.V.) to see the effect of this on something else (D.V.).

Strengths

Control – lab experiments have a high degree of control over the environment & other extraneous variables, which means that the researcher can accurately assess the effects of the I.V., so the experiment has higher internal validity.

Replicable – due to the researcher’s high levels of control, research procedures can be repeated so that the reliability of results can be checked.

Limitations

Lacks ecological validity – due to the involvement of the researcher in manipulating and controlling variables, findings cannot be easily generalised to other (real life) settings, resulting in poor external validity.

Field Experiments

These are carried out in a natural setting, in which the researcher manipulates something (I.V.) to see the effect of this on something else (D.V.).

Strengths

Validity – field experiments have some degree of control but are also conducted in a natural environment, so they can be seen to have reasonable internal and external validity.

Limitations

Control – field experiments have less control than lab experiments, and therefore extraneous variables are more likely to distort findings, so internal validity is likely to be lower.

Natural / Quasi Experiments

These are typically carried out in a natural setting, in which the researcher measures the effect of a naturally occurring variable (I.V.) on something else (D.V.). Note that in this case there is no deliberate manipulation of a variable; the I.V. is already naturally changing, which means the researcher is merely measuring the effect of something that is already happening.

High ecological validity – due to the lack of involvement of the researcher; variables are naturally occurring so findings can be easily generalised to other (real life) settings, resulting in high external validity.

Lack of control – natural experiments have no control over the environment & other extraneous variables which means that the researcher cannot always accurately assess the effects of the I.V, so it has low internal validity.

Not replicable – due to the researcher’s lack of control, research procedures cannot be repeated, so the reliability of results cannot be checked.

Field experiments, explained

Editor’s note: This is part of a series called “The Day Tomorrow Began,” which explores the history of breakthroughs at UChicago.

A field experiment is a research method that uses some controlled elements of traditional lab experiments, but takes place in natural, real-world settings. This type of experiment can help scientists explore questions like: Why do people vote the way they do? Why do schools fail? Why are certain people hired less often or paid less money?

University of Chicago economists were early pioneers in the modern use of field experiments and conducted innovative research that impacts our everyday lives—from policymaking to marketing to farming and agriculture.  


Field experiments bridge the highly controlled lab environment and the messy real world. Social scientists have taken inspiration from traditional medical or physical science lab experiments. In a typical drug trial, for instance, participants are randomly assigned into two groups. The control group gets the placebo—a pill that has no effect. The treatment group will receive the new pill. The scientist can then compare the outcomes for each group.

A field experiment works similarly, just in the setting of real life.

It can be difficult to understand why a person chooses to buy one product over another or how effective a policy is when dozens of variables affect the choices we make each day. “That type of thinking, for centuries, caused economists to believe you can't do field experimentation in economics because the market is really messy,” said Prof. John List, a UChicago economist who has used field experiments to study everything from how people use  Uber and  Lyft to  how to close the achievement gap in Chicago-area schools . “There are a lot of things that are simultaneously moving.”

The key to cleaning up the mess is randomization: assigning participants randomly to either the control group or the treatment group. “The beauty of randomization is that each group has the same amount of bad stuff, or noise or dirt,” List said. “That gets differenced out if you have large enough samples.”
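List’s point can be demonstrated with a small simulation (a sketch with made-up numbers, not from the article): each individual’s outcome is dominated by idiosyncratic noise, but with random assignment and a large sample the noise is balanced across groups, and the difference in group means recovers the true effect.

```python
import random

random.seed(42)

TRUE_EFFECT = 2.0   # hypothetical effect of the treatment on the outcome
n = 10_000          # large sample, as List notes

def outcome(treated: bool) -> float:
    # Each person carries idiosyncratic "noise" (tastes, mood, context...)
    # that dwarfs the treatment effect at the individual level.
    noise = random.gauss(0, 5)
    return (TRUE_EFFECT if treated else 0.0) + noise

# Random assignment: each participant is equally likely to land in either group.
treatment = [outcome(True) for _ in range(n)]
control = [outcome(False) for _ in range(n)]

# With n this large, the noise "differences out" and the estimate sits
# close to TRUE_EFFECT.
estimate = sum(treatment) / n - sum(control) / n
```

Shrinking `n` to a few dozen makes the estimate swing widely from run to run, which is exactly why sample size matters alongside randomization.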

Though lab experiments are still common in the social sciences, field experiments are now often used by psychologists, sociologists and political scientists. They’ve also become an essential tool in the economist’s toolbox.  

Some issues are too big and too complex to study in a lab or on paper—that’s where field experiments come in.

In a laboratory setting, a researcher wants to control as many variables as possible. These experiments are excellent for testing new medications or measuring brain functions, but they aren’t always great for answering complex questions about attitudes or behavior.

Labs are highly artificial with relatively small sample sizes—it’s difficult to know if results will still apply in the real world. Also, people are aware they are being observed in a lab, which can alter their behavior. This phenomenon, sometimes called the Hawthorne effect, can affect results.

Traditional economics often uses theories or existing data to analyze problems. But, when a researcher wants to study if a policy will be effective or not, field experiments are a useful way to look at how results may play out in real life.

In 2019, UChicago economist Michael Kremer (then at Harvard) was awarded the Nobel Prize alongside Abhijit Banerjee and Esther Duflo of MIT for their groundbreaking work using field experiments to help reduce poverty . In the 1990s and 2000s, Kremer conducted several randomized controlled trials in Kenyan schools testing potential interventions to improve student performance. 

In the 1990s, Kremer worked alongside an NGO to figure out if buying students new textbooks made a difference in academic performance. Half the schools got new textbooks; the other half didn’t. The results were unexpected—textbooks had no impact.

“Things we think are common sense, sometimes they turn out to be right, sometimes they turn out to be wrong,” said Kremer on an episode of  the Big Brains podcast. “And things that we thought would have minimal impact or no impact turn out to have a big impact.”

In the early 2000s, Kremer returned to Kenya to study a school-based deworming program. He and a colleague found that providing deworming pills to all students reduced absenteeism by more than 25%. After the study, the program was scaled nationwide by the Kenyan government. From there it was picked up by multiple Indian states—and then by the Indian national government.

“Experiments are a way to get at causal impact, but they’re also much more than that,” Kremer said in  his Nobel Prize lecture . “They give the researcher a richer sense of context, promote broader collaboration and address specific practical problems.”    

Among many other things, field experiments can be used to:

Study bias and discrimination

A 2004 study published by UChicago economists Marianne Bertrand and Sendhil Mullainathan (then at MIT) examined racial discrimination in the labor market. They sent over 5,000 resumes to real job ads in Chicago and Boston. The resumes were exactly the same in all ways but one—the name at the top. Half the resumes bore white-sounding names like Emily Walsh or Greg Baker. The other half sported African American names like Lakisha Washington or Jamal Jones. The study found that applications with white-sounding names were 50% more likely to receive a callback.

Examine voting behavior

Political scientist Harold Gosnell , PhD 1922, pioneered the use of field experiments to examine voting behavior while at UChicago in the 1920s and ‘30s. In his study “Getting out the vote,” Gosnell sorted 6,000 Chicagoans across 12 districts into groups. One group received voter registration info for the 1924 presidential election and the control group did not. Voter registration jumped substantially among those who received the informational notices. Not only did the study prove that get-out-the-vote mailings could have a substantial effect on voter turnout, but also that field experiments were an effective tool in political science.

Test ways to reduce crime and shape public policy

Researchers at UChicago’s  Crime Lab use field experiments to gather data on crime as well as policies and programs meant to reduce it. For example, Crime Lab director and economist Jens Ludwig co-authored a  2015 study on the effectiveness of the school mentoring program  Becoming a Man . Developed by the non-profit Youth Guidance, Becoming a Man focuses on guiding male students between 7th and 12th grade to help boost school engagement and reduce arrests. In two field experiments, the Crime Lab found that while students participated in the program, total arrests were reduced by 28–35%, violent-crime arrests went down by 45–50% and graduation rates increased by 12–19%.

The earliest field experiments took place—literally—in fields. Starting in the 1800s, European farmers began experimenting with fertilizers to see how they affected crop yields. In the 1920s, two statisticians, Jerzy Neyman and Ronald Fisher, were tasked with assisting with these agricultural experiments. They are credited with identifying randomization as a key element of the method—making sure each plot had the same chance of being treated as the next.

The earliest large-scale field experiments in the U.S. took place in the late 1960s to help evaluate various government programs. Typically, these experiments were used to test minor changes to things like electricity pricing or unemployment programs.

Though field experiments were used in some capacity throughout the 20th century, this method didn’t truly gain popularity in economics until the 2000s. Kremer and List were early pioneers and first began experimenting with the method in the 1990s.

In 2004, List co-authored  a seminal paper defining field experiments and arguing for the importance of the method. In 2008,  he and UChicago economist Steven Levitt published another study tracing the history of field experiments and their impact on economics.

In the past few decades, the use of field experiments has exploded. Today, economists often work alongside NGOs or nonprofit organizations to study the efficacy of programs or policies. They also partner with companies to test products and understand how people use services.  

There are several  ethical discussions happening among scholars as field experiments grow in popularity. Chief among them is the issue of informed consent. All studies that involve human test subjects must be approved by an institutional review board (IRB) to ensure that people are protected.

However, participants in field experiments often don’t know they are in an experiment. While an experiment may be given the stamp of approval in the research community, some argue that taking away peoples’ ability to opt out is inherently unethical. Others advocate for stricter review processes as field experiments continue to evolve.

According to List, another major issue in field experiments is the issue of scale . Many experiments only test small groups—say, dozens to hundreds of people. This may mean the results are not applicable to broader situations. For example, if a scientist runs an experiment at one school and finds their method works there, does that mean it will also work for an entire city? Or an entire country?

List believes that in addition to testing option A and option B, researchers need a third option that accounts for the limitations that come with a larger scale. “Option C is what I call critical scale features. I want you to bring in all of the warts, all of the constraints, whether they're regulatory constraints, or constraints by law,” List said. “Option C is like your reality test, or what I call policy-based evidence.”

This problem isn’t unique to field experiments, but List believes tackling the issue of scale is the next major frontier for a new generation of economists.



Psychology Sorted

Psychology for All

Experimental Methods Explained


The easiest one to define is the true experiment.  

Often called a ‘laboratory/lab’ experiment, this does not have to take place in a lab, but can be conducted in a classroom, office, waiting room, or even outside, providing it meets the criteria. These are that allocation of participants to the two or more experimental (or experimental and control) groups or conditions is random, and that the independent variable (IV) is manipulated by the researcher in order to measure the effect on the dependent variable (DV). Other variables are carefully controlled, such as location, temperature, time of day, time taken for the experiment, materials used, etc. This should result in a cause-and-effect relationship between the IV and the DV. Examples are randomised controlled drug trials or many of the cognitive experiments into memory, such as Glanzer and Cunitz (1966).

A field experiment is similar, in that individuals are usually randomly assigned to groups, where this is possible, and the IV is manipulated by the researcher. However, as this takes place in the participants’ natural surroundings, the extraneous variables that could confound the findings of the research are somewhat more difficult to control. The implications for causation depend on how well these variables are controlled, and on the random allocation of participants. Examples are bystander effect studies, and also research into the effect of digital technology on learning, such as that conducted by Hembrooke and Gay (2003).

A quasi-experiment is similar to either or both of the above, but the participants are not randomly allocated to groups. Instead they are allocated on the basis of self-selection (as male/female; left- or right-handed; preference for coffee or tea; young/old, etc.) or researcher selection (as scoring above or below a certain level on a pre-test; measured socio-economic status; psychology student or biology student, etc.). These are, therefore, non-equivalent groups. The IV is often manipulated and the DV measured as before, but the nature of the groups is a potential confounding variable. If testing the effect of a new reading scheme on the reading ages of 11-year-olds, a quasi-experimental design would allocate one class of 11-year-olds to read using the scheme, and another to continue with the old scheme (control group), and then measure reading ages after a set period of time. But there may have been other differences between the groups that mean a cause-and-effect relationship cannot be reliably established: those in the first class may also have already been better readers, or several months older, than those in the control group. Baseline pre-testing is one way around this, in which the students’ improvement is measured against their own earlier reading age, in a pre-test/post-test design. In some quasi-experiments, the allocation to groups by certain criteria itself forms the IV, and the effects of gender, age or handedness on memory, for example, are measured. Examples are research into the efficacy of anti-depressants, when some participants are taking one anti-depressant and some another, or Caspi et al. (2003), who investigated whether a polymorphism on the serotonin transporter gene is linked to a higher or lower risk of individual depression in the face of different levels of perceived stress.
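The baseline pre-test/post-test logic described above can be sketched with invented reading-age data (hypothetical numbers, purely for illustration): each student’s improvement is measured against their own earlier score, rather than comparing raw post-test scores between non-equivalent classes.

```python
# Hypothetical reading ages (in years) before and after a term of teaching.
new_scheme = {"pre": [10.8, 11.0, 11.3], "post": [11.6, 11.9, 12.0]}
old_scheme = {"pre": [10.9, 11.1, 11.2], "post": [11.2, 11.5, 11.4]}

def mean_gain(group: dict) -> float:
    # Each student's improvement relative to their own baseline.
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

# Comparing gains (rather than raw post-test scores) controls for the
# possibility that one class simply started out as better readers.
advantage = mean_gain(new_scheme) - mean_gain(old_scheme)
```

Even so, the groups remain non-equivalent, so the design reduces (rather than eliminates) the confound that random allocation would have handled.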

Finally, natural experiments are those in which there is no manipulation of the IV, because it is a naturally occurring variable. It may be an earthquake (IV) and measurement of the fear levels (DV) of people living on a fault line before and after the event, or an increase in unemployment as a large factory closes (IV) and measurement of depression levels amongst adults of working age before and after the factory closure (DV). As with field experiments, many of the extraneous variables are difficult to control as the research takes place in people’s natural environment. A good example of a natural experiment is Charlton’s (1975) research into the effect of the introduction of television to the remote island of St. Helena.
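To see why a simple before/after comparison in a natural experiment can mislead, here is a hypothetical sketch of the factory-closure example (all data invented for illustration). Netting the change against a similar but unaffected comparison town — a basic difference-in-differences contrast, one common way such naturally occurring events are analysed — guards against region-wide trends being mistaken for the effect of the closure.

```python
import statistics

# Hypothetical depression-scale scores for working-age adults, measured
# before and after a large factory closure (the naturally occurring IV).
factory_town_before = [12, 14, 11, 13, 15, 12]
factory_town_after = [18, 19, 16, 20, 17, 18]

# A similar town with no closure, used as a rough comparison so that
# region-wide trends are not attributed to the closure.
comparison_before = [12, 13, 12, 14, 13, 12]
comparison_after = [13, 14, 13, 15, 13, 13]

def change(before, after):
    """Mean before-to-after change for one town."""
    return statistics.mean(after) - statistics.mean(before)

# Simple before/after change in the affected town...
naive_effect = change(factory_town_before, factory_town_after)
# ...net of whatever change the comparison town showed anyway
# (a basic difference-in-differences contrast).
adjusted_effect = naive_effect - change(comparison_before, comparison_after)

print(f"before/after change: {naive_effect:.2f}")
print(f"change net of comparison town: {adjusted_effect:.2f}")
```

In these invented numbers, part of the apparent rise in depression scores is a regional trend that the comparison town also shows, so the adjusted estimate is smaller than the naive one.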

The differences between quasi-experiments and correlational research, and between natural experiments and case studies, are sometimes hard to determine, so I would always encourage students to explain exactly why they are designating something as one or the other. We can’t always trust the original article either – Bartlett was happy to describe his studies as experiments, which they were not! Here’s hoping these examples have helped. The following texts are super-useful, and were referred to while writing this post:

Campbell, D.T. & Stanley, J.C. (1963). Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin (ISBN 9780528614002)

Coolican, H. (2009, 5th ed.). Research Methods and Statistics in Psychology. UK: Hodder (ISBN 9780340983447)

Shadish, W.R., Cook, T.D. & Campbell, D.T. (2001, 2nd ed.). Experimental and Quasi-experimental Designs for Generalized Causal Inference. UK: Wadsworth (ISBN 9780395615560)


Field Experiments

Reference work entry in The New Palgrave Dictionary of Economics (Palgrave Macmillan; first online 1 January 2018), pp. 4569–4574.

John A. List & David Reiley

Field experiments have grown significantly in prominence since the 1990s. In this article, we provide a summary of the major types of field experiments, explore their uses, and describe a few examples. We show how field experiments can be used for both positive and normative purposes within economics. We also discuss more generally why data collection is useful in science, and more narrowly discuss the question of generalizability. In this regard, we envision field experiments playing a classic role in helping investigators learn about the behavioural principles that are shared across different domains.




List, J.A., Reiley, D. (2018). Field Experiments. In: The New Palgrave Dictionary of Economics. Palgrave Macmillan, London. https://doi.org/10.1057/978-1-349-95189-5_2000


Encyclopedia Britannica


natural experiment


natural experiment, observational study in which an event or a situation that allows for the random or seemingly random assignment of study subjects to different groups is exploited to answer a particular question. Natural experiments are often used to study situations in which controlled experimentation is not possible, such as when an exposure of interest cannot be practically or ethically assigned to research subjects. Situations that may create appropriate circumstances for a natural experiment include policy changes, weather events, and natural disasters. Natural experiments are used most commonly in the fields of epidemiology, political science, psychology, and social science.

Key features of experimental study design include manipulation and control. Manipulation, in this context, means that the experimenter can control which research subjects receive which exposures. For instance, subjects randomized to the treatment arm of an experiment typically receive treatment with the drug or therapy that is the focus of the experiment, while those in the control group receive no treatment or a different treatment. Control is most readily accomplished through random assignment, which means that the procedures by which participants are assigned to a treatment and control condition ensure that each has equal probability of assignment to either group. Random assignment ensures that individual characteristics or experiences that might confound the treatment results are, on average, evenly distributed between the two groups. In this way, at least one variable can be manipulated, and units are randomly assigned to the different levels or categories of the manipulated variables.
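The claim that random assignment distributes pre-existing characteristics evenly, on average, can be illustrated with a short simulation (all numbers hypothetical): after shuffling participants into two groups, a potential confounder such as baseline anxiety ends up with nearly the same mean in each group.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical participants, each with a pre-existing characteristic
# (say, a baseline anxiety score) that could confound treatment results.
participants = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle, then split into treatment and control.
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

# On average the characteristic is evenly distributed between groups,
# so a later outcome difference can be attributed to the treatment.
gap = statistics.mean(treatment) - statistics.mean(control)
print(f"baseline gap after randomization: {gap:.2f}")
```

With larger samples the expected gap shrinks further; this is the sense in which randomization controls both known and unknown confounders.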

In epidemiology, the gold standard in research design generally is considered to be the randomized control trial (RCT). RCTs, however, can answer only certain types of epidemiologic questions, and they are not useful in the investigation of questions for which random assignment is either impracticable or unethical. The bulk of epidemiologic research relies on observational data, which raises issues in drawing causal inferences from the results. A core assumption for drawing causal inference is that the average outcome of the group exposed to one treatment regimen represents the average outcome the other group would have had if they had been exposed to the same treatment regimen. If treatment is not randomly assigned, as in the case of observational studies, the assumption that the two groups are exchangeable (on both known and unknown confounders) cannot be assumed to be true.

As an example, suppose that an investigator is interested in the effect of poor housing on health. Because it is neither practical nor ethical to randomize people to variable housing conditions, this subject is difficult to study using an experimental approach. However, if a housing policy change, such as a lottery for subsidized mortgages, was enacted that enabled some people to move to more desirable housing while leaving other similar people in their previous substandard housing, it might be possible to use that policy change to study the effect of housing change on health outcomes. In another example, in a well-known natural experiment in Helena, Montana, smoking was banned from all public places for a six-month period. Investigators later reported a 60 percent drop in heart attacks in the study area during the time the ban was in effect.

Because natural experiments do not randomize participants into exposure groups, the assumptions and analytical techniques customarily applied to experimental designs are not valid for them. Rather, natural experiments are quasi experiments and must be thought about and analyzed as such. The lack of random assignment means multiple threats to causal inference, including attrition, history, testing, regression, instrumentation, and maturation, may influence observed study outcomes. For this reason, natural experiments will never unequivocally determine causation in a given situation. Nevertheless, they are a useful method for researchers, and if used with care they can provide additional data that may help with a research question and that may not be obtainable in any other way.

The major limitation in inferring causation from natural experiments is the presence of unmeasured confounding. One class of methods designed to control confounding and measurement error is based on instrumental variables (IV). While useful in a variety of applications, the validity and interpretation of IV estimates depend on strong assumptions, the plausibility of which must be considered with regard to the causal relation in question.


In particular, IV analyses depend on the assumption that subjects were effectively randomized, even if the randomization was accidental (in the case of an administrative policy change or exposure to a natural disaster) and adherence to random assignment was low. IV methods can be used to control for confounding in observational studies, to control for confounding due to noncompliance, and to correct for misclassification.

IV analysis, however, can produce serious biases in effect estimates. It can also be difficult to identify the particular subpopulation to which the causal effect IV estimate applies. Moreover, IV analysis can add considerable imprecision to causal effect estimates. Small sample size poses an additional challenge in applying IV methods.
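As a rough, entirely simulated illustration of the IV logic: when an instrument (an “accidental” randomization such as a policy lottery) shifts the exposure but is unrelated to an unmeasured confounder, the Wald estimator — the simplest IV estimator, applicable to a binary instrument — can recover a causal effect that a naive comparison gets wrong. All variable names and coefficients below are invented for the sketch.

```python
import random
import statistics

random.seed(0)

n = 10_000
# Unmeasured confounder that affects both exposure and outcome.
confounder = [random.gauss(0, 1) for _ in range(n)]
# Binary instrument: an "accidental" randomization (e.g. a policy
# lottery), unrelated to the confounder by construction.
instrument = [random.choice([0, 1]) for _ in range(n)]
# Exposure depends on the instrument AND the confounder.
exposure = [0.5 * z + 0.8 * u + random.gauss(0, 1)
            for z, u in zip(instrument, confounder)]
# True causal effect of exposure on outcome is 1.0; the confounder
# also raises the outcome, biasing a naive comparison upward.
outcome = [1.0 * x + 1.5 * u + random.gauss(0, 1)
           for x, u in zip(exposure, confounder)]

def slope(xs, ys):
    """Naive regression slope of ys on xs (confounded here)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def mean_by(values, flags, flag):
    return statistics.mean(v for v, z in zip(values, flags) if z == flag)

naive = slope(exposure, outcome)  # pulled above 1.0 by the confounder

# Wald estimator: effect of instrument on outcome divided by
# effect of instrument on exposure.
wald = (mean_by(outcome, instrument, 1) - mean_by(outcome, instrument, 0)) / (
        mean_by(exposure, instrument, 1) - mean_by(exposure, instrument, 0))

print(f"naive estimate: {naive:.2f}")
print(f"IV (Wald) estimate: {wald:.2f}")  # should sit near the true 1.0
```

The sketch also shows the cost mentioned above: the Wald estimate, built from two noisy group contrasts, is considerably less precise than the (biased) naive slope.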


While people often think of experiments as occurring in laboratories and controlled settings, psychologists also treat real-world environments as opportunities to investigate phenomena. Behaviour changes depending on the setting, and investigating research areas in their natural settings can amplify the validity of the findings. Natural experiments offer researchers the opportunity to investigate human behaviour in everyday life.

  • We are going to explore natural experiments used in psychological research.
  • We will start by highlighting the natural experiment definition.
  • We will then explore how natural experiments are used in psychology and cover examples of natural experiment research to help illustrate our points.
  • Moving on, we will cover natural and field experiments to highlight the differences between the two types of investigations.
  • And to finish, we will explore the natural experiment's advantages and disadvantages.


Natural Experiment Definition

Natural experiments are essentially experiments that investigate naturally occurring phenomena. The natural experiment definition is a research procedure that occurs in the participant's natural setting and requires no manipulation of the IV by the researcher.

In experiments, changes in the independent variable (IV) are observed to identify if these changes affect the dependent variable (DV). However, in natural experiments, the researcher does not manipulate the IV. Instead, they observe the natural changes that occur.

Some examples of naturally occurring IVs are sex at birth, whether people have experienced a natural disaster, experienced a traumatic experience, or been diagnosed with a specific illness.

These examples show that it's next to impossible for the researcher to manipulate these.

Natural Experiment: Psychology

Why may researchers choose to use a natural experiment? As we have just discussed, sometimes researchers can't manipulate the IV. But, they may still wish to see how changes in the IV affect the DV, so use a natural experiment.

Sometimes a researcher can manipulate the IV, but it may be unethical or impractical to do so, so they conduct a natural experiment.

In natural experiments, the researcher can see how changes in the IV affect a DV, but unlike in lab experiments, the researcher has to identify how the IV is changing. In contrast, lab experiments pre-determine how the IV will be manipulated.

Natural Experiment: Examples

Natural experiments often take place in real-world settings. An example would be examining differences in female and male performance in an office environment, and whether gender plays a role in the retention of customers. Other examples include examining behaviours in schools and the effect age has on behaviour.

Let's look at a hypothetical study that uses a natural experiment research method.

A research team was interested in investigating attitudes towards the community after experiencing a natural disaster.

The study collected data using interviews. The IV was naturally occurring as the researcher did not manipulate the IV; instead, they recruited participants who had recently experienced a natural disaster.

Natural Experiment vs Field Experiment

The table below summarises the key similarities and differences between natural experiments vs field experiments.

                                          Natural Experiment   Field Experiment
Takes place in a natural setting          Yes                  Yes
IV manipulated by the researcher          No                   Yes

Natural Experiment: Advantages and Disadvantages

In the following section, we will present the advantages and disadvantages of natural experiments. We will discuss new research opportunities, high ecological validity, rare opportunities, pre-existing sampling bias and ethical issues.


New Research Opportunities

Natural experiments provide opportunities for research that can't be done for ethical and practical reasons.

For example, it is impossible to manipulate a natural disaster or maternal deprivation on participants.

So, natural experiments are the only ethical way for researchers to investigate the causal relationship of the above topics. Thus, natural experiments open up practical research opportunities to study conditions that cannot be manipulated.

High Ecological Validity

Natural experiments have high ecological validity because they study real-world problems that occur naturally in real-life settings.

When research is found to use and apply real-life settings and techniques, it is considered to have high mundane realism.

And the advantage of this is that the results are more likely applicable and generalisable to real-life situations.

Rare Opportunities

There are scarce opportunities for researchers to conduct a natural experiment. Most natural events are ‘one-off’ situations. Because natural events are unique, the results have limited generalisability to similar situations.

In addition, it is next to impossible for researchers to replicate natural experiments; therefore, it is difficult to establish the reliability of findings.

Pre-Existing Sampling Bias

In natural experiments, pre-existing sampling bias can be a problem. In natural experiments, researchers cannot randomly assign participants to different conditions because naturally occurring events create them. Therefore, in natural experiments, participant differences may act as confounding variables .

As a result, sample bias in natural experiments can lead to low internal validity and generalisability of the research.

Ethical Issues

Although natural experiments are considered the only ethically acceptable method for studying conditions that can't be manipulated, ethical issues may still arise. Because natural experiments are often conducted after traumatic events, interviewing or observing people after the event could cause psychological harm to participants.

Researchers should prepare for potential ethical issues, such as psychological harm, which is usually dealt with by offering therapy. However, this can be quite costly, and such issues may lead participants to drop out of the research, which can also affect the quality of the research.

Natural Experiment - Key takeaways

The natural experiment definition is a research procedure that occurs in the participant's natural setting and requires no manipulation of the IV by the researcher.

The advantages of natural experiments are that they provide opportunities for research that researchers cannot do for ethical or practical reasons and have high ecological validity.

The disadvantages of natural experiments are reliability issues, pre-existing sample bias, and ethical issues, such as the risk that conducting a study after a traumatic event may cause psychological distress.


Frequently Asked Questions about Natural Experiment

What is a natural experiment?

The natural experiment definition is a research procedure that occurs in the participant's natural setting and requires no manipulation of the IV by the researcher.

What is an example of natural experiment?

Beckett (2006) investigated the effects of deprivation on children’s IQ at age 11. They compared 128 Romanian children who UK families had adopted at various ages and 50 UK children who had been adopted before six months. They found that Romanian children who had been adopted before six months of age had similar IQs to the UK children; however, Romanian children adopted after six months of age had much worse scores. 

What are the characteristics of a natural experiment?

The characteristics of natural experiments are that they are carried out in a natural setting and the IV is not manipulated in this type of experiment. 

What are the advantages and disadvantages of natural experiments?

The advantages of natural experiments are that they provide opportunities for research that cannot be conducted for ethical or practical reasons and that they have high ecological validity. The disadvantages are reliability issues, pre-existing sample bias, and ethical issues, such as the risk that conducting a study after a traumatic event may cause psychological distress.

What are natural experiments in research?

Natural experiments in psychology research are often used when manipulating a variable is unethical or impractical.


  • Review Article
  • Published: 07 January 2019

How natural field experiments have enhanced our understanding of unemployment

  • Omar Al-Ubaydli (ORCID: orcid.org/0000-0002-5193-8766) &
  • John A. List

Nature Human Behaviour, volume 3, pages 33–39 (2019)


Natural field experiments investigating key labour market phenomena such as unemployment have only been used since the early 2000s. This paper reviews the literature and draws three primary conclusions that deepen our understanding of unemployment. First, the inability to monitor workers perfectly in many occupations complicates the hiring decision in a way that contributes to unemployment. Second, the inability to determine a worker’s attributes precisely at the time of hiring leads to discrimination on the basis of factors such as race, gender, age and ethnicity. This can lead to systematically high and persistent levels of unemployment for groups that face discrimination. Third, the importance of social and personal dynamics in the workplace can lead to short-term unemployment. Much of the knowledge necessary for these conclusions could only be obtained using natural field experiments due to their ability to combine randomized control with an absence of experimenter demand effects.



Sparks, R. A model of involuntary unemployment and wage rigidity: worker incentives and the threat of dismissal. J. Labor Econ. 4 , 560–581 (1986).

Akerlof, G. A. & Kranton, R. E. Identity, supervision, and work groups. Am. Econ. Rev . 98 , 212–17 (2008).

Greenwald, B. C. Adverse selection in the labour market. Rev. Econ. Stud. 53 , 325–347 (1986).

Spence, M. Job market signaling. Q. J. Econ. 87 , 281–306 (1973).

Cain, G. G. Handbook of Labor Economics Vol. 1 (eds Ashenfelter, O. C. & Layard, R.) 693–785 (Elsevier, Amsterdam, 1986).

Becker, G. S. The Theory of Discrimination (Univ. Chicago Press, Chicago, 1957).

Nagin, D. S., Rebitzer, J. B., Sanders, S. & Taylor, L. J. Monitoring, motivation, and management: the determinants of opportunistic behavior in a field experiment. Am. Econ. Rev . 92 , 850–873 (2002).

Boly, A. On the incentive effects of monitoring: evidence from the lab and the field. Exp. Econ. 14 , 241–253 (2011).

Holmstrom, B. & Milgrom, P. Multitask principal-agent analyses: incentive contracts, asset ownership, and job design. J. Law Econ. Organ. 7 , 24–52 (1991).

Shearer, B. Piece rates, fixed wages and incentives: evidence from a field experiment. Rev. Econ. Stud . 71 , 513–534 (2004).

Bandiera, O., Barankay, I. & Rasul, I. Social preferences and the response to incentives: evidence from personnel data. Q. J. Econ. 120 , 917–962 (2005).

Shi, L. Incentive effect of piece-rate contracts: evidence from two small field experiments. B. E. J. Econ. Anal. Policy 10 , https://doi.org/10.2202/1935-1682.2539 (2010).

Hong, F., Hossain, T., List, J. A. & Tanaka, M. Testing the theory of multitasking: evidence from a natural field experiment in Chinese factories. Int. Econ. Rev. 59 , 511–536 (2018).

Al-Ubaydli, O., Andersen, S., Gneezy, U. & List, J. A. Carrots that look like sticks: toward an understanding of multitasking incentive schemes. South. Econ. J. 81 , 538–561 (2015).

Benabou, R. & Tirole, J. Intrinsic and extrinsic motivation. Rev. Econ. Stud. 70 , 489–520 (2003).

Bertrand, M. & Duflo, E. Handbook of Economic Field Experiments Vol. 1 (eds Banerjee, A. V. & Duflo, E.) 309–393 (Elsevier, Amsterdam, 2017).

Bertrand, M. & Mullainathan, S. Are Emily and Greg more employable than Lakisha and Jamal? Am. Econ. Rev. 94 , 991–1013 (2004).

Gneezy, U., List, J. & Price, M. K. Toward an understanding of why people discriminate: evidence from a series of natural field experiments. Preprint at https://www.nber.org/papers/w17855 (2012).

Neumark, D., Bank, R. J. & Van Nort, K. D. Sex discrimination in restaurant hiring: an audit study. Q. J. Econ. 111 , 915–941 (1996).

Pager, D. The mark of a criminal record. Am. J. Sociol. 108 , 937–975 (2003).

Kroft, K., Lange, F. & Notowidigdo, M. J. Duration dependence and labor market conditions: evidence from a field experiment. Q. J. Econ. 128 , 1123–1167 (2013).

List, J. A. The nature and extent of discrimination in the marketplace: evidence from the field. Q. J. Econ. 119 , 49–89 (2004).

Blaug, M. The Methodology of Economics: Or, How Economists Explain (Cambridge Univ. Press, Cambridge, 1992).

Akerlof, G. A. & Yellen, J. L. Fairness and unemployment. Am. Econ. Rev . 78 , 44–49 (1988).

Agell, J. & Lundborg, P. Theories of pay and unemployment: survey evidence from Swedish manufacturing firms. Scand. J. Econ. 97 , 295–307 (1995).

List, J. A. The behavioralist meets the market: measuring social preferences and reputation effects in actual transactions. J. Polit. Econ. 114 , 1–37 (2006).

Gneezy, U. & List, J. A. Putting behavioral economics to work: testing for gift exchange in labor markets using field experiments. Econometrica 74 , 1365–1384 (2006).

Pritchard, R. D., Dunnette, M. D. & Gorgenson, D. O. Effects of perceptions of equity and inequity on worker performance and satisfaction. J. Appl. Psychol. 56 , 75–94 (1972).

Loewenstein, G. & Schkade, D. in Well-being: The Foundations of Hedonic Psychology (eds Kahneman, D., Diener, E. & Schwarz, N.) 85–105 (Russell Sage Foundation, New York, 1999).

Lee, D. & Rupp, N. G. Retracting a gift: How does employee effort respond to wage reductions? J. Labor Econ. 25 , 725–761 (2007).

Kube, S., Maréchal, M. A. & Puppe, C. Do wage cuts damage work morale? Evidence from a natural field experiment. J. Eur. Econ. Assoc. 11 , 853–870 (2013).

Bellemare, C. & Shearer, B. Gift giving and worker productivity: evidence from a firm-level experiment. Games Econ. Behav . 67 , 233–244 (2009).

Levitt, S. D. & Neckermann, S. What field experiments have and have not taught us about managing workers. Oxf. Rev. Econ. Policy 30 , 639–657 (2014).

Fehr, E., Goette, L. & Zehnder, C. A behavioral account of the labor market: the role of fairness concerns. Annu. Rev. Econ. 1 , 355–384 (2009).

Fehr, E., Kirchsteiger, G. & Riedl, A. Does fairness prevent market clearing? An experimental investigation. Q. J. Econ. 108 , 437–459 (1993).

Al-Ubaydli, O. & List, J. A. Do natural field experiments afford researchers more or less control than laboratory experiments? Am. Econ. Rev . 105 , 462–66 (2015).

Card, D. & Krueger, A. B. Minimum wages and employment: a case study of the fast-food industry in New Jersey and Pennsylvania. Am. Econ. Rev . 84 , 772–793 (1994).

Burda, M. C. A note on firing costs and severance benefits in equilibrium unemployment. Scand. J. Econ. 94 , 479–489 (1992).

Bazen, S. & Skourias, N. Is there a negative effect of minimum wages on youth employment in France? Eur. Econ. Rev. 41 , 723–732 (1997).

Endo, S. K. Neither panacea, placebo, nor poison: examining the rise of anti-unemployment discrimination laws. Pace Law Rev 33 , 4 (2013).

Christiano, L. J., Eichenbaum, M. & Evans, C. L. Nominal rigidities and the dynamic effects of a shock to monetary policy. J. Polit. Econ. 113 , 1–45 (2005).

Duffy, J. Experimental Macroeconomics (Palgrave Macmillan, London, 2010).


Al-Ubaydli, O. & List, J. A. How natural field experiments have enhanced our understanding of unemployment. Nat. Hum. Behav. 3, 33–39 (2019). https://doi.org/10.1038/s41562-018-0496-z


Annual Review of Public Health

Volume 38, 2017 · Review Article · Open Access

Natural Experiments: An Overview of Methods, Approaches, and Contributions to Public Health Intervention Research

  • Peter Craig, Srinivasa Vittal Katikireddi, Alastair Leyland, and Frank Popham
  • Affiliation: MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow G2 3QB, United Kingdom
  • Vol. 38:39–56 (volume publication date March 2017). https://doi.org/10.1146/annurev-publhealth-031816-044327
  • First published as a Review in Advance on January 11, 2017
  • Copyright © 2017 Annual Reviews. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA), which permits unrestricted use, distribution, and reproduction in any medium, provided any derivative work is made available under the same, similar, or a compatible license.

Population health interventions are essential to reduce health inequalities and tackle other public health priorities, but they are not always amenable to experimental manipulation. Natural experiment (NE) approaches are attracting growing interest as a way of providing evidence in such circumstances. One key challenge in evaluating NEs is selective exposure to the intervention. Studies should be based on a clear theoretical understanding of the processes that determine exposure. Even if the observed effects are large and rapidly follow implementation, confidence in attributing these effects to the intervention can be improved by carefully considering alternative explanations. Causal inference can be strengthened by including additional design features alongside the principal method of effect estimation. NE studies often rely on existing (including routinely collected) data. Investment in such data sources and the infrastructure for linking exposure and outcome data is essential if the potential for such studies to inform decision making is to be realized.
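The simplest of the effect-estimation designs this abstract alludes to, difference-in-differences, compares the before/after change in an exposed group against the change in an unexposed comparison group, netting out any trend shared by both. A minimal sketch, using hypothetical numbers not taken from the review:

```python
# Minimal difference-in-differences (DiD) sketch for a natural experiment.
# All figures are hypothetical, for illustration only.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Effect = change in the exposed group minus change in the
    comparison group, which removes any trend common to both."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean outcomes (e.g., a smoking rate before/after a policy):
effect = did_estimate(treat_pre=24.0, treat_post=18.0,
                      control_pre=23.0, control_post=21.0)
print(effect)  # -4.0: a 4-point drop beyond the comparison group's trend
```

In practice the same estimate is usually obtained from a regression with group, period, and group-by-period terms, which also yields standard errors.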





COMMENTS

  1. What is the difference between a natural and a field experiment?

    Extraneous variables are also a problem in a natural experiment; however, natural experiments are particularly useful for studying areas where it would be unethical to manipulate the IV, such as depression. A field experiment is where the independent variable (IV) is manipulated and the dependent variable (DV) is measured, but the experiment is ...

  2. True, Natural and Field Experiments

    This simple lesson idea will help students understand the differences between these types of experiments. There is a difference between a "true experiment", a "field experiment" and a "natural experiment". These separate experimental methods are commonly used in psychological research and they each have their strengths and ...

  3. Experimental Method In Psychology

    2. Field Experiment. A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

  4. Types of Experiment: Overview

    Methods may be experimental (laboratory, field and natural) or non-experimental (correlations, observations, interviews, questionnaires and case studies). All three types of experiment have a characteristic in common: there are at least two conditions in which participants produce data. Note - natural and quasi experiments are often used ...

  5. What is a field experiment?

    Field experiments, explained. Editor's note: This is part of a series called "The Day Tomorrow Began," which explores the history of breakthroughs at UChicago. A field experiment is a research method that uses some controlled elements of traditional lab experiments, but takes place in natural, real-world settings.

  6. 50 Field Experiments and Natural Experiments

    Sixth, we describe two methodological challenges that field experiments frequently confront, noncompliance and attrition, showing the statistical and design implications of each. Seventh, we discuss the study of natural experiments and discontinuities as alternatives to both randomized interventions and conventional nonexperimental research.

  7. Field experiment

    Research. Field experiments are experiments carried out outside of laboratory settings. They randomly assign subjects (or other sampling units) to either treatment or control groups to test claims of causal relationships. Random assignment helps establish the comparability of the treatment and control group so that any differences between them ...

  8. Field Experiments

    Field experiments are true experiments, but they don't occur in a controlled environment and may lack random allocation of participants. Natural and quasi-experiments cannot prove or disprove causation with the same confidence as a lab experiment. Natural experiments don't manipulate the IV; they observe changes in a naturally occurring IV.

  9. Experimental methods explained

    As with field experiments, many of the extraneous variables are difficult to control because the research takes place in people's natural environment. A good example of a natural experiment is Charlton's (1975) research into the effect of the introduction of television to the remote island of St. Helena.

  10. Field Experiments

    Field experiments can be a useful tool for each of these purposes. For example, Anderson and Simester collect facts useful for constructing a theory about consumer reactions to nine-dollar endings on prices. They explore the effects of different price endings by conducting a natural field experiment with a retail catalogue merchant.

  11. Natural and Field experiments explained (Research Methods A Level

    What are field and natural experiments? What are their strengths and weaknesses? This short video explains all of the above and more.

  12. Natural experiment

    A natural experiment is a study in which individuals (or clusters of individuals) are exposed to the experimental and control conditions that are determined by nature or by other factors outside the control of the investigators. The process governing the exposures arguably resembles random assignment. Thus, natural experiments are observational studies and are not controlled in the traditional ...

  13. Full article: Natural experiment methodology for research: a review of

    1.1. Why are natural experiments important in research? As an applied scientist working in public health, I continually hear that public health decision-makers are increasingly pushed to make evidence-based decisions about interventions, despite a large gap between the type of research that is available and the type of research they need to make real-world decisions.

  14. Natural experiment

    A natural experiment is an observational study in which an event or a situation that allows for the random or seemingly random assignment of study subjects to different groups is exploited to answer a particular question. Natural experiments are often used to study situations in which controlled experimentation is not possible, such as when an ...

  15. What is a Natural Experiment?

    A natural experiment is a real world situation that resembles an experiment without any intervention or control by experimenters. The term is used for empirical studies based on variables and control groups that occur spontaneously. Natural experiments are rarely as well controlled as an experiment in a lab but allow for unique scale and scope of research.

  16. PDF Natural and Field Experiments: The Role of Qualitative Methods Thad

    After discussing natural experiments from a variety of perspectives, I give a short example of how a field experiment may be used to explore the relationship between cross-cutting cleavages and ethnic voting in Mali, drawing on my recent joint research on this topic. As I describe, qualitative methods have contributed in both expected and ...

  17. Natural Experiment: Definition & Examples, Psychology

    The natural experiment definition is a research procedure that occurs in the participant's natural setting that requires no manipulation by the researcher. In experiments, changes in the independent variable (IV) are observed to identify if these changes affect the dependent variable (DV). However, in natural experiments, the researcher does ...

  18. How natural field experiments have enhanced our understanding of

    Compared with natural field experiments, laboratory experiments have two key advantages. First, the enhanced levels of control over the environment permit manipulations that are unfeasible in ...

  20. Natural experiment methodology for research: A review of how different

    The evaluation of natural experiments (i.e. an intervention not controlled or manipulated by researchers), using various experimental and non-experimental design options, can provide an alternative to the RCT.

  21. PDF Do Natural Field Experiments Afford Researchers More or Less Control

    natural field experiments, and that this advantage is to be balanced against the disadvantage that laboratory experiments are less generalizable. This paper presents a simple model that explores circumstances under which natural field experiments provide researchers with more control than laboratory experiments afford.

  22. Natural Experiments: An Overview of Methods, Approaches, and

    Population health interventions are essential to reduce health inequalities and tackle other public health priorities, but they are not always amenable to experimental manipulation. Natural experiment (NE) approaches are attracting growing interest as a way of providing evidence in such circumstances. One key challenge in evaluating NEs is selective exposure to the intervention. Studies should ...

  23. PDF "Laboratory vs. Field Experiments: What Can We Learn?"

    In conclusion, there is broad agreement that generalizations must be made carefully, both from experiments and from field observations. Field and laboratory experiments both add to our ability to understand the ("real") world. Series of experiments, and varieties of observations, help us understand what is robustly generalizable.