
Qualitative Research Journal

ISSN : 1443-9883

Article publication date: 14 September 2022

Issue publication date: 4 January 2023

Purpose

This article develops a methodological framework to support qualitative analyses of legal texts. Scholars across the social sciences and humanities use qualitative methods to study legal phenomena but often overlook formal legal texts as productive sites for analysis. Moreover, when qualitative researchers do analyze legal texts, they rarely discuss the methodological underpinnings that support their approach. A thorough consideration of the methodological underpinnings of qualitative approaches to legal analysis is therefore warranted.

Design/methodology/approach

By bringing critical legal theory into conversation with qualitative methodology, this article outlines a set of key principles to inform qualitative approaches to reading the law.

Findings

To construct this methodological framework, this article first distinguishes between qualitative approaches to textual analysis and the doctrinal approaches undertaken in legal practice and formal legal scholarship. It then considers how this qualitative approach might be applied to one particular genre of legal text: namely, judicial opinions, otherwise known as reasons for judgment. In doing so, it argues that robust qualitative analyses of legal texts must consider the unique characteristics of those texts, such as their distinct form, voice, rhetorical structure, and performative capabilities.

Originality/value

The methodological framework outlined here should encourage qualitative researchers to approach legal texts more readily and challenge the hegemony of doctrinal approaches to legal interpretation in social science research.

  • Qualitative data analysis
  • Methodological theory
  • Socio-legal research methods
  • Legal judgments
  • Judicial opinions

Mitchell, M. (2023), "Analyzing the law qualitatively", Qualitative Research Journal, Vol. 23 No. 1, pp. 102-113. https://doi.org/10.1108/QRJ-04-2022-0061


Copyright © 2022, Emerald Publishing Limited




1. Legal Research as Qualitative Research

From the book Research Methods for Law.

  • Ian Dobinson and Francis Johns

Harvard Empirical Legal Studies Series

5005 Wasserstein Hall (WCC), 1585 Massachusetts Avenue, Cambridge, MA 02138

Contact the Graduate Program

The  Harvard Empirical Legal Studies (HELS) Series  explores a range of empirical methods, both qualitative and quantitative, and their application in legal scholarship in different areas of the law. It is a platform for engaging with current empirical research, hearing from leading scholars working in a variety of fields, and developing ideas and empirical projects.

HELS is open to all students and scholars with an interest in empirical research. No prior background in empirical legal research is necessary. If you would like to join HELS and receive information about our sessions, please subscribe to our mailing list by completing the HELS mailing list form .

If you have any questions, do not hesitate to contact the current HELS coordinator,  Tiran Bajgiran.

All times are provided in U.S. Eastern Time (UTC/GMT −04:00).

Spring 2024 Sessions

Empire and the Shaping of American Constitutional Law

Aziz Rana, BC Law

Monday, Mar. 25, 12:15 PM Lewis 202

This talk will explore how US imperial practice has influenced the methods and boundaries of American constitutional study.

Historical Approaches to Neoliberal Legality

Quinn Slobodian, Boston University

Thursday, Mar. 28, 12:15 PM Lewis 202

Fall 2023 Sessions

On Critical Quantitative Methods

Hendrik Theine, WU Vienna / University of Pennsylvania

Monday, Nov. 6, 12:30 PM Lewis 202

Economic inequality is a profound challenge in the United States: both income and wealth inequality have increased markedly since the 1980s. This growing concentration creates real-world political and societal problems that social science scholarship increasingly reflects, among them the expanding economic and political power of the super-rich. The research at hand takes a new, radical look at media discourses of economic inequality over four decades in various elite US newspapers by way of quantitative critical discourse analysis. It shows that media coverage of economic inequality was minimal for much of this period but has steadily increased in recent years. Initially the focus lay primarily on income inequality; over time it has expanded to encompass broader issues of inequality. Notably, the discourse on economic inequality is significantly shaped by party politics and elections. The study also highlights certain limitations in the discourse: critiques of inequality tend to remain at a general level, invoking concepts such as capitalist and racial inequality, with relatively little attention to policy-related discussions, such as tax reform, or to discussions centered on specific actors, such as the wealthy and their charitable contributions.

Spring 2023 Sessions

How to Conduct Qualitative Empirical Legal Scholarship

Jessica Silbey, Professor of Law and Yanakakis Faculty Research Scholar at Boston University

Friday, March 31, 12:30 PM WCC 3034

This session explores the benefits and some limitations of qualitative research methods to study intellectual property law. It compares quantitative research methods and the economic analysis of law in the same field as other kinds of empirical inquiry that are helpful in collaboration but limited in isolation. Creativity and innovation, the practices intellectual property law purports to regulate, are not amenable to quantification without identifying qualitative variables. The lessons from this session apply across fields of legal research.

Fall 2022 Sessions

How to Read Quantitative Empirical Legal Scholarship

Holger Spamann, Lawrence R. Grove Professor of Law

Friday, September 13, 12:30 PM WCC 3007

As legal scholars, what tools do we need to read critically and engage productively with quantitative empirical scholarship? In the first session of the 2022-2023 Harvard Empirical Legal Studies Series, Harvard Law School Professor Holger Spamann will compare and discuss different quantitative studies. The session offers a first approximation toward understanding, and eventually producing, empirical legal scholarship. All students and scholars interested in empirical research are welcome and encouraged to attend.

How do People Learn from Not Being Caught? An Experimental Investigation of a “Non-Occurrence Bias”

Tom Zur, John M. Olin Fellow and SJD candidate, HLS

Friday, November 4, 2:00 PM WCC 3007

The law and economics literature on specific deterrence has long theorized that offenders rationally learn from being caught and sanctioned. This paper presents evidence from a randomized controlled trial showing that offenders learn differently when not being caught as compared to being caught, which we call a “non-occurrence bias.” This implies that the socially optimal level of investment in law enforcement should be lower than stipulated by rational choice theory, even on grounds of deterrence alone.

Empirical Legal Research: Using Data and Methodology to Craft a Research Agenda

Florencia Marotta-Wurgler, Boxer Family Professor of Law at NYU; Faculty Director, NYU Law in Buenos Aires

Monday, November 14, 12:30 PM Lewis 202

Using a series of examples, this discussion will focus on strategies to conduct empirical legal research and develop a robust research agenda. Topics will include creating a data set and leveraging it to answer unexplored questions, developing meaningful methodologies to address legal questions, building on existing work, and engaging with automation and scaling up to develop large-scale data sets using machine learning approaches.

Resources for Empirical Research

  • HLS Library Empirical Research Service
  • Harvard Institute for Quantitative Social Research (IQSS)
  • Harvard Committee on the Use of Human Subjects
  • Qualtrics Harvard
  • Harvard Kennedy School Behavioral Insights Group

Past HELS Sessions

Holger Spamann (Lawrence R. Grove Professor of Law) – How to Read Quantitative Empirical Legal Scholarship?

Katerina Linos (Professor of Law at UC Berkeley School of Law) – Qualitative Methods for Law Review Writing

Aziza Ahmed (Professor of Law at UC Irvine School of Law) – Risk and Rage: How Feminists Transformed the Law and Science of AIDS

Amy Kapczynski and Yochai Benkler (Professor of Law at Yale; Professor of Law at Harvard) – Law & Political Economy and the Question of Method

Jessica Silbey (Boston University School of Law) – Ethnography in Legal Scholarship

Roberto Tallarita (Lecturer on Law and Associate Director of the Program on Corporate Governance at Harvard) – The Limits of Portfolio Primacy

Susan S. Silbey (Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT) – HELS with Susan Silbey: Analyzing Ethnographic Data and Producing New Theory

Cass R. Sunstein  (University Professor at Harvard) – Optimal Sludge? The Price of Program Integrity

Scott L. Cummings  (Professor of Legal Ethics and Professor of Law at UCLA School of Law) – The Making of Public Interest Lawyers

Elliot Ash  (Assistant Professor of Law, Economics, and Data Science at ETH Zürich) – Gender Attitudes in the Judiciary: Evidence from U.S. Circuit Courts

Kathleen Thelen  (Ford Professor of Political Science at MIT) – Employer Organization in the United States: Historical Legacies and the Long Shadow of the American Courts

Omer Kimhi  (Associate Professor at Haifa University Law School) – Caught In a Circle of Debt – Consumer Bankruptcy Discharge and Its Aftereffects

Suresh Naidu  (Professor in Economics and International and Public Affairs, Columbia School of International and Public Affairs) – Ideas Have Consequences: The Impact of Law and Economics on American Justice

Vardit Ravitsky  (Full Professor at the Bioethics Program, School of Public Health, University of Montreal) – Empirical Bioethics: The Example of Research on Prenatal Testing

Johnnie Lotesta  (Postdoctoral Democracy Fellow at the Ash Center for Democratic Governance and Innovation at the Harvard Kennedy School) – Opinion Crafting and the Making of U.S. Labor Law in the States

David Hagmann  (Harvard Kennedy School) – The Agent-Selection Dilemma in Distributive Bargaining

Cass R. Sunstein  (Harvard Law School) – Rear Visibility and Some Problems for Economic Analysis (with Particular Reference to Experience Goods)

Talia Gillis  (Ph.D. Candidate and S.J.D. Candidate, Harvard Business School and Graduate School of Arts and Sciences and Harvard Law School) – False Dreams of Algorithmic Fairness: The Case of Credit Pricing

Tzachi Raz (Ph.D. Candidate in Economics at Harvard University) – There’s No Such Thing as Free Land: The Homestead Act and Economic Development

Crystal Yang (Harvard Law School) – Fear and the Safety Net: Evidence from Secure Communities

Adaner Usmani (Harvard Sociology) – The Origins of Mass Incarceration

Jim Greiner (Harvard Law School) – Randomized Control Trials in the Legal Profession

Talia Shiff  (Postdoctoral Fellow, Weatherhead Center for International Affairs and Department of Sociology, Harvard University) – Legal Standards and Moral Worth in Frontline Decision-Making: Evaluations of Victimization in US Asylum Determinations

Francesca Gino (Harvard Business School) – Rebel Talent

Joscha Legewie (Department of Sociology, Harvard University) – The Effects of Policing on Educational Outcomes and Health of Minority Youth

Ryan D. Enos (Department of Government, Harvard University) – The Space Between Us: Social Geography and Politics

Katerina Linos (Berkeley Law, University of California) – How Technology Transforms Refugee Law

Roie Hauser (Visiting Researcher at the Program on Corporate Governance, Harvard Law School) – Term Length and the Role of Independent Directors in Acquisitions

Anina Schwarzenbach (Fellow, National Security Program, the Belfer Center for Science and International Affairs, Harvard Kennedy School) – A Challenge to Legitimacy: Effects of Stop-and-Search Police Contacts on Young People’s Relations with the Police

Cass R. Sunstein (Harvard Law School) – Willingness to Pay to Use Facebook, Twitter, Youtube, Instagram, Snapchat, and More: A National Survey

Netta Barak-Corren (Hebrew University of Jerusalem) – The War Within

James Greiner & Holger Spamann (Harvard Law School) – Panel: Why Does the Legal Profession Resist Rigorous Empiricism?

Mila Versteeg (University of Virginia School of Law) (with Adam Chilton) – Do Constitutional Rights Make a Difference?

Susan S. Silbey (MIT Department of Anthropology) (with Patricia Ewick) – The Common Place of Law

Holger Spamann (Harvard Law School) – Empirical Legal Studies: What They Are and How NOT to Do Them

Arevik Avedian (Harvard Law School) – How to Read an Empirical Paper in Law

James Greiner (Harvard Law School) – Randomized Experiments in the Law

Robert MacCoun (Stanford Law School) – Coping with Rapidly Changing Standards and Practices in the Empirical Sciences (including ELS)

Mario Small (Harvard Department of Sociology) – Qualitative Research in the Big Data Era

Adam Chilton (University of Chicago Law School) – Trade Openness and Antitrust Law

Jennifer Lerner (Harvard Kennedy School and Department of Psychology) – Anger in Legal Decision Making

Sarah Dryden-Peterson (Harvard Graduate School of Education) – Respect, Reciprocity, and Relationships in Interview-Based Research

Charles Wang (Harvard Business School) – Natural Experiments and Court Rulings

Guhan Subramanian (Harvard Law School) – Determining Fair Value

James Greiner (Harvard Law School) – Randomized Control Trials and the Impact of Legal Aid

Maya Sen (Harvard Kennedy School) – The Political Ideologies of Law Clerks and their Judges

Daria Roithmayr (University of Southern California Law School) – The Dynamics of Police Violence

Crystal Yang (Harvard Law School) – Empiricism in the Service of Criminal Law and Theory

Oren Bar-Gill (Harvard Law School) – Is Empirical Legal Studies Changing Law and Economics?

Elizabeth Linos (Harvard Kennedy School; VP, Head of Research and Evaluation, North America, Behavioral Insights Team) – Behavioral Law and Economics in Action: BIT, BIG, and the policymaking of choice architecture

Meira Levinson (Harvard School of Education) – Justice in Schools: Qualitative Sociological Research and Normative Ethics in Schools

Howell Jackson (HLS) – Cost-Benefit Analysis

Michael Heise (Cornell Law School) – Quantitative Research in Law: An Introductory Workshop

Susan Silbey (MIT) – Interviews: An Introductory Workshop

Kevin Quinn (UC Berkeley) – Quantifying Judicial Decisions

Holger Spamann (Harvard Law School) – Comparative Empirical Research

James Greiner (Harvard Law School) – Randomized Controlled Trials in the Research of Legal Problems

Michael Heise (Cornell Law School) – Quantitative Research in Law

James Greiner (Harvard Law School) – A Typology of Empirical Methods in Law

David Wilkins (Harvard Law School) – Mixed Methods Work and the Legal Profession

Tom Tyler (Yale Law School) – Fairness and Policing


Chicago Unbound


University of Chicago Law Review

Qualitative Methods for Law Review Writing

Katerina Linos and Melissa Carlson

Typical law review articles not only clarify what the law is, but also examine the history of the current rules, assess the status quo, and present reform proposals. To make theoretical arguments more plausible, legal scholars frequently use examples: they draw on cases, statutes, political debates, and other sources. But legal scholars often pick their examples unsystematically and explore them armed with only the tools for doctrinal analysis. Unsystematically chosen examples can help develop plausible theories, but they rarely suffice to convince readers that these theories are true, especially when plausible alternative explanations exist. This project presents methodological insights from multiple social science disciplines and from history that could strengthen legal scholarship by improving research design, case selection, and case analysis. We describe qualitative techniques rarely found in law review writing, such as process tracing, theoretically informed sampling, and most similar case design, among others. We provide examples of best practice and illustrate how each technique can be adapted for legal sources and arguments.

Recommended Citation

Linos, Katerina and Carlson, Melissa (2017) "Qualitative Methods for Law Review Writing," University of Chicago Law Review: Vol. 84: Iss. 1, Article 10. Available at: https://chicagounbound.uchicago.edu/uclrev/vol84/iss1/10



ISSN: 0041-9494


Qualitative Methods for Law and Society—A Research Guide

Liam McHugh-Russell

This research guide is intended as a starting point for doctoral researchers in the EUI Department of Law who plan to (or hope to) draw on socio-political, anthropological or historical methodologies as part of their dissertation research. Ultimately, the methods, methodology, and the boundaries of the research project are produced dynamically by the researcher, so this is no more than a starting point, a set of suggestions rather than a book of recipes. All sources listed are available either through the EUI library or free online.

Related Papers

Lisa Webley

qualitative research methods law

Tonny Nyarko

Canon of legal research: Law as an Academic Discipline (Hanoch Dagan); Mapping Legal Research (Mathias M. Siems and Daithí Mac Síthigh); Interdisciplinary Approach to Legal Scholarship: A Blend from the Qualitative Paradigm (A. Lydia A. Ankansah and Victor Chimbwanda)

Erasmus Law Review

Danielle Chevalier

This seminar focuses on qualitative methods in the social sciences. It is structured as a survey course, exposing students to a range of issues rather than offering intensive training in a single approach. The purpose of the seminar is twofold: first, to provide participants with a broad sense of qualitative research strategies, a better understanding of how to design and carry out research, an awareness of the different logics and trade-offs that distinguish methodologies and methods, and an improved capacity to read and evaluate diverse qualitative social science research; second, to help participants write a dissertation proposal that will be competitive for external dissertation fellowship funders—such as NSF, Fulbright, and SSRC—and defensible before one’s dissertation committee.

Darren O’Donovan

This chapter will focus upon conceptually mapping the place of socio-legal methodology within legal research. Questions to be addressed include: what are the underlying theories regarding the nature of law and legal argument underpinning this form of scholarship? How do we understand the position of law in relation to the general social sciences? Having located this methodological school, I will then proceed to consider what reasons students or researchers might have for engaging in socio-legal research. This will be achieved by discussing five major strands of socio-legal research and how they seek to make distinctive contributions to knowledge. It will be shown that socio-legal scholarship has challenged doctrinal legal research culture by questioning the assumed centrality of law and legal institutions to many social problems. It has sought to present a more complex understanding of 'how legal rules, doctrines, legal decisions, institutionalised cultural and legal practices work together to create the reality of law in action'. 1 As a result, the proponents of the methodology have successfully challenged legal scholars to display greater policy imagination, by acknowledging law's status as just one form of regulation, and cautioning against overly doctrinal understandings of the discipline.

Onati Socio Legal Series

Sol Picciotto

Juliette Galonnier

This course aims at introducing students to qualitative methods in the social sciences. It highlights the contributions of qualitative research to the study of the State, public policy and institutions, while also exploring its limitations. By the end of the course, students will be familiar with a range of qualitative methods, including interviews, ethnography, archival research, focus groups, international comparison, text analysis and the use of leaked documents. They will read key authors who use qualitative methods to examine the crafting of public policies, the history of institutions or the workings of international organizations. The selected readings focus mostly on Europe, North America and the MENA region. Students will collectively engage with the strengths and challenges of qualitative methods. They will conduct their own qualitative research on one public policy or institution of their choice and will have the opportunity to reflect on the practical obstacles and opportunities that these methods raise on the field.

Kristina Simion

Qualitative and quantitative research can lay the basis for rule of law interventions that are rooted in sound evidence and responsive to local community interests, aspirations, values, and demands. Without grounded knowledge of qualitative and quantitative research, researchers' results can easily be erroneous (as a result of, for example, poorly designed interview protocols and questionnaires). Indeed, it is an unfortunate truth that rule of law interventions are continually critiqued for being planned on the basis of inadequate research and information, and for producing unsatisfactory results. INPROL's new Practitioner's Guide on Qualitative and Quantitative Approaches to Rule of Law Research was drafted to assist practitioners in structuring research. It clarifies common research terminology and concepts, and outlines the steps involved in designing and implementing qualitative and quantitative research. The Guide recognizes that high-quality research is an essential element of the design and evaluation of rule of law programs. It is also a useful way of meeting a practitioner's own information needs, as conducting rule of law research can be overwhelming for a practitioner with little previous experience. Where do you start? What components do you need to factor into your plans? What kind of research do you need to conduct? These difficult questions are even harder to address in a conflict-affected environment, where access to research participants (i.e., the people participating in research) may be difficult, information may be scarce and hard to evaluate, and the researcher may find it difficult to travel because of security risks.

Emilio Dabed

Theoretical approaches and methodological choices in the anthropological study of the "legal" share the assumption that normative phenomena, and more specifically law, are social products carrying the traces of the context in which they are produced. In this sense, norms and law can be understood as "metaphoric representations" of their social and political context. At the same time, anthropological legal research has looked extensively at the ways in which law participates in the creation of social reality. In assessing the "performative" impact of juridical phenomena, or what Bourdieu refers to as the "power of law", an analysis of the relation between legal processes, discursive practice, and political and social change is imperative. The central argument is thus that juridical phenomena "not only reflect but also produce and reinforce social processes".



Qualitative Methods for Law Review Writing

We are extremely grateful to Catherine Albiston, Lauren Edelman, Stavros Gadinis, David Lieberman, Aila Matanock, Alison Post, Kevin Quinn, Karen Tani, and participants at the Berkeley Law Faculty Workshop for their generous comments.



I.  Imagining Alternatives and Identifying a Puzzle

“[A]ll you really need to have is an ‘explanandum’—a puzzle, paradox, or conundrum about the social world that in one way or another upsets our expectations, and for which there is no ready answer. But this is not at all a trivial accomplishment.” 16

For social scientific research, the starting point—and perhaps half the battle—is identifying a puzzle that cannot be easily solved. Legal advocacy training does not highlight this element of puzzlement. In fact, many masterful legal strategists downplay the novelty of their arguments so that courts can more easily accept them.

To identify a puzzle, one can begin by imagining alternative outcomes to the one that occurred. The sources legal scholars regularly use are superb starting points for this task. The adversarial process inherently offers (at least) two alternative ways of understanding a set of facts—the plaintiff’s and the defendant’s. Amicus briefs and other third-party interventions can also help sketch out alternative options. Additionally, separate opinions from judges, including powerful concurrences and dissents, provide a range of plausible alternative legal outcomes. Furthermore, trial and appellate court judges can offer different answers to the same question, creating legally plausible alternative conclusions. In short, the legal process itself offers a broad range of well-constructed alternatives.

Legal scholars often go beyond these first steps to construct plausible but nonobvious alternative worlds, and draw comparisons across historical periods, legal fields, and jurisdictions. For example, in Pigs and Positivism, Professor Hendrik Hartog constructs a nonobvious but plausible counterfactual by examining a case concerning pig owners’ right to let pigs roam in urban settings. 17 Predictably, the prosecution emphasized the risks and nuisances pigs create, while the defense minimized them. 18 Drawing on historical and comparative evidence, Hartog spells out a plausible, alternative understanding of the case. Defense lawyers could have argued that pig keepers possess a customary right to let their pigs roam freely because this was a commonly accepted practice historically. 19 Despite its plausibility, the defense did not make a claim about custom—why?

By identifying this third plausible alternative, Hartog demonstrates that, while prosecutors and defense attorneys predictably disagree, the terms of disagreement explain the bounds of what is legally acceptable in particular times and places. 20 Hartog shows that an argument about custom was just outside the bounds of acceptability in early nineteenth-century New York City, even though it might have been entirely acceptable at a slightly earlier moment, in a more rural American setting, or in contemporary Britain. 21

After imagining plausible alternatives, scholars select cases that allow them to effectively explore why a particular path was or should have been chosen rather than its alternative. In the Part that follows, we present useful techniques for scholars to systematically select cases.

II.  Sampling and Case Selection

Concerns about case selection and sampling are widespread among legal scholars; the worry that scholars cherry-pick the cases that best fit their arguments is especially common. Less well understood is how to create representative samples and select cases in ways that support credible, generalizable causal claims. We introduce some helpful sampling and case selection techniques in the paragraphs that follow.

A.    Sampling

Through sampling, researchers gather a subset of units from which they can make inferences about a broader population. Sampling techniques are useful for scholars pursuing doctrinal projects because the credibility of a generalization about doctrine depends on the representativeness of chosen examples. Sampling also holds important advantages for scholars pursuing causal arguments because it helps eliminate alternative explanations of the outcome. Below, we start with some general considerations about carefully sampling legal cases. We then present two particularly useful sampling techniques: random sampling and theoretically informed sampling. We discuss random sampling to dispel the assumption that it is too complicated to use in qualitative research. We present theoretically informed sampling because it allows scholars who work with few cases to make valid inferences.

Careful sampling requires scholars to clearly define the scope of their generalizations and the population to which their inferences apply. To see careful sampling in practice, we turn to Multiple Disadvantages: An Empirical Test of Intersectionality Theory in EEO Litigation. 22 Professors Rachel Best, Lauren Edelman, Linda Krieger, and Scott Eliason sample judicial opinions in equal employment opportunity cases in US federal courts to argue that antidiscrimination lawsuits provide the least protection for plaintiffs with multiple social disadvantages. 23 Plaintiffs who allege discrimination based on multiple traits, such as race and gender, are only half as likely to win their cases as other plaintiffs. 24

Careful sampling is critical in making this claim persuasive. First, the authors select the appropriate unit in which to test their theory: federal circuit and district court cases. 25 Circuit decisions establish precedent, while district courts handle a substantial number of discrimination cases and are thus “the primary federal locale for civil rights dispute resolution.” 26 If the authors had used Supreme Court cases as their unit of analysis, it would have been harder to assess whether plaintiff characteristics influence judicial rulings. Supreme Court cases are idiosyncratic; they often involve novel issues and particularly motivated parties. The authors could not draw valid general inferences from these cases.

Second, the authors clearly explain their sample’s limitations and define the scope of their inferences. The authors randomly sampled from relevant district and circuit court opinions available on Westlaw. 27 The authors emphasize that they could not include disputes that were resolved before reaching the courts or court opinions that were never published. 28 By defining the limits of their sample, the authors strengthen the plausibility of their inferences.

1.   Random sampling and systematic sampling.

Random sampling is widely used in the social sciences. It involves selecting subjects from a larger population by chance, so that each subject has an equal probability of being selected. Random sampling has distinct advantages because it ensures that the researcher’s choice of units is not driven by characteristics that might also influence the outcome. This technique allows scholars with limited information about the universe of cases to draw generalizations efficiently.

Random sampling is critical to Best and her colleagues’ ability to make a general claim about plaintiffs’ success in antidiscrimination lawsuits. The authors collected all relevant district and circuit court opinions between 1965 and 1999 available on Westlaw, from which they randomly chose 2 percent. 29 Each district court opinion has unique characteristics that could influence its outcome; moreover, the authors do not possess anywhere near complete knowledge about every district court case. Random sampling allows the authors to make valid generalizations to all published district and circuit court cases despite these challenges.
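The mechanics of drawing such a sample are easy to reproduce. As a minimal sketch, a hypothetical list of case identifiers stands in for the Westlaw search results (the 9,000-case figure and the identifiers are invented; only the 2 percent rate comes from the study):

```python
import random

# Hypothetical universe of published opinions; these identifiers are
# invented stand-ins for the results of a Westlaw search.
population = [f"case_{i:05d}" for i in range(9000)]

random.seed(42)  # fixing the seed makes the sample reproducible and auditable

# Simple random sample: each opinion has an equal chance of selection.
sample_size = int(len(population) * 0.02)  # a 2 percent sample
sample = random.sample(population, sample_size)

print(sample_size)  # 180
```

Because every opinion has the same selection probability, no feature of the chosen cases can systematically drive the results, and recording the seed lets other researchers reproduce the exact sample.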

A related technique—systematic sampling—can also produce credible generalizations. Systematic sampling involves randomly choosing a starting point and then selecting cases based on a fixed interval. 30 For example, for his book Habeas Corpus: From England to Empire, Professor Paul Halliday creates a systematic sample of all uses of the writ of habeas corpus issued by the courts of the King’s Bench from 1500 to 1800. 31 Starting in 1502, Halliday chooses petitions filed in every fourth year until 1798. 32 Creating this systematic sample allows Halliday to identify common case characteristics and make generalizations about how people approached law. 33 Systematic sampling also allows scholars to correlate outcomes to variables; this is important for Halliday, who “correlat[es] outcomes to . . . the wrongs for which prisoners were held and the jurisdictions that ordered confinement.” 34
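Halliday’s interval design is straightforward to express in code. In this sketch, the start year, end year, and four-year interval come from the text; everything else is scaffolding:

```python
# Systematic sample of years: begin in 1502 and take every fourth year,
# mirroring Halliday's design for King's Bench habeas petitions.
start_year, end_year, interval = 1502, 1798, 4
sampled_years = list(range(start_year, end_year + 1, interval))

print(len(sampled_years))  # 75 sampled years, from 1502 through 1798
```

In the textbook version of systematic sampling, the starting point itself is drawn at random from within the first interval; a fixed start is a reasonable simplification when the ordering of units is unrelated to the outcome.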

Random sampling has an important limitation: it requires the researcher to select a relatively large number of cases. We turn next to theoretically informed sampling, which is more appropriate for studying smaller numbers of cases.

2.  Theoretically informed sampling.

Theoretically informed sampling holds distinct advantages for producing causal claims and credible generalizations with a small number of cases. First, the researcher identifies theoretically important characteristics that could influence the outcome. The researcher then sorts cases into categories defined by these characteristics and selects cases from each category. 35

For example, if a researcher were interested in treaty compliance, she would begin by identifying state characteristics that could delay compliance, such as limited bureaucratic capacity, poverty, or federalism. The researcher would then create categories defined by different combinations of these variables (for example, a wealthy federal state with high bureaucratic capacity) and sort states into each category. She would then select cases from each category, either randomly or based on practical and theoretical concerns. For example, because US treaty ratification behavior is very different from that of other wealthy federal states with high bureaucratic capacity, the researcher might want to include additional wealthy federal states. Ultimately, the researcher should “select[ ] a manageable number of cases that are diverse in terms of theoretically important traits.” 36
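The categorize-then-select procedure described above can be sketched in a few lines. All of the trait codings below are invented placeholders, not real country data:

```python
import random

# Hypothetical codings of states on theoretically important traits.
states = {
    "StateA": {"wealthy": True,  "federal": True,  "high_capacity": True},
    "StateB": {"wealthy": True,  "federal": False, "high_capacity": True},
    "StateC": {"wealthy": False, "federal": True,  "high_capacity": False},
    "StateD": {"wealthy": False, "federal": False, "high_capacity": False},
    "StateE": {"wealthy": True,  "federal": True,  "high_capacity": True},
}

# Step 1: sort states into categories defined by their trait combinations.
categories = {}
for name, traits in states.items():
    key = tuple(sorted(traits.items()))
    categories.setdefault(key, []).append(name)

# Step 2: select one case per category. The draw is random here, but
# practical or theoretical concerns could guide the choice instead.
random.seed(1)
selected = sorted(random.choice(members) for members in categories.values())

print(len(categories), selected)  # 4 distinct trait combinations; one case each
```

Because every observed combination of traits contributes a case, the resulting handful of cases is diverse on exactly the dimensions theory says could matter.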

Theoretically informed sampling is more difficult to carry out than random sampling and more likely to lead the researcher to introduce bias into the selection process. Despite these drawbacks, theoretically informed sampling has distinct advantages over random sampling for scholars working with a small number of cases. Random sampling has poor small-sample properties: the chances that a researcher who randomly selects five countries will end up with five developing countries, or five agricultural economies, rather than five diverse states, are surprisingly high. Scholars cannot then make valid generalizations because the cases selected have particular, shared characteristics. 37
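The small-sample problem is easy to quantify. Under illustrative figures that are ours, not drawn from any study (say 120 of 190 states are developing countries), the chance that a random draw of five states yields five developing countries follows from the hypergeometric distribution:

```python
from math import comb

# Illustrative counts, not real data: 120 of 190 states are 'developing'.
n_states, n_developing, draw = 190, 120, 5

# Hypergeometric probability that all five randomly drawn states
# are developing countries.
p_all_developing = comb(n_developing, draw) / comb(n_states, draw)

print(round(p_all_developing, 3))
```

Under these assumptions the probability is roughly one in ten: high enough that a five-case random sample can easily end up homogeneous on an important trait.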

We could not locate exemplary uses of theoretically informed sampling in the legal literature. This makes our description more difficult, but also, we hope, more useful. Below is an example that illustrates some of the steps outlined above, but that has important limitations. In Legalizing Gender Inequality: Courts, Markets, and Unequal Pay for Women in America, Professors Robert Nelson and William Bridges investigate “wage differences between jobs held primarily by women and those held primarily by men within the same organization.” 38 Although relevant literature argues that market principles produce these differences, Nelson and Bridges argue that organizational processes cause pay differences between typically “male” and “female” jobs. 39 Undergirding this argument are four case studies of gender discrimination lawsuits. 40

The authors select these cases to capture theoretically important variation across lawsuits. 41 The authors define the universe of cases, which includes defendant organizations large enough to have sufficiently differentiated occupations, internal labor markets, and bureaucratic personnel systems. 42 Within these parameters, the authors identify firm characteristics that might influence their outcome of interest: the development of gender inequality. The potentially influential characteristics include whether organizations are public or private and the proportion of the workforce with firm-specific skills. 43 After creating four categories (for example, public companies requiring firm-specific skills), the authors select cases from each category according to practical considerations, namely, whether evidence was accessible. 44 Essentially, the authors select cases based on the values of potentially influential variables because doing so allows them to evaluate whether and how organization type and skill requirements influence the outcome. By demonstrating that these other variables do not fully account for the patterns they observe, the authors strengthen their argument that their independent variable of interest is driving the outcome. As such, by using theoretically informed sampling, researchers can use few cases to assess their independent variable’s effect on the outcome.

Despite their use of theoretically informed sampling, the authors’ selection process raises important questions. For example, they examine only organizations sued for gender discrimination; these organizations may have especially egregious practices, and thus may be unrepresentative. 45 The authors try to alleviate this concern by, among other things, comparing employment numbers to similarly sized firms and including statements from employers that the firms sued were not unusual. 46

B.    Case Selection Techniques

While sampling techniques strengthen generalizations about the prevalence of certain population characteristics, case selection techniques are used to make structured and focused comparisons across cases, strengthening causal claims. We describe several case selection techniques below.

1.   Most difficult case design.

Selecting cases in which one’s theory is least likely to hold true can offer strong theoretical leverage. These cases, called “least-likely” cases, 47 undergird most difficult case design. If a researcher demonstrates that her theory holds true in an unlikely case, the argument is likely to hold in a broader range of cases. 48 In The Hollow Hope: Can Courts Bring About Social Change?, Professor Gerald Rosenberg uses two prominent US Supreme Court cases, Roe and Brown, to argue that the US Supreme Court’s influence on public policy is limited. 49

Using a least-likely case selection strategy is particularly effective for increasing the causal strength and generalizability of Rosenberg’s argument. The Supreme Court is more visible and influential than any other court in the American political system. 50 Roe and Brown are considered prime examples of a court producing significant social reform. 51 If Rosenberg’s theory holds true in the cases in which it is most likely to fail, it is plausible that his hypothesis could hold true in other, “easier” cases. If Rosenberg had instead chosen a case from a lower court believed to have little impact on social reform, his claim would have been far less plausible, and would have generated far less interest.

2.   Most similar case design.

In most similar case selection, the researcher chooses cases that have similar values on theoretically important characteristics, but differ on the independent variable of interest. 52 This allows the researcher to “hold constant” the other characteristics’ effects. 53 In Judicial Comparativism and Judicial Diplomacy, Professor David Law uses a most similar case design to explore why some courts use foreign law more than others. 54 Law hypothesizes that a court’s institutional capacity to learn about foreign law, and the emphasis a legal education system places on foreign law, shape a court’s use of foreign law. 55

Law selects the Japanese Supreme Court, the Korean Constitutional Court, and the Taiwanese Constitutional Court because they share characteristics that potentially explain judicial engagement in comparativism. 56 These countries are geographically adjacent, are democratic, share security and economic alliances with the United States, train judges similarly, have German-influenced civil law systems, have comparable popular attitudes toward comparativism, and share welcoming attitudes toward foreign law. 57

Despite their similarities, these courts differ on the outcome and explanatory variables of interest, namely, the court’s use of foreign law, the court’s institutional capacity for comparativism, and the use of comparativism in legal education. The use of foreign law by Japan’s highest court is minimal relative to Korea’s Constitutional Court, which draws on foreign law in a majority of cases, 58 and to Taiwan’s Constitutional Court, which consults foreign constitutional materials almost automatically. 59 While neither the Japanese justices nor their clerks conduct foreign legal research routinely, 60 the Korean Court has extensive foreign law research mechanisms, including a research institute for comparative constitutional scholarship. 61 Moreover, each country’s legal education system emphasizes comparativism differently. In top South Korean and Taiwanese universities, all constitutional law professors studied law abroad, compared to 25 percent to 66 percent in top Japanese universities. 62 While law professors regularly work for the Korean Constitutional Court 63 and a majority of the Taiwanese Constitutional Court justices are former legal professors, Japanese professors rarely hold seats on Japan’s Supreme Court. 64 By using most similar case design, Law effectively isolates important differences between the countries at issue, demonstrating how the highlighted differences influence judicial usage of foreign law. 65
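The selection logic just illustrated can be expressed mechanically: keep pairs of courts that match on the background traits but differ on the explanatory variable. The codings below are simplified placeholders rather than Law’s actual data:

```python
# Most similar case selection: find pairs of units that match on background
# traits but differ on the explanatory variable of interest.
courts = {
    "Court1": {"civil_law": True, "democratic": True, "research_capacity": True},
    "Court2": {"civil_law": True, "democratic": True, "research_capacity": False},
    "Court3": {"civil_law": False, "democratic": True, "research_capacity": True},
}

background = ["civil_law", "democratic"]   # traits to 'hold constant'
explanatory = "research_capacity"          # the variable that must differ

pairs = [
    (a, b)
    for i, a in enumerate(courts)
    for b in list(courts)[i + 1:]
    if all(courts[a][t] == courts[b][t] for t in background)
    and courts[a][explanatory] != courts[b][explanatory]
]
print(pairs)  # [('Court1', 'Court2')]
```

Only the pair that agrees on every background trait while disagreeing on the explanatory variable survives the filter, which is exactly the comparison a most similar design exploits.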

3.   Variants on most similar case design.

Variants on most similar case design have distinct advantages for assessing claims that are of particular interest to legal scholars, such as whether particular legal devices are necessary or sufficient to produce an outcome of interest. For example, many legal scholars want to know whether particular legal rules are essential for well-functioning markets, effective political participation, or robust environmental protection. Similarly, many legal scholars wonder whether adopting similar laws (for example, a model code) in different jurisdictions will result in largely similar outcomes.

In Private Enforcement of Corporate Law: An Empirical Comparison of the United Kingdom and the United States, Professors John Armour, Bernard Black, Brian Cheffins, and Richard Nolan use a variation of most similar case design to assess whether formal private enforcement of corporate law is necessary for strong securities markets. 66 The authors select the United States and the United Kingdom because they share similar values on important characteristics. 67 “Both are common-law jurisdictions with strong judiciaries, low levels of government corruption, [ ] highly developed stock markets,” liquid securities markets, and many publicly traded firms. 68

The authors argue that, “[i]f private enforcement is [indeed] essential for robust stock markets,” they should observe “vigorous private enforcement of corporate law in both” countries, as these countries are otherwise similar in relevant respects. 69 The rate of private enforcement, however, differs drastically: the United States sees a relatively high frequency of suits brought against directors of public companies, while such suits are almost nonexistent in the United Kingdom. 70 By selecting cases that share otherwise-similar characteristics and outcomes, Armour and his coauthors trace back from the outcome to determine whether the development of strong stock markets depends crucially on the private enforcement of corporate law. By showing that, contrary to expectations, private enforcement is not present in both cases, the authors effectively eliminate it as an essential precondition for strong securities markets.

Variations of most similar case design are also useful for legal scholars evaluating whether similar legal frameworks are used in the same way, or produce similar effects, across contexts. In How Dispute Resolution System Design Matters, Professor Shauhin Talesh examines why California and Vermont consumers receive different protections despite the fact that these states have nearly identical automobile consumer protection laws, or “lemon laws.” 71

Starting with nearly identical lemon laws, Talesh identifies differences between the contexts that could influence the implementation of these laws. Talesh finds that California and Vermont vary in terms of public and private control of dispute resolution structures. 72 In California, disputes are resolved in forums funded by automobile manufacturers but operated by external third-party organizations. 73 In Vermont, consumer disputes are resolved in a state-operated dispute resolution structure. 74 These dispute resolution structures filter business and consumer preferences differently, giving similar lemon laws distinct meanings. California’s managerial-justice adjudicatory model stresses business values of efficiency and managerial discretion. Vermont, by contrast, uses a collaborative justice model that reflects consumer values. 75

It is not only similarly structured laws, but also identical words, that are interpreted in very different ways. For example, both Vermont and California emphasize impartiality and neutrality in the fact-finding process; however, these words’ meanings differ across states. In California, arbitrators who actively investigate facts “compromise” impartiality and neutrality, while Vermont arbitrators must actively investigate facts to establish impartiality and neutrality. 76 This distinction leads California arbitrators to provide advantages for businesses, while Vermont arbitrators favor consumers. 77 Ultimately, by selecting cases with similar laws yet different outcomes, Talesh effectively establishes the critical role of varied implementation. 78

4.   Most different case design.

In most different case design, researchers select cases that differ on all relevant characteristics except the explanatory variable and outcome. 79 As such, most different case designs can suggest that the same variable produces the same effect across extremely different contexts. In The Euro-Crisis and the Courts: Judicial Review and the Political Process in Comparative Perspective, Professor Federico Fabbrini argues that, in response to the European debt crisis (the Euro-crisis) and the new legal architecture of the Economic and Monetary Union (EMU), European courts have increased their involvement in the fiscal domain. 80

Fabbrini compares high court judicial decisions in Estonia, France, Germany, Ireland, and Portugal, highlighting that these five member states represent the very diverse political, economic, and legal conditions that characterize the European Union (EU). 81 These countries vary dramatically: not only in size, wealth, and culture, but also in terms of the length of their EU membership and the power available to their supreme courts to review legislation. 82

Drawing from post-Euro-crisis court rulings, Fabbrini identifies a common cause of this increasingly high degree of judicial intervention in fiscal and economic affairs: EU member states’ intergovernmental management of the Euro-crisis. 83 As the dominant decision-making bodies, EU member states’ executive branches reformed the EMU architecture via international agreements, allowing courts to influence fiscal reform. 84 By using most different case logic, Fabbrini emphasizes the common cause of the increase in judicial involvement in economic affairs, thereby increasing the credibility and generalizability of his argument. However, most different case design has important limitations: when selected cases share more than one relevant similarity, this technique cannot, on its own, help the researcher distinguish between them. More generally, qualitative work requires that case selection be combined with within-case analysis, to which we turn next.

III.  Process Tracing: Developing Multiple Empirical Implications

After imagining alternative plausible outcomes and selecting cases, qualitatively oriented scholars trace the events prior to the outcome, parsing their theory into logically interconnected propositions that explain why the outcome occurred. If a legal scholar attributes an outcome to a particular cause, it is reasonable to think that this cause would produce other “traces,” or implications. Using available evidence, this scholar can see whether these expected implications actually occurred, thereby strengthening (or weakening) her explanation of the outcome. Additionally, scholars can weigh the plausibility of these implications against alternative explanations of the outcome. 85

The logic of process tracing should not be unfamiliar to lawyers; similar logic is used to assemble evidence in individual cases. In process tracing, scholars form multiple hypotheses about what caused an outcome, identify implications of each hypothesis, and weigh the hypotheses against available evidence. Similarly, to link a suspect to a crime, a prosecutor identifies a motive and develops a theory connecting a suspect’s motive to the time, place, and method of the crime. The prosecutor examines whether the evidence is more consistent with her theory or alternative theories. Evidence will vary in probative value; for example, eyewitness testimony might be less definitive than DNA evidence. 86 Although lawyers “process trace” when composing legal briefs and establishing narrow causal propositions, legal scholars do not use this logic systematically in law review writing. That is, in brief writing, lawyers often assess how diverse facts contribute to their legal arguments, but in academic writing, we often see less effort spent to collect and assess key facts that would make theoretical propositions plausible.

After developing a theoretical explanation of the outcome, scholars using process tracing must assess how diagnostic evidence increases or decreases the probability that this explanation is true. These pieces of diagnostic evidence are called causal process observations (CPOs) because they elucidate the broader causal mechanism linking the variables. 87 These pieces of evidence differ from the independent observations used in statistical analyses; they add depth rather than breadth, and they are logically connected rather than independent of one another. Different types of CPOs have varying probative value. In Professor David Collier’s language, “doubly decisive” evidence and “smoking gun” evidence have high probative value: doubly decisive evidence supports one theory and discredits alternatives, while smoking gun evidence supports one theory but does not speak to alternatives. 88 In contrast, “straw-in-the-wind” evidence and “hoop” evidence are less conclusive: straw-in-the-wind evidence lends weak support without being decisive either way, and a “hoop” test lends only modest support when passed, although failing it can eliminate a theory. 89
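One way to make probative value concrete, which is our illustration rather than Collier’s own formalism, is a Bayesian sketch: evidence is diagnostic to the extent that it is far more likely under one hypothesis than under its rivals. The likelihood ratios below are invented:

```python
# Bayesian sketch of probative value: update the odds on a hypothesis H
# against a rival after observing a piece of evidence E.
# likelihood_ratio = P(E | H) / P(E | rival); the numbers are invented.

def posterior(prior, likelihood_ratio):
    """Posterior probability of H after evidence with the given likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5  # start agnostic between H and its rival

# 'Smoking gun': very likely under H, very unlikely under the rival.
smoking_gun = posterior(prior, likelihood_ratio=20.0)

# 'Straw in the wind': only slightly more likely under H.
straw = posterior(prior, likelihood_ratio=1.5)

print(round(smoking_gun, 2), round(straw, 2))
```

A likelihood ratio of 20 moves an agnostic prior to roughly 0.95, while a ratio of 1.5 moves it only to 0.6, mirroring the differing probative weights of smoking gun and straw-in-the-wind evidence.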

Below we provide two applications of process tracing to show how it can assess different types of causal arguments using various legal sources. We distinguish theoretically between (a) testing a theory with multiple empirical implications connected chronologically, and (b) testing a particular type of chronological connection common in legal scholarship—path-dependent processes 90 —in which early events have unusually large consequences later on.

A.    Process Tracing When Observations Are Linked Temporally

Researchers can effectively use process tracing to evaluate theories with chronologically connected empirical implications. To do so, the researcher breaks down her explanation of an outcome into various sequential, causal propositions, and evaluates these propositions against temporally interlinked observations. In The Strength of a Weak Agency, Professors Nicholas Pedriana and Robin Stryker explain how social movement pressure can expand the capacity of an agency with a small staff, limited budget, and limited jurisdiction. 91 Specifically, they highlight how the NAACP and Legal Defense Fund (LDF) pressured the Equal Employment Opportunity Commission (EEOC) to aggressively interpret Title VII, 92 thereby expanding the agency’s powers. 93 While political leaders and lawyers initially understood Title VII as prohibiting only intentional discrimination, social movement pressure forced an aggressive EEOC litigation strategy, culminating in Griggs v Duke Power Co, 94 which prohibited unintentional discrimination. 95

Pedriana and Stryker’s first proposition involves social movements flooding the EEOC with complaints to demonstrate that the agency’s existing resources and capacity were insufficient. 96 Next, early EEOC leaders disagreed about expanding the agency’s mission, leading the EEOC to pursue interpretations the agency’s leaders understood as very aggressive. 97 This set of propositions has relatively distinctive empirical implications, and helps Pedriana and Stryker distinguish their theory from alternative explanations. One possible alternative is that EEOC leadership, seeking to increase their powers, would have pursued an expansive mandate even without social movement pressure. 98 Or perhaps the premise that the EEOC had an initial narrow mandate is incorrect. 99 Alternatively, perhaps the Supreme Court would have decided Griggs similarly regardless of social movement pressure and EEOC advocacy. 100

To reject the alternative explanation that power-seeking bureaucrats drove EEOC expansion, the authors highlight that the first EEOC chairman, Franklin Delano Roosevelt Jr, was yachting during congressional hearings regarding appropriations for his agency. 101 Roosevelt focused on public relations because he wanted to run for governor of New York, leaving EEOC senior staff unsure about the agency’s central objectives and how to accomplish them. 102

To evaluate their proposition that social movements exposed the EEOC’s ineffectiveness, thereby pressuring the EEOC to adopt an aggressive strategy, Pedriana and Stryker note that the NAACP and the LDF filed mass complaints in the months after Title VII came into force. 103 Jack Greenberg, director of the LDF, publicly stated that “the best way to get it amended [Title VII] is to show it doesn’t work.” 104 Throughout its initial years, the EEOC was continually handling at least four times the number of complaints it was budgeted to handle due to the unrelenting tide of complaints from the LDF and the NAACP. 105 The volume of complaints and social movement leaders’ statements are, in the language of Collier’s classification structure, “smoking gun” evidence. Given this evidence, it would be surprising if the alternate explanation—that social movement pressure had no effect on the EEOC—were true.

To evaluate their proposition that there was a push to expand the EEOC’s mandate, Pedriana and Stryker show that EEOC leadership initially disagreed over whether Title VII covered intentional discrimination and discriminatory effects. 106 Pedriana and Stryker first follow steps that legal scholars normally use: they draw from the text of Title VII, the legislative history of the statute, and statements made by the nonpartisan Bureau of National Affairs. 107 Perhaps recognizing the potential for strategic use of the legislative record, Pedriana and Stryker also draw on EEOC internal communications and staff statements. 108 Although the EEOC later (successfully) challenged employment tests as discriminatory based on statistical evidence of their impact on minority applicants, the EEOC’s general counsel initially stated that “if [the EEOC testing guidelines are] intended as a legal position as to what is meant by professionally developed tests then it is very wide off the mark . . . I cannot conceive arguing this position before a District judge .” 109 Additionally, EEOC Executive Director Herman Edelsberg said that incorporating disparate impact into the guidelines would make them “too ambitious to be a legal document.” 110 Again, this is smoking gun evidence; it would be very surprising if the alternate explanation—that the EEOC’s mandate was unquestionably broad—were true given this evidence.

Pedriana and Stryker demonstrate how legal scholars can develop temporally linked propositions with distinctive empirical signatures, and how evaluating these propositions against available evidence can substantially increase their persuasiveness. We now turn to path-dependent causal claims and explain how best to substantiate them.

B.    Process Tracing When Observations Are Path Dependent

Legal scholars commonly make claims about path dependence, processes in which early events have large consequences later on. A HeinOnline search showed that 2,662 articles mentioned path dependence explicitly from 2000 to 2015. Legal interpretation techniques, including rules governing precedents, analogical reasoning, and conventions about interpreting similar language systematically, make early judicial decisions crucial. Below we explain why process tracing can help develop path-dependent claims. 111

What distinguishes path dependence from other claims about event sequence? First, in path-dependent processes, positive feedback loops make early events have bigger consequences than later ones. 112 Second, path-dependent processes have critical junctures, when one option is picked among many; after this choice, it becomes increasingly difficult to return to alternatives. 113 The adoption of the QWERTY keyboard effectively illustrates path dependence. While countless ways of arranging letters on a keyboard were initially possible, once the QWERTY sequence was chosen and adopted by millions of typists, it became nearly impossible to switch to another, more efficient arrangement.
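The feedback-loop half of this definition can be illustrated with a Polya-urn simulation, a standard toy model of positive feedback that is our illustration, not drawn from the sources discussed here. Each win for a strategy makes its next win more likely:

```python
import random

def polya_urn(steps, seed):
    """Simulate positive feedback: each win makes the next win more likely."""
    random.seed(seed)
    wins = {"A": 1, "B": 1}  # one initial success for each hypothetical strategy
    for _ in range(steps):
        total = wins["A"] + wins["B"]
        winner = "A" if random.random() < wins["A"] / total else "B"
        wins[winner] += 1  # success breeds success
    return wins

# Identical rules, different early luck: long-run shares diverge widely.
# 1002 = 1,000 steps plus the 2 initial successes.
shares = [round(polya_urn(1000, seed)["A"] / 1002, 2) for seed in range(5)]
print(shares)
```

Because early draws compound, runs that differ only in their first few outcomes settle on very different long-run equilibria, which is the signature of a critical juncture followed by lock-in.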

Process-tracing techniques are very useful for identifying feedback loops and critical junctures. 114 In The Lost Promise of Civil Rights , Professor Risa Goluboff explains how the NAACP adopted the now-dominant civil rights litigation strategy and why it concentrated on government-imposed segregation rather than challenging abysmal labor conditions, an alternate strategy championed by the Civil Rights Section (CRS) of the Justice Department. 115 Goluboff theorizes that early legal victories encouraged similar litigation and subsequent victories, creating a positive feedback loop that institutionalized this litigation strategy, making alternative litigation strategies much harder to pursue later on. 116

To establish that an event constitutes a critical juncture, a scholar must demonstrate that there were at least two alternatives available and that, after one alternative was chosen, it became increasingly difficult to return to the other option. Goluboff does this for key decisions in the 1930s and 1940s. 117 She also establishes that, once the NAACP chose its litigation strategy, choices about the cases it selected made it difficult, if not impossible, to change. Initially, the NAACP received both racial discrimination complaints from northern industrial workers and labor discrimination complaints from southern agricultural workers. 118 While the NAACP originally pursued both types of complaints, by the 1940s, the NAACP fashioned a legal strategy around the racial discrimination claims of industrial workers. 119 Multiple factors influenced this decision. The NAACP relied heavily on local counsel, and in the 1940s most black lawyers were in northern cities. 120 Additionally, the NAACP found that “sympathetic judges and amenable lawyers” were scarce in the south, making it “easier to win cases” in the north. 121

Perhaps the biggest critical juncture was the Supreme Court’s decision in Brown , which vindicated the NAACP’s legal strategy and established equal protection as the dominant civil rights lens. 122 Brown is perhaps the most significant US Supreme Court case; the antidiscrimination framework that Brown and its progeny represent is common in casebooks and taught across law schools nationally. 123 While establishing the dominance of the antidiscrimination approach is easy, demonstrating that an alternative vision was once possible is far harder. Goluboff establishes this alternate vision convincingly in a number of ways. She develops a plausible, alternate legal vision championed by the CRS: raw legal material for an alternate vision of civil rights, namely, agricultural workers’ horrific complaints, was ample, 124 allowing the CRS to develop a conception of civil rights based on labor and economic discrimination. 125 Additionally, she highlights that the Supreme Court overturned its own precedents with unusual frequency throughout the 1930s and 1940s 126 and presents comments from prominent civil rights lawyers and casebooks exemplifying their perceptions of ambiguity in civil rights doctrine. 127 This is smoking-gun evidence because it makes it highly unlikely that the Brown decision was inevitable.

Implications and Conclusions

In place of a conclusion, we speculate on an observation that transformed quantitative research. In a much-cited 1986 piece, Paul Holland argued that some questions can be answered much more easily than others. 128 For example, it is very difficult to ascertain why people commit crimes; however, we can more easily determine whether expanding the police force reduces crime rates. Statistical analysis, Holland argued, has distinct advantages for answering the second type of question, which focuses on measuring the effect of a given variable. 129 The ease and effectiveness with which statistical analyses can answer “effects-driven” questions have led this method and question type to dominate social science research. More and more, social scientists are asking answerable questions with quantitative methods; however, fewer reflect on whether these questions, while answerable, are interesting and contribute to our understanding of the world.

Legal scholars arguably face the opposite problem. Legal scholarship has no shortage of interesting questions. However, many of these critical questions are never answered; legal scholars rarely mount an effective defense of their preferred theories against plausible alternatives. By showcasing a variety of methodological techniques that are well suited to the types of claims and evidence legal scholars typically work with, we hope to move closer to answering the critically important questions legal scholars pose.

  • 16 Kristin Luker, Salsa Dancing into the Social Sciences: Research in an Age of Info-Glut 55 (Harvard 2008).
  • 17 See Hendrik Hartog, Pigs and Positivism , 1985 Wis L Rev 899, 904–06.
  • 18 Id at 905–06, 908–09.
  • 19 Id at 912–13.
  • 20 See id at 913, 935.
  • 21 See Hartog, 1985 Wis L Rev at 912–15, 929–35 (cited in note 17).
  • 22 See generally Rachel Kahn Best, et al, Multiple Disadvantages: An Empirical Test of Intersectionality Theory in EEO Litigation , 45 L & Society Rev 991 (2011).
  • 23 Id at 999–1000, 1017–19.
  • 24 Id at 1009 (noting that the employee wins a clear victory in 15 percent of cases with intersectional bases of discrimination, as opposed to 30 percent of cases with nonintersectional bases of discrimination).
  • 25 Id at 999.
  • 26 Best, et al, 45 L & Society Rev at 999 (cited in note 22).
  • 27 Id at 999 & nn 4–5.
  • 28 See id at 1000 & n 8.
  • 29 Id at 999.
  • 30 See, for example, Paul D. Halliday, Habeas Corpus: From England to Empire 28–29, 319 (Belknap 2010).
  • 31 Id at 4.
  • 32 Id at 319.
  • 33 See id at 5.
  • 34 Halliday, Habeas Corpus at 319 (cited in note 30).
  • 35 See Sarah Curtis, et al, Approaches to Sampling and Case Selection in Qualitative Research: Examples in the Geography of Health , 50 Soc Sci & Med 1001, 1002 (2000) (discussing the theoretical framework for case selection).
  • 36 Katerina Linos, How to Select and Develop International Law Case Studies: Lessons from Comparative Law and Comparative Politics , 109 Am J Intl L 475, 480 (2015).
  • 37 See Jason Dietrich, The Effects of Sampling Strategies on the Small Sample Properties of the Logit Estimator , 32 J Applied Stat 543, 544 (2005) (“On average, simple random sampling yields a sample reflecting the true population distributions. . . . For smaller samples, however, there is an increased risk that the model cannot be estimated because of limited variation in either the dependent or independent variables.”).
  • 38 Robert L. Nelson and William P. Bridges, Legalizing Gender Inequality: Courts, Markets, and Unequal Pay for Women in America 2 (Cambridge 1999).
  • 39 See id at 2–3.
  • 40 Id at 102, 105–08.
  • 41 Id at 102.
  • 42 Nelson and Bridges, Legalizing Gender Inequality at 108 (cited in note 38).
  • 44 This last step distinguishes theoretically informed sampling from stratified sampling. In stratified sampling, cases are picked at random within each stratum; in theoretically informed sampling, researchers select cases within each stratum. See id at 109–10.
  • 45 Id at 112.
  • 46 Nelson and Bridges, Legalizing Gender Inequality at 112–13 (cited in note 38).
  • 47 Harry Eckstein, Case Study and Theory in Political Science , in Fred I. Greenstein and Nelson W. Polsby, eds, 7 Handbook of Political Science: Strategies of Inquiry 79, 119 (Addison-Wesley 1975). See also Jack S. Levy, Case Studies: Types, Designs, and Logics of Inference , 25 Conflict Mgmt & Peace Sci 1, 12 (2008).
  • 48 See Levy, 25 Conflict Mgmt & Peace Sci at 12 (cited in note 47).
  • 49 Rosenberg, The Hollow Hope at 420 (cited in note 5).
  • 50 See id at 7.
  • 51 Id at 8.
  • 52 See Jason Seawright and John Gerring, Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options , 61 Polit Rsrch Q 294, 304 (2008).
  • 53 For an in-depth description of most similar case selection, see Ran Hirschl, The Question of Case Selection in Comparative Constitutional Law , 53 Am J Comp L 125, 133–39 (2005).
  • 54 See generally David S. Law, Judicial Comparativism and Judicial Diplomacy , 163 U Pa L Rev 927 (2015).
  • 55 Id at 942.
  • 56 Id at 942–43, 949–50.
  • 57 Id at 950.
  • 58 Law, 163 U Pa L Rev at 953, 962 (cited in note 54).
  • 59 See id at 977.
  • 60 Id at 953–54.
  • 61 Id at 972–73, 1033.
  • 62 Law, 163 U Pa L Rev at 1035 (cited in note 54).
  • 63 See id at 964, 970–71.
  • 64 Id at 1012–13.
  • 65 Id at 949–52.
  • 66 See generally John Armour, et al, Private Enforcement of Corporate Law: An Empirical Comparison of the United Kingdom and the United States , 6 J Empirical Legal Stud 687 (2009).
  • 67 See id at 692.
  • 68 Id at 689, 692 (citation omitted).
  • 69 Id at 692.
  • 70 Armour, et al, 6 J Empirical Legal Stud at 690 (cited in note 66).
  • 71 See generally Shauhin A. Talesh, How Dispute Resolution System Design Matters: An Organizational Analysis of Dispute Resolution Structures and Consumer Lemon Laws , 46 L & Society Rev 463 (2012).
  • 72 Id at 466–68.
  • 73 Id at 464.
  • 74 Id at 464–65.
  • 75 Talesh, 46 L & Society Rev at 474 (cited in note 71).
  • 76 See id at 478–80.
  • 77 See id at 478.
  • 78 Id at 483–89.
  • 79 See Seawright and Gerring, 61 Polit Rsrch Q at 306 (cited in note 52).
  • 80 Federico Fabbrini, The Euro-Crisis and the Courts: Judicial Review and the Political Process in Comparative Perspective , 32 Berkeley J Intl L 64, 65 (2014).
  • 81 See id at 75–76.
  • 83 Id at 65.
  • 84 Fabbrini, 32 Berkeley J Intl L at 65 (cited in note 80).
  • 85 See Lawrence B. Mohr, The Reliability of the Case Study as a Source of Information , 2 Advances Info Processing Orgs 65, 67–69 (1985).
  • 86 However, for a critique of the reliability of DNA evidence, see generally Andrea Roth, Maryland v. King and the Wonderful, Horrible DNA Revolution in Law Enforcement , 11 Ohio St J Crim L 295 (2013).
  • 87 David Collier, Understanding Process Tracing , 44 PS: Polit Sci & Polit 823, 826 (2011).
  • 88 Id at 825.
  • 89 “Straw-in-the-wind” evidence does not prove or disprove a theory, but suggests that its validity is more likely than it would otherwise be. “Hoop” evidence can disprove a theory but cannot independently establish its validity. Id.
  • 90 For an excellent example of how to effectively use process tracing, see Tasha Fairfield, Going Where the Money Is: Strategies for Taxing Economic Elites in Unequal Democracies , 47 World Development 42, 46–51 (2013).
  • 91 See generally Nicholas Pedriana and Robin Stryker, The Strength of a Weak Agency: Enforcement of Title VII of the 1964 Civil Rights Act and the Expansion of State Capacity, 1965–1971 , 110 Am J Sociology 709 (2004).
  • 92 Title VII of the Civil Rights Act of 1964, Pub L No 88-352, 78 Stat 253, codified at 42 USC § 2000e et seq.
  • 93 Pedriana and Stryker, 110 Am J Sociology at 710–11, 725–27 (cited in note 91).
  • 94 401 US 424 (1971).
  • 95 Id at 431 (holding that “[t]he Act proscribes not only overt discrimination but also practices that are fair in form, but discriminatory in operation”).
  • 96 Pedriana and Stryker, 110 Am J Sociology at 725 (cited in note 91).
  • 97 See id at 721.
  • 98 Id at 720.
  • 99 See id at 729.
  • 100 Pedriana and Stryker, 110 Am J Sociology at 739–40, 748 (cited in note 91).
  • 101 Id at 721.
  • 103 Id at 725.
  • 104 Pedriana and Stryker, 110 Am J Sociology at 725 (cited in note 91) (brackets in original).
  • 105 See id.
  • 106 See id at 728–30.
  • 107 See id at 723, 726.
  • 108 See Pedriana and Stryker, 110 Am J Sociology at 723 (cited in note 91).
  • 109 Id at 735 (brackets and ellipsis in original).
  • 111 For another example of process tracing to establish path dependence, see generally Katerina Linos, Path Dependence in Discrimination Law: Employment Cases in the United States and the European Union , 35 Yale J Intl L 115 (2010).
  • 112 See Paul Pierson, Increasing Returns, Path Dependence, and the Study of Politics , 94 Am Polit Sci Rev 251, 251–52 (2000).
  • 113 James Mahoney, Path Dependence in Historical Sociology , 29 Theory & Society 507, 513 (2000).
  • 114 See Giovanni Capoccia and R. Daniel Kelemen, The Study of Critical Junctures: Theory, Narrative, and Counterfactuals in Historical Institutionalism , 59 World Polit 341, 343, 358–59 (2007).
  • 115 See Risa L. Goluboff, The Lost Promise of Civil Rights 175–76 (Harvard 2007).
  • 116 See id at 217–37.
  • 117 See id at 174–237.
  • 118 See id at 81–82.  
  • 119 See Goluboff, The Lost Promise at 197 (cited in note 115).
  • 120 Id at 187.
  • 122 See id at 238–70.
  • 123 See, for example, Erwin Chemerinsky, Constitutional Law: Principles and Policies 722–25 (Wolters Kluwer 4th ed 2011).
  • 124 See Goluboff, The Lost Promise at 81–84, 175–76 (cited in note 115).
  • 125 Id at 112–14 (“The CRS maintained its original commitment to the rights of labor and reworked, rather than rejected, labor rights into its new civil rights practice during the 1940s.”).
  • 126 See id at 23.
  • 127 Id at 111–12.
  • 128 See generally Paul W. Holland, Statistics and Causal Inference , 81 J Am Stat Assn 945 (1986).
  • 129 See id at 945–48.

Thanks to Jake Gersen, Todd Henderson, Daryl Levinson, Jens Ludwig, Richard McAdams, Tom Miles, Matthew Stephenson, David Strauss, Adrian Vermeule, Noah Zatz, and participants at a workshop at The University of Chicago Law School for helpful comments.


Neurol Res Pract


How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated Data

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of methods, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1 Iterative research process
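The cyclical workflow above can be caricatured as a loop. This is a schematic sketch only: `collect`, `analyse`, and `saturated` are hypothetical placeholders standing in for human judgment and fieldwork, and the toy data are invented, not from the paper.

```python
def iterative_study(plan, collect, analyse, saturated, max_rounds=20):
    """Schematic of the cyclical qualitative workflow: collect data,
    analyse it, adapt the plan, and stop once no relevant new
    information emerges (saturation)."""
    findings, log = [], []
    for round_no in range(1, max_rounds + 1):
        data = collect(plan)
        new_insights = analyse(data, findings)
        findings.extend(new_insights)
        log.append((round_no, dict(plan), len(new_insights)))  # document every step
        if saturated(new_insights):
            break
        # Adapt the plan in light of the latest insight, if any.
        plan = {**plan, "focus": new_insights[-1] if new_insights else plan["focus"]}
    return findings, log

# Toy stand-ins: each round yields fewer new insights until saturation.
pool = iter([["transport barrier", "cost"], ["cost detail"], []])
findings, log = iterative_study(
    plan={"focus": "access"},
    collect=lambda plan: next(pool),
    analyse=lambda data, known: [d for d in data if d not in known],
    saturated=lambda new: not new,
)
print(findings)
```

The `log` list mirrors the transparency requirement in the text: every round records what was planned and how much new information it produced, so the stopping decision is documented rather than implicit.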

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2 Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
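The sense in which coding "makes raw data sortable" can be shown in a few lines. The segments, sources, and code labels below are invented for illustration; real coding is done by the researcher in software such as NVivo, MAXQDA or ATLAS.ti, not programmatically.

```python
from collections import defaultdict

# Hypothetical coded segments: (source, text, codes) triples, roughly
# what a qualitative data management tool stores after manual tagging.
segments = [
    ("SOP", "Tele-neurology consult must start within 10 minutes.",
     ["tele-neurology", "timing"]),
    ("observation", "Consult started 25 minutes after arrival.",
     ["tele-neurology", "delay"]),
    ("staff interview", "The video cart is often in use on the ward.",
     ["tele-neurology", "equipment"]),
    ("patient interview", "Registration felt quick and friendly.",
     ["admission"]),
]

def by_code(segments):
    """Invert the tagging: map each code to every segment carrying it,
    so all material on one topic can be extracted across data sources."""
    index = defaultdict(list)
    for source, text, codes in segments:
        for code in codes:
            index[code].append((source, text))
    return dict(index)

index = by_code(segments)
# Pull every segment about tele-neurology, regardless of source:
for source, text in index["tele-neurology"]:
    print(f"{source}: {text}")
```

Grouping and categorising codes, the synthesis step the text describes, then operates on the keys of this index rather than on the raw transcripts.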

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [20]. Here it is important to support the main findings with relevant quotations, which may add information, context, emphasis or real-life examples [20, 23]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [21].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be reduced. In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [26]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [17, 27]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [23]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [19].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [1, 16, 19, 20, 27], and to ensure “information-richness” [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – a point called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].
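As a rough sketch, the iterative collect-analyse cycle with a saturation stopping rule might look like the loop below. The batch contents and code names are invented; in practice the judgement of "nothing new" is made by the research team, not by set arithmetic.

```python
# Each "batch" stands for the codes found in one round of data collection
# (e.g. five interviews). All batch contents are invented for illustration.
batches = [
    {"waiting times", "staff communication"},
    {"staff communication", "handover problems"},
    {"waiting times"},            # yields nothing new -> saturation
    {"would never be analysed"},  # collection stops before this batch
]

def collect_until_saturated(batches):
    """Collect batch after batch; stop once a batch adds no new codes."""
    seen = set()
    collected = 0
    for batch in batches:
        collected += 1
        new_codes = batch - seen
        seen |= batch
        if not new_codes:   # no new relevant information found
            break           # saturation reached; further sampling is redundant
    return collected, seen

n_batches, codes = collect_until_saturated(batches)
```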

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
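A purposive sampling frame for the EVT example could be sketched as a grid of pre-defined variants to cover. The group and shift labels follow the example above; the full cross-product design is an illustrative assumption, not a requirement of purposive sampling.

```python
from itertools import product

# Pre-defined variants deemed relevant for the study (labels follow the
# EVT example in the text; the cross-product is an illustrative choice).
groups = ["neurologist", "radiologist", "nurse", "patient", "relative"]
shifts = ["day", "night", "weekend"]

# One cell per combination of stakeholder group and observation time;
# sampling aims to fill every cell rather than draw cases at random.
sampling_frame = [{"group": g, "shift": s} for g, s in product(groups, shifts)]
# 5 groups x 3 shifts = 15 cells to cover
```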

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is the pilot interview, in which different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length is for an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
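Comparing two coders' independent codings to find the segments that need discussion could be sketched as below. The segment IDs and codes are invented, and the one-code-per-segment simplification is an assumption for the sketch.

```python
# Independent codings of the same segments by two coders (invented data;
# for simplicity each segment gets exactly one code here).
coder_a = {"seg1": "waiting times", "seg2": "communication", "seg3": "waiting times"}
coder_b = {"seg1": "waiting times", "seg2": "staff workload", "seg3": "waiting times"}

# Segments where the coders differ become the agenda for the team's
# consolidation discussions, after which the coding list is revised.
disagreements = {
    seg: (coder_a[seg], coder_b[seg])
    for seg in coder_a
    if coder_a[seg] != coder_b[seg]
}
```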

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [17]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [23]. This means that such scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, separating those who recruit the study participants from those who collect and analyse the data is not relevant for the quality or “objectivity” of qualitative research. Experience even shows that it might be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [18].
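For completeness, one common interrater score, Cohen's kappa, can be computed as sketched below for two coders who assigned exactly one code per segment. The codings are invented, and, as noted above, reporting such a score is optional and says little by itself about the quality of the analysis.

```python
from collections import Counter

# Invented codings: one code per segment, same segments for both coders.
a = ["pain", "pain", "coping", "coping", "pain", "coping"]
b = ["pain", "coping", "coping", "coping", "pain", "coping"]

def cohens_kappa(x, y):
    """Agreement corrected for the agreement expected by chance."""
    n = len(x)
    observed = sum(xi == yi for xi, yi in zip(x, y)) / n
    cx, cy = Counter(x), Counter(y)
    expected = sum(cx[c] * cy[c] for c in cx) / (n * n)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(a, b)  # observed 5/6, chance-expected 1/2 -> kappa 2/3
```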

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is applied irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points (Table 1)

What is qualitative research good for?

• Assessing complex multi-component interventions or systems (of change)

• Answering the question: what works for whom when, how and why?

• Focussing on intervention improvement

Data collection methods:

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

Data analysis:

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

Mixed methods designs, i.e. combinations of quantitative and/or qualitative methods:

• Convergent parallel design: quali and quanti in parallel

• Explanatory sequential design: quanti followed by quali

• Exploratory sequential design: quali followed by quanti

How to assess qualitative research:

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

How not to assess qualitative research:

• Protocol adherence

• Sample size

• Randomisation

• Interrater reliability, variability and other “objectivity checks”

• Not being quantitative research

Acknowledgements

Abbreviations

EVT: Endovascular treatment
RCT: Randomised Controlled Trial
SOP: Standard Operating Procedure
SRQR: Standards for Reporting Qualitative Research

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Qualitative and Quantitative Approaches to Rule of Law Research

68 Pages. Posted: 7 Aug 2016

Kristina Simion

Australian National University (ANU), Institute of Advanced Studies, Research School of Social Sciences (RSSS), School of Regulation & Global Governance (RegNet), Students

Date Written: July 7, 2016

Qualitative and quantitative research can lay the basis for rule of law interventions that are rooted in sound evidence and responsive to local community interests, aspirations, values, and demands. Without grounded knowledge of qualitative and quantitative research, researchers' results can easily be erroneous (as a result of, for example, poorly designed interview protocols and questionnaires). Indeed, it is an unfortunate truth that rule of law interventions are continually critiqued for being planned on the basis of inadequate research and information, and for producing unsatisfactory results. INPROL's new Practitioner's Guide on Qualitative and Quantitative Approaches to Rule of Law Research was drafted to assist practitioners in structuring research. It clarifies common research terminology and concepts, and outlines the steps involved in designing and implementing qualitative and quantitative research. The Guide recognizes that high-quality research is an essential element of the design and evaluation of rule of law programs. It is also a useful way of meeting a practitioner's own information needs, as conducting rule of law research can be overwhelming for the practitioner who has little previous experience. Where do you start? What components do you need to factor into your plans? What kind of research do you need to conduct? These difficult questions are even harder to address in a conflict-affected environment, where access to research participants (i.e., the people participating in research) may be difficult; information may be scarce and difficult to evaluate; and the researcher may find it hard to travel because of security risks.

Keywords: rule of law, research, methods, methodologies, development

Suggested Citation

Kristina Simion (Contact Author)


Canberra Australia

HOME PAGE: http://regnet.anu.edu.au/our-people/phd-students/kristina-simion



Speaker 1: In this video, we're going to dive into the topic of qualitative coding, which you'll need to understand if you plan to undertake qualitative analysis for any dissertation, thesis, or research project. We'll explain what exactly qualitative coding is, the different coding approaches and methods, and how to go about coding your data step by step. So go ahead, grab a cup of coffee, grab a cup of tea, whatever works for you, and let's jump into it. Hey, welcome to Grad Coach TV, where we demystify and simplify the oftentimes intimidating world of academic research. My name's Emma, and today we're going to explore qualitative coding, an essential first step in qualitative analysis. If you'd like to learn more about qualitative analysis or research methodology in general, we've also got videos covering those topics, so be sure to check them out. I'll include the links below. If you're new to Grad Coach TV, hit that subscribe button for more videos covering all things research related. Also, if you're looking for hands-on help with your qualitative coding, check out our one-on-one coaching services, where we hold your hand through the coding process step by step. Alternatively, if you're looking to fast track your coding, we also offer a professional coding service, where our seasoned qualitative experts code your data for you, ensuring high-quality initial coding. If that sounds interesting to you, you can learn more and book a free consultation at gradcoach.com. All right, with that out of the way, let's get into it. To kick things off, let's start by understanding what a code is. At the simplest level, a code is a label that describes a piece of content. For example, in the sentence, pigeons attacked me and stole my sandwich, you could use pigeons as a code. This code would simply describe that the sentence involves pigeons. Of course, there are many ways you could code this, and this is just one approach. 
We'll explore the different ways in which you can code later in this video. So, qualitative coding is simply the process of creating and assigning codes to categorize data extracts. You'll then use these codes later down the road to derive themes and patterns for your actual qualitative analysis. For example, thematic analysis or content analysis. It's worth noting that coding and analysis can take place simultaneously. In fact, it's pretty much expected that you'll notice some themes emerge while you code. That said, it's important to note that coding does not necessarily involve identifying themes. Instead, it refers to the process of labeling and grouping similar types of data, which in turn will make generating themes and analyzing the data more manageable. You might be wondering then, why should I bother with coding at all? Why not just look for themes from the outset? Well, coding is a way of making sure your data is valid. In other words, it helps ensure that your analysis is undertaken systematically, and that other researchers can review it. In the world of research, we call this transparency. In other words, coding is the foundation of high quality analysis, which makes it an essential first step. Right, now that we've got a plain language definition of coding on the table, the next step is to understand what types of coding exist. Let's start with the two main approaches, deductive and inductive coding. With deductive coding, you as the researcher begin with a set of pre-established codes and apply them to your data set, for example, a set of interview transcripts. Inductive coding, on the other hand, works in reverse, as you start with a blank canvas and create your set of codes based on the data itself. In other words, the codes emerge from the data. Let's take a closer look at both of these approaches.
With deductive coding, you'll make use of predetermined codes, also called a priori codes, which are developed before you interact with the present data. This usually involves drawing up a set of codes based on a research question or previous research from your literature review. You could also use an existing code set from the codebook of a previous study. For example, if you were studying the eating habits of college students, you might have a research question along the lines of, what foods do college students eat the most? As a result of this research question, you might develop a code set that includes codes such as sushi, pizza, and burgers. You'd then code your data set using only these codes, regardless of what you find in the data. On the upside, the deductive approach allows you to undertake your analysis with a very tightly focused lens and quickly identify relevant data, avoiding distractions and detours. The downside, of course, is that you could miss out on some very valuable insights as a result of this tight predetermined focus. Now let's look at the opposite approach, inductive coding. As I mentioned earlier, this type of coding involves jumping right into the data without predetermined codes and developing the codes based on what you find within the data. For example, if you were to analyze a set of open-ended interview question responses, you wouldn't necessarily know which direction the conversation would flow. If a conversation begins with a discussion of cats, it might go on to include other animals too. And so, you'd add these codes as you progress with your analysis. Simply put, with inductive coding, you go with the flow of the data. Inductive coding is great when you're researching something that isn't yet well understood because the coding derived from the data helps you explore the subject. 
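A deductive pass with the a priori code set from the eating-habits example could be sketched like this. Simple keyword matching stands in for the researcher's judgement, and the responses are invented; the point is only that the code set is fixed before touching the data.

```python
# A priori code set, fixed before the data is seen (deductive approach).
a_priori_codes = ["sushi", "pizza", "burgers"]

# Invented open-ended responses from the eating-habits example.
responses = [
    "I mostly eat pizza between lectures.",
    "Sushi on Fridays, pizza on weekends.",
    "Usually just instant noodles.",  # outside the code set -> left uncoded
]

# Apply only the predetermined codes, regardless of what else the data holds.
coded = [
    [code for code in a_priori_codes if code in r.lower()]
    for r in responses
]
```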
Therefore, this approach to coding is usually adopted when researchers want to investigate new ideas or concepts or when they want to create new theories. So, as you can see, the inductive and deductive approaches represent two ends of a spectrum, but this doesn't mean that they're mutually exclusive. You can also take a hybrid approach where you utilize a mix of both. For example, if you've got a set of codes you've derived from a literature review or a previous study, in other words, a deductive approach, but you still don't have a rich enough code set to capture the depth of your qualitative data, you can combine deductive and inductive approaches, which we call a hybrid approach. To adopt a hybrid approach, you'll begin your analysis with a set of a priori codes, in other words, a deductive approach, and then add new codes, in other words, an inductive approach, as you work your way through the data. Essentially, the hybrid coding approach provides the best of both worlds, which is why it's pretty common to see this in research. All right, now that we've covered what qualitative coding is and the overarching approaches, let's dive into the actual coding process and look at how to undertake the coding. So, let's take a look at the actual coding process step by step. Whether you adopt an inductive or deductive approach, your coding will consist of two stages, initial coding and line-by-line coding. In the initial coding stage, the objective is to get a general overview of the data by reading through and understanding it. If you're using an inductive approach, this is also where you'll develop an initial set of codes. Then in the second stage, line-by-line coding, you'll delve deeper into the data and organize it into a formalized set of codes. Let's take a look at these stages of qualitative coding in more detail. Stage one, initial coding. The first step of the coding process is to identify the essence of the text and code it accordingly. 
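The hybrid approach described earlier, starting from a priori codes and adding new ones as unanticipated topics emerge, could be sketched as below. The topic sets per transcript are invented for illustration.

```python
# Deductive starting set (a priori codes).
codebook = {"cats", "dogs"}

# Topics found in each transcript (invented); some were not anticipated.
transcripts = [
    {"cats"},
    {"cats", "parrots"},     # "parrots" was not in the a priori set
    {"dogs", "hamsters"},    # neither was "hamsters"
]

# Inductive extension: any new concept found in the data joins the codebook.
added_inductively = set()
for topics in transcripts:
    for topic in topics:
        if topic not in codebook:
            codebook.add(topic)
            added_inductively.add(topic)
```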
While there are many qualitative analysis software options available, you can just as easily code text-based data using Microsoft Word's comments feature. In fact, if it's your first time coding, it's oftentimes best to just stick with Word as this eliminates the additional need to learn new software. Importantly, you should avoid the temptation of any sort of automated coding software or service. No matter what promises they make, automated software simply cannot compare to human-based coding as it can't understand the subtleties of language and context. Don't waste your time with this. In all likelihood, you'll just end up having to recode everything yourself anyway. Okay, so let's take a look at a practical example of the coding process. Assume you had the following interview data from two interviewees. In the initial stage of coding, you could assign the code of pets or animals. These are just initial fairly broad codes that you can and will develop and refine later. In the initial stage, broad rough codes are fine. They're just a starting point which you will build onto later when you undertake line-by-line coding. So, at this stage, you're probably wondering how to decide what codes to use, especially when there are so many ways to read and interpret any given sentence. Well, there are a few different coding methods you can adopt and the right method will depend on your research aims and research questions. In other words, the way you code will depend on what you're trying to achieve with your research. Five common methods utilized in the initial coding stage include in vivo coding, process coding, descriptive coding, structural coding, and value coding. These are not the only methods available, but they're a useful starting point. Let's take a look at each of them to understand how and when each method could be useful. Method number one, in vivo coding. 
When you use in vivo coding, you make use of a participant's own words rather than your interpretation of the data. In other words, you use direct quotes from participants as your codes. By doing this, you'll avoid trying to infer meaning by staying as close to the original phrases and words as possible. In vivo coding is particularly useful when your data are derived from participants who speak different languages or come from different cultures. In cases like these, it's often difficult to accurately infer meaning thanks to linguistic and or cultural differences. For example, English speakers typically view the future as in front of them and the past as behind them. However, this isn't the same in all cultures. Speakers of Aymara view the past as in front of them and the future as behind them. Why? Because the future is unknown. It must be out of sight or behind them. They know what happened in the past so their perspective is that it's positioned in front of them where they can see it. In a scenario like this one, it's not possible to derive the reason for viewing the past as in front and the future as behind without knowing the Aymara culture's perception of time. Therefore, in vivo coding is particularly useful as it avoids interpretation errors. While this case is a unique one, it illustrates the point that different languages and cultures can view the same things very differently, which would have major impacts on your data. Method number two, process coding. Next up, there's process coding, which makes use of action-based codes. Action-based codes are codes that indicate a movement or procedure. These actions are often indicated by gerunds, that is words ending in ing. For example, running, jumping, or singing. Process coding is useful as it allows you to code parts of data that aren't necessarily spoken but that are still important to understand the meaning of the text. 
For example, you may have action codes such as describing a panda, singing a song, or arguing with a relative. Another example would be a participant saying something like, "I have no idea where she is." A sentence like this could be interpreted in many different ways depending on the context and the movements of the participant. The participant could, for example, shrug their shoulders, which would indicate that they genuinely don't know where the girl is. Alternatively, they could wink, suggesting that they do actually know where the girl is. Simply put, process coding is useful as it allows you to concisely identify occurrences in a data set that aren't necessarily spoken, and to provide a dynamic account of events.

Method number three, descriptive coding.

Descriptive coding is a popular coding method that aims to summarize extracts by using a single word that encapsulates the general idea of the data. These words typically describe the data in a highly condensed manner, which allows you as the researcher to quickly refer to the content. For example, a descriptive code could be food, when coding a video clip of a group of people discussing what they ate throughout the day, or cooking, when coding an image showing the steps of a recipe. Descriptive coding is very useful when dealing with data that appear in forms other than text, for example video clips, sound recordings, or images. It's also particularly useful when you want to organize a large data set by topic area, which makes it a popular choice for many research projects.

Method number four, structural coding.

True to its name, structural coding involves labeling and describing specific structural attributes of the data. Generally, this means coding according to answers to the questions who, what, where, and how, rather than the actual topics expressed in the data.
For example, if you were coding a collection of dissertations, which would be quite a large data set, structural coding might be useful, as you could code according to the different sections within each document. Coding "what"-centric labels, such as hypotheses, literature review, and methodology, would help you refer to sections efficiently and navigate the data without having to work through it all over again. So, structural coding is useful when you want to access segments of data quickly, and it can help tremendously when you're dealing with large data sets.

Structural coding can also be useful for data from open-ended survey questions. Such data may initially be difficult to code, as they lack the set structure of other forms of data, such as an interview with a strict, closed set of questions. In this case, it's useful to code sections of data that answer certain questions, such as who, what, where, and how.

Method number five, values coding.

Last but not least, values coding involves coding excerpts that relate to participants' worldviews. Typically, this type of coding focuses on excerpts that provide insight into the values, attitudes, and beliefs of the participants. In practical terms, this means you'd be looking for instances where participants say things like "I feel", "I think that", "I need", and "it's important that", as these sorts of statements often provide insight into their values, attitudes, and beliefs. Values coding is therefore very useful when your research aims and research questions seek to explore cultural values, interpersonal experiences and actions, or the human experience more broadly.

All right, so we've looked at five popular methods that can be used in the initial coding stage.
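To round off the five methods, here's a minimal sketch of values coding: flagging excerpts that contain the kinds of phrases just mentioned. The utterances and marker list are invented, and a human coder would review every flag:

```python
# A sketch of values coding support: flag excerpts containing phrases
# that often signal values, attitudes, or beliefs. The marker list is
# only a starting point for a human coder, not a definitive filter.
VALUE_MARKERS = ["i feel", "i think that", "i need", "it's important that"]

utterances = [
    "I think that everyone should keep a pet.",
    "We drove to the park on Saturday.",
    "It's important that the llamas are well cared for.",
]

def flag_values_excerpts(lines):
    """Return excerpts containing at least one value/belief marker."""
    return [u for u in lines if any(m in u.lower() for m in VALUE_MARKERS)]

flagged = flag_values_excerpts(utterances)
print(flagged)  # two of the three utterances are flagged
```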
As I mentioned, this is not a comprehensive list, so if none of these sound relevant to your project, be sure to look up alternative coding methods to find the right fit for your research aims. The five methods we've discussed allow you to arrange your data so that it's easier to navigate during the next stage, line-by-line coding. While these methods can all be used individually, it's important to know that it's possible, and quite often beneficial, to combine them. For example, when conducting initial coding on interview data, you could begin by using structural coding to indicate who speaks when. Then, as a next step, you could apply descriptive coding so that you can navigate to and between conversation topics easily. As with all design choices, the right method or combination of methods depends on your research aims and research questions, so think carefully about what you're trying to achieve with your research, then select the method or methods that make sense in light of that.

So, to recap, the aims of initial coding are to understand and familiarize yourself with your data, to develop an initial code set if you're taking an inductive approach, and to take a first shot at coding your data. Once that's done, you can move on to the next stage, line-by-line coding. Let's do it.

Line-by-line coding is pretty much exactly what it sounds like: reviewing your data line by line, digging deeper, refining your codes, and assigning additional codes to each line. With line-by-line coding, the objective is to pay close attention to your data and to refine and expand your coding, especially if you're adopting an inductive approach. For example, if you previously coded a discussion of beverages simply as beverages, you could now go deeper and code more specifically, such as coffee, tea, and orange juice. The aim here is to scratch below the surface.
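The beverages example can be sketched as a second, line-by-line pass that keeps the broad initial code and adds narrower ones. The data lines and the keyword lookup are invented for illustration:

```python
# A sketch of line-by-line refinement: a broad first-pass code
# ("beverages") is supplemented with narrower codes on a closer pass.
lines = [
    "I always start the day with coffee.",
    "In the afternoon I switch to tea.",
    "My kids only drink orange juice.",
]

# First pass: every line got the same broad code.
initial = {line: ["beverages"] for line in lines}

# Second pass: keep the broad code, add a narrower one per line.
NARROW = {"coffee": "coffee", "tea": "tea", "orange juice": "orange juice"}

refined = {}
for line, codes in initial.items():
    extra = [code for kw, code in NARROW.items() if kw in line.lower()]
    refined[line] = codes + extra

for line, codes in refined.items():
    print(codes, "<-", line)
```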
This is the time to get detailed and specific so that you can capture as much richness from the data as possible. In the line-by-line coding process, it's useful to code as much data as possible, even if you don't think you're going to use it. As you go through this process, your coding will become more thorough and detailed, and you'll develop a much better understanding of your data as a result. This will be incredibly valuable in the analysis phase, so don't cut corners here. Take your time to work through your data line by line and apply your mind to refining your coding as much as possible.

Keep in mind that coding is an iterative process, which means you'll move back and forth between interviews or documents to apply your codes consistently throughout the data set. Be careful to clearly define each code, and update previously coded excerpts if you adjust or update the definition of any code, or if you split a code into narrower codes. Line-by-line coding takes time, so don't rush it. Be patient and work through your data meticulously to ensure you develop a high-quality code set.

Stage three, moving from coding to analysis.

Once you've completed your initial and line-by-line coding, the next step is to start your actual qualitative analysis. Of course, the coding process itself will get you into analysis mode, and you'll probably already have some insights and ideas as a result of it, so you should always keep notes of your thoughts as you work through the coding process. When it comes to qualitative data analysis, there are many different methods you can use, including content analysis, thematic analysis, and discourse analysis. The analysis method you adopt will depend heavily on your research aims and research questions.
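One simple way to "clearly define each code", as suggested above, is to keep a small codebook alongside your data. Here's a minimal sketch; the codes, definitions, and examples are all invented for illustration:

```python
# A sketch of a lightweight codebook: each code carries a definition and
# an example so that the code is applied consistently across the data set
# (and across coders, if you're working in a team).
codebook = {
    "pets": {
        "definition": "Mentions of animals kept in the participant's home.",
        "example": "My dog waits by the door every evening.",
    },
    "values": {
        "definition": "Statements of belief or importance, e.g. 'I think that'.",
        "example": "It's important that the animals are well cared for.",
    },
}

def describe(code):
    """Look up a code before applying it, to keep usage consistent."""
    entry = codebook[code]
    return f"{code}: {entry['definition']}"

print(describe("pets"))
```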
We cover qualitative analysis methods on the Grad Coach blog, so we're not going to go down that rabbit hole here, but we will discuss the important first steps that build the bridge from qualitative coding to qualitative analysis. So, how do you get started with your analysis? Well, each analysis will be different, but it's useful to ask yourself the following general questions to get the wheels turning: What actions and interactions are shown in the data? What are the aims of these interactions and excerpts? How do participants interpret what is happening, and how do they speak about it? What does their language reveal? What assumptions do the participants make? What are the participants doing? Why do I want to learn about this? What am I trying to find out?

As with initial coding and line-by-line coding, your qualitative analysis can follow certain steps. The first two steps will typically be code categorization and theme identification. Let's look at each.

Code categorization, the first step, is simply the process of reviewing everything you've coded and creating categories to guide your future analysis. In other words, it's about bundling similar or related codes into categories to help organize your data effectively. Let's look at a practical example. If you were discussing different types of animals, your codes might include dogs, llamas, and lions. In the process of code categorization, you could categorize these three animals as mammals, whereas you could categorize flies, crickets, and beetles as insects. By creating these code categories, you make your data more organized and enrich it, so that you can see new connections between different groups of codes. Once you've categorized your codes, you can move on to the next step: identifying the themes in your data. Let's look at the theme identification step.
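Before moving on, the mammals/insects categorization above can be sketched in a few lines. The category-to-code mapping is exactly the one from the example; building such a mapping is, of course, the researcher's judgment call:

```python
# A sketch of code categorization: related codes are bundled into broader
# categories, mirroring the mammals/insects example.
categories = {
    "mammals": ["dogs", "llamas", "lions"],
    "insects": ["flies", "crickets", "beetles"],
}

# Invert the mapping for quick lookup: which category holds a given code?
code_to_category = {
    code: category
    for category, codes in categories.items()
    for code in codes
}

print(code_to_category["llamas"])   # mammals
print(code_to_category["beetles"])  # insects
```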
From the coding and categorization processes, you'll naturally start noticing themes, so the next logical step is to identify and clearly articulate the themes in your data set. When you determine themes, you take what you've learned from the coding and categorization stages and synthesize it. This is the part of the analysis process where you begin to draw meaning from your data and produce a narrative. The nature of this narrative will, of course, depend on your research aims, your research questions, and the analysis method you've chosen, for example content analysis or thematic analysis. So, keep these factors front of mind as you scan for themes, as they'll help you stay aligned with the big picture.

All right, now that we've covered both the what and the how of qualitative coding, I want to quickly share some general tips and suggestions to help you optimize your coding process. Let's rapid fire.

One: before you begin coding, plan out the steps you'll take and the coding approach and method or methods you'll follow, to avoid inconsistencies.

Two: when adopting a deductive approach, it's best to use a codebook with detailed descriptions of each code right from the start of the coding process. This will ensure that you apply codes consistently based on their descriptions, and it will help you keep your work organized.

Three: whether you adopt an inductive or deductive approach, keep track of the meanings of your codes and remember to revisit them as you go along.

Four: while coding, keep your research aims, research questions, coding methods, and analysis method front of mind. This will help you avoid directional drift, which happens when coding is not kept consistent.

Five: if you're working in a research team with multiple coders, make sure that everyone has been trained and clearly understands how codes need to be assigned.
If multiple coders are pulling in even slightly different directions, you'll end up with a mess that needs to be redone, and you don't want that. So keep these five tips in mind and you'll be on the fast track to coding success.

And there you have it, qualitative coding in a nutshell. Remember, as with every design choice in your dissertation, thesis, or research project, your research aims and research questions will have a major influence on how you approach the coding, so keep these two elements front of mind every step of the way and make sure your coding approach and methods align well.

If you enjoyed the video, hit the like button, and leave a comment if you have any questions. Also, be sure to subscribe to the channel for more research-related content. If you need a helping hand with your qualitative coding or any part of your research project, check out our private coaching service, where we work with you on a one-on-one basis, chapter by chapter, to help you craft a winning piece of research. If that sounds interesting, book a free consultation with a friendly coach at gradcoach.com. As always, I'll include a link below. That's all for this episode of Grad Coach TV. Until next time, good luck.
