Research-Methodology

Deductive Approach (Deductive Reasoning)

A deductive approach is concerned with “developing a hypothesis (or hypotheses) based on existing theory, and then designing a research strategy to test the hypothesis” [1].

It has been stated that deductive reasoning moves from the general to the particular: “if a causal relationship or link seems to be implied by a particular theory or case example, it might be true in many cases. A deductive design might test to see if this relationship or link did obtain in more general circumstances” [2].

The deductive approach can be explained by means of hypotheses, which can be derived from the propositions of the theory. In other words, the deductive approach is concerned with deducing conclusions from premises or propositions.

Deduction begins with an expected pattern “that is tested against observations, whereas induction begins with observations and seeks to find a pattern within them” [3] .

Advantages of Deductive Approach

The deductive approach offers the following advantages:

  • Possibility to explain causal relationships between concepts and variables
  • Possibility to measure concepts quantitatively
  • Possibility to generalize research findings to a certain extent

The alternative to the deductive approach is the inductive approach. The list below guides the choice of approach depending on circumstances:

  • Wealth of literature: an abundance of sources favours the deductive approach; a scarcity of sources favours the inductive approach.
  • Time availability: the deductive approach suits studies with a short time available; the inductive approach suits studies with no shortage of time to complete the study.
  • Risk: the deductive approach suits researchers who want to avoid risk; with the inductive approach risk is accepted, as no theory may emerge at all.

Choice between deductive and inductive approaches

Deductive research approach explores a known theory or phenomenon and tests if that theory is valid in given circumstances. It has been noted that “the deductive approach follows the path of logic most closely. The reasoning starts with a theory and leads to a new hypothesis. This hypothesis is put to the test by confronting it with observations that either lead to a confirmation or a rejection of the hypothesis” [4] .

Moreover, deductive reasoning can be explained as “reasoning from the general to the particular” [5], whereas inductive reasoning is the opposite. In other words, the deductive approach involves the formulation of hypotheses and their subjection to testing during the research process, while inductive studies do not deal with hypotheses in any way.

Application of Deductive Approach (Deductive Reasoning) in Business Research

In studies with a deductive approach, the researcher formulates a set of hypotheses at the start of the research. Then, relevant research methods are chosen and applied to test the hypotheses, confirming or rejecting them.


Generally, studies using the deductive approach proceed through the following stages:

  • Deducing a hypothesis from theory.
  • Formulating the hypothesis in operational terms and proposing a relationship between two specific variables.
  • Testing the hypothesis with the application of relevant method(s). These are typically quantitative methods such as regression and correlation analysis, or descriptive statistics such as the mean, mode and median.
  • Examining the outcome of the test, and thus confirming or rejecting the theory. When analysing the outcome of tests, it is important to compare research findings with the literature review findings.
  • Modifying the theory in instances when the hypothesis is not confirmed.
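The testing stage of this cycle can be illustrated with a short, self-contained Python sketch. The hypothesis, variable names and figures below are invented purely for illustration, and only a simple correlation analysis is shown; a real study would also report significance tests.

```python
# Illustrative sketch of the deduce-test-examine cycle using invented
# data. Hypothesis (deduced from theory, hypothetical): advertising
# spend is positively associated with sales revenue.

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Operationalised variables (invented): monthly ad spend (k$), sales (k$).
ad_spend = [10, 12, 15, 18, 21, 25, 27, 30]
sales    = [95, 101, 108, 120, 118, 131, 135, 142]

r = pearson_r(ad_spend, sales)

# Examine the outcome: a clearly positive r is taken as support for the
# hypothesis; otherwise the theory would need modification.
print(f"r = {r:.3f}")
print("hypothesis supported" if r > 0.5 else "hypothesis not supported")
```

The threshold of 0.5 here is an arbitrary illustrative cut-off; in practice the confirm/reject decision would rest on a significance test of the correlation or regression coefficient.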


John Dudovskiy


[1] Wilson, J. (2010) “Essentials of Business Research: A Guide to Doing Your Research Project” SAGE Publications, p.7

[2] Gulati, P. M. (2009) “Research Management: Fundamental and Applied Research” Global India Publications, p.42

[3] Babbie, E. R. (2010) “The Practice of Social Research” Cengage Learning, p.52

[4] Snieder, R. & Larner, K. (2009) “The Art of Being a Scientist: A Guide for Graduate Students and their Mentors”, Cambridge University Press, p.16

[5] Pelissier, R. (2008) “Business Research Made Easy” Juta & Co., p.3

The potential of working hypotheses for deductive exploratory research

Open access | Published: 08 December 2020 | Volume 55, pages 1703–1725 (2021)


Mattia Casula (ORCID: 0000-0002-7081-8153), Nandhini Rangarajan & Patricia Shields (ORCID: 0000-0002-0960-4869)


While hypotheses frame explanatory studies and provide guidance for measurement and statistical tests, deductive, exploratory research does not have a framing device like the hypothesis. To this end, this article examines the landscape of deductive, exploratory research and offers the working hypothesis as a flexible, useful framework that can guide and bring coherence across the steps in the research process. The working hypothesis conceptual framework is introduced, placed in a philosophical context, defined, and applied to public administration and comparative public policy. In doing so, this article explains: the philosophical underpinning of exploratory, deductive research; how the working hypothesis informs the methodologies and evidence collection of deductive, exploratory research; the nature of micro-conceptual frameworks for deductive exploratory research; and how the working hypothesis informs data analysis when exploratory research is deductive.


1 Introduction

Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001 ). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013 ; Bryman 2004 as cited in Pearse 2019 ). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory qualitative research is often criticized for lack of methodological rigor and tentativeness in results (Thomas and Magilvy 2011 ). This paper focuses on the neglected topic of deductive, exploratory research and proposes working hypotheses as a useful framework for these studies.

To emphasize that certain types of applied research lend themselves more easily to deductive approaches, to address the downsides of exploratory qualitative research, and to ensure qualitative rigor in exploratory research, a significant body of work on deductive qualitative approaches has emerged (see for example Gilgun 2005, 2015; Hyde 2000; Pearse 2019). According to Gilgun (2015, p. 3), the use of conceptual frameworks derived from comprehensive reviews of literature and a priori theorizing were common practices in qualitative research prior to the publication of Glaser and Strauss’s (1967) The Discovery of Grounded Theory. Gilgun (2015) coined the term Deductive Qualitative Analysis (DQA) to arrive at some sort of “middle ground” such that the benefits of a priori theorizing (structure) and of allowing room for new theory to emerge (flexibility) are reaped simultaneously. According to Gilgun (2015, p. 14), “in DQA, the initial conceptual framework and hypotheses are preliminary. The purpose of DQA is to come up with a better theory than researchers had constructed at the outset (Gilgun 2005, 2009). Indeed, the production of new, more useful hypotheses is the goal of DQA”.

DQA provides a greater level of structure for both the experienced and the novice qualitative researcher (see for example Pearse 2019; Gilgun 2005). According to Gilgun (2015, p. 4), “conceptual frameworks are the sources of hypotheses and sensitizing concepts”. Sensitizing concepts frame the exploratory research process and guide the researcher’s data collection and reporting efforts. Pearse (2019) discusses the usefulness of deductive thematic analysis and pattern matching for guiding DQA in business research. Gilgun (2005) discusses the usefulness of DQA for family research.

Given these rationales for DQA in exploratory research, the overarching purpose of this paper is to contribute to that growing corpus of work on deductive qualitative research. This paper is specifically aimed at guiding novice researchers and student scholars to the working hypothesis as a useful a priori framing tool. The applicability of the working hypothesis as a tool that provides more structure during the design and implementation phases of exploratory research is discussed in detail. Examples of research projects in public administration that use the working hypothesis as a framing tool for deductive exploratory research are provided.

In the next section, we introduce the three types of research purposes. Second, we examine the nature of the exploratory research purpose. Third, we provide a definition of the working hypothesis. Fourth, we explore the philosophical roots of methodology to see where exploratory research fits. Fifth, we connect the discussion to the dominant research approaches (quantitative, qualitative and mixed methods) to see where deductive exploratory research fits. Sixth, we examine the nature of theory and the role of the hypothesis in theory, contrasting formal hypotheses and working hypotheses. Seventh, we provide examples of student and scholarly work that illustrate how working hypotheses are developed and operationalized. Lastly, this paper synthesizes the previous discussion with concluding remarks.

2 Three types of research purposes

The literature identifies three basic types of research purposes—explanation, description and exploration (Babbie 2007 ; Adler and Clark 2008 ; Strydom 2013 ; Shields and Whetsell 2017 ). Research purposes are similar to research questions; however, they focus on project goals or aims instead of questions.

Explanatory research answers the “why” question (Babbie 2007, pp. 89–90), by explaining “why things are the way they are”, and by looking “for causes and reasons” (Adler and Clark 2008, p. 14). Explanatory research is closely tied to hypothesis testing. Theory is tested using deductive reasoning, which goes from the general to the specific (Hyde 2000, p. 83). Hypotheses provide a frame for explanatory research, connecting the research purpose to other parts of the research process (variable construction, choice of data, statistical tests). They help provide alignment or coherence across stages in the research process and provide ways to critique the strengths and weaknesses of the study. For example, were the hypotheses grounded in the appropriate arguments and evidence in the literature? Are the concepts embedded in the hypotheses appropriately measured? Was the best statistical test used? When the analysis is complete (the hypothesis is tested), the results generally answer the research question (the evidence supported or failed to support the hypothesis) (Shields and Rangarajan 2013).

Descriptive research addresses the “what” question and is not primarily concerned with causes (Strydom 2013; Shields and Tajalli 2006). It lies at the “midpoint of the knowledge continuum” (Grinnell 2001, p. 248) between exploration and explanation. Descriptive research is used in both quantitative and qualitative research. A field researcher might want to “have a more highly developed idea of social phenomena” (Strydom 2013, p. 154) and develop thick descriptions using inductive logic. In science, categorization and classification systems such as the periodic table of chemistry or the taxonomies of biology inform descriptive research. These baseline classification systems are a type of theorizing and allow researchers to answer questions like “what kind” of plants and animals inhabit a forest. The answer to this question would usually be displayed in graphs and frequency distributions. This is also the data presentation system used in the social sciences (Ritchie and Lewis 2003; Strydom 2013). For example, a scholar might ask: what are the needs of homeless people? A quantitative approach would include a survey that incorporated a “needs” classification system (preferably based on a literature review). The data would be displayed as frequency distributions or as charts. Description can also be guided by inductive reasoning, which draws “inferences from specific observable phenomena to general rules or knowledge expansion” (Worster 2013, p. 448). Theory and hypotheses are generated using inductive reasoning, which begins with data and the intention of making sense of it by theorizing. Inductive descriptive approaches would use a qualitative, naturalistic design (open-ended interview questions with the homeless population). The data could provide a thick description of the homeless context. For deductive descriptive research, categories serve a purpose similar to hypotheses for explanatory research. If developed with thought and a connection to the literature, categories can serve as a framework that informs measurement and links to data collection mechanisms and to data analysis. Like hypotheses, they can provide horizontal coherence across the steps in the research process.
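The category-as-framework idea for deductive descriptive research can be sketched in a few lines of Python: a needs classification is fixed a priori (here, invented categories standing in for ones drawn from a literature review), and coded observations are then tallied into a frequency distribution. All data below are hypothetical.

```python
from collections import Counter

# A priori classification of needs, fixed before data collection
# (categories are invented, for illustration only).
categories = ["housing", "health care", "employment", "food security"]

# Coded survey responses (invented data).
responses = ["housing", "health care", "housing", "employment",
             "housing", "food security", "health care", "housing"]

# Deductive descriptive analysis: tally observations into the
# pre-defined categories, keeping zero counts visible.
counts = Counter({c: 0 for c in categories})
counts.update(r for r in responses if r in categories)

# Display as a frequency distribution.
for category in categories:
    share = counts[category] / len(responses)
    print(f"{category:15s} {counts[category]:3d}  ({share:.0%})")
```

Note the deductive move: the categories constrain what is counted, in the same way a hypothesis constrains what is measured in explanatory research; an inductive design would instead let categories emerge from the responses.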

Table 1 demonstrates these connections for deductive, descriptive and explanatory research. The arrow at the top emphasizes the horizontal, across-the-research-process view we emphasize. This article makes the case that the working hypothesis can serve the same purpose for deductive, exploratory research as the hypothesis serves for deductive, explanatory research and categories serve for deductive, descriptive research. The cells for exploratory research are filled in with question marks.

The remainder of this paper focuses on exploratory research and the answers to questions found in the table:

What is the philosophical underpinning of exploratory, deductive research?

What is the Micro-conceptual framework for deductive exploratory research? [ As is clear from the article title we introduce the working hypothesis as the answer .]

How does the working hypothesis inform the methodologies and evidence collection of deductive exploratory research?

How does the working hypothesis inform data analysis of deductive exploratory research?

3 The nature of exploratory research purpose

Explorers enter the unknown to discover something new. The process can be fraught with struggle and surprises. Effective explorers creatively resolve unexpected problems. While we typically think of explorers as pioneers or mountain climbers, exploration is very much linked to the experience and intention of the explorer. Babies explore as they take their first steps. The exploratory purpose resonates with these insights. Exploratory research, like reconnaissance, is a type of inquiry that is in the preliminary or early stages (Babbie 2007). It is associated with discovery, creativity and serendipity (Stebbins 2001). But the person doing the discovery also defines the activity or claims the act of exploration. It “typically occurs when a researcher examines a new interest or when the subject of study itself is relatively new” (Babbie 2007, p. 88). Hence, exploration has an open character that emphasizes “flexibility, pragmatism, and the particular, biographically specific interests of an investigator” (Maanen et al. 2001, p. v). These three purposes form a type of hierarchy. An area of inquiry is initially explored. This early work lays the ground for description, which in turn becomes the basis for explanation. Quantitative, explanatory studies dominate contemporary high impact journals (Twining et al. 2017).

Stebbins ( 2001 ) makes the point that exploration is often seen as something like a poor stepsister to confirmatory or hypothesis testing research. He has a problem with this because we live in a changing world and what is settled today will very likely be unsettled in the near future and in need of exploration. Further, exploratory research “generates initial insights into the nature of an issue and develops questions to be investigated by more extensive studies” (Marlow 2005 , p. 334). Exploration is widely applicable because all research topics were once “new.” Further, all research topics have the possibility of “innovation” or ongoing “newness”. Exploratory research may be appropriate to establish whether a phenomenon exists (Strydom 2013 ). The point here, of course, is that the exploratory purpose is far from trivial.

Stebbins’ Exploratory Research in the Social Sciences (2001) is the only book devoted to the nature of exploratory research as a form of social science inquiry. He views it as a “broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding of an area of social or psychological life” (p. 3). It is science conducted in a way distinct from confirmation. According to Stebbins (2001, p. 6), the goal is discovery of potential generalizations, which can become future hypotheses and eventually theories that emerge from the data. He focuses on inductive logic (which stimulates creativity) and qualitative methods. He does not want exploratory research limited to the restrictive formulas and models he finds in confirmatory research. He links exploratory research to Glaser and Strauss’s (1967) flexible, immersive Grounded Theory. Strydom’s (2013) analysis of contemporary social work research methods books echoes Stebbins’ (2001) position. Stebbins’s book is an important contribution, but it limits the potential scope of this flexible and versatile research purpose. If we accepted his conclusion, we would delete the “Exploratory” row from Table 1.

Note that explanatory research can yield new questions, which lead to exploration. Inquiry is a process where inductive and deductive activities can occur simultaneously or in a back and forth manner, particularly as the literature is reviewed and the research design emerges. Strict typologies such as explanation, description and exploration or inductive/deductive can obscure these larger connections and processes. We draw insight from Dewey’s (1896) vision of inquiry as depicted in his seminal “Reflex Arc” article. He notes that “stimulus” and “response”, like other dualities (inductive/deductive), exist within a larger unifying system. Yet the terms have value. “We need not abandon terms like stimulus and response, so long as we remember that they are attached to events based upon their function in a wider dynamic context, one that includes interests and aims” (Hildebrand 2008, p. 16). So too, in methodology, typologies such as deductive/inductive capture useful distinctions with practical value and are widely used in the methodology literature.

We argue that there is a role for exploratory, deductive, and confirmatory research. We maintain that all types of research logics and methods should be in the toolbox of exploratory research. First, as stated above, it makes no sense on its face to identify an extremely flexible purpose that is idiosyncratic to the researcher and then restrict its use to qualitative, inductive, non-confirmatory methods. Second, Stebbins’s (2001) work focused on social science, ignoring the policy sciences. Exploratory research can be ideal for immediate practical problems faced by policy makers, who could find a framework of some kind useful. Third, deductive, exploratory research is more intentionally connected to previous research. Some kind of initial framing device is located or designed using the literature. This may be very important for new scholars who are developing research skills and exploring their field and profession. Stebbins’s insights are most pertinent for experienced scholars. Fourth, frameworks and deductive logic are useful for comparative work because some degree of consistency across cases is built into the design.

As we have seen, the hypotheses of explanatory research and the categories of descriptive research are the dominant frames of social science and policy science. We certainly concur that neither of these frames makes much sense for exploratory research; they would tend to tie it down. We see the problem as a missing framework, or missing way to frame deductive, exploratory research, in the methodology literature. Inductive exploratory research would not work for many case studies that are trying to use evidence to make an argument. What exploratory deductive case studies need is a framework that incorporates flexibility. This is even more true for comparative case studies. A framework of this sort could be usefully applied to policy research (Casula 2020a), particularly evaluative policy research, and to applied research generally. We propose the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It can be used as an evaluative criterion, particularly for process evaluation, and is useful for student research because students can develop theorizing skills using the literature.

Table  1 included a column specifying the philosophical basis for each research purpose. Shifting gears to the philosophical underpinning of methodology provides useful additional context for examination of deductive, exploratory research.

4 What is a working hypothesis

The working hypothesis is first and foremost a hypothesis, or a statement of expectation that is tested in action. The term “working” suggests that these hypotheses are subject to change, are provisional, and that the possibility of finding contradictory evidence is real. In addition, a “working” hypothesis is active; it is a tool in an ongoing process of inquiry. If one begins with a research question, the working hypothesis could be viewed as a statement or group of statements that answer the question. It “works” to move purposeful inquiry forward. “Working” also implies some sort of community; mostly we work together in relationships to achieve some goal.

Working hypothesis is a term found in earlier literature. Indeed, both pioneering pragmatists, John Dewey and George Herbert Mead, use the term working hypothesis in important nineteenth century works. For both Dewey and Mead, the notion of a working hypothesis has a self-evident quality and is applied in a big-picture context.

Most notably, Dewey (1896), in one of his most pivotal early works (“Reflex Arc”), used “working hypothesis” to describe a key concept in psychology: “The idea of the reflex arc has upon the whole come nearer to meeting this demand for a general working hypothesis than any other single concept” (p. 357, italics added). The notion of a working hypothesis was developed more fully 42 years later in Logic: The Theory of Inquiry, where Dewey developed a notion of the working hypothesis that operated on a smaller scale. He defines working hypotheses as a “provisional, working means of advancing investigation” (Dewey 1938, p. 142). Dewey’s definition suggests that working hypotheses would be useful toward the beginning of a research project (e.g., exploratory research).

Mead ( 1899 ) used working hypothesis in a title of an American Journal of Sociology article “The Working Hypothesis and Social Reform” (italics added). He notes that a scientist’s foresight goes beyond testing a hypothesis.

Given its success, he may restate his world from this standpoint and get the basis for further investigation that again always takes the form of a problem. The solution of this problem is found over again in the possibility of fitting his hypothetical proposition into the whole within which it arises. And he must recognize that this statement is only a working hypothesis at the best, i.e., he knows that further investigation will show that the former statement of his world is only provisionally true, and must be false from the standpoint of a larger knowledge, as every partial truth is necessarily false over against the fuller knowledge which he will gain later (Mead 1899 , p. 370).

Cronbach (1975) developed a notion of the working hypothesis consistent with inductive reasoning, but for him, the working hypothesis is a product or result of naturalistic inquiry. He makes the case that naturalistic inquiry is highly context dependent, and that therefore results or seeming generalizations that come from a study should be viewed as “working hypotheses”, which “are tentative both for the situation in which they [were] first uncovered and for other situations” (as cited in Gobo 2008, p. 196).

A quick Google Scholar search using the term “working hypothesis” shows that it is widely used in twentieth- and twenty-first-century science, particularly in titles. In these articles, the working hypothesis is treated as a conceptual tool that furthers investigation in its early or transitioning phases. We could find no explicit links to exploratory research; the exploratory nature of the problem is expressed implicitly. Terms such as “speculative” (Habib 2000, p. 2391) or “rapidly evolving field” (Prater et al. 2007, p. 1141) capture the exploratory nature of the study. The authors might describe how a topic is “new” or reference “change”: “As a working hypothesis, the picture is only new, however, in its interpretation” (Milnes 1974, p. 1731). In a study of soil genesis, Arnold (1965, p. 718) notes that “sequential models, formulated as working hypotheses, are subject to further investigation and change”. Any 2020 article dealing with COVID-19 and respiratory distress would be preliminary almost by definition (Ciceri et al. 2020).

5 Philosophical roots of methodology

According to Kaplan (1964, p. 23), “the aim of methodology is to help us understand, in the broadest sense not the products of scientific inquiry but the process itself”. Methods contain philosophical principles that distinguish them from other “human enterprises and interests” (Kaplan 1964, p. 23). Contemporary research methodology is generally classified as quantitative, qualitative and mixed methods. Leading scholars of methodology have associated each with a philosophical underpinning: positivism (or post-positivism), interpretivism (or constructivism) and pragmatism, respectively (Guba 1987; Guba and Lincoln 1981; Schrag 1992; Stebbins 2001; Mackenzie and Knipe 2006; Atieno 2009; Levers 2013; Morgan 2007; O’Connor et al. 2008; Johnson and Onwuegbuzie 2004; Twining et al. 2017). This section summarizes how the literature typically describes these philosophies and how they inform contemporary methodology and its literature.

Positivism, and its more contemporary version post-positivism, maintains an objectivist ontology, assuming an objective reality which can be uncovered (Levers 2013; Twining et al. 2017). Time- and context-free generalizations are possible, and “real causes of social scientific outcomes can be determined reliably and validly” (Johnson and Onwuegbuzie 2004, p. 14). Further, “explanation of the social world is possible through a logical reduction of social phenomena to physical terms”. It uses an empiricist epistemology, which “implies testability against observation, experimentation, or comparison” (Whetsell and Shields 2015, pp. 420–421). Correspondence theory, a tenet of positivism, asserts that “to each concept there corresponds a set of operations involved in its scientific use” (Kaplan 1964, p. 40).

The interpretivist, constructivist or post-modernist approach is a reaction to positivism. It uses a relativist ontology and a subjectivist epistemology (Levers 2013). In this world of multiple realities, context-free generalities are impossible, as is the separation of facts and values. Causality, explanation, prediction and experimentation depend on assumptions about the correspondence between concepts and reality, which in the absence of an objective reality is impossible. Empirical research can yield “contextualized emergent understanding rather than the creation of testable theoretical structures” (O’Connor et al. 2008, p. 30). The distinctively different world views of positivist/post-positivist and interpretivist philosophy are at the core of many controversies in the methodology, social and policy science literature (Casula 2020b).

With its focus on dissolving dualisms, pragmatism steps outside the objective/subjective debate. Instead, it asks, “what difference would it make to us if the statement were true” (Kaplan 1964 , p. 42). Its epistemology is connected to purposeful inquiry. Pragmatism has a “transformative, experimental notion of inquiry” anchored in pluralism and a focus on constructing conceptual and practical tools to resolve “problematic situations” (Shields 1998 ; Shields and Rangarajan 2013 ). Exploration and working hypotheses are most comfortably situated within the pragmatic philosophical perspective.

6 Research approaches

Empirical investigation relies on three types of methodology—quantitative, qualitative and mixed methods.

6.1 Quantitative methods

Quantitative methods use deductive logic and formal hypotheses or models to explain, predict, and eventually establish causation (Hyde 2000; Kaplan 1964; Johnson and Onwuegbuzie 2004; Morgan 2007). The correspondence between the conceptual and empirical worlds makes measurement possible. Measurement assigns numbers to objects, events or situations, and allows for standardization and subtle discrimination. It also allows researchers to draw on the power of mathematics and statistics (Kaplan 1964, pp. 172–174). Using the power of inferential statistics, quantitative research employs research designs which eliminate competing hypotheses. It is high in external validity, or the ability to generalize to the whole. The research results are relatively independent of the researcher (Johnson and Onwuegbuzie 2004).

Quantitative methods depend on the quality of measurement and a priori conceptualization, and adherence to the underlying assumptions of inferential statistics. Critics charge that hypotheses and frameworks needlessly constrain inquiry (Johnson and Onwuegbunzie 2004 , p. 19). Hypothesis testing quantitative methods support the explanatory purpose.

6.2 Qualitative methods

Qualitative researchers who embrace the post-modern, interpretivist view question everything about the nature of quantitative methods (Willis et al. 2007). Rejecting the possibility of objectivity, the correspondence between ideas and measures, and the constraints of a priori theorizing, they focus on “unique impressions and understandings of events rather than to generalize the findings” (Kolb 2012, p. 85). Characteristics of traditional qualitative research include “induction, discovery, exploration, theory/hypothesis generation and the researcher as the primary ‘instrument’ of data collection” (Johnson and Onwuegbuzie 2004, p. 18). The data of qualitative methods are generated via interviews, direct observation, focus groups and analysis of written records or artifacts.

Qualitative methods provide for understanding and “description of people’s personal experiences of phenomena”. They enable descriptions of detailed “phenomena as they are situated and embedded in local contexts.” Researchers use naturalistic settings to “study dynamic processes” and explore how participants interpret experiences. Qualitative methods have an inherent flexibility, allowing researchers to respond to changes in the research setting. They are particularly good at narrowing to the particular and on the flipside have limited external validity (Johnson and Onwuegbunzie 2004 , p. 20). Instead of specifying a suitable sample size to draw conclusions, qualitative research uses the notion of saturation (Morse 1995 ).

Saturation is used in grounded theory, a widely used and respected interpretivist qualitative research method. Introduced by Glaser and Strauss (1967), this “grounded on observation” (Patten and Newhart 2000, p. 27) methodology focuses on “the creation of emergent understanding” (O’Connor et al. 2008, p. 30). It uses the constant comparative method, whereby researchers develop theory from data by coding and analyzing at the same time. Data collection, coding and analysis, along with theoretical sampling, are systematically combined to generate theory (Kolb 2012, p. 83). The qualitative methods discussed here support exploratory research.
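The saturation idea above can be pictured as a stopping rule. The sketch below is purely illustrative and not from the article: each interview is reduced to a set of codes, and data collection stops once a run of consecutive interviews contributes no new codes (the `patience` threshold is a hypothetical parameter, since saturation in practice is a judgment call, not a formula).

```python
def reached_saturation(coded_interviews, patience=2):
    """Return the 1-based index of the interview at which saturation is
    reached (no new codes for `patience` consecutive interviews), or
    None if the data never saturate."""
    seen = set()
    stale = 0  # consecutive interviews adding nothing new
    for i, codes in enumerate(coded_interviews, start=1):
        new = set(codes) - seen
        if new:
            seen |= new
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                return i
    return None

interviews = [
    {"fear", "reporting"},
    {"reporting", "retaliation"},
    {"fear", "retaliation"},   # nothing new
    {"reporting"},             # nothing new -> saturated here
]
print(reached_saturation(interviews))  # -> 4
```

The point of the sketch is only that saturation, unlike a power calculation, is defined by what the incoming data stop contributing rather than by a pre-set sample size.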

A close look at the philosophies and assumptions of quantitative and qualitative research suggests two contradictory world views. The literature has labeled these contradictory views the Incompatibility Theory, which sets up a quantitative-versus-qualitative tension similar to the seeming separation of art and science or of facts and values (Smith 1983a, b; Guba 1987; Smith and Heshusius 1986; Howe 1988). The incompatibility theory does not hold up in practice. Yin (1981, 1992, 2011, 2017), a prominent case study scholar, showcases a deductive research methodology that crosses boundaries, using both quantitative and qualitative evidence when appropriate.

6.3 Mixed methods

Turning the “Incompatibility Theory” on its head, mixed methods research “combines elements of qualitative and quantitative research approaches … for the broad purposes of breadth and depth of understanding and corroboration” (Johnson et al. 2007, p. 123). It does this by partnering with philosophical pragmatism (Footnote 6). Pragmatism is productive because “it offers an immediate and useful middle position philosophically and methodologically; it offers a practical and outcome-oriented method of inquiry that is based on action and leads, iteratively, to further action and the elimination of doubt; it offers a method for selecting methodological mixes that can help researchers better answer many of their research questions” (Johnson and Onwuegbuzie 2004, p. 17). What is theory for the pragmatist? “Any theoretical model is, for the pragmatist, nothing more than a framework through which problems are perceived and subsequently organized” (Hothersall 2019, p. 5).

Brendel (2009) constructed a simple framework to capture the core elements of pragmatism. Brendel’s four “p”s (practical, pluralism, participatory and provisional) help to show the relevance of pragmatism to mixed methods. Pragmatism is purposeful and concerned with practical consequences. The pluralism of pragmatism overcomes the quantitative/qualitative dualism: it allows for multiple perspectives (including positivism and interpretivism) and thus gets around the incompatibility problem. Inquiry should be participatory, or inclusive of the many views of participants; hence, it is consistent with multiple realities and is also tied to the common concern of a problematic situation. Finally, all inquiry is provisional. This is compatible with experimental methods and hypothesis testing, and consistent with the back and forth of inductive and deductive reasoning. Mixed methods support exploratory research.

Advocates of mixed methods research note that it overcomes the weaknesses, and employs the strengths, of quantitative and qualitative methods. Quantitative methods provide precision; the pictures and narrative of qualitative techniques add meaning to the numbers. Quantitative analysis can provide a big picture and establish relationships, and its results have great generalizability. On the other hand, the “why” behind the explanation is often missing and can be filled in through in-depth interviews, making a deeper and more satisfying explanation possible. Mixed methods bring the benefits of triangulation, or multiple sources of evidence that converge to support a conclusion. They can entertain a “broader and more complete range of research questions” (Johnson and Onwuegbuzie 2004, p. 21) and can move between inductive and deductive methods. Case studies use multiple forms of evidence and are a natural context for mixed methods.

One thing that seems to be missing from the mixed methods literature, and from explicit design, is a place for conceptual frameworks. For example, Heyvaert et al. (2013) examined nine mixed methods studies and found an explicit framework in only two (the transformative and the pragmatic) (p. 663).

7 Theory and hypotheses: where is and what is theory?

Theory is key to deductive research; in essence, empirical deductive methods test theory. Hence, we shift our attention to theory and to the role and functions of hypotheses in theory. Oppenheim and Putnam (1958) note that “by a ‘theory’ (in the widest sense) we mean any hypothesis, generalization or law (whether deterministic or statistical) or any conjunction of these” (p. 25). Van Evera (1997) uses a similar but more complex definition: “theories are general statements that describe and explain the causes or effects of classes of phenomena. They are composed of causal laws or hypotheses, explanations, and antecedent conditions” (p. 8). Sutton and Staw (1995, p. 376), in the highly cited article “What Theory is Not”, assert that hypotheses should contain logical arguments for “why” the hypothesis is expected. Hypotheses need an underlying causal argument before they can be considered theory. The point of this discussion is not to define theory but to establish the importance of hypotheses in theory.

Explanatory research is implicitly relational (A explains B), and the hypotheses of explanatory research lay bare these relationships. Popular definitions of hypotheses capture this relational component. For example, the Cambridge Dictionary defines a hypothesis as “an idea or explanation for something that is based on known facts but has not yet been proven”. Vocabulary.com’s definition emphasizes explanation: a hypothesis is “an idea or explanation that you then test through study and experimentation”. According to Wikipedia, a hypothesis is “a proposed explanation for a phenomenon”. Other definitions drop the relational or explanatory reference. The Oxford English Dictionary defines a hypothesis as a “supposition or conjecture put forth to account for known facts.” Science Buddies defines a hypothesis as a “tentative, testable answer to a scientific question”. According to the Longman Dictionary, a hypothesis is “an idea that can be tested to see if it is true or not”. The Urban Dictionary states a hypothesis is “a prediction or educated guess based on current evidence that is yet to be tested”. We argue that the hypotheses of exploratory research (working hypotheses) are not bound by relational expectations. It is this flexibility that distinguishes the working hypothesis.

Sutton and Staw (1995) maintain that hypotheses “serve as crucial bridges between theory and data, making explicit how the variables and relationships that follow from a logical argument will be operationalized” (p. 376, italics added). Writing in the highly rated journal Computers and Education, Twining et al. (2017) created guidelines for qualitative research as a way to improve its soundness and rigor. They identified the lack of alignment between theoretical stance and methodology as a common problem in qualitative research, along with a lack of alignment between methodology, design, instruments of data collection and analysis. The authors created a guidance summary that emphasized the need to enhance coherence throughout the elements of research design (Twining et al. 2017, p. 12). Perhaps the bridging function of the hypothesis mentioned by Sutton and Staw (1995) is obscured, and often missing, in qualitative methods. Working hypotheses can be a tool to overcome this problem.

For reasons similar to those used by mixed methods scholars, we look to classical pragmatism and the ideas of John Dewey to inform our discussion of theory and working hypotheses. Dewey (1938) treats theory as a tool of empirical inquiry and uses a map metaphor (p. 136). Theory is like a map that helps a traveler navigate the terrain, and it should be judged by its usefulness. “There is no expectation that a map is a true representation of reality. Rather, it is a representation that allows a traveler to reach a destination (achieve a purpose). Hence, theories should be judged by how well they help resolve the problem or achieve a purpose” (Shields and Rangarajan 2013, p. 23). Note that we explicitly link theory to the research purpose. Theory is never treated as an unimpeachable Truth; rather, it is a helpful tool that organizes inquiry, connecting data and problem. Dewey’s approach also expands the definition of theory to include abstractions (categories) outside of causation and explanation. The micro-conceptual frameworks (Footnote 7) introduced in Table 1 are a type of theory. We define conceptual frameworks as the “way the ideas are organized to achieve the project’s purpose” (Shields and Rangarajan 2013, p. 24). Micro-conceptual frameworks do this at a level of analysis very close to the data; they can direct operationalization and ways to assess measurement or evidence at the level of the individual research study. Again, the research purpose plays a pivotal role in the functioning of theory (Shields and Tajalli 2006).

8 Working hypothesis: methods and data analysis

We move on to answer the remaining questions in Table 1. We have established that exploratory research is extremely flexible and idiosyncratic. Given this, we proceed with a few examples and draw out lessons for developing an exploratory purpose, building a framework, and from there identifying data collection techniques and the logic of hypothesis testing and analysis. Early on we noted the value of the working hypothesis framework for student empirical research and applied research. The next section uses a master’s-level student’s work to illustrate the usefulness of working hypotheses as a way to incorporate the literature and structure inquiry. This graduate student was also a mature professional with a research question that emerged from his job, and his project is thus an example of applied research.

Master of Public Administration student Swift (2010) worked for a public agency and was responsible for that agency’s sexual harassment training. The agency needed to evaluate its training but had never done so before. Swift had also never attempted a significant empirical research project. Both of these conditions suggest exploration as a possible approach. He was interested in evaluating the training program, and hence the project had a normative sense. Given his job, he already knew a lot about the problem of sexual harassment and sexual harassment training. What he did not know much about was doing empirical research, reviewing the literature, or building a framework (working hypotheses) to evaluate the training. He wanted a framework that was flexible and comprehensive. In his research, he discovered Lundvall’s (2006) knowledge taxonomy, summarized in four simple ways of knowing (know-what, know-how, know-why, know-who). He asked whether his agency’s training provided the participants with these kinds of knowledge. Lundvall’s categories of knowing became the basis of his working hypotheses. Lundvall’s knowledge taxonomy is well suited for working hypotheses because it is simple and easy to understand intuitively. It can also be tailored to the unique problematic situation of the researcher. Swift (2010, pp. 38–39) developed four basic working hypotheses:

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training

WH3: Capital Metro provides adequate know-why knowledge in its sexual harassment training

WH4: Capital Metro provides adequate know-who knowledge in its sexual harassment training

From here he needed to determine what would constitute the different kinds of knowledge. For example, what constitutes “know-what” knowledge for sexual harassment training? This is where his knowledge and experience working in the field, as well as the literature, come into play. According to Lundvall et al. (1988, p. 12), “know-what” knowledge is about facts and raw information. Swift (2010) learned through the literature that laws and rules were the basis for the mandated sexual harassment training. He read about specific anti-discrimination laws and the subsequent rules and regulations derived from them. These laws and rules used specific definitions and were enacted within a historical context. Laws, rules, definitions and history became the “facts” of know-what knowledge for his working hypothesis. To make this clear, he created sub-hypotheses that explicitly took these into account; see how Swift (2010, p. 38) constructed the sub-hypotheses below. Each sub-hypothesis was defended using material from the literature (Swift 2010, pp. 22–26). The sub-hypotheses can also be easily tied to evidence. For example, he could document that the training covered anti-discrimination laws.

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training

WH1a: The sexual harassment training includes information on anti-discrimination laws (Title VII).

WH1b: The sexual harassment training includes information on key definitions.

WH1c: The sexual harassment training includes information on Capital Metro’s Equal Employment Opportunity and Harassment policy.

WH1d: Capital Metro provides training on sexual harassment history.

Know-how knowledge refers to the ability to do something and involves skills (Lundvall and Johnson 1994, p. 12). It is a kind of expertise in action. The literature and his experience allowed Swift to identify skills, such as how to file a claim or how to document incidents of sexual harassment, as important “know-how” knowledge that should be included in sexual harassment training. Again, these were depicted as sub-hypotheses.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training

WH2a: Training is provided on how to file and report a claim of harassment

WH2b: Training is provided on how to document sexual harassment situations.

WH2c: Training is provided on how to investigate sexual harassment complaints.

WH2d: Training is provided on how to follow additional harassment policy procedures and protocols

Note that the working hypotheses do not specify a relationship but rather are simple declarative sentences. If “know-how” knowledge was present in the sexual harassment training, he would be able to find evidence that participants learned how to file a claim (WH2a). The working hypothesis provides the bridge between theory and data that Sutton and Staw (1995) found missing in exploratory work. The sub-hypotheses are designed to be refined enough that researchers know what to look for and can tailor their hunt for evidence. Figure 1 captures the generic sub-hypothesis design.

Figure 1. A common structure used in the development of working hypotheses
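The hierarchical structure of Figure 1 can be sketched as a small data structure. This is an illustrative reconstruction, not the authors' own tooling: a non-relational working hypothesis holds sub-hypotheses, each of which is tied to whatever field evidence is found for it (the labels mirror Swift's WH1, but the `supported` rule, that every sub-hypothesis needs at least one piece of evidence, is an assumption for the sketch).

```python
from dataclasses import dataclass, field

@dataclass
class SubHypothesis:
    label: str
    statement: str
    evidence: list = field(default_factory=list)  # evidence found in the field

    @property
    def supported(self):
        # Assumption for this sketch: any evidence at all counts as support.
        return len(self.evidence) > 0

@dataclass
class WorkingHypothesis:
    label: str
    statement: str
    subs: list = field(default_factory=list)

    @property
    def supported(self):
        # The working hypothesis is supported when every sub-hypothesis is.
        return all(s.supported for s in self.subs)

wh1 = WorkingHypothesis(
    "WH1", "Training provides adequate know-what knowledge",
    [SubHypothesis("WH1a", "Covers anti-discrimination laws (Title VII)"),
     SubHypothesis("WH1b", "Covers key definitions")])
wh1.subs[0].evidence.append("Slide deck cites Title VII")
print(wh1.supported)  # -> False: WH1b has no evidence yet
```

The design choice worth noting is that the sub-hypotheses, not the top-level statement, are what touch the data, which is exactly the bridging role the article assigns them.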

When expected evidence is linked to the sub-hypotheses, data, framework and research purpose are aligned. This can be laid out in a planning document that operationalizes the data collection, something akin to an architect’s blueprint. This is where the scholar explicitly develops the alignment between purpose, framework and method (Shields and Rangarajan 2013; Shields et al. 2019b).

Table 2 operationalizes Swift’s working hypotheses (and sub-hypotheses). The table provides clues as to what kind of evidence is needed to determine whether the hypotheses are supported. In this case, Swift used interviews with participants and trainers as well as a review of program documents. Column one repeats the sub-hypothesis, column two specifies the data collection method (here, interviews with participants/managers and review of program documents) and column three specifies the unique questions that focus the investigation; for example, the interview questions are provided. In the less precise world of qualitative data, evidence supporting a hypothesis can have varying degrees of strength. This too can be specified.
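A blueprint row of the kind just described can be sketched as follows. The rows, questions, and the coarse strength scale are hypothetical stand-ins (they echo Swift's WH2 sub-hypotheses but are not taken from his Table 2); the sketch only shows how sub-hypothesis, data-collection method, focusing questions, and a graded judgment of evidence strength line up in one planning record.

```python
# Ordinal scale for qualitative evidence strength (an assumption of this
# sketch; the article notes only that strength "can be specified").
STRENGTH = {"none": 0, "weak": 1, "moderate": 2, "strong": 3}

plan = [
    {"sub": "WH2a",
     "method": "interviews with participants; document review",
     "questions": ["Were you shown how to file a harassment claim?"],
     "strength": "strong"},
    {"sub": "WH2b",
     "method": "document review",
     "questions": ["Does the manual explain how to document incidents?"],
     "strength": "weak"},
]

# Treat "moderate" or better as support for the sub-hypothesis.
supported = [row["sub"] for row in plan if STRENGTH[row["strength"]] >= 2]
print(supported)  # -> ['WH2a']
```

Nothing in the scale is binding; its only job is to make the researcher's judgment about evidence strength explicit and repeatable across sub-hypotheses.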

In Swift’s example, neither the statistics of explanatory research nor the open-ended questions of interpretivist, inductive exploratory research are used. The deductive logic of inquiry here is somewhat intuitive and similar to that of a detective (Ulriksen and Dadalauri 2016). It is also a logic used in international law (Worster 2013). It should be noted that the working hypothesis and the corresponding data collection protocol do not stop inquiry and fieldwork outside the framework. The interviews could reveal an unexpected problem with Swift’s training program. The framework provides a loose and perhaps useful way to identify and make sense of data that do not fit expectations. Researchers using working hypotheses should be sensitive to interesting findings that fall outside their framework. These could be used in future studies, to refine theory, or, in this case, to provide suggestions to improve sexual harassment training. The sensitizing concepts mentioned by Gilgun (2015) are free to emerge and should be encouraged.

Something akin to working hypotheses is hidden in plain sight in the professional literature. Take, for example, Kerry Crawford’s (2017) book Wartime Sexual Violence, in which she explores basic changes in the way “advocates and decision makers think about and discuss conflict-related sexual violence” (p. 2). She focuses on a subsequent shift from silence to action. The shift occurred as wartime sexual violence was reframed as a “weapon of war”. The new frame captured the attention of powerful members of the security community who demanded, initiated, and paid for institutional and policy change. Crawford (2017) examines the legacy of this key reframing. She develops a six-stage model of potential international responses to incidents of wartime sexual violence. This model is easily converted to working hypotheses and sub-hypotheses; Table 3 shows her model as a set of (non-relational) working hypotheses. She applied the model as a way to gather evidence across cases (e.g., the US response to sexual violence in the Democratic Republic of the Congo) to establish the official level of response to sexual violence. Each case study chapter examined evidence to establish whether the case fit the pattern formalized in the working hypotheses. The framework was very useful in her comparative context, allowing for consistent comparative analysis across cases. Her analysis of the three cases went well beyond the material covered in the framework: she freely incorporated useful inductively informed data in her analysis and discussion. The framework, however, allowed for alignment within and across cases.

9 Conclusion

In this article we argued that exploratory research is also well suited to deductive approaches. By examining the landscape of deductive, exploratory research, we proposed the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It has the potential to guide, and bring coherence to, the steps of the research process. After presenting the nature of the exploratory research purpose and how it differs from the two other types of research purposes identified in the literature (explanation and description), we focused on answering four questions in order to show the link between micro-conceptual frameworks and research purposes in a deductive setting. The answers to the four questions are summarized in Table 4.

First, we argued that the working hypothesis and exploration are situated within the pragmatic philosophical perspective. Pragmatism allows for pluralism in theory and data collection techniques, which is compatible with the flexible exploratory purpose. Second, after introducing and discussing the four core elements of pragmatism (practical, pluralism, participatory, and provisional), we explained how the working hypothesis informs the methodologies and evidence collection of deductive exploratory research through a presentation of the benefits of triangulation provided by mixed methods research. Third, as is clear from the article title, we introduced the working hypothesis as the micro-conceptual framework for deductive exploratory research. We argued that the hypotheses of exploratory research, which we call working hypotheses, are distinguished from those of explanatory research in that they do not require a relational component and are not bound by relational expectations. A working hypothesis is extremely flexible and idiosyncratic; depending on the research question, it can be viewed as a statement, or group of statements, of expectations tested in action. Using examples, we concluded by explaining how working hypotheses inform data collection and analysis in deductive exploratory research.

Crawford’s (2017) example showed how the structure of working hypotheses provides a framework for comparative case studies. Her criteria for analysis were specified ahead of time and used to frame each case; thus, her comparisons were systematized across cases. Further, the framework ensured a connection between the data analysis and the literature review. Yet the flexible, working nature of the hypotheses allowed unexpected findings to be discovered.

The evidence required to test working hypotheses is directed by the research purpose and potentially includes both quantitative and qualitative sources. Thus, all types of evidence, including quantitative methods, should be part of the toolbox of deductive, exploratory research. We showed how the working hypothesis, as a flexible exploratory framework, resolves many of the seeming dualisms pervasive in the research methods literature.

To conclude, this article has provided an in-depth examination of working hypotheses, taking into account philosophical questions and the larger formal research methods literature. By discussing working hypotheses as applied, theoretical tools, we demonstrated that they fill a unique niche in the methods literature, since they provide a way to enhance alignment in deductive, exploratory studies.

In practice, quantitative scholars often run multivariate analyses on databases to find out whether there are correlations. Hypotheses are tested because the statistical software does the math, not because the scholar has an a priori, relational expectation (hypothesis) well grounded in the literature and supported by cogent arguments; hunches are just fine. This is clearly an inductive approach to research and part of the larger process of inquiry.

In 1958, the philosophers of science Oppenheim and Putnam used the notion of a working hypothesis in their title “Unity of Science as a Working Hypothesis.” They, too, used it as a big-picture concept: that “unity of science in this sense, can be fully realized constitutes an over-arching meta-scientific hypothesis, which enables one to see a unity in scientific activities that might otherwise appear disconnected or unrelated” (p. 4).

It should be noted that the positivism described in the research methods literature does not resemble philosophical positivism as developed by philosophers like Comte (Whetsell and Shields 2015). In the research methods literature, “positivism means different things to different people….The term has long been emptied of any precise denotation …and is sometimes affixed to positions actually opposed to those espoused by the philosophers from whom the name derives” (Schrag 1992, p. 5). For the purposes of this paper, we are capturing a few essential ways positivism is presented in the research methods literature. This helps us to position the “working hypothesis” and “exploratory” research within the larger context of contemporary research methods. We are not arguing that the positivism presented here is anything more. The incompatibility theory discussed later is an outgrowth of this research methods literature…

It should be noted that quantitative researchers often use inductive reasoning. They do this with existing data sets when they run correlations or regression analyses as a way to find relationships. They ask: what do the data tell us?

Qualitative researchers are also associated with phenomenology, hermeneutics, naturalistic inquiry and constructivism.

See Feilzer (2010), Howe (1988), Johnson and Onwuegbuzie (2004), Morgan (2007), Onwuegbuzie and Leech (2005), Biddle and Schafft (2015).

The term conceptual framework is applicable in a broad context (see Ravitch and Riggan 2012 ). The micro-conceptual framework narrows to the specific study and informs data collection (Shields and Rangarajan 2013 ; Shields et al. 2019a ) .

Adler, E., Clark, R.: How It’s Done: An Invitation to Social Research, 3rd edn. Thompson-Wadsworth, Belmont (2008)

Arnold, R.W.: Multiple working hypothesis in soil genesis. Soil Sci. Soc. Am. J. 29 (6), 717–724 (1965)

Atieno, O.: An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 13 , 13–18 (2009)

Babbie, E.: The Practice of Social Research, 11th edn. Thompson-Wadsworth, Belmont (2007)

Biddle, C., Schafft, K.A.: Axiology and anomaly in the practice of mixed methods work: pragmatism, valuation, and the transformative paradigm. J. Mixed Methods Res. 9 (4), 320–334 (2015)

Brendel, D.H.: Healing Psychiatry: Bridging the Science/Humanism Divide. MIT Press, Cambridge (2009)

Bryman, A.: Qualitative research on leadership: a critical but appreciative review. Leadersh. Q. 15 (6), 729–769 (2004)

Casula, M.: Under which conditions is cohesion policy effective: proposing an Hirschmanian approach to EU structural funds. Regional & Federal Studies (2020a). https://doi.org/10.1080/13597566.2020.1713110

Casula, M.: Economic Growth and Cohesion Policy Implementation in Italy and Spain. Palgrave Macmillan, Cham (2020b)

Ciceri, F., et al.: Microvascular COVID-19 lung vessels obstructive thromboinflammatory syndrome (MicroCLOTS): an atypical acute respiratory distress syndrome working hypothesis. Crit. Care Resusc. 15 , 1–3 (2020)

Crawford, K.F.: Wartime sexual violence: From silence to condemnation of a weapon of war. Georgetown University Press (2017)

Cronbach, L.: Beyond the two disciplines of scientific psychology. Am. Psychol. 30, 116–127 (1975)

Dewey, J.: The reflex arc concept in psychology. Psychol. Rev. 3 (4), 357 (1896)

Dewey, J.: Logic: The Theory of Inquiry. Henry Holt & Co, New York (1938)

Feilzer, Y.: Doing mixed methods research pragmatically: implications for the rediscovery of pragmatism as a research paradigm. J. Mixed Methods Res. 4 (1), 6–16 (2010)

Gilgun, J.F.: Qualitative research and family psychology. J. Fam. Psychol. 19 (1), 40–50 (2005)

Gilgun, J.F.: Methods for enhancing theory and knowledge about problems, policies, and practice. In: Shaw, I., Briar, K., Orme, J., Ruckdeschel, R. (eds.) The Sage Handbook of Social Work Research, pp. 281–297. Sage, Thousand Oaks (2009)

Gilgun, J.F.: Deductive Qualitative Analysis as Middle Ground: Theory-Guided Qualitative Research. Amazon Digital Services LLC, Seattle (2015)

Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago (1967)

Gobo, G.: Re-Conceptualizing Generalization: Old Issues in a New Frame. In: Alasuutari, P., Bickman, L., Brannen, J. (eds.) The Sage Handbook of Social Research Methods, pp. 193–213. Sage, Los Angeles (2008)

Grinnell, R.M.: Social Work Research and Evaluation: Quantitative and Qualitative Approaches. F.E. Peacock Publishers, New York (2001)

Guba, E.G.: What have we learned about naturalistic evaluation? Eval. Pract. 8 (1), 23–43 (1987)

Guba, E., Lincoln, Y.: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass Publishers, San Francisco (1981)

Habib, M.: The neurological basis of developmental dyslexia: an overview and working hypothesis. Brain 123 (12), 2373–2399 (2000)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Hildebrand, D.: Dewey: A Beginners Guide. Oneworld Oxford, Oxford (2008)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Edu. Res. 17 (8), 10–16 (1988)

Hothersall, S.J.: Epistemology and social work: enhancing the integration of theory, practice and research through philosophical pragmatism. Eur. J. Social Work 22 (5), 860–870 (2019)

Hyde, K.F.: Recognising deductive processes in qualitative research. Qual. Market Res. Int. J. 3 (2), 82–90 (2000)

Johnson, R.B., Onwuegbuzie, A.J.: Mixed methods research: a research paradigm whose time has come. Educ. Res. 33 (7), 14–26 (2004)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kaplan, A.: The Conduct of Inquiry. Chandler, Scranton (1964)

Kolb, S.M.: Grounded theory and the constant comparative method: valid research strategies for educators. J. Emerg. Trends Educ. Res. Policy Stud. 3 (1), 83–86 (2012)

Levers, M.J.D.: Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open 3 (4), 2158244013517243 (2013)

Lundvall, B.A.: Knowledge management in the learning economy. In: Danish Research Unit for Industrial Dynamics Working Paper Working Paper, vol. 6, pp. 3–5 (2006)

Lundvall, B.-Å., Johnson, B.: Knowledge management in the learning economy. J. Ind. Stud. 1 (2), 23–42 (1994)

Lundvall, B.-Å., Jenson, M.B., Johnson, B., Lorenz, E.: Forms of Knowledge and Modes of Innovation—From User-Producer Interaction to the National System of Innovation. In: Dosi, G., et al. (eds.) Technical Change and Economic Theory. Pinter Publishers, London (1988)

Maanen, J., Manning, P., Miller, M.: Series editors’ introduction. In: Stebbins, R. (ed.) Exploratory Research in the Social Sciences, pp. v–vi. Sage, Thousand Oaks (2001)

Mackenzie, N., Knipe, S.: Research dilemmas: paradigms, methods and methodology. Issues Educ. Res. 16 (2), 193–205 (2006)

Marlow, C.R.: Research Methods for Generalist Social Work. Thomson Brooks/Cole, New York (2005)

Mead, G.H.: The working hypothesis in social reform. Am. J. Sociol. 5 (3), 367–371 (1899)

Milnes, A.G.: Structure of the Pennine Zone (Central Alps): a new working hypothesis. Geol. Soc. Am. Bull. 85 (11), 1727–1732 (1974)

Morgan, D.L.: Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 (1), 48–76 (2007)

Morse, J.: The significance of saturation. Qual. Health Res. 5 (2), 147–149 (1995)

O’Connor, M.K., Netting, F.E., Thomas, M.L.: Grounded theory: managing the challenge for those facing institutional review board oversight. Qual. Inq. 14 (1), 28–45 (2008)

Onwuegbuzie, A.J., Leech, N.L.: On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. Int. J. Soc. Res. Methodol. 8 (5), 375–387 (2005)

Oppenheim, P., Putnam, H.: Unity of science as a working hypothesis. In: Minnesota Studies in the Philosophy of Science, vol. II, pp. 3–36 (1958)

Patten, M.L., Newhart, M.: Understanding Research Methods: An Overview of the Essentials, 2nd edn. Routledge, New York (2000)

Pearse, N.: An illustration of deductive analysis in qualitative research. In: European Conference on Research Methodology for Business and Management Studies, pp. 264–VII. Academic Conferences International Limited (2019)

Prater, D.N., Case, J., Ingram, D.A., Yoder, M.C.: Working hypothesis to redefine endothelial progenitor cells. Leukemia 21 (6), 1141–1149 (2007)

Ravitch, S.M., Riggan, M.: Reason and Rigor: How Conceptual Frameworks Guide Research. Sage, Beverley Hills (2012)

Reiter, B.: The epistemology and methodology of exploratory social science research: Crossing Popper with Marcuse. In: Government and International Affairs Faculty Publications. Paper 99. http://scholarcommons.usf.edu/gia_facpub/99 (2013)

Ritchie, J., Lewis, J.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, London (2003)

Schrag, F.: In defense of positivist research paradigms. Educ. Res. 21 (5), 5–8 (1992)

Shields, P.M.: Pragmatism as a philosophy of science: A tool for public administration. Res. Pub. Admin. 4, 195–225 (1998)

Shields, P.M., Rangarajan, N.: A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. New Forums Press (2013)

Shields, P.M., Tajalli, H.: Intermediate theory: the missing link in successful student scholarship. J. Public Aff. Educ. 12 (3), 313–334 (2006)

Shields, P., Whetsell, T.: Public administration methodology: A pragmatic perspective. In: Raadschelders, J., Stillman, R. (eds.) Foundations of Public Administration, pp. 75–92. Melvin and Leigh, New York (2017)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 1). Sotsiologicheskie issledovaniya 10 , 39–47 (2019a)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 2). Sotsiologicheskie issledovaniya 11 , 40–51 (2019b)

Smith, J.K.: Quantitative versus qualitative research: an attempt to clarify the issue. Educ. Res. 12 (3), 6–13 (1983a)

Smith, J.K.: Quantitative versus interpretive: the problem of conducting social inquiry. In: House, E. (ed.) Philosophy of Evaluation, pp. 27–52. Jossey-Bass, San Francisco (1983b)

Smith, J.K., Heshusius, L.: Closing down the conversation: the end of the quantitative-qualitative debate among educational inquirers. Educ. Res. 15 (1), 4–12 (1986)

Stebbins, R.A.: Exploratory Research in the Social Sciences. Sage, Thousand Oaks (2001)


Strydom, H.: An evaluation of the purposes of research in social work. Soc. Work/Maatskaplike Werk 49 (2), 149–164 (2013)

Sutton, R.I., Staw, B.M.: What theory is not. Adm. Sci. Q. 40 (3), 371–384 (1995)

Swift, J., III: Exploring Capital Metro’s Sexual Harassment Training using Dr. Bengt-Ake Lundvall’s taxonomy of knowledge principles. Applied Research Project, Texas State University. https://digital.library.txstate.edu/handle/10877/3671 (2010)

Thomas, E., Magilvy, J.K.: Qualitative rigor or research validity in qualitative research. J. Spec. Pediatric Nurs. 16 (2), 151–155 (2011)

Twining, P., Heller, R.S., Nussbaum, M., Tsai, C.C.: Some guidance on conducting and reporting qualitative studies. Comput. Educ. 107 , A1–A9 (2017)

Ulriksen, M., Dadalauri, N.: Single case studies and theory-testing: the knots and dots of the process-tracing method. Int. J. Soc. Res. Methodol. 19 (2), 223–239 (2016)

Van Evera, S.: Guide to Methods for Students of Political Science. Cornell University Press, Ithaca (1997)

Whetsell, T.A., Shields, P.M.: The dynamics of positivism in the study of public administration: a brief intellectual history and reappraisal. Adm. Soc. 47 (4), 416–446 (2015)

Willis, J.W., Jost, M., Nilakanta, R.: Foundations of Qualitative Research: Interpretive and Critical Approaches. Sage, Beverley Hills (2007)

Worster, W.T.: The inductive and deductive methods in customary international law analysis: traditional and modern approaches. Georget. J. Int. Law 45 , 445 (2013)

Yin, R.K.: The case study as a serious research strategy. Knowledge 3 (1), 97–114 (1981)

Yin, R.K.: The case study method as a tool for doing evaluation. Curr. Sociol. 40 (1), 121–137 (1992)

Yin, R.K.: Applications of Case Study Research. Sage, Beverley Hills (2011)

Yin, R.K.: Case Study Research and Applications: Design and Methods. Sage Publications, Beverley Hills (2017)


Acknowledgements

The authors contributed equally to this work. The authors would like to thank Quality & Quantity’s editors and the anonymous reviewers for their valuable advice and comments on previous versions of this paper.

Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. There are no funders to report for this submission.

Author information

Authors and affiliations

Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125, Bologna, Italy

Mattia Casula

Texas State University, San Marcos, TX, USA

Nandhini Rangarajan & Patricia Shields


Corresponding author

Correspondence to Mattia Casula .

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the authors.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casula, M., Rangarajan, N. & Shields, P. The potential of working hypotheses for deductive exploratory research. Qual Quant 55, 1703–1725 (2021). https://doi.org/10.1007/s11135-020-01072-9


Accepted: 05 November 2020

Published: 08 December 2020

Issue Date: October 2021



  • Exploratory research
  • Working hypothesis
  • Deductive qualitative research

Frequently asked questions

How do you use deductive reasoning in research?

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research.

In research, you might have come across something called the hypothetico-deductive method. It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group. As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased.
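The rate comparison behind differential attrition can be sketched in a few lines of Python; all counts below are made up for illustration.

```python
# Hypothetical trial counts, for illustration only: compare dropout
# rates between an intervention group and a control group.
def attrition_rate(enrolled, completed):
    """Fraction of participants who left before the study ended."""
    return (enrolled - completed) / enrolled

intervention = attrition_rate(enrolled=100, completed=70)  # 0.30
control = attrition_rate(enrolled=100, completed=92)       # 0.08

# A large gap between the two rates signals differential attrition,
# which can bias the comparison between groups.
print(f"intervention {intervention:.0%} vs control {control:.0%}")
print("differential attrition?", abs(intervention - control) > 0.10)
```

The 10% threshold in the last line is an arbitrary cutoff for the sketch; in practice you would judge the gap against the study design and outcome of interest.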

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research. It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations, you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity. In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity. You need to have face validity, content validity, and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity. Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalizations—often the goal of quantitative research. As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research.

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias.

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
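As a rough illustration, snowball recruitment can be simulated as a traversal of a contact network; the names and contact ties below are invented.

```python
# Toy sketch of snowball sampling. Each recruited participant refers
# their contacts, so the sample grows through referrals rather than
# through random draws from the whole population.
contacts = {
    "ana": ["ben", "cara"],
    "ben": ["ana", "dev"],
    "cara": ["eva"],
    "dev": [],
    "eva": ["ben"],
}

def snowball_sample(seed, target_size):
    sample, queue = [], [seed]
    while queue and len(sample) < target_size:
        person = queue.pop(0)
        if person not in sample:
            sample.append(person)
            queue.extend(contacts.get(person, []))  # referrals join the queue
    return sample

print(snowball_sample("ana", 4))  # ['ana', 'ben', 'cara', 'dev']
```

Only people reachable from the seed can ever enter the sample, so inclusion probabilities are unequal across the population, which is exactly why snowball sampling counts as a non-probability method.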

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data.
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling). In quota sampling, you select a predetermined number or proportion of units, in a non-random manner (non-probability sampling).
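A minimal sketch of that contrast, using a hypothetical population split into two age groups: stratified sampling draws randomly within each subgroup, while quota sampling fills fixed counts in whatever order units happen to arrive.

```python
import random

# Hypothetical population split into two age-group strata (all unit
# labels are invented).
population = {
    "under_30": [f"u{i}" for i in range(50)],
    "over_30": [f"o{i}" for i in range(50)],
}

def stratified_sample(pop, per_group, seed=0):
    """Probability sampling: a RANDOM draw inside every subgroup."""
    rng = random.Random(seed)
    return {group: rng.sample(units, per_group) for group, units in pop.items()}

def quota_sample(arrivals, quotas):
    """Non-probability sampling: fill each quota with whichever units
    happen to arrive first (convenience order, not random)."""
    filled = {group: [] for group in quotas}
    for group, unit in arrivals:
        if len(filled[group]) < quotas[group]:
            filled[group].append(unit)
    return filled

strata = stratified_sample(population, per_group=5)
quota = quota_sample(
    arrivals=[("under_30", "u1"), ("over_30", "o9"), ("under_30", "u7")],
    quotas={"under_30": 2, "over_30": 1},
)
print(quota)  # {'under_30': ['u1', 'u7'], 'over_30': ['o9']}
```

Both functions return one sub-sample per subgroup; the difference is entirely in how units get in — a seeded random draw versus first-come order.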

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population. It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous, so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous, as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment.

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment, an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, and no control or treatment groups.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods, the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity.
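As an illustration of the correlation checks described above, here is a small sketch; all scores are fabricated, and the three measures (a new scale, an established scale for the same construct, and an unrelated variable) are hypothetical.

```python
# A new scale should correlate strongly with an established measure of
# the same construct (convergent validity) and only weakly with a
# measure of a distinct construct (discriminant validity).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

new_scale   = [12, 15, 11, 18, 20, 9]   # hypothetical new anxiety scale
established = [14, 16, 10, 19, 22, 8]   # established scale, same construct
unrelated   = [41, 39, 41, 41, 39, 39]  # unrelated measure (e.g., shoe size)

print(round(pearson(new_scale, established), 2))  # 0.98: convergent evidence
print(round(pearson(new_scale, unrelated), 2))    # -0.13: discriminant evidence
```

A strong positive correlation with the related test and a near-zero correlation with the unrelated one is the pattern that supports construct validity.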

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity, alongside face validity, content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity, and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control, ethical considerations, and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).
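The left-hand-side/right-hand-side terminology can be made concrete with a tiny least-squares fit; the data below are invented.

```python
# Invented data: hours studied (x, independent) and exam score
# (y, dependent). In the regression y = a + b*x, y sits on the
# left-hand side and x on the right-hand side.
x = [1, 2, 3, 4, 5]
y = [52, 55, 61, 64, 68]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
# Ordinary least-squares slope and intercept.
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sum(
    (xi - mean_x) ** 2 for xi in x
)
a = mean_y - b * mean_x

print(f"score = {a:.1f} + {b:.1f} * hours")  # score = 47.7 + 4.1 * hours
```

The slope b estimates how much the outcome (left-hand side) changes per unit of the predictor (right-hand side).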

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions, which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning, where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research, you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation:

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor, who either rejects it and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits, and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process, serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design, inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data, but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
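The screening step above can be sketched in a few lines of Python. This is a minimal illustration, not a general-purpose tool: the records, field names, and the 0–120 validity range are all hypothetical.

```python
# Hypothetical survey records: screen for missing values, duplicates,
# and out-of-range outliers, then keep only the clean rows.
raw = [
    {"id": 1, "age": 29},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": 34},
    {"id": 3, "age": 34},     # duplicate record
    {"id": 4, "age": 290},    # implausible outlier (likely a data-entry error)
]

seen_ids = set()
clean = []
for row in raw:
    if row["age"] is None:            # remove missing values
        continue
    if not (0 < row["age"] < 120):    # remove values outside the valid range
        continue
    if row["id"] in seen_ids:         # remove duplicate IDs
        continue
    seen_ids.add(row["id"])
    clean.append(row)

print(clean)  # only rows 1 and 3 survive
```

In practice, you would log or inspect what was removed rather than silently dropping it, so that systematic problems in data entry can be diagnosed.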

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors, but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have serious practical consequences, leading to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

In multistage sampling, you can use probability or non-probability sampling methods.

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling, systematic sampling, or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples.

These are four of the most common mixed methods designs:

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research, but it’s also commonly applied in quantitative research. Mixed methods research always uses triangulation.

In multistage sampling, or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis.

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r:

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables
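If your data meet these assumptions, Pearson’s r can be computed directly from its textbook formula. The sketch below (with made-up data) also illustrates the point above that r measures how closely points fit a line, not the line’s steepness.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    dx = [xi - mean_x for xi in x]
    dy = [yi - mean_y for yi in y]
    covariance = sum(a * b for a, b in zip(dx, dy))
    return covariance / sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))

# A perfectly linear relationship gives r = 1.0 whatever the slope.
print(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 (slope 10)
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))      # 1.0 (slope 2)
```

For real analyses you would normally rely on a statistics library rather than hand-rolling the formula.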

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships.

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation.

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables: when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy.
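The third-variable problem can be made concrete with a small simulation (all numbers and variable names are invented): a lurking variable drives two outcomes that never influence each other, yet they end up strongly correlated.

```python
import random

random.seed(0)

# Hypothetical lurking variable z (say, daily temperature) drives both
# x (ice-cream sales) and y (swimming accidents); x and y never affect each other.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def correlation(a, b):
    """Pearson correlation, computed from its definition."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

# A strong positive correlation appears despite the absence of any causal link.
print(round(correlation(x, y), 2))
```

Here the "correct" analysis would control for z, after which the x–y association largely disappears.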

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design, you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design, you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity.

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research.

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables.

You can avoid systematic error through careful design of your sampling, data collection, and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample, the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions (Type I and II errors) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
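A quick simulation (with invented numbers) shows why the two behave so differently: zero-mean random error averages away over many measurements, while a constant systematic offset shifts every reading in the same direction and survives averaging.

```python
import random

random.seed(0)
TRUE_WEIGHT = 70.0  # hypothetical true value, in kg

# Random error: zero-mean noise added to each reading.
random_readings = [TRUE_WEIGHT + random.gauss(0, 0.5) for _ in range(10_000)]

# Systematic error: a miscalibrated scale adds a constant 1.5 kg offset.
biased_readings = [TRUE_WEIGHT + 1.5 + random.gauss(0, 0.5) for _ in range(10_000)]

# Averaging cancels the random error but not the systematic bias.
print(round(sum(random_readings) / len(random_readings), 1))   # ~70.0
print(round(sum(biased_readings) / len(biased_readings), 1))   # ~71.5
```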

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “explanatory variable” is sometimes preferred over “independent variable” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment, all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables.

There are four main types of extraneous variables:

  • Demand characteristics: environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects: unintentional actions by researchers that influence study outcomes.
  • Situational variables: environmental variables that alter participants’ behaviors.
  • Participant variables: any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
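Crossing each level of one variable with each level of the other is just a Cartesian product, so the full set of conditions can be generated mechanically. The two factors and their levels below are hypothetical.

```python
from itertools import product

# Hypothetical 2x3 factorial design: every level of one independent
# variable is combined with every level of the other.
treatment = ["placebo", "caffeine"]
sleep = ["4 hours", "6 hours", "8 hours"]

conditions = list(product(treatment, sleep))
print(len(conditions))  # 6 conditions
for condition in conditions:
    print(condition)
```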

Within-subjects designs have many potential threats to internal validity, but they are also very statistically powerful.

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity, it also requires more participants for high statistical power than a within-subjects design.

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment, assign a unique number to every member of your study’s sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
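The lottery method can be sketched in a few lines: number the participants, shuffle the numbers, and split them into groups. The sample size and the two group names here are hypothetical.

```python
import random

random.seed(42)  # seeded only so this example is reproducible

participants = list(range(1, 21))  # 20 participants, each with a unique number

random.shuffle(participants)       # randomize the order
control = participants[:10]        # first half -> control group
experimental = participants[10:]   # second half -> experimental group

print(sorted(control))
print(sorted(experimental))
```

Because the split happens after shuffling, every participant has an equal chance of landing in either group.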

Random selection, or random sampling, is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs. That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables, they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable:

  • It’s caused by the independent variable.
  • It influences the dependent variable.
  • When it’s taken into account, the statistical relationship between the independent and dependent variables weakens compared to when it isn’t considered, because the mediator accounts for part of that relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling:

  • Define and list your population, ensuring that it is not ordered in a cyclical or periodic way.
  • Decide on your sample size and calculate your interval, k, by dividing your population size by your target sample size.
  • Choose every kth member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling.
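The three steps can be sketched as follows. The population and sample size are hypothetical; choosing a random starting point within the first interval is a common refinement that avoids always beginning at the first member.

```python
import random

def systematic_sample(population, sample_size, start=None):
    """Select every k-th member, beginning at a point within the first interval."""
    k = len(population) // sample_size         # step 2: the sampling interval
    if start is None:
        start = random.randrange(k)            # random starting position in [0, k)
    return population[start::k][:sample_size]  # step 3: every k-th member

population = [f"person_{i}" for i in range(100)]
print(systematic_sample(population, sample_size=10, start=3))
# ['person_3', 'person_13', ..., 'person_93']
```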

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the number of subgroups for each characteristic to get the total number of subgroups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling, researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
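As a sketch, here is proportional stratified sampling with hypothetical strata: divide the units into strata, then draw a simple random sample of the same fraction from each.

```python
import random

random.seed(1)

# Hypothetical population: each unit is tagged with its stratum.
population = (
    [("urban", f"u{i}") for i in range(60)]
    + [("rural", f"r{i}") for i in range(30)]
    + [("suburban", f"s{i}") for i in range(10)]
)

def stratified_sample(units, fraction):
    """Proportional allocation: sample the same fraction from every stratum."""
    strata = {}
    for stratum, unit in units:
        strata.setdefault(stratum, []).append(unit)
    return {s: random.sample(members, round(len(members) * fraction))
            for s, members in strata.items()}

sample = stratified_sample(population, fraction=0.1)
print({s: len(members) for s, members in sample.items()})
# {'urban': 6, 'rural': 3, 'suburban': 1}
```

Other allocation schemes (e.g., oversampling small strata) only change how many units are drawn per stratum.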

Cluster sampling is more time- and cost-efficient than other probability sampling methods, particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling, because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling: single-stage, double-stage, and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling, you collect data from every unit within the selected clusters.
  • In double-stage sampling, you select a random sample of units from within the clusters.
  • In multi-stage sampling, you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
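The single- and double-stage variants differ only in what happens after the clusters are drawn. A sketch with hypothetical schools as clusters:

```python
import random

random.seed(7)

# Hypothetical population grouped into 20 clusters (schools) of 30 students each.
clusters = {f"school_{i}": [f"student_{i}_{j}" for j in range(30)] for i in range(20)}

# Stage 1 (all variants): randomly select a subset of clusters.
chosen = random.sample(sorted(clusters), 4)

# Single-stage: collect data from every unit in the selected clusters.
single_stage = [u for name in chosen for u in clusters[name]]

# Double-stage: randomly sample units within each selected cluster.
double_stage = [u for name in chosen for u in random.sample(clusters[name], 10)]

print(len(single_stage))  # 120
print(len(double_stage))  # 40
```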

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling. In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
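Given a complete sampling frame, simple random sampling is one call to a random-number library. The frame and sample size below are hypothetical.

```python
import random

# Hypothetical sampling frame: a list of every member of the population.
frame = [f"household_{i}" for i in range(1000)]

# Draw 50 members; each has an equal chance of selection (without replacement).
sample = random.sample(frame, 50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 distinct members -- no duplicates
```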

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference from a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias, demand characteristics) and ensure a study’s internal validity.

If participants know whether they are in a control or treatment group, they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study, only the participants are blinded.
  • In a double-blind study, both participants and experimenters are blinded.
  • In a triple-blind study, the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment.

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity, it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of response options, usually with 5 or 7 points, to capture their degree of agreement.
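Combining item responses into a single scale score is simple arithmetic. In this sketch the items and scores are invented; negatively worded items are reverse-scored before summing, which is a common convention.

```python
# Hypothetical responses on a 5-point scale; items ending in "_r" are
# negatively worded and must be reverse-scored (1 <-> 5, 2 <-> 4, etc.).
responses = {"q1": 4, "q2": 5, "q3_r": 2, "q4": 4}
POINTS = 5  # number of points on the scale

def scale_score(responses):
    total = 0
    for item, value in responses.items():
        total += (POINTS + 1 - value) if item.endswith("_r") else value
    return total

print(scale_score(responses))  # 4 + 5 + (6 - 2) + 4 = 17
```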

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization.

There are various approaches to qualitative data analysis, but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.
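Coding is usually done with dedicated software or by hand, but a toy keyword-based code book (everything here is invented) illustrates the idea behind steps 3–5: a coding system is defined, codes are assigned to the data, and recurring codes point to themes.

```python
# Hypothetical code book: each code is triggered by a few keywords.
codebook = {
    "cost": ["price", "expensive", "afford"],
    "usability": ["easy", "confusing", "intuitive"],
}

snippets = [
    "The app was easy to set up but too expensive.",
    "I could not afford the subscription.",
]

# Assign every matching code to each snippet.
coded = []
for text in snippets:
    codes = sorted(c for c, kws in codebook.items()
                   if any(k in text.lower() for k in kws))
    coded.append((text, codes))

for text, codes in coded:
    print(codes, "-", text)
# ['cost', 'usability'] - The app was easy to set up but too expensive.
# ['cost'] - I could not afford the subscription.
```

Real qualitative coding is interpretive rather than mechanical; this only shows the bookkeeping structure.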

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
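One way to make “how likely could this have arisen by chance” concrete is a permutation test: repeatedly relabel the data at random and see how often the chance relabelling produces a pattern as strong as the observed one. The group scores below are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical outcome scores for two groups of six participants each.
treatment = [12.1, 13.4, 11.8, 14.0, 12.9, 13.7]
control = [10.2, 11.1, 10.8, 11.5, 10.9, 11.3]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)

# How often does randomly relabelling the scores produce a difference
# at least as large as the one we observed?
pooled = treatment + control
extreme = 0
n_iter = 10_000
for _ in range(n_iter):
    random.shuffle(pooled)
    if mean(pooled[:6]) - mean(pooled[6:]) >= observed:
        extreme += 1

p_value = extreme / n_iter
print(p_value)  # very small: the pattern is unlikely to be pure chance
```

The resulting proportion plays the role of a p-value under the null hypothesis that group labels don’t matter.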

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
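Randomization is straightforward to express in code. The sketch below assumes a simple 50/50 split of hypothetical subjects; real studies may use block or stratified randomization instead.

```python
import random

def randomize_assignment(subjects, seed=0):
    """Randomly split subjects into treatment and control groups.
    With enough subjects, confounders balance out across groups on average."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"subject_{i}" for i in range(20)]
treatment, control = randomize_assignment(subjects)
```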

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive sampling, snowball sampling, and quota sampling.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling.
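Three of these probability sampling methods can be sketched with Python's standard library. The population of 100 numbered members and the two age strata are assumptions made for the example.

```python
import random

rng = random.Random(1)
population = list(range(100))  # a hypothetical population of 100 members
sample_size = 10

# Simple random sampling: every member has an equal chance of selection
simple = rng.sample(population, sample_size)

# Systematic sampling: every k-th member, starting from a random offset
k = len(population) // sample_size
start = rng.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample each subgroup in proportion to its size
strata = {"under_30": list(range(40)), "30_and_over": list(range(40, 100))}
stratified = []
for members in strata.values():
    n = round(sample_size * len(members) / len(population))
    stratified.extend(rng.sample(members, n))
```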

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias, nonresponse bias, undercoverage bias, survivorship bias, pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
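The relationship between parameter, statistic, and sampling error can be shown with a tiny simulation. The population of simulated height measurements is invented for the example.

```python
import random
import statistics

rng = random.Random(7)

# A hypothetical population of 10,000 height measurements (cm)
population = [rng.gauss(170, 10) for _ in range(10_000)]
parameter = statistics.mean(population)   # a measure of the population

# A sample of 50 members drawn from that population
sample = rng.sample(population, 50)
statistic = statistics.mean(sample)       # the same measure, on the sample

sampling_error = statistic - parameter    # typically small, rarely exactly zero
```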

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study vs cross-sectional study:

  • Longitudinal study: repeated observations; observes the same sample multiple times; follows changes in participants over time.
  • Cross-sectional study: observations at a single point in time; observes a “cross-section” of different groups in the population; provides a snapshot of society at a given point.

There are eight threats to internal validity: history, maturation, instrumentation, testing, selection bias, regression to the mean, social interaction, and attrition.

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The independent variable is the amount of nutrients added to the crop field.
  • The dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables.

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Inductive vs Deductive Reasoning | Difference & Examples

Published on 4 May 2022 by Raimo Streefkerk. Revised on 10 October 2022.

The main difference between inductive and deductive reasoning is that inductive reasoning aims at developing a theory while deductive reasoning aims at testing an existing theory .

Inductive reasoning moves from specific observations to broad generalisations , and deductive reasoning the other way around.

Both approaches are used in various types of research , and it’s not uncommon to combine them in one large study.


Table of contents

  • Inductive research approach
  • Deductive research approach
  • Combining inductive and deductive research
  • Frequently asked questions about inductive vs deductive reasoning

When there is little to no existing literature on a topic, it is common to perform inductive research, because there is no theory to test. The inductive approach consists of three stages:

1. Observation
  • A low-cost airline flight is delayed
  • Dogs A and B have fleas
  • Elephants depend on water to exist
2. Observe a pattern
  • Another 20 flights from low-cost airlines are delayed
  • All observed dogs have fleas
  • All observed animals depend on water to exist
3. Develop a theory
  • Low-cost airlines always have delays
  • All dogs have fleas
  • All biological life depends on water to exist

Limitations of an inductive approach

A conclusion drawn on the basis of an inductive method can never be proven, but it can be invalidated.

Example You observe 1,000 flights from low-cost airlines. All of them experience a delay, which is in line with your theory. However, you can never prove that flight 1,001 will also be delayed. Still, the larger your dataset, the more reliable the conclusion.


When conducting deductive research , you always start with a theory (the result of inductive research). Reasoning deductively means testing these theories. If there is no theory yet, you cannot conduct deductive research.

The deductive research approach consists of four stages:

1. Start with an existing theory and formulate a hypothesis
  • If passengers fly with a low-cost airline, then they will always experience delays
  • All pet dogs in my apartment building have fleas
  • All land mammals depend on water to exist
2. Collect data to test the hypothesis
  • Collect flight data of low-cost airlines
  • Test all dogs in the building for fleas
  • Study all land mammal species to see if they depend on water
3. Analyse the results
  • 5 out of 100 flights of low-cost airlines are not delayed
  • 10 out of 20 dogs didn’t have fleas
  • All land mammal species depend on water
4. Reject or support the hypothesis
  • 5 out of 100 flights of low-cost airlines are not delayed = reject hypothesis
  • 10 out of 20 dogs didn’t have fleas = reject hypothesis
  • All land mammal species depend on water = support hypothesis
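The flight-delay example shows why a universal hypothesis is rejected by even a few counterexamples. A minimal sketch, with the 95/5 flight data invented to match the example:

```python
# Hypothetical flight data matching the example: 95 delayed, 5 on time
flights = [{"delayed": True}] * 95 + [{"delayed": False}] * 5

# The hypothesis claims passengers will ALWAYS experience delays,
# so a single counterexample is enough to reject it
hypothesis_supported = all(flight["delayed"] for flight in flights)
# hypothesis_supported is False → reject the hypothesis
```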

Limitations of a deductive approach

The conclusions of deductive reasoning can only be true if all the premises set in the inductive study are true and the terms are clear.

  • All dogs have fleas (premise)
  • Benno is a dog (premise)
  • Benno has fleas (conclusion)
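The syllogism above can be sketched in code. The function and names are illustrative only; the first premise is simply assumed true, which is exactly the limitation the text describes.

```python
def deduce_has_fleas(name, species):
    """Apply the premises: if all dogs have fleas (premise 1) and the
    animal is a dog (premise 2), the conclusion follows necessarily."""
    all_dogs_have_fleas = True  # premise 1, assumed true
    if species == "dog" and all_dogs_have_fleas:
        return True   # conclusion: this dog has fleas
    return None       # the premises say nothing about non-dogs

conclusion = deduce_has_fleas("Benno", "dog")  # → True
```

If premise 1 is false, the reasoning is still valid but the conclusion is unsound, which is why deductive conclusions depend on true premises.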

Many scientists conducting a larger research project begin with an inductive study (developing a theory). The inductive study is followed up with deductive research to confirm or invalidate the conclusion.

In the examples above, the conclusion (theory) of the inductive study is also used as a starting point for the deductive study.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.



Chapter 1: Introduction to Research Methods

1.7 Deductive Approaches to Research

Researchers taking a deductive approach take the steps described for inductive research and reverse their order. They start with a social theory that they find compelling and then test its implications with data; i.e., they move from a more general level to a more specific one. A deductive approach to research is the one that people typically associate with scientific investigation. The researcher studies what others have done, reads existing theories of whatever phenomenon he or she is studying, and then tests hypotheses that emerge from those theories (see Figure 1.5).


Figure 1.5: Steps involved with a deductive approach to research.  This image is from Principles of Sociological Inquiry , which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor, and is licensed under a CC BY-NC-SA 3.0 License .

Text Attributions

This chapter has been adapted from Chapter 2.3 in Principles of Sociological Inquiry , which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor, and is licensed under a CC BY-NC-SA 3.0 License .

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Inductive Vs Deductive Research

When conducting research, two main approaches are commonly used: inductive and deductive. Understanding these methods is essential for any researcher as they provide different pathways to developing and testing theories. Let’s explore what these methods entail, their unique characteristics, and how they differ from each other.

Inductive Research

Inductive research is a bottom-up approach. It begins with specific observations or real examples of events, trends, or phenomena. From these specific instances, researchers look for patterns and regularities. Over time, these observations can lead to broader generalizations and theories.

For example, if a researcher observes that many students who eat breakfast perform better in school, they might develop a theory that breakfast improves academic performance. Inductive research is often exploratory and open-ended, allowing for new theories to emerge from the data.

Deductive Research

Deductive research, on the other hand, is a top-down approach. It starts with a general theory or hypothesis and then tests this theory by collecting and examining specific data. Researchers begin with an existing theory or assumption and design experiments or studies to test whether this theory holds true in particular instances.

For example, if a researcher starts with the theory that “exercise improves mental health,” they would collect data to test this hypothesis. They might conduct a study where participants are divided into groups that either do or do not exercise and then measure their mental health outcomes. Deductive research is often more focused and aims to confirm or disprove existing theories.

Difference between Inductive and Deductive Research

Here are some key differences between inductive and deductive research:

Starting Point:

  • Inductive: Begins with specific observations or data.
  • Deductive: Begins with a general theory or hypothesis.

Process:

  • Inductive: Looks for patterns and develops a theory.
  • Deductive: Tests a theory by collecting and analyzing specific data.

Outcome:

  • Inductive: Generates new theories or ideas.
  • Deductive: Confirms or refutes existing theories.

Approach:

  • Inductive: More open-ended and exploratory.
  • Deductive: More focused and aimed at testing specific hypotheses.

Both inductive and deductive research methods are valuable in the field of research. Inductive research is useful for developing new theories, while deductive research is essential for testing and validating existing theories. By understanding and applying these methods appropriately, researchers can effectively contribute to their fields and build a solid foundation of knowledge.



Deductive and Inductive Approach of Research

by Nirmalya Das

A deductive approach is concerned with developing a hypothesis (or hypotheses) based on existing theory, and then designing a research strategy to test the hypothesis. Deductive means reasoning from the general to the particular. If a causal relationship or link seems to be


Inductive vs Deductive Research: Difference of Approaches

Inductive vs deductive research: Understand the differences between these two approaches to thinking to guide your research. Learn more.

The terms “inductive” and “deductive” are often used in logic, reasoning, and science. Scientists use both inductive and deductive research methods as part of the scientific method.

Famous fictional detectives like Sherlock Holmes are often associated with deduction, even though that’s not always what Holmes does (more on that later). Some writing classes include both inductive and deductive essays.

But what’s the difference between inductive vs deductive research? The difference often lies in whether the argument proceeds from the general to the specific or the specific to the general. 

Both methods are used in different types of research, and it’s not unusual to use both in one project. In this article, we’ll describe each in simple yet defined terms.

Content Index:

  • What is inductive research?
  • Stages of inductive research process
  • What is deductive research?
  • Stages of deductive research process
  • Difference between inductive vs deductive research

Inductive research is a method in which the researcher collects and analyzes data to develop theories, concepts, or hypotheses based on patterns and observations seen in the data. 

It uses a “bottom-up” method in which the researcher starts with specific observations and then moves on to more general theories or ideas. Inductive research is often used in exploratory studies or when not much research has been done on a topic before.


The three steps of the inductive research process are:

  • Observation: 

The first step of inductive research is to make detailed observations of the studied phenomenon. This can be done in many ways, such as through surveys, interviews, or direct observation.

  • Pattern Recognition: 

The next step is to examine the collected data in detail, looking for patterns, themes, and relationships. The goal is to find insights and trends that can be used to form initial categories and ideas.

  • Theory Development: 

At this stage, the researcher creates initial categories or concepts based on the patterns and themes found in the data analysis. This means grouping the data according to similarities and differences in order to build a framework for understanding the phenomenon being studied.


These three steps are often repeated in a cycle, so the researcher can improve their analysis and understand the phenomenon over time. Inductive research aims to develop new theories and ideas based on the data rather than testing existing theories, as in deductive research.
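The observe-then-generalize loop above can be sketched in a few lines. The example below is a toy illustration, not part of the original article: it tallies recurring terms in hypothetical open-ended survey answers (pattern recognition) and promotes frequently recurring terms to candidate themes (the seed of theory development); the stopword list and the "seen at least twice" threshold are illustrative assumptions.

```python
from collections import Counter

# Step 1 (observation): raw, open-ended survey answers (toy data).
observations = [
    "the app is slow on my old phone",
    "crashes when I upload photos",
    "login is slow every morning",
    "photo upload crashes constantly",
    "really slow after the last update",
]

# Step 2 (pattern recognition): tally recurring terms across answers,
# ignoring common filler words.
stopwords = {"the", "is", "on", "my", "when", "i", "every", "after", "really"}
counts = Counter(
    word
    for answer in observations
    for word in answer.lower().split()
    if word not in stopwords
)

# Step 3 (theory development): terms that recur across independent
# answers become candidate categories for a tentative explanation.
candidate_themes = [word for word, n in counts.most_common() if n >= 2]
```

In a real inductive study these steps would be repeated: the tentative themes would guide further observation, which would in turn refine the categories.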

Deductive research is a type of research in which the researcher starts with a theory, hypothesis, or generalization and then tests it through observations and data collection.

It uses a top-down method in which the researcher starts with a general idea and then tests it through specific observations. Deductive research is often used to confirm a theory or test a well-known hypothesis.

The five steps in the process of deductive research are:

  • Formulation of a hypothesis: 

The first step in deductive research is to formulate a hypothesis stating an expected relationship between the variables. The hypothesis is usually built on existing theory or prior research.

  • Design of a research study: 

The next step is designing a research study to test the hypothesis. This means choosing a research method, deciding what needs to be measured, and determining how the data will be collected and analyzed.

  • Collecting data: 

Once the research design is set, different methods, such as surveys, experiments, or observational studies, are used to gather data. Usually, a standard protocol is used to collect the data to ensure it is correct and consistent.

  • Analysis of data: 

In this step, the collected data are analyzed to determine whether they support or refute the hypothesis. Statistical methods are used to identify patterns and relationships between the variables.

  • Drawing conclusions: 

The last step is drawing conclusions from the analysis of the data. If the hypothesis is supported, it can be used to make generalizations about the population being studied. If it is not supported, the researcher may need to develop a new hypothesis and start the process again.

The five steps of deductive research are repeated, and researchers may need to return to earlier steps if they find new information or new ways of looking at things. In contrast to inductive research, deductive research aims to test theories or hypotheses that have already been made.
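The hypothesis-testing core of this five-step cycle can be sketched with a simple permutation test. This is a toy example, not from the article: the data, the choice of test, and the conventional 0.05 threshold are all illustrative assumptions; in practice the analysis method is chosen during the study-design step.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=5000, seed=0):
    """Approximate p-value for the null hypothesis that the two
    groups share the same mean: the fraction of random relabelings
    whose mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Step 1 (hypothesis): "the treatment raises scores".
# Steps 2-3 (design and data collection): toy data standing in for
# measurements gathered under a standard protocol.
treated = [12.1, 13.4, 11.9, 14.2, 13.8, 12.7]
control = [10.2, 11.1, 10.8, 11.5, 10.9, 11.3]

# Step 4 (analysis): a small p-value means the observed difference is
# unlikely under the null hypothesis of no effect.
p_value = permutation_test(treated, control)

# Step 5 (conclusion): judged against a conventional 0.05 threshold.
supported = p_value < 0.05
```

Note that a permutation test is only one of many analysis choices; the deductive logic, stating the hypothesis before seeing the data and letting the data confirm or refute it, is the same whatever test is used.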

The main differences between inductive and deductive research lie in how the research is conducted, its goal, and how the data are analyzed. Inductive research is exploratory, flexible, and based on qualitative analysis of observations. Deductive research, on the other hand, is confirmatory, structured, and based on quantitative analysis.

Here are the main differences between inductive vs deductive research in more detail:


  • Approach. Inductive research is bottom-up: the researcher starts with data and observations, then uses patterns in the data to develop theories or generalizations, building from specific observations toward more general theories. Deductive research is top-down: the researcher starts with a theory or hypothesis and tests it against specific observations and data.
  • Role of theory. Inductive research develops theories from observations; the goal is to create theories that explain and make sense of the data. Deductive research tests theories through observations; the researcher gathers data to support or refute the theory or hypothesis.
  • Typical use. Inductive research is often used in exploratory studies, when there is little previous research on a subject and new theories and ideas can emerge from the data. Deductive research is used in confirmatory studies, when the researcher has a clear research question and wants to test a specific, well-established theory or hypothesis.
  • Structure. Inductive research is flexible and adaptable to new findings, since researchers can revise their theories and hypotheses as results come in; this suits unclear research questions and unexpected results. Deductive research is structured and systematic, following a predetermined research design and method, which makes data collection and analysis more objective and consistent.
  • Analysis. Inductive research relies more on qualitative analysis, such as textual or visual analysis, to find patterns and themes in the data. Deductive research relies more on quantitative methods, such as statistical analysis, to test the theory or hypothesis and draw objective conclusions.


Inductive research and deductive research are two different types of research with different starting points, goals, methods, and ways of looking at the data.

Inductive research uses specific observations and patterns to come up with new theories. On the other hand, deductive research starts with a theory or hypothesis and tests it through observations.

Both approaches have advantages as well as disadvantages and can be used in different types of research depending on the question and goals.

QuestionPro is a responsive online platform for surveys and research that can be used for both inductive and deductive research. It has many tools and features to help you collect and analyze data, such as customizable survey templates, advanced survey logic, and real-time reporting.

With QuestionPro, researchers can do surveys, send them out, analyze the results, and draw conclusions that help them make decisions and learn more about their fields.

The platform has advanced data analysis and reporting tools that can be used with both qualitative and quantitative methods of data analysis.

Whether researchers take an inductive or a deductive approach, QuestionPro can help them design, run, and analyze their projects end to end. So sign up now for a free trial!



J Korean Assoc Oral Maxillofac Surg, v.47(3); 2021 Jun 30

Inductive or deductive? Research by maxillofacial surgeons

Soung Min Kim

1 Oral and Maxillofacial Microvascular Reconstruction LAB, Brong Ahafo Regional Hospital, Sunyani, Ghana

2 Department of Oral and Maxillofacial Surgery, Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea

Hundreds to thousands of scientific papers are published every day, most by renowned and competent reviewers as they evaluate and accredit each one according to their own standards. As oral and maxillofacial surgeons who perform clinical treatment and surgery every day, we have a scientific way of thinking and acting, and based on these skills, we publish experiences and records while evaluating and criticizing the experiments and results of others at the same time.

According to Röhrig et al. 1 , medical research is classified into primary and secondary research. Primary research includes basic, clinical, and epidemiological research (the last comprising designs such as cross-sectional, longitudinal, trend, cohort, and panel studies), while secondary research includes meta-analyses and reviews (systematic, or simple/narrative).

Basic research seeks to advance the frontiers of knowledge without a defined goal of utility or specific purpose and includes the development and improvement of analytical procedures. This pure research is also known as experimental research, whereas clinical research comprises interventional and observational studies 1 . Experimental or interventional studies involve an active attempt to change a disease determinant, such as an exposure or a behavior, or the progress of a disease through treatment; this is similar in design to experiments in other sciences. A clinical trial is a form of research to test new treatments and is divided into different stages. Non-interventional or observational studies include comparative management, where treatment is left entirely to the doctor's discretion. In epidemiological research and observational studies, researchers observe subjects and measure variables of interest; assignment of subjects into treated and untreated groups is beyond the researcher's control.

Before becoming clinicians, we must ask ourselves, “Are we deviating from a scientific and rational way of thinking?” As scientists, we need to evaluate data while coping with different circumstances. Therefore, in this issue, I would like to describe the two most basic approaches for methods of research: inductive and deductive. Although they can be complementary, these two approaches are quite different, and the relationship between theory and research differs for each approach.

The inductive method, typical of cultural anthropology, derives general facts from individual facts: investigation and observation come first, and a generalized theory is established afterward. In the inductive method, the researcher begins by collecting data relevant to the research topic. Once a substantial amount of data has been collected, the researcher steps back to get an overview and develops an empirical generalization. In the early stages of inductive research, the researcher looks for preliminary patterns and regularities in the data, aiming to develop a tentative theory that could explain them. Thus, the inductive approach starts with a set of observations and moves from those experiences to broader generalizations about them. In other words, it moves from data to theory, or from the specific to the general.

The deductive method, on the other hand, derives specific facts from general facts, as is common in the social sciences: theory comes before investigation, proceeding from theoretical hypothesis to investigation, observation, and generalization. The deductive approach involves the same steps described for inductive research but reverses their order: the researcher begins with a compelling theory and then tests its implications with data. That is, deductive research narrows information from a general to a more specific level. The deductive approach is typically associated with scientific investigation: the researcher studies what is known, analyzes the existing theories on the topic of interest, and then tests the hypotheses that emerge from them. (Fig. 1)

Fig. 1. Schematic chain of the approaches to inductive and deductive research.

We maxillofacial surgeons are both clinicians and scientists. Clinician-scientists are practicing professionals who engage in scientific research. By being involved in both fields, we have the unique opportunity to connect and exchange knowledge between research and practice, and as such, we are considered vital to the advancement of medical practice. By combining practice and research, we act as a bridge between distinct professional fields by transferring the most recent advancements from research to clinical practice and ensuring the clinical relevance of research, for example 2 . We must remind ourselves that medical knowledge is not acquired primarily for its own sake but for a specific purpose: the care of the sick 3 .

Understanding the cycles of deductive and inductive research methods can help an oral and maxillofacial surgeon take charge of an ideal clinical study through the integration of theoretical systems.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Qualitative research: deductive and inductive approaches to data analysis

Purpose — The purpose of this paper is to explain the rationale for choosing the qualitative approach to research human resources practices, namely, recruitment and selection, training and development, performance management, rewards management, employee communication and participation, diversity management, and work-life balance, using deductive and inductive approaches to analyse data. The paper adopts an emic perspective that favours the study of the transfer of human resource management practices from the point of view of employees and host-country managers in subsidiaries of western multinational enterprises in Ghana.

Design/methodology/approach — Despite the numerous examples of qualitative methods of data generation, little is known, particularly to the novice researcher, about how to analyse qualitative data. This paper develops a model to explain in a systematic manner how to methodically analyse qualitative data using both deductive and inductive approaches.

Findings — The deductive and inductive approaches together provide a comprehensive way of analysing qualitative data. The process involves immersing oneself in the data, reading and digesting it in order to make sense of the whole data set and to understand what is going on.

Originality/value — This paper fills a serious gap in qualitative data analysis, which is deemed complex and challenging and has received limited attention in the methodological literature, particularly in a developing-country context, Ghana.

  • Related Documents

Public sector reform and the state of performance management in Portugal: is there a gap between performance measurement and its use?

PurposeThis paper aims to analyse the state of performance management in the Portuguese public sector as part of the efforts towards public administration reform.Design/methodology/approachTheoretically, the authors took Bouckaert and Halligan's (2008, pp. 35–39) approach into consideration to analyse the adoption of performance management practices. This approach was supplemented by an adaptation of Pollitt and Bouckaert's (2011, p. 33) framework to analyse the context for administrative reforms. The used data analysis techniques include documentary analysis (namely legislation and evaluation reports of reform efforts), secondary data analysis and a survey conducted with 296 Portuguese top public managers.FindingsThe findings show that Portuguese public sector organisations adopted several tools to measure performance over the years, but failed to incorporate performance information into their management practices or to properly use it for either internal or external purposes. Concerning the ideal types proposed by Bouckaert and Halligan (2008, p. 36), Portugal is considered to fit the “performance administration” ideal type, even though it is moving closer to the “managements of performance” ideal type.Originality/valueThis is one of the first comprehensive studies on the state of performance management in Portugal framed within the broader context of public sector reforms. The findings will be of interest both to scholars who study public administration reforms and performance management and to Portuguese policy makers and public managers who are interested in understanding and improving the way performance information is measured, incorporated and used in that sector.

Equality and harmony: diversity management in China

Purpose Considering the importance of China as a global economic power and the emphasis placed on human resources in a knowledge economy, the findings of no less than 30 articles on diversity management in that country seem inadequate given the growing importance of diversity in the workplace. Analysis of those articles reveals that most of the research focuses on firms located on the eastern coast. Moreover, while cataloging the types of industry and ownership covered provides a broad overview, specific industries and ownership types require further examination. Methodology Searches were conducted in both English and Chinese databases using the keyword search phrase of “diversity management and China”. The criteria for including an article were as follows: 1) an emphasis on diversity management within the business environment; 2) a focus on applications of diversity management within the People’s Republic of China, thus excluding Taiwan; and 3) a research-based or conceptual orientation. The search was further limited by using the “abstract” as a limiter under the assumption that if the concepts were important, the author(s) would have used that terminology in the abstract. Findings Gender emerged as a major concern along with residential status; racial and ethnic differences, on the other hand, cultural and/or other influences on diversity management received limited attention. Both qualitative and quantitative research methods were used by the various authors, but exploratory methods such as grounded theory saw minimal use. With the little research done on diversity management in China, it is difficult to assess whether or Chinese firms are fully using its available workforce. China must embrace diversity management practices with a view to achieving competitive advantages as well as equality and harmony in the workplace. 
Originality/value This is one of the first published reviews of articles from both Chinese and English databases that delves into the issue of diversity management in China.

Organizational learning, knowledge management practices and firm’s performance

Purpose – The study aims at investigating the impact of organizational learning (OL) on the firm’s performance and knowledge management (KM) practices in a heavy engineering organization in India. Design/methodology/approach – The data were collected from 205 middle and senior executives working in the project engineering management division of a heavy engineering public sector organization. The organization manufactures power generation equipment. Questionnaires were administered to collect the data from the respondents. Findings – Results were analyzed using the exploratory factor analysis and multiple regression analysis techniques. The findings showed that all the factors of OL, i.e. collaboration and team working, performance management, autonomy and freedom, reward and recognition and achievement orientation were found to be the positive predictors of different dimensions of firm’s performance and KM practices. Research limitations/implications – The implications are discussed to improve the OL culture to enhance the KM practices so that firm’s performance could be sustained financially or otherwise. The study is conducted in one division of a large public organization, hence generalizability is limited. Originality/value – This is an original study carried out in a large a heavy engineering organization in India that validates the theory of OL and KM in the Indian context.

The inseparable nature of working and learning: peripheral management practice that facilitates employee learning

Purpose – This paper aims to look at the peripheral management practice that facilitates employee learning. Such management practices are embedded or inseparable to working and being a good manager. Design/methodology/approach – Point of view. Findings – For many frontline managers and their employees, the separation between working and learning is often not apparent. There appears to be no clear distinction between when they are working and when they are learning. Practical implications – Better development of organizational managers. Originality/value – This paper highlights the informal nature of learning and working and builds on the understanding that much of the learning that occurs at work occurs as part of a social act, often involving managers and their employees. In this way, employee learning that is identified and facilitated by frontline managers is so often entwined in other management activity. Furthermore, this paper outlines some practical actions that organizations can undertake to aid greater frontline management involvement in employee learning.

Managing the most important asset: a twenty year review on the performance management literature

Purpose The three objectives served by this review are to provide readers a limpid insight about the topic performance management (PM), to analyse the latest trends in PM literature and to illustrate the theoretical perspectives. It would be fascinating for the practitioners and researchers to see the latest trends in the PM system, which is not yet covered in previous reviews. The study covers the historical and theoretical perspectives of human resource management practices. We also try to unveil some of the theoretical debates and conflicts regarding the topic. Design/methodology/approach We reviewed 139 studies on PM published within the last 20 years (2000–2020). The method used here is the integrative review method. The criteria used to determine studies are articles from peer-reviewed journals regarding the PM system published between 2000 and 2020. The initial search for studies was conducted using an extensive journal database, and then an intensive reference-based search was also done. Each selected article was coded, themes were identified, and trends for every 5 years were determined. All the articles were analysed and classified based on the methodology used to identify qualitative and quantitative studies. Findings The review concludes that PM literature's emphasis shifted from traditional historical evaluations conducted once or twice a year to forward-looking, feedback-enriched PM systems. By segregating the studies into 5-year periods, we could extract five significant trends that prevailed in the PM literature from 2000 to 2020: reactions to PM system, factors that influence PM system, quality of rating sources, evaluating the PM system and types of the PM system. The review ends with a discussion of practical implications and avenues for future research. 
Research limitations/implications It is equally a limitation and strength of this paper that we conducted a review of 139 articles to cover the whole works in PM literature during the last 20 years. The study could not concentrate on any specific PM theme, such as exploring employee outcomes or organizational outcomes. Likewise, the studies on public sector and non-profit organizations are excluded from this review, which constitutes a significant share of PM literature. Another significant limitation is that the selected articles are classified only based on their methodology; further classification based on different themes and contexts can also be done. Originality/value The study is an original review of the PM literature to identify the latest trends in the field.

The complex concept of sustainable of diversity management

Purpose – Explores the notion of sustainable diversity-management practices. Design/methodology/approach – Summarizes research into the sustainability of diversity management across four countries and provides examples of efforts to maintain high levels of diversity. Findings – Looks at the activities of Africa House, an organization that develops business links with Africa, and of Bright Entertainment Network (BEN) Television, which is a television station that caters primarily for ethnic minorities. Social implications – Highlights the complexity of diversity and so the difficulty of legislating in this area. Originality/value – Explains that employees can also stifle attempts to engage in sustainable diversity management policies. A lack of understanding of local laws or language, or through limited social contacts, can prevent full participation by employees.

Measuring the invisible

PurposeConstruction logistics is an essential part of Construction Supply Chain Management for both project management and cost aspects. The quantum of money that is embodied in the transportation of materials to site could be 39–58 per cent of total logistics costs and between 4 and 10 per cent of the product selling price for many firms. However, limited attention has been paid to measure the logistics performance at the operational level in the construction industry. The purpose of this paper is to contribute to the knowledge about managing logistics costs by setting a key performance indicator (KPI) based on the number of vehicle movements to the construction site.Design/methodology/approachA case study approach was adopted with on-site observations and interviews. Observations were performed from the start of construction until “hand-over” to the building owner. A selection of construction suppliers and subcontractors involved in the studied project were interviewed.FindingsData analysis of vehicle movements suggested that construction transportation costs can be monitored and managed. The identified number of vehicle movements as a KPI offers a significant step towards logistics performance management in construction projects.Originality/valueThis research paper demonstrates that framework of using vehicular movements meet the criterion of effective KPI and is able to detect rooms for improvements. The key findings shed valuable insight for industry practitioners in initiating the measurement and monitor “the invisible logistics costs and performance”. It provides a basis for benchmarking that enables comparison, learning and improvement and thereby continuous enhancement of best practice at the operational level, which may accelerate the slow SCM implementation in the construction industry.

Unions as institutional entrepreneurs

Purpose Research on the diffusion and adaptation of LGBT diversity management practices has, until now, rarely considered the role of unions in this process; where it has done, the consideration has largely been cursory or tangential. In order to contribute towards overcoming this research gap, the purpose of this paper is to focus more closely on this issue, within the Italian context. Design/methodology/approach Theoretically based on the notion of institutional entrepreneurship, the paper analyses the ways in which trade unions contribute to the diffusion of LG-inclusive policies. Empirically this study is based on qualitative interviews with representatives from the unions, LGBT activists and individuals from those companies that have received support from the unions in terms of shaping their initiatives. Findings Italian unions act as institutional entrepreneurs in the sexual orientation field by framing the issue of the inclusion of LGBT workers as an issue of including minority groups under the broad umbrella of equality in workplaces, and by cooperating with LGBT associations. The latter provides the unions with two different things. First, with more legitimacy, from the viewpoint of LGBT employees; second, with the specific competencies in dealing with these issues. The accomplishments of the unions consist of arranging single agreements concerning the establishment of “punishment systems” for discriminatory behaviours, rather than promoting inclusion-oriented behaviours within the organization. Originality/value This paper highlights the role of unions, and in doing so, focusses on a hitherto marginalized actor in the process of adapting LGBT diversity initiatives. In focussing on the Italian context, it adds an important perspective to a discourse that has previously consisted of predominantly Anglo-American views.

Institutional repository: access and use by academic staff at Egerton University, Kenya

Purpose The purpose of this paper is to examine the access and use of the institutional repository (IR) among academic staff at Egerton University. Design/methodology/approach The paper provides a description of the building and development of the IR at the Egerton university and describes expected benefits of the repository to the University and relevant stakeholders. A survey was conducted among 84 academic staff with an aim of examining their levels of awareness on the existence of the IR at the Egerton University and assess their access and use. Through a structured questionnaire both quantitative and qualitative data were collected. Findings The study revealed that majority of the academic staff at the Egerton University are still not aware of the existence of the IR. Staff also faced challenges in accessing and using the content available. The paper provided suggestions on how best to enhance the access and utilization of the IRs among the academic staff. Practical implications From a practical point of view, the paper provides implications on the access and use of IRs by the academic staff. The paper points out some challenges faced by this group of users which other academic institutions may try to solve in their respective contexts. Originality/value Findings and discussions provided in the paper will pave way to solving the challenges faced in access and use of IR by the academic staff at the Egerton University.

Tourism Development at the Berawa Beach Tourist Attraction, Badung Regency (Perkembangan Pariwisata di Daya Tarik Wisata Pantai Berawa, Kabupaten Badung)

The research was conducted in Banjar Berawa/Desa Adat Berawa, North Kuta District, Badung Regency. The study aims to understand the impact of tourism growth on the consumption patterns of the Berawa community. Data were collected through direct observation at the location; interviews with informants, i.e. the Bendesa Adat Berawa, Klian Desa Berawa, Klian Dinas Berawa, Klian Subak of Tibubeneng Village, and members of the local community; and documentation by taking photographs. The data were analysed using qualitative analysis techniques: working from the data, searching for and finding patterns, and selecting the data to be used. Informants were selected by purposive sampling, i.e. assigning the sample on the basis of particular considerations that make it fit to serve as a sample. The analysis comprised data collection, data reduction, data display, and conclusion drawing and verification. The results show that, in the community's view, the economic and educational conditions of the society are improving; residents of Banjar Berawa are also beginning to take up entrepreneurship by setting up businesses such as homestays and laundries. With regard to the customary order, the community remains strong despite the wealth it has gained. Families meet their needs by taking up many kinds of occupations. Keywords: impact, tourism, consumption pattern, society


  • Open access
  • Published: 11 June 2024

A rapid mixed-methods assessment of Libya’s primary care system

  • Luke N. Allen 1 ,
  • Arian Hatefi 2 ,
  • Mohini Kak 3 ,
  • Christopher H. Herbst 4 ,
  • Jacqueline Mallender 5 &
  • Ghassan Karem 6  

BMC Health Services Research, volume 24, Article number: 721 (2024)


Libya has experienced decades of violent conflict that have severely disrupted health service delivery. The Government of National Unity is committed to rebuilding a resilient health system built on a platform of strong primary care.

Commissioned by the government, we set out to perform a rapid assessment of the system as it stands and identify areas for improvement.

Design and setting

We used a rapid applied policy explanatory-sequential mixed-methods design, working with Libyan data and Libyan policymakers, with supporting interview data from other primary care policymakers working across the Middle East and North Africa region.

We used the Primary Health Care Performance Initiative framework to structure our assessment. Review of policy documents and secondary analysis of WHO and World Bank survey data informed a series of targeted policymaker interviews. We used deductive framework analysis to synthesise our findings.

We identified 11 key documents and six key policymakers to interview. Libya has strong policy commitments to providing good-quality primary care, and a high number of health staff and facilities. Access to services and trust in providers are high. However, a third of facilities are non-operational; there is a marked skew towards auxiliary and administrative staff; and structural challenges with financing, logistics, and standards have led to highly variable provision of care.

In reforming the primary care system, the government should consolidate leadership, clarify governance structures and systems, and focus on setting national standards for human resources for health, facilities, stocks, and clinical care.


Libya is an oil-rich upper-middle-income country in North Africa [ 1 ]. Most of the country is desert and 90% of its 7 million people live along the Mediterranean coast [ 2 , 3 ]. Life expectancy is 70 for males and 76 for females [ 2 ]. Non-communicable diseases account for four fifths of all deaths and disability-adjusted life years (DALYs), and two thirds of adults are overweight [ 4 , 5 ]. Colonel Qadhafi led the country for four decades until a 2011 civil war led to his removal. A series of interim governments have governed Libya over the past decade in the context of multiple competing power blocs, variably backed by overseas powers. Libya has become a major transit country for migrants seeking to reach Europe, and in 2019 migrants made up 12% of the population [ 6 ].

Perhaps surprisingly – given this context – Libya has relatively strong medical training institutions and facilities. The Ministry of Health (MoH) is increasingly focusing on the central role of primary care, and the government’s new Health Service Delivery Policy commits to “develop and organise service delivery based on the primary health care, which assures universal access, as a fundamental human right, to a health services package (defined by MOH), including emergency services at all levels of the health care.” [ 1 ]. The establishment of a well-structured primary care system also directly underpins the MoH’s ‘2030 Vision’. The Presidency Council of the Government of National Accord has also established and funded a Primary Healthcare Institute to operationalise these plans in collaboration with international partners.

In 2021, as part of World Bank Technical Assistance to the MOH, our team was commissioned by the government to perform an assessment of the primary care system. We had a three-month period to provide a holistic view of how the system currently stood and which areas should be prioritised for strengthening. In this paper we present our key findings and recommendations.

We performed a deductive framework analysis, using the applied policy research approach initially developed by the Social and Community Planning Research Institute [ 7 ]. Key characteristics of this method are that it is generative, dynamic, comprehensive, and accessible. We selected this approach because it is well suited for rapid analysis of policy issues [ 8 ].

To structure our analysis we used the Primary Health Care Performance Initiative conceptual framework, [ 9 ] developed by a collaborative group from the Bill and Melinda Gates Foundation, the World Bank, the World Health Organisation (WHO), Ariadne Labs, and Results for Development [ 10 ]. The framework aligns with WHO normative documents and key primary care references, [ 11 , 12 , 13 ] and it is specifically designed for whole-system assessments in low- and middle-income countries [ 10 ]. Our primary care system assessment focused on system components, inputs, and service delivery, as shown in Fig.  1 .

Figure 1. The PHCPI framework

We used an explanatory sequential mixed methods approach for data collection and iterative analysis, [ 14 , 15 ] moving from a review of policy documents and secondary analysis of previous study findings to a short series of key informant interviews with policymakers to hone our findings and identify key recommendations.

We used an undergirding pragmatist philosophical paradigm because this approach provides researchers with broad latitude to select the research methods and techniques that best meet their needs in answering their research questions. Adherents of this paradigm hold that “truth and value can only be determined by practical application and consequences” [ 16 , 17 ].

Data collection

Policy documents and literature review.

Our rapid search for relevant policy documents and literature was conducted in January – May 2021. This included two stages. In the first stage, we worked with the head of the national Primary Health Care Institute (PHCI) to review the library of all national health policy documents that had been published since 2011 (the date of the civil war). We performed a full-text review of all documents that mentioned primary care in the title or abstract/executive summary. We also performed full-text reviews of all national health system documents. We extracted all data pertaining to any one or more of the system, input, and service delivery domains in the PHCPI framework. We also coded and extracted all quotes that represented national commitments to primary care. Three authors (LA, JM, and AH) performed independent triplicate document review. LA performed data extraction, with a 50% sample double-checked by JM.

In the second phase, we performed a literature review using PubMed and Google Scholar with the deliberately broad search terms: Libya$ AND (primary care OR primary health care OR PHC). We searched PubMed and the first 20 pages of Google Scholar. Title and abstract screening was used to identify papers that provided any information on the system, input, and service delivery domains of the PHCPI framework. We included empirical research studies, official reports, opinion pieces, and grey literature in order to maximise the data available for analysis. No date or language restrictions were applied.
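The screening step described above can be sketched as a simple keyword filter. Everything below (the record fields, the keyword lists, and the sample records) is an illustrative assumption; the actual screening was performed manually against the PHCPI domains.

```python
# Illustrative sketch of title/abstract screening against the PHCPI
# domains. The keyword lists and record fields are assumptions made
# for illustration, not the study's actual screening criteria.

SEARCH_TERMS = "Libya$ AND (primary care OR primary health care OR PHC)"

# Keywords standing in for the system, input, and service delivery domains
DOMAIN_KEYWORDS = {
    "system": ["governance", "financing", "leadership"],
    "inputs": ["workforce", "drugs", "supplies", "facilities", "information"],
    "service_delivery": ["access", "quality", "population health"],
}

def screen(record):
    """Keep a record if its title or abstract mentions any domain keyword."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    return any(kw in text for kws in DOMAIN_KEYWORDS.values() for kw in kws)

records = [
    {"title": "PHC financing in Libya", "abstract": "Analysis of health financing."},
    {"title": "Desert ecology", "abstract": "Flora of the Sahara."},
]
included = [r for r in records if screen(r)]
```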

We included papers and reports that covered both the public and private primary care sectors; however, the government and the PHC Institute are primarily concerned with the state of the public system. Libya has committed to the Universal Health Coverage principles of providing comprehensive state-financed care to all, free at the point of use [ 18 ]. Private entities, though technically permitted to supply primary care services, are not regulated in Libya, and it is difficult to get accurate data on these providers.

After completing the literature review, we approached primary care teams at the WHO, World Bank, and Libyan government to uncover previously conducted studies, reports, and surveys that provided additional data on the primary care system. We attended international development partner meetings to obtain further data on each element of the primary care system.

In total we identified 11 key documents that contained quantitative and qualitative data assessing all aspects of the primary care system (Table  1 ).

Secondary analysis

We followed the key stages of deductive framework analysis to extract, sort, and analyse the data: 1) familiarization; 2) indexing—annotating each source to link data with relevant domains from the PHCPI framework; 3) charting—‘lifting data from their original context and rearranging according to thematic area’; and 4) mapping and interpretation of the dataset as a whole [ 7 ].
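Under the assumption that each extract carries a source label and an assigned framework domain, the indexing and charting stages can be sketched in Python; the extracts and labels below are invented for illustration.

```python
from collections import defaultdict

# Sketch of the indexing and charting stages of deductive framework
# analysis: each extract is indexed against a predefined framework
# domain, then lifted from its source and rearranged by thematic area.
# The extracts and domain labels are invented for illustration.

extracts = [
    {"source": "2030 National Health Policy", "domain": "governance",
     "text": "No consistency in the package of services offered."},
    {"source": "SARA 2017", "domain": "inputs",
     "text": "Mean availability of essential medicines was 10 percent."},
    {"source": "Key informant", "domain": "governance",
     "text": "The current system is very fragmented."},
]

def chart(extracts):
    """Charting: group indexed extracts by framework domain."""
    table = defaultdict(list)
    for e in extracts:
        table[e["domain"]].append((e["source"], e["text"]))
    return dict(table)

charted = chart(extracts)
```

Grouping by a fixed set of domains (rather than letting themes emerge) is what makes the analysis deductive rather than inductive.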

The key documents included a mix of quantitative and qualitative reports. Working with the WHO, World Bank, and MOH, we were able to obtain the underlying raw data for service readiness, patient and provider survey feedback, and family health survey data. We performed simple summary statistics in MS Excel to generate national summaries aligned with each of the PHCPI domains.
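The summary step performed in MS Excel amounts to computing percentage shares per survey indicator; a minimal Python equivalent, with invented rows and field names, might look like this.

```python
# Minimal sketch of the summary-statistics step: collapsing raw survey
# rows into national percentage summaries per indicator. The rows and
# field names here are invented for illustration.

rows = [
    {"charged_fee": False, "cost_a_problem": True},
    {"charged_fee": False, "cost_a_problem": True},
    {"charged_fee": False, "cost_a_problem": False},
    {"charged_fee": True,  "cost_a_problem": False},
]

def pct(rows, field):
    """Share of respondents answering True for `field`, as a percentage."""
    return 100 * sum(r[field] for r in rows) / len(rows)

summary = {field: pct(rows, field) for field in ("charged_fee", "cost_a_problem")}
```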

The Libyan political context has presented long-standing challenges to rigorous data collection. Among available secondary sources (i.e. those listed in Table  1 ), three contained underlying raw data. Two surveys were conducted by the World Bank as part of its long-standing technical assistance program, and the third, the Service Availability and Readiness Assessment, is a WHO-developed standardized assessment tool that affords comparison across and within countries.

Moving from document review and secondary data analysis to qualitative interviews

We produced a first draft of the national assessment, grouping all data under each PHCPI domain and noting areas of agreement between data sources, dissonance, and silence. To extend our understanding, triangulate emerging findings, resolve dissonance, and fill in knowledge gaps, we conducted a targeted set of semi-structured interviews with a sample of key informants holding senior leadership positions within Libya’s primary care system.

Our topic guide (Appendix) focused on key areas of uncertainty within each PHCPI domain. We also sought to gather first-hand experience of delivering PHC services in contemporary Libya; perceptions of the main challenges faced by patients and providers; current efforts to advance PHC; and priority areas for reform.

We used purposive sampling and snowballing. Given that we had to perform the document review, generate initial findings, identify interviewees, perform interviews and analyse all data within 12 weeks, we were only able to interview a very small group of people. As such, we focused on system leaders with broad experience of primary care in Libya. Working with the lead of the national PHC institute, we performed a stakeholder analysis to identify the major organisations contributing to the development and delivery of primary care in Libya. We identified the seniormost representatives of these organisations and invited them to participate via email. Given that there are very few people working directly on primary care policy in Libya, we asked our interviewees to recommend further people to interview. In total, six interviews were conducted in Spring 2021 by LA and JM with senior policymakers working in the national GP Society and Libyan GP training scheme, the National Centre for Disease Control family practice team, the WHO Libya Country Office, and the WHO Eastern Mediterranean Regional Office primary care team. All interviews were conducted via Microsoft Teams in English and lasted 45–60 minutes. All participants spoke fluent English. Notes were taken during the interviews. We used thematic analysis [ 30 ] to code the interview transcripts and identify the main themes.

Our interviews were low risk, and the power imbalance favoured our interviewees, all of whom were high-level policymakers. Interviewees were fully informed of the project scope and provided consent to participate. We have kept responses anonymous to protect them from any potential risks. The recorded interviews and interview notes were stored on a secure, password protected folder within the EU. These data will be destroyed after seven years.

Synthesis and interpretation

We iteratively repeated the deductive framework analysis stages to map and analyse the totality of data with reference to the PHCPI conceptual framework. After every interview, each domain was updated, with new themes and findings being used to update the topic guide for the subsequent interview.

In writing up our report, we presented our findings narratively by thematic area, following the structure of the PHCPI framework.

Reflexivity

Interviews were conducted by LA – a white male British family physician and global health policy expert, and MK, a female Asian health system specialist. Both researchers have extensive experience working with primary care systems in high-, middle-, and low-income settings, but no prior experience working in Libya’s clinical system. LA and MK worked in equal partnership with the wider research team that was comprised of World Bank middle east health specialists, JM a senior British health economist, and GK, a senior Libyan primary care policymaker with clinical experience working as a Libyan primary care doctor.

Governance and leadership

The public primary care system is the first of three levels of care. The Municipality ( Baladya ) manages the primary care facilities, while a secretariat of health at the district level manages the hospitals, including specialized hospitals. Libya refers to primary care as ‘Primary Health Care’, and for the remainder of this report we will use the term PHC. There are three types of PHC facility that provide comprehensive primary care services, ranging from small PHC units to large polyclinics. “Communicable disease centers” also operate within the PHC tier but offer vertically integrated, disease-specific services.

Libya has a number of well-developed primary care policy documents and institutions, but they have not yet translated into tangible action. For instance, the government acknowledges the provision of health care as a basic right [ 23 ]. It has developed a 2020–2022 PHC strategy, signed by the Minister of Health in September 2019; its 2030 vision emphasises the centrality of PHC; and the country has invested in the creation of the PHC Institute. However, these documents and institutions have not yet led to marked improvements in the quality of services. The government does not have a quality-of-care framework, and it has yet to clearly outline a basic package of services.

In spite of the headline principles that are in place, there is a paucity of operational guidance. PHC governance remains fragmented, with multiple institutions sharing overlapping responsibilities for various aspects of PHC delivery. The result is that each institution tends to view these responsibilities as belonging primarily to other agencies. Compounding this, policy-making organisations do not routinely work together. Interviewees felt that senior politicians typically perceive PHC as intangible and strategically unimportant, especially in comparison to secondary care, which often appears medically more urgent, making it easier to build the case for creating tangible, prestigious services and thereby win political capital. There is a need for the PHCI to take a greater lead in developing a long-term strategy around human resources, service delivery models, and financing.

“The current system is very fragmented with no clear leadership” –Key informant interviewee

The fragmentation of PHC governance has impeded the ability of local facilities to respond to the unique needs of their local communities. Interviewees, policy documents, and a number of recent national PHC workshops co-hosted by the WHO all emphasise the desirability of devolving administrative and financial authority to local levels, while mitigating potential risks. The current model is seen as fractured, restrictive, inefficient, and inadequately funded, with a lack of guidance on standards and non-transparent flows of governance and accountability. According to the 2030 National Health Policy, there is no consistency in the package of services offered at any of the facility types, nor in the structure, staffing, or resources. There is no established national quality management infrastructure, despite the fact that highly variable PHC quality remains a major weakness of the system. Two interviewees noted that work to establish quality indicators and an essential basket of services is underway. The National Centre for Health Sector Reforms has argued that the mistaken pursuit of localized autonomy has aggravated the problem of variability and led to the undesirable emergence of a number of “power centres” and “budget centres,” which has weakened the authority of the ministry of health to oversee and administer the health system [ 25 ].

The current system does not involve patients or communities in decision-making processes. Community-level involvement is a core pillar of the Alma-Ata [ 11 ] and Astana Vision of Primary Health Care [ 31 ]. But the imperative to provide basic services in extremely challenging conditions has meant that investing in community engagement has fallen as a priority. Various policy documents intimate that community engagement should play a larger role in shaping national and local services, but there is no explicit strategy for achieving this stated objective. There are currently no routine mechanisms for seeking, collating, analysing, or acting on patient and community feedback on their experience with care. One-off surveys have been conducted with international partners and research institutions, but these activities are not yet built into routine practice.

National health financing

In comparison to secondary care, PHC is chronically underfunded. As with most countries around the world—including those with strong PHC systems—secondary care tends to capture a disproportionate share of health spending [ 5 , 32 ]. Whilst the health system needs major investment, in 2017 actual capital expenditure on the health sector was 10 percent of the budgeted expenditure, meaning that 90 percent of the funds allocated to health were not spent. According to the government’s accounts, salaries are late or unpaid for a large proportion of public workers, including 81 percent of health staff [ 23 ]. This is a major pain point for the PHC system.

Libya’s fragmented budget system makes it difficult to ensure that resources are aligned and available to meet the needs of patients. The health budget is divided into four chapters, and the centralisation of staffing, capital expenditure, and medical supplies—handled by three different non-health-related ministries—leads to delays and inefficiencies. The 2020–2022 PHC strategy recommends delegating a degree of financial autonomy to PHC units, as well as moving to a capitation payment system, starting with recurrent costs and then extending to cover the entire cost structure of PHC centres, with the aim of covering the provision of the essential package of health services.

In line with WHO recommendations, almost all care is provided free at the point of use, but the costs of medicines and supplies present a barrier for poorer patients. In 2018 the World Bank conducted a patient survey with over 1,000 patients and over 500 service providers. This well-conducted project found that 97 percent of patients were not charged fees for accessing PHC services – including ‘under the table’ payments. However, 51 percent of them considered the costs of medical services in Libya to be a problem, and 13 percent forwent recommended PHC services or medicines because of cost concerns. Costs are associated with medicines, supplies, equipment, and diagnostics. Where local services are deemed to be inadequate, patients commonly look to the private sector, which incurs additional costs for accessing care. In 2011, out-of-pocket expenditure accounted for 36 percent of total health spending [ 33 ].

Adjustment to population health needs

Libyan PHC policy documents recognise that services should be responsive to population health needs. However, provision varies widely among facilities, with no particular logic underpinning the distribution of specialist services. Epidemiological data are recorded by a national Health Information System that collates facility-level data from paper records, and by the National Centre for Disease Control, which gathers and reports epidemiological data. Yet these systems are not used to shape PHC service delivery. The absence of a national electronic health records system makes data collection and analysis onerous.

The focus of many local facilities is on maintaining basic services. The current PHC system does not have the culture of innovation and learning required to continually adapt and improve based on changing population needs. Simply maintaining the status quo is perceived as already very challenging. Many clinical staff simply lack the time, training, or headspace to engage with continuous improvement.

“It is such a challenging environment there is not time or space to think about innovation and learning” –Key informant interviewee

Drugs and supplies

Logistical issues with drugs and supplies are two major challenges for the PHC system (Fig.  2 ). According to the latest Service Availability and Readiness report, the mean availability of essential medicines was 10 percent in 2017 [ 21 ]. These WHO assessments use serial standardised reviews with a representative number of facilities across the country. They are generally well conducted; however, the report we had access to did not provide any information about the specific methods employed in Libya, making critical appraisal difficult.

Figure 2. Percentage of staff reporting issues with supplies (Source: World Bank provider survey 2018)

The World Bank has conducted the only patient and provider survey in Libya since 2011, as far as we are aware. The questions are well worded and were asked by locally recruited data collectors in participants’ own language. The sampling strategy was rigorous and the sample size enables national generalisations. The survey suggests that low medicine availability is perceived by both clinicians and patients as one of the biggest problems facing PHC facilities (Fig.  5 ). The lack of sufficient supplies is a very common reason given by patients for bypassing their nearest PHC facility when seeking care. The absence of an essential package of services means that core medicines are often not available.

Facility infrastructure

Libya has an adequate density of facilities overall, albeit with a skewed distribution across the regions (Fig.  3 ). The stated density of 2.8 health facilities per 10,000 people is above the WHO target of 2 per 10,000, but this count includes private facilities. There are no national standards for facility infrastructure or facility density, which means that distribution and physical status vary widely.
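As a rough cross-check, the public-only density can be recomputed from figures quoted elsewhere in the paper (1,355 public PHC facilities; roughly 7 million people). The assumption that these two figures can be combined this way is ours; the result falls below the 2.8 headline, consistent with the caveat that the official count includes private facilities.

```python
# Sanity check on facility density using figures quoted in the paper.
# Combining the public facility count with the approximate population
# is our own illustrative assumption.

public_phc_facilities = 1355    # official count of public PHC facilities
population = 7_000_000          # approximate national population
who_target_per_10k = 2.0        # WHO target: 2 facilities per 10,000

density_per_10k = public_phc_facilities / population * 10_000  # ~1.94
```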

Figure 3. Distribution of PHC facilities (Source: 2017 SARA report)

According to government records, there are currently 728 PHC Units, 571 PHC Centres, and 56 Polyclinics. Units provide maternal, neonatal, nutritional, child and school health services, vaccination, early diagnosis of infectious diseases, health promotion, registration and follow up of chronic diseases, curative services, local water quality monitoring and assessment of local environmental risk factors. PHC Centres offer supervision for PHC Units, the same basket of services, plus dental care. Polyclinics offer more specialised care for catchment areas of 50–60,000 people, accepting both walk-ins and referrals from PHC Units and Centres.

Although Libya has 1,355 PHC facilities in all (according to official figures in the latest Health Sector Bulletin) [ 27 ], 273 are closed because of lack of maintenance (51 percent), inaccessibility on account of conflict (20 percent), physical damage (19 percent), or occupation by other parties (11 percent). According to the 2017 Service Availability and Readiness Assessment (SARA) report, only one third of primary healthcare clinics are fully functional and only 40 percent offer basic maternal and child care. The mean basic service provision index is 45 percent [ 21 ].
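The closure breakdown can be converted back into approximate facility counts. Note the published percentages sum to 101 percent, so the rounded counts will not total exactly 273; the calculation is purely an arithmetic cross-check of the figures above.

```python
# Cross-check of the reported closure figures: converting the stated
# percentages back into facility counts. The source gives only
# percentages (which sum to 101%), so counts are approximate.

total_facilities = 1355
closed = 273
reasons = {
    "lack of maintenance": 0.51,
    "conflict inaccessibility": 0.20,
    "physical damage": 0.19,
    "occupied by other parties": 0.11,
}
counts = {reason: round(share * closed) for reason, share in reasons.items()}
operational_share = (total_facilities - closed) / total_facilities  # ~80%
```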

Information systems

Libya has a nascent health information system that would benefit from deeper investment. There are more than 400 civil registration offices, and Libya has an automated vital registration system [ 34 ]; however, medical records are still paper-based and health management systems are not in routine use at the national or local level [ 29 ]. Existing health records systems are not interoperable across the wider health network, which can frustrate data-driven central planning and coordinated patient care. A District Health Information System (DHIS-2) is currently being implemented, but interviewees raised concerns that it is too far removed from clinical care to offer meaningful patient benefit. A health information system (HIS) workplan is under development.

“The new system [DHIS-2] is not being designed with clinical users in mind… It will not be useful for them” –Key informant

Health workforce

Libya has a large health workforce, but the skill mix is unbalanced. The cumulative density of physicians, nurses, and midwives is 8.68/1,000, which is virtually double the ratio the WHO recommends for achieving universal health coverage [ 23 ]. However, there is a surfeit of auxiliary clinical staff and a shortage of nurses, general doctors, and family physicians. There is a drive toward boosting the number of family physicians—currently around 124—and experiments with introducing community health workers to operate in a sensitization and signposting role, directing people to appropriate PHC services. The government is also keen to train more generalist nurses for primary care, moving away from a recent trend of training single-disease specialists.
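The "virtually double" claim can be checked arithmetically if we assume the referent is the WHO's commonly cited threshold of 4.45 physicians, nurses, and midwives per 1,000 population; the paper does not state which benchmark it uses, so this referent is an assumption.

```python
# Check on the "virtually double" claim. The benchmark of 4.45 per
# 1,000 (the WHO SDG index threshold) is our assumed referent; the
# paper does not name the exact figure it compares against.

combined_density = 8.68          # physicians + nurses + midwives per 1,000 [23]
who_sdg_threshold = 4.45         # assumed WHO benchmark per 1,000
ratio = combined_density / who_sdg_threshold  # ~1.95
```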

PHC facilities have no autonomy in changing their personnel structure. There are no national staffing standards for primary care, and the number of staff attached to each primary care facility ranges from two to several hundred, and more than 1,000 for large polyclinics, according to the latest SARA report [ 21 ]. Workers are not distributed across the country equitably, and there are concerns that competencies are variable. The average total staff per PHC facility is 88, and many staff members only provide services for single health conditions such as tuberculosis. In some facilities up to 50 percent of staff are dedicated to administration functions [ 21 ]. Furthermore, there are a large number of paid “ghost workers” who appear on facility books but do not show up or perform any work.

There are also 302 facilities that currently do not offer any services at all, yet collectively employ 14,500 health workers [ 21 ]. Many more facilities have hundreds of staff on their books but offer very limited services. These staffing figures suggest that the limited money that is being spent on PHC could be allocated much more efficiently; employing fewer but more highly-skilled clinicians, retraining ‘vertical’ single-condition workers to address a range of common primary care conditions, and ceasing all payments to ghost staff and non-operational facilities.

Facility funds

Facilities do not have a high degree of control over their funds. This constrains managers’ abilities to shape services to meet local needs. Facilities are able to manage non-wage recurrent expenditure (under Chapter II of the national budget), but staff salaries, capital expenditure, and medical supplies are all financed centrally by three different departments.

There are no national data on the proportion of funds that are actually received under each chapter each year, or how these funds are spent. PHC facilities do not receive adequate finances to stock medications and equipment. In a robust survey of 541 health care workers, 43 percent had not received their salary in full over the previous three months, and 73 percent had not been paid on-time [ 19 ]. The inability to manage funds makes it hard to retain staff and build trust with the local community. Interviewees felt that issues with funding, salaries, and procuring appropriate supplies had deep roots and complex antecedents at the national, regional, and local levels.

“Late payments are not something that can be easily fixed. It affects the whole public sector.” –Key informant

Service delivery

Population health management

The current PHC system lacks established processes to identify systemic variation and use this knowledge to develop new actions to improve population health. The over-centralized control of purchasing and service design not only impedes local priority setting but also undermines the case for deepening community engagement and for translating national policies into local strategic action plans that go beyond delivering a universal package of basic services. Empanelment has not been introduced, reducing the incentive to employ proactive outreach to improve population health outcomes in local communities. However, there are a number of successful initiatives to care for marginalized and rural populations, such as mobile outreach clinics that provide primary care services.

Facility organization and management

PHC facilities could be more effectively organized by providing managers with the training and administrative authority to use their data and staff to monitor and continually improve care quality. According to key stakeholder interviewees, the competence of facility managers ranges widely, and there is no culture of improvement and innovation. There is a general sense that managers should be given better training and more autonomy in running facilities.

Health intelligence is not routinely gathered or used at the facility level. Care is delivered by multidisciplinary teams, but there is wide variation in team composition, responsibilities, and competencies. Many facilities lack adequate stocks of medicines, supplies, and medical equipment, and staffing can be patchy and unreliable. At the start of the COVID-19 pandemic, up to 90 percent of PHC facilities closed because they lacked personal protective equipment (PPE) [ 29 ]. Performance measurement and management systems are in place but are highly variable.

Financial access to services appears to be good, but this does not reflect widespread perceptions of low service quality. As stated above, some 97 percent of patients report that they are not charged to access PHC services [ 20 ]; however, clinics often lack medicines or supplies, so patients have to buy their own on the private market. As such, financial considerations represent an important barrier to access (Fig.  4 ). The health ministry believes that perceptions of low quality lead some patients to opt to pay for private sector services [ 23 ], and one-fifth of patients in the World Bank’s 2018 survey stated that they forwent medical care in the preceding year because of cost [ 20 ]. There are no up-to-date PHC-specific health account data on health spending, and no available data on impoverishment or catastrophic expenditures.

Fig. 4 Top complaints raised by patients about PHC facilities (Source: World Bank 2018 patient survey)

Despite extremely challenging conditions, Libya has managed to provide its citizens with good geographic access to free PHC services, according to the 2018 World Bank survey: more than 90 percent of patients reach their chosen clinic within 30 minutes of travel time [ 20 ]. A quarter of patients bypass their nearest clinic, typically because of concerns that medicines or equipment will not be available [ 20 ]. Conflict and political volatility pose grave external threats to sustained, coherent PHC service delivery and governance. More than 60 percent of patients felt that they received high-quality care. Nevertheless, the same group of patients felt that fundamental changes were needed to make the system work better (Fig.  5 ).

Fig. 5 Patients’ views of the PHC system (Source: 2018 World Bank patient survey)

Wait times are generally short, but opening hours can be unpredictable, and PHC is not available 24/7. The majority of PHC facilities use a combination of appointment booking for chronic or routine conditions and walk-in systems for acute or urgent care. In practice, however, patients tend to show up and wait for services without an appointment. Although this can lead to queues at busy times, the median wait time is in fact only 15 minutes [ 20 ]. Most facilities publicise their opening times, but they are not always actually open during these times.

Availability of effective PHC services

Although patients can access facilities fairly easily, there is a systemic issue with the availability of adequate staff, supplies, and services. Although there are high numbers of staff working in the health sector, there is a relative shortage of doctors (especially family physicians) and nurses. Stockouts are another major issue. Security issues and electricity blackouts present further barriers to effective health care services. One of the main reasons given for bypassing certain PHC facilities in favour of others, or in favour of private care, is that some facilities do not offer the services the patients need. One interviewee noted that availability is not a major issue for many patients because they know which facilities function well and which ones do not, and simply bypass the poor ones. While this may be true for some urban patients, it can be more problematic in rural areas where the next clinic could be a long distance away.

At universities in Libya, staff receive a reasonably high standard of initial medical and other health-related training. By contrast, there is a widespread feeling that, beyond university, in-service training is insufficient to maintain high-quality care. There is no agreement on the job descriptions of doctors, nurses, and other PHC-providing clinical team members, which makes it hard to guarantee that a trained care provider with the requisite skills will be present. Clinicians are generally eager for further training so that they can stay up to date, but opportunities are sparse. Although the vast majority of staff and patients perceive clinicians to be competent and well trained, 12 percent of patients stated that their nearest facility does not have qualified or knowledgeable clinical staff. There are no national data on the minimum objective clinical competency of staff. Patient safety data do not seem to be routinely collected or used, nor is there routine use of a human resource management system to track staff placements, training, and promotions.

Most PHC facilities do not have a family physician. Family medicine residency training is unpaid and trainees have to come in to do learning sets on their days off to avoid clashes with other clinical commitments. The fact that the residents are willing to persist despite these obstacles speaks very highly of provider motivation and the quality of the training on offer. Nevertheless, this arrangement is also a barrier to training the 7,000 additional family physicians the country needs, according to the director of family physician training.

High-quality primary health care

PHC is the “first point of contact” with the health system by default rather than by design. Because of the large number of PHC facilities, it is often more convenient to visit a clinician at a PHC unit rather than a hospital. The problem is that, despite this central role, PHC does not perform a gate-keeping function, and the current system does not make any provisions for this role. Libya’s endorsement of the Declaration of Astana [ 12 ] and the WHO PHC operational framework [ 35 ] implies that moving toward this role remains an aspirational ideal for the PHC system, but it is not an explicit policy-level goal. Patients are able to present to virtually any public or private facility and any level of care without referral or the need for registration. This is appropriate for the current system, because imposing a rigid referral system would be counterproductive. But there is a well-founded aspiration to move toward a more integrated system.

Continuity of care is weak: patients often experience their care as a series of disjointed and isolated interactions because the lack of empanelment and named clinicians hampers relational continuity; the non-systematic use of interoperable medical records hampers informational continuity; and weak care coordination and two-way communication between specialities hampers management continuity.

Comprehensiveness is mixed. A wide range of services are offered in the PHC sector, but poor planning means that there are large gaps and overlaps in the services offered within each community. There is no guidance on which services should be provided. However, a package of essential PHC services is in development. General facility readiness is 45 percent according to the 2017 SARA report [ 21 ].

Coordination is undermined by the current PHC model. Clinics are not responsible for a geographically-defined population of patients, and patients do not have a clinician who is responsible for coordinating their care. More generally, PHC does not play a coordinating role across the course of treatment and across sites of care. The unavailability of interoperable and unified patient records is a further impediment; and currently, secondary care is not expected to inform PHC teams of what they are doing for patients referred to them.

“Whole-person care” is very slowly supplanting disease-oriented care. The vast majority of doctors staffing PHC facilities are secondary care clinicians, steeped in the disease-oriented, post-preventative, biomedical model. Family physicians are taught the biopsychosocial approach to patient care and are gradually advancing the culture of shared decision-making with patients. However, at the time of writing, there were only 134 family physicians in the country.

Summary and policy recommendations

Service quality is highly variable, partly driven by the lack of an established package of essential health services and standards. Introducing an essential service package should help to raise quality by focusing procurement and logistical efforts on the core set of medicines and supplies required to offer the basic range of primary care services in every community. Similarly, introducing quality standards for essential staffing lists, job descriptions, and facility manager training standards is the right starting point in moving toward the reduction of unwanted variation in service quality.

Fragmented centralisation characterises and undermines the Libyan PHC system. PHC facilities do not have the financial or administrative authority to organise local staffing, supplies or services. Central government ministries are poorly coordinated and often lack adequate technical capacity, which can lead to gross inefficiencies in the distribution and deployment of PHC resources. This impacts comprehensiveness and service quality. The new PHC Institute could play a key role in coordinating the disparate government agencies to match supplies and staff to local population needs.

The absence of interoperable electronic health records is a critical weakness. Health surveillance, priority setting, performance management, and the tailoring of services to meet local needs are all possible with traditional paper records, but they are far less efficient than with electronic records. The introduction of DHIS-2 is welcome, but its implementers need to engage more with clinicians during the piloting stage. Besides boosting health surveillance and planning, a move to electronic patient records could also advance coordination and continuity of care across primary and secondary care settings.

Sustained high-level political buy-in is essential for the future of PHC. The current very modest budgetary allocation to PHC points to the fact that the value of primary care is not fully understood at the highest levels of government. Securing greater political investment in reforming the many agencies that impact primary care funding, procurement, staffing, and operations will not be easy, but this work is needed if the wider systemic problems that underlie the critical issues in the health sector are to be fixed. Breaking up the ambitious goal of PHC reform into a series of more manageable, shorter-term, “quick win” modules may help by giving politicians tangible and attainable legacy projects. Consolidating PHC policy leadership is also required to rationalize the disparate flows of governance and accountability.

The nascent culture of family medicine needs to be nurtured. First-class PHC is built on first-contact, continuous, coordinated, comprehensive, and person-centred services. These are the core principles of family medicine that are being advanced by the growing cadre of family doctors. The government should incentivize training with the aim of staffing every PHC facility with at least one family physician. The introduction of the gatekeeper role, empanelment, and interoperable medical records will further support the realization of high-quality primary health care, but patients are likely to resist any attempt to limit their choices until they perceive that their local facility is able to offer an adequate level of care.

The PHC workforce should be reoriented toward skilled generalists. The current workforce is disproportionately skewed toward administrators, low-level clinical assistants, and nurses trained to manage single diseases. Improved efficiency and governance structures are required to tilt the balance toward a smaller but more competent and broadly skilled workforce. Existing standalone communicable disease facilities and workers should be retrained and redeployed to provide a broader range of services, aligned with emerging health challenges such as non-communicable diseases and mental health problems.

COVID-19 has brought into sharp focus the potential role of PHC in providing comprehensive services across the spectrum—from outreach, prevention, and screening to diagnosis, treatment, and rehabilitation. Although the pandemic has crippled many facilities, it has also drawn attention to the importance of adequately resourcing the first level of the health system to enable it to provide comprehensive services.

As far as we are aware, this is the first holistic assessment of Libya’s primary care system. We found that the Libyan PHC system has a number of strengths. There is a high level of national policy commitment to universal health coverage, equitable service provision, and the development of a strong PHC network. There are a large number of PHC facilities, and a reasonable number of staff providing relatively good levels of geographic coverage. Physical and financial access to services is good, with short travel times and free care at the point of use. Staff tend to be highly committed to their work, and patient-provider respect and trust are high, according to the most recent surveys. Recognising the importance of family medicine, a small but growing number of doctors are being trained in this specialty.

However, broken financing, staffing, and procurement systems severely hamper quality. There is wide variation in the quality of care, the comprehensiveness of services on offer, and general service readiness. This is partly driven by the absence of national standards that spell out the staffing and other resources that ought to be available at every facility if they are to deliver a set list of essential medical services. Patients are savvy about which facilities function better and tend to bypass those that lack the requisite staff, medicines, or access to investigations. Routine mechanisms should be introduced to collect patient and community feedback on experiences of care.

The complex tension between centralisation and devolution requires careful unpacking. There are many proponents of the policy perspective that PHC managers should be vested with a higher degree of responsibility for staffing, procurement of medical supplies, and the design and delivery of services. However, a high degree of training, experience, professional integrity, and competence is required to undertake this complex responsibility well. Efforts to upskill the PHC manager workforce should be complemented by an effort to reform the currently centralized staffing and procurement systems with the aim of providing the resources needed to consistently deliver a basic package of services at every facility. PHC stakeholders also need to collectively agree on the best way forward, looking to the strengths and weaknesses of both approaches, and looking to learn from other countries.

Providing high-quality PHC will be difficult without the introduction of electronic health records, empanelment, and upscaling of the GP training scheme. In Libya, doctors do not currently perform the care coordination role, and PHC is not the first point of access to the health system. Care coordination and two-way referrals between primary and secondary care, and between different team members, are predicated on the existence of shared medical records. It is possible to send paper records with a patient, but interoperable medical records make this much easier. Empanelment connects patients with a given PHC facility and its clinical team. This gives clinicians greater ownership over patient care and forms the basis of capitation payment systems that can incentivise proactive health promotion and community engagement. The current PHC workforce is mainly comprised of auxiliary clinical staff and secondary care-trained doctors. There is a need for more nurses and doctors trained specifically in primary care, including patient coordination across the health system.

There is a widespread appetite for reform among PHC stakeholders, but a number of broader contextual factors pose a major threat. There is remarkable consensus among the various Libyan PHC agencies and NGOs that the system needs a wide range of core reforms. But ongoing conflict, political upheaval, and rapid personnel changes at the higher levels of government have made it extremely challenging to work toward long-term reform with any meaningful consistency. A strong civil society/patient voice calling for PHC reform does not currently exist, and recent budgetary discussions barely mentioned primary care.

The PHC community may be able to make incremental reforms and use pilot sites to test core innovations. Communicating the value of topflight primary care in a way that is digestible to the public and politicians—and the armed groups that pose a threat to PHC facilities, patients, and staff—may help to carve out the essential policy space for this work. The growing burden of noncommunicable diseases is widely perceived as a strategic threat to national health and wellbeing. These chronic diseases are best managed through primary care, and growing recognition of their significance should help spur deeper investment in PHC and the reforms required for continuous and coordinated management of long-term conditions.

This study has a number of strengths: our multidisciplinary team of Libyan and international primary care system experts conducted a careful search for all relevant policy documents and quantitative data. We did not perform a full systematic review because of time constraints; however, by partnering with national academic and policy leaders, we are confident that we did not miss any important studies or Libyan policy documents from the past decade. We employed the widely used PHCPI framework to ground our deductive analysis, and our exploratory mixed-methods approach enabled us to get the most out of our interviews. Given more than three months, we would have interviewed a wider range of Libyan care providers; however, there was a very high level of agreement amongst our interviewees, and we felt that we achieved data saturation. Ideally, we would also have interviewed patients, but this was not logistically possible at the time. Our findings are geared toward policymakers and will inform the next phase of national reform.

Whilst this assessment brings together multiple evidence streams, our understanding of Libya’s PHC system is still incomplete. Many of the underlying surveys are over five years old, and much has changed during the course of the pandemic. We have very little empirical data on community engagement, the use of surveillance data, priority setting, population outreach, models of care, out-of-pocket payments, information system use, and adherence to quality standards. Other important limitations centre on the paucity of data on the Libyan primary care system: besides the 2017 SARA report and the 2018 World Bank survey, there is very little high-quality empirical evidence on how the system is actually functioning across the country. We found major data gaps around the core aspects of high-quality primary care delivery (continuity, coordination, comprehensiveness, etc.), an absence of subnational data, and a lack of interval data, meaning that we are unable to comment on temporal trends. As such, we had to rely on our key informants, who are very well placed to provide a broad overview of the system. Better data are foundational to future attempts to reform the system.

Libya has a high number of staff and a high density of primary care facilities. Years of conflict have fundamentally broken the governance, financing, and logistics systems. The PHC Institute is well positioned to begin rationalising and consolidating PHC resourcing and governance, and should start with establishing basic standards.

Availability of data and materials

All underlying data are publicly available (Table  1 ) except for the interview transcripts. These transcripts will not be made available in order to protect participant confidentiality. For data requests please contact Dr Luke Allen, corresponding author at: [email protected].

World Bank. World Bank Country and Lending Groups. 2022. Available from: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups . Cited 2022 Mar 23.

UNFPA. World Population Dashboard -Libya. United Nations Population Fund. 2022. Available from: https://www.unfpa.org/data/world-population/LY . Cited 2022 Mar 23.

CIA. Libya - The World Factbook. 2022. Available from: https://www.cia.gov/the-world-factbook/countries/libya/ . Cited 2022 Mar 23.

Institute for Health Metrics and Evaluation. Viz Hub. 2020. Available from: http://vizhub.healthdata.org/gbd-compare . Cited 2021 Aug 19.

World Health Organization. Global Health Observatory (GHO). Available from: https://www.who.int/data/gho . Cited 2021 Jul 25.

IOM. Libya — Migrant Report 35 (January—February 2021). 2021. Available from: https://migration.iom.int/sites/default/files/public/reports/DTM_Libya_R35_Migrant_Report.pdf . Cited 2022 Mar 23.

Ritchie J, Spencer L. Qualitative Data Analysis for Applied Policy Research. In: Bryman A, Burgess R, editors. Analyzing Qualitative Data. London: Routledge; 1994. p. 173–94.


Pope C, Ziebland S, Mays N. Analysing qualitative data. BMJ. 2000;320(7227):114–6.


PHCPI. The PHCPI Conceptual Framework. 2022. Available from: https://improvingphc.org/phcpi-conceptual-framework . Cited 2022 Mar 23.

Veillard J, Cowling K, Bitton A, Ratcliffe H, Kimball M, Barkley S, et al. Better Measurement for Performance Improvement in Low- and Middle-Income Countries: The Primary Health Care Performance Initiative (PHCPI) Experience of Conceptual Framework Development and Indicator Selection. Milbank Q. 2017;95(4):836–83.


WHO and UNICEF. Declaration of Alma-Ata. 1978. Available from: https://www.who.int/teams/social-determinants-of-health/declaration-of-alma-ata . Cited 2022 Mar 23.

WHO and UNICEF. Declaration of Astana on Primary Health Care. 2018. Available from: https://www.who.int/teams/primary-health-care/conference/declaration .  Cited 2022 Mar 23.

Starfield B, Shi L, Macinko J. Contribution of Primary Care to Health Systems and Health. Milbank Q. 2005;83(3):457–502.

Curry L, Nunez-Smith M. Mixed Methods in Health Sciences Research: A Practical Primer. 2455 Teller Road, Thousand Oaks California 91320: SAGE Publications, Inc.; 2015. Available from: http://methods.sagepub.com/book/mixed-methods-in-health-sciences-research-a-practical-primer . Cited 2021 Nov 10.

Cresswell J. A Concise Introduction to Mixed Methods Research. London: Sage; 2015. Available from: https://www.worldofbooks.com/en-gb/books/john-w-creswell/concise-introduction-to-mixed-methods-research/9781483359045?gclid=Cj0KCQjwl9GCBhDvARIsAFunhsnPh-KIFniCodadPK2uMBxDWAKSFl3Z27gqfF1Grs4IsWCWgwFCA9QaAjMrEALw_wcB . Cited 2021 Mar 19.

Denscombe M. Communities of Practice: A Research Paradigm for the Mixed Methods Approach. J Mixed Methods Res. 2008;2(3):270–83.


Johnson RB, Onwuegbuzie AJ. Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educ Res. 2004;33(7):14–26.

World Health Organization. Universal health coverage (UHC). 2021. Available from: https://www.who.int/news-room/fact-sheets/detail/universal-health-coverage-(uhc) . Cited 2021 Nov 11.

World Bank. Libya Primary Health Care Survey Analysis. 2018. Available from: https://seha.ly/wp-content/uploads/2020/08/Libya-Satisfaction-Survey-Findings.pdf

World Bank. World Bank family health survey. 2018.

WHO EMRO and Libyan MoH. Service Availability and Readiness Assessment (SARA) report: Libya. 2017. Available from: https://www.emro.who.int/lby/libya-infocus/service-availability-and-readiness-assessment-sara-report.html

Libyan Ministry of Health. Primary Health Care strategy 2020–2022. Tripoli; 2020.

Ministry of Health and National Centre for Health Sector Reform. Well and Healthy Libya: National Health Policy, 2030. 2020.

Libyan Ministry of Health. The Libyan Health System: Study of Medical and Allied Health Education and Training Institutions. 2017.

National Centre for Health Sector Reform and WHO. Reorganized Structure of the Ministry of Health. 2020.

World Bank Libya Local Governance Forum. Case study: Service Delivery in the Perspective of the Health Sector in Libya. 2019.

ReliefWeb. Libya: Health Sector Bulletin (August 2021) - Libya. ReliefWeb. Available from: https://reliefweb.int/report/libya/libya-health-sector-bulletin-august-2021 . Cited 2021 Sep 15.

WHO EMRO. Libya: PHC country profile and vital signs. PHCPI. 2015. Available from: http://www.emro.who.int/images/stories/phc/libya_vsp.pdf?ua=1 . Cited 2022 Jun 16.

WHO Libya Country Office. Libya Annual Report 2020. 2020.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

WHO. A vision for primary health care in the 21st century: towards universal health coverage and the Sustainable Development Goals. 2018. Available from: https://apps.who.int/iris/handle/10665/328065 . Cited 2022 Mar 23.

Government of Libya. National Budget, Chapter 1. Tripoli; 2018.

World Bank. Out-of-pocket expenditure (% of current health expenditure) - Libya. 2022. Available from: https://data.worldbank.org/indicator/SH.XPD.OOPC.CH.ZS?locations=LY . Cited 2022 Mar 23.

Sattar A. The Vital Registration and Statistic System in Libya and its Improvement: technical paper No. 47. 1991.

WHO. Operational Framework for Primary Health Care. 2020. Available from: https://www.who.int/publications-detail-redirect/9789240017832 . Cited 2022 Mar 23.


Acknowledgements

This paper was produced by the World Bank (WB) in collaboration with the Ministry of Health (MOH), Government of Libya. It is an output of the Libya Health Sector Support Grant (P163565) program led by Christopher H. Herbst, Senior Health Specialist, WB, and Mohini Kak, Senior Health Specialist, WB.

The MOH and the WB do not guarantee the accuracy of the data included in this work. The findings, interpretations, and conclusions expressed in this work are those of the authors and do not necessarily reflect the views of the Ministry of Health of Libya or the World Bank, its Board of Directors, or the governments they represent.

This study was funded by the World Bank Group.

Author information

Authors and Affiliations

University of Oxford Centre for Global Primary Care, Oxford, UK

Luke N. Allen

World Bank, San Francisco, USA

Arian Hatefi

World Bank, Tunis, Tunisia

World Bank, MENA, Riyadh, Saudi Arabia

Christopher H. Herbst

Economics By Design, London, UK

Jacqueline Mallender

PHC Institute, Tripoli, Libya

Ghassan Karem


Contributions

This paper was produced by the World Bank (WB) in collaboration with Ministry of Health (MOH), Government of Libya. It is an output of the Libya Health Sector Support Grant (P163565) program led by CH and MK. The project was conceived by GK, MK, CH and AH. LA devised the methods with input from all authors. LA, MK, GK and AH collected the data. LA performed the primary analysis and wrote the paper with contributions from all other authors. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Luke N. Allen .

Ethics declarations

Ethics approval and consent to participate

Interviewees were fully informed of the project scope and provided consent to participate.

All methods were carried out in accordance with The Declaration of Helsinki. After the project was completed it became possible to seek retrospective review from the Libyan Ministry of Health National Centre for Disease Control Ethics Committee. This committee found no issues with the protocol or the way that the project had been conducted.

Consent for publication

All participants and co-authors consented to publication of anonymised findings.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Allen, L.N., Hatefi, A., Kak, M. et al. A rapid mixed-methods assessment of Libya’s primary care system. BMC Health Serv Res 24 , 721 (2024). https://doi.org/10.1186/s12913-024-11121-w


Received : 22 March 2023

Accepted : 19 May 2024

Published : 11 June 2024

DOI : https://doi.org/10.1186/s12913-024-11121-w


  • Primary care
  • Health systems
  • Primary health care
  • Global health
  • Mixed-methods

BMC Health Services Research

ISSN: 1472-6963

OpenAI Offers a Peek Inside the Guts of ChatGPT

By Will Knight


ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper , researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.

Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.

The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI —are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.

“Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with the help of an additional machine learning model. The key innovation is a more efficient way of refining the network used to peer inside the system of interest and identify concepts.
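The core idea behind this kind of interpretability work is a sparse autoencoder: a second, wider network trained to reconstruct the big model’s internal activations through a bottleneck in which only a few units may fire at once, so each unit tends to line up with one recoverable concept. The toy numpy sketch below illustrates that idea on synthetic data standing in for real activations; the sizes, the plain top-k sparsity rule, and the hand-rolled gradient steps are all illustrative assumptions, not OpenAI’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_model, d_code, k = 1024, 8, 32, 4  # samples, activation dim, dictionary size, active units

# Synthetic stand-in for a model's internal activations:
# sparse mixtures of ground-truth "concept" directions.
concepts = rng.normal(size=(d_code, d_model))
coeffs = rng.random((n, d_code)) * (rng.random((n, d_code)) < 0.1)
acts = coeffs @ concepts

W_enc = rng.normal(scale=0.1, size=(d_model, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_model))

def topk(z, k):
    # Zero out all but the k largest entries in each row (the sparsity constraint).
    thresh = np.sort(z, axis=1)[:, -k][:, None]
    return np.where(z >= thresh, z, 0.0)

def forward(x):
    s = topk(x @ W_enc, k)   # sparse code: at most k active "concept" units
    return s, s @ W_dec      # reconstruction of the original activation

lr = 0.1
initial_loss = float(((forward(acts)[1] - acts) ** 2).mean())

for _ in range(1000):
    s, recon = forward(acts)
    err = 2.0 * (recon - acts) / acts.size       # gradient of mean squared error
    W_dec -= lr * (s.T @ err)
    ds = (err @ W_dec.T) * (s != 0)              # pass gradient only through active units
    W_enc -= lr * (acts.T @ ds)

final_loss = float(((forward(acts)[1] - acts) ** 2).mean())
print(initial_loss, final_loss)  # reconstruction error drops as units align with concepts
```

Scaled up by many orders of magnitude, the rows of the learned decoder play the role of the concept directions that a visualization tool like OpenAI’s can display, and that one might try to dial up or down.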

OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.

Even though LLMs defy easy interrogation, a growing body of research suggests they can be poked and prodded in ways that reveal useful information. Anthropic, an OpenAI competitor backed by Amazon and Google, published similar work on AI interpretability last month. To demonstrate how the behavior of AI systems might be tuned, the company’s researchers created a chatbot obsessed with San Francisco’s Golden Gate Bridge. And simply asking an LLM to explain its reasoning can sometimes yield insights.

“It’s exciting progress,” says David Bau, a professor at Northeastern University who works on AI explainability, of the new OpenAI research. “As a field, we need to be learning how to understand and scrutinize these large models much better.”

Bau says the OpenAI team’s main innovation is in showing a more efficient way to configure a small neural network that can be used to understand the components of a larger one. But he also notes that the technique needs to be refined to make it more reliable. “There’s still a lot of work ahead in using these methods to create fully understandable explanations,” Bau says.

Bau is part of a US government-funded effort called the National Deep Inference Fabric, which will make cloud computing resources available to academic researchers so that they too can probe especially powerful AI models. “We need to figure out how we can enable scientists to do this work even if they are not working at these large companies,” he says.

OpenAI’s researchers acknowledge in their paper that further work needs to be done to improve their method, but also say they hope it will lead to practical ways to control AI models. “We hope that one day, interpretability can provide us with new ways to reason about model safety and robustness, and significantly increase our trust in powerful AI models by giving strong assurances about their behavior,” they write.
