Writing theoretical frameworks, analytical frameworks and conceptual frameworks

Three of the most challenging concepts for me to explain are the interrelated ideas of a theoretical framework, a conceptual framework, and an analytical framework. All three of these tend to be used interchangeably. While I find these concepts somewhat fuzzy and I struggle sometimes to explain the differences between them and clarify their usage for my students (and clearly I am not alone in this challenge), this blog post is an attempt to help discern these analytical categories more clearly.

A lot of people (my own students included) have asked me if the theoretical framework is their literature review. That’s actually not the case. A theoretical framework, the way I define it, comprises the different theories and theoretical constructs that help explain a phenomenon. A theoretical framework sets out the various expectations that a theory posits, how they would apply to a specific case under analysis, and how one would use theory to explain a particular phenomenon. I like how theoretical frameworks are defined in this blog post. Dr. Cyrus Samii offers an explanation of what a good theoretical framework does for students.

For example, you can use framing theory to help you explain how different actors perceive the world. Your theoretical framework may be based on theories of framing, but it can also include others. In this paper, for instance, Zeitoun and Allan explain their theoretical framework, aptly named hydro-hegemony. In doing so, they explain the role of each theoretical construct (Power, Hydro-Hegemony, Political Economy) and how these apply to transboundary water conflict. Another good example of a theoretical framework is the one posited by Dr. Michael J. Bloomfield in his book Dirty Gold, as I mention in this tweet:

In Chapter 2, @mj_bloomfield nicely sets his theoretical framework borrowing from sociology, IR, and business-strategy scholarship pic.twitter.com/jTGF4PPymn — Dr Raul Pacheco-Vega (@raulpacheco) December 24, 2017

An analytical framework is, the way I see it, a model that helps explain how a certain type of analysis will be conducted. For example, in this paper, Franks and Cleaver develop an analytical framework that draws on scholarship on poverty measurement to help us understand how water governance and poverty are interrelated. Other authors describe an analytical framework as a “conceptual framework that helps analyse particular phenomena”, as posited here (an ungated version can be read here).

I think it’s easy to conflate analytical frameworks with theoretical and conceptual ones because of the way in which concepts, theories and ideas are harnessed to explain a phenomenon. But I believe the most important element of an analytical framework is its instrumental purpose: it exists to help undertake analyses. You use the elements of an analytical framework to deconstruct a specific concept, set of concepts, or phenomenon. For example, in this paper, Bodde et al. develop an analytical framework to characterise sources of uncertainty in strategic environmental assessments.

A robust conceptual framework describes the different concepts one would need to know to understand a particular phenomenon, without pretending to create causal links across variables and outcomes. In my view, theoretical frameworks set expectations, because theories are constructs that help explain relationships between variables and specific outcomes and responses. Conceptual frameworks, the way I see them, are like lenses through which you can see a particular phenomenon.

A conceptual framework should serve to illuminate and clarify fuzzy ideas, and to fill lacunae. Viewed this way, a conceptual framework offers insight that would not otherwise be gained without a more profound understanding of the concepts explained in the framework. For example, in this article, Beck offers social movement theory as a conceptual framework that can help us understand terrorism. As I explained in my metaphor above, social movement theory is the lens through which you see terrorism, and you get a clearer understanding of how it operates precisely because you used this particular theory.

Dan Kaminsky offered a really interesting explanation connecting these topics to time; read his tweet below.

I think this maps to time. Theoretical frameworks talk about how we got here. Conceptual frameworks discuss what we have. Analytical frameworks discuss where we can go with this. See also legislative/executive/judicial. — Dan Kaminsky (@dakami) September 28, 2018

One of my CIDE students, Andres Ruiz, reminded me of this article on conceptual frameworks in the International Journal of Qualitative Methods. I’ll also be adding resources as I get them via Twitter or email. Hopefully this blog post will help clarify this idea!

By Raul Pacheco-Vega – September 28, 2018

Organizing Your Social Sciences Research Paper: Theoretical Framework

Theories are formulated to explain, predict, and understand phenomena and, in many cases, to challenge and extend existing knowledge within the limits of critical bounded assumptions or predictions of behavior. The theoretical framework is the structure that can hold or support a theory of a research study. The theoretical framework encompasses not just the theory, but the narrative explanation about how the researcher engages in using the theory and its underlying assumptions to investigate the research problem. It is the structure of your paper that summarizes the concepts, ideas, and theories derived from prior research studies, which are synthesized in order to form a conceptual basis for your analysis and interpretation of the meaning found within your research.

Abend, Gabriel. "The Meaning of Theory." Sociological Theory 26 (June 2008): 173–199; Kivunja, Charles. "Distinguishing between Theory, Theoretical Framework, and Conceptual Framework: A Systematic Review of Lessons from the Field." International Journal of Higher Education 7 (December 2018): 44-53; Swanson, Richard A. Theory Building in Applied Disciplines. San Francisco, CA: Berrett-Koehler Publishers, 2013; Varpio, Lara, Elise Paradis, Sebastian Uijtdehaage, and Meredith Young. "The Distinctions between Theory, Theoretical Framework, and Conceptual Framework." Academic Medicine 95 (July 2020): 989-994.

Importance of Theory and a Theoretical Framework

Theories can be unfamiliar to the beginning researcher because they are rarely applied in high school social studies curricula and, as a result, can come across as abstract and imprecise when first introduced as part of a writing assignment. However, in its most simplified form, a theory is simply a set of assumptions or predictions about something you think will happen, based on existing evidence, that can be tested to see if those outcomes turn out to be true. Of course, it is slightly more deliberate than that; therefore, summarized from Kivunja (2018, p. 46), here are the essential characteristics of a theory.

  • It is logical and coherent
  • It has clear definitions of terms or variables, and has boundary conditions [i.e., it is not an open-ended statement]
  • It has a domain where it applies
  • It has clearly described relationships among variables
  • It describes, explains, and makes specific predictions
  • It comprises concepts, themes, principles, and constructs
  • It must have been based on empirical data [i.e., it is not a guess]
  • It must have made claims that are subject to testing, and those claims must have been tested and verified
  • It must be clear and concise
  • Its assertions or predictions must be different and better than those in existing theories
  • Its predictions must be general enough to be applicable to and understood within multiple contexts
  • Its assertions or predictions are relevant, and if applied as predicted, will result in the predicted outcome
  • The assertions and predictions are not immutable, but subject to revision and improvement as researchers use the theory to make sense of phenomena
  • Its concepts and principles explain what is going on and why
  • Its concepts and principles are substantive enough to enable us to predict a future

Given these characteristics, a theory can best be understood as the foundation from which you investigate assumptions or predictions derived from previous studies about the research problem, but in a way that leads to new knowledge and understanding as well as, in some cases, discovering how to improve the relevance of the theory itself or to argue that the theory is outdated and a new theory needs to be formulated based on new evidence.

A theoretical framework consists of concepts and, together with their definitions and reference to relevant scholarly literature, existing theory that is used for your particular study. The theoretical framework must demonstrate an understanding of theories and concepts that are relevant to the topic of your research paper and that relate to the broader areas of knowledge being considered.

The theoretical framework is most often not something readily found within the literature. You must review course readings and pertinent research studies for theories and analytic models that are relevant to the research problem you are investigating. The selection of a theory should depend on its appropriateness, ease of application, and explanatory power.

The theoretical framework strengthens the study in the following ways:

  • An explicit statement of theoretical assumptions permits the reader to evaluate them critically.
  • The theoretical framework connects the researcher to existing knowledge. Guided by a relevant theory, you are given a basis for your hypotheses and choice of research methods.
  • Articulating the theoretical assumptions of a research study forces you to address questions of why and how. It permits you to intellectually transition from simply describing a phenomenon you have observed to generalizing about various aspects of that phenomenon.
  • Having a theory helps you identify the limits to those generalizations. A theoretical framework specifies which key variables influence a phenomenon of interest and highlights the need to examine how those key variables might differ and under what circumstances.
  • The theoretical framework adds context around the theory itself based on how scholars had previously tested the theory in relation to their overall research design [i.e., purpose of the study, methods of collecting data or information, methods of analysis, the time frame in which information is collected, study setting, and the methodological strategy used to conduct the research].

By virtue of its applicative nature, good theory in the social sciences is of value precisely because it fulfills one primary purpose: to explain the meaning, nature, and challenges associated with a phenomenon, often experienced but unexplained in the world in which we live, so that we may use that knowledge and understanding to act in more informed and effective ways.

The Conceptual Framework. College of Education. Alabama State University; Corvellec, Hervé, ed. What is Theory?: Answers from the Social and Cultural Sciences . Stockholm: Copenhagen Business School Press, 2013; Asher, Herbert B. Theory-Building and Data Analysis in the Social Sciences . Knoxville, TN: University of Tennessee Press, 1984; Drafting an Argument. Writing@CSU. Colorado State University; Kivunja, Charles. "Distinguishing between Theory, Theoretical Framework, and Conceptual Framework: A Systematic Review of Lessons from the Field." International Journal of Higher Education 7 (2018): 44-53; Omodan, Bunmi Isaiah. "A Model for Selecting Theoretical Framework through Epistemology of Research Paradigms." African Journal of Inter/Multidisciplinary Studies 4 (2022): 275-285; Ravitch, Sharon M. and Matthew Riggan. Reason and Rigor: How Conceptual Frameworks Guide Research . Second edition. Los Angeles, CA: SAGE, 2017; Trochim, William M.K. Philosophy of Research. Research Methods Knowledge Base. 2006; Jarvis, Peter. The Practitioner-Researcher. Developing Theory from Practice . San Francisco, CA: Jossey-Bass, 1999.

Strategies for Developing the Theoretical Framework

I.  Developing the Framework

Here are some strategies for developing an effective theoretical framework:

  • Examine your thesis title and research problem. The research problem anchors your entire study and forms the basis from which you construct your theoretical framework.
  • Brainstorm about what you consider to be the key variables in your research. Answer the question, "What factors contribute to the presumed effect?"
  • Review related literature to find how scholars have addressed your research problem. Identify the assumptions from which the author(s) addressed the problem.
  • List the constructs and variables that might be relevant to your study. Group these variables into independent and dependent categories.
  • Review key social science theories that are introduced to you in your course readings and choose the theory that can best explain the relationships between the key variables in your study [note the Writing Tip on this page].
  • Discuss the assumptions or propositions of this theory and point out their relevance to your research.

A theoretical framework is used to limit the scope of the relevant data by focusing on specific variables and defining the specific viewpoint [framework] that the researcher will take in analyzing and interpreting the data to be gathered. It also facilitates the understanding of concepts and variables according to given definitions and builds new knowledge by validating or challenging theoretical assumptions.

II.  Purpose

Think of theories as the conceptual basis for understanding, analyzing, and designing ways to investigate relationships within social systems. To that end, the following roles served by a theory can help guide the development of your framework.

  • Means by which new research data can be interpreted and coded for future use,
  • Response to new problems that have no previously identified solution strategy,
  • Means for identifying and defining research problems,
  • Means for prescribing or evaluating solutions to research problems,
  • Means of discerning which facts among the accumulated knowledge are important and which are not,
  • Means of giving old data new interpretations and new meaning,
  • Means by which to identify important new issues and prescribe the most critical research questions that need to be answered to maximize understanding of the issue,
  • Means of providing members of a professional discipline with a common language and a frame of reference for defining the boundaries of their profession, and
  • Means to guide and inform research so that it can, in turn, guide research efforts and improve professional practice.

Adapted from: Torraco, R. J. “Theory-Building Research Methods.” In Swanson R. A. and E. F. Holton III , editors. Human Resource Development Handbook: Linking Research and Practice . (San Francisco, CA: Berrett-Koehler, 1997): pp. 114-137; Jacard, James and Jacob Jacoby. Theory Construction and Model-Building Skills: A Practical Guide for Social Scientists . New York: Guilford, 2010; Ravitch, Sharon M. and Matthew Riggan. Reason and Rigor: How Conceptual Frameworks Guide Research . Second edition. Los Angeles, CA: SAGE, 2017; Sutton, Robert I. and Barry M. Staw. “What Theory is Not.” Administrative Science Quarterly 40 (September 1995): 371-384.

Structure and Writing Style

The theoretical framework may be rooted in a specific theory, in which case, your work is expected to test the validity of that existing theory in relation to specific events, issues, or phenomena. Many social science research papers fit into this rubric. For example, Peripheral Realism Theory, which categorizes perceived differences among nation-states as those that give orders, those that obey, and those that rebel, could be used as a means for understanding conflicted relationships among countries in Africa. A test of this theory could be the following: Does Peripheral Realism Theory help explain intra-state actions, such as the disputed split between southern and northern Sudan that led to the creation of two nations?

However, you may not always be asked by your professor to test a specific theory in your paper, but to develop your own framework from which your analysis of the research problem is derived. Based upon the above example, it is perhaps easiest to understand the nature and function of a theoretical framework if it is viewed as an answer to two basic questions:

  • What is the research problem/question? [e.g., "How should the individual and the state relate during periods of conflict?"]
  • Why is your approach a feasible solution? [i.e., justify the application of your choice of a particular theory and explain why alternative constructs were rejected. I could choose instead to test Instrumentalist or Circumstantialist models developed among ethnic conflict theorists that rely upon socio-economic-political factors to explain individual-state relations and to apply this theoretical model to periods of war between nations].

The answers to these questions come from a thorough review of the literature and your course readings [summarized and analyzed in the next section of your paper] and the gaps in the research that emerge from the review process. With this in mind, a complete theoretical framework will likely not emerge until after you have completed a thorough review of the literature.

Just as a research problem in your paper requires contextualization and background information, a theory requires a framework for understanding its application to the topic being investigated. When writing and revising this part of your research paper, keep in mind the following:

  • Clearly describe the framework, concepts, models, or specific theories that underpin your study. This includes noting who the key theorists are in the field who have conducted research on the problem you are investigating and, when necessary, the historical context that supports the formulation of that theory. This latter element is particularly important if the theory is relatively unknown or it is borrowed from another discipline.
  • Position your theoretical framework within a broader context of related frameworks, concepts, models, or theories. As noted in the example above, there will likely be several concepts, theories, or models that can be used to help develop a framework for understanding the research problem. Therefore, note why the theory you've chosen is the appropriate one.
  • The present tense is used when writing about theory. Although the past tense can be used to describe the history of a theory or the role of key theorists, the construction of your theoretical framework is happening now.
  • You should make your theoretical assumptions as explicit as possible. Later, your discussion of methodology should be linked back to this theoretical framework.
  • Don’t just take what the theory says as a given! Reality is never accurately represented in such a simplistic way; if you imply that it can be, you fundamentally distort a reader's ability to understand the findings that emerge. Given this, always note the limitations of the theoretical framework you've chosen [i.e., what parts of the research problem require further investigation because the theory inadequately explains certain phenomena].

The Conceptual Framework. College of Education. Alabama State University; Conceptual Framework: What Do You Think is Going On? College of Engineering. University of Michigan; Drafting an Argument. Writing@CSU. Colorado State University; Lynham, Susan A. “The General Method of Theory-Building Research in Applied Disciplines.” Advances in Developing Human Resources 4 (August 2002): 221-241; Tavallaei, Mehdi and Mansor Abu Talib. "A General Perspective on the Role of Theory in Qualitative Research." Journal of International Social Research 3 (Spring 2010); Ravitch, Sharon M. and Matthew Riggan. Reason and Rigor: How Conceptual Frameworks Guide Research . Second edition. Los Angeles, CA: SAGE, 2017; Reyes, Victoria. Demystifying the Journal Article. Inside Higher Education; Trochim, William M.K. Philosophy of Research. Research Methods Knowledge Base. 2006; Weick, Karl E. “The Work of Theorizing.” In Theorizing in Social Science: The Context of Discovery . Richard Swedberg, editor. (Stanford, CA: Stanford University Press, 2014), pp. 177-194.

Writing Tip

Borrowing Theoretical Constructs from Other Disciplines

An increasingly important trend in the social and behavioral sciences is to think about and attempt to understand research problems from an interdisciplinary perspective. One way to do this is to not rely exclusively on the theories developed within your particular discipline, but to think about how an issue might be informed by theories developed in other disciplines. For example, if you are a political science student studying the rhetorical strategies used by female incumbents in state legislature campaigns, theories about the use of language could be derived not only from political science but also from linguistics, communication studies, philosophy, psychology, and, in this particular case, feminist studies. Building theoretical frameworks based on the postulates and hypotheses developed in other disciplinary contexts can be both enlightening and an effective way to be more engaged in the research topic.

CohenMiller, A. S. and P. Elizabeth Pate. "A Model for Developing Interdisciplinary Research Theoretical Frameworks." The Qualitative Researcher 24 (2019): 1211-1226; Frodeman, Robert. The Oxford Handbook of Interdisciplinarity . New York: Oxford University Press, 2010.

Another Writing Tip

Don't Undertheorize!

Do not leave the theory hanging out there in the introduction never to be mentioned again. Undertheorizing weakens your paper. The theoretical framework you describe should guide your study throughout the paper. Be sure to always connect theory to the review of pertinent literature and to explain in the discussion part of your paper how the theoretical framework you chose supports analysis of the research problem or, if appropriate, how the theoretical framework was found to be inadequate in explaining the phenomenon you were investigating. In that case, don't be afraid to propose your own theory based on your findings.

Yet Another Writing Tip

What's a Theory? What's a Hypothesis?

The terms theory and hypothesis are often used interchangeably in newspapers, popular magazines, and non-academic settings. However, the difference between theory and hypothesis in scholarly research is important, particularly when using an experimental design. A theory is a well-established principle that has been developed to explain some aspect of the natural world. Theories arise from repeated observation and testing and incorporate facts, laws, predictions, and tested assumptions that are widely accepted [e.g., rational choice theory; grounded theory; critical race theory].

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your research.

The key distinctions are:

  • A theory predicts events in a broad, general context; a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted among a set of scholars; a hypothesis is a speculative guess that has yet to be tested.

Cherry, Kendra. Introduction to Research Methods: Theory and Hypothesis. About.com Psychology; Gezae, Michael et al. Welcome Presentation on Hypothesis. Slideshare presentation.

Still Yet Another Writing Tip

Be Prepared to Challenge the Validity of an Existing Theory

Theories are meant to be tested and their underlying assumptions challenged; they are not rigid or intransigent, but are meant to set forth general principles for explaining phenomena or predicting outcomes. Given this, testing theoretical assumptions is an important way that knowledge in any discipline develops and grows. If you're asked to apply an existing theory to a research problem, the analysis will likely include the expectation by your professor that you should offer modifications to the theory based on your research findings.

Indications that theoretical assumptions may need to be modified can include the following:

  • Your findings suggest that the theory does not explain or account for current conditions or circumstances or the passage of time,
  • The study reveals a finding that is incompatible with what the theory attempts to explain or predict, or
  • Your analysis reveals that the theory overly generalizes behaviors or actions without taking into consideration specific factors revealed from your analysis [e.g., factors related to culture, nationality, history, gender, ethnicity, age, geographic location, legal norms or customs, religion, social class, socioeconomic status, etc.].

Philipsen, Kristian. "Theory Building: Using Abductive Search Strategies." In Collaborative Research Design: Working with Business for Meaningful Findings . Per Vagn Freytag and Louise Young, editors. (Singapore: Springer Nature, 2018), pp. 45-71; Shepherd, Dean A. and Roy Suddaby. "Theory Building: A Review and Integration." Journal of Management 43 (2017): 59-86.

What Is a Theoretical Framework? | Guide to Organizing

Published on October 14, 2022 by Sarah Vinz. Revised on November 20, 2023 by Tegan George.

A theoretical framework is a foundational review of existing theories that serves as a roadmap for developing the arguments you will use in your own work.

Theories are developed by researchers to explain phenomena, draw connections, and make predictions. In a theoretical framework, you explain the existing theories that support your research, showing that your paper or dissertation topic is relevant and grounded in established ideas.

In other words, your theoretical framework justifies and contextualizes your later research, and it’s a crucial first step for your research paper, thesis, or dissertation. A well-rounded theoretical framework sets you up for success later on in your research and writing process.

Before you start your own research, it’s crucial to familiarize yourself with the theories and models that other researchers have already developed. Your theoretical framework is your opportunity to present and explain what you’ve learned, situated within your future research topic.

There’s a good chance that many different theories about your topic already exist, especially if the topic is broad. In your theoretical framework, you will evaluate, compare, and select the most relevant ones.

By “framing” your research within a clearly defined field, you make the reader aware of the assumptions that inform your approach, showing the rationale behind your choices for later sections, like methodology and discussion. This part of your dissertation lays the foundations that will support your analysis, helping you interpret your results and make broader generalizations.

  • In literature, a scholar using postmodernist literary theory would analyze The Great Gatsby differently than a scholar using Marxist literary theory.
  • In psychology, a behaviorist approach to depression would involve different research methods and assumptions than a psychoanalytic approach.
  • In economics, wealth inequality would be explained and interpreted differently based on a classical economics approach than based on a Keynesian economics one.

To create your own theoretical framework, you can follow these three steps:

  • Identifying your key concepts
  • Evaluating and explaining relevant theories
  • Showing how your research fits into existing research

1. Identify your key concepts

The first step is to pick out the key terms from your problem statement and research questions. Concepts often have multiple definitions, so your theoretical framework should also clearly define what you mean by each term.

For example, suppose you are researching customer satisfaction at a company (“company X”) whose online customers rarely return to make further purchases. To investigate this problem, you have identified and plan to focus on the following problem statement, objective, and research questions:

Problem : Many online customers do not return to make subsequent purchases.

Objective : To increase the quantity of return customers.

Research question : How can the satisfaction of company X’s online customers be improved in order to increase the quantity of return customers?

2. Evaluate and explain relevant theories

By conducting a thorough literature review, you can determine how other researchers have defined these key concepts and drawn connections between them. As you write your theoretical framework, your aim is to compare and critically evaluate the approaches that different authors have taken.

After discussing different models and theories, you can establish the definitions that best fit your research and justify why. You can even combine theories from different fields to build your own unique framework if this better suits your topic.

Make sure to at least briefly mention each of the most important theories related to your key concepts. If there is a well-established theory that you don’t want to apply to your own research, explain why it isn’t suitable for your purposes.

3. Show how your research fits into existing research

Apart from summarizing and discussing existing theories, your theoretical framework should show how your project will make use of these ideas and take them a step further.

You might aim to do one or more of the following:

  • Test whether a theory holds in a specific, previously unexamined context
  • Use an existing theory as a basis for interpreting your results
  • Critique or challenge a theory
  • Combine different theories in a new or unique way

A theoretical framework can sometimes be integrated into a literature review chapter, but it can also be included as its own chapter or section in your dissertation. As a rule of thumb, if your research involves dealing with a lot of complex theories, it’s a good idea to include a separate theoretical framework chapter.

There are no fixed rules for structuring your theoretical framework, but it’s best to double-check with your department or institution to make sure they don’t have any formatting guidelines. The most important thing is to create a clear, logical structure. There are a few ways to do this:

  • Draw on your research questions, structuring each section around a question or key concept
  • Organize by theory cluster
  • Organize by date

It’s important that the information in your theoretical framework is clear for your reader. Make sure to ask a friend to read this section for you, or use a professional proofreading service.

As in all other parts of your research paper, thesis, or dissertation, make sure to properly cite your sources to avoid plagiarism.

To get a sense of what this part of your thesis or dissertation might look like, take a look at our full example.

While a theoretical framework describes the theoretical underpinnings of your work based on existing research, a conceptual framework allows you to draw your own conclusions, mapping out the variables you may use in your study and the interplay between them.

A literature review and a theoretical framework are not the same thing and cannot be used interchangeably. While a theoretical framework describes the theoretical underpinnings of your work, a literature review critically evaluates existing research relating to your topic. You’ll likely need both in your dissertation.

Cite this Scribbr article

Vinz, S. (2023, November 20). What Is a Theoretical Framework? | Guide to Organizing. Scribbr. Retrieved June 24, 2024, from https://www.scribbr.com/dissertation/theoretical-framework/

  • Correspondence
  • Open access
  • Published: 18 September 2013

Using the framework method for the analysis of qualitative data in multi-disciplinary health research

  • Nicola K Gale
  • Gemma Heath
  • Elaine Cameron
  • Sabina Rashid
  • Sabi Redwood

BMC Medical Research Methodology, volume 13, Article number: 117 (2013)

The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations.

The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study.

Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.

The Framework Method for the management and analysis of qualitative data has been used since the 1980s [ 1 ]. The method originated in large-scale social policy research but is becoming an increasingly popular approach in medical and health research; however, there is some confusion about its potential application and limitations. In this article we discuss when it is appropriate to use the Framework Method and how it compares to other qualitative analysis methods. In particular, we explore how it can be used in multi-disciplinary health research teams. Multi-disciplinary and mixed methods studies are becoming increasingly commonplace in applied health research. As well as disciplines familiar with qualitative research, such as nursing, psychology and sociology, teams often include epidemiologists, health economists, management scientists and others. Furthermore, applied health research often has clinical representation and, increasingly, patient and public involvement [ 2 ]. We argue that while leadership is undoubtedly required from an experienced qualitative methodologist, non-specialists from the wider team can and should be involved in the analysis process. We then present a step-by-step guide to the application of the Framework Method, using a worked example (see Additional File 1 ) from a published study [ 3 ] to illustrate the main stages of the process. Technical terms are included in the glossary (below). Finally, we discuss the strengths and limitations of the approach.

Glossary of key terms used in the Framework Method

Analytical framework: A set of codes organised into categories that have been jointly developed by researchers involved in analysis that can be used to manage and organise the data. The framework creates a new structure for the data (rather than the full original accounts given by participants) that is helpful to summarize/reduce the data in a way that can support answering the research questions.

Analytic memo: A written investigation of a particular concept, theme or problem, reflecting on emerging issues in the data that captures the analytic process (see Additional file 1 , Section 7).

Categories: During the analysis process, codes are grouped into clusters around similar and interrelated ideas or concepts. Categories and codes are usually arranged in a tree diagram structure in the analytical framework. While categories are closely and explicitly linked to the raw data, developing categories is a way to start the process of abstraction of the data (i.e. towards the general rather than the specific or anecdotal).

Charting: Entering summarized data into the Framework Method matrix (see Additional File 1 , Section 6).

Code: A descriptive or conceptual label that is assigned to excerpts of raw data in a process called ‘coding’ (see Additional File 1 , Section 3).

Data: Qualitative data usually needs to be in textual form before analysis. These texts can either be elicited texts (written specifically for the research, such as food diaries), or extant texts (pre-existing texts, such as meeting minutes, policy documents or weblogs), or can be produced by transcribing interview or focus group data, or creating ‘field’ notes while conducting participant-observation or observing objects or social situations.

Indexing: The systematic application of codes from the agreed analytical framework to the whole dataset (see Additional File 1 , Section 5).

Matrix: A spreadsheet contains numerous cells into which summarized data are entered by codes (columns) and cases (rows) (see Additional File 1 , Section 6).

Themes: Interpretive concepts or propositions that describe or explain aspects of the data, which are the final output of the analysis of the whole dataset. Themes are articulated and developed by interrogating data categories through comparison between and within cases. Usually a number of categories would fall under each theme or sub-theme [ 3 ].

Transcript: A written verbatim (word-for-word) account of a verbal interaction, such as an interview or conversation.

The Framework Method sits within a broad family of analysis methods often termed thematic analysis or qualitative content analysis. These approaches identify commonalities and differences in qualitative data, before focusing on relationships between different parts of the data, thereby seeking to draw descriptive and/or explanatory conclusions clustered around themes. The Framework Method was developed by researchers, Jane Ritchie and Liz Spencer, from the Qualitative Research Unit at the National Centre for Social Research in the United Kingdom in the late 1980s for use in large-scale policy research [ 1 ]. It is now used widely in other areas, including health research [ 3 – 12 ]. Its defining feature is the matrix output: rows (cases), columns (codes) and ‘cells’ of summarised data, providing a structure into which the researcher can systematically reduce the data, in order to analyse it by case and by code [ 1 ]. Most often a ‘case’ is an individual interviewee, but this can be adapted to other units of analysis, such as predefined groups or organisations. While in-depth analyses of key themes can take place across the whole data set, the views of each research participant remain connected to other aspects of their account within the matrix so that the context of the individual’s views is not lost. Comparing and contrasting data is vital to qualitative analysis and the ability to compare with ease data across cases as well as within individual cases is built into the structure and process of the Framework Method.
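To make the matrix structure concrete, here is a minimal illustrative sketch in Python (using pandas). It is not taken from the Framework Method literature; the case names, codes, and cell summaries are invented. It simply shows how a matrix of cases (rows) by codes (columns), with summarized data in the cells, might be represented so that it can be read by case or by code.

    # Minimal sketch of a Framework Method-style matrix (illustrative only).
    # Rows are cases (interviewees), columns are codes, and each cell holds a
    # short researcher-written summary rather than a verbatim quote.
    # All case names, codes, and summaries below are hypothetical.
    import pandas as pd

    cases = ["Interviewee 01", "Interviewee 02", "Interviewee 03"]
    codes = ["Onset of illness", "Family support", "Views on medication"]

    matrix = pd.DataFrame("", index=cases, columns=codes)

    matrix.loc["Interviewee 01", "Onset of illness"] = (
        "Attributes symptoms to work stress; delayed seeing a GP for months."
    )
    matrix.loc["Interviewee 01", "Family support"] = (
        "Spouse attends appointments; describes family as a 'safety net'."
    )

    # Reading across a row keeps an individual's account in context;
    # reading down a column supports comparison of one code across all cases.
    print(matrix.loc["Interviewee 01"])   # within-case view
    print(matrix["Family support"])       # cross-case view for one code

In practice this charting is usually done in a spreadsheet or CAQDAS package rather than in code, but the row-by-column logic is the same.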

The Framework Method provides clear steps to follow and produces highly structured outputs of summarised data. It is therefore useful where multiple researchers are working on a project, particularly in multi-disciplinary research teams where not all members have experience of qualitative data analysis, and for managing large data sets where obtaining a holistic, descriptive overview of the entire data set is desirable. However, caution is recommended before selecting the method as it is not a suitable tool for analysing all types of qualitative data or for answering all qualitative research questions, nor is it an ‘easy’ version of qualitative research for quantitative researchers. Importantly, the Framework Method cannot accommodate highly heterogeneous data, i.e. data must cover similar topics or key issues so that it is possible to categorize it. Individual interviewees may, of course, have very different views or experiences in relation to each topic, which can then be compared and contrasted. The Framework Method is most commonly used for the thematic analysis of semi-structured interview transcripts, which is what we focus on in this article, although it could, in principle, be adapted for other types of textual data [ 13 ], including documents, such as meeting minutes or diaries [ 12 ], or field notes from observations [ 10 ].

For quantitative researchers working with qualitative colleagues or when exploring qualitative research for the first time, the nature of the Framework Method is seductive because its methodical processes and ‘spreadsheet’ approach seem more closely aligned to the quantitative paradigm [ 14 ]. Although the Framework Method is a highly systematic method of categorizing and organizing what may seem like unwieldy qualitative data, it is not a panacea for problematic issues commonly associated with qualitative data analysis such as how to make analytic choices and make interpretive strategies visible and auditable. Qualitative research skills are required to appropriately interpret the matrix, and facilitate the generation of descriptions, categories, explanations and typologies. Moreover, reflexivity, rigour and quality are issues that are requisite in the Framework Method just as they are in other qualitative methods. It is therefore essential that studies using the Framework Method for analysis are overseen by an experienced qualitative researcher, though this does not preclude those new to qualitative research from contributing to the analysis as part of a wider research team.

There are a number of approaches to qualitative data analysis, including those that pay close attention to language and how it is being used in social interaction such as discourse analysis [ 15 ] and ethnomethodology [ 16 ]; those that are concerned with experience, meaning and language such as phenomenology [ 17 , 18 ] and narrative methods [ 19 ]; and those that seek to develop theory derived from data through a set of procedures and interconnected stages such as Grounded Theory [ 20 , 21 ]. Many of these approaches are associated with specific disciplines and are underpinned by philosophical ideas which shape the process of analysis [ 22 ]. The Framework Method, however, is not aligned with a particular epistemological, philosophical, or theoretical approach. Rather it is a flexible tool that can be adapted for use with many qualitative approaches that aim to generate themes.

The development of themes is a common feature of qualitative data analysis, involving the systematic search for patterns to generate full descriptions capable of shedding light on the phenomenon under investigation. In particular, many qualitative approaches use the ‘constant comparative method’ , developed as part of Grounded Theory, which involves making systematic comparisons across cases to refine each theme [ 21 , 23 ]. Unlike Grounded Theory, the Framework Method is not necessarily concerned with generating social theory, but can greatly facilitate constant comparative techniques through the review of data across the matrix.

Perhaps because the Framework Method is so obviously systematic, it has often, as other commentators have noted, been conflated with a deductive approach to qualitative analysis [ 13 , 14 ]. However, the tool itself has no allegiance to either inductive or deductive thematic analysis; where the research sits along this inductive-deductive continuum depends on the research question. A question such as, ‘Can patients give an accurate biomedical account of the onset of their cardiovascular disease?’ is essentially a yes/no question (although it may be nuanced by the extent of their account or by appropriate use of terminology) and so requires a deductive approach to both data collection and analysis (e.g. structured or semi-structured interviews and directed qualitative content analysis [ 24 ]). Similarly, a deductive approach may be taken if basing analysis on a pre-existing theory, such as behaviour change theories, for example in the case of a research question such as ‘How does the Theory of Planned Behaviour help explain GP prescribing?’ [ 11 ]. However, a research question such as, ‘How do people construct accounts of the onset of their cardiovascular disease?’ would require a more inductive approach that allows for the unexpected, and permits more socially-located responses [ 25 ] from interviewees that may include matters of cultural beliefs, habits of food preparation, concepts of ‘fate’, or links to other important events in their lives, such as grief, which cannot be predicted by the researcher in advance (e.g. an interviewee-led open ended interview and grounded theory [ 20 ]). In all these cases, it may be appropriate to use the Framework Method to manage the data. The difference would become apparent in how themes are selected: in the deductive approach, themes and codes are pre-selected based on previous literature, previous theories or the specifics of the research question; whereas in the inductive approach, themes are generated from the data though open (unrestricted) coding, followed by refinement of themes. In many cases, a combined approach is appropriate when the project has some specific issues to explore, but also aims to leave space to discover other unexpected aspects of the participants’ experience or the way they assign meaning to phenomena. In sum, the Framework Method can be adapted for use with deductive, inductive, or combined types of qualitative analysis. However, there are some research questions where analysing data by case and theme is not appropriate and so the Framework Method should be avoided. For instance, depending on the research question, life history data might be better analysed using narrative analysis [ 19 ]; recorded consultations between patients and their healthcare practitioners using conversation analysis [ 26 ]; and documentary data, such as resources for pregnant women, using discourse analysis [ 27 ].

It is not within the scope of this paper to consider study design or data collection in any depth, but before moving on to describe the Framework Method analysis process, it is worth taking a step back to consider briefly what needs to happen before analysis begins. The selection of analysis method should have been considered at the proposal stage of the research and should fit with the research questions and overall aims of the study. Many qualitative studies, particularly ones using inductive analysis, are emergent in nature; this can be a challenge and the researchers can only provide an “imaginative rehearsal” of what is to come [ 28 ]. In mixed methods studies, the role of the qualitative component within the wider goals of the project must also be considered. In the data collection stage, resources must be allocated for properly trained researchers to conduct the qualitative interviewing because it is a highly skilled activity. In some cases, a research team may decide that they would like to use lay people, patients or peers to do the interviews [ 29 – 32 ] and in this case they must be properly trained and mentored which requires time and resources. At this early stage it is also useful to consider whether the team will use Computer Assisted Qualitative Data Analysis Software (CAQDAS), which can assist with data management and analysis.

As any form of qualitative or quantitative analysis is not a purely technical process, but influenced by the characteristics of the researchers and their disciplinary paradigms, critical reflection throughout the research process is paramount, including in the design of the study, the construction or collection of data, and the analysis. All members of the team should keep a research diary, where they record reflexive notes, impressions of the data and thoughts about analysis throughout the process. Experienced qualitative researchers become more skilled at sifting through data and analysing it in a rigorous and reflexive way. They cannot be too attached to certainty, but must remain flexible and adaptive throughout the research in order to generate rich and nuanced findings that embrace and explain the complexity of real social life and can be applied to complex social issues. It is important to remember when using the Framework Method that, unlike quantitative research where data collection and data analysis are strictly sequential and mutually exclusive stages of the research process, in qualitative analysis there is, to a greater or lesser extent depending on the project, ongoing interplay between data collection, analysis, and theory development. For example, new ideas or insights from participants may suggest potentially fruitful lines of enquiry, or close analysis might reveal subtle inconsistencies in an account which require further exploration.

Procedure for analysis

Stage 1: Transcription

A good-quality audio recording and, ideally, a verbatim (word-for-word) transcription of the interview are needed. For Framework Method analysis, it is not necessarily important to include the conventions of dialogue transcription, which can be difficult to read (e.g. pauses or two people talking simultaneously), because the content is what is of primary interest. Transcripts should have large margins and adequate line spacing for later coding and making notes. The process of transcription is a good opportunity to become immersed in the data and is to be strongly encouraged for new researchers. However, in some projects, the decision may be made that it is a better use of resources to outsource this task to a professional transcriber.

Stage 2: Familiarisation with the interview

Becoming familiar with the whole interview using the audio recording and/or transcript and any contextual or reflective notes that were recorded by the interviewer is a vital stage in interpretation. It can also be helpful to re-listen to all or parts of the audio recording. In multi-disciplinary or large research projects, those involved in analysing the data may be different from those who conducted or transcribed the interviews, which makes this stage particularly important. One margin can be used to record any analytical notes, thoughts or impressions.

Stage 3: Coding

After familiarisation, the researcher carefully reads the transcript line by line, applying a paraphrase or label (a ‘code’) that describes what they have interpreted in the passage as important. In more inductive studies, ‘open coding’ takes place at this stage, i.e. coding anything that might be relevant from as many different perspectives as possible. Codes could refer to substantive things (e.g. particular behaviours, incidents or structures), values (e.g. those that inform or underpin certain statements, such as a belief in evidence-based medicine or in patient choice), emotions (e.g. sorrow, frustration, love) and more impressionistic/methodological elements (e.g. the interviewee found something difficult to explain, the interviewee became emotional, the interviewer felt uncomfortable) [33]. In purely deductive studies, the codes may have been pre-defined (e.g. by an existing theory, or by specific areas of interest to the project), so this stage may not be strictly necessary and you could move straight on to indexing; even so, it is generally helpful, when taking a broadly deductive approach, to do some open coding on at least a few of the transcripts to ensure that important aspects of the data are not missed.

Coding aims to classify all of the data so that it can be compared systematically with other parts of the data set. At least two researchers (or at least one from each discipline or speciality in a multi-disciplinary research team) should independently code the first few transcripts, if feasible. Patients, public involvement representatives or clinicians can also be productively involved at this stage, because they can offer alternative viewpoints, thus ensuring that one particular perspective does not dominate. It is vital in inductive coding to look out for the unexpected and not just to code in a literal, descriptive way, so the involvement of people from different perspectives can greatly aid this. As well as giving a holistic impression of what was said, coding line by line can often alert the researcher to consider that which may ordinarily remain invisible because it is not clearly expressed or does not ‘fit’ with the rest of the account. In this way the developing analysis is challenged; reconciling and explaining anomalies in the data can make the analysis stronger. Coding can also be done digitally using CAQDAS, which is a useful way to keep track automatically of new codes. However, some researchers prefer to do the early stages of coding with paper and pen, and only start to use CAQDAS once they reach Stage 5 (see below).
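As a minimal illustration of what the output of this stage might look like when recorded electronically, the sketch below uses an invented data structure; the field names, participant identifier and codes are examples only, and a paper margin or any CAQDAS package would serve equally well.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class CodedPassage:
    """One coded passage from a transcript (illustrative structure only)."""
    transcript_id: str                  # anonymised participant/transcript identifier
    lines: Tuple[int, int]              # start and end line of the passage in the transcript
    excerpt: str                        # the passage of interest
    codes: List[str] = field(default_factory=list)  # open codes applied by the researcher


passage = CodedPassage(
    transcript_id="P01",
    lines=(112, 118),
    excerpt="I just thought it was stress, I never imagined it was my heart.",
    codes=["lay explanation of onset", "attribution to stress", "delay in help-seeking"],
)
print(passage.codes)
```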

Stage 4: Developing a working analytical framework

After coding the first few transcripts, all researchers involved should meet to compare the labels they have applied and agree on a set of codes to apply to all subsequent transcripts. Codes can be grouped together into categories (using a tree diagram if helpful), which are then clearly defined. This forms a working analytical framework. It is likely that several iterations of the analytical framework will be required before no additional codes emerge. It is always worth having an ‘other’ code under each category to avoid ignoring data that does not fit; the analytical framework is never ‘final’ until the last transcript has been coded.
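For example, a working analytical framework could be recorded as simply as the following sketch, in which categories group related codes and each category keeps an ‘other’ code so that data that does not fit is not ignored; all category and code names are invented for illustration.

```python
# Illustrative working analytical framework: categories grouping agreed codes.
working_framework = {
    "Beliefs about illness onset": [
        "attribution to stress",
        "attribution to heredity",
        "fate / luck",
        "other",
    ],
    "Interactions with health services": [
        "delay in help-seeking",
        "trust in GP",
        "other",
    ],
}

# The framework stays provisional: codes emerging from later transcripts are
# added, and the grouping is revised, until the last transcript has been coded.
working_framework["Beliefs about illness onset"].append("links to bereavement")
```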

Stage 5: Applying the analytical framework

The working analytical framework is then applied by indexing subsequent transcripts using the existing categories and codes. Each code is usually assigned a number or abbreviation for easy identification (so that the full names of the codes do not have to be written out each time) and written directly onto the transcripts. CAQDAS is particularly useful at this stage because it can speed up the process and ensures that, at later stages, data are easily retrievable. It is worth noting that, unlike software for statistical analyses, which actually carries out calculations when correctly instructed, putting the data into a qualitative analysis software package does not analyse the data; the software is simply an effective way of storing and organising the data so that they are accessible for the analysis process.
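A minimal sketch of indexing is shown below; the numbering scheme and code names are invented, and CAQDAS packages handle the same bookkeeping automatically.

```python
# Short identifiers make codes quick to write in a transcript margin or record in software.
code_index = {
    "1.1": "attribution to stress",
    "1.2": "attribution to heredity",
    "1.3": "fate / luck",
    "2.1": "delay in help-seeking",
    "2.2": "trust in GP",
}

# Indexing a transcript then amounts to attaching identifiers to passages.
indexed_passages = [
    {"transcript": "P02", "lines": (45, 52), "codes": ["1.1", "2.1"]},
    {"transcript": "P02", "lines": (90, 96), "codes": ["1.3"]},
]

for entry in indexed_passages:
    labels = [code_index[c] for c in entry["codes"]]
    print(entry["transcript"], entry["lines"], labels)
```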

Stage 6: Charting data into the framework matrix

Qualitative data are voluminous (an hour of interview can generate 15–30 pages of text), so being able to manage and summarize (reduce) the data is a vital aspect of the analysis process. A spreadsheet is used to generate a matrix and the data are ‘charted’ into the matrix. Charting involves summarizing the data by category from each transcript. Good charting requires an ability to strike a balance between reducing the data on the one hand and retaining the original meanings and ‘feel’ of the interviewees’ words on the other. The chart should include references to interesting or illustrative quotations. These can be tagged automatically if you are using CAQDAS to manage your data (NVivo version 9 onwards can generate framework matrices); otherwise a capital ‘Q’, an (anonymized) transcript number, and a page and line reference will suffice. In multi-disciplinary teams it is helpful to compare and contrast styles of summarizing in the early stages of the analysis process to ensure consistency within the team. Any abbreviations used should be agreed by the team. Once members of the team are familiar with the analytical framework and well practised at coding and charting, it will take, on average, about half a day per hour-long transcript to reach this stage; in the early stages, it takes much longer.
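As one possible illustration, the framework matrix can be held in a pandas DataFrame, assuming pandas is available; a plain spreadsheet or NVivo's framework matrices do the same job. Cases form the rows, categories the columns, and each cell holds a summary with a tagged quotation reference. The participant identifiers, summaries and quotation tags below are invented.

```python
import pandas as pd

# Each cell: a short summary plus a quotation tag (Q, transcript, page, line).
matrix = pd.DataFrame(
    {
        "Beliefs about illness onset": {
            "P01": "Attributes onset to work stress; dismisses family history (Q P01 p.4 l.115)",
            "P02": "Sees illness as fate; links onset to bereavement (Q P02 p.7 l.203)",
        },
        "Interactions with health services": {
            "P01": "Delayed seeing GP for several months (Q P01 p.6 l.160)",
            "P02": "High trust in GP; attended promptly (Q P02 p.3 l.88)",
        },
    }
)
print(matrix)

# Rough planning arithmetic based on the estimate above: once the team is practised,
# 20 hour-long transcripts at ~0.5 day each is roughly 10 researcher-days of charting.
print(20 * 0.5, "researcher-days")
```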

Stage 7: Interpreting the data

It is useful throughout the research to keep a separate notebook or computer file in which to note down impressions, ideas and early interpretations of the data. It may be worth breaking off at any stage to explore an interesting idea, concept or potential theme by writing an analytic memo [20, 21] to then discuss with other members of the research team, including lay and clinical members. Gradually, characteristics of and differences between the data are identified, perhaps generating typologies, interrogating theoretical concepts (either prior concepts or ones emerging from the data) or mapping connections between categories to explore relationships and/or causality. If the data are rich enough, the findings generated through this process can go beyond description of particular cases to explanation of, for example, the reasons for the emergence of a phenomenon, prediction of how an organisation or other social actor is likely to instigate or respond to a situation, or identification of areas that are not functioning well within an organisation or system. It is worth noting that this stage often takes longer than anticipated and that any project plan should allocate sufficient time for meetings and individual researcher time to conduct the interpretation and write up the findings (see Additional file 1, Section 7).
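Continuing the kind of illustrative matrix sketched at Stage 6, the two directions of comparison that support interpretation can be read directly from such a chart: a column compares one theme across cases, while a row keeps one case's whole account in view. All names and summaries below are invented.

```python
import pandas as pd

# Rebuild a tiny illustrative matrix (cases as rows, categories as columns).
matrix = pd.DataFrame(
    {
        "Beliefs about illness onset": {"P01": "stress at work", "P02": "fate / bereavement"},
        "Interactions with health services": {"P01": "delayed seeing GP", "P02": "attended promptly"},
    }
)

theme_view = matrix["Beliefs about illness onset"]  # one theme compared across all cases
case_view = matrix.loc["P01"]                       # one case read across all themes

print(theme_view, case_view, sep="\n\n")
```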

The Framework Method has been developed and used successfully in research for over 25 years, and has recently become a popular analysis method in qualitative health research. The issue of how to assess quality in qualitative research has been highly debated [ 20 , 34 – 40 ], but ensuring rigour and transparency in analysis is a vital component. There are, of course, many ways to do this but in the Framework Method the following are helpful:

  • Summarizing the data during charting, as well as being a practical way to reduce the data, means that all members of a multi-disciplinary team, including lay, clinical and (quantitative) academic members, can engage with the data and offer their perspectives during the analysis process without necessarily needing to read all the transcripts or be involved in the more technical parts of the analysis.
  • Charting also ensures that researchers pay close attention to describing the data using each participant’s own subjective frames and expressions in the first instance, before moving on to interpretation.
  • The summarized data are kept within the wider context of each case, thereby encouraging thick description that pays attention to complex layers of meaning and understanding [38].
  • The matrix structure is visually straightforward and can facilitate recognition of patterns in the data by any member of the research team, including by drawing attention to contradictory data, deviant cases or empty cells.
  • The systematic procedure (described in this article) makes it easy to follow, even for multi-disciplinary teams and/or with large data sets.
  • It is flexible enough that non-interview data (such as field notes taken during the interview or reflexive considerations) can be included in the matrix.
  • It is not aligned with a particular epistemological viewpoint or theoretical approach and therefore can be adapted for use in inductive or deductive analysis or a combination of the two (e.g. using pre-existing theoretical constructs deductively, then revising the theory with inductive aspects; or using an inductive approach to identify themes in the data, before returning to the literature and using theories deductively to help further explain certain themes).
  • It is easy to identify relevant data extracts to illustrate themes and to check whether there is sufficient evidence for a proposed theme.
  • Finally, there is a clear audit trail from the original raw data to the final themes, including the illustrative quotes.

There are also a number of potential pitfalls to this approach:

  • The systematic approach and matrix format, as we noted in the background, are intuitively appealing to those trained quantitatively, but the ‘spreadsheet’ look perhaps further increases the temptation for those without an in-depth understanding of qualitative research to attempt to quantify qualitative data (e.g. “13 out of 20 participants said X”). This kind of statement is clearly meaningless, because sampling in qualitative research is not designed to be representative of a wider population but purposive, to capture diversity around a phenomenon [41].
  • Like all qualitative analysis methods, the Framework Method is time consuming and resource-intensive. When multiple stakeholders and disciplines are involved in the analysis and interpretation of the data, as is good practice in applied health research, the time needed is extended. This time needs to be factored into the project proposal at the pre-funding stage.
  • There is a high training component to successfully using the method in a new multi-disciplinary team. Depending on their role in the analysis, members of the research team may have to learn how to code, index and chart data, to think reflexively about how their identities and experience affect the analysis process, and/or to learn about methods of generalisation (i.e. analytic generalisation and transferability, rather than statistical generalisation [41]) in order to interpret legitimately the meaning and significance of the data.
  • While the Framework Method is amenable to the participation of non-experts in data analysis, it is critical to its successful use that an experienced qualitative researcher leads the project (even if the overall lead for a large mixed methods study is a different person). The qualitative lead would ideally be joined by other researchers with at least some prior training in or experience of qualitative analysis. The responsibilities of the lead qualitative researcher are: to contribute to study design, project timelines and resource planning; to mentor junior qualitative researchers; to train clinical, lay and other (non-qualitative) academics to contribute as appropriate to the analysis process; to facilitate analysis meetings in a way that encourages critical and reflexive engagement with the data and other team members; and finally to lead the write-up of the study.

We have argued that Framework Method studies can be conducted by multi-disciplinary research teams that include, for example, healthcare professionals, psychologists, sociologists, economists, and lay people/service users. The inclusion of so many different perspectives means that decision-making in the analysis process can be very time consuming and resource-intensive. It may require extensive, reflexive and critical dialogue about how the ideas expressed by interviewees and identified in the transcript are related to pre-existing concepts and theories from each discipline, and to the real ‘problems’ in the health system that the project is addressing. This kind of team effort is, however, an excellent forum for driving forward interdisciplinary collaboration, as well as clinical and lay involvement in research, to ensure that ‘the whole is greater than the sum of the parts’, by enhancing the credibility and relevance of the findings.

The Framework Method is appropriate for thematic analysis of textual data, particularly interview transcripts, where it is important to be able to compare and contrast data by themes across many cases, while also situating each perspective in context by retaining the connection to other aspects of each individual’s account. Experienced qualitative researchers should lead and facilitate all aspects of the analysis, although the Framework Method’s systematic approach makes it suitable for involving all members of a multi-disciplinary team. An open, critical and reflexive approach from all team members is essential for rigorous qualitative analysis.

Acceptance of the complexity of real life health systems and the existence of multiple perspectives on health issues is necessary to produce high quality qualitative research. If done well, qualitative studies can shed explanatory and predictive light on important phenomena, relate constructively to quantitative parts of a larger study, and contribute to the improvement of health services and development of health policy. The Framework Method, when selected and implemented appropriately, can be a suitable tool for achieving these aims through producing credible and relevant findings.

The Framework Method is an excellent tool for supporting thematic (qualitative content) analysis because it provides a systematic model for managing and mapping the data.

The Framework Method is most suitable for analysis of interview data, where it is desirable to generate themes by making comparisons within and between cases.

The management of large data sets is facilitated by the Framework Method as its matrix form provides an intuitively structured overview of summarised data.

The clear, step-by-step process of the Framework Method makes it suitable for interdisciplinary and collaborative projects.

The use of the method should be led and facilitated by an experienced qualitative researcher.

References

1. Ritchie J, Lewis J: Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003.
2. Ives J, Damery S, Redwood S: PPI, paradoxes and Plato: who's sailing the ship? J Med Ethics. 2013, 39(3): 181-185. doi:10.1136/medethics-2011-100150.
3. Heath G, Cameron E, Cummins C, Greenfield S, Pattison H, Kelly D, Redwood S: Paediatric ‘care closer to home’: stake-holder views and barriers to implementation. Health Place. 2012, 18(5): 1068-1073. doi:10.1016/j.healthplace.2012.05.003.
4. Elkington H, White P, Addington-Hall J, Higgs R, Petternari C: The last year of life of COPD: a qualitative study of symptoms and services. Respir Med. 2004, 98(5): 439-445. doi:10.1016/j.rmed.2003.11.006.
5. Murtagh J, Dixey R, Rudolf M: A qualitative investigation into the levers and barriers to weight loss in children: opinions of obese children. Arch Dis Child. 2006, 91(11): 920-923. doi:10.1136/adc.2005.085712.
6. Barnard M, Webster S, O’Connor W, Jones A, Donmall M: The drug treatment outcomes research study (DTORS): qualitative study. London: Home Office; 2009.
7. Ayatollahi H, Bath PA, Goodacre S: Factors influencing the use of IT in the emergency department: a qualitative study. Health Inform J. 2010, 16(3): 189-200. doi:10.1177/1460458210377480.
8. Sheard L, Prout H, Dowding D, Noble S, Watt I, Maraveyas A, Johnson M: Barriers to the diagnosis and treatment of venous thromboembolism in advanced cancer patients: a qualitative study. Palliative Med. 2012, 27(2): 339-348.
9. Ellis J, Wagland R, Tishelman C, Williams ML, Bailey CD, Haines J, Caress A, Lorigan P, Smith JA, Booton R, et al: Considerations in developing and delivering a nonpharmacological intervention for symptom management in lung cancer: the views of patients and informal caregivers. J Pain Symptom Manag. 2012, 44(6): 831-842. doi:10.1016/j.jpainsymman.2011.12.274.
10. Gale N, Sultan H: Telehealth as ‘peace of mind’: embodiment, emotions and the home as the primary health space for people with chronic obstructive pulmonary disorder. Health Place. 2013, 21: 140-147.
11. Rashidian A, Eccles MP, Russell I: Falling on stony ground? A qualitative study of implementation of clinical guidelines’ prescribing recommendations in primary care. Health Policy. 2008, 85(2): 148-161. doi:10.1016/j.healthpol.2007.07.011.
12. Jones RK: The unsolicited diary as a qualitative research tool for advanced research capacity in the field of health and illness. Qual Health Res. 2000, 10(4): 555-567. doi:10.1177/104973200129118543.
13. Pope C, Ziebland S, Mays N: Analysing qualitative data. Br Med J. 2000, 320: 114-116. doi:10.1136/bmj.320.7227.114.
14. Pope C, Mays N: Critical reflections on the rise of qualitative research. Br Med J. 2009, 339: 737-739.
15. Fairclough N: Critical discourse analysis: the critical study of language. London: Longman; 2010.
16. Garfinkel H: Ethnomethodology’s program. Soc Psychol Q. 1996, 59(1): 5-21. doi:10.2307/2787116.
17. Merleau-Ponty M: The phenomenology of perception. London: Routledge and Kegan Paul; 1962.
18. Svenaeus F: The phenomenology of health and illness. In: Handbook of phenomenology and medicine. Netherlands: Springer; 2001: 87-108.
19. Riessman CK: Narrative methods for the human sciences. London: Sage; 2008.
20. Charmaz K: Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.
21. Glaser BG, Strauss AL: The discovery of grounded theory. Chicago: Aldine; 1967.
22. Crotty M: The foundations of social research: meaning and perspective in the research process. London: Sage; 1998.
23. Boeije H: A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002, 36(4): 391-409. doi:10.1023/A:1020909529486.
24. Hsieh H-F, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005, 15(9): 1277-1288. doi:10.1177/1049732305276687.
25. Redwood S, Gale NK, Greenfield S: ‘You give us rangoli, we give you talk’: using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012, 12(1): 7. doi:10.1186/1471-2288-12-7.
26. Mishler EG: The struggle between the voice of medicine and the voice of the lifeworld. In: Conrad P, Kern R (eds): The sociology of health and illness: critical perspectives. 3rd edition. New York: St Martin's Press; 1990.
27. Hodges BD, Kuper A, Reeves S: Discourse analysis. Br Med J. 2008, 337: 570-572. doi:10.1136/bmj.39370.701782.DE.
28. Sandelowski M, Barroso J: Writing the proposal for a qualitative research methodology project. Qual Health Res. 2003, 13(6): 781-820. doi:10.1177/1049732303013006003.
29. Ellins J: It’s better together: involving older people in research. HSMC Newsletter Focus on Service Users and the Public. 2010, 16(1): 4.
30. Phillimore J, Goodson L, Hennessy D, Ergun E: Empowering Birmingham’s migrant and refugee community organisations: making a difference. York: Joseph Rowntree Foundation; 2009.
31. Leamy M, Clough R: How older people became researchers. York: Joseph Rowntree Foundation; 2006.
32. Glasby J, Miller R, Ellins J, Durose J, Davidson D, McIver S, Littlechild R, Tanner D, Snelling I, Spence K: Understanding and improving transitions of older people: a user and carer centred approach. Final report, NIHR Service Delivery and Organisation programme. London: The Stationery Office; 2012.
33. Saldaña J: The coding manual for qualitative researchers. London: Sage; 2009.
34. Lincoln YS: Emerging criteria for quality in qualitative and interpretive research. Qual Inquiry. 1995, 1(3): 275-289. doi:10.1177/107780049500100301.
35. Mays N, Pope C: Qualitative research in health care: assessing quality in qualitative research. Br Med J. 2000, 320(7226): 50. doi:10.1136/bmj.320.7226.50.
36. Seale C: Quality in qualitative research. Qual Inquiry. 1999, 5(4): 465-478. doi:10.1177/107780049900500402.
37. Dingwall R, Murphy E, Watson P, Greatbatch D, Parker S: Catching goldfish: quality in qualitative research. J Health Serv Res Policy. 1998, 3(3): 167-172.
38. Popay J, Rogers A, Williams G: Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998, 8(3): 341-351. doi:10.1177/104973239800800305.
39. Morse JM, Barrett M, Mayan M, Olson K, Spiers J: Verification strategies for establishing reliability and validity in qualitative research. Int J Qual Methods. 2008, 1(2): 13-22.
40. Smith JA: Reflecting on the development of interpretative phenomenological analysis and its contribution to qualitative research in psychology. Qual Res Psychol. 2004, 1(1): 39-54.
41. Polit DF, Beck CT: Generalization in quantitative and qualitative research: myths and strategies. Int J Nurs Studies. 2010, 47(11): 1451-1458. doi:10.1016/j.ijnurstu.2010.06.004.


Electronic supplementary material

Additional file 1: Illustrative example of the use of the Framework Method (DOCX 167 KB).


Cite this article

Gale, N.K., Heath, G., Cameron, E. et al. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol 13, 117 (2013). https://doi.org/10.1186/1471-2288-13-117



The Role of Analytical Frameworks for Systemic Research Design, Explained in the Analysis of Drivers and Dynamics of Historic Land-Use Changes


1. Introduction

2. Introduction to the Analytical Framework for a Systemic Analysis of Drivers and Dynamics of Historical Land-Use Changes
2.1. Schematic Representation of the Analytical Framework for a Systemic Analysis of Drivers and Dynamics of Historical Land-Use Changes
2.2. Analytical Categories and Assumptions
2.2.1. External Drivers
2.2.2. Biophysical Characteristics of the Environment
2.2.3. Internal Drivers
2.2.4. Community
2.2.5. Land-Use Functions
2.2.6. Linkages and Inter-Linkages
2.3. Historical Perspective

3. Research Design for the Case Study “Systemic Evaluation of the Drivers and Dynamics of Historical Changes of Land-Use in the Northwest of Pichincha, Ecuador”
3.1. Research Questions and Aims

  • What have been the historic land-use changes in this area?
  • Which factors have influenced these changes, including:
  • economic drivers (including migration)
  • institutional/historical drivers (e.g., agrarian reforms, conservation policies)
  • social drivers (e.g., social constructions)
  • biophysical characteristics of the land
  • cognitive drivers (personal needs, values, and emotions; past experiences, beliefs, and assumptions; perceptions, such as perceived risks and benefits of different land-uses, among others)
  • the individual history of each decision-maker
  • By which mechanisms have these factors influenced historical land-use changes in this area?

3.2. Methods

4. Discussion
5. Conclusions



Coral, C.; Bokelmann, W. The Role of Analytical Frameworks for Systemic Research Design, Explained in the Analysis of Drivers and Dynamics of Historic Land-Use Changes. Systems 2017, 5, 20. https://doi.org/10.3390/systems5010020



Using the framework approach to analyse qualitative data: a worked example

Affiliations

  • 1 Faculty of Health Sciences and Sport, University of Stirling, Scotland.
  • 2 Clinical chair, Faculty of Health, University of Canberra, Australia.
  • PMID: 30215482
  • DOI: 10.7748/nr.2018.e1580

Background: Data management and analysis are crucial stages in research, particularly qualitative research, which accumulates large volumes of data. There are various approaches that can be used to manage and analyse qualitative data, the framework approach being one example widely used in nursing research.

Aims: To consider the strengths and challenges of the framework approach and its application to practice. To help the novice researcher select an approach to thematic analysis.

Discussion: This paper provides an account of one novice researcher's experience of using the framework approach for thematic analysis. It begins with an explanation of the approach and why it was selected, followed by its application to practice using a worked example, and an account of the strengths and challenges of using this approach.

Conclusion: The framework approach offers the researcher a systematic structure to manage, analyse and identify themes, enabling the development and maintenance of a transparent audit trail. It is particularly useful with large volumes of text and is suitable for use with different qualitative approaches.

Keywords: data collection; nursing research; qualitative research.



What is a Theoretical Framework? How to Write It (with Examples)

A theoretical framework [1,2] is the structure that supports and describes a theory. A theory is a set of interrelated concepts and definitions that presents a systematic view of phenomena by describing the relationships among the variables that explain these phenomena. A theory is developed after a long research process and explains the existence of the research problem under study. A theoretical framework guides the research process like a roadmap for the research study and helps researchers clearly interpret their findings by providing a structure for organizing data and developing conclusions.

A theoretical framework in research is an important part of a manuscript and should be presented in the first section. It shows an understanding of the theories and concepts relevant to the research and helps limit the scope of the research.  


What is a theoretical framework?

A theoretical framework in research can be defined as a set of concepts, theories, ideas, and assumptions that help you understand a specific phenomenon or problem. It can be considered a blueprint that is borrowed by researchers to develop their own research inquiry. A theoretical framework in research helps researchers design and conduct their research and analyze and interpret their findings. It explains the relationship between variables, identifies gaps in existing knowledge, and guides the development of research questions, hypotheses, and methodologies to address that gap.  


Now that you know the answer to ‘What is a theoretical framework?’, the following list describes the different types of theoretical frameworks used in research: [3]

  • Conceptual: defines key concepts and relationships.
  • Deductive: starts with a general hypothesis and then uses data to test it; used in quantitative research.
  • Inductive: starts with data and then develops a hypothesis; used in qualitative research.
  • Empirical: focuses on the collection and analysis of empirical data; used in scientific research.
  • Normative: defines a set of norms that guide behavior; used in ethics and the social sciences.
  • Explanatory: explains the causes of particular behavior; used in psychology and the social sciences.

Developing a theoretical framework in research can help in the following situations: [4]

  • When conducting research on complex phenomena because a theoretical framework helps organize the research questions, hypotheses, and findings  
  • When the research problem requires a deeper understanding of the underlying concepts  
  • When conducting research that seeks to address a specific gap in knowledge  
  • When conducting research that involves the analysis of existing theories  


Importance of a theoretical framework  

The purpose of a theoretical framework is to support you in the following ways during the research process: [2]

  • Provide a structure for the complete research process  
  • Assist researchers in incorporating formal theories into their study as a guide  
  • Provide a broad guideline to maintain the research focus  
  • Guide the selection of research methods, data collection, and data analysis  
  • Help understand the relationships between different concepts and develop hypotheses and research questions  
  • Address gaps in existing literature  
  • Analyze the data collected and draw meaningful conclusions and make the findings more generalizable  

Theoretical vs. Conceptual framework  

While a theoretical framework covers the theoretical aspect of your study, that is, the various theories that can guide your research, a conceptual framework defines the variables for your study and presents how they relate to each other. The conceptual framework is developed before collecting the data. However, both frameworks help in understanding the research problem and guide the development, collection, and analysis of the research.  

The following points summarize some differences between theoretical and conceptual frameworks: [5]

  • A theoretical framework is based on existing theories that have been tested and validated by others; a conceptual framework is based on the concepts that are the main variables in the study.
  • A theoretical framework creates the foundation of theory on which your study will be developed; a conceptual framework visualizes the relationships between the concepts and variables based on the existing literature.
  • A theoretical framework is used to test theories and to predict and control situations within the context of a research inquiry; a conceptual framework helps develop a theory that would be useful to practitioners.
  • A theoretical framework provides a general set of ideas within which a study belongs; a conceptual framework refers to the specific ideas that researchers utilize in their study.
  • A theoretical framework offers a focal point for approaching unknown research in a specific field of inquiry; a conceptual framework shows logically how the research inquiry should be undertaken.
  • A theoretical framework works deductively; a conceptual framework works inductively.
  • A theoretical framework is typically used in quantitative studies; a conceptual framework is typically used in qualitative studies.


How to write a theoretical framework  

The following general steps can help those wondering how to write a theoretical framework: [2]

  • Identify and define the key concepts clearly and organize them into a suitable structure.  
  • Use appropriate terminology and define all key terms to ensure consistency.  
  • Identify the relationships between concepts and provide a logical and coherent structure.  
  • Develop hypotheses that can be tested through data collection and analysis.  
  • Keep it concise and focused with clear and specific aims.  


Examples of a theoretical framework  

Here are two examples of a theoretical framework. [6,7]

Example 1.

An insurance company is facing a challenge in cross-selling its products. The sales department indicates that most customers have just one policy, although the company offers over 10 unique policies. The company wants its customers to purchase more than one policy, since most customers are purchasing additional policies from other companies.

Objective : To sell more insurance products to existing customers.  

Problem : Many customers are purchasing additional policies from other companies.  

Research question : How can customer product awareness be improved to increase cross-selling of insurance products?  

Sub-questions: What is the relationship between product awareness and sales? Which factors determine product awareness?  

Since “product awareness” is the main focus of this study, the theoretical framework should analyze this concept, review previous literature on the subject, and propose theories that discuss the relationship between product awareness and improved sales of other products.

Example 2.

A company is facing a continued decline in its sales and profitability. The main reason for the decline in profitability is poor service, which has resulted in a high level of dissatisfaction among customers and, consequently, a decline in customer loyalty. The management is planning to concentrate on client satisfaction and customer loyalty.

Objective: To provide better service to customers and increase customer loyalty and satisfaction.  

Problem: Continued decrease in sales and profitability.  

Research question: How can customer satisfaction help in increasing sales and profitability?  

Sub-questions: What is the relationship between customer loyalty and sales? Which factors influence the level of satisfaction gained by customers?  

Since customer satisfaction, loyalty, profitability, and sales are the important topics in this example, the theoretical framework should focus on these concepts.  

Benefits of a theoretical framework  

There are several benefits of a theoretical framework in research: [2]

  • Provides a structured approach allowing researchers to organize their thoughts in a coherent way.  
  • Helps to identify gaps in knowledge highlighting areas where further research is needed.  
  • Increases research efficiency by providing a clear direction for research and focusing efforts on relevant data.  
  • Improves the quality of research by providing a rigorous and systematic approach to research, which can increase the likelihood of producing valid and reliable results.  
  • Provides a basis for comparison by providing a common language and conceptual framework for researchers to compare their findings with other research in the field, facilitating the exchange of ideas and the development of new knowledge.  


Frequently Asked Questions 

Q1. How do I develop a theoretical framework? [7]

A1. The following steps can be used for developing a theoretical framework:

  • Identify the research problem and research questions by clearly defining the problem that the research aims to address and identifying the specific questions that the research aims to answer.
  • Review the existing literature to identify the key concepts that have been studied previously. These concepts should be clearly defined and organized into a structure.
  • Develop propositions that describe the relationships between the concepts. These propositions should be based on the existing literature and should be testable.
  • Develop hypotheses that can be tested through data collection and analysis.
  • Test the theoretical framework through data collection and analysis to determine whether the framework is valid and reliable.

Q2. How do I know if I have developed a good theoretical framework or not? [8]

A2. The following checklist could help you answer this question:  

  • Is my theoretical framework clearly seen as emerging from my literature review?  
  • Is it the result of my analysis of the main theories previously studied in my same research field?  
  • Does it represent or is it relevant to the most current state of theoretical knowledge on my topic?  
  • Does the theoretical framework in research present a logical, coherent, and analytical structure that will support my data analysis?  
  • Do the different parts of the theory help analyze the relationships among the variables in my research?  
  • Does the theoretical framework target how I will answer my research questions or test the hypotheses?  
  • Have I documented every source I have used in developing this theoretical framework?
  • Is my theoretical framework a model, a table, a figure, or a description?  
  • Have I explained why this is the appropriate theoretical framework for my data analysis?  

Q3. Can I use multiple theoretical frameworks in a single study?  

A3. Using multiple theoretical frameworks in a single study is acceptable as long as each theory is clearly defined and related to the study. Each theory should also be discussed individually. This approach may, however, be tedious and effort intensive. Therefore, multiple theoretical frameworks should be used only if absolutely necessary for the study.  

Q4. Is it necessary to include a theoretical framework in every research study?  

A4. The theoretical framework connects researchers to existing knowledge. So, including a theoretical framework would help researchers get a clear idea about the research process and help structure their study effectively by clearly defining an objective, a research problem, and a research question.  

Q5. Can a theoretical framework be developed for qualitative research?  

A5. Yes, a theoretical framework can be developed for qualitative research. However, qualitative studies may or may not begin with a theory developed beforehand. When no theory is specified at the outset, a theoretical framework can instead be developed inductively during the data analysis phase; the outcome of this approach is often referred to as an emergent theoretical framework, a theory built from the data that explains the phenomenon without a guiding framework at the start of the study.


Q6. What is the main difference between a literature review and a theoretical framework?

A6. A literature review explores existing studies on a specific topic in order to highlight a gap, which becomes the focus of the current research study. A theoretical framework can be considered the next step in the process, in which the researcher plans a specific conceptual and analytical approach to address the identified gap in the research.

Theoretical frameworks are thus important components of the research process, and researchers should devote ample time to developing a solid theoretical framework so that it can effectively guide their research in a suitable direction. We hope this article has provided useful insight into the concept of theoretical frameworks in research and their benefits.

References  

1. Organizing academic research papers: Theoretical framework. Sacred Heart University Library. Accessed August 4, 2023. https://library.sacredheart.edu/c.php?g=29803&p=185919#:~:text=The%20theoretical%20framework%20is%20the,research%20problem%20under%20study%20exists
2. Salomao A. Understanding what is theoretical framework. Mind the Graph website. Accessed August 5, 2023. https://mindthegraph.com/blog/what-is-theoretical-framework/
3. Theoretical framework—Types, examples, and writing guide. Research Method website. Accessed August 6, 2023. https://researchmethod.net/theoretical-framework/
4. Grant C, Osanloo A. Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your “house.” Administrative Issues Journal: Connecting Education, Practice, and Research. 2014;4(2):12–26. Accessed August 7, 2023. https://files.eric.ed.gov/fulltext/EJ1058505.pdf
5. Difference between conceptual framework and theoretical framework. MIM Learnovate website. Accessed August 7, 2023. https://mimlearnovate.com/difference-between-conceptual-framework-and-theoretical-framework/
6. Example of a theoretical framework—Thesis & dissertation. BachelorPrint website. Accessed August 6, 2023. https://www.bachelorprint.com/dissertation/example-of-a-theoretical-framework/
7. Sample theoretical framework in dissertation and thesis—Overview and example. Students Assignment Help website. Accessed August 6, 2023. https://www.studentsassignmenthelp.co.uk/blogs/sample-dissertation-theoretical-framework/#Example_of_the_theoretical_framework
8. Kivunja C. Distinguishing between theory, theoretical framework, and conceptual framework: A systematic review of lessons from the field. Accessed August 8, 2023. https://files.eric.ed.gov/fulltext/EJ1198682.pdf



Theoretical Framework – Types, Examples and Writing Guide


Definition:

Theoretical framework refers to a set of concepts, theories, ideas, and assumptions that serve as a foundation for understanding a particular phenomenon or problem. It provides a conceptual framework that helps researchers to design and conduct their research, as well as to analyze and interpret their findings.

In research, a theoretical framework explains the relationship between various variables, identifies gaps in existing knowledge, and guides the development of research questions, hypotheses, and methodologies. It also helps to contextualize the research within a broader theoretical perspective, and can be used to guide the interpretation of results and the formulation of recommendations.

Types of Theoretical Framework

The types of theoretical framework are as follows:

Conceptual Framework

This type of framework defines the key concepts and relationships between them. It helps to provide a theoretical foundation for a study or research project.

Deductive Framework

This type of framework starts with a general theory or hypothesis and then uses data to test and refine it. It is often used in quantitative research.

Inductive Framework

This type of framework starts with data and then develops a theory or hypothesis based on the patterns and themes that emerge from the data. It is often used in qualitative research.

Empirical Framework

This type of framework focuses on the collection and analysis of empirical data, such as surveys or experiments. It is often used in scientific research.

Normative Framework

This type of framework defines a set of norms or values that guide behavior or decision-making. It is often used in ethics and social sciences.

Explanatory Framework

This type of framework seeks to explain the underlying mechanisms or causes of a particular phenomenon or behavior. It is often used in psychology and social sciences.

Components of Theoretical Framework

The components of a theoretical framework include:

  • Concepts : The basic building blocks of a theoretical framework. Concepts are abstract ideas or generalizations that represent objects, events, or phenomena.
  • Variables : These are measurable and observable aspects of a concept. In a research context, variables can be manipulated or measured to test hypotheses.
  • Assumptions : These are beliefs or statements that are taken for granted and are not tested in a study. They provide a starting point for developing hypotheses.
  • Propositions : These are statements that explain the relationships between concepts and variables in a theoretical framework.
  • Hypotheses : These are testable predictions that are derived from the theoretical framework. Hypotheses are used to guide data collection and analysis.
  • Constructs : These are abstract concepts that cannot be directly measured but are inferred from observable variables. Constructs provide a way to understand complex phenomena.
  • Models : These are simplified representations of reality that are used to explain, predict, or control a phenomenon.

How to Write Theoretical Framework

A theoretical framework is an essential part of any research study or paper, as it helps to provide a theoretical basis for the research and guide the analysis and interpretation of the data. Here are some steps to help you write a theoretical framework:

  • Identify the key concepts and variables : Start by identifying the main concepts and variables that your research is exploring. These could include things like motivation, behavior, attitudes, or any other relevant concepts.
  • Review relevant literature: Conduct a thorough review of the existing literature in your field to identify key theories and ideas that relate to your research. This will help you to understand the existing knowledge and theories that are relevant to your research and provide a basis for your theoretical framework.
  • Develop a conceptual framework : Based on your literature review, develop a conceptual framework that outlines the key concepts and their relationships. This framework should provide a clear and concise overview of the theoretical perspective that underpins your research.
  • Identify hypotheses and research questions: Based on your conceptual framework, identify the hypotheses and research questions that you want to test or explore in your research.
  • Test your theoretical framework: Once you have developed your theoretical framework, test it by applying it to your research data. This will help you to identify any gaps or weaknesses in your framework and refine it as necessary.
  • Write up your theoretical framework: Finally, write up your theoretical framework in a clear and concise manner, using appropriate terminology and referencing the relevant literature to support your arguments.

Theoretical Framework Examples

Here are some examples of theoretical frameworks:

  • Social Learning Theory : This framework, developed by Albert Bandura, suggests that people learn from their environment, including the behaviors of others, and that behavior is influenced by both external and internal factors.
  • Maslow’s Hierarchy of Needs : Abraham Maslow proposed that human needs are arranged in a hierarchy, with basic physiological needs at the bottom, followed by safety, love and belonging, esteem, and self-actualization at the top. This framework has been used in various fields, including psychology and education.
  • Ecological Systems Theory : This framework, developed by Urie Bronfenbrenner, suggests that a person’s development is influenced by the interaction between the individual and the various environments in which they live, such as family, school, and community.
  • Feminist Theory: This framework examines how gender and power intersect to influence social, cultural, and political issues. It emphasizes the importance of understanding and challenging systems of oppression.
  • Cognitive Behavioral Theory: This framework suggests that our thoughts, beliefs, and attitudes influence our behavior, and that changing our thought patterns can lead to changes in behavior and emotional responses.
  • Attachment Theory: This framework examines the ways in which early relationships with caregivers shape our later relationships and attachment styles.
  • Critical Race Theory : This framework examines how race intersects with other forms of social stratification and oppression to perpetuate inequality and discrimination.

When to Have A Theoretical Framework

The following are some situations in which a theoretical framework is needed:

  • A theoretical framework should be developed when conducting research in any discipline, as it provides a foundation for understanding the research problem and guiding the research process.
  • A theoretical framework is essential when conducting research on complex phenomena, as it helps to organize and structure the research questions, hypotheses, and findings.
  • A theoretical framework should be developed when the research problem requires a deeper understanding of the underlying concepts and principles that govern the phenomenon being studied.
  • A theoretical framework is particularly important when conducting research in social sciences, as it helps to explain the relationships between variables and provides a framework for testing hypotheses.
  • A theoretical framework should be developed when conducting research in applied fields, such as engineering or medicine, as it helps to provide a theoretical basis for the development of new technologies or treatments.
  • A theoretical framework should be developed when conducting research that seeks to address a specific gap in knowledge, as it helps to define the problem and identify potential solutions.
  • A theoretical framework is also important when conducting research that involves the analysis of existing theories or concepts, as it helps to provide a framework for comparing and contrasting different theories and concepts.
  • A theoretical framework should be developed when conducting research that seeks to make predictions or develop generalizations about a particular phenomenon, as it helps to provide a basis for evaluating the accuracy of these predictions or generalizations.
  • Finally, a theoretical framework should be developed when conducting research that seeks to make a contribution to the field, as it helps to situate the research within the broader context of the discipline and identify its significance.

Purpose of Theoretical Framework

The purposes of a theoretical framework include:

  • Providing a conceptual framework for the study: A theoretical framework helps researchers to define and clarify the concepts and variables of interest in their research. It enables researchers to develop a clear and concise definition of the problem, which in turn helps to guide the research process.
  • Guiding the research design: A theoretical framework can guide the selection of research methods, data collection techniques, and data analysis procedures. By outlining the key concepts and assumptions underlying the research questions, the theoretical framework can help researchers to identify the most appropriate research design for their study.
  • Supporting the interpretation of research findings: A theoretical framework provides a framework for interpreting the research findings by helping researchers to make connections between their findings and existing theory. It enables researchers to identify the implications of their findings for theory development and to assess the generalizability of their findings.
  • Enhancing the credibility of the research: A well-developed theoretical framework can enhance the credibility of the research by providing a strong theoretical foundation for the study. It demonstrates that the research is based on a solid understanding of the relevant theory and that the research questions are grounded in a clear conceptual framework.
  • Facilitating communication and collaboration: A theoretical framework provides a common language and conceptual framework for researchers, enabling them to communicate and collaborate more effectively. It helps to ensure that everyone involved in the research is working towards the same goals and is using the same concepts and definitions.

Characteristics of Theoretical Framework

Some of the characteristics of a theoretical framework include:

  • Conceptual clarity: The concepts used in the theoretical framework should be clearly defined and understood by all stakeholders.
  • Logical coherence : The framework should be internally consistent, with each concept and assumption logically connected to the others.
  • Empirical relevance: The framework should be based on empirical evidence and research findings.
  • Parsimony : The framework should be as simple as possible, without sacrificing its ability to explain the phenomenon in question.
  • Flexibility : The framework should be adaptable to new findings and insights.
  • Testability : The framework should be testable through research, with clear hypotheses that can be falsified or supported by data.
  • Applicability : The framework should be useful for practical applications, such as designing interventions or policies.

Advantages of Theoretical Framework

Here are some of the advantages of having a theoretical framework:

  • Provides a clear direction : A theoretical framework helps researchers to identify the key concepts and variables they need to study and the relationships between them. This provides a clear direction for the research and helps researchers to focus their efforts and resources.
  • Increases the validity of the research: A theoretical framework helps to ensure that the research is based on sound theoretical principles and concepts. This increases the validity of the research by ensuring that it is grounded in established knowledge and is not based on arbitrary assumptions.
  • Enables comparisons between studies : A theoretical framework provides a common language and set of concepts that researchers can use to compare and contrast their findings. This helps to build a cumulative body of knowledge and allows researchers to identify patterns and trends across different studies.
  • Helps to generate hypotheses: A theoretical framework provides a basis for generating hypotheses about the relationships between different concepts and variables. This can help to guide the research process and identify areas that require further investigation.
  • Facilitates communication: A theoretical framework provides a common language and set of concepts that researchers can use to communicate their findings to other researchers and to the wider community. This makes it easier for others to understand the research and its implications.



Using the framework method for the analysis of qualitative data in multi-disciplinary health research

Nicola K. Gale,1 Gemma Heath,2 Elaine Cameron,3 Sabina Rashid,4 and Sabi Redwood

1 Health Services Management Centre, University of Birmingham, Park House, 40 Edgbaston Park Road, Birmingham B15 2RT, UK

2 School of Health and Population Sciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK

3 School of Life and Health Sciences, Aston University, Aston Triangle, Birmingham B4 7ET, UK

4 East and North Hertfordshire NHS Trust, Lister Hospital, Coreys Mill Lane, Stevenage SG1 4AB, UK


The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations.

The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study.

Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.

The Framework Method for the management and analysis of qualitative data has been used since the 1980s [ 1 ]. The method originated in large-scale social policy research but is becoming an increasingly popular approach in medical and health research; however, there is some confusion about its potential application and limitations. In this article we discuss when it is appropriate to use the Framework Method and how it compares to other qualitative analysis methods. In particular, we explore how it can be used in multi-disciplinary health research teams. Multi-disciplinary and mixed methods studies are becoming increasingly commonplace in applied health research. As well as disciplines familiar with qualitative research, such as nursing, psychology and sociology, teams often include epidemiologists, health economists, management scientists and others. Furthermore, applied health research often has clinical representation and, increasingly, patient and public involvement [ 2 ]. We argue that while leadership is undoubtedly required from an experienced qualitative methodologist, non-specialists from the wider team can and should be involved in the analysis process. We then present a step-by-step guide to the application of the Framework Method, using a worked example (see Additional File 1) from a published study [ 3 ] to illustrate the main stages of the process. Technical terms are included in the glossary (below). Finally, we discuss the strengths and limitations of the approach.

Glossary of key terms used in the Framework Method

Analytical framework: A set of codes organised into categories that have been jointly developed by researchers involved in analysis that can be used to manage and organise the data. The framework creates a new structure for the data (rather than the full original accounts given by participants) that is helpful to summarize/reduce the data in a way that can support answering the research questions.

Analytic memo: A written investigation of a particular concept, theme or problem, reflecting on emerging issues in the data that captures the analytic process (see Additional file 1 , Section 7).

Categories: During the analysis process, codes are grouped into clusters around similar and interrelated ideas or concepts. Categories and codes are usually arranged in a tree diagram structure in the analytical framework. While categories are closely and explicitly linked to the raw data, developing categories is a way to start the process of abstraction of the data (i.e. towards the general rather than the specific or anecdotal).

Charting: Entering summarized data into the Framework Method matrix (see Additional File 1 , Section 6).

Code: A descriptive or conceptual label that is assigned to excerpts of raw data in a process called ‘coding’ (see Additional File 1 , Section 3).

Data: Qualitative data usually needs to be in textual form before analysis. These texts can either be elicited texts (written specifically for the research, such as food diaries), or extant texts (pre-existing texts, such as meeting minutes, policy documents or weblogs), or can be produced by transcribing interview or focus group data, or creating ‘field’ notes while conducting participant-observation or observing objects or social situations.

Indexing: The systematic application of codes from the agreed analytical framework to the whole dataset (see Additional File 1 , Section 5).

Matrix: A spreadsheet contains numerous cells into which summarized data are entered by codes (columns) and cases (rows) (see Additional File 1 , Section 6).

Themes: Interpretive concepts or propositions that describe or explain aspects of the data, which are the final output of the analysis of the whole dataset. Themes are articulated and developed by interrogating data categories through comparison between and within cases. Usually a number of categories would fall under each theme or sub-theme [ 3 ].

Transcript: A written verbatim (word-for-word) account of a verbal interaction, such as an interview or conversation.

The Framework Method sits within a broad family of analysis methods often termed thematic analysis or qualitative content analysis. These approaches identify commonalities and differences in qualitative data, before focusing on relationships between different parts of the data, thereby seeking to draw descriptive and/or explanatory conclusions clustered around themes. The Framework Method was developed by researchers, Jane Ritchie and Liz Spencer, from the Qualitative Research Unit at the National Centre for Social Research in the United Kingdom in the late 1980s for use in large-scale policy research [ 1 ]. It is now used widely in other areas, including health research [ 3 - 12 ]. Its defining feature is the matrix output: rows (cases), columns (codes) and ‘cells’ of summarised data, providing a structure into which the researcher can systematically reduce the data, in order to analyse it by case and by code [ 1 ]. Most often a ‘case’ is an individual interviewee, but this can be adapted to other units of analysis, such as predefined groups or organisations. While in-depth analyses of key themes can take place across the whole data set, the views of each research participant remain connected to other aspects of their account within the matrix so that the context of the individual’s views is not lost. Comparing and contrasting data is vital to qualitative analysis and the ability to compare with ease data across cases as well as within individual cases is built into the structure and process of the Framework Method.
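To make the matrix logic concrete, here is a minimal Python sketch of a case-by-code structure. The interviewees, codes and summaries are purely hypothetical (they are not drawn from the worked example or Additional File 1); the point is only to show how the same structure supports reading by case (across a row) and by code (down a column).

```python
# Minimal sketch of a Framework Method matrix: rows are cases (interviewees),
# columns are codes, and each cell holds a short summary of that case's data
# for that code. All names, codes and summaries are hypothetical.

framework_matrix = {
    "Interviewee_01": {
        "reluctance_to_seek_help": "Delayed GP visit; 'didn't want to be a bother'.",
        "family_influence": "Sister insisted on the appointment and attends with him.",
    },
    "Interviewee_02": {
        "reluctance_to_seek_help": "Sought help quickly after chest pain.",
        "family_influence": "Lives alone; no family involvement mentioned.",
    },
}

# Analysis by case: read across one row, keeping the individual's context together.
for code, summary in framework_matrix["Interviewee_01"].items():
    print(f"Interviewee_01 | {code}: {summary}")

# Analysis by code: read down one column to compare cases on the same topic.
for case, row in framework_matrix.items():
    print(f"{case} | reluctance_to_seek_help: {row['reluctance_to_seek_help']}")
```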

The Framework Method provides clear steps to follow and produces highly structured outputs of summarised data. It is therefore useful where multiple researchers are working on a project, particularly in multi-disciplinary research teams where not all members have experience of qualitative data analysis, and for managing large data sets where obtaining a holistic, descriptive overview of the entire data set is desirable. However, caution is recommended before selecting the method as it is not a suitable tool for analysing all types of qualitative data or for answering all qualitative research questions, nor is it an ‘easy’ version of qualitative research for quantitative researchers. Importantly, the Framework Method cannot accommodate highly heterogeneous data, i.e. data must cover similar topics or key issues so that it is possible to categorize it. Individual interviewees may, of course, have very different views or experiences in relation to each topic, which can then be compared and contrasted. The Framework Method is most commonly used for the thematic analysis of semi-structured interview transcripts, which is what we focus on in this article, although it could, in principle, be adapted for other types of textual data [ 13 ], including documents, such as meeting minutes or diaries [ 12 ], or field notes from observations [ 10 ].

For quantitative researchers working with qualitative colleagues or when exploring qualitative research for the first time, the nature of the Framework Method is seductive because its methodical processes and ‘spreadsheet’ approach seem more closely aligned to the quantitative paradigm [ 14 ]. Although the Framework Method is a highly systematic method of categorizing and organizing what may seem like unwieldy qualitative data, it is not a panacea for problematic issues commonly associated with qualitative data analysis such as how to make analytic choices and make interpretive strategies visible and auditable. Qualitative research skills are required to appropriately interpret the matrix, and facilitate the generation of descriptions, categories, explanations and typologies. Moreover, reflexivity, rigour and quality are issues that are requisite in the Framework Method just as they are in other qualitative methods. It is therefore essential that studies using the Framework Method for analysis are overseen by an experienced qualitative researcher, though this does not preclude those new to qualitative research from contributing to the analysis as part of a wider research team.

There are a number of approaches to qualitative data analysis, including those that pay close attention to language and how it is being used in social interaction such as discourse analysis [ 15 ] and ethnomethodology [ 16 ]; those that are concerned with experience, meaning and language such as phenomenology [ 17 , 18 ] and narrative methods [ 19 ]; and those that seek to develop theory derived from data through a set of procedures and interconnected stages such as Grounded Theory [ 20 , 21 ]. Many of these approaches are associated with specific disciplines and are underpinned by philosophical ideas which shape the process of analysis [ 22 ]. The Framework Method, however, is not aligned with a particular epistemological, philosophical, or theoretical approach. Rather it is a flexible tool that can be adapted for use with many qualitative approaches that aim to generate themes.

The development of themes is a common feature of qualitative data analysis, involving the systematic search for patterns to generate full descriptions capable of shedding light on the phenomenon under investigation. In particular, many qualitative approaches use the ‘constant comparative method’ , developed as part of Grounded Theory, which involves making systematic comparisons across cases to refine each theme [ 21 , 23 ]. Unlike Grounded Theory, the Framework Method is not necessarily concerned with generating social theory, but can greatly facilitate constant comparative techniques through the review of data across the matrix.

Perhaps because the Framework Method is so obviously systematic, it has often, as other commentators have noted, been conflated with a deductive approach to qualitative analysis [ 13 , 14 ]. However, the tool itself has no allegiance to either inductive or deductive thematic analysis; where the research sits along this inductive-deductive continuum depends on the research question. A question such as, ‘Can patients give an accurate biomedical account of the onset of their cardiovascular disease?’ is essentially a yes/no question (although it may be nuanced by the extent of their account or by appropriate use of terminology) and so requires a deductive approach to both data collection and analysis (e.g. structured or semi-structured interviews and directed qualitative content analysis [ 24 ]). Similarly, a deductive approach may be taken if basing analysis on a pre-existing theory, such as behaviour change theories, for example in the case of a research question such as ‘How does the Theory of Planned Behaviour help explain GP prescribing?’ [ 11 ]. However, a research question such as, ‘How do people construct accounts of the onset of their cardiovascular disease?’ would require a more inductive approach that allows for the unexpected, and permits more socially-located responses [ 25 ] from interviewees that may include matters of cultural beliefs, habits of food preparation, concepts of ‘fate’, or links to other important events in their lives, such as grief, which cannot be predicted by the researcher in advance (e.g. an interviewee-led open-ended interview and grounded theory [ 20 ]). In all these cases, it may be appropriate to use the Framework Method to manage the data. The difference would become apparent in how themes are selected: in the deductive approach, themes and codes are pre-selected based on previous literature, previous theories or the specifics of the research question; whereas in the inductive approach, themes are generated from the data through open (unrestricted) coding, followed by refinement of themes. In many cases, a combined approach is appropriate when the project has some specific issues to explore, but also aims to leave space to discover other unexpected aspects of the participants’ experience or the way they assign meaning to phenomena. In sum, the Framework Method can be adapted for use with deductive, inductive, or combined types of qualitative analysis. However, there are some research questions where analysing data by case and theme is not appropriate and so the Framework Method should be avoided. For instance, depending on the research question, life history data might be better analysed using narrative analysis [ 19 ]; recorded consultations between patients and their healthcare practitioners using conversation analysis [ 26 ]; and documentary data, such as resources for pregnant women, using discourse analysis [ 27 ].

It is not within the scope of this paper to consider study design or data collection in any depth, but before moving on to describe the Framework Method analysis process, it is worth taking a step back to consider briefly what needs to happen before analysis begins. The selection of analysis method should have been considered at the proposal stage of the research and should fit with the research questions and overall aims of the study. Many qualitative studies, particularly ones using inductive analysis, are emergent in nature; this can be a challenge and the researchers can only provide an “imaginative rehearsal” of what is to come [ 28 ]. In mixed methods studies, the role of the qualitative component within the wider goals of the project must also be considered. In the data collection stage, resources must be allocated for properly trained researchers to conduct the qualitative interviewing because it is a highly skilled activity. In some cases, a research team may decide that they would like to use lay people, patients or peers to do the interviews [ 29 - 32 ] and in this case they must be properly trained and mentored which requires time and resources. At this early stage it is also useful to consider whether the team will use Computer Assisted Qualitative Data Analysis Software (CAQDAS), which can assist with data management and analysis.

As any form of qualitative or quantitative analysis is not a purely technical process, but influenced by the characteristics of the researchers and their disciplinary paradigms, critical reflection throughout the research process is paramount, including in the design of the study, the construction or collection of data, and the analysis. All members of the team should keep a research diary, where they record reflexive notes, impressions of the data and thoughts about analysis throughout the process. Experienced qualitative researchers become more skilled at sifting through data and analysing it in a rigorous and reflexive way. They cannot be too attached to certainty, but must remain flexible and adaptive throughout the research in order to generate rich and nuanced findings that embrace and explain the complexity of real social life and can be applied to complex social issues. It is important to remember when using the Framework Method that, unlike quantitative research where data collection and data analysis are strictly sequential and mutually exclusive stages of the research process, in qualitative analysis there is, to a greater or lesser extent depending on the project, ongoing interplay between data collection, analysis, and theory development. For example, new ideas or insights from participants may suggest potentially fruitful lines of enquiry, or close analysis might reveal subtle inconsistencies in an account which require further exploration.

Procedure for analysis

Stage 1: transcription.

A good quality audio recording and, ideally, a verbatim (word for word) transcription of the interview is needed. For Framework Method analysis, it is not necessarily important to include the conventions of dialogue transcriptions which can be difficult to read (e.g. pauses or two people talking simultaneously), because the content is what is of primary interest. Transcripts should have large margins and adequate line spacing for later coding and making notes. The process of transcription is a good opportunity to become immersed in the data and is to be strongly encouraged for new researchers. However, in some projects, the decision may be made that it is a better use of resources to outsource this task to a professional transcriber.

Stage 2: Familiarisation with the interview

Becoming familiar with the whole interview using the audio recording and/or transcript and any contextual or reflective notes that were recorded by the interviewer is a vital stage in interpretation. It can also be helpful to re-listen to all or parts of the audio recording. In multi-disciplinary or large research projects, those involved in analysing the data may be different from those who conducted or transcribed the interviews, which makes this stage particularly important. One margin can be used to record any analytical notes, thoughts or impressions.

Stage 3: Coding

After familiarization, the researcher carefully reads the transcript line by line, applying a paraphrase or label (a ‘code’) that describes what they have interpreted in the passage as important. In more inductive studies, at this stage ‘open coding’ takes place, i.e. coding anything that might be relevant from as many different perspectives as possible. Codes could refer to substantive things (e.g. particular behaviours, incidents or structures), values (e.g. those that inform or underpin certain statements, such as a belief in evidence-based medicine or in patient choice), emotions (e.g. sorrow, frustration, love) and more impressionistic/methodological elements (e.g. interviewee found something difficult to explain, interviewee became emotional, interviewer felt uncomfortable) [ 33 ]. In purely deductive studies, the codes may have been pre-defined (e.g. by an existing theory, or specific areas of interest to the project) so this stage may not be strictly necessary and you could just move straight onto indexing, although it is generally helpful even if you are taking a broadly deductive approach to do some open coding on at least a few of the transcripts to ensure important aspects of the data are not missed. Coding aims to classify all of the data so that it can be compared systematically with other parts of the data set. At least two researchers (or at least one from each discipline or speciality in a multi-disciplinary research team) should independently code the first few transcripts, if feasible. Patients, public involvement representatives or clinicians can also be productively involved at this stage, because they can offer alternative viewpoints thus ensuring that one particular perspective does not dominate. It is vital in inductive coding to look out for the unexpected and not to just code in a literal, descriptive way so the involvement of people from different perspectives can aid greatly in this. As well as getting a holistic impression of what was said, coding line-by-line can often alert the researcher to consider that which may ordinarily remain invisible because it is not clearly expressed or does not ‘fit’ with the rest of the account. In this way the developing analysis is challenged; to reconcile and explain anomalies in the data can make the analysis stronger. Coding can also be done digitally using CAQDAS, which is a useful way to keep track automatically of new codes. However, some researchers prefer to do the early stages of coding with a paper and pen, and only start to use CAQDAS once they reach Stage 5 (see below).
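As a rough illustration of the bookkeeping involved at this stage (not of the interpretive work itself), open codes can be recorded as simple records that tie each label back to its transcript location. The transcript excerpts and code labels below are invented for the sketch.

```python
# Hypothetical open codes applied during line-by-line coding. Each record keeps
# the label attached to its source excerpt so it can be revisited and compared
# when researchers meet to agree a common set of codes.

coded_excerpts = [
    {"transcript": "T01", "line": 12,
     "excerpt": "I just didn't want to bother the doctor again.",
     "code": "reluctance_to_seek_help"},
    {"transcript": "T01", "line": 13,
     "excerpt": "My sister said I should go, so I went.",
     "code": "family_influence"},
    {"transcript": "T01", "line": 27,
     "excerpt": "(long pause) It's hard to put into words, really.",
     "code": "difficulty_articulating_experience"},
]

# List the distinct open codes generated so far, ready for team discussion.
open_codes = sorted({record["code"] for record in coded_excerpts})
print(open_codes)
```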

Stage 4: Developing a working analytical framework

After coding the first few transcripts, all researchers involved should meet to compare the labels they have applied and agree on a set of codes to apply to all subsequent transcripts. Codes can be grouped together into categories (using a tree diagram if helpful), which are then clearly defined. This forms a working analytical framework. It is likely that several iterations of the analytical framework will be required before no additional codes emerge. It is always worth having an ‘other’ code under each category to avoid ignoring data that does not fit; the analytical framework is never ‘final’ until the last transcript has been coded.
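A working analytical framework can be held as a simple tree of categories and their agreed codes. The sketch below reuses the hypothetical codes from the previous stage and includes an 'other' code under each category, as recommended above, so that data which does not yet fit is not ignored; the category and code names are assumptions for illustration only.

```python
# Hypothetical working analytical framework: categories grouping agreed codes.
# The framework is expected to go through several iterations as new codes
# emerge from later transcripts.

analytical_framework = {
    "help_seeking": [
        "reluctance_to_seek_help",
        "family_influence",
        "other_help_seeking",           # catch-all so unexpected data is kept
    ],
    "experience_of_illness": [
        "difficulty_articulating_experience",
        "other_experience_of_illness",  # catch-all for this category
    ],
}

def add_code(framework, category, code):
    """Record a newly agreed code under a category during framework revision."""
    framework.setdefault(category, []).append(code)

# A later analysis meeting agrees an additional code.
add_code(analytical_framework, "help_seeking", "trust_in_clinician")
print(analytical_framework["help_seeking"])
```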

Stage 5: Applying the analytical framework

The working analytical framework is then applied by indexing subsequent transcripts using the existing categories and codes. Each code is usually assigned a number or abbreviation for easy identification (and so the full names of the codes do not have to be written out each time) and written directly onto the transcripts. Computer Assisted Qualitative Data Analysis Software (CAQDAS) is particularly useful at this stage because it can speed up the process and ensures that, at later stages, data is easily retrievable. It is worth noting that unlike software for statistical analyses, which actually carries out the calculations with the correct instruction, putting the data into a qualitative analysis software package does not analyse the data; it is simply an effective way of storing and organising the data so that they are accessible for the analysis process.
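Indexing itself is a researcher-led judgement, but the record-keeping around it can be sketched as follows: each agreed code gets a short identifier, and indexing decisions are stored as (transcript, line range, code) entries so that passages can be retrieved later. All identifiers and entries below are hypothetical.

```python
# Hypothetical short identifiers for the agreed codes, so they can be written
# quickly in transcript margins or stored in CAQDAS.
code_ids = {
    "RSH": "reluctance_to_seek_help",
    "FI": "family_influence",
    "DAE": "difficulty_articulating_experience",
    "TC": "trust_in_clinician",
}

# Indexing decisions made by the researchers, recorded for later retrieval.
# Each entry is (transcript, (start_line, end_line), code identifier).
index = [
    ("T02", (8, 11), "FI"),
    ("T02", (23, 25), "RSH"),
    ("T03", (4, 6), "TC"),
    ("T03", (18, 20), "FI"),
]

# Retrieve every passage indexed under one code across the whole dataset.
family_influence_passages = [entry for entry in index if entry[2] == "FI"]
print(family_influence_passages)
```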

Stage 6: Charting data into the framework matrix

Qualitative data are voluminous (an hour of interview can generate 15–30 pages of text) and being able to manage and summarize (reduce) data is a vital aspect of the analysis process. A spreadsheet is used to generate a matrix and the data are ‘charted’ into the matrix. Charting involves summarizing the data by category from each transcript. Good charting requires an ability to strike a balance between reducing the data on the one hand and retaining the original meanings and ‘feel’ of the interviewees’ words on the other. The chart should include references to interesting or illustrative quotations. These can be tagged automatically if you are using CAQDAS to manage your data (N-Vivo version 9 onwards has the capability to generate framework matrices), or otherwise a capital ‘Q’, an (anonymized) transcript number, page and line reference will suffice. It is helpful in multi-disciplinary teams to compare and contrast styles of summarizing in the early stages of the analysis process to ensure consistency within the team. Any abbreviations used should be agreed by the team. Once members of the team are familiar with the analytical framework and well practised at coding and charting, on average, it will take about half a day per hour-long transcript to reach this stage. In the early stages, it takes much longer.
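Assuming the same hypothetical cases and codes as above, charting can be sketched as filling a spreadsheet-style matrix with short summaries, each tagged with a quotation reference ('Q', an anonymised transcript number, and a page/line reference). The example below simply writes such a matrix out with Python's csv module; the file name and cell contents are illustrative assumptions.

```python
import csv

# Hypothetical charted summaries: one row per case, one column per code.
# Each cell condenses what the interviewee said, with a tagged quotation
# reference so the summary can be traced back to the raw transcript.

codes = ["reluctance_to_seek_help", "family_influence"]
charted = {
    "Interviewee_01": {
        "reluctance_to_seek_help": "Delayed seeing GP for weeks. Q T01 p2 l12",
        "family_influence": "Sister prompted the visit. Q T01 p2 l13",
    },
    "Interviewee_02": {
        "reluctance_to_seek_help": "Sought help the same day. Q T02 p1 l8",
        "family_influence": "No family involvement mentioned.",
    },
}

# Write the framework matrix to a spreadsheet file for charting and review.
with open("framework_matrix.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["case"] + codes)
    for case, row in charted.items():
        writer.writerow([case] + [row.get(code, "") for code in codes])
```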

Stage 7: Interpreting the data

It is useful throughout the research to have a separate notebook or computer file to note down impressions, ideas and early interpretations of the data. It may be worth breaking off at any stage to explore an interesting idea, concept or potential theme by writing an analytic memo [ 20 , 21 ] to then discuss with other members of the research team, including lay and clinical members. Gradually, characteristics of and differences between the data are identified, perhaps generating typologies, interrogating theoretical concepts (either prior concepts or ones emerging from the data) or mapping connections between categories to explore relationships and/or causality. If the data are rich enough, the findings generated through this process can go beyond description of particular cases to explanation of, for example, reasons for the emergence of a phenomenon, predicting how an organisation or other social actor is likely to instigate or respond to a situation, or identifying areas that are not functioning well within an organisation or system. It is worth noting that this stage often takes longer than anticipated and that any project plan should ensure that sufficient time is allocated to meetings and individual researcher time to conduct interpretation and writing up of findings (see Additional file 1 , Section 7).
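The interpretive work itself cannot be automated, but reviewing the matrix can be supported by simple checks. The sketch below, using the hypothetical charted matrix from the previous stage, flags empty cells and lines up one code across all cases so the team can compare accounts side by side in an analysis meeting; it does not attempt any interpretation of the data.

```python
# Simple, hypothetical support for reviewing a framework matrix: the aim is
# only to surface gaps and juxtapose cases, not to interpret the data.

charted = {
    "Interviewee_01": {
        "reluctance_to_seek_help": "Delayed seeing GP for weeks. Q T01 p2 l12",
        "family_influence": "Sister prompted the visit. Q T01 p2 l13",
    },
    "Interviewee_02": {
        "reluctance_to_seek_help": "Sought help the same day. Q T02 p1 l8",
        "family_influence": "",  # empty cell: nothing charted for this code
    },
}

# Flag empty cells, which may point to missing data or a deviant case.
for case, row in charted.items():
    for code, summary in row.items():
        if not summary:
            print(f"Empty cell: {case} / {code}")

# Line up one code across all cases to support comparison between cases.
for case, row in charted.items():
    print(f"{case}: {row['family_influence']}")
```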

The Framework Method has been developed and used successfully in research for over 25 years, and has recently become a popular analysis method in qualitative health research. The issue of how to assess quality in qualitative research has been highly debated [ 20 , 34 - 40 ], but ensuring rigour and transparency in analysis is a vital component. There are, of course, many ways to do this but in the Framework Method the following are helpful:

•Summarizing the data during charting, as well as being a practical way to reduce the data, means that all members of a multi-disciplinary team, including lay, clinical and (quantitative) academic members can engage with the data and offer their perspectives during the analysis process without necessarily needing to read all the transcripts or be involved in the more technical parts of analysis.

•Charting also ensures that researchers pay close attention to describing the data using each participant’s own subjective frames and expressions in the first instance, before moving onto interpretation.

•The summarized data is kept within the wider context of each case, thereby encouraging thick description that pays attention to complex layers of meaning and understanding [ 38 ].

•The matrix structure is visually straightforward and can facilitate recognition of patterns in the data by any member of the research team, including through drawing attention to contradictory data, deviant cases or empty cells.

•The systematic procedure (described in this article) makes it easy to follow, even for multi-disciplinary teams and/or with large data sets.

•It is flexible enough that non-interview data (such as field notes taken during the interview or reflexive considerations) can be included in the matrix.

•It is not aligned with a particular epistemological viewpoint or theoretical approach and therefore can be adapted for use in inductive or deductive analysis or a combination of the two (e.g. using pre-existing theoretical constructs deductively, then revising the theory with inductive aspects; or using an inductive approach to identify themes in the data, before returning to the literature and using theories deductively to help further explain certain themes).

•It is easy to identify relevant data extracts to illustrate themes and to check whether there is sufficient evidence for a proposed theme.

•Finally, there is a clear audit trail from original raw data to final themes, including the illustrative quotes.

There are also a number of potential pitfalls to this approach:

•The systematic approach and matrix format, as we noted in the background, is intuitively appealing to those trained quantitatively but the ‘spreadsheet’ look perhaps further increases the temptation for those without an in-depth understanding of qualitative research to attempt to quantify qualitative data (e.g. “13 out of 20 participants said X”). This kind of statement is clearly meaningless because the sampling in qualitative research is not designed to be representative of a wider population, but purposive to capture diversity around a phenomenon [ 41 ].

•Like all qualitative analysis methods, the Framework Method is time consuming and resource-intensive. When involving multiple stakeholders and disciplines in the analysis and interpretation of the data, as is good practice in applied health research, the time needed is extended. This time needs to be factored into the project proposal at the pre-funding stage.

•There is a high training component to successfully using the method in a new multi-disciplinary team. Depending on their role in the analysis, members of the research team may have to learn how to code, index, and chart data, to think reflexively about how their identities and experience affect the analysis process, and/or they may have to learn about the methods of generalisation (i.e. analytic generalisation and transferability, rather than statistical generalisation [ 41 ]) to help to interpret legitimately the meaning and significance of the data.

While the Framework Method is amenable to the participation of non-experts in data analysis, it is critical to the successful use of the method that an experienced qualitative researcher leads the project (even if the overall lead for a large mixed methods study is a different person). The qualitative lead would ideally be joined by other researchers with at least some prior training in or experience of qualitative analysis. The responsibilities of the lead qualitative researcher are: to contribute to study design, project timelines and resource planning; to mentor junior qualitative researchers; to train clinical, lay and other (non-qualitative) academics to contribute as appropriate to the analysis process; to facilitate analysis meetings in a way that encourages critical and reflexive engagement with the data and other team members; and finally to lead the write-up of the study.

We have argued that Framework Method studies can be conducted by multi-disciplinary research teams that include, for example, healthcare professionals, psychologists, sociologists, economists, and lay people/service users. The inclusion of so many different perspectives means that decision-making in the analysis process can be very time consuming and resource-intensive. It may require extensive, reflexive and critical dialogue about how the ideas expressed by interviewees and identified in the transcript are related to pre-existing concepts and theories from each discipline, and to the real ‘problems’ in the health system that the project is addressing. This kind of team effort is, however, an excellent forum for driving forward interdisciplinary collaboration, as well as clinical and lay involvement in research, to ensure that ‘the whole is greater than the sum of the parts’, by enhancing the credibility and relevance of the findings.

The Framework Method is appropriate for thematic analysis of textual data, particularly interview transcripts, where it is important to be able to compare and contrast data by themes across many cases, while also situating each perspective in context by retaining the connection to other aspects of each individual’s account. Experienced qualitative researchers should lead and facilitate all aspects of the analysis, although the Framework Method’s systematic approach makes it suitable for involving all members of a multi-disciplinary team. An open, critical and reflexive approach from all team members is essential for rigorous qualitative analysis.

Acceptance of the complexity of real life health systems and the existence of multiple perspectives on health issues is necessary to produce high quality qualitative research. If done well, qualitative studies can shed explanatory and predictive light on important phenomena, relate constructively to quantitative parts of a larger study, and contribute to the improvement of health services and development of health policy. The Framework Method, when selected and implemented appropriately, can be a suitable tool for achieving these aims through producing credible and relevant findings.

•The Framework Method is an excellent tool for supporting thematic (qualitative content) analysis because it provides a systematic model for managing and mapping the data.

•The Framework Method is most suitable for analysis of interview data, where it is desirable to generate themes by making comparisons within and between cases.

•The management of large data sets is facilitated by the Framework Method as its matrix form provides an intuitively structured overview of summarised data.

•The clear, step-by-step process of the Framework Method makes it suitable for interdisciplinary and collaborative projects.

•The use of the method should be led and facilitated by an experienced qualitative researcher.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors were involved in the development of the concept of the article and drafting the article. NG wrote the first draft of the article, GH and EC prepared the text and figures related to the illustrative example, SRa did the literature search to identify if there were any similar articles currently available and contributed to drafting of the article, and SRe contributed to drafting of the article and the illustrative example. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/13/117/prepub

Supplementary Material

Illustrative Example of the use of the Framework Method.

Acknowledgments

All authors were funded by the National Institute for Health Research (NIHR) through the Collaborations for Leadership in Applied Health Research and Care for Birmingham and Black Country (CLAHRC-BBC) programme. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

References

1. Ritchie J, Lewis J. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003.
2. Ives J, Damery S, Redwood S. PPI, paradoxes and Plato: who's sailing the ship? J Med Ethics. 2013;39(3):181–185. doi: 10.1136/medethics-2011-100150.
3. Heath G, Cameron E, Cummins C, Greenfield S, Pattison H, Kelly D, Redwood S. Paediatric ‘care closer to home’: stake-holder views and barriers to implementation. Health Place. 2012;18(5):1068–1073. doi: 10.1016/j.healthplace.2012.05.003.
4. Elkington H, White P, Addington-Hall J, Higgs R, Petternari C. The last year of life of COPD: a qualitative study of symptoms and services. Respir Med. 2004;98(5):439–445. doi: 10.1016/j.rmed.2003.11.006.
5. Murtagh J, Dixey R, Rudolf M. A qualitative investigation into the levers and barriers to weight loss in children: opinions of obese children. Archives Dis Child. 2006;91(11):920–923. doi: 10.1136/adc.2005.085712.
6. Barnard M, Webster S, O’Connor W, Jones A, Donmall M. The drug treatment outcomes research study (DTORS): qualitative study. London: Home Office; 2009.
7. Ayatollahi H, Bath PA, Goodacre S. Factors influencing the use of IT in the emergency department: a qualitative study. Health Inform J. 2010;16(3):189–200. doi: 10.1177/1460458210377480.
8. Sheard L, Prout H, Dowding D, Noble S, Watt I, Maraveyas A, Johnson M. Barriers to the diagnosis and treatment of venous thromboembolism in advanced cancer patients: a qualitative study. Palliative Med. 2012;27(2):339–348.
9. Ellis J, Wagland R, Tishelman C, Williams ML, Bailey CD, Haines J, Caress A, Lorigan P, Smith JA, Booton R, et al. Considerations in developing and delivering a nonpharmacological intervention for symptom management in lung cancer: the views of patients and informal caregivers. J Pain Symptom Manag. 2012;44(6):831–842. doi: 10.1016/j.jpainsymman.2011.12.274.
10. Gale N, Sultan H. Telehealth as ‘peace of mind’: embodiment, emotions and the home as the primary health space for people with chronic obstructive pulmonary disorder. Health Place. 2013;21:140–147.
11. Rashidian A, Eccles MP, Russell I. Falling on stony ground? A qualitative study of implementation of clinical guidelines’ prescribing recommendations in primary care. Health Policy. 2008;85(2):148–161. doi: 10.1016/j.healthpol.2007.07.011.
12. Jones RK. The unsolicited diary as a qualitative research tool for advanced research capacity in the field of health and illness. Qualitative Health Res. 2000;10(4):555–567. doi: 10.1177/104973200129118543.
13. Pope C, Ziebland S, Mays N. Analysing qualitative data. British Med J. 2000;320:114–116. doi: 10.1136/bmj.320.7227.114.
14. Pope C, Mays N. Critical reflections on the rise of qualitative research. British Med J. 2009;339:737–739.
15. Fairclough N. Critical discourse analysis: the critical study of language. London: Longman; 2010.
16. Garfinkel H. Ethnomethodology’s program. Soc Psychol Quarter. 1996;59(1):5–21. doi: 10.2307/2787116.
17. Merleau-Ponty M. The phenomenology of perception. London: Routledge and Kegan Paul; 1962.
18. Svenaeus F. The phenomenology of health and illness. In: Handbook of phenomenology and medicine. Netherlands: Springer; 2001. pp. 87–108.
19. Reissmann CK. Narrative methods for the human sciences. London: Sage; 2008.
20. Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.
21. Glaser BG, Strauss AL. The discovery of grounded theory. Chicago: Aldine; 1967.
22. Crotty M. The foundations of social research: meaning and perspective in the research process. London: Sage; 1998.
23. Boeije H. A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002;36(4):391–409. doi: 10.1023/A:1020909529486.
24. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. doi: 10.1177/1049732305276687.
25. Redwood S, Gale NK, Greenfield S. ‘You give us rangoli, we give you talk’: using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012;12(1):7. doi: 10.1186/1471-2288-12-7.
26. Mishler EG. The struggle between the voice of medicine and the voice of the lifeworld. In: Conrad P, Kern R, editors. The sociology of health and illness: critical perspectives. 3rd ed. New York: St Martins Press; 1990.
27. Hodges BD, Kuper A, Reeves S. Discourse analysis. British Med J. 2008;337:570–572. doi: 10.1136/bmj.39370.701782.DE.
28. Sandelowski M, Barroso J. Writing the proposal for a qualitative research methodology project. Qual Health Res. 2003;13(6):781–820. doi: 10.1177/1049732303013006003.
29. Ellins J. It’s better together: involving older people in research. HSMC Newsletter Focus Serv Users Publ. 2010;16(1):4.
30. Phillimore J, Goodson L, Hennessy D, Ergun E. Empowering Birmingham’s migrant and refugee community organisations: making a difference. York: Joseph Rowntree Foundation; 2009.
31. Leamy M, Clough R. How older people became researchers. York: Joseph Rowntree Foundation; 2006.
32. Glasby J, Miller R, Ellins J, Durose J, Davidson D, McIver S, Littlechild R, Tanner D, Snelling I, Spence K. Understanding and improving transitions of older people: a user and carer centred approach. London: The Stationery Office; 2012. (Final report, NIHR service delivery and organisation programme).
33. Saldaña J. The coding manual for qualitative researchers. London: Sage; 2009.
34. Lincoln YS. Emerging criteria for quality in qualitative and interpretive research. Qual Inquiry. 1995;1(3):275–289. doi: 10.1177/107780049500100301.
35. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. British Med J. 2000;320(7226):50. doi: 10.1136/bmj.320.7226.50.
36. Seale C. Quality in qualitative research. Qual Inquiry. 1999;5(4):465–478. doi: 10.1177/107780049900500402.
37. Dingwall R, Murphy E, Watson P, Greatbatch D, Parker S. Catching goldfish: quality in qualitative research. J Health Serv Res Policy. 1998;3(3):167–172.
  • Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998; 8 (3):341–351. doi: 10.1177/104973239800800305. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Int J Qual Methods. 2008; 1 (2):13–22. [ Google Scholar ]
  • Smith JA. Reflecting on the development of interpretative phenomenological analysis and its contribution to qualitative research in psychology. Qual Res Psychol. 2004; 1 (1):39–54. [ Google Scholar ]
  • Polit DF, Beck CT. Generalization in quantitative and qualitative research: Myths and strategies. Int J Nurs Studies. 2010; 47 (11):1451–1458. doi: 10.1016/j.ijnurstu.2010.06.004. [ PubMed ] [ CrossRef ] [ Google Scholar ]

Framework Analysis: A Qualitative Methodology for Applied Policy Research

4 Journal of Administration and Governance 72 (2009)

8 Pages Posted: 9 Apr 2016

Aashish Srivastava

Monash University - Department of Business Law & Taxation

S Bruce Thomson

MacEwan University

Date Written: 2 Jan, 2009

Policies and procedures govern organizations, whether they are private or public, for-profit or not-for-profit. Such policies and procedures are reviewed periodically to ensure optimum efficiency within the organization. Framework analysis is a qualitative method that is aptly suited to applied policy research. It is best adapted to research that has specific questions, a limited time frame, a pre-designed sample and a priori issues. In the analysis, data are sifted, charted and sorted in accordance with key issues and themes using five steps: familiarization; identifying a thematic framework; indexing; charting; and mapping and interpretation. Framework analysis provides an excellent tool to assess policies and procedures from the perspective of the very people they affect.
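To make the five analysis steps more tangible, the sketch below shows how the indexing and charting stages might be represented programmatically. It is only a minimal illustration: the thematic framework, case identifiers, and coded excerpts are invented placeholders, not data from any study.

```python
# Minimal sketch of the "indexing" and "charting" steps of framework analysis.
# Themes, cases and excerpt summaries below are illustrative placeholders.
import pandas as pd

# A priori thematic framework (identified after familiarization with the data)
themes = ["access to services", "perceived barriers", "suggested improvements"]

# Indexing: each coded excerpt is tagged with a case ID and a theme
indexed_excerpts = [
    {"case": "Participant 01", "theme": "access to services",
     "summary": "travels 40 minutes to nearest clinic"},
    {"case": "Participant 01", "theme": "perceived barriers",
     "summary": "cost of transport limits attendance"},
    {"case": "Participant 02", "theme": "suggested improvements",
     "summary": "would prefer evening appointments"},
]

# Charting: rearrange the indexed excerpts into a case-by-theme framework matrix
chart = (pd.DataFrame(indexed_excerpts)
           .groupby(["case", "theme"])["summary"]
           .apply("; ".join)
           .unstack(fill_value="")
           .reindex(columns=themes, fill_value=""))

print(chart)  # mapping and interpretation then proceed from this matrix
```

Mapping and interpretation would then be carried out by reading across the rows (cases) and down the columns (themes) of this matrix.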


  • Published: 19 June 2024

A pathologist–AI collaboration framework for enhancing diagnostic accuracies and efficiencies

  • Zhi Huang   ORCID: orcid.org/0000-0001-6982-8285 1 , 2 ,
  • Eric Yang 1 ,
  • Jeanne Shen 1 ,
  • Dita Gratzinger   ORCID: orcid.org/0000-0002-9182-8123 1 ,
  • Frederick Eyerer 1 ,
  • Brooke Liang   ORCID: orcid.org/0000-0002-8823-2804 1 ,
  • Jeffrey Nirschl   ORCID: orcid.org/0000-0001-6857-341X 1 ,
  • David Bingham 1 ,
  • Alex M. Dussaq 1 ,
  • Christian Kunder   ORCID: orcid.org/0000-0003-1514-7550 1 ,
  • Rebecca Rojansky 1 ,
  • Aubre Gilbert 1 ,
  • Alexandra L. Chang-Graham 1 ,
  • Brooke E. Howitt 1 ,
  • Ying Liu 1 ,
  • Emily E. Ryan 1 ,
  • Troy B. Tenney 1 ,
  • Xiaoming Zhang 1 ,
  • Ann Folkins 1 ,
  • Edward J. Fox 1 ,
  • Kathleen S. Montine 1 ,
  • Thomas J. Montine   ORCID: orcid.org/0000-0002-1346-2728 1 &
  • James Zou   ORCID: orcid.org/0000-0001-8880-4764 2  

Nature Biomedical Engineering (2024)


In pathology, the deployment of artificial intelligence (AI) in clinical settings is constrained by limitations in data collection and in model transparency and interpretability. Here we describe a digital pathology framework, nuclei.io, that incorporates active learning and human-in-the-loop real-time feedback for the rapid creation of diverse datasets and models. We validate the effectiveness of the framework via two crossover user studies that leveraged collaboration between the AI and the pathologist, including the identification of plasma cells in endometrial biopsies and the detection of colorectal cancer metastasis in lymph nodes. In both studies, nuclei.io yielded considerable diagnostic performance improvements. Collaboration between clinicians and AI will aid digital pathology by enhancing accuracies and efficiencies.
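As a rough illustration of the active-learning, human-in-the-loop idea summarized above, the sketch below shows a generic uncertainty-sampling loop. It is not the nuclei.io implementation; the feature vectors, classifier choice, and the automatic stand-in for the pathologist's labels are all assumptions made for the example.

```python
# Illustrative uncertainty-sampling loop in the spirit of the human-in-the-loop
# workflow described above; this is NOT the nuclei.io code. Nucleus features
# and the "ground truth" oracle are synthetic stand-ins for a pathologist.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 32))          # e.g. per-nucleus morphology features
true_labels = (features[:, 0] > 0).astype(int)  # hidden ground truth (placeholder)

labeled_idx = list(rng.choice(len(features), size=10, replace=False))
unlabeled_idx = [i for i in range(len(features)) if i not in labeled_idx]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for round_ in range(5):
    clf.fit(features[labeled_idx], true_labels[labeled_idx])
    # Rank unlabeled nuclei by prediction uncertainty (least confident first)
    proba = clf.predict_proba(features[unlabeled_idx])
    uncertainty = 1.0 - proba.max(axis=1)
    query = unlabeled_idx[int(np.argmax(uncertainty))]
    # In the real workflow, the pathologist would label this nucleus interactively
    labeled_idx.append(query)
    unlabeled_idx.remove(query)
```

The design intent of such a loop is that the expert's labelling effort is concentrated on the examples the current model finds most ambiguous, which is what allows small, targeted datasets to be built quickly.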



Data availability

The data supporting the results in this study are available within the paper and its Supplementary Information. The deidentified nuclei image patches and pathologists’ annotations are available at https://huangzhii.github.io/nuclei-HAI . Source data are provided with this paper.

Code availability

The source code of nuclei.io is available at https://huangzhii.github.io/nuclei-HAI .


Acknowledgements

J.Z. is supported by the Chan-Zuckerberg Biohub Investigator Award. We thank M. Yuksekgonul and F. Bianchi for their helpful suggestions in improving our manuscript.

Author information

Authors and affiliations

Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA

Zhi Huang, Eric Yang, Jeanne Shen, Dita Gratzinger, Frederick Eyerer, Brooke Liang, Jeffrey Nirschl, David Bingham, Alex M. Dussaq, Christian Kunder, Rebecca Rojansky, Aubre Gilbert, Alexandra L. Chang-Graham, Brooke E. Howitt, Ying Liu, Emily E. Ryan, Troy B. Tenney, Xiaoming Zhang, Ann Folkins, Edward J. Fox, Kathleen S. Montine & Thomas J. Montine

Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA

Zhi Huang & James Zou


Contributions

Z.H. conducted study design, software development, experimental setup, data analysis, data visualization and manuscript writing. E.Y. provided numerous insights and participated in the PC study. J.S. provided numerous insights and participated in the CRC LN study. D.G. provided feedback on both studies and participated in the CRC LN study. F.E. helped collect data for the PC study and participated in it. B.L. and J.N. participated in both studies. D.B., A.M.D., C.K. and R.R. participated in the most time-consuming CRC LN study. A.G., A.L.C.-G., B.E.H., Y.L., E.E.R., T.B.T. and X.Z. participated in the second most time-consuming PC study. A.F. helped with data collection. E.J.F. and K.S.M. were partially involved in designing the study. T.J.M. and J.Z. oversaw the project, conducted study design, experimental setup, data analysis and manuscript writing.

Corresponding authors

Correspondence to Thomas J. Montine or James Zou .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Biomedical Engineering thanks Jakob Nikolas Kather and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


Extended data

Extended Data Fig. 1: Time comparison for colorectal cancer lymph node identification study.

(a) Overall time comparison between AI-assisted mode and unassisted mode. (b) Time comparison between AI-assisted mode and unassisted mode compared within lymph node positive cases (LN+) and lymph node negative cases (LN-). (c) Time comparison between AI-assisted mode and unassisted mode compared across the 8 pathologists. Note: A few slides were missed/skipped by some pathologists during the experiments and were thus excluded from the final comparison, leading to a reduced sample size (N < 137). (d) Time comparison between AI-assisted mode and unassisted mode stratified by different pathologist groups and lymph node status. P-values were calculated using a two-sided t-test without adjustment. For the boxplots, the interior horizontal line represents the median value, the upper and lower box edges represent the 75th and 25th percentile, and the upper and lower bars represent the 90th and 10th percentiles, respectively.

Source data

Extended Data Fig. 2: A lymph node from an experimental slide.

The experimental slide is used for evaluation, with tumor regions highlighted in green rectangles, and tumor cells highlighted in red scatters.

Extended Data Fig. 3: Evaluating individualized model performance against inspection errors (false negatives).

(a) Approach to calculate the ratio of positive nuclei inside the tumor region (green contour) to the lymph node; this ratio is also known as sensitivity (TP/P) (b) Comparison between the ratio of positive nuclei inside tumor region to lymph node when false negatives appear. The tumor regions were manually annotated for all lymph node slides. Abbreviations: lymph node (LN), isolated tumor cells (ITC), micro-metastasis (micromet), macro-metastasis (macromet). P-values were calculated using a two-sided Spearman test without adjustment in Python ‘scipy’ package. For the boxplots, the interior horizontal line represents the median value, the upper and lower box edges represent the 75th and 25th percentile, and the upper and lower bars represent the 90th and 10th percentiles, respectively.

Extended Data Fig. 4: A screenshot of the plasma cell classifier applied to an external slide from colorectal tissue.

In the screenshot, green squares are generated by the program, which are the prediction results for potential plasma cells (N = 27). Upon further manual verification, we highlighted five potential false positives with red circles.

Supplementary information

Supplementary figures and tables; Reporting Summary; Source data for Figs. 2–5 and Extended Data Figs. 1 and 3.


About this article

Cite this article

Huang, Z., Yang, E., Shen, J. et al. A pathologist–AI collaboration framework for enhancing diagnostic accuracies and efficiencies. Nat. Biomed. Eng. (2024). https://doi.org/10.1038/s41551-024-01223-5


Received : 09 June 2023

Accepted : 03 May 2024

Published : 19 June 2024

DOI : https://doi.org/10.1038/s41551-024-01223-5



  • Open access
  • Published: 22 June 2024

Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis

  • Yusuf Brima 1 &
  • Marcellin Atemkeng 2  

BioData Mining, volume 17, Article number: 18 (2024)


Deep learning shows great promise for medical image analysis but often lacks explainability, hindering its adoption in healthcare. Attribution techniques that explain model reasoning can potentially increase trust in deep learning among clinical stakeholders. In the literature, much of the research on attribution in medical imaging focuses on visual inspection rather than statistical quantitative analysis.

In this paper, we proposed an image-based saliency framework to enhance the explainability of deep learning models in medical image analysis. We use adaptive path-based gradient integration, gradient-free techniques, and class activation mapping along with its derivatives to attribute predictions from brain tumor MRI and COVID-19 chest X-ray datasets made by recent deep convolutional neural network models.

The proposed framework integrates qualitative and statistical quantitative assessments, employing Accuracy Information Curves (AICs) and Softmax Information Curves (SICs) to measure the effectiveness of saliency methods in retaining critical image information and their correlation with model predictions. Visual inspections indicate that methods such as ScoreCAM, XRAI, GradCAM, and GradCAM++ consistently produce focused and clinically interpretable attribution maps. These methods highlighted possible biomarkers, exposed model biases, and offered insights into the links between input features and predictions, demonstrating their ability to elucidate model reasoning on these datasets. Empirical evaluations reveal that ScoreCAM and XRAI are particularly effective in retaining relevant image regions, as reflected in their higher AUC values. However, SICs highlight variability, with instances of random saliency masks outperforming established methods, emphasizing the need for combining visual and empirical metrics for a comprehensive evaluation.

The results underscore the importance of selecting appropriate saliency methods for specific medical imaging tasks and suggest that combining qualitative and quantitative approaches can enhance the transparency, trustworthiness, and clinical adoption of deep learning models in healthcare. This study advances model explainability to increase trust in deep learning among healthcare stakeholders by revealing the rationale behind predictions. Future research should refine empirical metrics for stability and reliability, include more diverse imaging modalities, and focus on improving model explainability to support clinical decision-making.


The field of medical image analysis has seen significant advancements in explainability methods for deep learning (DL) models, driven by the imperative for trustworthy artificial intelligence systems in healthcare [ 1 ]. Traditional medical imaging modalities like Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Positron Emission Tomography (PET), Mammography, Ultrasound, and X-ray play a crucial role in disease detection and diagnosis, often relying on the expertise of radiologists and physicians [ 2 ]. However, the healthcare field faces a growing demand for skilled professionals, leading to potential fatigue and highlighting the need for computer-aided diagnostic (CAD) tools. The rapid advancements in DL architectures and compute have fueled significant progress in automated medical image analysis [ 3 , 4 , 5 , 6 , 7 ]. The maturation of DL offers a promising solution, accelerating the adoption of computer-assisted systems to support experts and reduce reliance on manual analysis. DL holds particular promise for democratizing healthcare globally by alleviating the cost burden associated with scarce expertise [ 8 ]. However, successful clinical adoption hinges on establishing trust in the robustness and explainability of these models [ 9 ]. Despite their inherent complexity, DL models can be illuminated to understand their inference mechanisms, that is, how they process medical images to generate predictions. An adjacent line of work, interpretability, focuses on understanding the inner workings of the models, while explainability focuses on explaining the decisions made by these models. Explainable models enable a human-in-the-loop approach, enhancing diagnostic performance through collaboration between domain experts and artificial intelligence.

Various techniques have been proposed, each with distinct advantages and limitations. Concept learning, for example, facilitates multi-stage prediction by leveraging high-level concepts. Studies such as [ 10 , 11 , 12 ] illustrate the potential of concept learning in disease categorization. However, these methods often require extensive annotation to define concepts accurately and risk information leakage if concepts do not align well with the disease pathology. Case-Based Models (CBMs) learn class-specific, disentangled representations and feature mappings, achieving final classification through similarity measurements between input images and stored base templates [ 13 , 14 , 15 ]. While CBMs are robust to noise and compression artifacts, their training is complex, particularly for the large and diverse datasets typical of medical imaging. Counterfactual explanation methods generate pseudo-realistic perturbations of input images to produce opposite predictions, aiming to identify influential features for the model’s original prediction. However, generating realistic perturbations for medical images, which often contain subtle anatomical details, is challenging and can lead to misleading explanations [ 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 ]. Unrealistic perturbations compromise the trustworthiness of these explanations. Another approach involves visualizing internal network representations of learned features in CNN kernels [ 24 ]. Interpreting these feature maps in the context of medical image analysis is difficult due to the abstract nature of the features learned by DL models [ 25 , 26 ]. This abstraction challenges human experts in deriving clinically meaningful insights.

Attribution maps are visual representations that highlight regions of an image most relevant to the predictions made by a DL model. Serving as potent post-hoc explainability tools, these maps provide crucial insights into how models make decisions based on input images. Several studies have demonstrated the application of attribution maps in medical imaging tasks. For instance, Bohle et al. [ 27 ] utilized layer-wise relevance propagation to elucidate deep neural network decisions in MRI-based Alzheimer’s disease classification. Camalan et al. [ 28 ] employed a deep CNN-based Grad-CAM approach for classifying oral lesions in clinical photographs. Similarly, Kermany et al. [ 29 ] applied Grad-CAM for oral dysplasia classification. Shi et al. presented an explainable attention-based model for COVID-19 automatic diagnosis, showcasing the integration of attention mechanisms to improve explainability in radiographic imaging [ 30 ]. Another study by Shi et al. introduced an attention transfer deep neural network for COVID-19 automatic diagnosis, further enhancing the explainability and performance of diagnostic models [ 31 ]. Recently, Nhlapho et al. [ 32 ] presented an overview of select image-based attribution methods for brain tumor detection, though their approach lacked ground-truth segmentation masks and did not quantitatively evaluate the chosen saliency methods.

Building on these efforts, our research leverages both gradient-based and gradient-free image-based saliency methods. However, the deployment of attribution maps alone is insufficient for establishing comprehensive model explainability. A rigorous evaluation framework is essential. We propose a comprehensive evaluation framework that extends beyond qualitative assessment. This framework includes metrics specifically designed to evaluate image-based saliency methods. By incorporating performance information curves (PICs) such as Accuracy Information Curves (AICs) and Softmax Information Curves (SICs), we objectively assess the correlation between saliency map intensity and model predictions. This robust evaluation aims to enhance the transparency and trustworthiness of DL models in clinical settings. Given this context, this paper centers on How effective are state-of-the-art (SoTA) image-based saliency methods in aiding the explainability of DL models for medical image analysis tasks? By investigating this question, we aim to contribute to the broader effort of enhancing the trustworthiness, transparency, and reliability of DL applications in healthcare.

To this end, we leverage the proposed framework to systematically analyze model predictions on brain tumor MRI [ 33 ] and COVID-19 chest X-ray [ 34 ] datasets. Resulting attribution maps highlight the salient features within the input images that most significantly influence the model’s predictions. By evaluating these techniques both qualitatively and quantitatively across different SoTA DL architectures and the aforementioned medical imaging modalities, we aim to assess their effectiveness in promoting explainability. Our assessment is focused on several key aspects:

Clarity of Insights: Do these saliency methods provide clear non-spurious and explainable insights into the relationship between medical image features and model predictions? We achieve this assessment by comparing the highlighted features in the attribution maps with the known anatomical structures and disease signatures relevant to the specific medical imaging task (e.g., brain tumor location in MRI).

Biomarker Identification: Can these techniques aid in identifying potential biomarkers for disease detection or classification? We investigate whether the saliency methods consistently highlight specific image features that correlate with known or emerging disease biomarkers. This analysis can provide valuable insights into potential new avenues for clinical research.

Model Bias Detection: Do saliency methods help uncover potential biases within the DL models used for medical image analysis? We explore whether the saliency maps reveal a consistent focus on irrelevant features or artifacts that are not clinically meaningful. This analysis can help identify potential biases in the training data or model architecture that may require mitigation strategies.

Quantitative Effectiveness: How quantitatively effective are these methods in capturing the relationship between image features and model predictions? We explore this by employing PICs such as AICs and SICs. These metrics assess the correlation between the saliency map intensity and the model’s accuracy or class probabilities.

Contributions

We proposed a comprehensive framework to evaluate SoTA image-based saliency methods applied to Deep Convolutional Neural Networks (CNNs) for medical image classification tasks. Our study included MRI and X-ray modalities, focusing on tasks such as brain tumor classification and COVID-19 detection within these respective imaging techniques. For a novel quantitative evaluation, beyond the visual inspection of saliency maps, we used AICs and SICs to measure the effectiveness of the saliency methods. AICs measure the relationship between the model’s predicted accuracy and the intensity of the saliency map. A strong correlation between high-intensity areas on the saliency map and high model accuracy indicates that the method effectively emphasizes relevant image features. Meanwhile, SICs examine the link between the saliency map and the model’s class probabilities (softmax outputs). An effective saliency method should highlight areas that guide the model toward the correct classification, corresponding to the disease’s localized region in the image.

To our knowledge, this study is the first empirical investigation that uses AICs and SICs to assess saliency methods in medical image analysis using DL. This offers a solid and objective framework for determining the efficacy of saliency methods in elucidating the decision-making mechanisms of DL models for classification and detection tasks in medical imaging.

Paper outline

The paper is organized as follows. Materials and methods  section describes the materials and methods employed in this paper. Results  section presents experimental results on two datasets. Conclusion  section concludes and proposes future directions.

Materials and methods

This section introduces the deep CNN models used for conducting experiments. We also detail the training process for these models and present our proposed framework, which provides an in-depth explanation of image-based saliency methods and their direct applications to DL-based models in medical image analysis.

We use two medical image data modalities to test the attribution framework. The choice of the two modalities depends on the availability of data. Other types of modalities are also applicable to the attribution framework. We leave this for future work.

The brain tumor MRI dataset [ 33 ] is used. MRI data typically comprise a 3D tensor; however, the dataset provided in [ 33 ] is transformed from 3D tensors into 2D slices. Specifically, it includes contrast-enhanced MRI (CE-MRI) T1-weighted images, amounting to 3064 slices obtained from 233 patients: 708 Meningiomas, 1426 Gliomas, and 930 Pituitary tumors. In each slice, the tumor boundary is manually delineated and verified by radiologists. We have plotted 16 random samples from the three classes with tumor borders depicted in red, as shown in Fig. 1. These 2D slices of T1-weighted images train standard deep CNNs for a 3-class classification task into Glioma, Meningioma, and Pituitary tumors. The input to each model is a \(\mathbb {R}^{225\times 225\times 1}\) tensor, a resized version of the original \(\mathbb {R}^{512\times 512}\) image slices, primarily for computational reasons. Unlike the brain tumor MRI dataset, which comes with segmentation masks from experts in the field, the COVID-19 X-ray dataset [ 34 ] used in this work has no ground-truth segmentation masks. This was chosen as an edge-case analysis because the vast majority of datasets do not have segmentation masks. This dataset was curated from multiple international COVID-19 X-ray testing facilities during several periods. The dataset has an unbalanced distribution across four classes: of the 19,820 total images, 48.2% are normal X-ray images, 28.4% are cases with lung opacity, 17.1% are COVID-19 patients, and 6.4% are patients with viral pneumonia. This unbalanced nature of the dataset brings its own classification challenges, which has prompted several researchers to apply DL methods to it. Of the four classes, for consistency with the other dataset used in this work, we choose to classify three classes (i.e., Normal, Lung Opacity, and COVID-19). For an in-depth discussion of works that deal with this dataset, we refer to [ 35 ]. Figure 2 shows 16 randomly selected samples. Table 1 summarizes the two datasets.

Fig. 1: MRI scans of various brain tumors with annotated tumor regions. MRI images of different brain tumor types are shown, with the tumor region boundaries highlighted in red. The tumor types include pituitary tumors, gliomas, and meningiomas. Each image presents a different view (axial, sagittal, or coronal) of the brain, illustrating the diversity in tumor appearance and location.

Fig. 2: Sample chest X-ray images from the dataset used in this study, labeled with their respective conditions (Normal, Lung Opacity, and COVID-19). The dataset was curated from multiple international COVID-19 X-ray testing centers during several periods. The diversity in conditions showcases the varying features that the models need to identify for accurate classification.
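As a concrete illustration of the preprocessing implied by the dataset description above (512×512 slices resized to 225×225×1 inputs), here is a small sketch using TensorFlow. The file format, paths, and intensity scaling are assumptions made for the example, not details taken from the dataset releases.

```python
# Hedged preprocessing sketch: read a grayscale slice and resize it to the
# model's input resolution. PNG exports and [0, 1] scaling are assumptions.
import tensorflow as tf

def load_slice(path, target_size=(225, 225)):
    raw = tf.io.read_file(path)
    img = tf.io.decode_png(raw, channels=1)        # assumes PNG exports of the slices
    img = tf.image.resize(img, target_size)        # 512x512 -> 225x225
    return tf.cast(img, tf.float32) / 255.0        # scale intensities to [0, 1]

# Hypothetical usage with lists of file paths and integer labels:
# ds = (tf.data.Dataset.from_tensor_slices((paths, labels))
#         .map(lambda p, y: (load_slice(p), y))
#         .shuffle(1024)
#         .batch(32))
```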

Deep learning architectures

We use 9 standard CNN architectures: the Visual Geometry Group networks (VGG16 and VGG19) [ 7 ], Deep Residual Networks (ResNet50, ResNet50V2) [ 4 ], Densely Connected Convolutional Networks (DenseNet) [ 36 ], depthwise-separable convolutions (Xception) [ 5 ], Inception [ 37 ], a hybrid Inception-ResNet (InceptionResNetV2), and EfficientNet [ 38 ] for classifying COVID-19 X-ray images and brain tumors from the T1-weighted MRI slices. These models were chosen because they are modern architectures widely used in vision tasks and, by extension, in medical image feature extraction for prediction.
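The sketch below shows one plausible way to assemble such a backbone in Keras for the 3-class task. The grayscale-to-3-channel trick, pooling head, dropout rate, optimizer, and loss are assumptions made for illustration; the paper's actual hyperparameters are listed in its Appendix 1 and may differ.

```python
# Hedged sketch of a transfer-learning classifier built on one of the listed
# backbones (InceptionResNetV2); configuration choices here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(225, 225, 1), n_classes=3):
    # Repeat the grayscale slice across 3 channels so an ImageNet backbone accepts it
    inputs = layers.Input(shape=input_shape)
    x = layers.Concatenate()([inputs, inputs, inputs])
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(input_shape[0], input_shape[1], 3))
    x = backbone(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)                       # assumed regularization
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
```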

Image-based saliency methods and proposed framework

To facilitate the explainability of model inference mechanisms, which is crucial for building trust in clinical applications of DL-based CAD systems, we have investigated a variety of saliency methods. These saliency methods are integrated into the proposed framework, depicted in Fig.  3 . According to [ 39 ], effective attribution methods must satisfy the fundamental axioms of Sensitivity and Implementation Invariance . All selected saliency methods in this study adhere to these axioms.

figure 3

An illustration of model development and explainability pipeline for a path-based saliency method. A dataset of m samples say T1-weighted contrast-enhanced image slices, for example, is the input to a standard CNN classification model depicted in the figure as \(h(\cdot )\) that learns the non-linear mapping of the features to the output labels. \(h(\cdot )\) is utilized with an attribution operator \(A_h\) to attribute salient features \(\hat{\textbf{x}}\) of the input image. \(A_h\) is an operator that can be used with varied different architectures. This proposed framework is general and can be applied to any problem instances where explainability is vital

The saliency methods evaluated include both gradient-based and gradient-free techniques. Adaptive path-based integrated gradients (APMs), which are gradient-based, are useful in reducing noise in attribution maps, which is critical for medical imaging diagnostics. Gradient-free techniques do not rely on model gradients, making them suitable for non-differentiable models or scenarios where gradients are noisy. Class Activation Mapping (CAM) and its derivatives are effective in highlighting high-level activations for visual localization, providing clear insights into decision-making processes. Each method’s distinct characteristics justify their inclusion and comparison in this study, aimed at enhancing diagnostic and patient outcomes in medical imaging.

The specific saliency methods employed in this study include several prominent techniques. Vanilla Gradient [ 40 ] computes the gradient of the output with respect to the input image, highlighting the most influential pixels for the target class prediction. Integrated Gradients (IG)[ 39 ], which are gradient-based, attribute the model’s prediction to its input features by integrating the gradients along the path from a baseline to the input image. SmoothGrad IG [ 41 ] enhances IG by averaging the gradients of multiple noisy copies of the input image, thus reducing visual noise in the saliency maps. Guided Integrated Gradient (GIG) [ 42 ] refines IG further by guiding the gradients to produce less noisy and more interpretable saliency maps. eXplanation with Ranked Area Integrals (XRAI) [ 43 ] generates region-based attributions by ranking areas based on their contribution to the prediction, providing a more holistic view of important regions. GradCAM [ 21 ] uses the gradients of the target class flowing into the final convolutional layer to produce a coarse localization map of important regions in the image. GradCAM++ [ 44 ] improves upon GradCAM by providing better localization by considering the importance of each neuron in the last convolutional layer. ScoreCAM [ 45 ], unlike gradient-based methods, uses the model’s confidence scores to weigh the importance of each activation map, potentially leading to more accurate and less noisy explanations.
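To ground these descriptions, here is a minimal sketch of the simplest of the listed techniques, Vanilla Gradient, written against a generic Keras classifier. The model handle, input shape, and channel-collapsing step are assumptions for the example, not the exact implementation used in the study.

```python
# Minimal Vanilla Gradient attribution: gradient of the target class score
# with respect to the input pixels. `model` and a preprocessed input slice
# are assumed to exist (e.g. from the earlier sketches).
import numpy as np
import tensorflow as tf

def vanilla_gradient_saliency(model, image, class_index):
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        predictions = model(x, training=False)
        class_score = predictions[:, class_index]
    grads = tape.gradient(class_score, x)                 # same shape as the input
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]   # collapse channel axis
    return saliency.numpy()

# Hypothetical usage with a 225x225x1 slice:
# saliency_map = vanilla_gradient_saliency(model, slice_2d, class_index=1)
```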

These methods are integrated into the proposed framework to analyze the attribution of salient features in medical images. As shown in Fig.  3 , a dataset of m samples is input into a standard CNN classification model. The model, represented as \(h(\cdot )\) , learns the non-linear mapping of features to output labels. The trained model is then utilized together with an attribution operator \(A_h\) , which could be any of the saliency methods, to attribute salient features \(\hat{\textbf{x}}\) of the input image. This operator \(A_h\) is versatile and can be applied to any problem where explainability is essential for building trust in the model’s inference mechanism.

Quantitative and empirical assessment of saliency methods

In this work, we adapted and applied empirical methods from Kapishnikov et al. (2021) [ 42 ] for evaluating saliency frameworks in the field of medical image analysis, making slight adjustments to the image entropy calculation. Our adaptation maintained the core approach of using saliency methods to attribute importance to regions within medical images while tailoring them to meet the specific demands of medical imaging.

Our method for estimating image entropy involves computing the Shannon entropy of the image histogram. We begin by deriving the histogram of the original image with 256 bins and density normalization, and then compute the entropy as shown in Equation 1:

\(H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i\)   (1)

where \(H(X)\) represents the entropy of the image \(X\), \(p_i\) is the probability of occurrence of each intensity level \(i\) in the image histogram, and \(n\) is the total number of intensity levels (256 in our case). In contrast, their method estimates image entropy by determining the file size of the image after lossless compression and uses the buffer length as a proxy for entropy. While both approaches aim to gauge the information content of an image, ours relies on the pixel intensity distribution, while theirs assesses file size post-compression.
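A minimal sketch of this histogram-based entropy estimate is given below, assuming a single-channel image stored as a NumPy array; it mirrors the 256-bin, density-normalized histogram described above but is an illustration, not the authors' exact code.

```python
# Shannon entropy (in bits) of an image's intensity histogram, per Equation 1.
import numpy as np

def image_entropy(image, n_bins=256):
    hist, _ = np.histogram(image, bins=n_bins, density=True)
    p = hist / hist.sum()          # normalize densities into probabilities p_i
    p = p[p > 0]                   # skip empty bins, treating 0 * log(0) as 0
    return float(-np.sum(p * np.log2(p)))
```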

Our approach provides a direct measure of the information content inherent in the pixel intensity distribution, capturing the relative importance of different intensity levels and offering a comprehensive understanding of the image’s complexity. In contrast, using file size post-compression as a proxy for entropy may not fully capture the nuances of the image’s content. By focusing on pixel intensity distribution, our approach offers a more intrinsic and nuanced measure of image information content, particularly crucial for tasks such as medical image analysis or pattern recognition.

This evaluation framework entails initiating the process with a completely blurred version of the medical image and incrementally reintroducing pixels identified as significant by the saliency method. We then measure the resulting image’s entropy and conduct classification tasks to correlate the model’s performance, such as accuracy, with the calculated entropy or information level for each medical image, resulting in Performance Information Curves (PICs). Thus, two variants of PICs were introduced – Accuracy Information Curve (AIC) and Softmax Information Curve (SIC) – to provide a more nuanced evaluation of the saliency methods’ effectiveness.
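The sketch below illustrates one way the blur-and-reveal procedure behind the PICs could be implemented, reusing the image_entropy helper from the previous sketch. The blur strength, reveal fractions, and the assumption that a trained Keras model and a saliency map are already available are illustrative choices, not the study's exact protocol.

```python
# Hedged sketch of the performance-information-curve procedure: pixels ranked
# by the saliency map are progressively restored into a blurred copy of the
# image, and the model's output is recorded alongside the image entropy.
import numpy as np
from scipy.ndimage import gaussian_filter

def information_curve_points(model, image, label, saliency_map,
                             fractions=np.linspace(0.0, 1.0, 11)):
    """image: 2D array (H, W); saliency_map: 2D array of the same shape."""
    blurred = gaussian_filter(image, sigma=8)             # fully blurred baseline
    order = np.argsort(saliency_map.ravel())[::-1]        # most salient pixels first
    points = []
    for frac in fractions:
        k = int(frac * order.size)
        mixed = blurred.ravel().copy()
        mixed[order[:k]] = image.ravel()[order[:k]]        # reveal the top-k pixels
        mixed = mixed.reshape(image.shape)
        batch = mixed[np.newaxis, ..., np.newaxis]         # -> (1, H, W, 1)
        probs = model.predict(batch, verbose=0)[0]
        points.append({
            "entropy": image_entropy(mixed),               # helper from the sketch above
            "correct": int(np.argmax(probs) == label),     # contributes to the AIC
            "softmax": float(probs[label]),                # contributes to the SIC
        })
    return points
```

Averaging the "correct" and "softmax" values across images, binned by entropy, would then yield AIC- and SIC-style curves.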

Experimental setup

We conducted all experiments on Nvidia Quadro RTX 8000 hardware, leveraging its robust computational capabilities to handle the extensive DL training processes. For the implementation, we used the Keras API with the TensorFlow backend, enabling efficient and flexible development of the CNNs.

Results

In this section, we present a comprehensive analysis of our experimental findings, structured around three key questions: (i) How good are these models on standard classification performance metrics? (ii) How visually explainable are the studied image-based saliency methods? (iii) How empirically comparable are image-based saliency methods?

How good are these models on standard classification performance metrics?

We evaluated the performance of the 9 DL model architectures on classification tasks using standard metrics such as F1 score and confusion matrices as depicted in Figs. 4 and 5 . Appendix 1 shows the optimal hyperparameters for training the DL models. The results provide insights into the effectiveness of each model in terms of classification accuracy and error distribution.
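For reference, metrics of this kind can be computed with scikit-learn as in the brief sketch below; the label arrays and the weighted averaging scheme are placeholders and assumptions, not results or settings from the study.

```python
# Small sketch of computing an F1 score and a confusion matrix for a 3-class task.
import numpy as np
from sklearn.metrics import f1_score, confusion_matrix

y_true = np.array([0, 1, 2, 1, 0, 2, 1])   # placeholder test-set labels
y_pred = np.array([0, 1, 2, 0, 0, 2, 1])   # placeholder model predictions

print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2]))
```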

Fig. 4: F1 scores (top panel) for each model, compared to assess their accuracy and robustness in classifying brain tumors into three categories: Meningioma, Glioma, and Pituitary tumor. The bottom panel shows the confusion matrix for the top-performing model, InceptionResNetV2.

The performance of various DL models on brain tumor MRI classification is illustrated in Fig. 4. Figure 4 (top panel) presents the F1 scores of the DL model architectures evaluated on the brain MRI test-set classification task. The F1 scores for these models range from 0.76 to 0.95. The InceptionResNetV2 model achieves the highest F1 score of 0.95, indicating superior performance in accurately classifying brain tumors. EfficientNetB0, on the other hand, scores the lowest with an F1 score of 0.76, showing relatively lower performance than the other models. Figure 4 (bottom panel) shows the confusion matrix for the top-performing model, InceptionResNetV2, which displays the number of correctly and incorrectly classified cases for different types of brain tumors. The matrix shows that out of the 72 cases of Meningioma, 69 cases are correctly predicted, 1 case is misclassified as Glioma, and 2 cases are misclassified as Pituitary tumor. Out of the 143 cases of Glioma, 133 cases are correctly predicted, 10 cases are misclassified as Meningioma, and no case is misclassified as Pituitary tumor. Out of the 92 Pituitary tumor cases, 91 cases are correctly predicted, 1 case is misclassified as Glioma, and no case is misclassified as Meningioma. This detailed breakdown demonstrates the model’s effectiveness in correctly identifying the majority of cases while highlighting specific areas where misclassifications occur, particularly in distinguishing between Meningioma and Glioma.

Figure 5 shows the performance comparison of different model architectures for COVID-19 X-ray image classification. The models were evaluated based on their ability to classify images into Normal, Lung Opacity, and COVID-19 categories. Figure 5 (top panel) shows the F1 scores of various DL model architectures evaluated for COVID-19 classification. The F1 scores range from 0.87 to 0.89. The models perform consistently well, with minimal variation in F1 scores. Figure 5 (bottom panel) shows the confusion matrix for the Xception model and provides a detailed view of its classification performance for chest X-ray images. The matrix shows that out of the 308 Lung opacity cases, 247 cases are correctly predicted, 1 case is misclassified as COVID-19, and 60 cases are misclassified as Normal. Out of the 19 COVID-19 cases, 7 cases are correctly predicted, 5 cases are misclassified as Lung opacity, and 7 cases are misclassified as Normal. Out of the 651 Normal cases, 621 cases are correctly predicted, no case is misclassified as COVID-19, and 30 cases are misclassified as Lung opacity. This confusion matrix highlights the Xception model’s strengths and weaknesses in COVID-19 classification. While it correctly identifies a large number of cases, there are notable misclassifications, particularly with Lung opacity being misclassified as Normal in 60 instances.

Fig. 5: F1 scores (top panel) for each model, compared to assess their accuracy and robustness in classifying chest X-ray images into three categories: Normal, Lung Opacity, and COVID-19. The bottom panel shows the confusion matrix for the top-performing model, Xception.

The results from the F1 scores and confusion matrices demonstrate the effectiveness of various DL architectures in medical image classification tasks. InceptionResNetV2 consistently outperforms other models in brain tumor classification, achieving the highest F1 score and demonstrating excellent accuracy. The detailed confusion matrix for InceptionResNetV2 reveals minimal misclassifications, underscoring its reliability. The performance of models on the COVID-19 X-ray dataset shows high F1 scores across different architectures, with models like Xception also performing exceptionally well. The confusion matrix for Xception indicates strong classification capabilities, although some misclassifications are present, particularly in distinguishing between Lung opacity and Normal. These results underscore the importance of selecting appropriate model architectures for specific medical image classification tasks. The high F1 scores and detailed confusion matrices provide valuable insights into each model’s strengths and areas for improvement. However, the focus of this study is not to beat SoTA performance but to provide a basis for investigating the chosen saliency methods. Therefore, the top-performing models, InceptionResNetV2 for brain tumor classification and Xception for COVID-19 classification, will serve as the basis for the further analysis in the sections “How visually explainable are image-based saliency methods?” and “How empirically comparable are image-based saliency methods?”.

How visually explainable are image-based saliency methods?

Figure  6 presents the visualization of feature attributions for brain tumor classification using our proposed framework and various explainability methods applied to the Inception-ResNetV2 model. The attribution maps provide insights into the regions of the input images that significantly influence the model’s predictions for three types of brain tumors: Glioma, Meningioma, and Pituitary Tumor. The top row represents the input image with ground-truth tumor boundaries, and the other rows are attribution maps produced by each method.
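As a concrete reference for one of the path-based methods shown in Fig. 6, the following is a minimal sketch of vanilla Integrated Gradients (Sundararajan et al., 2017) for a tf.keras classifier. It is not the authors' implementation; `model` is assumed to be the trained network, `image` a preprocessed input of shape (H, W, C), and the black-image baseline and step count are illustrative choices.

```python
import tensorflow as tf

def integrated_gradients(model, image, target_class, steps=64, baseline=None):
    image = tf.convert_to_tensor(image, tf.float32)
    if baseline is None:
        baseline = tf.zeros_like(image)              # black-image baseline (assumption)
    alphas = tf.linspace(0.0, 1.0, steps + 1)        # interpolation coefficients
    # Straight-line path from the baseline to the input image.
    path = baseline[None] + alphas[:, None, None, None] * (image - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path)[:, target_class]
    grads = tape.gradient(scores, path)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)  # trapezoidal rule
    return ((image - baseline) * avg_grads).numpy()  # attribution map, same shape as the image
```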

Figure 6. Visualization of feature attributions for brain tumor classification using various explainability methods for the best-performing model, Inception-ResNetV2. This figure displays the feature attribution maps generated by different explainability techniques for the model on three types of brain tumors: Glioma, Meningioma, and Pituitary Tumor. The columns represent the input image with ground-truth tumor boundaries followed by the attribution maps produced by each method. From visual inspection, Fast XRAI 30% and ScoreCAM outperform other methods. For Glioma, ScoreCAM effectively focuses on the tumor regions. For Meningioma, ScoreCAM highlights some tumor regions, though the heatmap shows three regions instead of the actual two. Most other methods, except GradCAM++ for Glioma, generate coarse and noisy saliency maps, particularly Vanilla Gradient and SmoothGrad. Path-integration methods tend to be more susceptible to image edges compared to GradCAM, GradCAM++, and ScoreCAM methods.

From visual inspection, Fast XRAI 30% and ScoreCAM outperform other methods. For Glioma, ScoreCAM effectively focuses on the tumor regions, providing clear and accurate attributions. For Meningioma, ScoreCAM highlights some tumor regions, although the heatmap shows three regions instead of the actual two. Other methods, such as Vanilla Gradient and SmoothGrad, produce coarse and noisy saliency maps. GradCAM and GradCAM++ generate more focused heatmaps but are still less precise than ScoreCAM. Path-integration methods, like Integrated Gradients, are more susceptible to highlighting image edges rather than the tumor regions, reducing their clinical explainability.
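The class-activation-mapping methods referred to above can also be sketched compactly. Below is a hedged Grad-CAM implementation (Selvaraju et al., 2017) for a tf.keras model such as InceptionResNetV2; it is illustrative rather than the authors' code. The default layer name 'conv_7b_ac' is, to the best of our knowledge, the final convolutional activation in Keras' InceptionResNetV2, but it should be confirmed with model.summary().

```python
import tensorflow as tf

def grad_cam(model, image, target_class, conv_layer_name="conv_7b_ac"):
    # Map the input to (last conv feature maps, class predictions).
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    image = tf.convert_to_tensor(image, tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None])     # add a batch dimension
        class_score = preds[:, target_class]
    grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                             # keep positive evidence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)           # normalize to [0, 1]
    return cam.numpy()                                # coarse map; upsample to input size for display
```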

Figure  7 illustrates our proposed framework and application of various explainability methods on chest X-ray images for differentiating between Normal, Lung Opacity, and COVID-19 cases using the Xception model. The figure includes input X-ray images in the first row, followed by the attribution maps generated by different explainability methods. GradCAM, GradCAM++, and ScoreCAM tend to produce more focused and clinically explainable heatmaps, accurately highlighting relevant regions such as lung abnormalities. Other methods, like Vanilla Gradient and SmoothGrad, show more dispersed activations, making it challenging to interpret the model’s focus. XRAI and Fast XRAI provide region-based explanations that are intermediate, balancing between detailed local features and broader regions of interest.
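For the gradient-based maps discussed here, SmoothGrad is a one-loop modification of the vanilla gradient: average the gradients over noisy copies of the input. The sketch below is a hedged illustration for a tf.keras classifier such as Xception; `model`, `image`, the noise level, and the sample count are assumptions, not the authors' settings.

```python
import tensorflow as tf

def smoothgrad(model, image, target_class, n_samples=25, noise_level=0.15):
    image = tf.convert_to_tensor(image, tf.float32)
    sigma = noise_level * (tf.reduce_max(image) - tf.reduce_min(image))
    total = tf.zeros_like(image)
    for _ in range(n_samples):
        noisy = image + tf.random.normal(tf.shape(image), stddev=sigma)
        with tf.GradientTape() as tape:
            tape.watch(noisy)
            score = model(noisy[None])[:, target_class]
        total += tape.gradient(score, noisy)          # vanilla gradient of a noisy copy
    return (total / n_samples).numpy()                # smoothed saliency map, same shape as the input
```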

Figure 7. Comparison of various explainability methods applied to chest X-ray images for distinguishing between Normal, Lung Opacity, and COVID-19 cases. The figure includes the input X-ray images in the first column, followed by visualization results from different explainability methods across the subsequent columns. For each condition (Normal, Lung Opacity, and COVID-19), the visualization techniques highlight different regions of the X-ray images that contribute to the model's decision-making process. GradCAM, GradCAM++, and ScoreCAM methods tend to produce more focused and clinically interpretable heatmaps, while other methods show more dispersed activations. XRAI and Fast XRAI provide region-based explanations that are intermediate. Unlike the brain tumor dataset, this dataset does not have ground-truth biomarkers.

The comparison of these saliency methods on the two datasets reveals the strengths and limitations of each technique in providing visual explanations. The presence of ground-truth biomarkers in the brain tumor dataset allows for a more nuanced assessment of the methods’ accuracy, whereas the COVID-19 dataset lacks such markers, relying on visual plausibility for evaluation. Overall, the findings suggest that methods like ScoreCAM, XRAI, GradCAM, and GradCAM++ offer more precise and clinically useful explanations, which are crucial for enhancing the transparency and trustworthiness of DL models in medical applications.
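Since ScoreCAM repeatedly emerges among the stronger methods in this comparison, a simplified sketch of its core idea may be useful: it is gradient-free and weights each activation map by the class score obtained when the input is masked with that (upsampled, normalized) map. The code below is an illustrative reduction of Wang et al. (2020), not the authors' implementation; the layer name and the channel cap are assumptions.

```python
import tensorflow as tf

def score_cam(model, image, target_class, conv_layer_name, max_channels=64):
    conv_model = tf.keras.Model(model.inputs, model.get_layer(conv_layer_name).output)
    image = tf.convert_to_tensor(image, tf.float32)
    acts = conv_model(image[None])[0]                         # (h, w, channels)
    h, w = image.shape[0], image.shape[1]
    cam = tf.zeros(acts.shape[:2])
    for k in range(min(max_channels, acts.shape[-1])):        # cap channels for speed (assumption)
        a = acts[..., k]
        mask = tf.image.resize(a[..., None], (h, w))[..., 0]  # upsample to input size
        rng = tf.reduce_max(mask) - tf.reduce_min(mask)
        if rng == 0:
            continue                                          # skip flat activation maps
        mask = (mask - tf.reduce_min(mask)) / rng             # normalize to [0, 1]
        masked = image * mask[..., None]                      # broadcast over channels
        score = model(masked[None])[0, target_class]          # class score for the masked input
        cam += score * a
    return tf.nn.relu(cam).numpy()                            # coarse map; upsample for display
```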

How empirically comparable are image-based saliency methods?

While visual explanations provide valuable qualitative insights, it is crucial to quantitatively evaluate the effectiveness of different saliency methods. In this section, we empirically compare these methods using PICs, specifically AICs and SICs. These metrics allow us to objectively assess the correlation between the saliency map intensity and the model’s predictions, providing a comprehensive understanding of each method’s performance.
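To make the metrics concrete, the following is a simplified stand-in for a performance-information curve in the spirit of the AIC/SIC metrics of Kapishnikov et al. (2021): starting from a heavily blurred copy of the image, the most salient pixels are progressively revealed and the model's class score is tracked, and the area under the resulting curve summarizes the method. The real metrics additionally bin curves by an estimate of image information content; this sketch only illustrates the reveal-and-score loop, and `model`, `image`, and `saliency_map` are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def information_curve(model, image, saliency_map, target_class,
                      fractions=np.linspace(0.0, 1.0, 11)):
    # image assumed (H, W, C); saliency_map assumed (H, W).
    blurred = gaussian_filter(image, sigma=(8, 8, 0))        # "uninformative" starting point
    order = np.argsort(saliency_map.ravel())[::-1]           # pixel indices, most salient first
    scores = []
    for frac in fractions:
        keep = np.zeros(saliency_map.size, dtype=bool)
        keep[order[: int(frac * order.size)]] = True         # reveal the top-frac pixels
        keep = keep.reshape(saliency_map.shape)[..., None]   # broadcast over channels
        composite = np.where(keep, image, blurred)
        scores.append(float(model.predict(composite[None], verbose=0)[0, target_class]))
    scores = np.array(scores)
    return fractions, scores, np.trapz(scores, fractions)    # curve and its AUC
```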

In Fig. 8, we present the aggregated AICs, computed over more than 1200 data points, for the various saliency methods applied to brain tumor MRI classification. The AUC values indicate how effective each method is at retaining the image information needed for accurate classification. ScoreCAM achieves the highest AUC of 0.084, followed by XRAI at 0.033, suggesting that these methods are more effective in highlighting regions relevant to the model's predictions. In contrast, Guided IG, Vanilla IG, SmoothGrad IG, GradCAM, and GradCAM++ show minimal to zero AUC values, indicating limited effectiveness. These empirical results align with our visual inspection findings, where ScoreCAM and XRAI also provided clearer and more accurate attributions.

Figure 8. Aggregated AICs for evaluating the effectiveness of different saliency methods in attributing importance to regions of Brain Tumor MRI images for classification. The plot shows the prediction score as a function of the fraction of the image retained after reintroducing pixels identified as important by each saliency method. The area under the curve (AUC) values are provided for each method, indicating their performance in retaining critical image information necessary for accurate classification. ScoreCAM demonstrates the highest AUC of 0.084, suggesting it retains the most relevant image regions effectively, followed by XRAI with an AUC of 0.033. Other methods, including Guided IG, Vanilla IG, SmoothGrad IG, GradCAM, and GradCAM++, show minimal to zero AUC values, indicating limited effectiveness in this evaluation.

Figure 9 illustrates the aggregated SICs, computed over more than 1300 samples of the brain tumor MRI dataset. The SIC evaluates how well the saliency methods identify regions that contribute to the model's class probabilities. Surprisingly, the Random saliency mask shows the highest AUC of 0.705, followed by ScoreCAM (0.579), XRAI (0.574), and Guided IG (0.536). This anomaly indicates that the Random saliency mask may retain some critical regions by chance, emphasizing the need for careful interpretation of this metric. While Guided IG and ScoreCAM perform well, their AUC values suggest only moderately effective attributions. These findings partly contrast with our visual evaluations and the AICs, where ScoreCAM was a top performer, highlighting the importance of combining visual and empirical assessments for a holistic understanding.
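The random control referenced above is straightforward to reproduce in this simplified setting: a saliency map of uniform random values is scored with the same curve routine, and any method whose AUC does not clearly exceed this control carries little signal under the metric. The snippet assumes the `information_curve` sketch above and a 224x224 model input, both of which are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
random_saliency = rng.random((224, 224))   # assumed input resolution
# fractions, scores, auc = information_curve(model, image, random_saliency, target_class)
```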

Figure 9. Aggregated SICs for evaluating the effectiveness of different saliency methods in attributing importance to regions of Brain Tumor MRI images. The plot shows the prediction score as a function of the fraction of the image retained after reintroducing pixels identified as significant by each saliency method. The AUC values are provided for each method, indicating their performance in retaining critical image information necessary for accurate classification. Random saliency mask, surprisingly, exhibits the highest AUC of 0.705, followed by ScoreCAM (AUC=0.579), XRAI (AUC=0.574), and Guided IG (AUC=0.536). GradCAM, GradCAM++, Vanilla IG, and SmoothGrad IG show lower AUC values, indicating less effectiveness. This analysis highlights the variability in performance among different saliency methods when applied to medical image analysis, with the Random saliency mask unexpectedly showing the highest effectiveness under this specific evaluation criterion, which indicates the instability of this metric.

In Fig. 10, we evaluate the performance of various saliency methods on the chest X-ray classification task using the aggregated AIC. XRAI shows a noticeable deviation from the baseline, with an AUC of 0.055, indicating some effectiveness in identifying relevant regions. Other methods, including ScoreCAM, Guided IG, and Vanilla IG, closely follow the random baseline with AUC values of 0.000, suggesting limited effectiveness in this context. This observation is consistent with our visual inspection, where methods like ScoreCAM and XRAI provided intermediate-level explanations compared to others.

Figure 10. Aggregated AICs evaluating the performance of various saliency attribution methods on the chest X-ray image classification problem. The x-axis represents the fraction of the original image retained based on the saliency maps generated by each method. The y-axis shows the corresponding prediction score or accuracy. The curve for XRAI (AUC=0.055) deviates slightly from the baselines, indicating a minimal ability to identify relevant image regions for the classification task. Other methods, including ScoreCAM, Guided IG, GradCAM, and Vanilla IG, show negligible scores with an AUC of 0.000. This plot highlights the limited efficacy of these saliency techniques in attributing importance to salient regions within medical images for model explainability in this specific evaluation.

Figure 11 shows the aggregated SICs for chest X-ray classification. Guided IG achieves the highest AUC of 0.735, outperforming the random mask (0.683), Vanilla IG (0.711), and SmoothGrad IG (0.639). This suggests that Guided IG is particularly effective in highlighting regions that influence the model's class probabilities. The performance of XRAI, GradCAM, GradCAM++, and ScoreCAM is moderate, with lower AUC values (0.610, 0.594, 0.493, and 0.491, respectively), indicating less effective saliency attribution compared to Guided IG. These empirical results, similar to those for the brain tumor dataset, do not align with our visual analysis and the AICs, where methods like XRAI, GradCAM, GradCAM++, and ScoreCAM provided more focused and explainable heatmaps. Thus, this metric should be used cautiously when evaluating saliency methods on a given dataset.

Figure 11. Aggregated SICs comparing the performance of various saliency methods on the chest X-ray image classification task. The x-axis represents the fraction of the image retained based on the saliency maps, and the y-axis denotes the corresponding prediction score. The guided integrated gradients (Guided IG) method achieves the highest AUC of 0.735, outperforming the random mask (AUC=0.683), vanilla integrated gradients (Vanilla IG, AUC=0.711), SmoothGrad integrated gradients (SmoothGrad IG, AUC=0.639), and other saliency methods like XRAI (AUC=0.610), GradCAM (AUC=0.594), GradCAM++ (AUC=0.493), and ScoreCAM (AUC=0.491).

In summary, the empirical evaluation using AICs closely aligns with the visual results. However, SICs highlight the variability in performance among different saliency methods, with instances of a random mask outperforming established saliency methods. While our visual inspections revealed clear strengths for methods like ScoreCAM and GradCAM++, the empirical metrics provide a nuanced understanding of each method’s effectiveness in retaining and highlighting relevant image regions. By combining visual and empirical analyses, we ensure a robust evaluation of saliency methods, enhancing their applicability in clinical settings.

Further analysis results are included in Appendix 2. We present a saliency analysis of the second- and third-best models for each dataset. Additionally, AICs and SICs based on the entropy method from Kapishnikov et al. (2021) are provided in the Appendix 2 "Buffer-size-based AICs and SICs evaluations" section. We also explore varied blurred versions of the top-performing saliency methods and their scores in the Appendix 2 "Computed saliency scores for top performing models for each image-based saliency method" section.

Conclusion

In this study, we proposed a saliency-based attribution framework and assessed various state-of-the-art saliency methods for enhancing the explainability of DL models in medical image analysis, focusing on brain tumor classification using MRI scans and COVID-19 detection using chest X-ray images. Both qualitative and quantitative evaluations provided insights into these methods' utility in clinical settings.

Qualitative assessments showed that ScoreCAM, XRAI, GradCAM, and GradCAM++ consistently produced focused and clinically interpretable attribution maps. These methods highlighted relevant regions that aligned with known anatomical structures and disease markers, thereby enhancing model transparency and trustworthiness.

This study is the first to use AICs and SICs to quantitatively evaluate these saliency methods for medical image analysis. The AICs confirmed that ScoreCAM and XRAI effectively retained critical image information, while SICs revealed variability, with random saliency masks sometimes outperforming established methods. This underscores the need for combining qualitative and quantitative metrics for a comprehensive evaluation. Our results highlight the importance of selecting appropriate saliency methods for specific tasks. While visual explanations are valuable, empirical metrics offer a nuanced understanding of each method’s effectiveness. Combining these approaches ensures robust assessments, fostering greater trust and adoption of DL models in clinical settings.

Future research should refine empirical metrics for stability and reliability across different models and datasets, include more diverse imaging modalities, and focus on enhancing model explainability to support clinical decision-making.

Availability of data and materials

This research used the brain tumor dataset from the School of Biomedical Engineering, Southern Medical University, Guangzhou, which contains 3064 T1-weighted contrast-enhanced images of three kinds of brain tumors. The data are publicly available at the Brain Tumor Dataset. The chest X-ray dataset is publicly available at the Chest X-Ray Images (Pneumonia) Dataset.
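For orientation, the sketch below shows one way to read a single slice of the figshare brain tumor dataset (Cheng, 2017). The .mat files are MATLAB v7.3 (HDF5) and, as far as we are aware, store the image, the integer label (1 = meningioma, 2 = glioma, 3 = pituitary), and the tumor mask under a 'cjdata' group; the keys and the normalization step are assumptions to verify against the downloaded files.

```python
import h5py
import numpy as np

def load_slice(path):
    with h5py.File(path, "r") as f:
        image = np.array(f["cjdata"]["image"], dtype=np.float32)
        label = int(np.array(f["cjdata"]["label"]).squeeze())    # assumed: 1, 2, or 3
        tumor_mask = np.array(f["cjdata"]["tumorMask"], dtype=np.uint8)
    # Scale intensities to [0, 1] before resizing to the network input size.
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return image, label, tumor_mask
```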

Code availability

The code is available at XAIBiomedical for reproducibility.

References

Giuste F, Shi W, Zhu Y, Naren T, Isgut M, Sha Y, et al. Explainable artificial intelligence methods in combating pandemics: A systematic review. IEEE Rev Biomed Eng. 2022;16:5–21.


Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.


Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–6.

He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. pp. 770–8.

Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. pp. 1251–8.

Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;25.  https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html .

Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014. arXiv preprint arXiv:1409.1556. https://arxiv.org/abs/1409.1556.

Murtaza G, Shuib L, Abdul Wahab AW, Mujtaba G, Nweke HF, Al-garadi MA, et al. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artif Intell Rev. 2020;53(3):1655–720.

Reyes M, Meier R, Pereira S, Silva CA, Dahlweid FM, Tengg-Kobligk HV, et al. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiol Artif Intell. 2020;2(3):e190043.


Koh PW, Nguyen T, Tang YS, Mussmann S, Pierson E, Kim B, Liang P. Concept bottleneck models. International Conference on Machine Learning. Vienna: PMLR; 2020. pp. 5338–48.

Sabour S, Frosst N, Hinton GE. Dynamic routing between capsules. Adv Neural Inf Process Syst. 2017;30.

Shen S, Han SX, Aberle DR, Bui AA, Hsu W. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. Expert Syst Appl. 2019;128:84–95.

Bass C, da Silva M, Sudre C, Tudosiu PD, Smith S, Robinson E. Icam: Interpretable classification via disentangled representations and feature attribution mapping. Adv Neural Inf Process Syst. 2020;33:7697–709.


Kim E, Kim S, Seo M, Yoon S. XProtoNet: diagnosis in chest radiography with global and local explanations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021. pp. 15719–28.

Li O, Liu H, Chen C, Rudin C. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In: Proceedings of the AAAI Conference on Artificial Intelligence; 2018.

Baumgartner CF, Koch LM, Tezcan KC, Ang JX, Konukoglu E. Visual feature attribution using Wasserstein GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2018. pp. 8309–19.

Cohen JP, Brooks R, En S, Zucker E, Pareek A, Lungren MP, et al. Gifsplanation via latent shift: a simple autoencoder approach to counterfactual generation for chest x-rays. In: Medical Imaging with Deep Learning. PMLR; 2021. pp. 74–104.

Lenis D, Major D, Wimmer M, Berg A, Sluiter G, Bühler K. Domain aware medical image classifier interpretation by counterfactual impact analysis. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2020. pp. 315–25.

Schutte K, Moindrot O, Hérent P, Schiratti JB, Jégou S. Using StyleGAN for visual interpretability of deep learning models on medical images. 2021. arXiv preprint arXiv:2101.07563.

Seah JC, Tang JS, Kitchen A, Gaillard F, Dixon AF. Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology. 2019;290(2):514–22.

Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV); 2017. pp. 618–26.

Simonyan K, Vedaldi A, Zisserman A. Deep inside convolutional networks: Visualising image classification models and saliency maps. 2013. arXiv preprint arXiv:1312.6034.

Singla S, Pollack B, Wallace S, Batmanghelich K. Explaining the black-box smoothly: a counterfactual approach. 2021. arXiv preprint arXiv:2101.04230.

Brima Y, Atemkeng M, Tankio Djiokap S, Ebiele J, Tchakounté F. Transfer learning for the detection and diagnosis of types of pneumonia including pneumonia induced by COVID-19 from chest X-ray images. Diagnostics. 2021;11(8):1480.


Bau D, Zhou B, Khosla A, Oliva A, Torralba A. Network dissection: Quantifying interpretability of deep visual representations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. pp. 6541–9.

Natekar P, Kori A, Krishnamurthi G. Demystifying brain tumor segmentation networks: interpretability and uncertainty analysis. Front Comput Neurosci. 2020;14:6.

Böhle M, Eitel F, Weygandt M, Ritter K. Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification. Front Aging Neurosci. 2019;11:194.

Camalan S, Mahmood H, Binol H, Araújo ALD, Santos-Silva AR, Vargas PA, et al. Convolutional neural network-based clinical predictors of oral dysplasia: class activation map analysis of deep learning results. Cancers. 2021;13(6):1291.

Kermany DS, Goldbaum M, Cai W, Valentim CC, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122–31.


Shi W, Tong L, Zhuang Y, Zhu Y, Wang MD. EXAM: an explainable attention-based model for COVID-19 automatic diagnosis. In: Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics; 2020. pp. 1–6.

Shi W, Tong L, Zhu Y, Wang MD. COVID-19 automatic diagnosis with radiographic imaging: Explainable attention transfer deep neural networks. IEEE J Biomed Health Inform. 2021;25(7):2376–87.

Nhlapho W, Atemkeng M, Brima Y, Ndogmo JC. Bridging the Gap: Exploring Interpretability in Deep Learning Models for Brain Tumor Detection and Diagnosis from MRI Images. Information. 2024;15(4):182.

Cheng J. Brain tumor dataset. figshare. 2017. https://doi.org/10.6084/m9.figshare.1512427.v5 .

Chowdhury MEH, Rahman T, Khandakar A, Mazhar R, Kadir MA, Mahbub ZB, et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access. 2020;8:132665–76. https://doi.org/10.1109/ACCESS.2020.3010287 .

Brima Y, Atemkeng M, Tankio Djiokap S, Ebiele J, Tchakounté F. Transfer Learning for the Detection and Diagnosis of Types of Pneumonia including Pneumonia Induced by COVID-19 from Chest X-ray Images. Diagnostics. 2021;11(8). https://doi.org/10.3390/diagnostics11081480 .

Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. pp. 4700–8.

Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015. pp. 1–9.

Tan M, Le Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In: International conference on machine learning. PMLR; 2019. pp. 6105–14.

Sundararajan M, Taly A, Yan Q. Axiomatic attribution for deep networks. In: International conference on machine learning. PMLR; 2017. pp. 3319–28.

Shrikumar A, Greenside P, Kundaje A. Learning important features through propagating activation differences. In: International conference on machine learning. PMLR; 2017. pp. 3145–53.

Smilkov D, Thorat N, Kim B, Viégas F, Wattenberg M. SmoothGrad: removing noise by adding noise. 2017. arXiv preprint arXiv:1706.03825.

Kapishnikov A, Venugopalan S, Avci B, Wedin B, Terry M, Bolukbasi T. Guided integrated gradients: An adaptive path method for removing noise. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021. pp. 5050–8.

Kapishnikov A, Bolukbasi T, Viégas F, Terry M. XRAI: Better attributions through regions. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV); 2019. pp. 4948–57.

Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE winter conference on applications of computer vision (WACV). IEEE; 2018. pp. 839–47.

Wang H, Wang Z, Du M, Yang F, Zhang Z, Ding S, et al. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2020. pp. 24–5.


Acknowledgements

We extend our gratitude to the reviewers for providing constructive feedback and valuable suggestions.

Funding

Open Access funding enabled and organized by Projekt DEAL. This research received no external funding.

Author information

Authors and Affiliations

Computer Vision, Institute of Cognitive Science, Osnabrück University, Osnabrueck, D-49090, Lower Saxony, Germany

Yusuf Brima

Department of Mathematics, Rhodes University, Grahamstown, 6140, Eastern Cape, South Africa

Marcellin Atemkeng


Contributions

Yusuf Brima: conception and experimental design, data preprocessing, analysis, and interpretation of results; drafted and proofread the article. Marcellin Atemkeng: mathematical modeling, statistical analysis and interpretation of results, data analysis; drafted and proofread the article.

Corresponding authors

Correspondence to Yusuf Brima or Marcellin Atemkeng .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1: Models' configuration

Table 2 shows the optimal hyperparameters for training the DL models discussed in this paper.
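As an illustration of the kind of configuration Table 2 summarizes, the sketch below sets up a transfer-learning classifier for the three-class brain tumor task. The specific values (input size, dropout rate, learning rate, epochs) are placeholders and not the hyperparameters reported in the table.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),                      # placeholder rate
    tf.keras.layers.Dense(3, activation="softmax"),    # 3 tumor classes
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # placeholder value
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=30)     # placeholder schedule
```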

Appendix 2: Explainability results

Visual explainability for the second- and third-best models for each dataset.

Figure 12. Comparative assessment of saliency techniques applied to brain MRI data using the DenseNet121 model, the second-best performing model on this dataset. Among these, ScoreCAM and GradCAM++ provide the most focused highlighting of the tumor regions across all tumor types, suggesting that they are more effective at localizing the feature areas the model relies on for accurate prediction.

Figure 13. The figure presents a comparison of various saliency techniques applied to brain MRI data using a ResNetV2 model. We noticed that Fast XRAI at 30% feature masking was able to highlight relevant tumor regions across the three disease classes. Other methods produced more coarse-grained saliency masks, as depicted in the plot.

Figure 14. Comparative evaluation of various techniques applied to chest X-ray images using an InceptionResNetV2 model, the second-best performing model on this chest X-ray dataset. Most methods other than XRAI, Fast XRAI 30%, and GradCAM did not produce clinically meaningful saliency masks, despite the model's strong prediction performance. It is, however, hard to evaluate these methods qualitatively because the dataset does not provide ground-truth segmentation masks.

Figure 15. Visualization of feature importance for different chest X-ray classifications using a VGG16 model. Rows correspond to the diagnostic categories Lung Opacity, Normal, and COVID-19; columns represent the explainability methods. XRAI Full, Fast XRAI 30%, GradCAM++, and ScoreCAM highlighted more meaningful features than the other methods. Fast XRAI also produces consistent salient features across the InceptionResNetV2 and VGG16 models.

Computed saliency scores for top performing models for each image-based saliency method

Figure 16. Visualization of GIG SIC scores at varying blurring thresholds for the best-performing model, Inception-ResNetV2, on the Brain Tumor dataset. Each panel displays the GIG Blurred image for a specific threshold, with the corresponding score indicating the model's confidence level. The thresholds range from 0 to 1.0, showcasing the progression of identified significant regions as the threshold increases. Higher thresholds emphasize more critical features, aligning with the model's high-confidence predictions, thus offering insights into the explainability and robustness of the Inception-ResNetV2 model in detecting and analyzing brain tumor regions.

Figure 17. Visualization of GradCAM SIC scores at varying thresholds for the same best-performing model, Inception-ResNetV2, on the Brain Tumor dataset. Unlike GIG, the scores only converge at higher thresholds (row three of this plot).

Figure 18. Visualization of GradCAM++ SIC scores at varying thresholds for the best-performing model, Inception-ResNetV2, on the Brain Tumor dataset. Like GradCAM, we noticed a similar trend in score convergence. However, the score converged at a threshold of 0.5 instead of 0.34 as in GradCAM.

Figure 19. Visualization of XRAI SIC scores at varying thresholds for the best-performing model, Inception-ResNetV2, on the Brain Tumor dataset. This method also converges in the last three thresholds as depicted in the figure.

Figure 20. Visualization of GIG Blurred SIC scores at varying thresholds for the best-performing model, Xception, on the Chest X-ray dataset. Unlike the Brain Tumor case, we noticed a different pattern here: the scores remain constant across the different thresholds, which is unexpected and counter-intuitive.

Figure 21. Visualization of GradCAM scores at varying thresholds for the best-performing model, Xception, on the Chest X-ray dataset. As with the previous result, the scores remain invariant across the blurring thresholds; the same holds for GradCAM++ and full XRAI.
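The thresholded blurring behind Figs. 16 to 21 can be sketched as follows: pixels whose (normalized) saliency falls below a threshold are replaced by a blurred copy, so higher thresholds keep only the most strongly attributed regions sharp, and each blurred variant can then be re-scored by the model. The blur strength and threshold grid are assumptions, not the exact settings used for the figures.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_below_threshold(image, saliency_map, threshold, sigma=6.0):
    # image assumed (H, W, C); saliency_map assumed (H, W).
    sal = saliency_map - saliency_map.min()
    sal = sal / (sal.max() + 1e-8)                        # normalize saliency to [0, 1]
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    keep = (sal >= threshold)[..., None]                  # broadcast mask over channels
    return np.where(keep, image, blurred)

# Example threshold sweep, analogous to the panels in the figures above:
# variants = [blur_below_threshold(image, sal_map, t) for t in np.linspace(0.0, 1.0, 6)]
```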

Buffer-size-based AICs and SICs evaluations

Figure 22. Aggregated AICs comparing the performance of various saliency methods on the Brain Tumor MRI image classification task. Vanilla IG achieves the highest AUC of 0.871, followed closely by SmoothGrad IG (0.866) and Guided IG (0.835), suggesting these methods are particularly effective in retaining relevant image regions. ScoreCAM shows a respectable AUC of 0.706, indicating good performance as well. GradCAM and GradCAM++ display moderate effectiveness with AUC values of 0.595 and 0.560, respectively. XRAI has an AUC of 0.511, and the Random saliency mask shows an AUC of 0.493, suggesting that some important regions might be retained by chance. This comparison highlights the sensitivity of the saliency metric scores to the entropy estimation used, since these AUCs agree neither with the visual saliency results nor with the Shannon entropy-based approach.

Figure 23. Aggregated SICs comparing the performance of various saliency methods on the Brain Tumor MRI image classification task. Vanilla IG achieves the highest AUC of 0.893, closely followed by SmoothGrad IG (0.884) and Guided IG (0.865), suggesting these methods are particularly effective in highlighting regions that influence the model's class probabilities. ScoreCAM also performs well with an AUC of 0.768. GradCAM++ and GradCAM show moderate performance with AUC values of 0.634 and 0.620, respectively. XRAI shows an AUC of 0.530, and the Random saliency mask exhibits an AUC of 0.573, indicating some critical regions might be retained by chance. This comparison highlights the variability in this evaluation metric irrespective of the underlying approach to estimating image entropy.

Figure 24. Aggregated AICs evaluating the performance of various saliency attribution methods on the Chest X-ray image classification task. ScoreCAM demonstrates the highest AUC of 0.077, suggesting it retains the most relevant image regions effectively. This is followed by XRAI with an AUC of 0.071, Vanilla IG with an AUC of 0.053, and Guided IG with an AUC of 0.042. Methods like SmoothGrad IG, GradCAM, and GradCAM++ show minimal to zero AUC values, indicating limited effectiveness in this evaluation. The overall trend highlights that some methods, particularly ScoreCAM and XRAI, provide better retention of relevant regions compared to others. This result is in line with the Shannon entropy-based approach.

Figure 25. Aggregated SICs comparing the performance of various saliency methods on the Chest X-ray images. The overall trend shows that Vanilla IG achieves the highest AUC of 0.972, closely followed by SmoothGrad IG (0.970) and Guided IG (0.961). Random saliency exhibits a high AUC of 0.828, suggesting that some important regions might be retained by chance. Other methods, including XRAI (0.731), GradCAM (0.694), ScoreCAM (0.692), and GradCAM++ (0.660), show moderate performance. This detailed comparison highlights a somewhat inverse relation with the visual explainability results.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Brima, Y., Atemkeng, M. Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis. BioData Mining 17, 18 (2024). https://doi.org/10.1186/s13040-024-00370-4


Received : 07 August 2023

Accepted : 10 June 2024

Published : 22 June 2024

DOI : https://doi.org/10.1186/s13040-024-00370-4


BioData Mining

ISSN: 1756-0381


