Qualitative vs Quantitative Research Methods & Data Analysis

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

What is the difference between quantitative and qualitative?

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed in numerical terms. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.

Qualitative research, on the other hand, collects non-numerical data such as words, images, and sounds. The focus is on exploring subjective experiences, opinions, and attitudes, often through observation and interviews.

Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive and concerns phenomena that can be observed but not measured, such as language.

What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Denzin and Lincoln (1994, p. 2)

Interest in qualitative data came about as the result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific approach taken by psychologists such as the behaviorists (e.g., Skinner).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research, since it fails to capture the totality of human experience and the essence of being human. Exploring participants’ experiences is known as a phenomenological approach (see Humanism).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Examples of qualitative research questions include: What does an experience feel like? How do people talk about something? How do they make sense of an experience? How do events unfold for people?

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews, documents, focus groups, case study research, and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience. Denzin and Lincoln (1994, p. 14)

Here are some examples of qualitative data:

Interview transcripts: Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations: The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews: Generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals: Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.
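As a toy illustration of the bookkeeping involved in coding, the sketch below tallies codes across interview excerpts and shows how a researcher-defined grouping of codes into candidate themes might be supported by the data. The excerpts, codes, and themes here are all invented for illustration; real thematic analysis is an interpretive process, not merely a counting exercise.

```python
from collections import Counter

# Hypothetical codes assigned to interview excerpts during a first coding pass.
coded_excerpts = [
    ("I just felt nobody listened to me", ["not_heard"]),
    ("The ward was so noisy at night", ["environment"]),
    ("Staff were rushed off their feet", ["staff_pressure"]),
    ("I didn't want to bother the nurses", ["not_heard", "staff_pressure"]),
]

# Candidate themes grouping related codes (a judgment made by the researcher,
# not by the software).
themes = {
    "Feeling unheard": ["not_heard"],
    "Pressures on care": ["staff_pressure", "environment"],
}

# Tally how often each code was applied across all excerpts.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)

# Report how many coded excerpts support each candidate theme.
for theme, codes in themes.items():
    support = sum(code_counts[c] for c in codes)
    print(f"{theme}: supported by {support} coded excerpt(s)")
```

In practice, qualitative analysis software (e.g., NVivo, ATLAS.ti) performs this kind of tallying, but the interpretive work of defining codes and themes remains with the researcher.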

Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret it. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomenon across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the possible ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health. Here are a few examples:

One example is the Experience in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles.

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 
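A minimal sketch of how such a total score might be computed and bucketed is shown below. The item responses are invented, and the severity cut-offs are the commonly cited BDI-II ranges; the manual for the version actually administered should be consulted for scoring.

```python
# Illustrative scoring of BDI-style responses: 21 items, each rated 0-3.
# Item scores below are invented for illustration.
item_scores = [1, 0, 2, 1, 0, 1, 2, 0, 0, 1, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0]
assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)

total = sum(item_scores)  # possible range: 0-63

# Commonly cited BDI-II severity ranges (check the manual for the version used).
if total <= 13:
    severity = "minimal"
elif total <= 19:
    severity = "mild"
elif total <= 28:
    severity = "moderate"
else:
    severity = "severe"

print(total, severity)
```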

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
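To make the distinction concrete, the sketch below computes descriptive statistics for two hypothetical groups and then an independent-samples t statistic (the equal-variance form) as a simple inferential step. All of the data are invented; a real analysis would use a statistics package and report the associated p value.

```python
import statistics
from math import sqrt

# Hypothetical post-test scores from an intervention group and a control group.
intervention = [14, 17, 15, 16, 18, 15, 17, 16]
control = [12, 13, 14, 12, 15, 13, 14, 13]

# Descriptive statistics: summarize each group with a mean and standard deviation.
for name, data in [("intervention", intervention), ("control", control)]:
    print(name, round(statistics.mean(data), 2), round(statistics.stdev(data), 2))

# Inferential step: independent-samples t statistic using a pooled variance.
n1, n2 = len(intervention), len(control)
m1, m2 = statistics.mean(intervention), statistics.mean(control)
sp2 = ((n1 - 1) * statistics.variance(intervention)
       + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# The t value is compared against a t distribution with n1 + n2 - 2 degrees
# of freedom to decide whether the group difference is statistically significant.
print(round(t, 2))
```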

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, the reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meanings that the questions may have for them (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on theory or hypothesis testing rather than on theory or hypothesis generation.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS . Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics . Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The Good Research Guide: for small-scale social research. McGraw Hill.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.

Minichiello, V. (1990). In-Depth Interviewing: Researching People. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

Further Information

  • Mixed methods research
  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


Chapter 21. Conclusion: The Value of Qualitative Research

Qualitative research is engaging research, in the best sense of the word.

A few of the meanings of engage = to attract or hold by influence or power; to hold the attention of; to induce to participate; to enter into contest with; to bring together or interlock; to deal with at length; to pledge oneself; to begin and carry on an enterprise; to take part or participate; to come together; engaged = to be actively involved in or committed; to greatly interest; to be embedded with. ( Merriam-Webster Unabridged Dictionary )

There really is no “cookbook” for conducting qualitative research. Each study is unique because the social world is rich and full of wonders, and those of us who are curious about it have our own position in that world and our own understandings and experiences we bring with us when we seek to explore it. And yet even though our reports may be subjective, we can do what we can to make them honest and intelligible to everyone else. Learning how to do that is learning how to be a qualitative researcher rather than simply an amateur observer. Helping you understand that and getting you ready for doing so have been the goal of this book.


According to Lareau (2021:36), excellent qualitative work must include all the following elements: a clear contribution to new knowledge, a succinct assessment of previous literature that shows the holes in the literature, a research question that can be answered with the data in hand, a breadth and depth in the data collection, a clear exposition of the results, a deep analysis that links the evidence to the interpretation, an acknowledgment of disconfirming evidence, a discussion that uses the case as a springboard to reflect on more general concerns, and a full discussion of implications for ideas and practices. The emphasis on rigor, the clear contribution to new knowledge, and the reflection on more general concerns place qualitative research within the “scientific” camp vis-à-vis the “humanistic inquiry” camp of pure description or ideographic approaches. The attention to previous literature and filling the holes in what we know about a phenomenon or case or situation set qualitative research apart from otherwise excellent journalism, which makes no pretensions of writing to or for a larger body of knowledge.

In the magnificently engaging untextbook Rocking Qualitative Social Science, Ashley Rubin (2021) notes, “Rigorous research does not have to be rigid” (3). I agree with her claim that there are many ways to get to the top of the mountain, and you can have fun doing so. An ardent rock climber, Rubin calls her approach the Dirtbagger approach, a way of climbing the mountain that is creative, flexible, and definitely outside prescribed methods. Here are eleven lessons offered by Rubin in paraphrase form, with commentary and direct quotes noted:

  • There is no right way to do qualitative social science, “and people should choose the approach that works for them, for the particular project at hand, given whatever constraints and opportunities are happening in their life at the time” (252).
  • Disagreements about what is proper qualitative research are distracting and misleading.
  • Even though research questions are very important, they can and most likely will change during data collection or even data analysis—don’t worry about this.
  • Your findings will have a bigger impact if you’ve connected them to previous literature; this shows that you are part of the larger conversation. This “anchor” can be a policy issue or a theoretical debate in the literature, but it need not be either. Sometimes what we do is really novel (but rarely—so always poke around and check before proceeding as if you are reinventing the wheel).
  • Although there are some rules you really must follow when designing your study (e.g., how to obtain informed consent, defining a sample), unexpected things often happen in the course of data collection that make a mockery of your original plans. Be flexible.
  • Sometimes you have chosen a topic for some reason you can’t yet articulate to yourself—the subject or site just calls to you in some way. That’s fine. But you will still need to justify your choice in some way (hint: see number 4 above).
  • Pay close attention to your sample: “Think about what you are leaving out, what your data allow you to observe, and what you can do to fill in some of those blanks” (252).  And when you can’t fill them in, be honest about this when writing about the limitations of your study.
  • Even if you are doing interviews, archival research, focus groups, or any other method of data collection that does not actually require “going into the field,” you can still approach your work as fieldwork. This means taking fieldnotes or memos about what you are observing and how you are reacting and processing those observations or interviews or interactions or documents. Remember that you yourself are the instrument of data collection, so keep a reflective eye on yourself throughout.
  • Memo, memo, memo. There is no magic about how data become findings. It takes a lot of work, a lot of reflection, a lot of writing. Analytic memos are the helpful bridge between all that raw data and the presented findings.
  • Rubin strongly rejects the idea that qualitative research cannot make causal claims. I would agree, but only to a point. We don’t make the kinds of predictive causal claims you see in quantitative research, and it can confuse you and lead you down some unpromising paths if you think you can. That said, qualitative research can help demonstrate the causal mechanisms by which something happens. Qualitative research is also helpful in exploring alternative explanations and counterfactuals. If you want to know more about qualitative research and causality, I encourage you to read chapter 10 of Rubin’s text.
  • Some people are still skeptical about the value of qualitative research because they don’t understand the rigor required of it and confuse it with journalism or even fiction writing. You are just going to have to deal with this—maybe even people sitting on your committee are going to question your research. So be prepared to defend qualitative research by knowing the common misconceptions and criticisms and how to respond to them. We’ve talked a bit about these in chapter 20, and I also encourage you to read chapter 10 of Rubin’s text for more.

Hopefully, by the time you have reached the end of this book, you will have done a bit of your own qualitative research—maybe you’ve conducted an interview or practiced taking fieldnotes. You may have read some examples of excellent qualitative research and have (hopefully!) come to appreciate the value of this approach. This is a good time, then, to take a step back and think about the ways that qualitative research is valuable, distinct and different from both quantitative methods and humanistic (nonscientific) inquiry.

Researcher Note

Why do you employ qualitative research methods in your area of study?

Across all Western countries, we can observe a strong statistical relationship between young people’s educational attainment and their parent’s level of education. If you have at least one parent who went to university, your own chances of going to and graduating from university are much higher compared to not having university-educated parents. Why this happens is much less clear… This is where qualitative research becomes important: to help us get a clearer understanding of the dynamics that lead to this observed statistical relationship.

In my own research, I go a step further and look at young men and women who have crossed this barrier: they have become the first in their family to go to university. I am interested in finding out why and how first-in-family university students made it to university and how being at university is experienced. In-depth interviews allow me to learn about hopes, aspirations, fears, struggles, resilience and success. Interviews give participants an opportunity to tell their stories in their own words while also validating their experiences.

I often ask the young people I interview what being in my studies means to them. As one of my participants told me, it is good to know that “people like me are worth studying.” I cannot think of a better way to explain why qualitative research is important.

-Wolfgang Lehman, author of Education and Society: Canadian Perspectives

For me personally, the real value of the qualitative approach is that it helps me address the concerns I have about the social world—how people make sense of their lives, how they create strategies to deal with unfair circumstances or systems of oppression, and why they are motivated to act in some situations but not others. Surveys and other forms of large impersonal data collection simply do not allow me to get at these concerns. I appreciate other forms of research for other kinds of questions. This ecumenical approach has served me well in my own career as a sociologist—I’ve used surveys of students to help me describe classed pathways through college and into the workforce, supplemented by interviews and focus groups that help me explain and understand the patterns uncovered by quantitative methods ( Hurst 2019 ). My goal for this book has not been to convince you to become a qualitative researcher exclusively but rather to understand and appreciate its value under the right circumstances (e.g., with the right questions and concerns).

In the same way that we would not use a screwdriver to hammer a nail into the wall, we don’t want to misuse the tools we have at hand. Nor should we critique the screwdriver for its failure to do the hammer’s job. Qualitative research is not about generating predictions or demonstrating causality. We can never statistically generalize our findings from a small sample of people in a particular context to the world at large. But that doesn’t mean we can’t generate better understandings of how the world works, despite “small” samples. Excellent qualitative research does a great job describing (whether through “thick description” or illustrative quotes) a phenomenon, case, or setting and generates deeper insight into the social world through the development of new concepts or identification of patterns and relationships that were previously unknown to us. The two components—accurate description and theoretical insight—are generated together through the iterative process of data analysis, which itself is based on a solid foundation of data collection. And along the way, we can have some fun and meet some interesting people!


Supplement: Twenty Great (engaging, insightful) Books Based on Qualitative Research

Armstrong, Elizabeth A. and Laura T. Hamilton. 2015. Paying for the Party: How College Maintains Inequality . Cambridge: Harvard University Press.

Bourgois, Phillipe and Jeffrey Schonberg. 2009. Righteous Dopefiend . Berkeley, CA: University of California Press.

DiTomaso, Nancy. 2013. The American Non-dilemma: Racial Inequality without Racism. Thousand Oaks, CA: SAGE.

Ehrenreich, Barbara. 2010. Nickel and Dimed: On (Not) Getting By in America . New York: Metropolitan Books.

Fine, Gary Alan. 2018. Talking Art: The Culture of Practice and the Practice of Culture in MFA Education . Chicago: University of Chicago Press.

Ghodsee, Kristen Rogheh. 2011. Lost in Transition: Ethnographies of Everyday Life after Communism . Durham, NC: Duke University Press.

Gowan, Teresa. 2010. Hobos, Hustlers, and Backsliders: Homeless in San Francisco . Minneapolis: University of Minnesota Press.

Graeber, David. 2013. The Democracy Project: A History, a Crisis, a Movement . New York: Spiegel & Grau.

Grazian, David. 2015. American Zoo: A Sociological Safari . Princeton, NJ: Princeton University Press.

Hartigan, John. 1999. Racial Situations: Class Predicaments of Whiteness in Detroit . Princeton, N.J.: Princeton University Press.

Ho, Karen Zouwen. 2009. Liquidated: An Ethnography of Wall Street. Durham, NC: Duke University Press.

Hochschild, Arlie Russell. 2018. Strangers in Their Own Land: Anger and Mourning on the American Right . New York: New Press.

Lamont, Michèle. 1994. Money, Morals, and Manners: The Culture of the French and the American Upper-Middle Class . Chicago: University of Chicago Press.

Lareau, Annette. 2011. Unequal Childhoods: Class, Race, and Family Life. 2nd ed with an Update a Decade Later. Berkeley, CA: University of California Press.

Leondar-Wright, Betsy. 2014. Missing Class: Strengthening Social Movement Groups by Seeing Class Cultures . Ithaca, NY: ILR Press.

Macleod, Jay. 2008. Ain’t No Makin’ It: Aspirations and Attainment in a Low-Income Neighborhood . 3rd ed. New York: Routledge.

Newman, Katherine T. 2000. No Shame in My Game: The Working Poor in the Inner City . 3rd ed. New York: Vintage Press.

Sherman, Rachel. 2006. Class Acts: Service and Inequality in Luxury Hotels . Berkeley: University of California Press.

Streib, Jessi. 2015. The Power of the Past: Understanding Cross-Class Marriages . Oxford: Oxford University Press.

Stuber, Jenny M. 2011. Inside the College Gates: How Class and Culture Matter in Higher Education . Lanham, Md.: Lexington Books.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Qualitative and Quantitative Research — Explore the differences

Sumalatha G

In the research arena, there are two main approaches that researchers can take — qualitative and quantitative research. Understanding the fundamentals of these two methods is crucial for conducting effective research and obtaining accurate results.

This article provides insights into the differences between qualitative and quantitative research. We also discuss how to develop research questions for qualitative and quantitative studies, how to gather and analyze data using these approaches, how to interpret the findings, and the ethical considerations involved.

By the end of this comprehensive article, readers will be equipped with the knowledge and tools to apply qualitative and quantitative research to advance knowledge in their respective fields.

What is Qualitative and Quantitative Research?

Qualitative research aims to understand complex phenomena by exploring the subjective experiences and perspectives of individuals. It focuses on gathering in-depth data through techniques such as interviews, observations, and open-ended surveys. This approach allows researchers to delve into the intricacies of the topic, uncovering unique insights that may not be captured through quantitative methods alone.

For example, imagine a study on the impact of social media on mental health. Qualitative research would involve conducting interviews with individuals who have experienced negative effects from excessive social media use. Through these interviews, researchers can gain a deep understanding of the participants' experiences, emotions, and thoughts. They can explore the nuances of how social media affects different aspects of mental health, such as self-esteem, body image, and social comparison.

Conversely, quantitative research involves collecting numerical data and analyzing it using statistical methods to identify patterns, trends, and relationships. This approach allows researchers to generalize their findings to a larger population and calculate statistically significant results. It relies on structured surveys, experiments, and other data collection methods that provide standardized data for analysis.

Continuing with the example of social media and mental health, quantitative research would involve administering surveys to a large sample of individuals. The surveys would include questions that measure various aspects of mental health, such as anxiety, depression, and life satisfaction. By collecting numerical data from a large and diverse sample, researchers can identify trends and relationships between social media use and mental health outcomes.

Both qualitative and quantitative research have their strengths and weaknesses. Qualitative research allows for a deep understanding of the topic, providing rich insights and capturing the context of the participants' experiences. It allows researchers to uncover unique perspectives and shed light on subjective experiences.

On the other hand, quantitative research entails a structured and systematic approach to data collection and analysis, allowing for comparisons and generalizations across different groups and contexts.

However, it is crucial to emphasize that qualitative and quantitative research are not mutually exclusive. They frequently serve as a complement to one another within the realm of research studies. Researchers may use qualitative methods to explore a topic in-depth and generate hypotheses, which can then be tested using quantitative methods. This combination of approaches, known as mixed methods research, allows for a more comprehensive understanding of complex phenomena.

Advantages and Disadvantages of Each Research Method

Qualitative research offers the advantage of generating detailed and nuanced data. It allows researchers to explore complex issues and gain a deeper understanding of participants' thoughts, emotions, and behaviors. However, qualitative research can be time-consuming, and data analysis may be subjective.

In contrast, quantitative research provides objective and quantifiable data, making it easier to draw conclusions and establish causation. It enables researchers to collect data from large samples, increasing the generalizability of findings. Nevertheless, quantitative research may overlook important contextual information and fail to capture the complexities of human experiences. Additionally, it requires a solid understanding of statistical techniques for accurate analysis.

When to Use Qualitative or Quantitative Research?

The choice between qualitative and quantitative research depends on the research questions and objectives. Qualitative research is appropriate when exploring new or complex phenomena, seeking in-depth insights, or generating hypotheses for further investigation. It is particularly useful in social sciences and humanities. On the other hand, quantitative research is suitable when aiming to establish causal relationships, generalize findings to a larger population, or measure phenomena systematically and objectively. It is commonly employed in sciences such as psychology, economics, and medicine.

By considering the nature of the research question, the available resources, and the desired outcomes, researchers can make an informed decision on the appropriate research approach.

How to Develop Research Questions for Qualitative and Quantitative Studies?

A well-defined research question is essential for conducting meaningful research. In qualitative studies, research questions are exploratory and aim to understand the experiences, perceptions, and meanings of participants. These questions should be open-ended and allow for in-depth exploration of the phenomenon under investigation.

In quantitative research, research questions are often formulated to test hypotheses or examine relationships between variables. These questions should be clear, specific, and measurable to guide data collection and analysis.

Regardless of the research approach, it is crucial to develop research questions that align with the research objectives, are feasible to investigate, and contribute to existing knowledge in the field.

Gathering and Analyzing Data

Qualitative research involves collecting data through various techniques, such as interviews, focus groups, and observations. Researchers must establish rapport with participants to encourage open and honest responses. The data collected is then analyzed using methods like thematic analysis and constant comparison to identify patterns, themes, and categories.

In quantitative research, data is collected using surveys, experiments, or other structured methods. Researchers aim to obtain a representative sample and ensure the reliability and validity of the data. Statistical analysis techniques, such as descriptive statistics, correlation, and regression, are then applied to draw conclusions.

Regardless of the research approach, it is essential to document the data collection and analysis process thoroughly to ensure transparency and reproducibility.

Interpreting Findings

Interpreting findings from qualitative research involves carefully analyzing the patterns, themes, and categories identified during data analysis. Researchers aim to understand the overarching meaning of the data and draw conclusions based on the participants' experiences and perspectives. The findings are often supported by direct quotes or examples from the data.

In quantitative research, findings are interpreted by analyzing statistical results and examining the significance of relationships or differences. Researchers must carefully consider the limitations of the study and the generalizability of the findings. The results are often presented using tables, charts, and graphs for clarity.

Irrespective of the research approach, it is crucial to avoid generalizing beyond the scope of the data and to consider alternative interpretations.

Identifying Ethical Considerations in Qualitative and Quantitative Research

Both qualitative and quantitative research must adhere to ethical guidelines to protect the rights and well-being of participants. Researchers should obtain informed consent, ensure confidentiality, and prevent harm. In qualitative research, building trust and maintaining participant anonymity is crucial. In quantitative research, privacy and data protection are paramount.

Additionally, researchers must consider the potential biases, power dynamics, and conflicts of interest that may influence the research process and findings. Being aware of these ethical considerations helps ensure the integrity and reliability of the research.

How to Write a Research Report Based on Qualitative or Quantitative Data

When writing a research report, it is essential to structure it clearly and concisely. In qualitative research, the report typically includes an introduction, literature review, methodology, findings, discussion, and conclusion. The findings section focuses on the themes and patterns identified during analysis and is supported by quotes or examples from the data.

In quantitative research, the report generally consists of an introduction, literature review, methodology, results, discussion, and conclusion. The results section presents the statistical analysis and findings in a clear and organized manner, often using tables, charts, and graphs.

The report should be written in a scholarly tone, provide sufficient details, and communicate the research findings and implications.

Assessing Reliability and Validity of Qualitative and Quantitative Results

Reliability and validity are crucial considerations in research. In qualitative research, researchers can enhance reliability by using multiple researchers to analyze the data and compare their interpretations. Validity can be strengthened by employing rigorous data collection methods, establishing trustworthiness, and including participant validation.

In quantitative research, reliability can be assessed through test-retest reliability or inter-rater reliability. Validity can be evaluated by examining internal validity, external validity, and construct validity. Additionally, researchers should carefully consider potential confounding variables and ensure proper control measures are in place.

By assessing reliability and validity, researchers can enhance the credibility and trustworthiness of their research findings.

Qualitative and quantitative research are distinct yet complementary approaches to conducting research. Understanding when to use each method, developing appropriate research questions, gathering and analyzing data, interpreting findings, and addressing ethical considerations are all critical aspects of conducting valuable research. By embracing these methodologies and applying them appropriately, researchers can contribute to the advancement of knowledge and make meaningful contributions to their respective fields.


Qualitative vs Quantitative Research: Differences, Examples, and Methods

There are two broad kinds of research approaches, qualitative and quantitative, that are used to study and analyze phenomena in various fields such as the natural sciences, social sciences, and humanities. Whether you have realized it or not, your research must have followed either or both research types. In this article we will discuss what qualitative vs quantitative research is, their applications, pros and cons, and when to use each. Before we get into the details, it is important to understand the differences between qualitative and quantitative research.


Qualitative vs Quantitative Research

Quantitative research deals with quantity; hence, this research type is concerned with numbers and statistics to prove or disprove theories or hypotheses. In contrast, qualitative research is all about quality: characteristics, unquantifiable features, and meanings that offer a deeper understanding of behavior and phenomena. These two methodologies serve complementary roles in the research process, each offering unique insights and methods suited to different research questions and objectives.

Qualitative and quantitative research approaches have their own unique characteristics, drawbacks, advantages, and uses. Where quantitative research is mostly employed to validate theories or assumptions with the goal of generalizing findings to the larger population, qualitative research is used to study concepts, thoughts, or experiences in order to uncover the underlying reasons, motivations, and meanings behind human behavior.

What Are the Differences Between Qualitative and Quantitative Research  

Qualitative and quantitative research differ in the methods they employ to conduct research and to collect and analyze data. For example, qualitative research usually relies on interviews, observations, and textual analysis to explore subjective experiences and diverse perspectives, while quantitative data collection methods include surveys, experiments, and statistical analysis to gather and analyze numerical data. The differences between the two research approaches across various aspects are listed in the table below.

     
Aspect | Qualitative research | Quantitative research
Purpose | Understanding meanings, exploring ideas, behaviors, and contexts, and formulating theories | Generating and analyzing numerical data, quantifying variables using logical, statistical, and mathematical techniques to test hypotheses
Sample size | Limited, typically not representative | Large, to draw conclusions about the population
Data format | Non-numeric: words, text, and visual narratives | Numeric: values, graphs, and statistics; measurable
Data collection | Interviews, focus groups, observations, ethnography, literature review, and surveys | Surveys, experiments, and structured observations
Analysis | Inductive, thematic, and narrative in nature | Deductive, statistical, and numerical in nature
Nature | Subjective | Objective
Question format | Open-ended questions | Close-ended (yes/no) or multiple-choice questions
Findings | Descriptive and contextual | Quantifiable and generalizable
Generalizability | Limited; context-dependent findings | High; results applicable to a larger population
Research type | Exploratory research method | Conclusive research method
Goal | To delve deeper into a topic to understand underlying themes, patterns, and concepts | To analyze cause-and-effect relationships between variables to understand complex phenomena
Common designs | Case studies, ethnography, and content analysis | Surveys, experiments, and correlational studies


Data Collection Methods  

There are differences between qualitative and quantitative research when it comes to data collection, as they deal with different types of data. Qualitative research is concerned with personal or descriptive accounts that help researchers understand human behavior within society. Quantitative research deals with numerical or measurable data used to delineate relationships among variables. Hence, qualitative data collection methods differ significantly from quantitative data collection methods due to the nature of the data being collected and the research objectives. Below is the list of data collection methods for each research approach:

Qualitative Research Data Collection  

  • Interviews
  • Focus groups
  • Content analysis
  • Literature review
  • Observation
  • Ethnography

Qualitative research data collection can involve one-on-one interviews to capture in-depth perspectives of participants using open-ended questions. These interviews can be structured, semi-structured, or unstructured depending on the nature of the study. Focus groups can be used to explore specific topics and generate rich data through discussions among participants. Another qualitative data collection method is content analysis, which involves systematically analyzing text documents, audio and video files, or visual content to uncover patterns, themes, and meanings; this can be done through coding and categorization of raw data to draw meaningful insights. Data can also be collected through observation studies, where the goal is simply to observe and document behaviors, interactions, and phenomena in natural settings without interference. Lastly, ethnography allows researchers to immerse themselves in the culture or environment under study for a prolonged period to gain a deep understanding of the social phenomena.

Quantitative Research Data Collection  

  • Surveys/questionnaires
  • Experiments
  • Secondary data analysis
  • Structured observations
  • Case studies
  • Tests and assessments

Quantitative research data collection approaches comprise fundamental methods for generating numerical data that can be analyzed using statistical or mathematical tools. The most common quantitative data collection approach is the use of structured surveys with close-ended questions to collect quantifiable data from a large sample of participants. These can be conducted online, over the phone, or in person.

Performing experiments is another important data collection approach, in which variables are manipulated under controlled conditions to observe their effects on dependent variables. This often involves random assignment of participants to different conditions or groups. Such experimental settings are employed to gauge cause-and-effect relationships and understand complex phenomena. At times, instead of acquiring original data, researchers may work with secondary data, which is a dataset curated by others, such as government agencies, research organizations, or academic institutions; this secondary data is then analyzed to identify patterns and relationships among variables. With structured observations, behaviors or phenomena can be systematically observed and recorded as they occur, and controlling the variables aids in understanding the relationships among them. Finally, case studies form an interesting methodology in which a researcher studies a single entity or a small number of entities (individuals or organizations) in detail to understand complex phenomena within a specific context.

Qualitative vs Quantitative Research Outcomes  

Qualitative research and quantitative research lead to different research outcomes, each with its own strengths and limitations. For example, qualitative research outcomes provide deep, descriptive accounts of human experiences, motivations, and perspectives that allow us to identify themes, narratives, and the contexts in which behaviors, attitudes, or phenomena occur. Quantitative research outcomes, on the other hand, produce numerical data that is analyzed statistically to establish patterns and relationships objectively, form generalizations about the larger population, and make predictions. This numerical data can be presented in the form of graphs, tables, or charts. Both approaches offer valuable perspectives on complex phenomena, with qualitative research focusing on depth and interpretation, while quantitative research emphasizes numerical analysis and objectivity.


When to Use Qualitative vs Quantitative Research Approach  

The decision to choose between qualitative and quantitative research depends on various factors, such as the research question, objectives, whether you are taking an inductive or deductive approach, available resources, practical considerations such as time and money, and the nature of the phenomenon under investigation. To simplify, quantitative research can be used if the aim of the research is to prove or test a hypothesis, while qualitative research should be used if the research question is more exploratory and an in-depth understanding of the concepts, behavior, or experiences is needed.     

Qualitative research approach  

The qualitative research approach is used in the following scenarios:

  • To study complex phenomena: When the research requires understanding the depth, complexity, and context of a phenomenon.  
  • Collecting participant perspectives: When the goal is to understand the why behind a certain behavior, and a need to capture subjective experiences and perceptions of participants.  
  • Generating hypotheses or theories: When generating hypotheses, theories, or conceptual frameworks based on exploratory research.  

Example: If you have a research question “What obstacles do expatriate students encounter when acquiring a new language in their host country?”  

This research question can be addressed using the qualitative research approach by conducting in-depth interviews with 15-25 expatriate university students. Ask open-ended questions such as "What are the major challenges you face while attempting to learn the new language?", "Do you find it difficult to learn the language as an adult?", and "Do you feel practicing with a native friend or colleague helps the learning process?"

Based on the findings from these answers, a follow-up questionnaire can be planned for clarification. The next step will be to transcribe all interviews using transcription software and identify themes and patterns.

Quantitative research approach  

The quantitative research approach is used in the following scenarios:

  • Testing hypotheses or proving theories: When aiming to test hypotheses, establish relationships, or examine cause-and-effect relationships.   
  • Generalizability: When needing findings that can be generalized to broader populations using large, representative samples.  
  • Statistical analysis: When requiring rigorous statistical analysis to quantify relationships, patterns, or trends in data.   

Example: Considering the above example, you can conduct a survey of 200-300 expatriate university students and ask them specific questions such as: "On a scale of 1-10, how difficult is it to learn a new language?"

Next, statistical analysis can be performed on the responses to draw conclusions such as: on average, expatriate students rated the difficulty of learning a language 6.5 on a scale of 10.
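The averaging step above can be sketched in a few lines. This is a minimal illustration with hypothetical ratings (the values are invented so that the mean comes out to 6.5, matching the example), using only Python's standard library:

```python
# A minimal sketch with hypothetical 1-10 difficulty ratings,
# averaging survey responses as in the example above.
from statistics import mean, stdev

ratings = [7, 6, 8, 5, 7, 6, 7, 6, 7, 6]  # hypothetical survey responses

avg = mean(ratings)   # arithmetic mean of the ratings
sd = stdev(ratings)   # sample standard deviation, to report spread
print(f"Average difficulty: {avg:.1f} (SD = {sd:.2f})")
```

In a real study the list would hold hundreds of responses imported from the survey tool, but the computation is the same.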

Mixed methods approach  

In many cases, researchers may opt for a mixed methods approach , combining qualitative and quantitative methods to leverage the strengths of both approaches. Researchers may use qualitative data to explore phenomena in-depth and generate hypotheses, while quantitative data can be used to test these hypotheses and generalize findings to broader populations.  

Example: Both qualitative and quantitative research methods can be used in combination to address the above research question. Through open-ended questions you can gain insights about different perspectives and experiences while quantitative research allows you to test that knowledge and prove/disprove your hypothesis.   

How to Analyze Qualitative and Quantitative Data  

When it comes to analyzing qualitative and quantitative data, the focus is on identifying patterns in the data to highlight the relationship between elements. The best research method for any given study should be chosen based on the study aim. A few methods to analyze qualitative and quantitative data are listed below.  

Analyzing qualitative data  

Qualitative data analysis is challenging because the data is not expressed in numbers and consists mainly of texts, images, or videos. Hence, care must be taken when choosing an analytical approach. Some common steps in analyzing qualitative data include:

  • Organization: The first step is organizing the data (transcripts or notes) into categories with similar concepts, themes, and patterns to find inter-relationships.
  • Coding: Data can be arranged in categories based on themes/concepts using coding.  
  • Theme development: Utilize higher-level organization to group related codes into broader themes.  
  • Interpretation: Explore the meaning behind different emerging themes to understand connections. Use different perspectives like culture, environment, and status to evaluate emerging themes.  
  • Reporting: Present findings with quotes or excerpts to illustrate key themes.   
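The coding and theme-development steps above can be sketched programmatically. The excerpts, codes, and theme mapping below are all hypothetical, invented purely for illustration; in practice coding is an interpretive act done by researchers, and software only tallies the results:

```python
# A minimal sketch: tallying hypothetical coded excerpts into broader themes.
from collections import Counter

# Hypothetical coded interview excerpts: (excerpt, assigned code)
coded_data = [
    ("I avoid speaking in class", "anxiety"),
    ("Grammar rules confuse me", "grammar"),
    ("Locals speak too fast", "listening"),
    ("I worry about sounding foreign", "anxiety"),
    ("Verb endings are hard", "grammar"),
]

# Theme development: group related codes under broader themes (assumed mapping)
themes = {
    "anxiety": "emotional barriers",
    "grammar": "linguistic barriers",
    "listening": "linguistic barriers",
}

# Count how many excerpts support each theme, for the reporting step
theme_counts = Counter(themes[code] for _, code in coded_data)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} excerpt(s)")
```

The counts indicate which themes are best supported; the report itself would pair each theme with illustrative quotes, as the Reporting step describes.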

Analyzing quantitative data  

Quantitative data analysis is more straightforward than qualitative analysis, as it primarily deals with numbers. Data can be evaluated using simple math or advanced statistics (descriptive or inferential). Some common steps in analyzing quantitative data include:

  • Processing raw data: Check missing values, outliers, or inconsistencies in raw data.  
  • Descriptive statistics: Summarize data with means, standard deviations, or standard error using programs such as Excel, SPSS, or R language.  
  • Exploratory data analysis: Usage of visuals to deduce patterns and trends.  
  • Hypothesis testing: Apply statistical tests (e.g., Student's t-test or ANOVA) to assess significance and test hypotheses.
  • Interpretation: Analyze results considering significance and practical implications.  
  • Validation: Data validation through replication or literature review.  
  • Reporting: Present findings by means of tables, figures, or graphs.   
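The hypothesis-testing step can be illustrated with a hand-computed Welch's two-sample t statistic. The scores below are hypothetical; in practice a library such as SciPy (`scipy.stats.ttest_ind`) would also supply the p-value:

```python
# A minimal sketch: Welch's t statistic for two independent samples,
# computed by hand on hypothetical test scores.
from statistics import mean, variance
from math import sqrt

group_a = [72, 75, 78, 80, 74, 77]  # hypothetical scores, condition A
group_b = [65, 68, 70, 66, 69, 71]  # hypothetical scores, condition B

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))  # standard error
    return (mean(a) - mean(b)) / se

t = welch_t(group_a, group_b)
print(f"t = {t:.2f}")  # compare against a t distribution for significance
```

A large absolute t value suggests the group difference is unlikely to be due to chance; the exact p-value depends on the degrees of freedom.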


Benefits and limitations of qualitative vs quantitative research  

There are significant differences between qualitative and quantitative research; we have listed the benefits and limitations of both methods below:  

Benefits of qualitative research  

  • Rich insights: As qualitative research often produces information-rich data, it aids in gaining in-depth insights into complex phenomena, allowing researchers to explore nuances and meanings of the topic of study.  
  • Flexibility: One of the most important benefits of qualitative research is flexibility in acquiring and analyzing data that allows researchers to adapt to the context and explore more unconventional aspects.  
  • Contextual understanding: With descriptive and comprehensive data, understanding the context in which behaviors or phenomena occur becomes accessible.   
  • Capturing different perspectives: Qualitative research allows for capturing different participant perspectives with open-ended question formats that further enrich data.   
  • Hypothesis/theory generation: Qualitative research is often the first step in generating theory/hypothesis, which leads to future investigation thereby contributing to the field of research.

Limitations of qualitative research  

  • Subjectivity: It is difficult to achieve objective interpretation in qualitative research, as findings may be influenced by the researcher's background and expertise. The risk of researcher bias in interpretation affects the reliability and validity of the results.
  • Limited generalizability: Due to the presence of small, non-representative samples, the qualitative data cannot be used to make generalizations to a broader population.  
  • Cost and time intensive: Qualitative data collection can be time-consuming and resource-intensive, therefore, it requires strategic planning and commitment.   
  • Complex analysis: Analyzing qualitative data needs specialized skills and techniques, hence, it’s challenging for researchers without sufficient training or experience.   
  • Potential misinterpretation: There is a risk of sampling bias and misinterpretation in data collection and analysis if researchers lack cultural or contextual understanding.   

Benefits of quantitative research  

  • Objectivity: A key benefit of quantitative research approach, this objectivity reduces researcher bias and subjectivity, enhancing the reliability and validity of findings.   
  • Generalizability: Because quantitative research uses large, representative samples, its findings can be generalized to broader populations.
  • Statistical analysis: Quantitative research enables rigorous statistical analysis (increasing power of the analysis), aiding hypothesis testing and finding patterns or relationship among variables.   
  • Efficiency: Quantitative data collection and analysis is usually more efficient compared to the qualitative methods, especially when dealing with large datasets.   
  • Clarity and Precision: The findings are usually clear and precise, making it easier to present them as graphs, tables, and figures to convey them to a larger audience.  

Limitations of quantitative research  

  • Lacks depth and details: Due to its objective nature, quantitative research might lack the depth and richness of qualitative approaches, potentially overlooking important contextual factors or nuances.   
  • Limited exploration: By not considering the subjective experiences of participants in depth, there is limited scope to study complex phenomena in detail.
  • Potential oversimplification: Quantitative research may oversimplify complex phenomena by boiling them down to numbers, which might ignore key nuances.   
  • Inflexibility: Quantitative research deals with predetermined variables and measures, which limits the ability of researchers to explore unexpected findings or adjust the research design as new findings become available.
  • Ethical consideration: Quantitative research may raise ethical concerns especially regarding privacy, informed consent, and the potential for harm, when dealing with sensitive topics or vulnerable populations.   

Frequently asked questions  

  • What is the difference between qualitative and quantitative research? 

Quantitative methods use numerical data and statistical analysis for objective measurement and hypothesis testing, emphasizing generalizability. Qualitative methods gather non-numerical data to explore subjective experiences and contexts, providing rich, nuanced insights.  

  • What are the types of qualitative research? 

Qualitative research methods include interviews, observations, focus groups, and case studies. They provide rich insights into participants’ perspectives and behaviors within their contexts, enabling exploration of complex phenomena.  

  • What are the types of quantitative research? 

Quantitative research methods include surveys, experiments, observations, correlational studies, and longitudinal research. They gather numerical data for statistical analysis, aiming for objectivity and generalizability.  

  • Can you give me examples for qualitative and quantitative research? 

Qualitative Research Example: 

Research Question: What are the experiences of parents with autistic children in accessing support services?  

Method: Conducting in-depth interviews with parents to explore their perspectives, challenges, and needs.  

Quantitative Research Example: 

Research Question: What is the correlation between sleep duration and academic performance in college students?  

Method: Distributing surveys to a large sample of college students to collect data on their sleep habits and academic performance, then analyzing the data statistically to determine any correlations.  
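The correlation analysis in this example can be sketched as Pearson's r computed by hand. The sleep-hours and GPA values below are hypothetical, invented only to show the calculation:

```python
# A minimal sketch: Pearson's correlation coefficient for hypothetical
# sleep-duration and GPA data, as in the example above.
from math import sqrt

sleep_hours = [5, 6, 7, 8, 6, 7, 8, 9]             # hypothetical responses
gpa         = [2.8, 3.0, 3.4, 3.6, 3.1, 3.3, 3.7, 3.8]

def pearson_r(x, y):
    """Pearson's r: covariance scaled by the product of the spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(sleep_hours, gpa)
print(f"r = {r:.2f}")  # values near +1 suggest a strong positive association
```

Note that even a strong correlation would not establish causation; that requires an experimental design.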



Qualitative vs Quantitative Research: Differences and Examples


Understanding the differences between qualitative vs quantitative research is essential when conducting a research project, as both methods underpin the two key approaches in conducting a study.

In recent blogs, we elaborately discussed quantitative and qualitative research methods, but what is the difference between the two? Which one is best? Let's find out.

Qualitative Research In a nutshell

Qualitative research is a research methodology in which "quality" or opinion-based research is conducted to derive research conclusions. This type of research is often conversational in nature rather than being quantifiable through empirical measurement.

Qualitative research: Methods & Characteristics

1. Conversation: A conversation takes place between the researcher and the respondent. This can be in the form of focus groups or in-depth interviews conducted by telephone, video, or face-to-face.

However, with the rise of online platforms, many of the steps in qualitative research now involve creating and maintaining online community portals, which make a qualitative study more recordable and measurable.


2. Conclusions : Research conclusions are subjective in nature when conducting qualitative research. The researcher derives conclusions based on an in-depth analysis of respondents’ attitudes, the reasons behind their responses, and their psychological motivations.

Quantitative Research in a Nutshell

Quantitative research is a research methodology that uses questions and questionnaires to gather quantifiable data and performs statistical analysis to derive meaningful research conclusions.

Quantitative research: Methods & Characteristics

1. Questions : Quantitative research uses surveys and polls to gather information on a given subject. A variety of question types are used, depending on the nature of the research study.

For example, if you want to conduct quantitative customer satisfaction research, the Net Promoter Score (NPS) is one of the most widely used survey questions for this purpose.
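NPS is computed from a single 0–10 "how likely are you to recommend us?" question using the standard score bands; a minimal sketch in Python (the sample responses are hypothetical):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 1 passive (8), 5 detractors out of 10 respondents
print(nps([10, 10, 9, 9, 8, 6, 5, 4, 3, 0]))  # -> -10
```

A negative score simply means detractors outnumber promoters in the sample; the scale runs from -100 to +100.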

2. Distribution : Quantitative research uses email surveys as the primary mode of gathering responses. In addition, offline mobile data capture apps make it possible to collect responses in relatively remote locations. For quantitative research in the social sciences and psychology, social media surveys are also used to gather data.

3. Statistical Analysis : Quantitative research uses a wide range of data analysis techniques such as Conjoint Analysis , Cross Tabulation and Trend Analysis .
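Cross tabulation, for instance, is just a frequency count over pairs of categorical answers; a minimal sketch using the Python standard library (the survey fields and values are hypothetical):

```python
from collections import Counter

# Each response pairs an age group with the answer to one survey question
responses = [
    ("18-29", "Yes"), ("18-29", "Yes"), ("18-29", "No"),
    ("30-44", "Yes"), ("30-44", "No"),  ("30-44", "No"),
]

# (age group, answer) -> count; each cell of the cross tab
crosstab = Counter(responses)

print(crosstab[("18-29", "Yes")])  # -> 2
print(crosstab[("30-44", "No")])   # -> 2
```

In practice a survey platform or a library such as pandas produces this table automatically, but the underlying operation is the same tallying.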

Qualitative vs Quantitative Research

Now let’s compare the qualitative and quantitative research methods across several aspects so that you can choose the right one for your next investigation:

1. Objective and flow of research

Quantitative research is used in data-oriented research where the objective of the research design is to derive “measurable empirical evidence” based on fixed, pre-determined questions. The flow of research is therefore decided before the research is conducted.

Whereas qualitative research is used where the objective of the research is to keep probing respondents based on their previous answers, at the complete discretion of the interviewer. The flow of research is not predetermined, and the researcher/interviewer has the liberty to frame and ask new questions.

2. Respondent sample size

The respondent sample for a particular panel is much larger in quantitative research, so that enough verifiable information is gathered to reach a conclusion without opinion bias. In large-scale quantitative research, the sample size can run into the thousands.

Whereas qualitative research inherently uses a smaller sample size, because a large sample makes it difficult for the researcher to probe respondents in depth. For instance, a typical political focus group study evaluating election candidates involves no more than 5–10 panelists.
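The link between sample size and the reliability of quantitative conclusions can be made concrete with the standard margin-of-error formula for an estimated proportion (a sketch, not a full power analysis; z = 1.96 corresponds to a 95% confidence level, and p = 0.5 is the worst case):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly a +/-3.1 percentage-point margin;
# a 10-person group gives roughly +/-31 points.
print(round(100 * margin_of_error(1000), 1))  # -> 3.1
print(round(100 * margin_of_error(10), 1))    # -> 31.0
```

This is why a 10-person focus group can generate ideas but cannot, on its own, support a statistically defensible estimate for a population.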

3. Information gathering

Quantitative research uses information-gathering methods that can be quantified and processed with statistical analysis techniques. Simply put, quantitative research is heavily dependent on numbers, data, and statistics.


Whereas qualitative research uses conversational methods to gather relevant information on a given subject.

4. Post-research response analysis and conclusions

Quantitative research uses a variety of statistical analysis methods to derive quantifiable research conclusions. These are based on mathematical processes applied to the gathered data.

Whereas qualitative research depends on the interviewer to derive research conclusions from the qualitative conversations held with the respondents. Such conclusions are effectively subjective in nature. This is why qualitative research recordings are often reviewed by senior researchers before the final research conclusion is drawn.


We hope that this information helps you choose your next research method and achieve your goals.

If you want to carry out any qualitative or quantitative research, ask about the tools QuestionPro has available to help you collect the data you need. We have features for all types of research!


Quantitative and Qualitative Research


Mixed Methods Research

As its name suggests, mixed methods research involves using elements of both quantitative and qualitative research methods. Using mixed methods, a researcher can more fully explore a research question and provide greater insight. 

What is Empirical Research?

Empirical research is based on observed  and measured phenomena. Knowledge is extracted from real lived experience rather than from theory or belief. 

IMRaD: Scholarly journals sometimes use the "IMRaD" format to communicate empirical research findings.

Introduction:  explains why this research is important or necessary. Provides context ("literature review").

Methodology:  explains how the research was conducted ("research design").

Results: presents what was learned through the study ("findings").

Discussion:  explains or comments upon the findings including why the study is important and connecting to other research ("conclusion").

What is Quantitative Research?

Quantitative research gathers data that can be measured numerically and analyzed mathematically. Quantitative research attempts to answer research questions through the quantification of data. 

Indicators of quantitative research include:

  • contains statistical analysis
  • large sample size
  • objective – little room to argue with the numbers
  • types of research: descriptive studies, exploratory studies, experimental studies, explanatory studies, predictive studies, clinical trials

What is Qualitative Research?

Qualitative research is based upon data that is gathered by observation. Qualitative research articles will attempt to answer questions that cannot be measured by numbers but rather by perceived meaning. Qualitative research will likely include interviews, case studies, ethnography, or focus groups. 

Indicators of qualitative research include:

  • interviews or focus groups
  • small sample size
  • subjective – researchers are often interpreting meaning
  • methods used: phenomenology, ethnography, grounded theory, historical method, case study

Video: Empirical Studies: Qualitative vs. Quantitative

This video from USU Libraries walks you through the differences between quantitative and qualitative research methods (5:51). Creative Commons Attribution license (reuse allowed): https://youtu.be/rzcfma1l6ce

  • Last Updated: Mar 25, 2024 12:23 PM
  • URL: https://libguides.hofstra.edu/quantitative-and-qualitative-research


Quantitative and Qualitative Research: An Overview of Approaches

  • First Online: 03 January 2022


In Chap. 1 , the nature and scope of research were outlined and included an overview of quantitative and qualitative research and a brief description of research designs. In this chapter, both quantitative and qualitative research will be described in a little more detail with respect to essential features and characteristics. Furthermore, the research designs used in each of these approaches will be reviewed. Finally, this chapter will conclude with examples of published quantitative and qualitative research in medical imaging and radiation therapy.




Seeram, E. (2021). Quantitative and Qualitative Research: An Overview of Approaches. In: Seeram, E., Davidson, R., England, A., McEntee, M.F. (eds) Research for Medical Imaging and Radiation Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-79956-4_2


Qualitative vs. Quantitative Research: Comparing the Methods and Strategies for Education Research


No matter the field of study, all research can be divided into two distinct methodologies: qualitative and quantitative research. Both methodologies offer education researchers important insights.

Education research assesses problems in policy, practices, and curriculum design, and it helps administrators identify solutions. Researchers can conduct small-scale studies to learn more about topics related to instruction or larger-scale ones to gain insight into school systems and investigate how to improve student outcomes.

Education research often relies on the quantitative methodology. Quantitative research in education provides numerical data that can prove or disprove a theory, and administrators can easily share the number-based results with other schools and districts. And while the research may speak to a relatively small sample size, educators and researchers can scale the results from quantifiable data to predict outcomes in larger student populations and groups.

Qualitative vs. Quantitative Research in Education: Definitions

Although there are many overlaps in the objectives of qualitative and quantitative research in education, researchers must understand the fundamental functions of each methodology in order to design and carry out an impactful research study. In addition, they must understand the differences that set qualitative and quantitative research apart in order to determine which methodology is better suited to specific education research topics.

Generate Hypotheses with Qualitative Research

Qualitative research focuses on thoughts, concepts, or experiences. The data collected often comes in narrative form and concentrates on unearthing insights that can lead to testable hypotheses. Educators use qualitative research in a study’s exploratory stages to uncover patterns or new angles.

Form Strong Conclusions with Quantitative Research

Quantitative research in education and other fields of inquiry is expressed in numbers and measurements. This type of research aims to find data to confirm or test a hypothesis.

Differences in Data Collection Methods

Keeping in mind the main distinction in qualitative vs. quantitative research—gathering descriptive information as opposed to numerical data—it stands to reason that there are different ways to acquire data for each research methodology. While certain approaches do overlap, the way researchers apply these collection techniques depends on their goal.

Interviews, for example, are common in both modes of research. An interview with students that features open-ended questions intended to reveal ideas and beliefs around attendance will provide qualitative data. This data may reveal a problem among students, such as a lack of access to transportation, that schools can help address.

An interview can also include questions posed to receive numerical answers. A case in point: how many days a week do students have trouble getting to school, and of those days, how often is a transportation-related issue the cause? In this example, qualitative and quantitative methodologies can lead to similar conclusions, but the research will differ in intent, design, and form.

Taking a look at behavioral observation, another common method used for both qualitative and quantitative research, qualitative data may consider a variety of factors, such as facial expressions, verbal responses, and body language.

On the other hand, a quantitative approach will create a coding scheme for certain predetermined behaviors and observe these in a quantifiable manner.
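Such a coding scheme can be as simple as tallying predetermined behavior codes per observation session; a minimal sketch in Python (the behavior codes and the event sequence are hypothetical):

```python
from collections import Counter

# Predetermined behavior codes for a classroom observation session
CODES = {"H": "hand raised", "Q": "question asked", "O": "off-task"}

# Sequence of coded events recorded by the observer during the session
observed = ["H", "H", "Q", "O", "H", "Q"]

tallies = Counter(observed)
for code, count in tallies.most_common():
    print(f"{CODES[code]}: {count}")
# hand raised: 3
# question asked: 2
# off-task: 1
```

The point of pre-defining the codes is that the resulting counts are comparable across sessions and observers, which is what makes the observation quantifiable.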

Qualitative Research Methods

  • Case Studies : Researchers conduct in-depth investigations into an individual, group, event, or community, typically gathering data through observation and interviews.
  • Focus Groups : A moderator (or researcher) guides conversation around a specific topic among a group of participants.
  • Ethnography : Researchers interact with and observe a specific societal or ethnic group in their real-life environment.
  • Interviews : Researchers ask participants questions to learn about their perspectives on a particular subject.

Quantitative Research Methods

  • Questionnaires and Surveys : Participants receive a list of questions, either closed-ended or multiple choice, which are directed around a particular topic.
  • Experiments : Researchers control and test variables to demonstrate cause-and-effect relationships.
  • Observations : Researchers look at quantifiable patterns and behavior.
  • Structured Interviews : Using a predetermined structure, researchers ask participants a fixed set of questions to acquire numerical data.

Choosing a Research Strategy

When choosing which research strategy to employ for a project or study, a number of considerations apply. One key piece of information to help determine whether to use a qualitative vs. quantitative research method is which phase of development the study is in.

For example, if a project is in its early stages and requires more research to find a testable hypothesis, qualitative research methods might prove most helpful. On the other hand, if the research team has already established a hypothesis or theory, quantitative research methods will provide data that can validate the theory or refine it for further testing.

It’s also important to understand a project’s research goals. For instance, do researchers aim to produce findings that reveal how to best encourage student engagement in math? Or is the goal to determine how many students are passing geometry? These two scenarios require distinct sets of data, which will determine the best methodology to employ.

In some situations, studies will benefit from a mixed-methods approach. Using the goals in the above example, one set of data could find the percentage of students passing geometry, which would be quantitative. The research team could also lead a focus group with the students achieving success to discuss which techniques and teaching practices they find most helpful, which would produce qualitative data.
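The quantitative half of that mixed design reduces to a one-line computation once grades are tabulated; a sketch (the student names and the pass threshold of 65 are hypothetical):

```python
# Hypothetical geometry grades; passing threshold assumed to be 65
grades = {"Ana": 78, "Ben": 54, "Caro": 91, "Dev": 66, "Eli": 60}

passing = [name for name, g in grades.items() if g >= 65]
pct_passing = 100 * len(passing) / len(grades)
print(f"{pct_passing:.0f}% passing")  # -> 60% passing
```

The qualitative follow-up (the focus group with the passing students) would then explore *why* those students succeeded, which no amount of grade data can answer on its own.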

Learn How to Put Education Research into Action

Those with an interest in learning how to harness research to develop innovative ideas to improve education systems may want to consider pursuing a doctoral degree. American University’s School of Education online offers a Doctor of Education (EdD) in Education Policy and Leadership that prepares future educators, school administrators, and other education professionals to become leaders who effect positive changes in schools. Courses such as Applied Research Methods I: Enacting Critical Research provide students with the techniques and research skills needed to begin conducting research exploring new ways to enhance education. Learn more about American University’s EdD in Education Policy and Leadership.



Difference Between Qualitative and Quantitative Research


In qualitative research, only a few non-representative cases are used as a sample to develop an initial understanding. In quantitative research, by contrast, a sufficient number of representative cases are taken into consideration to recommend a final course of action.

There is a never-ending debate about which type of research is better, so in this article we shed light on the difference between qualitative and quantitative research.

Comparison Chart

Basis for Comparison | Qualitative Research | Quantitative Research
Meaning | A method of inquiry that develops an understanding of human and social sciences, to find out how people think and feel. | A research method used to generate numerical data and hard facts, by employing statistical, logical, and mathematical techniques.
Nature | Holistic | Particularistic
Approach | Subjective | Objective
Research type | Exploratory | Conclusive
Reasoning | Inductive | Deductive
Sampling | Purposive | Random
Data | Verbal | Measurable
Inquiry | Process-oriented | Result-oriented
Hypothesis | Generated | Tested
Elements of analysis | Words, pictures, and objects | Numerical data
Objective | To explore and discover ideas used in ongoing processes. | To examine cause-and-effect relationships between variables.
Methods | Non-structured techniques like in-depth interviews, group discussions, etc. | Structured techniques such as surveys, questionnaires, and observations.
Result | Develops initial understanding | Recommends final course of action

Definition of Qualitative Research

Qualitative research is research that provides insights into and understanding of the problem setting. It is an unstructured, exploratory research method that studies highly complex phenomena which are impossible to elucidate with quantitative research. It also generates ideas or hypotheses for later quantitative research.

Qualitative research is used to gain an in-depth understanding of human behaviour, experience, attitudes, intentions, and motivations, on the basis of observation and interpretation, to find out how people think and feel. It is a form of research in which the researcher gives more weight to the views of the participants. Case study, grounded theory, ethnography, historical research, and phenomenology are types of qualitative research.

Definition of Quantitative Research

Quantitative research is a form of research that relies on the methods of the natural sciences, producing numerical data and hard facts. It aims to establish cause-and-effect relationships between two variables by using mathematical, computational, and statistical methods. It is also known as empirical research, as its results can be accurately and precisely measured.

The data collected by the researcher can be divided into categories, ranked, or measured in units of measurement. Graphs and tables of raw data can be constructed with the help of quantitative research, making it easier for the researcher to analyse the results.

Key Differences Between Qualitative And Quantitative Research

The differences between qualitative and quantitative research can be drawn clearly on the following grounds:

  • Qualitative research is a method of inquiry that develops an understanding of human and social sciences, to find out how people think and feel. A scientific, empirical research method used to generate numerical data by employing statistical, logical, and mathematical techniques is called quantitative research.
  • Qualitative research is holistic in nature, while quantitative research is particularistic.
  • Qualitative research follows a subjective approach, as the researcher is intimately involved, whereas the approach of quantitative research is objective, as the researcher is uninvolved and attempts to make precise observations and analyses of the topic to answer the inquiry.
  • Qualitative research is exploratory, as opposed to quantitative research, which is conclusive.
  • The reasoning used to synthesise data in qualitative research is inductive, whereas in quantitative research the reasoning is deductive.
  • Qualitative research is based on purposive sampling, where a small sample is selected to gain a thorough understanding of the target concept. Quantitative research, on the other hand, relies on random sampling, wherein a large representative sample is chosen in order to extrapolate the results to the whole population.
  • Verbal data are collected in qualitative research. Conversely, in quantitative research, measurable data are gathered.
  • Inquiry in qualitative research is process-oriented, whereas in quantitative research it is result-oriented.
  • The elements used in the analysis of qualitative research are words, pictures, and objects, while those of quantitative research are numerical data.
  • Qualitative research is conducted with the aim of exploring and discovering ideas used in ongoing processes, whereas the purpose of quantitative research is to examine cause-and-effect relationships between variables.
  • Lastly, the methods used in qualitative research are in-depth interviews, focus groups, etc., whereas the methods of conducting quantitative research are structured interviews and observations.
  • Qualitative research develops an initial understanding, whereas quantitative research recommends a final course of action.


Ideal research considers both methods together. However, some areas require only one type of research, depending mainly on the information the researcher needs. While qualitative research tends to be interpretative, quantitative research is concrete.


Quantitative and Qualitative Research


What is qualitative research?

Qualitative research is a process of naturalistic inquiry that seeks an in-depth understanding of social phenomena within their natural setting. It focuses on the "why" rather than the "what" of social phenomena and relies on the direct experiences of human beings as meaning-making agents in their everyday lives. Rather than relying on logical and statistical procedures, qualitative researchers use multiple systems of inquiry for the study of human phenomena, including biography, case study, historical analysis, discourse analysis, ethnography, grounded theory, and phenomenology.

University of Utah College of Nursing (n.d.). What is qualitative research? [Guide]. Retrieved from https://nursing.utah.edu/research/qualitative-research/what-is-qualitative-research.php#what


  • Last Updated: Jun 12, 2024 8:56 AM
  • URL: https://libguides.uta.edu/quantitative_and_qualitative_research


Can Fam Physician, 55(8), August 2009

Quantitative and qualitative research

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein 1

Some clinicians still believe that qualitative research is a “soft” science and of lesser value to clinical decision making, but this position is no longer tenable. 2 - 4 A quick search using the key word qualitative on the Canadian Family Physician website generated more than 100 qualitative research articles published in the past 3 years alone.

This paper provides an overview of the history of science to help readers appreciate the basic epistemological commonalities and differences between qualitative and quantitative approaches to research.

Age of Enlightenment

Copernicus (1473-1543), Galileo (1564-1642), Descartes (1596-1650), and Newton (1643-1727) were instrumental in carving the path to the Enlightenment (1700-1789)—an intellectual movement credited with introducing systematic inquiry and the scientific method. Auguste Comte (1798-1857), regarded as the founder of modern social science and credited with advancing a philosophic theory of positivism (ie, that factual knowledge can only be attained through observable experience), emphasized that the search for objective truth and knowledge must follow a nomothetic (ie, relating to the discovery of universal laws) and empirical (ie, based on experiment and observation) approach. Scientists of the Enlightenment era asserted that we must be free of the uncertainties of time, place, history, and culture in order to discover how the world works. This is referred to as the received view of science. 5

Received view

Essentially, the received view posits that the world is made up of absolute truths existing independently of human consciousness. Knowledge is available for objective discovery within a causal and factual form. A reductionist approach to problem solving is used; theories are formulated and tested experimentally to verify or falsify different hypotheses; and numerical tests based on probabilistic theory are used to establish the levels of relationships between measurable variables.

Conversely, in Critique of Pure Reason (Immanuel Kant’s 1781 thesis, which followed the work of Plato), Kant asserts that human reason also plays a key role in determining what constitutes knowledge. Unlike Comte, who favoured empirical experience as the most legitimate source of knowledge and who argued that pure knowledge begins and ends with sense experience free of subjective interpretation, Kant states that we not only experience the world as it presents itself to us, but we also interpret it. 4

Interpretivist view

Karl Marx (1818-1883), Friedrich Nietzsche (1844-1900), Georg Simmel (1858-1918), Max Weber (1864-1920), Max Scheler (1874-1928), and Karl Mannheim (1893-1947), among others, produced sharp criticisms against the prevailing conception of science for understanding social interactions. Using Georg Wilhelm Friedrich Hegel’s (1770-1831) idea that subjectivity is an inherent part of cognition, these social scientists rejected the claims that science, as a practice of discovery of a world independent of our senses, can in fact represent the absolute reality of social phenomena. The interpretivist view, 6 therefore, posits that knowledge is socially constructed and ephemeral. 7 In other words, it is influenced by history, culture, power differences in society, and politics. 8 In his cogent thesis The Structure of Scientific Revolutions , Thomas Kuhn argues that the interpretive nature is deeply and undeniably embedded in science. 9

What is common among both experienced and budding researchers alike, whether from the positivist tradition or the interpretivist one, is a realization that an increasingly sophisticated representation of any particular phenomenon requires a form of systematic investigation. Those who employ qualitative methods usually seek in-depth perspectives on how society is thought to operate and the related historical, cultural, social, and political influences that affect how decisions are made. Those who use quantitative methods search for laws and principles that can help to predict how the world works. To understand the world better, some researchers use laboratories and clinics while others use cultural and social spaces. Yet all researchers regard their endeavours as a means to improve quality of life and well-being.

Whether researchers use qualitative or quantitative methods, they are building knowledge, which, in the end, is applied to our understanding of the world, allowing us to better care for our patients.

Hypothesis is a quarterly series in Canadian Family Physician, coordinated by the Section of Researchers of the College of Family Physicians of Canada. The goal is to explore clinically relevant research concepts for all CFP readers. Submissions are invited from researchers and nonresearchers. Ideas or submissions can be submitted online at http://mc.manuscriptcentral.com/cfp or through the CFP website, www.cfp.ca, under “Authors.”

Competing interests

None declared

Frequently asked questions

What’s the difference between quantitative and qualitative methods?

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.
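To make the quantitative side of this contrast concrete, here is a purely illustrative Python sketch (the groups and scores are hypothetical, not from any study): numerical data from two groups is reduced to a single summary statistic, here Welch's t statistic, the kind of figure a hypothesis test is built on.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples of numerical data.

    A larger absolute value indicates a bigger difference between the group
    means relative to the variability within each group.
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (ma - mb) / (va / na + vb / nb) ** 0.5

# Hypothetical quantitative data: test scores under two teaching methods.
group_a = [72, 85, 90, 68, 77, 81]
group_b = [65, 70, 74, 62, 69, 71]
t = welch_t(group_a, group_b)
```

In a real analysis the statistic would be compared against a t distribution to obtain a p value; qualitative data, being words and meanings, has no analogous single summary number and is instead interpreted in depth.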

Frequently asked questions: Knowledge Base

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments, surveys, statistical tests).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results , discussion and conclusion .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research , you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
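The simplest version of the idea above, a simple random sample in which every individual has an equal chance of selection, can be sketched in a few lines of Python (the population of 2,000 students here is invented for illustration):

```python
import random

def draw_sample(population, n, seed=None):
    """Draw a simple random sample of n individuals without replacement."""
    rng = random.Random(seed)  # a fixed seed makes the sample reproducible
    return rng.sample(population, n)

# Hypothetical sampling frame: every student at the university.
students = [f"student_{i}" for i in range(1, 2001)]
survey_sample = draw_sample(students, 100, seed=42)  # survey these 100
```

Simple random sampling is only one design; stratified or cluster sampling may represent the population better when it contains known subgroups.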

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a dissertation , thesis, research paper , or proposal .

The literature review usually comes near the beginning of your dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

Harvard referencing uses an author–date system. Sources are cited by the author’s last name and the publication year in brackets. Each Harvard in-text citation corresponds to an entry in the alphabetised reference list at the end of the paper.

Vancouver referencing uses a numerical system. Sources are cited by a number in parentheses or superscript. Each number corresponds to a full reference at the end of the paper.

Harvard style:
  • In-text citation: Each referencing style has different rules (Pears and Shields, 2019).
  • Reference list: Pears, R. and Shields, G. (2019). Cite them right: The essential referencing guide. 11th edn. London: MacMillan.

Vancouver style:
  • In-text citation: Each referencing style has different rules (1).
  • Reference list: 1. Pears R, Shields G. Cite them right: The essential referencing guide. 11th ed. London: MacMillan; 2019.

A Harvard in-text citation should appear in brackets every time you quote, paraphrase, or refer to information from a source.

The citation can appear immediately after the quotation or paraphrase, or at the end of the sentence. If you’re quoting, place the citation outside of the quotation marks but before any other punctuation like a comma or full stop.

In Harvard referencing, up to three author names are included in an in-text citation or reference list entry. When there are four or more authors, include only the first, followed by ‘et al.’

  • 1 author: in-text citation (Smith, 2014); reference list entry Smith, T. (2014) …
  • 2 authors: in-text citation (Smith and Jones, 2014); reference list entry Smith, T. and Jones, F. (2014) …
  • 3 authors: in-text citation (Smith, Jones and Davies, 2014); reference list entry Smith, T., Jones, F. and Davies, S. (2014) …
  • 4+ authors: in-text citation (Smith et al., 2014); reference list entry Smith, T. et al. (2014) …
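The "up to three names, then et al." rule is mechanical enough to express in code. This small Python function (hypothetical, not part of any referencing tool) reproduces the in-text citation pattern:

```python
def harvard_in_text(surnames, year):
    """Format a Harvard in-text citation from a list of author surnames."""
    if len(surnames) >= 4:
        # Four or more authors: first surname followed by 'et al.'
        names = f"{surnames[0]} et al."
    elif len(surnames) == 1:
        names = surnames[0]
    else:
        # Two or three authors: join with commas, last one with 'and'.
        names = ", ".join(surnames[:-1]) + " and " + surnames[-1]
    return f"({names}, {year})"

print(harvard_in_text(["Smith", "Jones", "Davies", "Brown"], 2014))
# prints "(Smith et al., 2014)"
```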

A bibliography should always contain every source you cited in your text. Sometimes a bibliography also contains other sources that you used in your research, but did not cite in the text.

MHRA doesn’t specify a rule about this, so check with your supervisor to find out exactly what should be included in your bibliography.

Footnote numbers should appear in superscript (e.g. 11). You can use the ‘Insert footnote’ button in Word to do this automatically; it’s in the ‘References’ tab at the top.

Footnotes always appear after the quote or paraphrase they relate to. MHRA generally recommends placing footnote numbers at the end of the sentence, immediately after any closing punctuation, like this. 12

In situations where this might be awkward or misleading, such as a long sentence containing multiple quotations, footnotes can also be placed at the end of a clause mid-sentence, like this; 13 note that they still come after any punctuation.

When a source has two or three authors, name all of them in your MHRA references . When there are four or more, use only the first name, followed by ‘and others’:

  • 1 author: footnote David Smith; bibliography Smith, David
  • 2 authors: footnote David Smith and Hugh Jones; bibliography Smith, David, and Hugh Jones
  • 3 authors: footnote David Smith, Hugh Jones and Emily Wright; bibliography Smith, David, Hugh Jones and Emily Wright
  • 4+ authors: footnote David Smith and others; bibliography Smith, David, and others

Note that in the bibliography, only the author listed first has their name inverted. The names of additional authors and those of translators or editors are written normally.

A citation should appear wherever you use information or ideas from a source, whether by quoting or paraphrasing its content.

In Vancouver style , you have some flexibility about where the citation number appears in the sentence – usually directly after mentioning the author’s name is best, but simply placing it at the end of the sentence is an acceptable alternative, as long as it’s clear what it relates to.

In Vancouver style , when you refer to a source with multiple authors in your text, you should only name the first author followed by ‘et al.’. This applies even when there are only two authors.

In your reference list, include up to six authors. For sources with seven or more authors, list the first six followed by ‘et al.’.

The words ‘dissertation’ and ‘thesis’ both refer to a large written research project undertaken to complete a degree, but they are used differently depending on the country:

  • In the UK, you write a dissertation at the end of a bachelor’s or master’s degree, and you write a thesis to complete a PhD.
  • In the US, it’s the other way around: you may write a thesis at the end of a bachelor’s or master’s degree, and you write a dissertation to complete a PhD.

The main difference is in terms of scale – a dissertation is usually much longer than the other essays you complete during your degree.

Another key difference is that you are given much more independence when working on a dissertation. You choose your own dissertation topic , and you have to conduct the research and write the dissertation yourself (with some assistance from your supervisor).

Dissertation word counts vary widely across different fields, institutions, and levels of education:

  • An undergraduate dissertation is typically 8,000–15,000 words
  • A master’s dissertation is typically 12,000–50,000 words
  • A PhD thesis is typically book-length: 70,000–100,000 words

However, none of these are strict guidelines – your word count may be lower or higher than the numbers stated here. Always check the guidelines provided by your university to determine how long your own dissertation should be.

At the bachelor’s and master’s levels, the dissertation is usually the main focus of your final year. You might work on it (alongside other classes) for the entirety of the final year, or for the last six months. This includes formulating an idea, doing the research, and writing up.

A PhD thesis takes longer, as the thesis is the main focus of the degree. It may be formulated and worked on for the whole four years of the degree program, and the writing process alone can take around 18 months.

References should be included in your text whenever you use words, ideas, or information from a source. A source can be anything from a book or journal article to a website or YouTube video.

If you don’t acknowledge your sources, you can get in trouble for plagiarism .

Your university should tell you which referencing style to follow. If you’re unsure, check with a supervisor. Commonly used styles include:

  • Harvard referencing , the most commonly used style in UK universities.
  • MHRA , used in humanities subjects.
  • APA , used in the social sciences.
  • Vancouver , used in biomedicine.
  • OSCOLA , used in law.

Your university may have its own referencing style guide.

If you are allowed to choose which style to follow, we recommend Harvard referencing, as it is a straightforward and widely used style.

To avoid plagiarism , always include a reference when you use words, ideas or information from a source. This shows that you are not trying to pass the work of others off as your own.

You must also properly quote or paraphrase the source. If you’re not sure whether you’ve done this correctly, you can use the Scribbr Plagiarism Checker to find and correct any mistakes.

In Harvard style , when you quote directly from a source that includes page numbers, your in-text citation must include a page number. For example: (Smith, 2014, p. 33).

You can also include page numbers to point the reader towards a passage that you paraphrased . If you refer to the general ideas or findings of the source as a whole, you don’t need to include a page number.

When you want to use a quote but can’t access the original source, you can cite it indirectly. In the in-text citation, first mention the source you want to refer to, and then the source in which you found it.

It’s advisable to avoid indirect citations wherever possible, because they suggest you don’t have full knowledge of the sources you’re citing. Only use an indirect citation if you can’t reasonably gain access to the original source.

In Harvard style referencing, to distinguish between two sources by the same author that were published in the same year, you add a different letter after the year for each source:

  • (Smith, 2019a)
  • (Smith, 2019b)

Add ‘a’ to the first one you cite, ‘b’ to the second, and so on. Do the same in your bibliography or reference list .

To create a hanging indent for your bibliography or reference list :

  • Highlight all the entries
  • Click on the arrow in the bottom-right corner of the ‘Paragraph’ tab in the top menu.
  • In the pop-up window, under ‘Special’ in the ‘Indentation’ section, use the drop-down menu to select ‘Hanging’.
  • Then close the window with ‘OK’.

Though the terms are sometimes used interchangeably, there is a difference in meaning:

  • A reference list only includes sources cited in the text – every entry corresponds to an in-text citation .
  • A bibliography also includes other sources which were consulted during the research but not cited.

It’s important to assess the reliability of information found online. Look for sources from established publications and institutions with expertise (e.g. peer-reviewed journals and government agencies).

The CRAAP test (currency, relevance, authority, accuracy, purpose) can aid you in assessing sources, as can our list of credible sources . You should generally avoid citing websites like Wikipedia that can be edited by anyone – instead, look for the original source of the information in the “References” section.

You can generally omit page numbers in your in-text citations of online sources which don’t have them. But when you quote or paraphrase a specific passage from a particularly long online source, it’s useful to find an alternate location marker.

For text-based sources, you can use paragraph numbers (e.g. ‘para. 4’) or headings (e.g. ‘under “Methodology”’). With video or audio sources, use a timestamp (e.g. ‘10:15’).

In the acknowledgements of your thesis or dissertation, you should first thank those who helped you academically or professionally, such as your supervisor, funders, and other academics.

Then you can include personal thanks to friends, family members, or anyone else who supported you during the process.

Yes, it’s important to thank your supervisor(s) in the acknowledgements section of your thesis or dissertation .

Even if you feel your supervisor did not contribute greatly to the final product, you still should acknowledge them, if only for a very brief thank you. If you do not include your supervisor, it may be seen as a snub.

The acknowledgements are generally included at the very beginning of your thesis or dissertation, directly after the title page and before the abstract .

In a thesis or dissertation, the acknowledgements should usually be no longer than one page. There is no minimum length.

You may acknowledge God in your thesis or dissertation acknowledgements , but be sure to follow academic convention by also thanking the relevant members of academia, as well as family, colleagues, and friends who helped you.

All level 1 and 2 headings should be included in your table of contents . That means the titles of your chapters and the main sections within them.

The contents should also include all appendices and the lists of tables and figures, if applicable, as well as your reference list .

Do not include the acknowledgements or abstract   in the table of contents.

To automatically insert a table of contents in Microsoft Word, follow these steps:

  • Apply heading styles throughout the document.
  • In the references section in the ribbon, locate the Table of Contents group.
  • Click the arrow next to the Table of Contents icon and select Custom Table of Contents.
  • Select which levels of headings you would like to include in the table of contents.

Make sure to update your table of contents if you move text or change headings. To update, simply right click and select Update Field.

The table of contents in a thesis or dissertation always goes between your abstract and your introduction.

An abbreviation is a shortened version of an existing word, such as Dr for Doctor. In contrast, an acronym uses the first letter of each word to create a wholly new word, such as UNESCO (an acronym for the United Nations Educational, Scientific and Cultural Organization).

Your dissertation sometimes contains a list of abbreviations .

As a rule of thumb, write the explanation in full the first time you use an acronym or abbreviation. You can then proceed with the shortened version. However, if the abbreviation is very common (like UK or PC), then you can just use the abbreviated version straight away.

Be sure to add each abbreviation in your list of abbreviations !

If you only used a few abbreviations in your thesis or dissertation, you don’t necessarily need to include a list of abbreviations .

If your abbreviations are numerous, or if you think they won’t be known to your audience, it’s never a bad idea to add one. They can also improve readability, minimising confusion about abbreviations unfamiliar to your reader.

A list of abbreviations is a list of all the abbreviations you used in your thesis or dissertation. It should appear at the beginning of your document, immediately after your table of contents . It should always be in alphabetical order.

Fishbone diagrams have a few different names that are used interchangeably, including herringbone diagram, cause-and-effect diagram, and Ishikawa diagram.

These are all ways to refer to the same thing: a problem-solving approach that uses a fish-shaped diagram to model possible root causes of problems and troubleshoot solutions.

Fishbone diagrams (also called herringbone diagrams, cause-and-effect diagrams, and Ishikawa diagrams) are most popular in fields of quality management. They are also commonly used in nursing and healthcare, or as a brainstorming technique for students.

Some synonyms and near synonyms of among include:

  • In the company of
  • In the middle of
  • Surrounded by

Some synonyms and near synonyms of between  include:

  • In the space separating
  • In the time separating

In spite of is a preposition used to mean ‘regardless of’, ‘notwithstanding’, or ‘even though’.

It’s always used in a subordinate clause to contrast with the information given in the main clause of a sentence (e.g., ‘Amy continued to watch TV, in spite of the time’).

Despite is a preposition used to mean ‘regardless of’, ‘notwithstanding’, or ‘even though’.

It’s used in a subordinate clause to contrast with information given in the main clause of a sentence (e.g., ‘Despite the stress, Joe loves his job’).

‘Log in’ is a phrasal verb meaning ‘connect to an electronic device, system, or app’. The preposition ‘to’ is often used directly after the verb; ‘in’ and ‘to’ should be written as two separate words (e.g., ‘log in to the app to update privacy settings’).

‘Log into’ is sometimes used instead of ‘log in to’, but this is generally considered incorrect (as is ‘login to’).

Some synonyms and near synonyms of ensure include:

  • Make certain

Some synonyms and near synonyms of assure  include:

Rest assured is an expression meaning ‘you can be certain’ (e.g., ‘Rest assured, I will find your cat’). ‘Assured’ is the adjectival form of the verb assure , meaning ‘convince’ or ‘persuade’.

Some synonyms and near synonyms for council include:

There are numerous synonyms and near synonyms for the two meanings of counsel :

  • Direct / Direction
  • Guide / Guidance
  • Instruct / Instruction

AI writing tools can be used to perform a variety of tasks.

Generative AI writing tools (like ChatGPT ) generate text based on human inputs and can be used for interactive learning, to provide feedback, or to generate research questions or outlines.

These tools can also be used to paraphrase or summarise text or to identify grammar and punctuation mistakes. You can also use Scribbr’s free paraphrasing tool, summarising tool, and grammar checker, which are designed specifically for these purposes.

Using AI writing tools (like ChatGPT ) to write your essay is usually considered plagiarism and may result in penalisation, unless it is allowed by your university. Text generated by AI tools is based on existing texts and therefore cannot provide unique insights. Furthermore, these outputs sometimes contain factual inaccuracies or grammar mistakes.

However, AI writing tools can be used effectively as a source of feedback and inspiration for your writing (e.g., to generate research questions ). Other AI tools, like grammar checkers, can help identify and eliminate grammar and punctuation mistakes to enhance your writing.

The Scribbr Knowledge Base is a collection of free resources to help you succeed in academic research, writing, and citation. Every week, we publish helpful step-by-step guides, clear examples, simple templates, engaging videos, and more.

The Knowledge Base is for students at all levels. Whether you’re writing your first essay, working on your bachelor’s or master’s dissertation, or getting to grips with your PhD research, we’ve got you covered.

As well as the Knowledge Base, Scribbr provides many other tools and services to support you in academic writing and citation:

  • Create your citations and manage your reference list with our free Reference Generators in APA and MLA style.
  • Scan your paper for in-text citation errors and inconsistencies with our innovative APA Citation Checker .
  • Avoid accidental plagiarism with our reliable Plagiarism Checker .
  • Polish your writing and get feedback on structure and clarity with our Proofreading & Editing services .

Yes! We’re happy for educators to use our content, and we’ve even adapted some of our articles into ready-made lecture slides .

You are free to display, distribute, and adapt Scribbr materials in your classes or upload them in private learning environments like Blackboard. We only ask that you credit Scribbr for any content you use.

We’re always striving to improve the Knowledge Base. If you have an idea for a topic we should cover, or you notice a mistake in any of our articles, let us know by emailing [email protected] .

The consequences of plagiarism vary depending on the type of plagiarism and the context in which it occurs. For example, submitting a whole paper by someone else will have the most severe consequences, while accidental citation errors are considered less serious.

If you’re a student, then you might fail the course, be suspended or expelled, or be obligated to attend a workshop on plagiarism. It depends on whether it’s your first offence or you’ve done it before.

As an academic or professional, plagiarising seriously damages your reputation. You might also lose your research funding or your job, and you could even face legal consequences for copyright infringement.

Paraphrasing without crediting the original author is a form of plagiarism , because you’re presenting someone else’s ideas as if they were your own.

However, paraphrasing is not plagiarism if you correctly reference the source. This means including an in-text reference and a full reference, formatted according to your required citation style (e.g., Harvard, Vancouver).

As well as referencing your source, make sure that any paraphrased text is completely rewritten in your own words.

Accidental plagiarism is one of the most common examples of plagiarism . Perhaps you forgot to cite a source, or paraphrased something a bit too closely. Maybe you can’t remember where you got an idea from, and aren’t totally sure if it’s original or not.

These all count as plagiarism, even though you didn’t do it on purpose. When in doubt, make sure you’re citing your sources. Also consider running your work through a plagiarism checker prior to submission; these tools work by using advanced database software to scan for matches between your text and existing texts.

Scribbr’s Plagiarism Checker takes less than 10 minutes and can help you turn in your paper with confidence.

The accuracy depends on the plagiarism checker you use. Per our in-depth research , Scribbr is the most accurate plagiarism checker. Many free plagiarism checkers fail to detect all plagiarism or falsely flag text as plagiarism.

Plagiarism checkers work by using advanced database software to scan for matches between your text and existing texts. Their accuracy is determined by two factors: the algorithm (which recognises the plagiarism) and the size of the database (with which your document is compared).

To avoid plagiarism when summarising an article or other source, follow these two rules:

  • Write the summary entirely in your own words by   paraphrasing the author’s ideas.
  • Reference the source with an in-text citation and a full reference so your reader can easily find the original text.

Plagiarism can be detected by your professor or readers if the tone, formatting, or style of your text is different in different parts of your paper, or if they’re familiar with the plagiarised source.

Many universities also use   plagiarism detection software like Turnitin’s, which compares your text to a large database of other sources, flagging any similarities that come up.

It can be easier than you think to commit plagiarism by accident. Consider using a   plagiarism checker prior to submitting your essay to ensure you haven’t missed any citations.

Some examples of plagiarism include:

  • Copying and pasting a Wikipedia article into the body of an assignment
  • Quoting a source without including a citation
  • Not paraphrasing a source properly (e.g. maintaining wording too close to the original)
  • Forgetting to cite the source of an idea

The most surefire way to   avoid plagiarism is to always cite your sources . When in doubt, cite!

Global plagiarism means taking an entire work written by someone else and passing it off as your own. This can include getting someone else to write an essay or assignment for you, or submitting a text you found online as your own work.

Global plagiarism is one of the most serious types of plagiarism because it involves deliberately and directly lying about the authorship of a work. It can have severe consequences for students and professionals alike.

Verbatim plagiarism means copying text from a source and pasting it directly into your own document without giving proper credit.

If the structure and the majority of the words are the same as in the original source, then you are committing verbatim plagiarism. This is the case even if you delete a few words or replace them with synonyms.

If you want to use an author’s exact words, you need to quote the original source by putting the copied text in quotation marks and including an   in-text citation .

Patchwork plagiarism , also called mosaic plagiarism, means copying phrases, passages, or ideas from various existing sources and combining them to create a new text. This includes slightly rephrasing some of the content, while keeping many of the same words and the same structure as the original.

While this type of plagiarism is more insidious than simply copying and pasting directly from a source, plagiarism checkers like Turnitin’s can still easily detect it.

To avoid plagiarism in any form, remember to reference your sources .

Yes, reusing your own work without citation is considered self-plagiarism . This can range from resubmitting an entire assignment to reusing passages or data from something you’ve handed in previously.

Self-plagiarism often has the same consequences as other types of plagiarism . If you want to reuse content you wrote in the past, make sure to check your university’s policy or consult your professor.

If you are reusing content or data you used in a previous assignment, make sure to cite yourself. You can cite yourself the same way you would cite any other source: simply follow the directions for the citation style you are using.

Keep in mind that reusing prior content can be considered self-plagiarism , so make sure you ask your instructor or consult your university’s handbook prior to doing so.

Most institutions have an internal database of previously submitted student assignments. Turnitin can check for self-plagiarism by comparing your paper against this database. If you’ve reused parts of an assignment you already submitted, it will flag any similarities as potential plagiarism.

Online plagiarism checkers don’t have access to your institution’s database, so they can’t detect self-plagiarism of unpublished work. If you’re worried about accidentally self-plagiarising, you can use Scribbr’s Self-Plagiarism Checker to upload your unpublished documents and check them for similarities.

Plagiarism has serious consequences and can be illegal in certain scenarios.

While most of the time plagiarism in an undergraduate setting is not illegal, plagiarism or self-plagiarism in a professional academic setting can lead to legal action, including copyright infringement and fraud. Many scholarly journals do not allow you to submit the same work to more than one journal, and if you do not credit a coauthor, you could be legally defrauding them.

Even if you aren’t breaking the law, plagiarism can seriously impact your academic career. While the exact consequences of plagiarism vary by institution and severity, common consequences include a lower grade, automatically failing a course, academic suspension or probation, and even expulsion.

Self-plagiarism means recycling work that you’ve previously published or submitted as an assignment. It’s considered academic dishonesty to present something as brand new when you’ve already gotten credit and perhaps feedback for it in the past.

If you want to refer to ideas or data from previous work, be sure to cite yourself.

Academic integrity means being honest, ethical, and thorough in your academic work. To maintain academic integrity, you should avoid misleading your readers about any part of your research and refrain from offences like plagiarism and contract cheating, which are examples of academic misconduct.

Academic dishonesty refers to deceitful or misleading behavior in an academic setting. Academic dishonesty can occur intentionally or unintentionally, and it varies in severity.

It can encompass paying for a pre-written essay, cheating on an exam, or committing plagiarism . It can also include helping others cheat, copying a friend’s homework answers, or even pretending to be sick to miss an exam.

Academic dishonesty doesn’t just occur in a classroom setting, but also in research and other academic-adjacent fields.

Consequences of academic dishonesty depend on the severity of the offence and your institution’s policy. They can range from a warning for a first offence to a failing grade in a course to expulsion from your university.

For those in certain fields, such as nursing, engineering, or lab sciences, not learning fundamentals properly can directly impact the health and safety of others. For those working in academia or research, academic dishonesty impacts your professional reputation, leading others to doubt your future work.

Academic dishonesty can be intentional or unintentional, ranging from something as simple as claiming to have read something you didn’t to copying your neighbour’s answers on an exam.

You can commit academic dishonesty with the best of intentions, such as helping a friend cheat on a paper. Severe academic dishonesty can include buying a pre-written essay or the answers to a multiple-choice test, or falsifying a medical emergency to avoid taking a final exam.

Plagiarism means presenting someone else’s work as your own without giving proper credit to the original author. In academic writing, plagiarism involves using words, ideas, or information from a source without including a citation .

Plagiarism can have serious consequences , even when it’s done accidentally. To avoid plagiarism, it’s important to keep track of your sources and cite them correctly.

Common knowledge does not need to be cited. However, you should be extra careful when deciding what counts as common knowledge.

Common knowledge encompasses information that the average educated reader would accept as true without needing the extra validation of a source or citation.

Common knowledge should be widely known, undisputed, and easily verified. When in doubt, always cite your sources.

Most online plagiarism checkers only have access to public databases, and their software doesn’t allow you to compare two documents against each other for plagiarism.

However, in addition to our Plagiarism Checker, Scribbr also offers a Self-Plagiarism Checker. This is an add-on tool that lets you compare your paper with unpublished or private documents. This way you can rest assured that you haven’t unintentionally plagiarised or self-plagiarised.


The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
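
As a toy illustration of steps 4 and 5, once codes have been assigned you can tally how often each code recurs to surface candidate themes. The excerpts and codes below are invented for illustration; real qualitative analysis is interpretive, not just counting.

```python
from collections import Counter

# Hypothetical codes assigned to interview excerpts (step 4)
coded_excerpts = [
    ("I never know what my shifts will be", "uncertainty"),
    ("My manager checks in every day", "support"),
    ("The rota changes at the last minute", "uncertainty"),
    ("Colleagues cover for each other", "support"),
    ("I can't plan childcare ahead", "uncertainty"),
]

# Step 5: codes that recur across excerpts suggest candidate themes
code_counts = Counter(code for _, code in coded_excerpts)
themes = [code for code, count in code_counts.most_common() if count >= 2]
```

Here both ‘uncertainty’ and ‘support’ recur, so both would be explored further as themes.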

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
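
The core idea, ‘how likely is it that this pattern arose by chance?’, can be sketched with a simple permutation test in Python. The data and function are made up for illustration; formal tests such as t-tests follow the same logic with analytic distributions.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a mean difference at least as large as the
    observed one arises when group labels are shuffled at random."""
    rng = random.Random(seed)  # seeded only for reproducibility
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # break any real link between label and score
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Made-up scores for two conditions
control = [4, 5, 6, 5, 4, 5]
treatment = [7, 8, 6, 9, 7, 8]
p = permutation_p_value(control, treatment)  # small p => unlikely by chance
```

A small proportion (a low p-value) means a difference this large rarely occurs by chance alone.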

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

An observational study could be a good fit for your research if your research question is based on things you observe. If you have ethical, logistical, or practical concerns that make an experimental design challenging, consider an observational study. Remember that in an observational study, it is critical that there be no interference or manipulation of the research subjects. Since it’s not an experiment, there are no control or treatment groups either.

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

Exploratory research explores the main aspects of a new or barely researched question.

Explanatory research explains the causes and effects of an already widely researched question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analysing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Triangulation can help:

  • Reduce bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labour-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference between this and a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling , and quota sampling .

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from county to city to neighbourhood) to create a sample that’s less expensive and time-consuming to collect data from.

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data are then collected from as large a percentage as possible of this random subset.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.
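
Given such a list (a sampling frame), the selection itself is straightforward. A minimal Python sketch, using made-up member names:

```python
import random

def simple_random_sample(population, sample_size, seed=42):
    """Select a subset in which every member of the population
    has an equal chance of being included."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    return rng.sample(population, sample_size)

# Hypothetical sampling frame: a numbered list of 500 members
frame = [f"member_{i}" for i in range(500)]
sample = simple_random_sample(frame, 50)
```

`random.sample` draws without replacement, so no member can appear twice in the sample.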

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
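
A rough Python sketch of the single-stage and double-stage variants, using hypothetical school clusters (the names and sizes are invented):

```python
import random

def cluster_sample(clusters, n_clusters, units_per_cluster=None, seed=1):
    """Single-stage: take every unit in each randomly selected cluster.
    Double-stage: additionally sample units within each selected cluster."""
    rng = random.Random(seed)  # seeded only for reproducibility
    chosen = rng.sample(sorted(clusters), n_clusters)  # stage 1: pick clusters
    sample = []
    for name in chosen:
        units = clusters[name]
        if units_per_cluster is None:
            sample.extend(units)  # single-stage: all units in the cluster
        else:
            sample.extend(rng.sample(units, units_per_cluster))  # stage 2
    return sample

# Hypothetical population grouped into 10 school "clusters" of 30 pupils
schools = {f"school_{i}": [f"s{i}_pupil_{j}" for j in range(30)]
           for i in range(10)}
single_stage = cluster_sample(schools, n_clusters=3)
double_stage = cluster_sample(schools, n_clusters=3, units_per_cluster=10)
```

Single-stage yields 3 × 30 = 90 units; double-stage narrows that to 3 × 10 = 30.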

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

In multistage sampling , you can use probability or non-probability sampling methods.

For a probability sample, you have to use probability sampling at every stage. You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method .

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 × 5 = 15 subgroups.
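
That subgroup arithmetic can be checked directly. A small Python sketch using the example’s categories:

```python
from itertools import product

locations = ["urban", "rural", "suburban"]
marital_statuses = ["single", "divorced", "widowed", "married", "partnered"]

# Every participant falls into exactly one (location, marital status) stratum
strata = list(product(locations, marital_statuses))
n_strata = len(strata)  # 3 x 5 = 15 mutually exclusive subgroups
```

Because the strata are a Cartesian product of the characteristics, each participant belongs to one and only one subgroup, as required.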

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.
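
The three steps above can be sketched in Python. The list of people is hypothetical; in practice you would also start from a random offset between 0 and k - 1 so that every member of the population could be selected.

```python
def systematic_sample(population, sample_size):
    """Select every k-th member, where k = population size // sample size."""
    k = len(population) // sample_size       # step 2: the sampling interval
    return population[::k][:sample_size]     # step 3: every k-th member

# Hypothetical list of 200 people, target sample of 20, so k = 10
people = [f"person_{i}" for i in range(200)]
sample = systematic_sample(people, 20)
```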

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

A sampling error is the difference between a population parameter and a sample statistic .
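
A quick Python simulation makes this concrete. Because the population here is simulated, the ‘true’ parameter is known, which is never the case in real research.

```python
import random
import statistics

random.seed(0)
# Simulated population of 10,000 scores (e.g., an IQ-like measure)
population = [random.gauss(100, 15) for _ in range(10_000)]

parameter = statistics.mean(population)   # population parameter (true mean)
sample = random.sample(population, 100)
statistic = statistics.mean(sample)       # sample statistic (estimated mean)

sampling_error = statistic - parameter    # difference between the two
```

Larger samples tend to produce smaller sampling errors, which is one reason sample size matters.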

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction, and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

Attrition bias is a threat to internal validity . In experiments, differential rates of attrition between treatment and control groups can skew results.

This bias can affect the relationship between your independent and dependent variables . It can make variables appear to be correlated when they are not, or vice versa.

The external validity of a study is the extent to which you can generalise your findings to different groups of people, situations, and measures.

The two types of external validity are population validity (whether you can generalise to other groups of people) and ecological validity (whether you can generalise to other situations and settings).

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

Attrition bias can skew your sample so that your final sample differs significantly from your original sample. Your sample is biased because some groups from your population are underrepresented.

With a biased final sample, you may not be able to generalise your findings to the original population that you sampled from, so your external validity is compromised.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , alongside content validity, face validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity, content validity, and criterion validity to achieve construct validity.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalisation : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalisation: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

A dependent variable is what changes as a result of the independent variable manipulation in experiments. It’s what you’re interested in measuring, and it ‘depends’ on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called ‘independent’ because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation)

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables, use a scatterplot or a line graph.
  • If your response variable is categorical, use a bar graph.
  • If your explanatory variable is categorical, use a bar graph.

The term ‘explanatory variable’ is sometimes preferred over ‘independent variable’ because, in real-world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so ‘explanatory variables’ is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

There are four main types of extraneous variables:

  • Demand characteristics : Environmental cues that encourage participants to conform to researchers’ expectations
  • Experimenter effects : Unintentional actions by researchers that influence study outcomes
  • Situational variables : Environmental variables that alter participants’ behaviours
  • Participant variables : Any characteristic or aspect of a participant’s background that could affect study results

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

‘Controlling for a variable’ means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs. That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables, they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalisation .

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control, and randomisation.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomisation , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause, while the dependent variable is the supposed effect. A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both.

You want to find out how blood sugar levels are affected by drinking diet cola and regular cola, so you conduct an experiment .

  • The type of cola – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of cola.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g., the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g., water volume or weight).

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable
  • It influences the dependent variable
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is lower than when it isn’t considered

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature, and are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyse your data quickly and efficiently
  • Your research question depends on strong parity between participants, with environmental conditions held constant

More flexible interview options include semi-structured interviews, unstructured interviews, and focus groups.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualise your initial thoughts and hypotheses
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favourably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias in research can also occur in observations if the participants know they’re being observed. They might alter their behaviour accordingly.

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups . Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with ‘yes’ or ‘no’ (questions that start with ‘why’ or ‘how’ are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions. The Pearson product-moment correlation coefficient (Pearson’s r) is commonly used to assess a linear relationship between two quantitative variables.
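Pearson’s r can be computed directly from its formula; a minimal sketch with invented data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: hours studied and test scores for five participants.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 81]
r = pearson_r(hours, scores)   # positive r: scores rise with hours studied
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate little or none.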

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study, which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study.

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

  • Longitudinal study: Repeated observations of the same sample over time; follows changes in participants
  • Cross-sectional study: Observations of a ‘cross-section’ of the population at a single point in time; provides a snapshot of society at a given point

Cross-sectional studies cannot establish a cause-and-effect relationship or analyse behaviour over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data are available for analysis; other times your research question may only require a cross-sectional study to answer it.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess. It should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but the intervals between response options can’t be assumed to be even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of response options, usually five or seven, to capture their degree of agreement.
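Scoring works by combining item responses; a sketch with invented items and one participant’s answers, including reverse-keyed items:

```python
# Hypothetical five-item Likert scale (1 = strongly disagree ... 5 = strongly
# agree); the item wordings and responses are invented for illustration.
responses = [
    ("I enjoy collaborating with colleagues", 4, False),
    ("I find teamwork draining",              2, True),   # reverse-keyed
    ("Team meetings are a good use of time",  5, False),
    ("I avoid group projects when I can",     1, True),   # reverse-keyed
    ("I do my best work in a team",           4, False),
]

POINTS = 5  # five-point response scale

def item_score(score, reverse):
    # Reverse-keyed items are flipped (6 - score on a 5-point scale) so that
    # a high score always indicates a stronger level of the measured trait.
    return (POINTS + 1 - score) if reverse else score

total = sum(item_score(s, rev) for _, s, rev in responses)
print(total)  # overall scale score, possible range 5-25
```

It is this combined scale score, not any single item, that is sometimes treated as interval data.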

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

A true experiment (aka a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity, it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment
  • Random assignment of participants to ensure the groups are equivalent

Depending on your study topic, there are various other methods of controlling variables .

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

Naturalistic observation is a qualitative research method where you record the behaviours of your research subjects in real-world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as ‘people watching’ with a purpose.

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

You can use several tactics to minimise observer bias .

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure inter-rater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardise your observation procedures to make sure they are structured and clear.

The observer-expectancy effect occurs when researchers influence the results of their own study through interactions with participants.

Researchers’ own beliefs and expectations about the study results may unintentionally influence participants through demand characteristics .

Observer bias occurs when a researcher’s expectations, opinions, or prejudices influence what they perceive or record in a study. It usually affects studies when observers are aware of the research aims or hypotheses. This type of research bias is also called detection bias or ascertainment bias .

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimise or resolve these.

Without data cleaning, you could end up with a Type I or Type II error in your conclusion. These erroneous conclusions can have serious practical consequences, leading to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyse, detect, modify, or remove ‘dirty’ data to make your dataset ‘clean’. Data cleaning is also called data cleansing or data scrubbing.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimise the amount of data cleaning you’ll need to do.

After data collection, you can use data standardisation and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.
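These steps can be sketched in plain Python on a few invented survey records, covering duplicate removal, text standardisation, and missing-value handling:

```python
# Invented 'dirty' survey records for illustration.
raw = [
    {"id": 1, "country": " uk ", "age": "34"},
    {"id": 1, "country": " uk ", "age": "34"},   # duplicate record
    {"id": 2, "country": "UK",   "age": ""},     # missing age
    {"id": 3, "country": "U.K.", "age": "29"},
]

seen, clean = set(), []
for row in raw:
    if row["id"] in seen:
        continue                                 # drop duplicate records
    seen.add(row["id"])
    # Standardise text so 'uk', 'UK', and 'U.K.' become one category.
    country = row["country"].strip().lower().replace(".", "")
    # Flag missing values explicitly rather than leaving empty strings.
    age = int(row["age"]) if row["age"] else None
    clean.append({"id": row["id"], "country": country, "age": age})
```

After cleaning, the three unique records share a uniform country code, and the missing age is explicitly marked rather than silently treated as a value.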

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

To implement random assignment, assign a unique number to every member of your study’s sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
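A minimal sketch of the lottery approach in code, with invented participant IDs:

```python
import random

# Hypothetical sample of eight participant IDs.
participants = [f"P{n:02d}" for n in range(1, 9)]

random.seed(7)              # fixed seed so the sketch is reproducible
shuffled = participants[:]  # copy the list before shuffling
random.shuffle(shuffled)    # the 'lottery': a random ordering

# Split the shuffled list in half to form the two groups.
half = len(shuffled) // 2
control_group = shuffled[:half]
treatment_group = shuffled[half:]
```

Every participant ends up in exactly one group, and group membership is determined by chance alone.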

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Exploratory research, by contrast, is often one of the first stages in the research process, serving as a jumping-off point for future research.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

Blinding is important to reduce bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behaviour in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analysing the data.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent review process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The two main types of social desirability bias are:

  • Self-deceptive enhancement (self-deception): The tendency to see oneself in a favorable light without realizing it.
  • Impression management (other-deception): The tendency to inflate one’s abilities or achievements in order to make a good impression on other people.

Demand characteristics are aspects of experiments that may give away the research objective to participants. Social desirability bias occurs when participants automatically try to respond in ways that make them seem likeable in a study, even if it means misrepresenting how they truly feel.

Participants may use demand characteristics to infer social norms or experimenter expectancies and act in socially desirable ways, so you should try to control for demand characteristics wherever possible.

Response bias refers to conditions or factors that take place during the process of responding to surveys, affecting the responses. One type of response bias is social desirability bias .

When your population is large in size, geographically dispersed, or difficult to contact, it’s necessary to use a sampling method .

This allows you to gather information from a smaller part of the population, i.e. the sample, and make accurate statements by using statistical analysis. A few sampling methods include simple random sampling , convenience sampling , and snowball sampling .

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .
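The contrast can be made concrete with a toy population in which every unit belongs to a stratum (e.g., an age band) and a cluster (e.g., a school); all names and numbers here are illustrative:

```python
import random

# 100 units spread evenly over 3 strata and 5 clusters.
population = [{"id": i, "stratum": i % 3, "cluster": i % 5} for i in range(100)]

def stratified_sample(units, per_stratum):
    """Stratified: draw a random sample of units from every stratum."""
    sample = []
    for s in {u["stratum"] for u in units}:
        members = [u for u in units if u["stratum"] == s]
        sample += random.sample(members, per_stratum)
    return sample

def cluster_sample(units, n_clusters):
    """Cluster: randomly select whole clusters, keep all their units."""
    chosen = set(random.sample(sorted({u["cluster"] for u in units}), n_clusters))
    return [u for u in units if u["cluster"] in chosen]
```

Stratified sampling touches every subgroup but only some of its units; cluster sampling touches only some subgroups but every unit inside them.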

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection , using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
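A minimal sketch of the quota procedure, assuming units arrive in whatever order they happen to be conveniently available (function and variable names are illustrative):

```python
def quota_sample(stream, quotas, group_of):
    """Fill each subgroup's quota from whoever turns up first
    (non-random, so this is non-probability sampling)."""
    sample = {g: [] for g in quotas}
    for unit in stream:
        g = group_of(unit)
        if g in sample and len(sample[g]) < quotas[g]:
            sample[g].append(unit)
        if all(len(sample[g]) == quotas[g] for g in quotas):
            break  # every quota is filled
    return sample
```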

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extra-marital affairs)

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones. 

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalisations – often the goal of quantitative research . As such, a snowball sample is not representative of the target population, and is usually a better fit for qualitative research .

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.
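The referral chain can be sketched as a simple simulation (a deterministic toy; in reality, who refers whom is not under the researcher’s control):

```python
def snowball_sample(seeds, referrals, target_size, max_referrals=2):
    """Grow a sample by referral: each recruit names up to
    `max_referrals` acquaintances not yet in the sample.
    `referrals` maps each person to the people they know."""
    sample, frontier = list(seeds), list(seeds)
    while frontier and len(sample) < target_size:
        person = frontier.pop(0)
        new = [p for p in referrals.get(person, []) if p not in sample]
        for recruit in new[:max_referrals]:
            sample.append(recruit)
            frontier.append(recruit)
    return sample[:target_size]
```

Because recruitment flows only along acquaintance ties, anyone outside the seeds’ social network can never enter the sample, which is exactly the source of sampling bias described above.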

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalysing the existing data in the same manner.
  • Replicating (or repeating) the research entails reconducting the entire analysis, including the collection of new data.
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.
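In practice, both are assessed by correlating scores. A toy illustration with invented data (all scores hypothetical: a new introversion test is compared against an established introversion test and an unrelated numeracy test):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

new_test = [12, 9, 15, 7, 11]          # new introversion measure
established = [11, 8, 16, 6, 12]       # established introversion measure
numeracy = [40, 41, 44, 43, 42]        # unrelated construct

convergent = pearson_r(new_test, established)   # should be close to 1
discriminant = pearson_r(new_test, numeracy)    # should be close to 0
```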

Construct validity has convergent and discriminant subtypes. Together, they help determine whether a test measures the intended construct.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Construct validity refers to how well a test measures the concept (or construct) it was designed to measure. Assessing construct validity is especially important when you’re researching concepts that can’t be quantified and/or are intangible, like introversion. To ensure construct validity your test should be based on known indicators of introversion ( operationalisation ).

On the other hand, content validity assesses how well the test represents all aspects of the construct. If some aspects are missing or irrelevant parts are included, the test has low content validity.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analysing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Attrition refers to participants leaving a study. It always happens to some extent – for example, in randomised controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time.
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test.

Validity tells you how accurately a method measures what it was designed to measure. There are four main types of validity:

  • Construct validity: Does the test measure the construct it was designed to measure?
  • Face validity: Does the test appear to be suitable for its objectives?
  • Content validity: Does the test cover all relevant parts of the construct it aims to measure?
  • Criterion validity: Do the results accurately measure the concrete outcome they are designed to measure?

Convergent validity shows how much a measure of one construct aligns with other measures of the same or related constructs .

On the other hand, concurrent validity is about how a measure matches up to some known criterion or gold standard, which can be another measure.

Although both types of validity are established by calculating the association or correlation between a test score and another variable , they represent distinct validation methods.

The purpose of theory-testing mode is to find evidence in order to disprove, refine, or support a theory. As such, generalisability is not the aim of theory-testing mode.

Due to this, the priority of researchers in theory-testing mode is to eliminate alternative causes for relationships between variables . In other words, they prioritise internal validity over external validity , including ecological validity .

Inclusion and exclusion criteria are typically presented and discussed in the methodology section of your thesis or dissertation .

Inclusion and exclusion criteria are predominantly used in non-probability sampling . In purposive sampling and snowball sampling , restrictions apply as to who can be included in the sample .

Scope of research is determined at the beginning of your research process , prior to the data collection stage. Sometimes called “scope of study,” your scope delineates what will and will not be covered in your project. It helps you focus your work and your time, ensuring that you’ll be able to achieve your goals and outcomes.

Defining a scope can be very useful in any research project, from a research proposal to a thesis or dissertation . A scope is needed for all types of research: quantitative , qualitative , and mixed methods .

To define your scope of research, consider the following:

  • Budget constraints or any specifics of grant funding
  • Your proposed timeline and duration
  • Specifics about your population of study, your proposed sample size , and the research methodology you’ll pursue
  • Any inclusion and exclusion criteria
  • Any anticipated control , extraneous , or confounding variables that could bias your research if not accounted for properly.

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Quantitative observations involve measuring or counting something and expressing the result in numerical form, while qualitative observations involve describing something in non-numerical terms, such as its appearance, texture, or color.

The Scribbr Reference Generator is developed using the open-source Citation Style Language (CSL) project and Frank Bennett’s citeproc-js . It’s the same technology used by dozens of other popular citation tools, including Mendeley and Zotero.

You can find all the citation styles and locales used in the Scribbr Reference Generator in our publicly accessible repository on GitHub.

To paraphrase effectively, don’t just take the original sentence and swap out some of the words for synonyms. Instead, try:

  • Reformulating the sentence (e.g., change active to passive , or start from a different point)
  • Combining information from multiple sentences into one
  • Leaving out information from the original that isn’t relevant to your point
  • Using synonyms where they don’t distort the meaning

The main point is to ensure you don’t just copy the structure of the original text, but instead reformulate the idea in your own words.

Plagiarism means using someone else’s words or ideas and passing them off as your own. Paraphrasing means putting someone else’s ideas into your own words.

So when does paraphrasing count as plagiarism?

  • Paraphrasing is plagiarism if you don’t properly credit the original author.
  • Paraphrasing is plagiarism if your text is too close to the original wording (even if you cite the source). If you directly copy a sentence or phrase, you should quote it instead.
  • Paraphrasing  is not plagiarism if you put the author’s ideas completely into your own words and properly reference the source .

To present information from other sources in academic writing , it’s best to paraphrase in most cases. This shows that you’ve understood the ideas you’re discussing and incorporates them into your text smoothly.

It’s appropriate to quote when:

  • Changing the phrasing would distort the meaning of the original text
  • You want to discuss the author’s language choices (e.g., in literary analysis )
  • You’re presenting a precise definition
  • You’re looking in depth at a specific claim

A quote is an exact copy of someone else’s words, usually enclosed in quotation marks and credited to the original author or speaker.

Every time you quote a source , you must include a correctly formatted in-text citation . This looks slightly different depending on the citation style .

For example, a direct quote in APA is cited like this: ‘This is a quote’ (Streefkerk, 2020, p. 5).

Every in-text citation should also correspond to a full reference at the end of your paper.

In scientific subjects, the information itself is more important than how it was expressed, so quoting should generally be kept to a minimum. In the arts and humanities, however, well-chosen quotes are often essential to a good paper.

In social sciences, it varies. If your research is mainly quantitative , you won’t include many quotes, but if it’s more qualitative , you may need to quote from the data you collected .

As a general guideline, quotes should take up no more than 5–10% of your paper. If in doubt, check with your instructor or supervisor how much quoting is appropriate in your field.

If you’re quoting from a text that paraphrases or summarises other sources and cites them in parentheses , APA recommends retaining the citations as part of the quote:

  • Smith states that ‘the literature on this topic (Jones, 2015; Sill, 2019; Paulson, 2020) shows no clear consensus’ (Smith, 2019, p. 4).

Footnote or endnote numbers that appear within quoted text should be omitted.

If you want to cite an indirect source (one you’ve only seen quoted in another source), either locate the original source or use the phrase ‘as cited in’ in your citation.

A block quote is a long quote formatted as a separate ‘block’ of text. Instead of using quotation marks , you place the quote on a new line, and indent the entire quote to mark it apart from your own words.

APA uses block quotes for quotes that are 40 words or longer.
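The 40-word threshold can be illustrated with a small helper (a simplified sketch: real APA block quotes also involve a 0.5-inch indent, double spacing, and citation placement after the final punctuation):

```python
def format_quote(text):
    """Quotes of 40+ words become an indented block without
    quotation marks; shorter quotes stay inline in quotes."""
    if len(text.split()) >= 40:
        return "\n".join("    " + line for line in text.splitlines())
    return f'"{text}"'
```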

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Common examples of primary sources include interview transcripts , photographs, novels, paintings, films, historical documents, and official statistics.

Anything you directly analyze or use as first-hand evidence can be a primary source, including qualitative or quantitative data that you collected yourself.

Common examples of secondary sources include academic books, journal articles , reviews, essays , and textbooks.

Anything that summarizes, evaluates or interprets primary sources can be a secondary source. If a source gives you an overview of background information or presents another researcher’s ideas on your topic, it is probably a secondary source.

To determine if a source is primary or secondary, ask yourself:

  • Was the source created by someone directly involved in the events you’re studying (primary), or by another researcher (secondary)?
  • Does the source provide original information (primary), or does it summarize information from other sources (secondary)?
  • Are you directly analyzing the source itself (primary), or only using it for background information (secondary)?

Some types of sources are nearly always primary: works of art and literature, raw statistical data, official documents and records, and personal communications (e.g. letters, interviews ). If you use one of these in your research, it is probably a primary source.

Primary sources are often considered the most credible in terms of providing evidence for your argument, as they give you direct evidence of what you are researching. However, it’s up to you to ensure the information they provide is reliable and accurate.

Always make sure to properly cite your sources to avoid plagiarism .

A fictional movie is usually a primary source. A documentary can be either primary or secondary depending on the context.

If you are directly analysing some aspect of the movie itself – for example, the cinematography, narrative techniques, or social context – the movie is a primary source.

If you use the movie for background information or analysis about your topic – for example, to learn about a historical event or a scientific discovery – the movie is a secondary source.

Whether it’s primary or secondary, always properly cite the movie in the citation style you are using. Learn how to create an MLA movie citation or an APA movie citation .

Articles in newspapers and magazines can be primary or secondary depending on the focus of your research.

In historical studies, old articles are used as primary sources that give direct evidence about the time period. In social and communication studies, articles are used as primary sources to analyse language and social relations (for example, by conducting content analysis or discourse analysis ).

If you are not analysing the article itself, but only using it for background information or facts about your topic, then the article is a secondary source.

In academic writing , there are three main situations where quoting is the best choice:

  • To analyse the author’s language (e.g., in a literary analysis essay )
  • To give evidence from primary sources
  • To accurately present a precise definition or argument

Don’t overuse quotes; your own voice should be dominant. If you just want to provide information from a source, it’s usually better to paraphrase or summarise .

Your list of tables and figures should go directly after your table of contents in your thesis or dissertation.

Lists of figures and tables are often not required, and they aren’t particularly common. They specifically aren’t required for APA Style, though you should be careful to follow their other guidelines for figures and tables .

If you have many figures and tables in your thesis or dissertation, including one may help you stay organised. Your educational institution may require them, so be sure to check their guidelines.

Copyright information can usually be found wherever the table or figure was published. For example, for a diagram in a journal article , look on the journal’s website or the database where you found the article. Images found on sites like Flickr are listed with clear copyright information.

If you find that permission is required to reproduce the material, be sure to contact the author or publisher and ask for it.

A list of figures and tables compiles all of the figures and tables that you used in your thesis or dissertation and displays them with the page number where they can be found.

APA doesn’t require you to include a list of tables or a list of figures . However, it is advisable to do so if your text is long enough to feature a table of contents and it includes a lot of tables and/or figures .

A list of tables and list of figures appear (in that order) after your table of contents, and are presented in a similar way.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. Your glossary only needs to include terms that your reader may not be familiar with, and is intended to enhance their understanding of your work.

Definitional terms often fall into the category of common knowledge , meaning that they don’t necessarily have to be cited. This guidance can apply to your thesis or dissertation glossary as well.

However, if you’d prefer to cite your sources , you can follow guidance for citing dictionary entries in MLA or APA style for your glossary.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, an index is a list of the contents of your work organised by page number.

Glossaries are not mandatory, but if you use a lot of technical or field-specific terms, it may improve readability to add one to your thesis or dissertation. Your educational institution may also require them, so be sure to check their specific guidelines.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, dictionaries are more general collections of words.

The title page of your thesis or dissertation should include your name, department, institution, degree program, and submission date.

The title page of your thesis or dissertation goes first, before all other content or lists that you may choose to include.

Usually, no title page is needed in an MLA paper . A header is generally included at the top of the first page instead. The exceptions are when:

  • Your instructor requires one, or
  • Your paper is a group project

In those cases, you should use a title page instead of a header, listing the same information but on a separate page.

When you mention different chapters within your text, it’s considered best to use Roman numerals for most citation styles. However, the most important thing here is to remain consistent whenever using numbers in your dissertation .

A thesis or dissertation outline is one of the most critical first steps in your writing process. It helps you to lay out and organise your ideas and can provide you with a roadmap for deciding what kind of research you’d like to undertake.

Generally, an outline contains information on the different sections included in your thesis or dissertation, such as:

  • Your anticipated title
  • Your abstract
  • Your chapters (sometimes subdivided into further topics like literature review, research methods, avenues for future research, etc.)

While a theoretical framework describes the theoretical underpinnings of your work based on existing research, a conceptual framework allows you to draw your own conclusions, mapping out the variables you may use in your study and the interplay between them.

A literature review and a theoretical framework are not the same thing and cannot be used interchangeably. While a theoretical framework describes the theoretical underpinnings of your work, a literature review critically evaluates existing research relating to your topic. You’ll likely need both in your dissertation .

A theoretical framework can sometimes be integrated into a  literature review chapter , but it can also be included as its own chapter or section in your dissertation . As a rule of thumb, if your research involves dealing with a lot of complex theories, it’s a good idea to include a separate theoretical framework chapter.

An abstract is a concise summary of an academic text (such as a journal article or dissertation). It serves two main purposes:

  • To help potential readers determine the relevance of your paper for their own research.
  • To communicate your key findings to those who don’t have time to read the whole paper.

Abstracts are often indexed along with keywords on academic databases, so they make your work more easily findable. Since the abstract is the first thing any reader sees, it’s important that it clearly and accurately summarises the contents of your paper.

The abstract is the very last thing you write. You should only write it after your research is complete, so that you can accurately summarize the entirety of your thesis or paper.

Avoid citing sources in your abstract. There are two reasons for this:

  • The abstract should focus on your original research, not on the work of others.
  • The abstract should be self-contained and fully understandable without reference to other sources.

There are some circumstances where you might need to mention other sources in an abstract: for example, if your research responds directly to another study or focuses on the work of a single theorist. In general, though, don’t include citations unless absolutely necessary.

The abstract appears on its own page, after the title page and acknowledgements but before the table of contents.

Results are usually written in the past tense, because they are describing the outcome of completed actions.

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research, results and discussion are sometimes combined. But in quantitative research, it’s considered important to separate the objective results from your interpretation of them.

Formulating a main research question can be a difficult task. Overall, your question should contribute to solving the problem that you have defined in your problem statement.

However, it should also fulfill criteria in three main areas:

  • Researchability
  • Feasibility and specificity
  • Relevance and originality

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.

A noun is a word that represents a person, thing, concept, or place (e.g., ‘John’, ‘house’, ‘affinity’, ‘river’). Most sentences contain at least one noun or pronoun.

Nouns are often, but not always, preceded by an article (‘the’, ‘a’, or ‘an’) and/or another determiner such as an adjective.

There are many ways to categorize nouns into various types, and the same noun can fall into multiple categories or even change types depending on context.

Some of the main types of nouns are:

  • Common nouns and proper nouns
  • Countable and uncountable nouns
  • Concrete and abstract nouns
  • Collective nouns
  • Possessive nouns
  • Attributive nouns
  • Appositive nouns
  • Generic nouns

Pronouns are words like ‘I’, ‘she’, and ‘they’ that are used in a similar way to nouns. They stand in for a noun that has already been mentioned or refer to yourself and other people.

Pronouns can function just like nouns as the head of a noun phrase and as the subject or object of a verb. However, pronouns change their forms (e.g., from ‘I’ to ‘me’) depending on the grammatical context they’re used in, whereas nouns usually don’t.

Common nouns are words for types of things, people, and places, such as ‘dog’, ‘professor’, and ‘city’. They are not capitalised and are typically used in combination with articles and other determiners.

Proper nouns are words for specific things, people, and places, such as ‘Max’, ‘Dr Prakash’, and ‘London’. They are always capitalised and usually aren’t combined with articles and other determiners.

A proper adjective is an adjective that was derived from a proper noun and is therefore capitalised.

Proper adjectives include words for nationalities, languages, and ethnicities (e.g., ‘Japanese’, ‘Inuit’, ‘French’) and words derived from people’s names (e.g., ‘Bayesian’, ‘Orwellian’).

The names of seasons (e.g., ‘spring’) are treated as common nouns in English and therefore not capitalised. People often assume they are proper nouns, but this is an error.

The names of days and months, however, are capitalised since they’re treated as proper nouns in English (e.g., ‘Wednesday’, ‘January’).

No, as a general rule, academic concepts, disciplines, theories, models, etc. are treated as common nouns, not proper nouns, and therefore not capitalised. For example, ‘five-factor model of personality’ or ‘analytic philosophy’.

However, proper nouns that appear within the name of an academic concept (such as the name of the inventor) are capitalised as usual. For example, ‘Darwin’s theory of evolution’ or ‘Student’s t table’.

Collective nouns are most commonly treated as singular (e.g., ‘the herd is grazing’), but usage differs between US and UK English:

  • In US English, it’s standard to treat all collective nouns as singular, even when they are plural in appearance (e.g., ‘The Rolling Stones is …’). Using the plural form is usually seen as incorrect.
  • In UK English, collective nouns can be treated as singular or plural depending on context. It’s quite common to use the plural form, especially when the noun looks plural (e.g., ‘The Rolling Stones are …’).

The plural of “crisis” is “crises”. It’s a loanword from Latin and retains its original Latin plural noun form (similar to “analyses” and “bases”). It’s wrong to write “crisises”.

For example, you might write “Several crises destabilized the regime.”

Normally, the plural of “fish” is the same as the singular: “fish”. It’s one of a group of irregular plural nouns in English that are identical to the corresponding singular nouns (e.g., “moose”, “sheep”). For example, you might write “The fish scatter as the shark approaches.”

If you’re referring to several species of fish, though, the regular plural “fishes” is often used instead. For example, “The aquarium contains many different fishes , including trout and carp.”

The correct plural of “octopus” is “octopuses”.

People often write “octopi” instead because they assume that the plural noun is formed in the same way as Latin loanwords such as “fungus/fungi”. But “octopus” actually comes from Greek, where its original plural is “octopodes”. In English, it instead has the regular plural form “octopuses”.

For example, you might write “There are four octopuses in the aquarium.”

The plural of “moose” is the same as the singular: “moose”. It’s one of a group of plural nouns in English that are identical to the corresponding singular nouns. So it’s wrong to write “mooses”.

For example, you might write “There are several moose in the forest.”

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to explain others’ behaviour and external factors (e.g., difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey differ from those who don’t, in ways that are relevant to the research topic. Nonresponse can happen either because people are unwilling or unable to participate.

In research, demand characteristics are cues that might indicate the aim of a study to participants. These cues can lead to participants changing their behaviors or responses based on what they think the research is about.

Demand characteristics are common problems in psychology experiments and other social science studies because they can bias your research findings.

Demand characteristics are a type of extraneous variable that can affect the outcomes of the study. They can invalidate studies by providing an alternative explanation for the results.

These cues may nudge participants to consciously or unconsciously change their responses, and they pose a threat to both internal and external validity. You can’t be sure that your independent variable manipulation worked, or that your findings can be applied to other people or settings.

You can control demand characteristics by taking a few precautions in your research design and materials.

Use these measures:

  • Deception: Hide the purpose of the study from participants
  • Between-groups design: Give each participant only one independent variable treatment
  • Double-blind design: Conceal the assignment of groups from participants and yourself
  • Implicit measures: Use indirect or hidden measurements for your variables

Some attrition is normal and to be expected in research. However, the type of attrition matters, because systematic bias can distort your findings. Attrition bias can lead to inaccurate results because it affects internal and/or external validity.

To avoid attrition bias, you can apply some of these measures to reduce participant dropout (attrition) by making it easy and appealing for participants to stay:

  • Provide compensation (e.g., cash or gift cards) for attending every session
  • Minimise the number of follow-ups as much as possible
  • Make all follow-ups brief, flexible, and convenient for participants
  • Send participants routine reminders to schedule follow-ups
  • Recruit more participants than you need for your sample (oversample)
  • Maintain detailed contact information so you can get in touch with participants even if they move

If you have a small amount of attrition bias, you can use a few statistical methods to try to make up for this research bias.

Multiple imputation involves using simulations to replace the missing data with likely values. Alternatively, you can use sample weighting to make up for the uneven balance of participants in your sample.
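As a rough illustration, the sample-weighting idea can be sketched in a few lines of Python. Everything below (the groups, scores, and retention rates) is invented example data: each remaining participant is weighted by the inverse of their group’s retention rate, so groups hit hardest by dropout count more per person.

```python
# Hypothetical illustration of sample weighting after attrition.
# Suppose younger participants dropped out at a higher rate, so the
# remaining sample over-represents older participants.

# (group, score) pairs for the participants who stayed
completers = [("young", 60), ("young", 64), ("old", 80), ("old", 78), ("old", 82)]

# Assumed retention rate per group: fraction of the original group that stayed
retention = {"young": 0.4, "old": 1.0}

# Weight each completer by the inverse of their group's retention rate
weights = [1 / retention[g] for g, _ in completers]
scores = [s for _, s in completers]

unweighted_mean = sum(scores) / len(scores)
weighted_mean = sum(w * s for w, s in zip(weights, scores)) / sum(weights)

print(round(unweighted_mean, 1))  # inflated by the under-represented group
print(round(weighted_mean, 1))    # pulled back toward the original composition
```

With these assumed numbers, the weighted mean is lower than the unweighted one because the low-scoring, high-dropout group is restored to its original share of the sample.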

Placebos are used in medical research on new medications or therapies, called clinical trials. In these trials, some people are given a placebo, while others are given the new medication being tested.

The purpose is to determine how effective the new medication is: if it benefits people beyond a predefined threshold as compared to the placebo, it’s considered effective.

Although there is no definite answer to what causes the placebo effect, researchers propose a number of explanations, such as the power of suggestion, doctor–patient interaction, and classical conditioning.

Belief bias and confirmation bias are both types of cognitive bias that impact our judgment and decision-making.

Confirmation bias relates to how we perceive and judge evidence. We tend to seek out and prefer information that supports our preexisting beliefs, ignoring any information that contradicts those beliefs.

Belief bias describes the tendency to judge an argument based on how plausible the conclusion seems to us, rather than how much evidence is provided to support it during the course of the argument.

Positivity bias is a phenomenon that occurs when a person judges individual members of a group positively, even when they have negative impressions or judgments of the group as a whole. Positivity bias is closely related to optimism bias, or the expectation that things will work out well, even if rationality suggests that problems are inevitable in life.

Perception bias is a problem because it prevents us from seeing situations or people objectively. Rather, our expectations, beliefs, or emotions interfere with how we interpret reality. This, in turn, can cause us to misjudge ourselves or others. For example, our prejudices can interfere with whether we perceive people’s faces as friendly or unfriendly.

There are many ways to categorize adjectives into various types. An adjective can fall into one or more of these categories depending on how it is used.

Some of the main types of adjectives are:

  • Attributive adjectives
  • Predicative adjectives
  • Comparative adjectives
  • Superlative adjectives
  • Coordinate adjectives
  • Appositive adjectives
  • Compound adjectives
  • Participial adjectives
  • Proper adjectives
  • Denominal adjectives
  • Nominal adjectives

Cardinal numbers (e.g., one, two, three) can be placed before a noun to indicate quantity (e.g., one apple). While these are sometimes referred to as ‘numeral adjectives’, they are more accurately categorised as determiners or quantifiers.

Proper adjectives are adjectives formed from a proper noun (i.e., the name of a specific person, place, or thing) that are used to indicate origin. Like proper nouns, proper adjectives are always capitalised (e.g., Newtonian, Marxian, African).

The cost of proofreading depends on the type and length of text, the turnaround time, and the level of services required. Most proofreading companies charge per word or page, while freelancers sometimes charge an hourly rate.

For proofreading alone, which involves only basic corrections of typos and formatting mistakes, you might pay as little as £0.01 per word, but in many cases, your text will also require some level of editing , which costs slightly more.

It’s often possible to purchase combined proofreading and editing services and calculate the price in advance based on your requirements.

Then and than are two commonly confused words. In the context of ‘better than’, you use ‘than’ with an ‘a’.

  • Julie is better than Jesse.
  • I’d rather spend my time with you than with him.
  • I understand Eoghan’s point of view better than Claudia’s.

Use to and used to are commonly confused words. In the case of ‘used to do’, the latter (with ‘d’) is correct, since you’re describing an action or state in the past.

  • I used to do laundry once a week.
  • They used to do each other’s hair.
  • We used to do the dishes every day.

There are numerous synonyms and near synonyms for the various meanings of “favour”:

  • For the verb: advocate, approve of, endorse, support
  • For the noun: adoration, appreciation, praise, respect

There are numerous synonyms and near synonyms for the two meanings of “favoured”:

  • For the verb: advocated, approved of, endorsed, supported
  • For the adjective: adored, appreciated, praised, preferred

No one (two words) is an indefinite pronoun meaning ‘nobody’. People sometimes mistakenly write ‘noone’, but this is incorrect and should be avoided. ‘No-one’, with a hyphen, is also acceptable in UK English.

Nobody and no one are both indefinite pronouns meaning ‘no person’. They can be used interchangeably (e.g., ‘nobody is home’ means the same as ‘no one is home’).

Some synonyms and near synonyms of every time include:

  • Without exception

‘Everytime’ is sometimes used to mean ‘each time’ or ‘whenever’. However, this is incorrect and should be avoided. The correct phrase is every time (two words).

Yes, the conjunction because is a compound word, but one with a long history. It originates in Middle English from the preposition “bi” (“by”) and the noun “cause”. Over time, the open compound “bi cause” became the closed compound “because”, which we use today.

Though it’s spelled this way now, the verb “be” is not one of the words that make up “because”.

Yes, today is a compound word , but a very old one. It wasn’t originally formed from the preposition “to” and the noun “day”; rather, it originates from their Old English equivalents, “tō” and “dæġe”.

In the past, it was sometimes written as a hyphenated compound: “to-day”. But the hyphen is no longer included; it’s always “today” now (“to day” is also wrong).

IEEE citation format is defined by the Institute of Electrical and Electronics Engineers and used in their publications.

It’s also a widely used citation style for students in technical fields like electrical and electronic engineering, computer science, telecommunications, and computer engineering.

An IEEE in-text citation consists of a number in brackets at the relevant point in the text, which points the reader to the right entry in the numbered reference list at the end of the paper. For example, ‘Smith [1] states that …’

A location marker such as a page number is also included within the brackets when needed: ‘Smith [1, p. 13] argues …’

The IEEE reference page consists of a list of references numbered in the order they were cited in the text. The title ‘References’ appears in bold at the top, either left-aligned or centered.

The numbers appear in square brackets on the left-hand side of the page. The reference entries are indented consistently to separate them from the numbers. Entries are single-spaced, with a normal paragraph break between them.

If you cite the same source more than once in your writing, use the same number for all of the IEEE in-text citations for that source, and only include it on the IEEE reference page once. The source is numbered based on the first time you cite it.

For example, the fourth source you cite in your paper is numbered [4]. If you cite it again later, you still cite it as [4]. You can cite different parts of the source each time by adding page numbers [4, p. 15].
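This first-come numbering rule is simple enough to sketch as code. The function below is a hypothetical illustration of the rule (not part of any official IEEE tooling): sources are numbered in the order they are first cited, and repeat citations reuse the same number.

```python
# Hypothetical sketch of IEEE-style citation numbering: each source is
# numbered the first time it is cited, and repeat citations reuse that number.

def number_citations(cited_keys):
    """Map each source key to its IEEE citation number, in order of first use."""
    numbers = {}
    for key in cited_keys:
        if key not in numbers:
            numbers[key] = len(numbers) + 1  # next unused number
    return numbers

# Sources cited in the order they appear in the text
order = ["smith2020", "lee2019", "smith2020", "chen2021"]
numbers = number_citations(order)
print([numbers[k] for k in order])  # the repeat cite of smith2020 stays [1]
```

The returned mapping also gives the order of entries on the reference page, since IEEE lists references by citation number.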

A verb is a word that indicates a physical action (e.g., ‘drive’), a mental action (e.g., ‘think’) or a state of being (e.g., ‘exist’). Every sentence contains a verb.

Verbs are almost always used along with a noun or pronoun to describe what the noun or pronoun is doing.

There are many ways to categorize verbs into various types. A verb can fall into one or more of these categories depending on how it is used.

Some of the main types of verbs are:

  • Regular verbs
  • Irregular verbs
  • Transitive verbs
  • Intransitive verbs
  • Dynamic verbs
  • Stative verbs
  • Linking verbs
  • Auxiliary verbs
  • Modal verbs
  • Phrasal verbs

Regular verbs are verbs whose simple past and past participle are formed by adding the suffix ‘-ed’ (e.g., ‘walked’).

Irregular verbs are verbs that form their simple past and past participles in some way other than by adding the suffix ‘-ed’ (e.g., ‘sat’).

The indefinite articles a and an are used to refer to a general or unspecified version of a noun (e.g., a house). Which indefinite article you use depends on the pronunciation of the word that follows it.

  • A is used for words that begin with a consonant sound (e.g., a bear).
  • An is used for words that begin with a vowel sound (e.g., an eagle).

Indefinite articles can only be used with singular countable nouns. Like definite articles, they are a type of determiner.

Editing and proofreading are different steps in the process of revising a text.

Editing comes first, and can involve major changes to content, structure, and language. The first stages of editing are often done by authors themselves, while a professional editor makes the final improvements to grammar and style (for example, by improving sentence structure and word choice).

Proofreading is the final stage of checking a text before it is published or shared. It focuses on correcting minor errors and inconsistencies (for example, in punctuation and capitalization). Proofreaders often also check for formatting issues, especially in print publishing.

Whether you’re publishing a blog, submitting a research paper, or even just writing an important email, there are a few techniques you can use to make sure it’s error-free:

  • Take a break: Set your work aside for at least a few hours so that you can look at it with fresh eyes.
  • Proofread a printout: Staring at a screen for too long can cause fatigue – sit down with a pen and paper to check the final version.
  • Use digital shortcuts: Take note of any recurring mistakes (for example, misspelling a particular word, switching between US and UK English, or inconsistently capitalizing a term), and use Find and Replace to fix them throughout the document.

If you want to be confident that an important text is error-free, it might be worth choosing a professional proofreading service instead.

There are many different routes to becoming a professional proofreader or editor. The necessary qualifications depend on the field – to be an academic or scientific proofreader, for example, you will need at least a university degree in a relevant subject.

For most proofreading jobs, experience and demonstrated skills are more important than specific qualifications. Often your skills will be tested as part of the application process.

To learn practical proofreading skills, you can choose to take a course with a professional organisation such as the Society for Editors and Proofreaders. Alternatively, you can apply to companies that offer specialised on-the-job training programmes, such as the Scribbr Academy.

Though they’re pronounced the same, there’s a big difference in meaning between its and it’s.

  • ‘The cat ate its food’.
  • ‘It’s almost Christmas’.

Its and it’s are often confused, but its (without apostrophe) is the possessive form of ‘it’ (e.g., its tail, its argument, its wing). You use ‘its’ instead of ‘his’ and ‘her’ for neuter, inanimate nouns.

Then and than are two commonly confused words with different meanings and grammatical roles.

  • Then (pronounced with a short ‘e’ sound) refers to time. It’s often an adverb, but it can also be used as a noun meaning ‘that time’ and as an adjective referring to a previous status.
  • Than (pronounced with a short ‘a’ sound) is used for comparisons. Grammatically, it usually functions as a conjunction, but sometimes it’s a preposition.
Examples of then in a sentence:

  • Mix the dry ingredients first, and then add the wet ingredients.
  • I was then working as a teacher.

Examples of than in a sentence:

  • Max is a better saxophonist than you.
  • I usually like coaching a team more than I like playing soccer myself.

Use to and used to are commonly confused words. In the case of ‘used to be’, the latter (with ‘d’) is correct, since you’re describing an action or state in the past.

  • I used to be the new coworker.
  • There used to be 4 cookies left.
  • We used to walk to school every day.

A grammar checker is a tool designed to automatically check your text for spelling errors, grammatical issues, punctuation mistakes, and problems with sentence structure. You can check out our analysis of the best free grammar checkers to learn more.

A paraphrasing tool edits your text more actively, making changes regardless of whether the original was grammatically incorrect. It can paraphrase your sentences to make them more concise and readable, or for other purposes. You can check out our analysis of the best free paraphrasing tools to learn more.

Some tools available online combine both functions. Others, such as QuillBot, have separate grammar checker and paraphrasing tools. Be aware of what exactly the tool you’re using does to avoid introducing unwanted changes.

Good grammar is the key to expressing yourself clearly and fluently, especially in professional communication and academic writing. Word processors, browsers, and email programs typically have built-in grammar checkers, but they’re quite limited in the kinds of problems they can fix.

If you want to go beyond detecting basic spelling errors, there are many online grammar checkers with more advanced functionality. They can often detect issues with punctuation, word choice, and sentence structure that more basic tools would miss.

Not all of these tools are reliable, though. You can check out our research into the best free grammar checkers to explore the options.

Our research indicates that the best free grammar checker available online is the QuillBot grammar checker.

We tested 10 of the most popular checkers with the same sample text (containing 20 grammatical errors) and found that QuillBot easily outperformed the competition, scoring 18 out of 20, a drastic improvement over the second-place score of 13 out of 20.

It even appeared to outperform the premium versions of other grammar checkers, despite being entirely free.

A teacher’s aide is a person who assists in teaching classes but is not a qualified teacher. Aide is a noun meaning ‘assistant’, so it will always refer to a person.

‘Teacher’s aid’ is incorrect.

A visual aid is an instructional device (e.g., a photo, a chart) that appeals to vision to help you understand written or spoken information. Aid is often placed after an attributive noun or adjective (like ‘visual’) that describes the type of help provided.

‘Visual aide’ is incorrect.

A job aid is an instructional tool (e.g., a checklist, a cheat sheet) that helps you work efficiently. Aid is a noun meaning ‘assistance’. It’s often placed after an adjective or attributive noun (like ‘job’) that describes the specific type of help provided.

‘Job aide’ is incorrect.

There are numerous synonyms for the various meanings of truly:

  • Candidly, honestly, openly, truthfully
  • Completely, really, totally
  • Accurately, correctly, exactly, precisely

Yours truly is a phrase used at the end of a formal letter or email. It can also be used (typically in a humorous way) as a pronoun to refer to oneself (e.g., ‘The dinner was cooked by yours truly’). The latter usage should be avoided in formal writing.

It’s formed by combining the second-person possessive pronoun ‘yours’ with the adverb ‘truly’.

A pathetic fallacy can be a short phrase or a whole sentence and is often used in novels and poetry. Pathetic fallacies serve multiple purposes, such as:

  • Conveying the emotional state of the characters or the narrator
  • Creating an atmosphere or setting the mood of a scene
  • Foreshadowing events to come
  • Giving texture and vividness to a piece of writing
  • Communicating emotion to the reader in a subtle way, by describing the external world
  • Bringing inanimate objects to life so that they seem more relatable

AMA citation format is a citation style designed by the American Medical Association. It’s frequently used in the field of medicine.

You may be told to use AMA style for your student papers. You will also have to follow this style if you’re submitting a paper to a journal published by the AMA.

An AMA in-text citation consists of the number of the relevant reference on your AMA reference page, written in superscript (e.g., ¹) at the point in the text where the source is used.

It may also include the page number or range of the relevant material in the source (e.g., 2(p46) for a part you quoted). Multiple sources can be cited at one point, presented as a superscript range or list with no spaces (e.g., 3,5–9).

An AMA reference usually includes the author’s last name and initials, the title of the source, information about the publisher or the publication it’s contained in, and the publication date. The specific details included, and the formatting, depend on the source type.

References in AMA style are presented in numerical order (numbered by the order in which they were first cited in the text) on your reference page. A source that’s cited repeatedly in the text still only appears once on the reference page.

An AMA in-text citation just consists of the number of the relevant entry on your AMA reference page, written in superscript at the point in the text where the source is referred to.

You don’t need to mention the author of the source in your sentence, but you can do so if you want. It’s not an official part of the citation, but it can be useful as part of a signal phrase introducing the source.

On your AMA reference page, author names are written with the last name first, followed by the initial(s) of their first name and middle name if mentioned.

There’s a space between the last name and the initials, but no space or punctuation between the initials themselves. The names of multiple authors are separated by commas, and the whole list ends in a period, e.g., ‘Andreessen F, Smith PW, Gonzalez E’.

The names of up to six authors should be listed for each source on your AMA reference page, separated by commas. For a source with seven or more authors, you should list the first three followed by ‘et al’: ‘Isidore, Gilbert, Gunvor, et al’.

In the text, mentioning author names is optional (as they aren’t an official part of AMA in-text citations). If you do mention them, though, you should use the first author’s name followed by ‘et al’ when there are three or more: ‘Isidore et al argue that …’

Note that according to AMA’s rather minimalistic punctuation guidelines, there’s no period after ‘et al’ unless it appears at the end of a sentence. This is different from most other styles, where there is normally a period.
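As a rough illustration, the reference-page author rules above could be sketched as a small Python function. The function name is our own, and for simplicity it formats only the name list itself (in a real AMA entry the list is followed by a period and the rest of the reference):

```python
# Hypothetical sketch of AMA reference-list author formatting:
# "Last Initials" per author, up to six authors listed in full,
# and the first three plus "et al" when there are seven or more.

def format_authors(authors):
    """authors: list of (last_name, initials) tuples."""
    names = [f"{last} {initials}" for last, initials in authors]
    if len(names) >= 7:
        names = names[:3] + ["et al"]
    return ", ".join(names)

print(format_authors([("Andreessen", "F"), ("Smith", "PW"), ("Gonzalez", "E")]))
# Andreessen F, Smith PW, Gonzalez E
```

With seven or more authors, the same call returns only the first three names followed by ‘et al’.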

Yes, you should normally include an access date in an AMA website citation (or when citing any source with a URL). This is because webpages can change their content over time, so it’s useful for the reader to know when you accessed the page.

When a publication or update date is provided on the page, you should include it in addition to the access date. The access date appears second in this case, e.g., ‘Published June 19, 2021. Accessed August 29, 2022.’

Don’t include an access date when citing a source with a DOI (such as in an AMA journal article citation ).

Some variables have fixed levels. For example, gender and ethnicity are always nominal level data because they cannot be ranked.

However, for other variables, you can choose the level of measurement. For example, income is a variable that can be recorded on an ordinal or a ratio scale:

  • At an ordinal level, you could create 5 income groupings and code the incomes that fall within them from 1–5.
  • At a ratio level, you would record exact numbers for income.

If you have a choice, the ratio level is always preferable because you can analyse data in more ways. The higher the level of measurement, the more precise your data is.

The level at which you measure a variable determines how you can analyse your data.

Depending on the level of measurement, you can perform different descriptive statistics to get an overall summary of your data and inferential statistics to see if your results support or refute your hypothesis.

Levels of measurement tell you how precisely variables are recorded. There are 4 levels of measurement, which can be ranked from low to high:

  • Nominal : the data can only be categorised.
  • Ordinal : the data can be categorised and ranked.
  • Interval : the data can be categorised and ranked, and evenly spaced.
  • Ratio : the data can be categorised, ranked, evenly spaced and has a natural zero.
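One way to see why higher levels permit more analyses is with Python’s standard statistics module: a mode is meaningful at every level, a median needs at least ordinal data, and a mean needs interval or ratio data. The sample values below are invented for illustration:

```python
# Illustrative sketch: which summary statistics make sense at each level.
from statistics import mode, median, mean

nominal = ["red", "blue", "red", "green"]    # categories only
ordinal = [1, 2, 2, 3, 5]                    # coded income bands, ranked 1-5
ratio = [21000, 34500, 34500, 48000, 90000]  # exact incomes, with a true zero

print(mode(nominal))    # nominal: only the most frequent category is meaningful
print(median(ordinal))  # ordinal: ranking also makes the median meaningful
print(mean(ratio))      # ratio: means (and ratios of values) are meaningful too
```

Computing a mean of the nominal colours, by contrast, would be meaningless, which is exactly what the hierarchy of measurement levels captures.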

Statistical analysis is the main method for analyzing quantitative research data . It uses probabilities and models to test predictions about a population from sample data.

The null hypothesis is often abbreviated as H₀. When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The alternative hypothesis is often abbreviated as Hₐ or H₁. When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).
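For example, a two-tailed test of a population mean pairs the equality in the null hypothesis with the matching inequality in the alternative:

```latex
H_0:\ \mu = \mu_0 \qquad H_a:\ \mu \neq \mu_0
```

A one-tailed version would instead pair $H_0:\ \mu \le \mu_0$ with $H_a:\ \mu > \mu_0$, so that the two hypotheses together still cover every possible value of $\mu$.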

As the degrees of freedom increase, Student’s t distribution becomes less leptokurtic, meaning that the probability of extreme values decreases. The distribution becomes more and more similar to a standard normal distribution.
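One way to see this numerically: for df > 4, the excess kurtosis of Student’s t distribution is 6 / (df - 4), which shrinks toward 0 (the standard normal’s value) as the degrees of freedom grow. A small sketch:

```python
# Excess kurtosis of Student's t distribution, defined for df > 4.
# A standard normal distribution has excess kurtosis 0.

def t_excess_kurtosis(df):
    if df <= 4:
        raise ValueError("excess kurtosis is undefined or infinite for df <= 4")
    return 6 / (df - 4)

for df in (5, 10, 30, 100):
    print(df, round(t_excess_kurtosis(df), 4))  # 6.0, 1.0, 0.2308, 0.0625
```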

When there are only one or two degrees of freedom, the chi-square distribution is shaped like a backwards ‘J’. When there are three or more degrees of freedom, the distribution is shaped like a right-skewed hump. As the degrees of freedom increase, the hump becomes less right-skewed and its peak moves to the right. The distribution becomes more and more similar to a normal distribution.
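The moving peak can be checked numerically: for df ≥ 2, the chi-square density has its mode at df - 2. Below is a rough standard-library sketch that scans a grid for the peak:

```python
import math

def chi2_pdf(x, df):
    """Chi-square density, written out with math.gamma."""
    return x ** (df / 2 - 1) * math.exp(-x / 2) / (2 ** (df / 2) * math.gamma(df / 2))

def numeric_mode(df):
    """Crude numeric mode: the grid point with the largest density."""
    grid = [i / 100 for i in range(1, 3000)]  # 0.01 .. 29.99
    return max(grid, key=lambda x: chi2_pdf(x, df))

for df in (3, 5, 10):
    print(df, numeric_mode(df))  # peaks near df - 2: 1.0, 3.0, 8.0
```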

‘Looking forward in hearing from you’ is an incorrect version of the phrase looking forward to hearing from you. The phrasal verb ‘looking forward to’ always needs the preposition ‘to’, not ‘in’.

  • Incorrect: I am looking forward in hearing from you.
  • Correct: I am looking forward to hearing from you.

Some synonyms and near synonyms for the expression looking forward to hearing from you include:

  • Eagerly awaiting your response
  • Hoping to hear from you soon
  • It would be great to hear back from you
  • Thanks in advance for your reply

People sometimes mistakenly write ‘looking forward to hear from you’, but this is incorrect. The correct phrase is looking forward to hearing from you .

The phrasal verb ‘look forward to’ is always followed by a direct object, the thing you’re looking forward to. As the direct object has to be a noun phrase , it should be the gerund ‘hearing’, not the verb ‘hear’.

  • Incorrect: I’m looking forward to hear from you soon.
  • Correct: I’m looking forward to hearing from you soon.

Traditionally, the sign-off Yours sincerely is used in an email message or letter when you are writing to someone you have interacted with before, not a complete stranger.

Yours faithfully is used instead when you are writing to someone you have had no previous correspondence with, especially if you greeted them as ‘ Dear Sir or Madam ’.

Just checking in is a standard phrase used to start an email (or other message) that’s intended to ask someone for a response or follow-up action in a friendly, informal way. However, it’s a cliché opening that can come across as passive-aggressive, so we recommend avoiding it in favor of a more direct opening like “We previously discussed …”

In a more personal context, you might encounter “just checking in” as part of a longer phrase such as “I’m just checking in to see how you’re doing”. In this case, it’s not asking the other person to do anything but rather asking about their well-being (emotional or physical) in a friendly way.

“Earliest convenience” is part of the phrase at your earliest convenience , meaning “as soon as you can”. 

It’s typically used to end an email in a formal context by asking the recipient to do something when it’s convenient for them to do so.

ASAP is an abbreviation of the phrase “as soon as possible”. 

It’s typically used to indicate a sense of urgency in highly informal contexts (e.g., “Let me know ASAP if you need me to drive you to the airport”).

“ASAP” should be avoided in more formal correspondence. Instead, use an alternative like at your earliest convenience .

Some synonyms and near synonyms of the verb compose (meaning “to make up”) are:

  • Constitute
  • Form
  • Make up

People increasingly use “comprise” as a synonym of “compose.” However, this is normally still seen as a mistake, and we recommend avoiding it in your academic writing . “Comprise” traditionally means “to be made up of,” not “to make up.”

Some synonyms and near synonyms of the verb comprise are:

  • Be composed of
  • Be made up of

People increasingly use “comprise” interchangeably with “compose,” meaning that they consider words like “compose,” “constitute,” and “form” to be synonymous with “comprise.” However, this is still normally regarded as an error, and we advise against using these words interchangeably in academic writing .

A fallacy is a mistaken belief, particularly one based on unsound arguments or one that lacks the evidence to support it. Common types of fallacy that may compromise the quality of your research are:

  • Correlation/causation fallacy: Claiming that two events that occur together have a cause-and-effect relationship even though this can’t be proven
  • Ecological fallacy : Making inferences about the nature of individuals based on aggregate data for the group
  • The sunk cost fallacy : Following through on a project or decision because we have already invested time, effort, or money into it, even if the current costs outweigh the benefits
  • The base-rate fallacy : Ignoring base-rate or statistically significant information, such as sample size or the relative frequency of an event, in favor of less relevant information (e.g., information pertaining to a single case or a small number of cases)
  • The planning fallacy : Underestimating the time needed to complete a future task, even when we know that similar tasks in the past have taken longer than planned

The planning fallacy refers to people’s tendency to underestimate the resources needed to complete a future task, despite knowing that previous tasks have also taken longer than planned.

For example, people generally tend to underestimate the cost and time needed for construction projects. The planning fallacy occurs due to people’s tendency to overestimate the chances that positive events, such as a shortened timeline, will happen to them. This phenomenon is called optimism bias or positivity bias.

Although both red herring fallacy and straw man fallacy are logical fallacies or reasoning errors, they denote different attempts to “win” an argument. More specifically:

  • A red herring fallacy refers to an attempt to change the subject and divert attention from the original issue. In other words, a seemingly solid but ultimately irrelevant argument is introduced into the discussion, either on purpose or by mistake.
  • A straw man argument involves the deliberate distortion of another person’s argument. By oversimplifying or exaggerating it, the other party creates an easy-to-refute argument and then attacks it.

The red herring fallacy is a problem because it is flawed reasoning. It is a distraction device that causes people to become sidetracked from the main issue and draw wrong conclusions.

Although a red herring may have some kernel of truth, it is used as a distraction to keep our eyes on a different matter. As a result, it can cause us to accept and spread misleading information.

The sunk cost fallacy and escalation of commitment (or commitment bias ) are two closely related terms. However, there is a slight difference between them:

  • Escalation of commitment (aka commitment bias ) is the tendency to be consistent with what we have already done or said we will do in the past, especially if we did so in public. In other words, it is an attempt to save face and appear consistent.
  • Sunk cost fallacy is the tendency to stick with a decision or a plan even when it’s failing. Because we have already invested valuable time, money, or energy, quitting feels like these resources were wasted.

In other words, escalating commitment is a manifestation of the sunk cost fallacy: an irrational escalation of commitment frequently occurs when people refuse to accept that the resources they’ve already invested cannot be recovered. Instead, they insist on more spending to justify the initial investment (and the incurred losses).

When you are faced with a straw man argument , the best way to respond is to draw attention to the fallacy and ask your discussion partner to show how your original statement and their distorted version are the same. Since these are different, your partner will either have to admit that their argument is invalid or try to justify it by using more flawed reasoning, which you can then attack.

The straw man argument is a problem because it occurs when we fail to take an opposing point of view seriously. Instead, we intentionally misrepresent our opponent’s ideas and avoid genuinely engaging with them. Due to this, resorting to straw man fallacy lowers the standard of constructive debate.

A straw man argument is a distorted (and weaker) version of another person’s argument that can easily be refuted (e.g., when a teacher proposes that the class spend more time on math exercises, a parent complains that the teacher doesn’t care about reading and writing).

This is a straw man argument because it misrepresents the teacher’s position, which didn’t mention anything about cutting down on reading and writing. The straw man argument is also known as the straw man fallacy .

A slippery slope argument is not always a fallacy.

  • When someone claims adopting a certain policy or taking a certain action will automatically lead to a series of other policies or actions also being taken, this is a slippery slope argument.
  • If they don’t show a causal connection between the advocated policy and the consequent policies, then they commit a slippery slope fallacy .

There are a number of ways you can deal with slippery slope arguments, especially when you suspect these are fallacious:

  • Slippery slope arguments take advantage of the gray area between an initial action or decision and the possible next steps that might lead to the undesirable outcome. You can point out these missing steps and ask your partner to indicate what evidence exists to support the claimed relationship between two or more events.
  • Ask yourself if each link in the chain of events or action is valid. Every proposition has to be true for the overall argument to work, so even if one link is irrational or not supported by evidence, then the argument collapses.
  • Sometimes people commit a slippery slope fallacy unintentionally. In these instances, use an example that demonstrates the problem with slippery slope arguments in general (e.g., by using statements to reach a conclusion that is not necessarily relevant to the initial statement). By attacking the concept of slippery slope arguments you can show that they are often fallacious.

People sometimes confuse cognitive bias and logical fallacies because they both relate to flawed thinking. However, they are not the same:

  • Cognitive bias is the tendency to make decisions or take action in an illogical way because of our values, memory, socialization, and other personal attributes. In other words, it refers to a fixed pattern of thinking rooted in the way our brain works.
  • Logical fallacies relate to how we make claims and construct our arguments in the moment. They are statements that sound convincing at first but can be disproven through logical reasoning.

In other words, cognitive bias refers to an ongoing predisposition, while logical fallacy refers to mistakes of reasoning that occur in the moment.

An appeal to ignorance (ignorance here meaning lack of evidence) is a type of informal logical fallacy .

It asserts that something must be true because it hasn’t been proven false—or that something must be false because it has not yet been proven true.

For example, “unicorns exist because there is no evidence that they don’t.” The appeal to ignorance is also called the burden of proof fallacy .

An ad hominem (Latin for “to the person”) is a type of informal logical fallacy . Instead of arguing against a person’s position, an ad hominem argument attacks the person’s character or actions in an effort to discredit them.

This rhetorical strategy is fallacious because a person’s character, motive, education, or other personal trait is logically irrelevant to whether their argument is true or false.

Name-calling is common in ad hominem fallacy (e.g., “environmental activists are ineffective because they’re all lazy tree-huggers”).

Ad hominem is a persuasive technique where someone tries to undermine the opponent’s argument by personally attacking them.

In this way, one can redirect the discussion away from the main topic and to the opponent’s personality without engaging with their viewpoint. When the opponent’s personality is irrelevant to the discussion, we call it an ad hominem fallacy .

Ad hominem tu quoque (“you too”) is an attempt to rebut a claim by attacking its proponent on the grounds that they uphold a double standard or that they don’t practice what they preach. For example, someone tells you that you should drive slowly or you’ll get a speeding ticket one of these days, and you reply, “But you used to get them all the time!”

Argumentum ad hominem means “argument to the person” in Latin and it is commonly referred to as ad hominem argument or personal attack. Ad hominem arguments are used in debates to refute an argument by attacking the character of the person making it, instead of the logic or premise of the argument itself.

The opposite of the hasty generalization fallacy is called slothful induction fallacy or appeal to coincidence .

It is the tendency to deny a conclusion even though there is sufficient evidence that supports it. Slothful induction occurs due to our natural tendency to dismiss events or facts that do not align with our personal biases and expectations. For example, a researcher may try to explain away unexpected results by claiming they are just a coincidence.

To avoid a hasty generalization fallacy we need to ensure that the conclusions drawn are well-supported by the appropriate evidence. More specifically:

  • In statistics , if we want to draw inferences about an entire population, we need to make sure that the sample is random and representative of the population . We can achieve that by using a probability sampling method , like simple random sampling or stratified sampling .
  • In academic writing , use precise language and measured phrases. Avoid making absolute claims; cite specific instances and examples without applying the findings to a larger group.
  • As readers, we need to ask ourselves “does the writer demonstrate sufficient knowledge of the situation or phenomenon that would allow them to make a generalization?”
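The first bullet can be sketched in a few lines of Python; the population of 1,000 people split into two strata (700 in group A, 300 in group B) is invented for illustration:

```python
import random

random.seed(42)  # for reproducibility
population = [{"id": i, "group": "A" if i < 700 else "B"} for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
srs = random.sample(population, k=100)

# Stratified sampling: draw from each stratum in proportion to its size.
strata = {"A": [p for p in population if p["group"] == "A"],
          "B": [p for p in population if p["group"] == "B"]}
stratified = []
for members in strata.values():
    k = round(100 * len(members) / len(population))  # proportional allocation
    stratified.extend(random.sample(members, k))

print(len(srs), len(stratified))  # 100 100
```

Both methods give every member a known chance of selection, which is what makes generalizing from the sample to the population defensible.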

The hasty generalization fallacy and the anecdotal evidence fallacy are similar in that they both result in conclusions drawn from insufficient evidence. However, there is a difference between the two:

  • The hasty generalization fallacy involves genuinely considering an example or case (i.e., the evidence comes first and then an incorrect conclusion is drawn from this).
  • The anecdotal evidence fallacy (also known as “cherry-picking” ) is knowing in advance what conclusion we want to support, and then selecting the story (or a few stories) that support it. By overemphasizing anecdotal evidence that fits well with the point we are trying to make, we overlook evidence that would undermine our argument.

Although many sources use circular reasoning fallacy and begging the question interchangeably, others point out that there is a subtle difference between the two:

  • Begging the question fallacy occurs when you assume that an argument is true in order to justify a conclusion. If something begs the question, what you are actually asking is, “Is the premise of that argument actually true?” For example, the statement “Snakes make great pets. That’s why we should get a snake” begs the question “are snakes really great pets?”
  • Circular reasoning fallacy, on the other hand, occurs when the evidence used to support a claim is just a repetition of the claim itself. For example, “People have free will because they can choose what to do.”

In other words, we could say begging the question is a form of circular reasoning.

Circular reasoning fallacy uses circular reasoning to support an argument. More specifically, the evidence used to support a claim is just a repetition of the claim itself. For example: “The President of the United States is a good leader (claim), because they are the leader of this country (supporting evidence)”.

An example of a non sequitur is the following statement:

“Giving up nuclear weapons weakened the United States’ military. Giving up nuclear weapons also weakened China. For this reason, it is wrong to try to outlaw firearms in the United States today.”

Clearly there is a step missing in this line of reasoning and the conclusion does not follow from the premise, resulting in a non sequitur fallacy .

The difference between the post hoc fallacy and the non sequitur fallacy is that post hoc fallacy infers a causal connection between two events where none exists, whereas the non sequitur fallacy infers a conclusion that lacks a logical connection to the premise.

In other words, a post hoc fallacy occurs when there is a lack of a cause-and-effect relationship, while a non sequitur fallacy occurs when there is a lack of logical connection.

An example of post hoc fallacy is the following line of reasoning:

“Yesterday I had ice cream, and today I have a terrible stomachache. I’m sure the ice cream caused this.”

Although it is possible that the ice cream had something to do with the stomachache, there is no proof to justify the conclusion other than the order of events. Therefore, this line of reasoning is fallacious.

Post hoc fallacy and hasty generalization fallacy are similar in that they both involve jumping to conclusions. However, there is a difference between the two:

  • Post hoc fallacy is assuming a cause-and-effect relationship between two events, simply because one happened after the other.
  • Hasty generalization fallacy is drawing a general conclusion from a small sample or little evidence.

In other words, post hoc fallacy involves a leap to a causal claim; hasty generalization fallacy involves a leap to a general proposition.

The fallacy of composition is similar to and can be confused with the hasty generalization fallacy . However, there is a difference between the two:

  • The fallacy of composition involves drawing an inference about the characteristics of a whole or group based on the characteristics of its individual members.
  • The hasty generalization fallacy involves drawing an inference about a population or class of things on the basis of few atypical instances or a small sample of that population or thing.

In other words, the fallacy of composition is using an unwarranted assumption that we can infer something about a whole based on the characteristics of its parts, while the hasty generalization fallacy is using insufficient evidence to draw a conclusion.

The opposite of the fallacy of composition is the fallacy of division . In the fallacy of division, the assumption is that a characteristic which applies to a whole or a group must necessarily apply to the parts or individual members. For example, “Australians travel a lot. Gary is Australian, so he must travel a lot.”

Base rate fallacy can be avoided by following these steps:

  • Avoid making an important decision in haste. When we are under pressure, we are more likely to resort to cognitive shortcuts like the availability heuristic and the representativeness heuristic . Due to this, we are more likely to factor in only current and vivid information, and ignore the actual probability of something happening (i.e., base rate).
  • Take a long-term view on the decision or question at hand. Look for relevant statistical data, which can reveal long-term trends and give you the full picture.
  • Talk to experts like professionals. They are more aware of probabilities related to specific decisions.

Suppose there is a population consisting of 90% psychologists and 10% engineers. Given that you know someone enjoyed physics at school, you may conclude that they are an engineer rather than a psychologist, even though you know that this person comes from a population consisting of far more psychologists than engineers.

When we ignore the rate of occurrence of some trait in a population (the base-rate information) we commit base rate fallacy .
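Bayes’ theorem makes the cost of ignoring the base rate explicit. In the sketch below, only the 90%/10% split comes from the example above; the likelihoods (how often each profession enjoyed physics at school) are invented:

```python
# Hypothetical likelihoods; only the base rates come from the example.
p_psych, p_eng = 0.90, 0.10            # base rates in the population
p_physics_given_psych = 0.20           # assumed
p_physics_given_eng = 0.80             # assumed

# Bayes' theorem: P(eng | physics) = P(physics | eng) * P(eng) / P(physics)
p_physics = p_physics_given_psych * p_psych + p_physics_given_eng * p_eng
p_eng_given_physics = p_physics_given_eng * p_eng / p_physics

print(round(p_eng_given_physics, 3))  # 0.308: "psychologist" is still more likely
```

Even with a strong cue in favor of “engineer”, respecting the base rate leaves “psychologist” as the more probable answer.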

Cost-benefit fallacy is a common error that occurs when allocating resources in project management. It is the fallacy of assuming that cost-benefit estimates are more or less accurate, when in fact they are highly inaccurate and biased. This means that cost-benefit analyses can be useful, but only after the cost-benefit fallacy has been acknowledged and corrected for. Cost-benefit fallacy is a type of base rate fallacy.

In advertising, the fallacy of equivocation is often used to create a pun. For example, a billboard company might advertise their billboards using a line like: “Looking for a sign? This is it!” The word sign has a literal meaning as billboard and a figurative one as a sign from God, the universe, etc.

Equivocation is a fallacy because it is a form of argumentation that is both misleading and logically unsound. When the meaning of a word or phrase shifts in the course of an argument, it causes confusion and also implies that the conclusion (which may be true) does not follow from the premise.

The fallacy of equivocation is an informal logical fallacy, meaning that the error lies in the content of the argument instead of the structure.

Fallacies of relevance are a group of fallacies that occur in arguments when the premises are logically irrelevant to the conclusion. Although at first there seems to be a connection between the premise and the conclusion, in reality fallacies of relevance use unrelated forms of appeal.

For example, the genetic fallacy makes an appeal to the source or origin of the claim in an attempt to assert or refute something.

The ad hominem fallacy and the genetic fallacy are closely related in that they are both fallacies of relevance. In other words, they both involve arguments that use evidence or examples that are not logically related to the argument at hand. However, there is a difference between the two:

  • In the ad hominem fallacy , the goal is to discredit the argument by discrediting the person currently making the argument.
  • In the genetic fallacy , the goal is to discredit the argument by discrediting the history or origin (i.e., genesis) of an argument.

False dilemma fallacy is also known as false dichotomy, false binary, and “either-or” fallacy. It is the fallacy of presenting only two choices, outcomes, or sides to an argument as the only possibilities, when more are available.

The false dilemma fallacy works in two ways:

  • By presenting only two options as if these were the only ones available
  • By presenting two options as mutually exclusive (i.e., only one option can be selected or can be true at a time)

In both cases, by using the false dilemma fallacy, one conceals alternative choices and doesn’t allow others to consider the full range of options. This is usually achieved through an “either-or” construction and polarized, divisive language (“you are either a friend or an enemy”).

The best way to avoid a false dilemma fallacy is to pause and reflect on two points:

  • Are the options presented truly the only ones available? It could be that another option has been deliberately omitted.
  • Are the options mentioned mutually exclusive? Perhaps all of the available options can be selected (or be true) at the same time, which shows that they aren’t mutually exclusive. Proving this is called “escaping between the horns of the dilemma.”

Begging the question fallacy is an argument in which you assume what you are trying to prove. In other words, your position and the justification of that position are the same, only slightly rephrased.

For example: “All freshmen should attend college orientation, because all college students should go to such an orientation.”

The complex question fallacy and begging the question fallacy are similar in that they are both based on assumptions. However, there is a difference between them:

  • A complex question fallacy occurs when someone asks a question that presupposes the answer to another question that has not been established or accepted by the other person. For example, asking someone “Have you stopped cheating on tests?”, unless it has previously been established that the person is indeed cheating on tests, is a fallacy.
  • Begging the question fallacy occurs when we assume the very thing as a premise that we’re trying to prove in our conclusion. In other words, the conclusion is used to support the premises, and the premises prove the validity of the conclusion. For example: “God exists because the Bible says so, and the Bible is true because it is the word of God.”

In other words, begging the question is about drawing a conclusion based on an assumption, while a complex question involves asking a question that presupposes the answer to a prior question.

“No true Scotsman” arguments aren’t always fallacious. When there is a generally accepted definition of who or what constitutes a group, it’s reasonable to use statements in the form of “no true Scotsman”.

For example, the statement that “no true pacifist would volunteer for military service” is not fallacious, since a pacifist is, by definition, someone who opposes war or violence as a means of settling disputes.

No true Scotsman arguments are fallacious because instead of logically refuting the counterexample, they simply assert that it doesn’t count. In other words, the counterexample is rejected for psychological, but not logical, reasons.

The appeal to purity or no true Scotsman fallacy is an attempt to defend a generalization about a group from a counterexample by shifting the definition of the group in the middle of the argument. In this way, one can exclude the counterexample as not being “true”, “genuine”, or “pure” enough to be considered part of the group in question.

To identify an appeal to authority fallacy , you can ask yourself the following questions:

  • Is the authority cited really a qualified expert in this particular area under discussion? For example, someone who has formal education or years of experience can be an expert.
  • Do experts disagree on this particular subject? If that is the case, then for almost any claim supported by one expert there will be a counterclaim that is supported by another expert. If there is no consensus, an appeal to authority is fallacious.
  • Is the authority in question biased? If you suspect that an expert’s prejudice and bias could have influenced their views, then the expert is not reliable, and an argument citing this expert will be fallacious.

Appeal to authority is a fallacy when those who use it do not provide any justification to support their argument. Instead they cite someone famous who agrees with their viewpoint, but is not qualified to make reliable claims on the subject.

Appeal to authority fallacy is often convincing because of the effect authority figures have on us. When someone cites a famous person, a well-known scientist, a politician, etc. people tend to be distracted and often fail to critically examine whether the authority figure is indeed an expert in the area under discussion.

The ad populum fallacy is common in politics. One example is the following viewpoint: “The majority of our countrymen think we should have military operations overseas; therefore, it’s the right thing to do.”

This line of reasoning is fallacious, because popular acceptance of a belief or position does not amount to a justification of that belief. In other words, following the prevailing opinion without examining the underlying reasons is irrational.

The ad populum fallacy plays on our innate desire to fit in (known as “bandwagon effect”). If many people believe something, our common sense tells us that it must be true and we tend to accept it. However, in logic, the popularity of a proposition cannot serve as evidence of its truthfulness.

Ad populum (or appeal to popularity) fallacy and appeal to authority fallacy are similar in that they both conflate the validity of a belief with its popular acceptance among a specific group. However, there is a key difference between the two:

  • An ad populum fallacy tries to persuade others by claiming that something is true or right because a lot of people think so.
  • An appeal to authority fallacy tries to persuade by claiming a group of experts believe something is true or right, therefore it must be so.

To identify a false cause fallacy , you need to carefully analyse the argument:

  • When someone claims that one event directly causes another, ask if there is sufficient evidence to establish a cause-and-effect relationship. 
  • Ask if the claim is based merely on the chronological order or co-occurrence of the two events. 
  • Consider alternative possible explanations (are there other factors at play that could influence the outcome?).

By carefully analysing the reasoning, considering alternative explanations, and examining the evidence provided, you can identify a false cause fallacy and discern whether a causal claim is valid or flawed.

False cause fallacy examples include:

  • Believing that wearing your lucky jersey will help your team win
  • Thinking that every time you wash your car, it rains
  • Claiming that playing video games causes violent behavior

In each of these examples, we falsely assume that one event causes another without any proof.

The planning fallacy and procrastination are not the same thing. Although they both relate to time and task management, they describe different challenges:

  • The planning fallacy describes our inability to correctly estimate how long a future task will take, mainly due to optimism bias and a strong focus on the best-case scenario.
  • Procrastination refers to postponing a task, usually by focusing on less urgent or more enjoyable activities. This is due to psychological reasons, like fear of failure.

In other words, the planning fallacy refers to inaccurate predictions about the time we need to finish a task, while procrastination is a deliberate delay due to psychological factors.

A real-life example of the planning fallacy is the construction of the Sydney Opera House in Australia. When construction began in the late 1950s, it was initially estimated that it would be completed in four years at a cost of around $7 million.

Because the government wanted the construction to start before political opposition would stop it and while public opinion was still favorable, a number of design issues had not been carefully studied in advance. Due to this, several problems appeared immediately after the project commenced.

The construction process eventually stretched over 14 years, with the Opera House being completed in 1973 at a cost of over $100 million, significantly exceeding the initial estimates.

An example of appeal to pity fallacy is the following appeal by a student to their professor:

“Professor, please consider raising my grade. I had a terrible semester: my car broke down, my laptop got stolen, and my cat got sick.”

While these circumstances may be unfortunate, they are not directly related to the student’s academic performance.

While both the appeal to pity fallacy and red herring fallacy can serve as a distraction from the original discussion topic, they are distinct fallacies. More specifically:

  • Appeal to pity fallacy attempts to evoke feelings of sympathy, pity, or guilt in an audience, so that they accept the speaker’s conclusion as truthful.
  • Red herring fallacy attempts to introduce an irrelevant piece of information that diverts the audience’s attention to a different topic.

Both fallacies can be used as a tool of deception. However, they operate differently and serve distinct purposes in arguments.

Argumentum ad misericordiam (Latin for “argument from pity or misery”) is another name for appeal to pity fallacy . It occurs when someone evokes sympathy or guilt in an attempt to gain support for their claim, without providing any logical reasons to support the claim itself. Appeal to pity is a deceptive tactic of argumentation, playing on people’s emotions to sway their opinion.

Yes, it’s quite common to start a sentence with a preposition, and there’s no reason not to do so.

For example, the sentence “To many, she was a hero” is perfectly grammatical. It could also be rephrased as “She was a hero to many”, but there’s no particular reason to do so. Both versions are fine.

Some people argue that you shouldn’t end a sentence with a preposition , but that “rule” can also be ignored, since it’s not supported by serious language authorities.

Yes, it’s fine to end a sentence with a preposition . The “rule” against doing so is overwhelmingly rejected by modern style guides and language authorities and is based on the rules of Latin grammar, not English.

Trying to avoid ending a sentence with a preposition often results in very unnatural phrasings. For example, turning “He knows what he’s talking about ” into “He knows about what he’s talking” or “He knows that about which he’s talking” is definitely not an improvement.

No, ChatGPT is not a credible source of factual information and can’t be cited for this purpose in academic writing . While it tries to provide accurate answers, it often gets things wrong because its responses are based on patterns, not facts and data.

Specifically, the CRAAP test for evaluating sources includes five criteria: currency , relevance , authority , accuracy , and purpose . ChatGPT fails to meet at least three of them:

  • Currency: The dataset that ChatGPT was trained on only extends to 2021, making it slightly outdated.
  • Authority: It’s just a language model and is not considered a trustworthy source of factual information.
  • Accuracy: It bases its responses on patterns rather than evidence and is unable to cite its sources .

So you shouldn’t cite ChatGPT as a trustworthy source for a factual claim. You might still cite ChatGPT for other reasons – for example, if you’re writing a paper about AI language models, ChatGPT responses are a relevant primary source .

ChatGPT is an AI language model that was trained on a large body of text from a variety of sources (e.g., Wikipedia, books, news articles, scientific journals). The dataset only went up to 2021, meaning that it lacks information on more recent events.

It’s also important to understand that ChatGPT doesn’t access a database of facts to answer your questions. Instead, its responses are based on patterns that it saw in the training data.

So ChatGPT is not always trustworthy . It can usually answer general knowledge questions accurately, but it can easily give misleading answers on more specialist topics.

Another consequence of this way of generating responses is that ChatGPT usually can’t cite its sources accurately. It doesn’t really know what source it’s basing any specific claim on. It’s best to check any information you get from it against a credible source .

No, it is not possible to cite your sources with ChatGPT . You can ask it to create citations, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.

Instead, use a tool designed for this purpose, like the Scribbr Citation Generator .

But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.

GPT stands for “generative pre-trained transformer”, which is a type of large language model: a neural network trained on a very large amount of text to produce convincing, human-like language outputs. The Chat part of the name indicates that ChatGPT is a chatbot that you interact with by typing in text.

The technology behind ChatGPT is GPT-3.5 (in the free version) or GPT-4 (in the premium version). These are the names for the specific versions of the GPT model. GPT-4 is currently the most advanced model that OpenAI has created. It’s also the model used in Bing’s chatbot feature.

ChatGPT was created by OpenAI, an AI research company. It started as a nonprofit company in 2015 but became for-profit in 2019. Its CEO is Sam Altman, who also co-founded the company. OpenAI released ChatGPT as a free “research preview” in November 2022. Currently, it’s still available for free, although a more advanced premium version is available if you pay for it.

OpenAI is also known for developing DALL-E, an AI image generator that runs on similar technology to ChatGPT.

ChatGPT is owned by OpenAI, the company that developed and released it. OpenAI is a company dedicated to AI research. It started as a nonprofit company in 2015 but transitioned to for-profit in 2019. Its current CEO is Sam Altman, who also co-founded the company.

In terms of who owns the content generated by ChatGPT, OpenAI states that it will not claim copyright on this content , and the terms of use state that “you can use Content for any purpose, including commercial purposes such as sale or publication”. This means that you effectively own any content you generate with ChatGPT and can use it for your own purposes.

Be cautious about how you use ChatGPT content in an academic context. University policies on AI writing are still developing, so even if you “own” the content, you’re often not allowed to submit it as your own work according to your university or to publish it in a journal.

ChatGPT is a chatbot based on a large language model (LLM). These models are trained on huge datasets consisting of hundreds of billions of words of text, based on which the model learns to effectively predict natural responses to the prompts you enter.

ChatGPT was also refined through a process called reinforcement learning from human feedback (RLHF), which involves “rewarding” the model for providing useful answers and discouraging inappropriate answers – encouraging it to make fewer mistakes.

Essentially, ChatGPT’s answers are based on predicting the most likely responses to your inputs based on its training data, with a reward system on top of this to incentivise it to give you the most helpful answers possible. It’s a bit like an incredibly advanced version of predictive text. This is also one of ChatGPT’s limitations : because its answers are based on probabilities, they’re not always trustworthy .

OpenAI may store ChatGPT conversations for the purposes of future training. Additionally, these conversations may be monitored by human AI trainers.

Users can choose not to have their chat history saved. Unsaved chats are not used to train future models and are permanently deleted from ChatGPT’s system after 30 days.

The official ChatGPT app is currently only available on iOS devices. If you don’t have an iOS device, only use the official OpenAI website to access the tool. This helps to eliminate the potential risk of downloading fraudulent or malicious software.

ChatGPT conversations are generally used to train future models and to resolve issues/bugs. These chats may be monitored by human AI trainers.

However, users can opt out of having their conversations used for training. In these instances, chats are monitored only for potential abuse.

Yes, using ChatGPT as a conversation partner is a great way to practice a language in an interactive way.

Try using a prompt like this one:

“Please be my Spanish conversation partner. Only speak to me in Spanish. Keep your answers short (maximum 50 words). Ask me questions. Let’s start the conversation with the following topic: [conversation topic].”

Yes, there are a variety of ways to use ChatGPT for language learning , including treating it as a conversation partner, asking it for translations, and using it to generate a curriculum or practice exercises.

AI detectors aim to identify the presence of AI-generated text (e.g., from ChatGPT ) in a piece of writing, but they can’t do so with complete accuracy. In our comparison of the best AI detectors , we found that the 10 tools we tested had an average accuracy of 60%. The best free tool had 68% accuracy, the best premium tool 84%.

Because of how AI detectors work , they can never guarantee 100% accuracy, and there is always at least a small risk of false positives (human text being marked as AI-generated). Therefore, these tools should not be relied upon to provide absolute proof that a text is or isn’t AI-generated. Rather, they can provide a good indication in combination with other evidence.

Tools called AI detectors are designed to label text as AI-generated or human. AI detectors work by looking for specific characteristics in the text, such as a low level of randomness in word choice and sentence length. These characteristics are typical of AI writing, allowing the detector to make a good guess at when text is AI-generated.

But these tools can’t guarantee 100% accuracy. Check out our comparison of the best AI detectors to learn more.

You can also manually watch for clues that a text is AI-generated – for example, a very different style from the writer’s usual voice or a generic, overly polite tone.

Our research into the best summary generators (aka summarisers or summarising tools) found that the best summariser available in 2023 is the one offered by QuillBot.

While many summarisers just pick out some sentences from the text, QuillBot generates original summaries that are creative, clear, accurate, and concise. It can summarise texts of up to 1,200 words for free, or up to 6,000 with a premium subscription.

Try the QuillBot summarizer for free

Deep learning requires a large dataset (e.g., images or text) to learn from. The more diverse and representative the data, the better the model will learn to recognise objects or make predictions. Only when the training data is sufficiently varied can the model make accurate predictions or recognise objects from new data.

Deep learning models can be biased in their predictions if the training data consist of biased information. For example, if a deep learning model used for screening job applicants has been trained with a dataset consisting primarily of white male applicants, it will consistently favour this specific population over others.

A good ChatGPT prompt (i.e., one that will get you the kinds of responses you want):

  • Gives the tool a role to explain what type of answer you expect from it
  • Is precisely formulated and gives enough context
  • Is free from bias
  • Has been tested and improved by experimenting with the tool

ChatGPT prompts are the textual inputs (e.g., questions, instructions) that you enter into ChatGPT to get responses.

ChatGPT predicts an appropriate response to the prompt you entered. In general, a more specific and carefully worded prompt will get you better responses.

Yes, ChatGPT is currently available for free. You have to sign up for a free account to use the tool, and you should be aware that your data may be collected to train future versions of the model.

To sign up and use the tool for free, go to this page and click “Sign up”. You can do so with your email or with a Google account.

A premium version of the tool called ChatGPT Plus is available as a monthly subscription. It currently costs £16 and gets you access to features like GPT-4 (a more advanced version of the language model). But it’s optional: you can use the tool completely free if you’re not interested in the extra features.

You can access ChatGPT by signing up for a free account:

  • Follow this link to the ChatGPT website.
  • Click on “Sign up” and fill in the necessary details (or use your Google account). It’s free to sign up and use the tool.
  • Type a prompt into the chat box to get started!

A ChatGPT app is also available for iOS, and an Android app is planned for the future. The app works similarly to the website, and you log in with the same account for both.

According to OpenAI’s terms of use, users have the right to reproduce text generated by ChatGPT during conversations.

However, publishing ChatGPT outputs may have legal implications , such as copyright infringement.

Users should be aware of such issues and use ChatGPT outputs as a source of inspiration instead.

According to OpenAI’s terms of use, users have the right to use outputs from their own ChatGPT conversations for any purpose (including commercial publication).

However, users should be aware of the potential legal implications of publishing ChatGPT outputs. ChatGPT responses are not always unique: different users may receive the same response.

Furthermore, ChatGPT outputs may contain copyrighted material. Users may be liable if they reproduce such material.

ChatGPT can sometimes reproduce biases from its training data , since it draws on the text it has “seen” to create plausible responses to your prompts.

For example, users have shown that it sometimes makes sexist assumptions such as that a doctor mentioned in a prompt must be a man rather than a woman. Some have also pointed out political bias in terms of which political figures the tool is willing to write positively or negatively about and which requests it refuses.

The tool is unlikely to be consistently biased toward a particular perspective or against a particular group. Rather, its responses are based on its training data and on the way you phrase your ChatGPT prompts . It’s sensitive to phrasing, so asking it the same question in different ways will result in quite different answers.

Information extraction  refers to the process of starting from unstructured sources (e.g., text documents written in ordinary English) and automatically extracting structured information (i.e., data in a clearly defined format that’s easily understood by computers). It’s an important concept in natural language processing (NLP) .

For example, you might think of using news articles full of celebrity gossip to automatically create a database of the relationships between the celebrities mentioned (e.g., married, dating, divorced, feuding). You would end up with data in a structured format, something like MarriageBetween(celebrity 1 ,celebrity 2 ,date) .

The challenge involves developing systems that can “understand” the text well enough to extract this kind of data from it.
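As a toy illustration of the idea, the sketch below pulls a structured “marriage” record out of a plain-English sentence with a single hand-written pattern. The pattern and names are hypothetical, and real information-extraction systems use far more robust NLP techniques than one regular expression:

```python
import re

# Illustrative only: a hand-written pattern for one relationship type.
# Real systems handle many phrasings, entity variants, and ambiguity.
PATTERN = re.compile(r"(?P<a>[A-Z][a-z]+) married (?P<b>[A-Z][a-z]+) in (?P<date>\d{4})")

def extract_marriage(sentence):
    """Return (celebrity1, celebrity2, date) or None if no match is found."""
    m = PATTERN.search(sentence)
    if m is None:
        return None
    return (m.group("a"), m.group("b"), m.group("date"))

print(extract_marriage("Alice married Bob in 2019."))
```

The output is exactly the kind of structured record described above, which could then be stored in a database.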

Knowledge representation and reasoning (KRR) is the study of how to represent information about the world in a form that can be used by a computer system to solve and reason about complex problems. It is an important field of artificial intelligence (AI) research.

An example of a KRR application is a semantic network, a way of grouping words or concepts by how closely related they are and formally defining the relationships between them so that a machine can “understand” language in something like the way people do.

A related concept is information extraction , concerned with how to get structured information from unstructured sources.

Yes, you can use ChatGPT to summarise text . This can help you understand complex information more easily, summarise the central argument of your own paper, or clarify your research question.

You can also use Scribbr’s free text summariser , which is designed specifically for this purpose.

Yes, you can use ChatGPT to paraphrase text to help you express your ideas more clearly, explore different ways of phrasing your arguments, and avoid repetition.

However, it’s not specifically designed for this purpose. We recommend using a specialised tool like Scribbr’s free paraphrasing tool , which will provide a smoother user experience.

Yes, you can use ChatGPT to help write your college essay by having it generate feedback on certain aspects of your work (consistency of tone, clarity of structure, etc.).

However, ChatGPT is not able to adequately judge qualities like vulnerability and authenticity. For this reason, it’s important to also ask for feedback from people who have experience with college essays and who know you well. Alternatively, you can get advice using Scribbr’s essay editing service .

No, having ChatGPT write your college essay can negatively impact your application in numerous ways. ChatGPT outputs are unoriginal and lack personal insight.

Furthermore, passing off AI-generated text as your own work is considered academically dishonest . AI detectors may be used to detect this offense, and it’s highly unlikely that any university will accept you if you are caught submitting an AI-generated admission essay.

However, you can use ChatGPT to help write your college essay during the preparation and revision stages (e.g., for brainstorming ideas and generating feedback).

ChatGPT and other AI writing tools can have unethical uses. These include:

  • Reproducing biases and false information
  • Using ChatGPT to cheat in academic contexts
  • Violating the privacy of others by inputting personal information

However, when used correctly, AI writing tools can be helpful resources for improving your academic writing and research skills. Some ways to use ChatGPT ethically include:

  • Following your institution’s guidelines
  • Critically evaluating outputs
  • Being transparent about how you used the tool

Ask our team

Want to contact us directly? No problem. We are always here for you.

Support team - Nina

Our support team is here to help you daily via chat, WhatsApp, email, or phone between 9:00 a.m. and 11:00 p.m. CET.

Our APA experts default to APA 7 for editing and formatting. For the Citation Editing Service you are able to choose between APA 6 and 7.

Yes, if your document is longer than 20,000 words, you will get a sample of approximately 2,000 words. This sample edit gives you a first impression of the editor’s editing style and a chance to ask questions and give feedback.

How does the sample edit work?

You will receive the sample edit within 24 hours after placing your order. You then have 24 hours to let us know if you’re happy with the sample or if there’s something you would like the editor to do differently.

Read more about how the sample edit works

Yes, you can upload your document in sections.

We try our best to ensure that the same editor checks all the different sections of your document. When you upload a new file, our system recognizes you as a returning customer, and we immediately contact the editor who helped you before.

However, we cannot guarantee that the same editor will be available. Your chances are higher if

  • You send us your text as soon as possible and
  • You can be flexible about the deadline.

Please note that the shorter your deadline is, the lower the chance that your previous editor will be available.

If your previous editor isn’t available, then we will inform you immediately and look for another qualified editor. Fear not! Every Scribbr editor follows the  Scribbr Improvement Model  and will deliver high-quality work.

Yes, our editors also work during the weekends and holidays.

Because we have many editors available, we can check your document 24 hours per day and 7 days per week, all year round.

If you choose a 72 hour deadline and upload your document on a Thursday evening, you’ll have your thesis back by Sunday evening!

Yes! Our editors are all native speakers, and they have lots of experience editing texts written by ESL students. They will make sure your grammar is perfect and point out any sentences that are difficult to understand. They’ll also notice your most common mistakes, and give you personal feedback to improve your writing in English.

Every Scribbr order comes with our award-winning Proofreading & Editing service , which combines two important stages of the revision process.

For a more comprehensive edit, you can add a Structure Check or Clarity Check to your order. With these building blocks, you can customize the kind of feedback you receive.

You might be familiar with a different set of editing terms. To help you understand what you can expect at Scribbr, here is how common types of editing map to our services:

  • Proofreading: This is the “proofreading” in Scribbr’s standard service. It can only be selected in combination with editing.
  • Copy editing: This is the “editing” in Scribbr’s standard service. It can only be selected in combination with proofreading.
  • Line editing: Select the Structure Check and Clarity Check to receive a comprehensive edit equivalent to a line edit.
  • Developmental editing: This kind of editing involves heavy rewriting and restructuring. Our editors cannot help with this.

View an example

When you place an order, you can specify your field of study and we’ll match you with an editor who has familiarity with this area.

However, our editors are language specialists, not academic experts in your field. Your editor’s job is not to comment on the content of your dissertation, but to improve your language and help you express your ideas as clearly and fluently as possible.

This means that your editor will understand your text well enough to give feedback on its clarity, logic and structure, but not on the accuracy or originality of its content.

Good academic writing should be understandable to a non-expert reader, and we believe that academic editing is a discipline in itself. The research, ideas and arguments are all yours – we’re here to make sure they shine!

After your document has been edited, you will receive an email with a link to download the document.

The editor has made changes to your document using ‘Track Changes’ in Word. This means that you only have to accept or ignore the changes that are made in the text one by one.

It is also possible to accept all changes at once. However, we strongly advise you not to do so for the following reasons:

  • You can learn a lot by looking at the mistakes you made.
  • The editors don’t only change the text – they also place comments when sentences or sometimes even entire paragraphs are unclear. You should read through these comments and take into account your editor’s tips and suggestions.
  • With a final read-through, you can make sure you’re 100% happy with your text before you submit!

You choose the turnaround time when ordering. We can return your dissertation within 24 hours , 3 days or 1 week . These timescales include weekends and holidays. As soon as you’ve paid, the deadline is set, and we guarantee to meet it! We’ll notify you by text and email when your editor has completed the job.

Very large orders might not be possible to complete in 24 hours. On average, our editors can complete around 13,000 words in a day while maintaining our high quality standards. If your order is longer than this and urgent, contact us to discuss possibilities.

Always leave yourself enough time to check through the document and accept the changes before your submission deadline.

Scribbr specialises in editing study-related documents. We check:

  • Graduation projects
  • Dissertations
  • Admissions essays
  • College essays
  • Application essays
  • Personal statements
  • Process reports
  • Reflections
  • Internship reports
  • Academic papers
  • Research proposals
  • Prospectuses

Calculate the costs

The fastest turnaround time is 24 hours.

You can upload your document at any time and choose between four deadlines.

At Scribbr, we promise to make every customer 100% happy with the service we offer. Our philosophy: Your complaint is always justified – no denial, no doubts.

Our customer support team is here to find the solution that helps you the most, whether that’s a free new edit or a refund for the service.

Yes, in the order process you can indicate your preference for American, British, or Australian English .

If you don’t choose one, your editor will follow the style of English you currently use. If your editor has any questions about this, we will contact you.

Qualitative vs. Quantitative Data: 7 Key Differences


Qualitative data is information you can describe with words rather than numbers. 

Quantitative data is information represented in a measurable way using numbers. 

One type of data isn’t better than the other. 

To conduct thorough research, you need both. But knowing the difference between them is important if you want to harness the full power of both qualitative and quantitative data. 

In this post, we’ll explore seven key differences between these two types of data. 

#1. The Type of Data

The single biggest difference between quantitative and qualitative data is that one deals with numbers, and the other deals with concepts and ideas. 

The words “qualitative” and “quantitative” are really similar, which can make it hard to keep track of which one is which. I like to think of them this way: 

  • Quantitative = quantity = numbers-related data
  • Qualitative = quality = descriptive data

Qualitative data—the descriptive one—usually involves written or spoken words, images, or even objects. It’s collected in all sorts of ways: video recordings, interviews, open-ended survey responses, and field notes, for example. 

I like how researcher James W. Crick defines qualitative research in a 2021 issue of the Journal of Strategic Marketing : “Qualitative research is designed to generate in-depth and subjective findings to build theory.”

In other words, qualitative research helps you learn more about a topic—usually from a primary, or firsthand, source—so you can form ideas about what it means. This type of data is often rich in detail, and its interpretation can vary depending on who’s analyzing it. 

Here’s what I mean: if you ask five different people to observe how 60 kittens behave when presented with a hamster wheel, you’ll get five different versions of the same event. 

Quantitative data, on the other hand, is all about numbers and statistics. There’s no wiggle room when it comes to interpretation. In our kitten scenario, quantitative data might show us that of the 60 kittens presented with a hamster wheel, 40 pawed at it, 5 jumped inside and started spinning, and 15 ignored it completely.

There’s no ifs, ands, or buts about the numbers. They just are. 

#2. When to Use Each Type of Data

You should use both quantitative and qualitative data to make decisions for your business. 

Quantitative data helps you get to the what . Qualitative data unearths the why .

Quantitative data captures surface information, like numbers. Qualitative data dives deep beneath those same numbers and fleshes out the nuances there. 

Research projects can often benefit from both types of data, which is why you’ll see the term “mixed-method” research in peer-reviewed journals. The term “mixed-method” refers to using both quantitative and qualitative methods in a study. 

So, maybe you’re diving into original research. Or maybe you’re looking at other peoples’ studies to make an important business decision. In either case, you can use both quantitative and qualitative data to guide you.

Imagine you want to start a company that makes hamster wheels for cats. You run that kitten experiment, only to learn that most kittens aren’t all that interested in the hamster wheel. That’s what your quantitative data seems to say. Of the 60 kittens who participated in the study, only 5 hopped into the wheel. 

But 40 of the kittens pawed at the wheel. According to your quantitative data, these 40 kittens touched the wheel but did not get inside. 

This is where your qualitative data comes into play. Why did these 40 kittens touch the wheel but stop exploring it? You turn to the researchers’ observations. Since there were five different researchers, you have five sets of detailed notes to study. 

From these observations, you learn that many of the kittens seemed frightened when the wheel moved after they pawed it. They grew suspicious of the structure, meowing and circling it, agitated.

One researcher noted that the kittens seemed desperate to enjoy the wheel, but they didn’t seem to feel it was safe. 

So your idea isn’t a flop, exactly. 

It just needs tweaking. 

According to your quantitative data, 75% of the kittens studied either touched or actively participated in the hamster wheel. Your qualitative data suggests more kittens would have jumped into the wheel if it hadn’t moved so easily when they pawed at it. 

You decide to make your kitten wheel sturdier and try the whole test again with a new set of kittens. Hopefully, this time a higher percentage of your feline participants will hop in and enjoy the fun. 

This is a very simplistic and fictional example of how a mixed-method approach can help you make important choices for your business. 

#3. Data You Have Access To

When you can swing it, you should look at both qualitative and quantitative data before you make any big decisions. 

But this is where we come to another big difference between quantitative vs. qualitative data: it’s a lot easier to source qualitative data than quantitative data. 

Why? Because it’s easy to run a survey, host a focus group, or conduct a round of interviews. All you have to do is hop on SurveyMonkey or Zoom and you’re on your way to gathering original qualitative data. 

And yes, you can get some quantitative data here. If you run a survey and 45 customers respond, you can collect demographic data and yes/no answers for that pool of 45 respondents.

But this is a relatively small sample size. (More on why this matters in a moment.) 

To tell you anything meaningful, quantitative data must achieve statistical significance. 

If it’s been a while since your college statistics class, here’s a refresh: statistical significance is a measuring stick. It tells you whether the results you get are due to a specific cause or if they can be attributed to random chance. 

To achieve statistical significance in a study, you have to be really careful to set the study up the right way and with a meaningful sample size.
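To make significance testing slightly less abstract, here’s a minimal sketch of one common test (a two-proportion z-test) in plain Python. The kitten numbers are made up for illustration, and in practice you’d use a proper statistics package rather than hand-rolled formulas:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test (textbook formula; illustrative sketch)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled proportion under the null hypothesis that both groups are the same
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 40 of 60 kittens touched wheel design A,
# versus 52 of 60 for a sturdier design B. Is the difference real?
z, p = two_proportion_z(40, 60, 52, 60)
print(z, p)
```

If the p-value comes out below your chosen threshold (commonly 0.05), the difference is unlikely to be down to random chance alone.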

This doesn’t mean it’s impossible to get quantitative data. But unless you have someone on your team who knows all about null hypotheses and p-values and statistical analysis, you might need to outsource quantitative research. 

Plenty of businesses do this, but it’s pricey. 

When you’re just starting out or you’re strapped for cash, qualitative data can get you valuable information—quickly and without gouging your wallet. 

#4. Big vs. Small Sample Size

Another reason qualitative data is more accessible? It requires a smaller sample size to achieve meaningful results. 

Even one person’s perspective brings value to a research project—ever heard of a case study?

The sweet spot depends on the purpose of the study, but for qualitative market research, somewhere between 10 and 40 respondents is a good number. 

Any more than that and you risk reaching saturation. That’s when you keep getting results that echo each other and add nothing new to the research.

Quantitative data needs enough respondents to reach statistical significance without veering into saturation territory. 

The ideal sample size number is usually higher than it is for qualitative data. But as with qualitative data, there’s no single, magic number. It all depends on statistical values like confidence level, population size, and margin of error.
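One widely used back-of-the-envelope calculation is Cochran’s formula for proportions, which turns a confidence level and margin of error into a minimum sample size. This is a rough sketch, not statistical advice:

```python
import math

# z-scores for the usual confidence levels
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(confidence=0.95, margin_of_error=0.05, proportion=0.5):
    """Cochran's formula: minimum sample size for estimating a proportion.
    proportion=0.5 is the conservative default (maximises required n)."""
    z = Z_SCORES[confidence]
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

print(sample_size())                         # 95% confidence, ±5% margin
print(sample_size(margin_of_error=0.03))     # tighter margin needs more people
```

Notice how shrinking the margin of error drives the required sample size up sharply, which is exactly why quantitative studies tend to need many more respondents than qualitative ones.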

Because it often requires a larger sample size, quantitative research can be more difficult for the average person to do on their own. 

#5. Methods of Analysis

Running a study is just the first part of conducting qualitative and quantitative research. 

After you’ve collected data, you have to study it. Find themes, patterns, consistencies, inconsistencies. Interpret and organize the numbers or survey responses or interview recordings. Tidy it all up into something you can draw conclusions from and apply to various situations. 

This is called data analysis, and it’s done in completely different ways for qualitative vs. quantitative data. 

For qualitative data, analysis includes: 

  • Data prep: Make all your qualitative data easy to access and read. This could mean organizing survey results by date, or transcribing interviews, or putting photographs into a slideshow format. 
  • Coding: No, not that kind. Think color coding, like you did for your notes in school. Assign colors or codes to specific attributes that make sense for your study—green for positive emotions, for instance, and red for angry emotions. Then code each of your responses. 
  • Thematic analysis: Organize your codes into themes and sub-themes, looking for the meaning—and relationships—within each one. 
  • Content analysis: Quantify the number of times certain words or concepts appear in your data. If this sounds suspiciously like quantitative research to you, it is. Sort of. It’s looking at qualitative data with a quantitative eye to identify any recurring themes or patterns. 
  • Narrative analysis: Look for similar stories and experiences and group them together. Study them and draw inferences from what they say.
  • Interpret and document: As you organize and analyze your qualitative data, decide what the findings mean for you and your project.
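The content-analysis step above can be sketched in a few lines of Python. The responses and coded terms here are hypothetical; the idea is simply to count how often each coded term appears across open-ended answers:

```python
import re
from collections import Counter

# hypothetical open-ended survey responses
responses = [
    "The checkout was confusing and slow.",
    "Loved the product, but checkout felt slow.",
    "Fast shipping. Confusing returns page, though.",
]

# codes chosen for this (made-up) study
codes = {"checkout", "slow", "confusing", "shipping"}

# lowercase everything, split into words, and tally only coded terms
words = re.findall(r"[a-z]+", " ".join(responses).lower())
counts = Counter(w for w in words if w in codes)

for term, n in counts.most_common():
    print(term, n)  # e.g. "checkout 2"
```

The output is a small frequency table of your codes, which is exactly the quantitative-flavored summary content analysis produces from qualitative data.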

You can often do qualitative data analysis manually or with tools like NVivo and ATLAS.ti. These tools help you organize, code, and analyze your subjective qualitative data. 

Quantitative data analysis is a lot less subjective. Here’s how it generally goes: 

  • Data cleaning: Remove all inconsistencies and inaccuracies from your data. Check for duplicates, incorrect formatting (a 1.00 value mistakenly entered as 100, for example), and incomplete entries. 
  • Summarize data with descriptive statistics: Use mean, median, mode, range, and standard deviation to summarize your data. 
  • Interpret the data with inferential statistics: This is where it gets more complicated. Instead of simply summarizing stats, you’ll now use complicated mathematical and statistical formulas and tests—t-tests, chi-square tests, analysis of variance (ANOVA), and correlation, for starters—to assign meaning to your data. 
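To make that descriptive-then-inferential flow concrete, here's a standard-library Python sketch with made-up scores for two groups; Welch's two-sample t statistic stands in for the t-tests mentioned above (a full analysis would also convert t into a p-value):

```python
import statistics as stats

# hypothetical test scores for two groups of respondents
group_a = [72, 85, 78, 90, 66, 81, 74]
group_b = [68, 75, 71, 80, 64, 70, 69]

# descriptive statistics: summarize each group
for name, g in (("A", group_a), ("B", group_b)):
    print(name, stats.mean(g), stats.median(g), round(stats.stdev(g), 2))

# inferential step: Welch's two-sample t statistic, computed by hand
def welch_t(a, b):
    se = (stats.variance(a) / len(a) + stats.variance(b) / len(b)) ** 0.5
    return (stats.mean(a) - stats.mean(b)) / se

print(round(welch_t(group_a, group_b), 2))  # about 1.92
```

The descriptive pass tells you what each group looks like; the t statistic is the inferential step that asks whether the difference between the group means is bigger than chance alone would explain.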

Researchers generally use sophisticated data analysis tools like RapidMiner and Tableau to help them do this work. 

#6. Flexibility 

Quantitative research tends to be less flexible than qualitative research. It relies on structured data collection methods, which researchers must set up well before the study begins.

This rigid structure is part of what makes quantitative data so reliable. But the downside here is that once you start the study, it’s hard to change anything without negatively affecting the results. If something unexpected comes up—or if new questions arise—researchers can’t easily change the scope of the study. 

Qualitative research is a lot more flexible. This is why qualitative data can go deeper than quantitative data. If you’re interviewing someone and an interesting, unexpected topic comes up, you can immediately explore it.

Other qualitative research methods offer flexibility, too. Most big survey software brands allow you to build flexible surveys using branching and skip logic. These features let you customize which questions respondents see based on the answers they give.  

That degree of flexibility is rare in quantitative research. But even though it's as flexible as an Olympic gymnast, qualitative data can be less reliable—and harder to validate. 

#7. Reliability and Validity

Quantitative data is generally more reliable than qualitative data. Numbers leave far less room for bias to creep into the interpretation. If you replicate the study—in other words, run the exact same quantitative study two or more times—you should get nearly identical results each time. The same goes if another set of researchers runs the same study using the same methods.

This is what gives quantitative data that reliability factor. 

There are a few key benefits here. First, reliable data means you can confidently make generalizations that apply to a larger population. Reliability alone doesn't guarantee validity, but a measure can't be valid without being reliable, so consistent results are a necessary step toward accurately measuring whatever it is you're trying to measure. 

And finally, reliable data is trustworthy. Big industries like healthcare, marketing, and education frequently use quantitative data to make high-stakes decisions, some of them literally life-or-death. The more reliable and trustworthy the data, the more confident these decision-makers can be when it's time to make critical choices. 

Unlike quantitative data, qualitative data isn't reliable in the same way. It's not easy to replicate: if you send out the same qualitative survey on two separate occasions, you'll get a new mix of responses. Your interpretations of the data might look different, too. 

There’s still incredible value in qualitative data, of course—and there are ways to make sure the data is valid. These include: 

  • Member checking: Circling back with survey, interview, or focus group respondents to make sure you accurately summarized and interpreted their feedback. 
  • Triangulation: Using multiple data sources, methods, or researchers to cross-check and corroborate findings.
  • Peer debriefing: Showing the data to peers—other researchers—so they can review the research process and its findings and provide feedback on both. 

Whether you’re dealing with qualitative or quantitative data, transparency, accuracy, and validity are crucial. Focus on sourcing (or conducting) quantitative research that’s easy to replicate and qualitative research that’s been peer-reviewed.

With rock-solid data like this, you can make critical business decisions with confidence.

Last Updated on May 19, 2022

Quantitative and Qualitative Research Methods: Similarities and Differences Compare & Contrast Essay

Introduction

The aim of this paper is to analyze and to compare quantitative and qualitative research methods. The analysis will begin with the definition and description of the two methods. This will be followed by a discussion on the various aspects of the two research methods.

The similarities and differences between quantitative and qualitative research methods can be seen in their characteristics, data collection methods, data analysis methods, and validity issues, as well as in their strengths and weaknesses.

Definition and Description

The qualitative research method is a technique of “studying phenomena by collecting and analyzing data in non-numeric form”. It focuses on exploring the topic of the study by finding as much detail as possible. The characteristics of qualitative research include the following.

First, it focuses on studying the behavior of individuals in their natural settings. Thus, it does not use artificial experiments. This helps researchers to avoid interfering with the participants’ normal way of life.

Second, qualitative research focuses on meanings, perspectives, and understandings. It aims at finding out the meanings that the subjects of the study “attach to their behavior, how they interpret situations, and what their perspectives are on particular issues”.

In short, it is concerned with the processes that explain why and how things happen.

Quantitative research involves “explaining phenomena by collecting numerical data that are analyzed using mathematical techniques such as statistics”.

It normally uses experiments to answer research questions. Control is an important aspect of the experiments because it enables the researcher to find unambiguous answers to research questions.

Quantitative research also uses operational definitions. That is, the terms used in a quantitative study must be defined according to the operations employed to measure them in order to avoid confusion in meaning or communication.

Moreover, the results of quantitative research are considered to be reliable only if they are replicable. This means that the same results must be produced if the research is repeated using the same techniques.

Hypothesis testing is also an integral part of quantitative research. In short, hypotheses enable the researcher to concentrate on a specific aspect of a problem and to identify the methods for solving it.

The similarities and differences between quantitative and qualitative research methods can be seen in their characteristics

Quantitative and qualitative studies are similar in the following ways. To begin with, qualitative research is normally used to generate theory. Similarly, quantitative studies can be used to explore new areas, thereby creating a new theory.

Even though qualitative research focuses on generating theory, it can also be used to test hypotheses and existing theories. In this regard, it is similar to quantitative studies that mainly focus on testing theories and hypotheses.

Both qualitative and quantitative studies use numeric and non-numeric data. For instance, the use of comparative statements such as “less than” in qualitative studies involves an implicit use of quantitative data.

Similarly, quantitative studies can use questionnaires with open-ended questions to collect qualitative data.

Despite these similarities, quantitative and qualitative studies differ in the following ways. To begin with, the purpose of qualitative research is to facilitate understanding of fundamental meanings, reasons, and motives.

It also aims at providing valuable insights concerning a problem through determination of common trends in thought and generation of ideas.

On the other hand, the purpose of quantitative research is to quantify data and to use the results obtained from a sample to make generalizations on a particular population.

The sample used in qualitative research is often small and non-representative of the population. On the contrary, quantitative research uses large samples that represent the population. In this regard, it uses random sampling techniques to select a representative sample.

Qualitative research uses unstructured or semi-structured data collection techniques such as focus group discussions, whereas quantitative research uses structured techniques such as questionnaires.

Moreover, qualitative research uses non-statistical data analysis techniques, whereas quantitative research uses statistical methods to analyze data. Finally, the results of qualitative research are normally exploratory and inconclusive, whereas the results of quantitative research are usually conclusive.

The similarities and differences between quantitative and qualitative research methods can be seen in their data collection methods

The main data collection methods in qualitative research include observations, interviews, content review, and questionnaires. The researcher can use participant or systematic observation to collect data.

In participant observation, the researcher engages actively in the activities of the subjects of the study. Researchers prefer this technique because it enables them to avoid disturbing the natural settings of the study.

In systematic observation, schedules are used to observe the behaviors of the participants at regular intervals. This technique enhances objectivity and reduces bias during data collection.

Most qualitative studies use unstructured interviews in which the interviewer uses general ideas to guide the interview and prompts to solicit more information.

Content review involves reading official documents such as diaries, journals, and minutes of meetings in order to obtain data. The importance of this technique is that it enables the researcher to reconstruct events and to describe social relationships.

Questionnaires are often used when the sample size is too large to be reached through face-to-face interviews. However, their use is discouraged in qualitative research because questionnaires normally influence the way participants respond, rather than allowing them to act naturally during data collection.

Quantitative research mainly uses surveys for data collection. This involves the use of questionnaires and interviews with closed-ended questions to enable the researcher to obtain data that can be analyzed with the aid of statistical techniques.

The questionnaires can be mailed or they can be administered directly to the respondents.

Observations are also used to collect data in quantitative studies. For example, the researcher can count the number of customers queuing at a point of sale in a retail shop.

Finally, quantitative researchers use management information systems to collect data. This involves reviewing documents such as financial reports to obtain quantitative data.

The similarities and differences between quantitative and qualitative research methods can be seen in their data analysis methods

Qualitative researchers often start the analysis process during the data collection and preparation stage in order to discover emerging themes and patterns. This involves continuous examination of data in order to identify important points, contradictions, inconsistencies, and common themes.

After this preliminary analysis, qualitative data is usually organized through systematic categorization and concept formation. This involves summarizing data under major categories that appear in the data set.

Data can also be summarized through tabulation in order to reveal its underlying features. The summaries usually provide descriptions that are used to generate theories. In short, the data is used to develop theories that explain the causes of the participants’ behavior.

Theories are also developed through comparative analysis. This involves comparing observations “across a range of situations over a period of time among different participants through a variety of techniques”.

Continuous comparisons provide clues on why participants behave in a particular manner, thereby facilitating theory formulation.

Quantitative analysis begins with the identification of the level of measurement that is appropriate for the collected data. After identifying the measurement level, data is usually summarized under different categories in tables by calculating frequencies and percentage distributions.

A frequency distribution indicates the number of observations or scores in each category of data, whereas a percentage distribution indicates the proportion of the subjects of the study who are represented in each category.
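Both distributions are easy to tabulate. This Python sketch (using hypothetical Likert-scale responses) prints each category's frequency alongside its percentage of the total:

```python
from collections import Counter

# hypothetical responses to a single Likert-style survey item
responses = ["agree", "agree", "neutral", "disagree", "agree",
             "neutral", "agree", "disagree", "agree", "neutral"]

freq = Counter(responses)  # frequency distribution
total = len(responses)

for category, n in freq.most_common():
    # percentage distribution: each category's share of all observations
    print(f"{category}: {n} ({100 * n / total:.0f}%)")
```
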

Descriptive statistics help the researcher to describe quantitative data. This involves calculating the mean and median, as well as minimum and maximum values. Other analytical tools include correlation, regression, and analysis of variance.

Correlation analysis reveals the direction and strength of the relationship associated with two variables. Analysis of variance tests the statistical significance of the independent variables. Regression analysis helps the researcher to determine whether the independent variables are predictors of the dependent variables.
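As an illustration of the first of those tools, this Python sketch computes Pearson's correlation coefficient by hand for hypothetical paired observations; the sign of r gives the direction of the relationship and its magnitude the strength:

```python
import statistics as stats

# hypothetical paired observations: study hours vs. test scores
hours  = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 74]

def pearson_r(x, y):
    mx, my = stats.mean(x), stats.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(hours, scores)
print(round(r, 3))  # close to +1: a strong positive relationship
```

Regression analysis would then fit a line through the same pairs to predict scores from hours, which is how the independent variable is tested as a predictor of the dependent one.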

The similarities and differences between quantitative and qualitative research methods can be seen in their validity issues

Validity refers to the “degree to which the evidence proves that the interpretations of the data are correct and appropriate”. A measurement instrument must be reliable before its results can be valid. Replicability is the most important aspect of reliability in quantitative research.

This is because the results of quantitative research can only be accepted if they are replicable. In quantitative research, validity is established through experiment review, data triangulation, and participant feedback, as well as regression and statistical analyses.

In qualitative research, validity depends on unobtrusive measures, respondent validation, and triangulation. The validity of the results is likely to improve if the researcher is unobtrusive. This is because the presence of the researcher will not influence the responses of the participants.

Respondent validation involves obtaining feedback from the respondents concerning the accuracy of the data in order to ensure reliability. Triangulation involves collecting data using different methods at different periods from different people in order to ensure reliability.

The similarities and differences between quantitative and qualitative research methods can be seen in their strengths and weaknesses

The strengths of qualitative research include the following. First, it enables the researcher to pay attention to detail, as well as to understand meanings and complexities of phenomena.

Second, it enables respondents to convey their views, feelings, and experiences without the influence of the researcher.

Third, qualitative research involves contextualization of behavior within situations and time. This improves the researcher’s understanding, thereby enhancing the reliability of the conclusions made from the findings.

Finally, the findings of qualitative research are generalizable through the theory developed in the study.

Qualitative research has the following weaknesses. Participant observation can lead to interpretation of phenomena based only on particular situations, while ignoring external factors that may influence the behavior of participants.

This is likely to undermine the validity of the research. Additionally, conducting qualitative research is usually difficult due to the amount of time and resources required to negotiate access, to build trust, and to collect data from the respondents.

Finally, qualitative research is associated with high levels of subjectivity and bias.

Quantitative research has the following strengths. First, it has high levels of precision, which is achieved through reliable measures.

Second, it uses controlled experiments, which enable the researcher to determine cause and effect relationships.

Third, the use of advanced statistical techniques such as regression analysis facilitates accurate and sophisticated analysis of data.

Despite these strengths, quantitative research is criticized because it ignores the fact that individuals are able to interpret their experiences, as well as to develop their own meanings.

Furthermore, control of variables often leads to trivial findings, which may not explain the phenomena that are being studied. Finally, quantitative research cannot be used to study phenomena that are not quantifiable.

Conclusion

The aim of this paper was to analyze quantitative and qualitative research methods by comparing and contrasting them. The main difference between qualitative and quantitative research is that the former uses non-numeric data, whereas the latter mainly uses numeric data.

The main similarity between them is that both can be used to test existing theories and hypotheses. Qualitative and quantitative research methods each have strengths and weaknesses. The results obtained through these methods can be improved if the researcher addresses those weaknesses.

Gravetter, F., & Forzano, L.-A. (2011). Research methods for the behavioral sciences. New York, NY: McGraw-Hill.

Kothari, C. (2009). Research methodology: Methods and techniques. London, England: Sage.

McNeill, P., & Chapman, S. (2005). Research methods. London, England: Palgrave.

Rosenthal, R., & Rosnow, R. (2007). Essentials of behavioral research: Methods and data analysis. Upper Saddle River, NJ: Prentice Hall.

Stangor, C. (2010). Research methods for the behavioral sciences. New York, NY: John Wiley and Sons.

Wallnau, L., & Gravetter, F. (2009). Statistics for the behavioral sciences. London, England: Macmillan.

IvyPanda. (2019, July 2). Quantitative and Qualitative Research Methods: Similarities and Differences. https://ivypanda.com/essays/qualitative-and-quantitative-research-methods/


  • Open access
  • Published: 27 July 2024

A qualitative study of the barriers and facilitators impacting the implementation of a quality improvement program for emergency departments: SurgeCon

  • Nahid Rahimipour Anaraki,
  • Meghraj Mukhopadhyay,
  • Jennifer Jewer,
  • Christopher Patey,
  • Paul Norman,
  • Oliver Hurley,
  • Holly Etchegary &
  • Shabnam Asghari

BMC Health Services Research, volume 24, article number 855 (2024)

The implementation of intervention programs in Emergency Departments (EDs) is often fraught with complications due to the inherent complexity of the environment. Hence, the exploration and identification of barriers and facilitators prior to an implementation is imperative to formulate context-specific strategies to ensure the tenability of the intervention.

In assessing the context of four EDs prior to the implementation of SurgeCon, a quality improvement program for ED efficiency and patient satisfaction, this study identifies and explores the barriers and facilitators to successful implementation from the perspective of the healthcare providers, patients, researchers, and decision-makers involved in the implementation.

Two rural and two urban Canadian EDs with 24/7 on-site physician support.

Data were collected prior to the implementation of SurgeCon, by means of qualitative and quantitative methods consisting of semi-structured interviews with 31 clinicians (e.g., physicians, nurses, and managers), telephone surveys with 341 patients, and structured observations from four EDs. The interpretive description approach was utilized to analyze the data gathered from interviews, open-ended questions of the survey, and structured observations.

A set of five facilitator-barrier pairs was extracted. These key facilitator-barrier pairs were: (1) management and leadership, (2) available resources, (3) communications and networks across the organization, (4) previous intervention experiences, and (5) need for change.

Improving our understanding of the barriers and facilitators that may impact the implementation of a healthcare quality improvement intervention is of paramount importance. This study underscores the significance of identifying the barriers and facilitators of implementing an ED quality improvement program and of developing strategies to overcome the barriers and enhance the facilitators for a successful implementation. We propose a set of strategies for hospitals implementing such interventions; these include staff training, champion selection, communicating the value of the intervention, promoting active engagement of ED staff, assigning data recording responsibilities, and requiring capacity analysis.

Trial registration

ClinicalTrials.gov. NCT04789902. 10/03/2021.


Introduction

Research motivation

Wait times and overcrowding in emergency departments pose a severe national challenge for Canada, as it has some of the longest wait times compared to similarly industrialized countries [ 1 ]. This issue has persistently worsened as the number of emergency department (ED) visits in Canada has been increasing steadily over the past decade. From 2010–2011 to 2019–2020, the number of ED visits increased from approximately 6.7 million to 7.6 million, representing an average annual increase of 1.2%. Furthermore, the number of reported ED visits rose to almost 14.9 million in 2021–2022 from 11.7 million in 2020–2021 [ 2 ]. The typical duration of a visit to an ED is around 3.5 h, which poses a risk to patients, since prolonged waiting times in EDs have been associated with sub-optimal patient outcomes [ 3 , 4 , 5 , 6 , 7 , 8 , 9 ] and with an increased likelihood of adverse events [ 10 ]. To address this issue, SurgeCon, a quality improvement program, was devised to tackle the lack of integration, sustainability, and logistical issues that negatively impact wait times in EDs [ 11 , 12 , 13 ]. SurgeCon delivers its quality improvement program through a department-level management platform that encompasses three key elements: the installation and configuration of a tailored eHealth system, organizational restructuring, and the establishment of a patient-centric environment. SurgeCon aims to go beyond simply improving wait times; it seeks to optimize ED efficiency while providing a high standard of care for patients and promoting communication among clinicians.

Implementation of such a multidimensional quality improvement program in the dynamic and complex organizational structure of EDs requires exploring barriers and facilitators prior to the respective implementation to formulate a set of strategies to enhance facilitators and overcome barriers, which may lead to a redesign of the program itself. Previous research in this area either lacks the inclusion of strategies to overcome barriers or solely concentrates on eHealth adoption and implementation while neglecting considerations towards restructuring the organization of EDs and improving communication among clinicians. Barriers are factors that inhibit the implementation of practice change [ 14 ], while facilitators are factors that make the implementation easier [ 15 ]. Schreiweis et al. (2019) identified 76 barriers and 268 facilitators of implementation of eHealth services in health care out of 38 articles published between 2007 and 2018 from 12 different countries [ 16 ]. The most frequent barriers were grouped into three categories: individuals (e.g., poor digital health literacy), environmental and organizational (e.g., problems with financing eHealth solutions), and technical (e.g., lack of necessary devices). Also, some of the most stated facilitators were as follows: individuals (e.g., improvement in communication), environmental and organizational (e.g., involvement of all relevant stakeholders), and technical (e.g., ease of use). A limited number of studies have been conducted in an ED setting. For instance, Gyamfi et al. (2017), along with Kirk et al. (2016) and MacWilliams et al. (2017), explored relevant facilitators (e.g., capacity building, involvement and moral support of management and implementers, training and motivation, and environmental context and resources) and barriers (e.g., financial resources, data entry errors, shortage of human resources, and logistical constraints) that influence the implementation of eHealth services (e.g., Electronic Medical Records and screening tools) in EDs in Denmark, Ghana, and Canada (Ontario and Nova Scotia) [ 17 , 18 , 19 ]. Gyamfi et al. (2017) and Kirk et al. (2016) utilized semi-structured interviews, while MacWilliams et al. (2017) utilized focus groups for their data collection (these findings were then thematically analyzed) [ 17 , 18 , 19 ]. MacWilliams et al. (2017) also proposed suggestions to overcome barriers to implementation of Electronic Medical Records in EDs, such as providing sufficient logistics (e.g., computers and accessories, reliable internet), rewarding staff, and regular staff training [ 19 ].

The aforementioned literature, along with other related literature, does not encompass the exploration of barriers and facilitators prior to the implementation of a large-scale quality improvement program that targets not only technical (i.e., eHealth system), but also structural (i.e., restructuring the ED organization and fostering patient-centric environment) and human (i.e., promoting communication across clinicians) aspects of the healthcare system. In fact, the objective of the quality improvement program in this study is not only to improve wait time, but also patient satisfaction, provider satisfaction, and the quality of care provided in EDs. As such, the SurgeCon program is connected to multiple dimensions related to patient outcomes within EDs. This study aims to explore barriers and facilitators prior to the implementation of SurgeCon in two rural and two urban Canadian EDs and formulate a set of strategies to overcome barriers and enhance facilitators. The findings identify areas of change for practitioners and policymakers [ 20 ]. This study is based on in-depth semi-structured interviews with clinicians, telephone interviews with patients, and structured observations in four EDs.

SurgeCon: a quality improvement program

SurgeCon includes three components: implementing an eHealth system to automate an action-based surge capacity plan, restructuring the ED organization and workflow, and fostering a more patient-centric environment. SurgeCon's eHealth system predicts surge levels, which sets appropriate automated workflows in motion to enact proactive measures to improve patient flow and associated outcomes. Crucially, eHealth interventions are generally reported to have a positive impact on patient care [ 21 ], with effects ranging from increased availability of patient information and enhanced communication between healthcare workers to improved healthcare accessibility and reduced patient wait times [ 13 , 22 , 23 ]. The evolution of eHealth services has been driven by the need to solve critical challenges faced by healthcare institutions around the world, such as wait times and overcrowding, which are significant challenges for EDs globally [ 24 , 25 ]. In addition to SurgeCon's eHealth system, a comprehensive approach to improving ED efficiency is also provided by including a patient flow course for frontline nurses and physicians. This course focuses on patient-centeredness and introduces process improvement strategies such as enhancing collaboration between physicians and mid-level providers like nurse practitioners, prioritizing stable patients based on factors beyond just acuity, and aiming to decrease the duration between a patient's arrival and their first assessment by a physician. Furthermore, SurgeCon's implementation process aims to improve the patient experience while in the department. This involves identifying problem areas that could negatively impact a patient's physical and mental well-being, such as their comfort level, ease of navigation, cleanliness of the department, clutter in the ED, and other related factors.

Study design

We employed a mixed-method approach at the technique level, incorporating semi-structured interviews, a structured questionnaire, and structured observation to collect data. To analyze the data, we adopted an interpretive description approach, as outlined by Thorne et al. (1997) [ 26 ]. This approach entails situating the findings within the current body of knowledge and drawing upon the contributions of other scholars, as highlighted by Mitchell and Cody (1993) [ 27 ]. This study aims to provide rich descriptive information on the key barriers and facilitators based on the language of the people involved, which inherently requires some degree of interpretation. The existing knowledge is not an organizing structure; rather, it serves as a foundational framework, providing a starting point and acting as an appropriate platform “upon which the design logic and the inductive reasoning in interpreting meanings within the data can be judged” [ 26 ].

Study context

The implementation of SurgeCon in this study follows a stepped-wedge cluster trial design, specifically focusing on EDs within Category A hospitals. These hospitals offer round-the-clock physician coverage in their EDs. All the hospitals involved in the study are located within the same jurisdiction, operating under the same governance and management structure. The two rural intervention sites in this trial are similar in size, each with a capacity of 8 ED beds. They have a staff roster consisting of approximately 6–10 physicians and 12 nurses, divided into two teams that work on rotating schedules.

One of the urban sites is an acute care facility that provides services to the entire province. The other urban site has 15 beds and shares a physician roster of approximately 40 physicians with the other urban site. Each ED at the urban sites is staffed with 55 and 70 nurses, respectively. Both urban sites offer a wide range of inpatient and outpatient services, including several tertiary services (Table  1 ).

Data collection

Prior to the implementation of the SurgeCon intervention, we conducted semi-structured, in-depth interviews with a total of 31 clinicians. This cohort comprised 20 clinicians from rural EDs and 11 from urban EDs, including 12 nurses, 9 physicians, 7 managers, 2 patient care facilitators, and 1 program coordinator, with 1 to 32 years of work experience in EDs; 69% of participants identified as female. The interview questions were informed by the Consolidated Framework for Implementation Research (CFIR), Organization Readiness for Knowledge Translation (OR4KT) domains, and the clinical/content expertise of the team. Recruitment continued until data saturation was achieved [ 28 ].

Data on patient satisfaction and patient-reported experiences with ED wait times were collected through telephone surveys that took place from March 1, 2021, to August 31, 2021. In total, 341 patients who visited one of the four selected EDs were interviewed, with 136 coming from rural EDs and 205 from urban EDs. The mean age was 55.7 (SD = 16.8) with 66% of participants identifying as female. We analyzed open-ended questions that specifically targeted patients' experiences while receiving care at the selected EDs and gathered their suggestions for improving the ED environment. Patients' insights confirmed our findings regarding resources, communication, and the necessity for change. The interview guide adapted questions from previously validated questionnaires which include the Ontario Emergency Department Patient Experience of Care Survey , the CIHI Canadian Patient Experiences Survey , the Press Ganey Emergency Department Survey , and the NHS Accident and Emergency Department Questionnaire .

Structured observations were conducted by research team members who were also healthcare staff and had special permission to visit each of the sites, which were locked down and only accessible to authorized ED personnel and patients due to COVID-19 pandemic restrictions. A ‘Site Assessment Checklist’ was used to assess each of the four EDs in terms of the ED’s available resources (e.g., medical, human, and technological), staff communication, previous experiences of interventions, and staff readiness and tension for change. The checklist was developed through a Delphi approach that included the input of research team members, ED staff, and patients, who selected key criteria to assess the EDs.

The data collected and referenced in this analysis stems from an innovative pragmatic cluster randomized trial designed to evaluate the effects of SurgeCon, an ED management platform, on wait times and patient satisfaction. The subset of data that was considered relevant to our analysis was collected from March 2021 to December 2022. All data used in this study were collected prior to the implementation of SurgeCon at the four EDs selected for the cluster randomized trial. Even though each dataset was gathered and analyzed independently, they were considered complementary to each other instead of being mutually exclusive.

Data analysis

Data from in-depth interviews, surveys, and structured observations was analyzed according to an interpretative description approach, while utilizing constant comparative analysis. Each set of data was repeatedly read by a qualitative researcher to comprehend the overall phenomena with questions such as “what is happening here?” and “what am I learning about this?”, to become familiar with the data, to identify potential themes or patterns, and to achieve a broader insight about the phenomena [ 26 , 28 , 29 , 30 ]. The data was then coded in a broad manner and continually compared and examined for similarities, differences, and relationships to help formulate major themes. A set of five facilitator-barrier pairs was extracted in this study.

All stages in the coding process were conducted by a qualitative researcher and were then categorically reviewed by members of the team to reach a consensus. The data analysis process started with the exploration of semi-structured interview data, which then progressed to include structured observation, and ended with the comprehensive analysis of the data gathered through surveys. Data extracted from semi-structured, in-depth interviews with clinicians served as the primary source for exploring barriers and facilitators before implementation. However, structured observations and survey data were integrated to offer additional clarity and act as auxiliary and confirmatory sources. Data collected from different stakeholders produced complementary results that captured multidimensional interpretations of the topic. The integrated blend of findings collected from various stakeholders through disparate methods not only explains multiple dimensions of the phenomena but also targets different audiences. In this study, data triangulation (gathering data at different times from various sources), investigator triangulation (multiple researchers study the topic of interest), and methodological triangulation (utilizing multiple methods) were utilized as cross-validation checks [ 31 , 32 ].

The barriers and facilitators to the implementation of SurgeCon fell into five themes, each of which plays the dual role of barrier and facilitator (see Fig.  1 ). These key pairings were: (1) management and leadership, (2) available resources, (3) communications and networks across the organization, (4) previous intervention experiences, and (5) need for change. No significant differences were observed in terms of barriers and facilitators between the groups (i.e., rural and urban EDs) or among providers, patients, and observer inputs. While observer inputs provided insight on all categories, the patients’ input had the most influence on the following categories: available resources, communications and networks, and the need for change. In the following sections, we discuss each of these barrier and facilitator pairs.

figure 1

Process of Identifying Barriers and Facilitators toward Formulating Strategies

Management and leadership

The overarching management and leadership of EDs was anticipated to be one of the most important facilitators of the SurgeCon implementation. Having a receptive, accessible, and supportive senior manager who is continually engaged with all aspects of the transition phase, paired with an effective management system in which staff are involved in the decision-making process, was perceived to stimulate positive managerial-clinical communication and to increase the likelihood of a positive reception of an implementation program. Active early involvement, support, and engagement of managers in two EDs were deemed crucial facilitators in fostering a nurturing and motivating environment that encourages physicians and nurses to proactively engage in the implementation process. Data from the observations also confirmed the involvement of both management and staff.

“I can converse openly and there is an open-door policy. Furthermore, just in terms of communication, there is always a timely response and the manager is very proactive” [Healthcare provider] “The site manager, the direct manager of the staff, comes every morning to the department to see what was happening last night. If there is any new issue, [the manager offers assistance and any logistical resolutions] that can be done or offered immediately. Additionally, they have free access to the director and to the manager through email. The manager’s office is just a few meters away from them, so they can just reach them at any time. For the doctors, the situation is also the same” [Healthcare provider]

However, management and leadership could also pose barriers to a successful implementation. Barriers such as low manager participation and contribution, unreceptive and inaccessible managers, low staff autonomy and involvement in decision-making, and the lack of staff consultation all emerged in the analysis.

“You know a couple of years ago with the previous manager, everything was unilaterally implemented. As in, it was put forward and we had to strictly abide by it irrespective of what we felt the outcome was going to be. There were several instances where you had to accept what was told to you and consequently, there was very little room for discussion or negotiation.” [Healthcare provider]

When working in a small ED with limited staff turnover and a long-standing team who are familiar with daily routines and operations, it was deemed integral for managers to involve and engage frontline ED staff in the decision-making process while also managing strategies for running the department. Failure to give staff autonomy in their roles was anticipated to be a barrier to a successful implementation within this framework.

“The emergency department was say anywhere from 98-99% senior. So, when you got a small department that is pretty much occupied by senior staff, it runs itself. Most of us have been nursing for 30 plus years. So, we know how the system works; we know what we have to do; we know how to solve problems; we are familiar with critical thinking to get issues resolved. However, this other manager was always critiquing us, and certainly not in a constructive manner”. [Healthcare provider]

Amplifying these issues was the fact that there was a history of struggling with unapproachable, autocratic, and unavailable managers in the ED. It left the clinicians with sentiments of neglect and various overdue demands and expectations. This in turn caused a “toxic environment”, which was perceived as a critical barrier to successful implementation:

“But it really was like I said before, a toxic environment which placed everybody in on a defensive stance at all times and people did not want to go to work and more crucially, people did not like to work. If they did statistics on it, I am sure there was a huge spike in sick leave as people were just not wanting to go to work. That's the bottom line.” [Healthcare provider]

Available resources

Availability of resources was considered a critical facet of the implementation of SurgeCon. As such, disparate resources spanning human and medical resources and several other silos (e.g., space constraints) were anticipated to be necessary considerations to ensure the long-term tenability of the SurgeCon intervention. Participants at all four EDs unanimously identified excess workload, staff shortages, and the absence of opportunities to ease workloads as the most significant anticipated barriers to implementation. To incorporate the new system, it was asserted that all clinicians not only need to be available and have sufficient time to attend a staff training program, but also need to regularly enter and update SurgeCon data. Virtually all participants anticipated that the lack of human resources (i.e., insufficient medical staff) would be a crucial barrier.

“Human resources can be a bit harder to come by because nurses are often treated as a commodity. There is so much overtime at the current time and requires increased staff.” [Healthcare provider] “I think more family doctors are needed to lower the congestion in the ED.” [Patient] “Need more staff. Patient asked multiple times to be taken to the bathroom after being left alone in a wheelchair... She asked again hours later and received no help so she peed in her wheelchair fully clothed and left without seeing a doctor due to embarrassment and such a lack of help.” [Patient] “if we don't have enough staff or if we don't have enough beds. To me it don't matter what you're doing, it’s not going to work. It's going to be harder for it to work if you don't have the resources.” [Healthcare provider]

Other than staff shortages, high staff turnover rates were cited as another anticipated barrier to implementation. The high level of staff turnover adversely impacted the level of communication among staff, and was also perceived as a significant challenge with regards to training and accommodating necessary implementation activities.

“We have a lot of new nurses that are just coming out of program. So, helping mentor them with an overwhelmed emergency department is difficult as they are also trying to get their footing within the emergency department, and learn new skills and tasks. I find communications a bit lacking right now because we have so much new staff and they're just trying to get their footing and learn. In doing so, it is hard to have that communication. Like everyone helps wherever they can but you're also trying to, within that time, train your new staff as well. It's kind of a bit hectic.” [Healthcare provider] “Rapid turnover of staff at HSC. So some of the staff have been through process improvement while many others have not.” [Observer]

Insufficient admission space (e.g., inadequate number of beds) and the lack of physical space and rooms in EDs were often identified by clinicians as the primary cause of backlogs and overcrowding in EDs. These factors were anticipated to be barriers to the implementation process as they affect patient admissions, transfers, discharges, as well as the restructuring of the ED organization and workflow.

“Some of the barriers would certainly be the inability to have free or vacant beds to transfer patients out of or transporting patients out of our department to a tertiary care facility.” [Healthcare provider] “There needs to be more beds and seating arrangements.” [Patient] “There is no current space adequate enough to run the flow center model.” [Observer] “Rooms are sticky at times; space is small and overpopulated.” [Observer]

Communications and Networks Across the Organization

In order to ensure the successful adoption of SurgeCon, intra- and inter-departmental communication was deemed to be a critical factor. Consistent and frequent communication between clinicians, particularly among physicians and nurses, is necessary to execute implementation activities successfully. However, this theme received mixed evaluations from participants. Poor communication and fragmented relationships between nurses and physicians, and a lack of teamwork among staff, emerged as significant barriers to the implementation of SurgeCon. In all four EDs, it was observed that physicians and nurses do not have any formal joint meetings and there was scarce communication between different units within EDs. The lack of shared multidisciplinary meetings in EDs decreased the chance of developing mutual understanding and commitment, building empathy and awareness toward each other’s challenges, and enhancing unity and teamwork.

“There seems to be a huge miscommunication between staff, mainly to do with rules surrounding COVID.” [Patient] “We do not sit down at the same table. There are family practice meetings, there are student emergency doc meetings and then, there are nursing meetings; you are not set at the same table. So, I cannot realistically know, feel nor empathize with anybody else’s needs if I am not even aware of them. We are never really made aware of that stuff.” [Healthcare provider] “More communication between staff and patients would be very useful as most people will be more patient and understanding.” [Patient]

Even in the case of personal conflicts and tensions arising between nurses and physicians, formal meetings of managers were considered the predominant strategy for resolving the respective issues, rather than directly involving staff. While the lack of intergroup (i.e., nurses and physicians) communication was evaluated as a barrier, participants positively evaluated intragroup communication, citing regular weekly formal meetings and informal daily meetings when necessary. Furthermore, nurses at one of the sites participated in a Facebook group to share their concerns.

“There is a Facebook group… it was outlined that they are short a nurse, and they are looking for an extra nurse to come in. So, they posted that on the Facebook group in hopes that somebody will see it and come to their rescue.” [Healthcare provider]

In general, a collaborative, supportive, receptive, and cooperative environment was considered a facilitator to implementation. The staff valued a culture of support, transparency, and availability. Also, working in a small ED, where clinicians know one another more intimately and have done so for a prolonged period of time, was assessed to positively foster teamwork and supportive communication.

“One main ED unit and there seemed to be good communication and in the smaller sites its quite easy to communicate” [Observer]

Another barrier under this construct was identified as the lack of communication and dialogue between staff in two different units within the EDs. As these units operated independently, the minimal contact and communication between them became routine. Communication between the two units was restricted to the end of the shift and pertained primarily to handing-over patients. When problems arose, the most common means of communication to resolve or discuss the issue was conducted via email.

“We’re taking care of the patients in unit one or unit two, and someone else is taking care of the patients in the other unit. So, I don't really talk to the other person. So, the only time when we communicate is around handover. So that's often sort of one we're saying, “Well, I am leaving, so you take over this patient.” [Healthcare provider] “When we asked staff if they felt the areas of the departments communicated well together they said yes but while we watched it certainly seemed like all the areas functioned independently of each other. NO situational awareness.” [Observer]

A common concern among participants pertained to the lack of engagement and involvement of other hospital departments in the implementation process of the intervention. The participants seemed to believe that the implementation could not be successful if other departments and stakeholders in the hospital had no intention to participate. Given the interconnectedness of a hospital’s departments, an intervention aimed at improving ED patient flow must also involve meaningful engagement from external departments and must be prioritized at all levels of the organization, rather than treating the ED as an individual entity.

“We've done a lot of improvements. For instance, our stroke process or STEMI process, those are things that we've implemented within our department to help streamline that category of patient, that were more focused on just the ED which were more successful. We haven't been able to be successful because of the barriers that lie outside of our department which are a little bit more systems or like, organizational wide. It becomes harder because maybe there's been an unwillingness to participate or not seeing the value because a lot of people don't see what it is like in our department all the time. So, they think that it's just value for us as opposed to value for them as well.” [Healthcare provider]

Another potential barrier to implementation was the anticipated lack of physician participation in the implementation process. Nurses constantly emphasized the crucial role of physicians in the uptake of the intervention and furthermore desired assurance that the physicians will be well-informed about the implementation and will not be disengaged during the process.

“I think physicians are older, more experienced positions or maybe just set in their ways and are less open to change. Some of the physician group will be more resistant.” [Healthcare provider]

Despite the busy clinical environments, success in developing and undertaking the implementation hinged on constant and regular communication, including routine informal and formal meetings, between the research team and clinicians. Although in-person meetings were preferable, videoconferencing was used instead to facilitate communication due to COVID-19 pandemic-related restrictions. Scheduling and arranging meetings with clinicians was deemed extremely challenging because of their heavy workload, busy clinical schedules, and demands, and posed a critical barrier to implementation. Additionally, some research team members did not have a direct line of communication with clinicians except through internal facilitators or champions, i.e., nurse practitioners. Although a champion or facilitator demonstrated knowledge of the clinicians’ workload, which facilitated the scheduling of meetings, the lack of direct communication and in-person meetings seemed to be a critical barrier to implementation, as the level of social engagement and connectedness between research staff and medical staff was adversely impacted.

Previous intervention experiences

The previous experiences of staff members in implementing other interventions were evaluated as mostly positive by clinicians and by researchers who conducted structured observations, although some barriers were reported as well. Study participants reported prior positive experiences of interventions, such as X32 Healthcare’s Online Staffing Optimization project. In general, participants reported that the X32 project resulted in improved workflow efficiency, simplified and organized patient assessments, prioritized triage, and reduced wait times. These positive experiences with past interventions seemed to positively shape the participants' perceptions of the SurgeCon implementation.

“The X32 program was overall an effective program in my opinion. We did implement a lot of changes, overall infrastructure changes- the way that we introduce patients into our department and get them through the department to finally get them discharged. After the X32 program, we've seen dramatic improvements and changes versus the way that we were doing it.” [Healthcare provider]

However, there were also negative perceptions of past interventions, for example, a lack of communication between researchers and staff, and the lack of follow-up evaluations to meet the contextually specific needs of the EDs.

“Initially, there was a fair bit of communication between staff, the researchers and the end users but after it was implemented, I don't think there was any follow-up or any review of the X32.” [Healthcare provider]

The perception of inadequacies or unsuccessful outcomes from prior intervention efforts appeared to influence the study participants' perceptions of the implementation of SurgeCon and was seen as a potential barrier to future implementations. This historical context of past initiatives not meeting their intended goals created skepticism and resistance towards embracing the new SurgeCon program.

“SurgeCon is new to us, but we've tried lots of different things over the years, and they've all failed. We've all put work into it… we'll try something, and we'll get all motivated to do it- we'll try it for six months, and everything that we've done falls apart inevitably.” [Healthcare provider] “Many previous wait time related interventions over the past number of years and front line staff report mostly failures with staff reverting to old ways.” [Observer]

Need for change

Tension for change is considered an important concept for leaders seeking to improve performance in their organizations; it is a mechanism that creates the energy and motivation needed to mobilize people into action. Dissatisfaction with the current approach was the most common perspective described by patients and providers in the four EDs, and it was considered concurrently a strong motivation and a potential barrier to clinicians actively engaging in the implementation process. Dissatisfaction with long wait times and poor workflow was perceived as a major source of motivation; the most endorsed facilitator was the perceived necessity of the intervention to rectify deficiencies in wait times and workflow efficiency. Clinicians valued the change and deemed it urgently necessary and beneficial. They valued the intervention and possessed an intrinsic inclination toward change, as they had long-standing concerns about wait times and workflow and anticipated that SurgeCon might help resolve the issues faced in EDs. Thus, clinicians in these EDs collectively valued the intervention and demonstrated an appreciation for the actions taken, which was seen as one of the more crucial facilitators and implementation drivers.

“I had to wait for 7 and a half hours which felt ridiculously long, even though there were not a lot of other people waiting.” [Patient] “We have been waiting for 2 days because there were no in-patient beds available.” [Patient] “The most important motivation is improving the quality of management for the patients and then, that will be reflected to the wellbeing of the patient as well as the smooth flow of the patient within the department. So, if there is any new idea that can facilitate this- they usually are very eager to adapt and undertake it.” [Healthcare provider]

The participants frequently felt that the staff struggled to deal with the confusion arising from technological limitations in communicating information about wait times and the availability of medical resources. Several complaints were made regarding complications in scheduling appointments, inconsistent wait times, and misallocation of scarce resources, which diminished the overall efficiency of the ED. These issues were considered motivating factors for the implementation of SurgeCon.

“The sites lacked a digital patient tracking system that resulted in communication lapses between units.” [Observer] “[Our province] is far behind in technology compared to other provinces.” [Patient]

Participants expressed some dissatisfaction with the planned implementation as a result of not having enough time to participate, staff shortages, and heavy workloads. Two of the selected EDs were found to be particularly affected by this issue, which posed a significant obstacle even before implementation, beginning with the pre-implementation in-depth interviews. The implementation of the quality improvement program would go ahead as planned, albeit with poor engagement and support from ED staff. Consequently, this lack of involvement might hinder the intervention from reaching its full potential.

“I think that's going to be the biggest challenge is just getting them on board. Just the word “change” or “implementation” right now is a bit challenging.” [Healthcare provider] “I mean morale in the past few years… it’s not in a good place and I think it's because of the increased business, and staff feel like they're burning out, so it's not that they don't do a good job. We need more resources.” [Healthcare provider]

Two of the EDs chosen for this study had rejected previous intervention attempts (e.g., X32 Healthcare’s Online Staffing Optimization), which implies that the organizational climate might not be change-oriented. This phenomenon, beyond dissatisfaction, was rooted in resistance to changes (including technological changes), conformity to the existing status quo, and reluctance to adopt changes suggested from outside the organization. To the participants, interventions meant novel systems, processes, and skills, which inherently implied altering established workplace routines to adopt a newer system. While ED staff constantly struggled with internal forces for change (e.g., heavy workload, staffing issues, and long wait times), they were not receptive to the external research team’s attempts at initiating change through the implementation of the intervention. This extended not only to external stimuli for change, but also to propositions for change initiated by insiders, which were not mobilized in either of the urban sites.

“Repeated resistance to technological changes expressed by staff in general.” [Observer] “It was unknown- you hear this company from outside is going to come in and fix your emergency department. A lot of people felt like, ‘Well, why do we need an outside company? Why don’t they just speak to the staff that actually works there to see how they could fix it?’ We knew what needed to be fixed but I kind of felt amused as to why did an external entity do it when they didn't ask the people that worked in a department first.” [Healthcare provider] “I feel like change is a big thing for people personally and professionally. So, it is just going to take a while for people to get used to it and, it's something new that’s breaking our old routine of how we did things. I feel those will be some barriers. Technology is going to be a challenge and like I said, it’s a big change.” [Healthcare provider]

During the pandemic, it became evident that engaging ED staff in implementation activities across all four EDs would be challenging. Frontline staff had to manage exhaustion, frustration, burnout, isolation, and a higher volume of sick patients, making change initiation difficult. Clinicians often lacked the energy to participate in pre-implementation interviews, despite compensation and other offered incentives. Describing their experience, one participant stated:

“We're just basically keeping our heads above water at this point.” [Healthcare provider]

Low motivation to participate stemmed from feeling burdened by heavy workloads, COVID-19 regulations, and the subsequent procedural changes. The strain the pandemic placed on these dismayed clinicians thereby served as another major barrier to the intervention.

“With this pandemic, there's constant policy changes, procedure changes, and they're frustrated with it. So, if you want to bring in something else, even though it's going to help them a lot of times, they're resistant because it's just something else on their ‘To Do List’ and they don't want to be bothered with having to learn something else.” [Healthcare provider]

Summary of findings

Given the high rate of failure in translating evidence into practice in health care services and the challenges of implementing eHealth interventions [ 33 , 34 ], it is necessary to assess barriers and facilitators prior to implementation to attain a successful implementation. This study found five facilitator-barrier pairs that were perceived to influence the successful implementation of SurgeCon in the four EDs in our study.

Management and leadership structures were the first facilitator-barrier pair. Such structures play a critical role in the integration and maintenance of innovative implementations in hospital settings [ 35 ]. The findings of Bonawitz et al. (2020) suggest that ineffective management and leadership serve as barriers to change in healthcare institutions [ 36 ]. As evidenced by the findings of this study, management systems that effectively encourage the involvement of healthcare providers in ED-related decisions and support proactive managers are perceived to be crucial facilitators, while disengaged managers and a lack of staff autonomy are perceived as critical barriers. The findings observed in this study parallel those of Manca et al. (2018) [ 37 ], who found that when participative leadership gives way to control-oriented management, it poses a significant barrier to an organizational culture's openness to change. Furthermore, the lack of top-management sponsorship and a presence-based culture presented recurring barriers to the adoption of innovation in healthcare institutions. Our data suggest that engaging managers early in implementation procedures and applying a participative leadership style that promotes active engagement of staff may facilitate successful implementation. This is supported by Bonawitz et al. (2020), who found a participative leadership style to be a critical component in successfully implementing change in a healthcare setting [ 36 ].

The second facilitator-barrier pair is available resources. According to de Wit et al. (2018), implementing system-wide changes requires substantial committed hospital resources as a prerequisite [ 38 ]. However, tailoring a strategy may make it possible to avoid change management projects that require committing substantial additional resources [ 39 ]. Furthermore, Barnett et al. (2011) note that human resources are integral to the process of developing, establishing, and diffusing innovations in healthcare institutions [ 40 ]. However, the Canadian Institute for Health Information (2021) points to stark shortages and increasing turnover rates among medical staff within the Canadian healthcare system [ 24 ]. With a perpetually changing and constrained workforce, any attempt to adopt an implementation will intrinsically face initial challenges. Additionally, de Wit et al. (2018) provide a comprehensive overview of the resources critical prior to initiating change: depending on the idiosyncratic details of the implementation, educational resources need to be made available (with minimal barriers to accessing them), along with committed hospital resources in the form of financial, staffing, and other resources [ 38 ]. Furthermore, a lack of medical resources negatively impacts patient admissions and leads to patient transfer delays, cancellation of surgeries, and early discharges [ 41 ]. Inadequate financial, technological, human, and medical resources were consistently identified as anticipated barriers across all four ED sites. Although implementing SurgeCon does not require substantial additional resources, and the ED sites are provided with the technological equipment and educational requirements prior to the intervention, this study found that the shortage of medical staff and lack of medical resources remain potentially significant barriers.

The third facilitator-barrier pair is communications and networks across the organization. Considering the insights gained from previous studies on leadership structures in healthcare institutions, communication is a hallmark of a participative leadership structure [ 35 , 36 , 42 , 43 ]. It is repeatedly established that teamwork, trust, and other parameters of the organizational climate are founded on the principles of the underlying leadership structure. According to our study, however, even under a participative leadership structure that embraces the engagement and involvement of staff, ED environments suffer from a lack of communication between nurses and physicians and between different ED units. While the minimal formal and informal discussions that occur between physicians and nurses may meet the basic requirements of professional standards, the two groups are not fully cognizant of each other’s concerns and challenges. To fully engage and participate in the implementation of an intervention, collaboration among all ED staff is required. A lack of communication, dialogue, and teamwork among staff was recognized as an anticipated barrier to successful implementation. Conversely, constant communication and dialogue between the research staff and healthcare providers was considered a practice that would facilitate the intervention’s implementation. However, in our case, due to COVID-19 restrictions, almost all communication was moved from in-person to a virtual medium. Overwhelmed by COVID-19 regulatory demands, staff shortages, and burdensome workloads, clinicians were not left with enough energy and time to participate in pre-implementation online interviews.

The fourth facilitator-barrier pair, previous intervention experiences, was also anticipated to impact the SurgeCon implementation. Hamilton et al. (2010) found that prior experience with change efforts contributed to readiness for change in healthcare institutions [ 44 ]. As such, previous experience with interventions contributes to calibrating an organizational climate that is conducive to change. Previous experience greatly assists in establishing the appropriate steps and instilling the confidence needed to create a ripe organizational climate for implementation [ 45 ]. Zapka et al. (2013) express the need for reviews of past experiences of change as a necessary element to sustain an implementation [ 46 ]. The findings of this study regarding previous experience of interventions and its potential to positively or negatively impact future interventions parallel those of previous scholars. Our data reveal that negative perceptions of past interventions (e.g., a lack of follow-up evaluations) were considered a notable obstacle to the implementation of SurgeCon.

The fifth facilitator-barrier pair was the need for change. Grol (2013) illustrates the importance of perceived necessity in successfully adopting an intervention, particularly in a healthcare environment: institutions with a positive perception of the necessity of an intervention are more likely to adopt and sustain an implementation [ 47 ]. Tension for change in implementation science is defined as the proclivity of stakeholders to perceive the current situation as intolerable or requiring change [ 48 , 49 , 50 ]. Our findings illustrate that dissatisfaction with the current system, with long wait times and poor workflow in EDs, was perceived as creating a necessity for urgent change and intervention. However, perceiving the necessity of an intervention does not necessarily imply valuing or practicing the change requirements. Our study supports findings of an inverse relationship between staff burnout and motivation to support an intervention [ 51 , 52 ]. Considering the drastic national rise in burnout experienced by healthcare workers in Canada [ 53 ], the current healthcare environment is not conducive to change. Lack of time, staff shortages, and heavy workloads, coupled with COVID-19 fatigue and burnout, did not leave clinicians with sufficient energy even to participate in pre-implementation interviews, let alone take an interest in being actively involved in the intervention. Additionally, this study found that using new technology and altering workplace routines were perceived as barriers to change among clinicians. Despite the high level of dissatisfaction and staff workload, clinicians remained resistant to interventions proposed by external sources.

Strategies for overcoming barriers and enhancing facilitators

Identifying and evaluating barriers and facilitators alone is only the first step in enhancing the probability of successful implementations of eHealth interventions such as SurgeCon. It is also important to formulate a set of strategies for hospitals to overcome the identified barriers and enhance the facilitators (Fig.  1 ). The recommended strategies—staff training, frontline champions, performance data review, communicating the value of the intervention, encouraging active engagement of ED staff, assigning an individual to regularly record data, and requiring capacity analysis—aim to address and overcome barriers while capitalizing on facilitators. These multi-faceted strategies were identified through discussions with decision makers, clinicians, patients, and research team members as well as lessons learned from SurgeCon’s implementation at the pilot site.

To elaborate on the specific components: it is crucial that a majority of ED staff attend training on patient flow and that ED leadership participate in software configuration to adjust and tailor SurgeCon’s digital eHealth platform to their ED. Attending training sessions facilitates the adoption of the quality improvement initiatives and patient flow strategies included within the SurgeCon platform and encourages ED staff to become actively engaged with the implementation process. This process is essential to foster active participation and discussion between all tiers of staff, which may not routinely transpire. The training course needs to actively engage frontline staff and must include the following modules: Interactive Simulation, SurgeCon eHealth Platform, and Patient Centeredness. The aim of the Interactive Simulation module is to provide insight into the rationale for connecting the software to process improvement and to elucidate its procedure in a practical setting using ED-based scenarios. Since the module is interactive, it allows for greater clarity to ensure that learning outcomes are achieved. The SurgeCon eHealth Platform module will assist ED staff in becoming familiar with the digital whiteboard application. This includes learning how the system collects and reports information, how to interpret and respond to system notices and warnings, and how to customize the dashboard to create a site-specific, adaptive version of SurgeCon that addresses the unique needs of their ED. The Patient Centeredness module comprises an educational session that reinforces the core values pertaining to patient care across the following topics: providing quality ED care to all patients regardless of urgency; treating patients with respect; and treating the patient’s visit to an ED as always being of vital necessity.

Having a dedicated frontline champion who is selected by ED management and trained by the implementation team can help ensure effective communication and facilitate the implementation process. These individuals can act as a liaison between ED staff and the research team, providing ongoing support and addressing any questions or concerns that may arise. In addition, they can provide valuable feedback to the research team regarding technical issues or challenges encountered during implementation which can help inform adjustments and improvements to the intervention. Ultimately, having frontline champions who are invested in the success of the intervention can contribute to a more seamless and effective implementation process.

Continuous performance reporting plays a crucial role in enhancing the operational efficiency and effectiveness of EDs and contributes to the development of improved operational strategies by providing meaningful data. In this study, the research protocol involves prominently displaying department-level data in the ED, such as at nursing stations, and providing individual-level performance reports to physicians at the participating sites. However, in the post-COVID era, EDs have been experiencing staffing shortages, which have necessitated changes in the reporting protocols of this study, particularly regarding key performance indicator (KPI) data. The KPIs examined in this study include the time to physician initial assessment (PIA), the length of stay in the ED (LOS), and the rate of patients leaving the ED without being seen by a physician (LWBS). These KPIs are widely recognized as the gold standard for evaluating ED performance. However, these indicators assume consistent operating conditions, and the reliability of using them as the primary method for assessing department efficiency diminishes in the presence of staff shortages. Providing individual physicians with performance reports may serve as a reminder of the operational challenges they have faced rather than providing a fair assessment of their ability to efficiently manage patient flow in their department. As a result, the research team decided to recommend aggregated department-level performance reports. Ultimately, the primary goal is to increase physician motivation to utilize SurgeCon by demonstrating its capacity to reduce door-to-doctor time, which is a critical metric for assessing standards of emergency care and efficiency.

It is important for the research team to communicate the importance and value of SurgeCon by presenting the successful implementation at the pilot site to raise awareness about the prospective results and enhance motivation for the adoption of the intervention. Additionally, implementing interventions is a “collective action” which necessitates a commitment to the process by all members. As Weiner (2009, p. 2) [ 54 ] states, “implementing complex organizational changes involves collective action by many people, each of whom contributes something to the implementation effort […] problems arise when some feel committed to implementation, but others do not.” To stimulate engagement, compensation (i.e., full payment for attendance, including travel and meals) was offered for participating in training sessions and interviews; refreshments, in the form of snacks and beverages, were also provided at every training session. Furthermore, assigning an individual whose primary role is to manually enter data that cannot be automated into SurgeCon’s eHealth system, and using demand and capacity analysis to determine staffing models that will benefit the department, are among the suggested strategies for overcoming several of the encountered barriers to implementation.

Conclusion and implications

Successfully implementing eHealth systems goes beyond addressing technological aspects alone. It requires a thorough exploration of potential barriers and facilitators and the development of strategies to overcome barriers and enhance the facilitators. SurgeCon aims to enhance quality standards, improve efficiency, and increase satisfaction among both patients and providers in EDs. However, implementing such a quality improvement initiative in EDs presents challenges. Therefore, identifying these barriers and facilitators is crucial for developing tailored implementation strategies that are contextually relevant. This approach helps to ensure a smooth and sustainable transition, leading to long-term success and optimal performance. This study extends the findings in relevant literature by identifying these facilitator-barrier pairs and providing a set of strategies to overcome the barriers and enhance the facilitators in the implementation of a large-scale quality improvement program. In investigating the factors associated with the successful adoption of SurgeCon, a broader consideration of the barriers and facilitators can be derived. Understanding these factors can assist in identifying obstacles and motivators that enable the sustainability and effectiveness of interventions at other EDs; this is critical given the high failure rate of ED quality improvement programs.

Effective management and leadership structures and participative leadership styles that encourage staff involvement and proactive management may facilitate ED implementations. Emphasis on the allocation of sufficient hospital resources (i.e., technological, human, and medical) and effective communication and collaboration are essential for fostering a supportive and cohesive work environment, thus facilitating such interventions. Those with positive perceptions of the need for the intervention are more likely to adopt and sustain implementation efforts, and previous experiences with interventions and the perception of the need for an intervention emerged as influential factors in the readiness for change.

This study strategically incorporates triangulation. By doing so, it addresses inherent blind spots and biases in each method, enhances the validation of data, and offers diverse perspectives on the topic. This triangulation not only validates findings but also contributes to a more comprehensive and calibrated understanding of the phenomena under investigation. Furthermore, this study involves a multi-disciplinary planning and implementation team to comprehensively study the various facilitators and barriers prior to implementation.

This study, like any rigorous research endeavor, is not exempt from limitations, and it is essential to openly acknowledge these factors to provide a transparent understanding of the study's scope. While our study gains insights from four diverse EDs, it is crucial to note a limitation in its context-specific nature. Our primary focus revolves around understanding barriers and facilitators before implementing the SurgeCon quality improvement program in Canadian EDs. Findings may lack broad generalizability. However, our emphasis on transferability urges researchers to assess the applicability of insights in similar settings, fostering a nuanced understanding. In this study, the data collector observed potential social desirability tendencies among participants. To address this, we made efforts to assure participants of anonymity and confidentiality, provided clear communication about the study's purpose and data use, and incorporated strategies like follow-up questions. Additionally, we encouraged participants to share examples to illustrate their responses, aiming to mitigate potential response bias [ 55 ]. Finally, the study, conducted within a specific timeframe, must consider the dynamic healthcare landscape. The advent of COVID-19 brought rapid changes to healthcare policies, ED protocols, and overall healthcare delivery. Acknowledging this evolving context during and after data collection is crucial for interpreting the study's findings in the broader context of a changing healthcare system.

The findings of this study will guide future initiatives for the implementation of quality improvement programs within the complex environment of EDs by identifying facilitators and barriers prior to implementation to ensure they are continually considered during the design phase of an intervention. We propose that it is important to examine these factors before implementing such systems so that the implementation can be designed and managed to address the multivariate impact they may impose.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available to protect the confidentiality of participants’ data but are available from the corresponding author upon reasonable request.

Abbreviations

  • ED: Emergency department

  • CFIR: Consolidated Framework for Implementation Research

  • NL: Newfoundland and Labrador

  • OR4KT: Organization Readiness for Knowledge Translation

  • KPI: Key performance indicator

  • PIA: Physician initial assessment

  • LOS: Length of stay

  • LWBS: Leaving the ED without being seen

References:

The Commonwealth Fund. The commonwealth fund 2010 international health policy survey in eleven countries. 2010.  http://www.commonwealthfund.org/Surveys/View-All.aspx , http://www.commonwealthfund.org/~/media/files/publications/chartbook/2010/pdf_2010_ihp_survey_chartpack_full_12022010.pdf . Accessed 26 June 2012.

Canadian Institute for Health Information. NACRS: Emergency Department Visits and Lengths of Stay. 2023. Retrieved from https://www.cihi.ca/en/nacrs-emergency-department-visits-and-lengths-of-stay

Magid DJ, Sullivan AF, Cleary PD, Rao SR, Gordon JA, Kaushal R, Guadagnoli E, Camargo CA, Blumenthal D. The safety of emergency care systems: results of a survey of clinicians in 65 US emergency departments. Annals of Emergency Medicine. 2009;53(6):715. https://doi.org/10.1016/j.annemergmed.2008.10.007 .

Bernstein SL, Aronsky D, Duseja R, Epstein S, Handel D, Hwang U, McCarthy M, John McConnell K, Pines JM, Rathlev N, Schafermeyer R, Zwemer F, Schull M, Asplin BR. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10. https://doi.org/10.1111/j.1553-2712.2008.00295.x .

Pines JM, Hollander JE. Emergency department crowding is associated with poor care for patients with severe pain. Ann Emerg Med. 2008;51(1):1–5. https://doi.org/10.1016/j.annemergmed.2007.07.008 .

Fee C, Weber EJ, Maak CA, Bacchetti P. Effect of emergency department crowding on time to antibiotics in patients admitted with community-acquired pneumonia. Annals of Emergency Medicine. 2007;50(5):501. https://doi.org/10.1016/j.annemergmed.2007.08.003 .

Sprivulis PC, Da Silva J, Jacobs IG, Frazer AR, Jelinek GA. The association between hospital overcrowding and mortality among patients admitted via Western Australian emergency departments. Med J Aust. 2006;184(12):616–616. https://doi.org/10.5694/j.1326-5377.2006.tb00416.x .

Richardson DB. Increase in patient mortality at 10 days associated with emergency department overcrowding. Med J Aust. 2006;184(5):213–6. https://doi.org/10.5694/j.1326-5377.2006.tb00204.x .

Derlet RW, Richards JR. Overcrowding in the nation’s emergency departments: complex causes and disturbing effects. Ann Emerg Med. 2000;35(1):63–8. https://doi.org/10.1016/s0196-0644(00)70105-3 .

Guttmann A, Schull MJ, Vermeulen MJ, Stukel TA. Association between waiting times and short term mortality and hospital admission after departure from emergency department: population based cohort study from Ontario, Canada. BMJ. 2011;342(jun01 1):d2983–d2983. https://doi.org/10.1136/bmj.d2983 .

Anaraki NR, Jewer J, Hurley O, Mariathas H, Young C, Norman P, Patey C, Wilson B, Etchegary H, Senior D, Asghari S. Implementation of an Ed Surge Management Platform: A Study Protocol. 2021. https://doi.org/10.21203/rs.3.rs-764312/v1

Mariathas HH, Hurley O, Anaraki NR, Young C, Patey C, Norman P, Aubrey-Bassler K, Wang PP, Gadag V, Nguyen HV, Etchegary H, McCrate F, Knight JC, Asghari S. A quality improvement emergency department surge management platform (SurgeCon): protocol for a stepped wedge cluster randomized trial. JMIR Research Protocols. 2022;11(3):e30454.  https://doi.org/10.2196/30454 .

Patey C, Norman P, Araee M, Asghari S, Heeley T, Boyd S, Hurley O, Aubrey-Bassler K. SurgeCon: priming a community emergency department for patient flow management. Western Journal of Emergency Medicine. 2019;20(4):654–65. https://doi.org/10.5811/westjem.2019.5.42027 .

Shaw E, Baker R, Flottorp S, Camosso-Stefinovic J, Gillies C, Cheater F, Robertson N. Tailored interventions to overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005. https://doi.org/10.1002/14651858.cd001483.pub2 .

Dogherty EJ, Estabrooks CA. Why do barriers and facilitators matter? In: Richards DA, Rahm Hallberg I, editors. Complex interventions in health care: an overview of research methods. London: Routledge; 2015. p. 273–81.

Schreiweis B, Pobiruchin M, Strotbaum V, Suleder J, Wiesner M, Bergh B. Barriers and facilitators to the implementation of eHealth services: systematic literature analysis. J Med Internet Res. 2019;21(11):e14197.  https://doi.org/10.2196/14197 .

Gyamfi A, Mensah KA, Oduro G, Donkor P, Mock CN. Barriers and facilitators to electronic medical records usage in the emergency centre at Komfo Anokye Teaching Hospital, Kumasi-Ghana. African J Emerg Med. 2017;7(4):177–82. https://doi.org/10.1016/j.afjem.2017.05.002 .

Kirk JW, Sivertsen DM, Petersen J, Nilsen P, Petersen HV. Barriers and facilitators for implementing a new screening tool in an emergency department: A qualitative study applying the theoretical domains framework. J Clin Nurs. 2016;25(19–20):2786–97. https://doi.org/10.1111/jocn.13275 .

MacWilliams K, Curran J, Racek J, Cloutier P, Cappelli M. Barriers and facilitators to implementing the heads-ed. Pediatr Emerg Care. 2017;33(12):774–80. https://doi.org/10.1097/pec.0000000000000651 .

Bradshaw C, Atkinson S, Doody O. Employing a qualitative description approach in health care research. Glob Qual Nurs Res. 2017;4:233339361774228. https://doi.org/10.1177/2333393617742282 .

Wildenbos GA, Peute LW, Jaspers MWM. Impact of patient-centered eHealth applications on patient outcomes: A review on the mediating influence of human factor issues. Yearb Med Inform. 2016;25(01):113–9. https://doi.org/10.15265/iy-2016-031 .

Elbert NJ, van Os-Medendorp H, van Renselaar W, Ekeland AG, Hakkaart-van Roijen L, Raat H, Nijsten TE, Pasmans SG. Effectiveness and cost-effectiveness of eHealth interventions in somatic diseases: a systematic review of systematic reviews and meta-analyses. J Med Internet Res. 2014;16(4):e110. https://doi.org/10.2196/jmir.2790 .

Catwell L, Sheikh A. Evaluating ehealth interventions: The need for continuous systemic evaluation. PLoS Medicine. 2009;6(8):e1000126.  https://doi.org/10.1371/journal.pmed.1000126 .

Canadian Institute for Health Information. Wait times for priority procedures in Canada — Data Table. Ottawa: CIHI; 2021.

Torjesen I. Latest waiting time figures for emergency departments in England are worst on record. BMJ. 2018. https://doi.org/10.1136/bmj.k1658 .

Thorne S, Kirkham SR, MacDonald-Emes J. Interpretive description: a noncategorical qualitative alternative for developing nursing knowledge. Res Nurs Health. 1997;20(2):169–77.

Mitchell GJ, Cody WK. The role of theory in qualitative research. Nurs Sci Q. 1993;6(4):170–8.

Thorne S. Interpretive description: Qualitative research for applied practice. New York and London: Routledge; 2016.

Alshehri HH, Wolf A, Öhlén J, Olausson S. Healthcare professionals’ perspective on palliative care in intensive care settings: an interpretive descriptive study. Global Qualitative Nursing Research. 2022;9:23333936221138076.

Wong ST, MacDonald M, Martin-Misener R, Meagher-Stewart D, O’Mara L, Valaitis RK. What systemic factors contribute to collaboration between primary care and public health sectors? an interpretive descriptive study. BMC Health Services Research. 2017;17(1):796.  https://doi.org/10.1186/s12913-017-2730-1 .

Denzin NK. The research act: A theoretical introduction to sociological methods (2nd ed.). New York: McGraw–Hill Book Company; 1978.

Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park: Sage Publications; 1990.

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implementation Science. 2012;7(1):50.  https://doi.org/10.1186/1748-5908-7-50 .

Murray E, Burns J, May C, Finch T, O’Donnell C, Wallace P, Mair F. Why is it difficult to implement e-health initiatives? A qualitative study. Implementation Science. 2011;6(1). https://doi.org/10.1186/1748-5908-6-6 .

Day RM, Demski RJ, Pronovost PJ, Sutcliffe KM, Kasda EM, Maragakis LL, Paine L, Sawyer MD, Winner L. Operating management system for high reliability: leadership, accountability, learning and innovation in healthcare. J Patient Saf Risk Manage. 2018;23(4):155–66. https://doi.org/10.1177/2516043518790720 .

Bonawitz K, Wetmore M, Heisler M, Dalton VK, Damschroder LJ, Forman J, Allan KR, Moniz MH. Champions in context: Which attributes matter for change efforts in healthcare? Implementation Science. 2020;15(1):62.  https://doi.org/10.1186/s13012-020-01024-9 .

Manca C, Grijalvo M, Palacios M, Kaulio M. Collaborative Workplaces for Innovation in service companies: barriers and enablers for supporting new ways of working. Serv Bus. 2018;12(3):525–50. https://doi.org/10.1007/s11628-017-0359-0 .

de Wit K, Curran J, Thoma B, Dowling S, Lang E, Kuljic N, Perry JJ, Morrison L. Review of Implementation Strategies to change healthcare provider behaviour in the emergency department. CJEM. 2018;20(3):453–60. https://doi.org/10.1017/cem.2017.432 .

Šuc J, Ganslandt T, Prokosch H-U. Applicability of lewin´s change management model in a hospital setting. Methods Inf Med. 2009;48(05):419–28. https://doi.org/10.3414/me9235 .

Barnett J, Vasileiou K, Djemil F, Brooks L, Young T. Understanding innovators’ experiences of barriers and facilitators in implementation and diffusion of healthcare service innovations: a qualitative study. BMC Health Serv Res. 2011;11(1). https://doi.org/10.1186/1472-6963-11-342 .

Ravaghi H, Alidoost S, Mannion R, Bélorgeot VD. Models and methods for determining the optimal number of beds in hospitals and regions: a systematic scoping review. BMC Health Serv Res. 2020;20(1). https://doi.org/10.1186/s12913-020-5023-z .

Lv C-M, Zhang L. How can collective leadership influence the implementation of change in health care? Chin Nurs Res. 2017;4(4):182–5. https://doi.org/10.1016/j.cnre.2017.10.005 .

Ortega A, Van den Bossche P, Sánchez-Manzanares M, Rico R, Gil F. The influence of change-oriented leadership and psychological safety on team learning in healthcare teams. J Bus Psychol. 2013. https://doi.org/10.1007/s10869-013-9315-8 .

Hamilton AB, Cohen AN, Young AS. Organizational readiness in specialty mental health care. J Gen Intern Med. 2010;25(S1):27–31. https://doi.org/10.1007/s11606-009-1133-3 .

Edwards N, Saltman RB. Re-thinking barriers to organizational change in public hospitals. Isr J Health Policy Res. 2017;6(1):8.  https://doi.org/10.1186/s13584-017-0133-8 .

Zapka J, Simpson K, Hiott L, Langston L, Fakhry S, Ford D. A mixed methods descriptive investigation of readiness to change in rural hospitals participating in a tele-critical care intervention. BMC Health Serv Res. 2013;13(1):33.  https://doi.org/10.1186/1472-6963-13-33 .

Grol R. Improving patient care the implementation of change in Health Care. Hoboken: Wiley-Blackwell; 2013.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629. https://doi.org/10.1111/j.0887-378x.2004.00325.x .

Simpson DD, Dansereau D. Assessing organizational functioning as a step toward innovation. Science & Practice Perspectives. 2007;3(2):20–8. https://doi.org/10.1151/spp073220 .

Bodenheimer T, Wagner EH, Grumbach K. Improving primary care for patients with chronic illness. JAMA. 2002;288(15):1909.  https://doi.org/10.1001/jama.288.15.1909 .

Geerligs L, Rankin NM, Shepherd HL, Butow P. Hospital-based interventions: a systematic review of staff-reported barriers and facilitators to implementation processes. Implementation Science. 2018;13(1):36.  https://doi.org/10.1186/s13012-018-0726-9 .

Corrigan PW, McCracken SG, Kommana S, Edwards M, Simpatico T. Staff perceptions about barriers to innovative behavioral rehabilitation programs. Cogn Ther Res. 1996;20(5):541–51. https://doi.org/10.1007/bf02227912 .

Statistics Canada, Government of Canada. (2021). Survey on Health Care Workers' experiences during the pandemic (SHCWEP). Survey on Health Care Workers' Experiences During the Pandemic (SHCWEP). Retrieved from https://www.statcan.gc.ca/en/survey/household/5362

Weiner BJ. A theory of organizational readiness for change. Implementation Science. 2009;4(1):67.  https://doi.org/10.1186/1748-5908-4-67 .

Bergen N, Labonté R. “Everything is perfect, and we have no problems”: detecting and limiting social desirability bias in qualitative research. Qual Health Res. 2020;30(5):783–92.

Download references

Acknowledgements

Not applicable

Funding agencies providing financial support for the SurgeCon study include:

-Canadian Institutes of Health Research.

-Newfoundland and Labrador Provincial Government (Department of Industry, Energy and Technology).

-Eastern Health (NL Eastern Regional Health Authority).

-Trinity Conception Placentia Health Foundation.

Among the funding agencies providing financial support, only Eastern Health is assisting with the collection of data. The design of the study, analysis, interpretation of data, and manuscript preparation are being completed independently by the research team.

Author information

Authors and Affiliations

Centre for Rural Health Studies, Faculty of Medicine, Memorial University of Newfoundland, St. John’s, NL, A1B 3V6, Canada

Nahid Rahimipour Anaraki, Meghraj Mukhopadhyay, Oliver Hurley & Shabnam Asghari

Faculty of Business Administration, Memorial University of Newfoundland, St. John’s, NL, A1B 3V6, Canada

Jennifer Jewer

Discipline of Family Medicine, Faculty of Medicine, Memorial University of Newfoundland, St. John’s, NL, A1B 3V6, Canada

Christopher Patey

Eastern Health, Carbonear Institute for Rural Reach and Innovation By the Sea, Carbonear General Hospital, Carbonear, NL, A1Y 1A4, Canada

Paul Norman

Faculty of Medicine, Memorial University of Newfoundland, St. John’s, NL, A1B 3V6, Canada

Holly Etchegary

Discipline of Family Medicine, Faculty of Medicine, Faculty of Medicine Building, Memorial University of Newfoundland, 300 Prince Philip Drive, St. John’s, Newfoundland, A1B 3V6, Canada

Shabnam Asghari


Contributions

NRA, MM, JJ, CP, PN, OH, HE, and SA have made substantial contributions to writing the main manuscript text and revising it. All authors reviewed the manuscript.

Corresponding author

Correspondence to Shabnam Asghari.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for the SurgeCon study was granted on March 19, 2020 by the Newfoundland and Labrador Health Research Ethics Board. Ethics approval will be renewed annually until the end of the study. All methods were performed in accordance with the relevant guidelines and regulations. Informed consent was obtained from all the participants. HREB Reference #: 2019.264.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Anaraki, N.R., Mukhopadhyay, M., Jewer, J. et al. A qualitative study of the barriers and facilitators impacting the implementation of a quality improvement program for emergency departments: SurgeCon. BMC Health Serv Res 24, 855 (2024). https://doi.org/10.1186/s12913-024-11345-w


Received: 04 July 2023

Accepted: 23 July 2024

Published: 27 July 2024

DOI: https://doi.org/10.1186/s12913-024-11345-w


Keywords:
  • Quality improvement program
  • Facilitators
  • Pre-implementation

BMC Health Services Research

ISSN: 1472-6963


Quantitative Data: Definition, Examples, Types, Methods, and Analysis


An estimated 35% of startups fail because there is no market need for their product, often because they never conducted the customer research needed to determine whether what they are building is what customers actually want.

To gather the information needed to avoid this, quantitative data is a valuable tool for all startups. This article will examine quantitative data, the difference between quantitative and qualitative data, and how to collect the former.

  • Quantitative data, expressed numerically, is crucial for analysis, driving strategic decisions, and understanding consumer behavior and market trends.
  • Metrics like DAU, MRR, sales figures, satisfaction scores, and traffic are examples of quantitative data across industries.
  • Quantitative data is numeric and measurable, identifying patterns or trends, while qualitative data is descriptive, providing deeper insights and context.
  • Nominal data categorizes information without order and labels variables like user roles or subscription types. It is often shown in bar or pie charts.
  • Ordinal data categorizes information in a specific order, such as satisfaction ratings or ticket priorities, and is often shown in a bar or stacked bar chart.
  • Discrete data is numerical and takes specific values, like daily sign-ups or support tickets, and is often shown in bar or column charts.
  • Continuous data can take any numerical value within a range, such as user time on a platform or revenue over time, and is often shown in line graphs or histograms.
  • Quantitative data is objective, handles large datasets, and enables easy comparisons, providing clear insights and generalized conclusions in various fields.
  • However, quantitative data analysis lacks contextual understanding, requires analytical expertise, and is influenced by data collection quality that may affect result validity.
  • Customer feedback surveys, triggered by tools like Userpilot, collect consistent quantitative data, providing reliable numerical insights into customer satisfaction and experiences.
  • Product analytics tools track user interactions and feature usage, offering insights into user behavior and improving the user experience.
  • Tracking customer support data identifies common issues and areas for improvement, enhances service quality, and helps understand customer needs.
  • Implementing A/B tests and other experiments provides quantitative data on feature performance, helping teams make informed decisions to enhance product and user experience.
  • Searching platforms like Kaggle or Statista for accurate, reliable datasets enhances product analysis by providing broader context and robust comparison data.
  • Statistical analysis uses mathematical techniques to summarize and infer data patterns, helping SaaS companies understand user behavior, evaluate features, and identify engagement trends.
  • Trend analysis tracks quantitative data to identify patterns, helping SaaS companies forecast outcomes, understand variations, and plan strategic initiatives effectively.
  • Funnel analysis tracks user progression through stages, identifies drop-off points to enhance user experience, and increases conversions for SaaS companies.
  • Cohort analysis groups users by attribute and tracks behavior over time to understand retention and engagement.
  • Path analysis maps user journeys to identify users’ optimal routes, helping SaaS companies streamline and enhance the user experience.
  • Feedback analysis examines responses to close-ended questions to identify user sentiments and areas for improvement.
  • If you want to collect quantitative data within your product and analyze it, then learn how Userpilot can help you. Book a demo now!


What is quantitative data?

Quantitative data is information that can be measured and expressed numerically. It is essential for making data-driven decisions, as it provides a concrete foundation for analysis and evaluation.

In various fields, such as market research, quantitative data helps businesses understand consumer behavior, market trends, and overall performance. By collecting and analyzing numerical data, companies can gain insights that drive strategic decisions and improve their products or services.

Whether you are conducting a survey, running experiments, or gathering information from other sources, quantitative data analysis is key to uncovering patterns, testing hypotheses, and making informed decisions based on solid evidence.

What are examples of quantitative data?

Quantitative data comes in many forms and is used across various industries to provide measurable and numerical insights. Here are some examples of quantitative data:

  • Daily Active Users (DAU): This metric counts the number of unique users interacting with a product or service daily. It is crucial for understanding user engagement and product usage trends.
  • Monthly Recurring Revenue (MRR): For SaaS businesses, MRR is a vital metric that shows the predictable revenue generated each month from subscriptions. It helps forecast growth and financial planning.
  • Sales figures: This includes the total number of products sold or services rendered over a specific period. Sales data helps in evaluating business performance and market demand.
  • Customer satisfaction scores: Often collected through surveys, these scores quantify customers’ satisfaction with a product or service.
  • Website traffic: Measured in terms of visits, page views, and unique visitors, this quantitative data helps businesses understand their online presence and the effectiveness of their marketing efforts.
  • Conversion rates: This metric shows the percentage of users who take a desired action, such as making a purchase or signing up for a newsletter, out of the total number of visitors.
  • Churn rate: This represents the percentage of customers who stop using a product or service over time. It’s essential for understanding customer retention.
  • Average Revenue Per User (ARPU): This metric calculates the average revenue generated per user, which helps assess each customer’s value to the business.
  • Bounce rate: In web analytics, the bounce rate indicates the percentage of visitors who leave a website after viewing only one page. It’s useful for evaluating the effectiveness of a website’s content and user experience.
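Several of these metrics are simple ratios over counts and revenue. A minimal sketch of the arithmetic (all figures below are invented for illustration):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of visitors who take the desired action."""
    return 100 * conversions / visitors

def churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Percentage of customers lost over the period."""
    return 100 * customers_lost / customers_at_start

def arpu(total_revenue: float, active_users: int) -> float:
    """Average revenue per user for the period."""
    return total_revenue / active_users

# Hypothetical monthly figures:
print(conversion_rate(120, 4000))  # 3.0
print(churn_rate(25, 500))         # 5.0
print(arpu(48_000.0, 1_600))       # 30.0
```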

Differences between quantitative and qualitative data

Quantitative data and qualitative data are two fundamental types of information used in research and analysis, each serving distinct purposes and represented in different forms.

Quantitative data is numeric and measurable. It allows you to quantify variables and identify patterns or trends that can be generalized, for example by tracking product usage trends or analyzing charts to understand market movements. Some quantitative data examples include:

  • The number of daily active users on a platform.
  • Monthly recurring revenue.
  • Customer satisfaction scores.
  • Website traffic metrics, like page views.

On the other hand, qualitative data is descriptive and subjective, often represented in words and visuals. It aims to explore deeper insights, understand data, and provide context to behaviors and experiences.

Examples of qualitative data include:

  • Customer reviews and testimonials.
  • Interview responses.
  • Social media interactions.
  • Observations recorded during user tests.

Different types of quantitative data

Understanding the different types of quantitative data is essential for effective data analysis. These types help categorize and analyze data accurately to derive meaningful insights and make informed decisions.

Nominal data

Nominal data categorizes information without a specific order or ranking. It is used to label variables that do not have a quantitative value.

For instance, in a SaaS platform, user roles can be categorized as ‘admin,’ ‘editor,’ or ‘viewer.’ Subscription types might be classified as ‘free,’ ‘basic,’ ‘premium,’ or ‘enterprise.’

This data type is typically represented using bar charts or pie charts to show the frequency or proportion of each category.

Ordinal data

Ordinal data categorizes information with a specific order or ranking. It is used to label variables that follow a particular sequence.

Examples include:

  • Rating customer satisfaction as ‘poor,’ ‘fair,’ ‘good,’ ‘very good,’ or ‘excellent.’
  • Ranking support ticket priorities as ‘low,’ ‘medium,’ or ‘high.’
  • User feedback ratings on features as ‘1 star’ to ‘5 stars.’

This type of data is typically represented using bar charts or stacked bar charts to illustrate the order and frequency of each category.

Discrete data

Discrete data consists of numerical values that can only take on specific, separate values and cannot be subdivided meaningfully.

Examples include the number of new sign-ups daily, the count of support tickets received, and the number of active users at a given time.

This type of numerical data is often represented using bar charts or column charts to display the frequency of each value.

Continuous data

Continuous data is numerical information that can take on any numerical value within a range.

In a SaaS context, examples include measuring the amount of time users spend on a platform, the bandwidth usage of an application, and the revenue generated over a specific period. Continuous data, along with interval data, helps identify patterns and trends over time.
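The four types can be illustrated side by side in code. A small sketch with hypothetical SaaS values (the subscription tiers, ratings, and counts are invented):

```python
from collections import Counter

# Nominal: categories with no inherent order (e.g. subscription tiers).
subscriptions = ["free", "basic", "premium", "basic", "free", "free"]
print(Counter(subscriptions))  # frequency per category, for a bar or pie chart

# Ordinal: categories with a defined order (satisfaction ratings).
scale = ["poor", "fair", "good", "very good", "excellent"]
ratings = ["good", "excellent", "fair", "good"]
ranks = [scale.index(r) + 1 for r in ratings]  # map labels to ranks 1-5

# Discrete: whole-number counts (daily sign-ups) that cannot be subdivided.
daily_signups = [14, 9, 21, 17]

# Continuous: any value within a range (session length in minutes).
session_minutes = [3.25, 12.8, 0.5, 47.1]
avg_session = sum(session_minutes) / len(session_minutes)
print(ranks, daily_signups, avg_session)
```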

Pros of analyzing quantitative data

Analyzing quantitative data offers several advantages, making it a valuable approach in various fields, especially in SaaS. Here are some key benefits:

Provides measurable and verifiable data

Quantitative data is numeric and objective, allowing for precise measurement and verification. This reduces the influence of personal biases and subjectivity in analysis, leading to more reliable and consistent results.

Analyzing customer data using quantitative methods can provide clear insights into user behavior and preferences, helping businesses make data-driven decisions.

Enables analysis of large datasets

Quantitative data analysis can handle large datasets efficiently, enabling the identification of patterns and trends across extensive samples.

This capability makes it possible to draw broad, generalized conclusions that can be applied to larger populations. For example, a company might analyze usage data from thousands of users to understand overall engagement trends and identify areas for improvement.

Allows easy comparison across different groups, time periods, and variables

Quantitative data allows straightforward comparisons across various groups, time periods, and variables. This facilitates the evaluation of changes over time, differences between demographics, and the impact of different factors on outcomes.

For instance, comparing customer satisfaction scores before and after a product update can help assess the effectiveness of the changes and guide future improvements.

Cons of quantitative data analysis

While quantitative data analysis offers many benefits, it also has some drawbacks:

Lacks contextual understanding

Quantitative data can miss the deeper context and nuances of human behavior, focusing solely on numbers without explaining the reasons behind actions. For example, tracking user behavior may show usage patterns but not the motivations or feelings behind them.

Requires analytical expertise

Accurate analysis and interpretation of quantitative data require specialized skills. Without proper expertise, there is a risk of misinterpretation and incorrect conclusions, which can negatively impact decision-making.

Influenced by data collection quality

The reliability of quantitative analysis depends on the data collection methods and the quality of measurement tools. Poor data collection can lead to data discrepancies, affecting the validity of the results. Ensuring consistent, high-quality data collection is essential for accurate analysis.

How to collect data for quantitative research?

Collecting data for quantitative research involves using systematic and structured methods to gather numerical information. Let’s look at a few methods in detail.

Customer feedback surveys

Customer feedback surveys are a key method for collecting quantitative data. Tools like Userpilot can trigger in-app surveys with closed-ended questions to ensure consistent data collection.

Conducting these surveys quarterly or after a specific period helps track changes in customer satisfaction and other important metrics. This approach provides reliable, numerical insights into customer opinions and experiences.


Product usage data

Product analytics tools are essential for tracking user interactions and feature usage. Utilizing these tools allows you to monitor metrics such as user sessions, feature adoption, and user engagement regularly.

This quantitative data provides valuable insights into how users interact with your product, helping you understand their behavior and improve the overall user experience.

Customer support data

Tracking customer support data is crucial for quantitative research. You can record details such as ticket number, issue type, resolution time, and customer feedback by monitoring support tickets.

Organize these tickets into categories, such as feature requests, to identify common problems and areas needing product improvement. This approach helps understand customer needs and enhance overall service quality.


Experiments

Implementing experiments, such as A/B tests, is a powerful method for collecting quantitative data. By comparing the performance of different features or designs, you can gain valuable insights into what works best for your users.

Use the insights gained from these A/B tests and other product experimentation methods to make informed decisions that enhance your product and user experience.
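The comparison step of an A/B test is arithmetic plus a significance check. A stdlib-only sketch of a two-proportion z-test, one common way to compare two conversion rates (the experiment and all figures are hypothetical):

```python
from math import sqrt, erfc

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))           # two-sided, normal approximation
    return z, p_value

# Hypothetical experiment: variant B (new onboarding flow) vs. control A.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2350)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```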


Open-source datasets

Searching for datasets on platforms like Kaggle or Statista can provide valuable information relevant to your research. However, to avoid issues with data discrepancy, ensure these datasets are accurate and reliable before incorporating them into your analysis.

Utilizing accurate open-source datasets can significantly enhance your product analysis by providing a broader context and more robust quantitative data for comparison and insights.


Quantitative data analysis methods for gathering actionable insights

Analyzing quantitative data involves using various methods to extract meaningful and actionable insights. These techniques help understand the data’s patterns, trends, and relationships, enabling informed decision-making and strategic planning.

Statistical analysis

Statistical analysis involves using mathematical techniques to summarize, describe, and infer patterns from data. This method helps validate hypotheses and make data-driven decisions.

For SaaS companies, statistical analysis can be crucial in understanding user behavior, evaluating the effectiveness of new features, and identifying trends in user engagement.

By leveraging statistical techniques, SaaS businesses can derive meaningful insights from their data, allowing them to optimize their products and services based on empirical evidence.
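Descriptive statistics are the usual starting point. A minimal sketch using Python’s standard statistics module on a hypothetical sample of daily session counts:

```python
import statistics

# Hypothetical daily session counts for a sample of eight users.
sessions = [12, 7, 9, 15, 11, 8, 14, 10]

summary = {
    "mean": statistics.mean(sessions),      # central tendency
    "median": statistics.median(sessions),  # robust to outliers
    "stdev": statistics.stdev(sessions),    # sample standard deviation (spread)
}
print(summary)
```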

Trend analysis

Trend analysis involves tracking quantitative data points and metrics to identify consistent patterns. Using a tool like Userpilot, SaaS companies can generate detailed trend analysis reports that provide valuable insights into how various metrics evolve.

This method enables SaaS companies to forecast future outcomes, understand seasonal variations, and plan strategic initiatives accordingly. By identifying trends, businesses can anticipate changes, adapt their strategies, and stay ahead of market dynamics.
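One simple way to expose a trend in a noisy metric series is a moving average. A sketch over hypothetical weekly active-user counts:

```python
def moving_average(values, window=3):
    """Smooth a metric series to expose the underlying trend."""
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# Hypothetical weekly active-user counts.
wau = [980, 1010, 995, 1040, 1080, 1075, 1120]
print(moving_average(wau))  # a rising smoothed series signals an upward trend
```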


Funnel analysis

Funnel analysis defines key stages in the user journey and tracks the number of users progressing through each stage.

This method helps SaaS companies identify friction and drop-off points within the funnel. By understanding where users are dropping off, businesses can implement targeted improvements to enhance user experience and increase conversions.
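The core of funnel analysis is the stage-to-stage conversion ratio. A sketch over a hypothetical four-stage funnel (stage names and counts are invented):

```python
# Hypothetical user counts at each funnel stage, in order.
funnel = [
    ("visited landing page", 10_000),
    ("signed up", 2_500),
    ("activated", 1_200),
    ("upgraded to paid", 300),
]

# Conversion from each stage to the next; the complement is the drop-off.
for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {count / prev:.0%} of previous stage "
          f"({1 - count / prev:.0%} drop-off)")
```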


Cohort analysis

Cohort analysis groups users into cohorts based on attributes such as the month of sign-up or acquisition channel and tracks their behavior over time.

This method allows SaaS companies to understand user retention and engagement patterns by comparing how cohorts perform over various periods. By analyzing these patterns, businesses can identify successful strategies and improvement areas.
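A retention table is the typical output of cohort analysis: distinct active users per cohort, per month since sign-up, divided by the cohort’s starting size. A sketch over hypothetical activity records:

```python
from collections import defaultdict

# Hypothetical records: (user_id, signup_month, months_since_signup_when_active)
activity = [
    ("u1", "2024-01", 0), ("u1", "2024-01", 1), ("u1", "2024-01", 2),
    ("u2", "2024-01", 0), ("u2", "2024-01", 1),
    ("u3", "2024-02", 0), ("u3", "2024-02", 1),
    ("u4", "2024-02", 0),
]

# Distinct active users per cohort per month offset.
cohorts = defaultdict(lambda: defaultdict(set))
for user, cohort, offset in activity:
    cohorts[cohort][offset].add(user)

for cohort, months in sorted(cohorts.items()):
    size = len(months[0])  # users active in their sign-up month
    retention = {m: len(users) / size for m, users in sorted(months.items())}
    print(cohort, retention)  # e.g. month-2 retention for the Jan cohort is 50%
```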


Path analysis

Path analysis maps user journeys and analyzes the actions taken by users. This method helps SaaS companies identify the “happy path,” or the optimal route users take to achieve their goals.

By understanding these paths, businesses can streamline the user experience, making it more intuitive and efficient.
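At its simplest, path analysis is counting which event sequences occur most often. A sketch over hypothetical session journeys (event names are invented):

```python
from collections import Counter

# Hypothetical event sequences, one tuple per user session.
journeys = [
    ("login", "dashboard", "report", "export"),
    ("login", "dashboard", "settings"),
    ("login", "dashboard", "report", "export"),
    ("login", "search", "report"),
]

# The most frequent full path is a candidate "happy path".
most_common_path, count = Counter(journeys).most_common(1)[0]
print(" -> ".join(most_common_path), f"({count} sessions)")
```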

Feedback analysis

Feedback analysis involves using questionnaires and examining responses to close-ended questions to identify patterns in customer feedback. This quantitative data helps you to understand common user sentiments, preferences, and areas needing improvement.

Businesses can make informed decisions to enhance their products and services by systematically analyzing feedback.
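For close-ended ratings, the analysis often reduces to a distribution plus a summary score. A sketch computing the rating distribution and a CSAT-style score (share of 4s and 5s) over hypothetical survey answers:

```python
from collections import Counter

# Hypothetical answers to "How satisfied are you?" on a 1-5 scale.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

counts = Counter(responses)                                   # distribution
csat = sum(1 for r in responses if r >= 4) / len(responses)   # share of 4s and 5s
print(dict(sorted(counts.items())), f"CSAT: {csat:.0%}")
```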


Collecting quantitative data is essential if you want to build a product that succeeds. Your customers are the only people who can tell you whether you are succeeding, so speaking to them and analyzing the quantitative data you collect will help you produce the best product you can.

If you want help collecting and analyzing quantitative data, Userpilot can help. Book a demo to see exactly how.


Frequently asked questions

What’s the difference between quantitative and qualitative methods?

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group. As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased.

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research. It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations, you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity. In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity. You need to have face validity, content validity, and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity. Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity.

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalizations, which are often the goal of quantitative research. As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research.

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias.

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails conducting the entire study again, including collecting new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup (probability sampling), while in quota sampling you select a predetermined number or proportion of units in a non-random manner (non-probability sampling).
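The stratified side of this comparison can be sketched in a few lines of Python. Everything below — the population of 600 undergraduates and 400 postgraduates, the 10% sampling fraction — is invented for illustration; the point is only that a random draw happens within each subgroup.

```python
import random

# Hypothetical population: 600 undergraduates and 400 postgraduates
population = [(i, "undergrad") for i in range(600)] + \
             [(i, "postgrad") for i in range(600, 1000)]

def stratified_sample(population, fraction, seed=42):
    """Draw a proportional random sample within each stratum (probability sampling)."""
    random.seed(seed)
    strata = {}
    for unit, stratum in population:
        strata.setdefault(stratum, []).append(unit)
    sample = []
    for units in strata.values():
        k = round(len(units) * fraction)
        sample.extend(random.sample(units, k))  # random draw inside the stratum
    return sample

sample = stratified_sample(population, fraction=0.1)
print(len(sample))  # 100 — 60 undergraduates and 40 postgraduates, mirroring the population
```

Because every unit inside each stratum has an equal chance of selection, the result is a probability sample; a quota sample would fill the same 60/40 targets non-randomly.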

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves selecting whoever happens to be available (for example, stopping passersby), which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
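The quota procedure described above can be sketched in Python. The subgroups, quotas, and the simulated stream of "available" people below are all hypothetical; in real quota sampling the stream is whoever the researcher can conveniently reach, which is exactly what makes it non-random.

```python
import random

random.seed(1)

# Hypothetical quotas: a sample of 100 matching estimated population proportions
quotas = {"female": 52, "male": 48}

def quota_sample(quotas):
    """Recruit people as they become available until every subgroup's quota is full."""
    counts = {group: 0 for group in quotas}
    sample = []
    while any(counts[g] < quotas[g] for g in quotas):
        # Simulated stand-in for whoever happens to be available next;
        # in real quota sampling this stream is convenient, not random.
        person = random.choice(list(quotas))
        if counts[person] < quotas[person]:  # if the quota is full, they are turned away
            counts[person] += 1
            sample.append(person)
    return sample, counts

sample, counts = quota_sample(quotas)
print(counts)  # {'female': 52, 'male': 48}
```

Recruitment stops for a subgroup the moment its quota is met, so the final proportions match the estimates by construction rather than by random chance.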

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while in an experiment some sort of treatment condition is applied to at least some participants, typically by random assignment .

An observational study is a great choice if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference with or manipulation of the research subjects, and no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity, because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , alongside face validity , content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process involves the following steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then screens the submission and decides either to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and advises on what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They make the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
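As a rough illustration of these steps, here is a minimal Python sketch. The weight readings and the 30–300 kg plausibility range are made up; real cleaning rules depend on your measures and codebook.

```python
# Hypothetical survey responses: recorded body weight in kg (None = missing)
raw = [68.5, 70.1, "70.1", None, 68.5, 950.0, 72.3]

def clean(values):
    """Screen and diagnose each value, then standardize or remove it."""
    cleaned, seen = [], set()
    for v in values:
        if v is None:              # missing value: remove (or impute, if justified)
            continue
        v = float(v)               # standardize: coerce strings like "70.1" to numbers
        if not 30 <= v <= 300:     # diagnose: a 950 kg body weight can't be valid
            continue
        if v in seen:              # duplicate entry: keep only the first occurrence
            continue
        seen.add(v)
        cleaned.append(v)
    return cleaned

print(clean(raw))  # [68.5, 70.1, 72.3]
```

The same logic scales up: screen each data point, decide systematically whether to standardize, accept, or remove it, and document every rule you apply.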

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have serious practical consequences, leading to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
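A small Python sketch can make this concrete. The two invented datasets below differ only by a constant scaling factor, so they have exactly the same correlation coefficient but regression slopes five times apart.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient: how tightly the points fit a line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def slope(x, y):
    """Least-squares regression slope of y on x: how steep the line is."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

x = [1, 2, 3, 4, 5]
y1 = [2.0, 4.1, 5.9, 8.2, 9.8]   # hypothetical scores
y2 = [v * 5 for v in y1]         # same pattern, five times steeper

print(round(pearson_r(x, y1), 3), round(pearson_r(x, y2), 3))  # 0.999 0.999
print(round(slope(x, y1), 2), round(slope(x, y2), 2))          # 1.97 9.85
```

Both datasets fit their lines equally closely (same r), but only the regression analysis reveals how steep each line is.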

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis, grounded in credible sources , to answer your questions. This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.
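The classic third-variable example — ice cream sales and drowning incidents, both driven by hot weather — can be simulated in a few lines of Python. All the coefficients below are invented purely to generate the confounded pattern.

```python
import random

random.seed(0)

# Hypothetical confounder: temperature drives both ice cream sales and
# drowning incidents; neither variable causes the other.
temperature = [random.uniform(10, 35) for _ in range(1000)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temperature]
drownings = [0.5 * t + random.gauss(0, 3) for t in temperature]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Strong positive correlation, even though there is no causal link between them
print(round(pearson_r(ice_cream, drownings), 2))
```

The two variables correlate only because both respond to the confounder; controlling for temperature would make the association disappear.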

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.
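This cancelling-out effect is easy to simulate. The sketch below uses an invented “true weight” and Gaussian noise purely for illustration:

```python
import random

random.seed(42)  # seeded only so the sketch is reproducible

TRUE_VALUE = 70.0  # a hypothetical true weight in kg

def measure():
    """One reading: the true value plus a chance (random) error."""
    return TRUE_VALUE + random.gauss(0, 2.0)

single = measure()
repeated = [measure() for _ in range(10_000)]
estimate = sum(repeated) / len(repeated)

# A single reading may be off by a few units, but errors in different
# directions cancel out across many readings.
print(round(abs(single - TRUE_VALUE), 2), round(abs(estimate - TRUE_VALUE), 3))
```

The mean of the repeated measurements lands very close to the true value, while any individual reading can miss by much more.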

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a bar graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
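Crossing the levels of two independent variables is just a Cartesian product. A sketch with a hypothetical 2 × 3 design (the variables and levels are invented for illustration):

```python
from itertools import product

# Hypothetical 2x3 factorial design: every level of one independent
# variable is crossed with every level of the other.
caffeine = ["placebo", "caffeine"]           # IV 1, two levels
sleep = ["4 hours", "6 hours", "8 hours"]    # IV 2, three levels

conditions = list(product(caffeine, sleep))  # all combinations of levels
print(len(conditions))  # → 6
```

Two levels × three levels gives six experimental conditions, each a unique combination.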

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
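The lottery method translates directly into code. In this sketch, the 20 numbered participants are hypothetical, and the seed is fixed only to make the example reproducible:

```python
import random

random.seed(7)  # seeded only so the sketch is reproducible

# Hypothetical sample: each participant has been assigned a unique number.
participants = list(range(1, 21))

# Shuffle, then split in half: the lottery method done in code.
random.shuffle(participants)
control = sorted(participants[:10])
experimental = sorted(participants[10:])

print(len(control), len(experimental))  # → 10 10
```

Every participant has an equal chance of landing in either group, which is exactly what random assignment requires.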

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable.
  • When it’s taken into account (statistically controlled for), the correlation between the independent and dependent variables is weaker than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population by your target sample size.
  • Choose every k th member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
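The three steps above reduce to a slice operation in code. This sketch uses an invented population list of 100 people; the random starting point within the first interval is a common refinement:

```python
import random

random.seed(3)  # seeded only so the sketch is reproducible

# Hypothetical population list (must not be ordered cyclically).
population = [f"person_{i}" for i in range(1, 101)]

target_sample_size = 20
k = len(population) // target_sample_size  # sampling interval: k = 5

# Random starting point within the first interval, then every k-th member.
start = random.randrange(k)
sample = population[start::k]

print(k, len(sample))  # → 5 20
```

Selecting every 5th person from a list of 100 yields exactly the 20-person sample the steps call for.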

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
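A proportional version of this two-step process can be sketched as follows. The population, strata, and 20% sampling fraction are all invented for illustration:

```python
import random

random.seed(11)  # seeded only so the sketch is reproducible

# Hypothetical population tagged with a stratifying characteristic.
population = (
    [("urban", f"u{i}") for i in range(60)]
    + [("rural", f"r{i}") for i in range(30)]
    + [("suburban", f"s{i}") for i in range(10)]
)

def stratified_sample(pop, fraction):
    """Randomly sample the same fraction from each stratum (proportional allocation)."""
    strata = {}
    for stratum, member in pop:                      # step 1: divide into strata
        strata.setdefault(stratum, []).append(member)
    return {                                         # step 2: random sample per stratum
        stratum: random.sample(members, round(len(members) * fraction))
        for stratum, members in strata.items()
    }

sample = stratified_sample(population, 0.2)
print({s: len(m) for s, m in sample.items()})  # → {'urban': 12, 'rural': 6, 'suburban': 2}
```

Each stratum is represented in proportion to its share of the population, which is what makes the resulting estimates more precise.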

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
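The single-stage and double-stage variants can be sketched side by side. The schools, students, and counts here are all hypothetical:

```python
import random

random.seed(5)  # seeded only so the sketch is reproducible

# Hypothetical population grouped into clusters (e.g., schools of 50 students).
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(50)] for i in range(10)}

# First, randomly select a subset of clusters.
chosen = random.sample(sorted(clusters), 3)

# Single-stage: collect data from every unit within the chosen clusters.
single_stage = [u for c in chosen for u in clusters[c]]

# Double-stage: randomly sample units within each chosen cluster.
double_stage = [u for c in chosen for u in random.sample(clusters[c], 10)]

print(len(single_stage), len(double_stage))  # → 150 30
```

Both variants only ever visit the three selected schools, which is where the time and cost savings come from.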

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling. To collect detailed data on the US population, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
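With a complete list of the population, simple random sampling is a one-liner. The household list below is invented for illustration:

```python
import random

random.seed(0)  # seeded only so the sketch is reproducible

# Hypothetical sampling frame: a list of every member of the population.
population = [f"household_{i}" for i in range(1, 1001)]

# random.sample draws without replacement; every member has an equal
# chance of being selected.
sample = random.sample(population, 100)

print(len(sample), len(set(sample)))  # → 100 100
```

The equal-chance property is what distinguishes this from convenience or systematic approaches.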

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference from a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements and a continuum of response options, usually with 5 or 7 possible responses, to capture their degree of agreement.
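Combining item scores into an overall scale score is straightforward, with one wrinkle: negatively worded items must be reverse-scored first. The 5-item responses below are invented for illustration:

```python
# Hypothetical 5-item Likert scale (1 = strongly disagree ... 5 = strongly agree).
# Item 3 is negatively worded, so it is reverse-scored before summing.

responses = [4, 5, 2, 4, 3]   # one participant's answers
reverse_scored = {2}          # zero-based index of negatively worded items
POINTS = 5                    # 5-point response scale

scored = [
    (POINTS + 1 - r) if i in reverse_scored else r
    for i, r in enumerate(responses)
]
total = sum(scored)           # overall scale score, often treated as interval

print(scored, total)  # → [4, 5, 4, 4, 3] 20
```

Each individual item remains ordinal, but the summed total is what researchers sometimes treat as interval data.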

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
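One simple way to see this “could it have arisen by chance?” logic is a permutation test on two group means. The scores below are invented, and this is just one of many hypothesis-testing procedures:

```python
import random

random.seed(42)  # seeded only so the sketch is reproducible

# Hypothetical scores from a treatment group and a control group.
treatment = [5, 6, 7, 8, 9]
control = [1, 2, 3, 2, 1]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Under the null hypothesis, group labels are interchangeable. Shuffle the
# labels many times and count how often a difference at least as large as
# the observed one arises by chance alone.
pooled = treatment + control
extreme = 0
n_perm = 5000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_perm
print(p_value < 0.05)  # a small p-value: the pattern is unlikely under chance
```

A small p-value means a difference this large almost never appears when the labels are random, so the observed pattern is unlikely to be a chance finding.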

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude–treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

  • Longitudinal study: repeated observations; observes the same sample multiple times; follows changes in participants over time.
  • Cross-sectional study: observations at a single point in time; observes different samples (a “cross-section”) of the population; provides a snapshot of society at a given point.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The independent variable is the amount of nutrients added to the crop field.
  • The dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


  • Open access
  • Published: 31 July 2024

Managers in the context of small business growth: a qualitative study of working conditions and wellbeing

  • Elena Ahmadi, ORCID: orcid.org/0000-0001-6897-1194 (affiliations 1, 2)
  • Daniel Lundqvist, ORCID: orcid.org/0000-0001-9722-178X (affiliation 3)
  • Gunnar Bergström, ORCID: orcid.org/0000-0002-0161-160X (affiliations 1, 4)
  • Gloria Macassa, ORCID: orcid.org/0000-0003-4415-7942 (affiliations 5, 6)

BMC Public Health, volume 24, Article number: 2075 (2024)

In view of the importance of managers’ wellbeing for their leadership behaviour, employee health, and business effectiveness and survival, a better understanding of managers’ wellbeing and working conditions is important for creating healthy and sustainable businesses. Previous research has mostly provided a static picture of managers’ wellbeing and work in the context of small businesses, missing the variability and dynamism that is characteristic of this context. Therefore, the purpose of this study is to explore how managers in small companies perceive their working conditions and wellbeing in the context of business growth.

The study is based on qualitative semi-structured interviews with 20 managers from twelve small companies. Content and thematic analysis were applied.

The findings indicate that a manager’s working environment evolves from its initial stages and through the company’s growth, leading to variations over time in the manager’s experiences of wellbeing and work–life balance as well as changes in job demands and resources. Managers’ working situation becomes less demanding and more manageable when workloads and working hours are reduced and a better work–life balance is achieved. The perceived improvement is related to changes in organizational factors (e.g. company resources), but also to individual factors (e.g. managers’ increased awareness of the importance of a sustainable work situation). However, there were differences in how the working conditions and wellbeing changed over time and how organizational and individual resources affected the studied managers’ wellbeing.

Conclusions

This study shows that, in the context of small business, managers’ working conditions and wellbeing are dynamic and are linked to growth-related changes that occur from the start of organizational activities and during periods of growth. In addition, the findings suggest that changes in managers’ working conditions and wellbeing follow different trajectories over time because of the interaction between organizational and personal factors.


Introduction

Small businesses play a significant role in global economies [1, 2], and growing businesses are especially important in creating jobs and contributing to economic growth [3, 4, 5]. Previous research has shown the importance of managers’ wellbeing for leadership behaviours [6], employee health [7], and business survival and effectiveness [8, 9]. Managers’ working conditions influence their wellbeing, which is important for their practiced leadership [6, 10], which in turn affects employees’ wellbeing [7, 11, 12, 13]. Therefore, a better understanding of managers’ wellbeing and working conditions in the context of small businesses is important for creating healthy and sustainable businesses. Yet few studies have focused on managers’ health and working conditions in the context of small companies.

Previous research has provided a largely static picture of wellbeing and work in small businesses, missing the variability and dynamism characteristic of this context [14]. One aspect of the dynamic context of small businesses is growth and the changes that growth causes in the companies. Hessels et al. [9] report that increasing firm age and size may have implications for managers’ working situation and wellbeing. More research is needed to examine how business growth can impact managers’ working conditions and wellbeing, as this has implications for employees’ wellbeing and company performance. The purpose of this study is to explore perceived changes in working conditions and wellbeing among managers in the context of growing small businesses.

Before discussing the methodology of the study, the section below provides a short overview of the conceptual and theoretical framework of this study, as well as of previous research.

Theoretical framework and previous research

Through the lens of the Job Demands–Resources (JD–R) model [15, 16, 17], this paper explores the wellbeing and working conditions of managers of small businesses. The JD–R model differentiates between two types of factors in the work environment: job demands and job resources [17]. The term “job demands” refers to job characteristics and circumstances requiring physical and psychological effort and carrying physiological and psychological costs [17], e.g. workload and work pace. Resources, e.g. control, autonomy and support, on the other hand, contribute to achieving work goals and to personal growth and development, and counterbalance the job demands and their related physiological and psychological costs [18]. It has been suggested that working conditions characterized by high demands and low resources lead to increased strain and decreased work engagement [18], while work situations with high job demands and high resources are regarded as active and stimulating [19]. Research has shown that wellbeing, in general, is positively influenced by high job resources and negatively by increased job demands [20, 21, 22].
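The high/low combinations of demands and resources described above can be sketched as a small lookup table. This is a toy illustration of the model’s logic, not an instrument used in the study; the two labels for the low-demand combinations are our assumptions, added for completeness:

```python
# Toy sketch of the JD-R logic described above: classify a work situation
# by its combination of job demands and job resources. Two labels follow
# the text (high demands + low resources -> strain; high demands + high
# resources -> active/stimulating); the other two are assumed labels.
def classify_work_situation(demands: str, resources: str) -> str:
    """demands and resources are each 'high' or 'low'."""
    combos = {
        ("high", "low"): "strain risk (increased strain, decreased engagement)",
        ("high", "high"): "active, stimulating work",
        ("low", "high"): "low-strain work",  # assumed label
        ("low", "low"): "passive work",      # assumed label
    }
    return combos[(demands, resources)]

print(classify_work_situation("high", "low"))
print(classify_work_situation("high", "high"))
```

The point of the sketch is that the model treats demands and resources as interacting, not independent: the same high demands are read differently depending on the resources available to meet them.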

The JD–R model is, alongside the demand–control–support model [23, 24] and the effort–reward imbalance model [25, 26], among the most influential models connecting working conditions to wellbeing. It was chosen because it is flexible, enabling, among other things, the inclusion of working conditions and factors relevant to specific occupational settings [27]. Moreover, it has received empirical support across various contexts [28, 29, 30, 31].

Since wellbeing encompasses several dimensions [32, 33], such as physical, emotional, mental, and social [34, 35], many ways of defining the concept have emerged. This paper adopts a broad conceptualization of wellbeing that captures these dimensions and reflects managers’ evaluations of their lives, sense of feeling well, and daily functioning from their unique perspectives [36, 37, 38]. The concept includes individuals’ subjective judgements of their life, work, health, relationships, and sense of purpose, covering both positive aspects of wellbeing (such as feelings of job satisfaction and happiness) and negative aspects (such as feelings of distress and health problems impairing daily functioning and quality of life) [39].

Two strands of research have been applied to small business, one focusing on the general population of managers and the other on entrepreneurs. However, among these studies, very few to date have specifically addressed managers’ wellbeing and working conditions in the context of small and growing companies. For instance, the published research has not sufficiently distinguished between different types of entrepreneurs, i.e. those with and those without employees [9, 14]. The nature of managerial work differs between small and large companies, and even between smaller firms depending on their size [40]. In addition, in small businesses, manager–owners have the combined responsibilities of entrepreneurs, managers, and operative employees, which may impact their work and wellbeing.

Research shows that both entrepreneurs and managers in general experience stressful working situations and high levels of demands, in terms of long working hours, a high workload and fast work pace, poor work–life balance, role conflicts and low support [8, 14, 41, 42, 43, 44, 45, 46]. Entrepreneurs also work under uncertainty and amidst financial problems [8, 14]. On the other hand, both managers and entrepreneurs experience high levels of control, autonomy and decision latitude [14, 41, 47]. Entrepreneurs enjoy flexibility and meaningfulness in work and report high job satisfaction and optimism [8, 14].

Despite the intense demands and stressful work, managers and entrepreneurs generally report good wellbeing, better than that of employees and other non-managers [9, 14, 19, 48]. However, several studies have pointed to risks of decreased wellbeing associated with a managerial position [49, 50, 51, 52]. Similarly, entrepreneurs may run a high risk of burnout [8] and of long-term ill health because of continuous exposure to high levels of stressors [14]. A few studies have reported poor wellbeing among entrepreneurs [53, 54].

Research also highlights differences in the wellbeing of managers based on their hierarchical level, with top managers enjoying better wellbeing and first-line managers experiencing worse wellbeing and working conditions [55, 56]. Buttner [4] suggests that entrepreneurs experience more problems with wellbeing, as well as higher stress and lower job satisfaction, compared with managers, and points to differences in entrepreneurial and managerial work demands.

Business growth is known to be a complex and multifaceted phenomenon [57]. Despite a large volume of research, the area still suffers from insufficient theoretical development and a limited understanding [58, 59]. In business studies, one approach to describing business growth can be found in the rich plethora of life cycle models illustrating firms’ growth trajectories as passing through a number of stages [60, 61]. Although the life cycle approach has been challenged for its determinism and linearity [61], researchers agree on common features in the growth process: a series of stable periods punctuated by crisis points, as well as changes in companies’ basic structure, activities and key challenges over time. In other words, when companies grow, certain transformations occur beyond the change in size and age.

According to a model by Churchill and Lewis [62], which was specifically developed for small growing companies, businesses move through five growth stages: existence, survival, take-off, success, and resource maturity. Each stage involves an increase in the diversity and complexity of five management factors: managerial style and management decision making (including the extent to which decision-making authority is delegated by the owner); organizational structure (involving the layers of management in the company); operational systems (referring to the development of financial, marketing, and production systems in the company); strategic planning (the degree to which a company develops both short- and long-range goals as well as major strategic planning); and owner involvement (the extent to which the owner is active in the business operations and decisions). The set of core problems and challenges that managers face also changes through the stages [62, 63]. According to Churchill and Lewis [62], as a business moves through the growth stages, the owner’s style of decision making becomes less controlling and more delegating. This means that the owner’s involvement in the firm and in daily work decreases, a new layer of management is created with new managers coming in, and the complexity of the organizational structure, operational systems, and strategic planning increases.

Torrès and Julien’s [64] discussion of the denaturing of small businesses (i.e. when businesses no longer have the typical features of small businesses and adopt attributes of larger companies) can help in understanding growth-related change. According to Torrès and Julien [64], the denaturing of small business management practices can be marked by a higher degree of management decentralization, higher levels of labour specialization, the development of more formal, long-term strategies, growing complexity and formalization of information systems, and expanded markets. Denaturing is also accompanied by decreasing proximity in relations and contacts, growing formality and procedures, and a more structured and long-term approach [64].

Thus, growth (in size and complexity) introduces changes in a company’s structural and contextual dimensions [61] that have consequences for the nature of the manager’s role [63] and, presumably, for managers’ working conditions, resources and demands. However, little attention has been paid to business growth from an occupational health perspective.

Therefore, as stated above, this study explores how managers in small companies perceive their working conditions and wellbeing in the context of business growth. Following the theoretical foundation, the next section sets the stage for discussing the methodology that was used in this study.

Materials and methods

Study design and sample

This study used a qualitative methodology based on interviews with managers of small companies. The company selection was linked to a regional project, Successful Companies in Gästrikland (SCiG), which annually credits successful businesses (ranked highest in terms of profitable growth) in a region in mid-Sweden. The selection procedure is fully described elsewhere [40].

For this study, we selected small companies (max. 50 employees) that were nominated for the award between 2008 and 2019 and had been in operation since at least 2008. Interviews were performed with 20 managers from twelve companies. The heterogeneity of the sample was increased by purposefully selecting companies at the top and at the bottom of the nomination list for the period 2008–2019. Nine companies had more than seven nominations during the period (indicating sustained profitable growth); three companies had only one nomination (indicating a short growth period).

The chief executive officers (CEOs) of the selected companies were invited by letter and subsequent phone calls to participate in the study. They were provided with information about the study’s purpose, methodology, and treatment of the collected data. The companies were in sales (n = 5), manufacturing (n = 4), technical consultancy (n = 2), and transportation (n = 1); they employed between four and 46 people and had been in operation for 12–51 years.

The group of participants comprised twelve CEOs, nine of whom were owner–managers, and eight lower-level managers. Managers of different levels were included to increase the variation in the material, as the situation of lower-level managers can differ from that of top managers. The participants included 18 male and two female managers between the ages of 29 and 66. Their managerial experience ranged from 3 to 29 years. Four managers were university-educated; 16 had secondary education or similar. Table 1 presents an overview of the characteristics of the managers participating in the study.

Data collection

The qualitative interviews were performed in 2020. A semi-structured interview guide [65] was employed and included themes such as managers’ experiences of wellbeing, working conditions, and work-related factors influencing their wellbeing. Examples of questions were: “How do you perceive your own health and wellbeing?”, “Did your wellbeing change during your work as manager? – If yes, in what way, and what did it depend on?”, “How do you perceive your work–life balance?” and “How do you perceive your working situation?” The open-ended questions were followed by probing questions to obtain clarifications and examples. This procedure enabled a natural conversation in which interviewees could freely describe their perspectives. The participants were not provided with a definition of “wellbeing”.

The interviews were carried out by the first author either at the companies (n = 18) or remotely using the video conferencing service Zoom (n = 2). The interviews lasted 60–90 min. With the participants’ permission, all interviews were audio-recorded. A professional transcriber (n = 17) and the first author (n = 3) transcribed the interviews verbatim.

Data analysis

Data analysis was performed in two complementary stages. In the first stage, the data were analysed using qualitative content analysis [66, 67, 68]. Following the guidelines of Elo & Kyngäs [66] and Graneheim & Lundman [67], the content analysis proceeded through preparation (selecting the unit of analysis and familiarizing with the data), organization (open coding, grouping, categorization, and abstraction) and reporting. The interview transcripts in their entirety were regarded as the units of analysis [67]. They were read several times to achieve immersion in the data. The texts were thereafter uploaded to ATLAS.ti for Windows, version 9 (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany) for subsequent analysis.

All information in the interview transcripts that was judged relevant to the objective of the study was coded. The coding was done by selecting meaning units (ranging from a few words to several sentences) and assigning each a heading that reflected its meaning and content. These headings became the initial codes. For instance, the phrase “My health was quite poor. Poor sleep. … Notepad on the bedside table so when you woke up at night and thought of things you had to write them down…. It wears you out a lot. You get old, you know, inside you age quickly… (IP1)” was coded as “Felt unwell previously”.

These initial codes were then compared with each other for similarities and differences, sorted, and abstracted into broader categories. The coding scheme was revised and refined several times through iterative processes of sorting and abstraction, comparing meaning units, codes, categories, and subcategories. The content analysis in the first stage thus resulted in a list of categories, subcategories and codes describing managers’ perceptions of changes in their wellbeing and working conditions. These are presented in a category matrix (Table 2) and are elaborated and supported by participants’ quotes in the Results section.
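As a rough sketch, the open-coding and abstraction steps described above can be expressed in code. The codes and the category grouping below are invented for illustration (only the "Felt unwell previously" code appears in the article); counting coded units per category yields a simple category matrix:

```python
# Hypothetical sketch of the open-coding -> categorization step: meaning
# units are assigned a code (heading), codes are abstracted into broader
# categories, and counting coded units per category gives a category
# matrix. Codes and categories here are invented for illustration.
from collections import Counter

# (meaning unit, assigned code) pairs -- invented examples
coded_units = [
    ("My health was quite poor. Poor sleep.", "Felt unwell previously"),
    ("I worked 100 hours a week.", "High workload previously"),
    ("Now I work much less.", "Reduced workload currently"),
    ("I enjoy coming to work.", "Job satisfaction currently"),
]

# codes abstracted into broader categories -- invented grouping
code_to_category = {
    "Felt unwell previously": "Wellbeing previously",
    "High workload previously": "Demands previously",
    "Reduced workload currently": "Demands currently",
    "Job satisfaction currently": "Wellbeing currently",
}

category_matrix = Counter(code_to_category[code] for _, code in coded_units)
for category, n_units in sorted(category_matrix.items()):
    print(f"{category}: {n_units} meaning unit(s)")
```

In real qualitative analysis, the code-to-category mapping is revised iteratively as codes are compared, which software such as ATLAS.ti supports interactively; the sketch only shows the final structure of such a scheme.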

In the second stage, thematic analysis was employed to identify trajectories in the participants’ perceptions of their wellbeing, demands and resources (the categories identified in the first stage of analysis). This was done based on their descriptions of their working situation, currently and previously, as manager of the business. All the transcripts were reread several times, and an individual trajectory of the perceived changes was summarized for each case. These individual trajectories were then aggregated into groups showing commonalities and differences in how participants’ perceived wellbeing, demands and resources had changed from previous periods to the time of data collection. The grouping revealed the most salient trajectories; individual participants could belong to several groups. As a result, the analysis in the second stage yielded themes illustrating common patterns in participants’ individual trajectories of perceived wellbeing, demands and resources.
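The second-stage trajectory grouping can be sketched as follows. The per-participant patterns below are invented for illustration and are not the study’s actual findings:

```python
# Hypothetical sketch of the trajectory grouping described above: each
# participant's perceived change (previous -> current) in wellbeing,
# demands and resources is summarized as a pattern, and participants
# sharing a pattern are grouped. The data are invented.
from collections import defaultdict

# per-participant change summaries: direction of change for each factor
trajectories = {
    "IP1":  ("wellbeing up", "demands down", "resources up"),
    "IP4":  ("wellbeing up", "demands down", "resources up"),
    "IP8":  ("wellbeing up", "demands down", "resources up"),
    "IP20": ("wellbeing stable", "demands stable", "resources stable"),
}

groups = defaultdict(list)
for participant, pattern in trajectories.items():
    groups[pattern].append(participant)

for pattern, members in groups.items():
    print(pattern, "->", members)
```

Each resulting group of shared patterns corresponds to one candidate theme; in the actual analysis this grouping was done interpretively from the transcripts, not mechanically.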

The main analysis was done by the first author (E.A.). The sorting and abstraction of data were then discussed with the second author (D.L.). Finally, the categories and themes were reviewed by all authors. The analysis presented in the Results section stays close to the manifest content [67], reflecting the perceptions and experiences of the managers as expressed by them, with a low degree of interpretation by the authors. Further interpretation, analysis of connections between the categories, and theorizing are done in the Discussion section.

Ethical considerations

The study was approved by the Swedish Ethical Review Authority (approval No. 2019-00314). Furthermore, the study was carried out in line with the principles of the Declaration of Helsinki. All participants were informed about the study’s objective, the voluntary nature of participation, the anonymity and confidentiality principles, and their right to decline an interview at any moment without having to provide a justification. Before each interview, informed consent was obtained from each participant.

Following the discussion of the methodology, the next section offers the reader an overview of the results of the empirical study.

The results are presented in two sections corresponding to the two stages of the analysis. The first section presents findings showing that managers’ experiences of their wellbeing and working conditions have changed over time, and which factors were affected (see Table 2). The second section presents the findings of a trajectory analysis, which investigated each manager’s journey individually and illustrates the different ways in which these changes occurred.

Categories describing changes in the managers’ wellbeing and working conditions

A. Managers’ wellbeing and work–life balance currently and previously

The managers stated that they were satisfied with their job and that they thrived at work. Several participants maintained that it was fun to go to work and that work gave them energy. Most of the managers assessed themselves as feeling well. Some said that their physical health could be better, e.g. referring to being overweight or to problems due to prolonged sitting at work.

I feel great in many ways. Physically, it’s so-so considering that I’m overweight! Occupational health is great when I don’t work 100% and I’m in charge of my own free time. (Interview participant (IP) 1, CEO)

Certainly it has been up and down, but I perceive my health as good. It makes me feel good when I come here [to work]; I enjoy it a lot. And that gives me a lot of energy. (IP 16, lower manager)

Several managers expressed that they were deprioritizing their physical health in favour of spending time with family and doing managerial work. Some maintained that they had not done what they should have done for their health to be better. Several participants wished that they could exercise more.

Managers referred to feeling stressed during certain periods because of high workload and work pace; however, the stress was not constant. They described that it “goes in waves” and that work had “ups and downs.” Most managers stated that they were rarely badly stressed and when they were, it was for shorter time periods. Also, the managers felt they had a good balance between work and private life currently.

Several owner–managers expressed that the sheer fact that they had the opportunity to carry out what made them thrive compensated for the heavy burden of having to work long hours. Some noted that they felt calm when there was a lot of work and a high tempo because it meant that the company had a lot of orders and it was going well for the business.

You enjoy your job, but you may work a little more, but you get a good life situation yourself. … I like to work … I like to have many “irons in the fire”. That’s when I’m at my calmest … as long as it’s full speed and challenges like that … (IP 4, CEO).

Several managers, however, stated that they had felt unwell in earlier periods of their managerial career. They reported having felt tired, worn out and constantly stressed over longer periods of time. Some felt that they had deprioritized their health, had not had time to take care of themselves, to have lunch or to take breaks, and had overconsumed coffee and tobacco. Several participants had subsequently had problems with physical and mental health, including heart and stomach problems, burnout, and stress-induced shingles.

From the beginning … when I started … I worked very, very hard and then my stomach took a beating … and I’ve always been a bit stingy [which is why this CEO did not hire staff to do some of the work]. So I worked evenings and nights … until 3 in the morning … well, I probably did it for about 5 years …. So that’s when I realized I can’t go on like this. (IP 3, CEO)

The first year I got gastritis and I started to feel dizzy, because I worked extremely hard. I went to the doctor and I used a pack of snus a day, drank twelve cups of coffee a day. I guess I didn’t realize my situation, that I have a big family plus a job. (IP 19, CEO)

Several managers stated that their heavier workload in the past had been a problem. They described how a high workload and long working hours had made it difficult to combine running a business with having a family life. They noted that it is common for small business owners to have difficulties with relationships and family because of the large workload. Many managers felt that, previously, they had had no control over their own time and no time for family or relationships.

It’s hard to have your own business. But … if you have your own business and you feel that you’re managing your free time, then you’ve come to the right place. However, if you have to work 100 h a week because you have a bad conscience about things, then you’re not in control of your own time. You won’t exactly be a nice person. Because you’re never at home, you’re never free … It doesn’t work with a relationship … from a family’s point of view, it’s probably really hard to be together with a self-employed person. (IP 1, CEO)

The man I bought the company from … ran it for 4 years; then his wife said, You have to choose between me and the job. So then I was alone [in running the business]. I didn’t have a holiday. I had two small children, I worked every day of the week, between 7am and 9pm every day except weekends when I worked a little less but basically I worked all the time. Then my wife said, Now you have to choose, between the company and the family. (IP 8, CEO)

B. Demands currently and previously

Most of the managers described their current workload as quite high but manageable. They estimated that they worked between 40 and 60 h a week and generally did not see that as problematic. They knew the workload went up and down in waves, and intense periods were followed by calmer periods during which they could recover. Some managers, talking about small business owners in general, said that working long hours is inevitable – it is a common situation. They described that, when running one’s own business, one can never feel that the work is finished, as there is always something to solve or improve.

When you run a business, you’re never done. There are always improvements to be made. You can never sit down and feel that now things are good. We want to improve our production, routines. At the same time, the most important thing is to be able to deliver, both products and services. That’s what we live on, so to speak. If we can’t do that, we don’t make any money. Then we’ll be out of business soon. (IP 20, CEO)

Some managers also reported that company growth brought new challenges for them to handle. They expressed that there was a constant need for adjustments in the organization to match the growing size and complexity of the tasks performed by the company. In addition, some participants talked about organizational challenges related to the clarity of structure, roles, policies, routines and information as the companies expanded. Some managers mentioned conflicts and staff turnover in certain periods, as well as difficulties in maintaining the family climate and close relationships.

Talking about previous periods of their managerial career in the current company, many managers described having worked much more compared with their present work situation. They estimated having worked between 50 and 100 h a week, including evenings and holidays.

Before, I worked a lot more. Maybe 100 h a week. I have done that for many years. Probably 20 years I would think … Lots of night work. Came home at 2, 3 and then up again at … (IP 1, CEO).

At the same time, the managers said that it was fun and that they had enjoyed working during this period of heavy workload and long working hours. Several managers explained that they had been very engaged and ambitious, and had wanted to achieve much more. Some described that they just worked and worked. One mentioned that he had kept going as if he were a superman, another as if she were immortal; both meant that they had felt they could manage anything and had not realized their limits.

I think you have a great overconfidence in yourself; in the beginning you want to do everything, you want to change yourself, you want to change the company, you have made an investment. Then after a while you realize that life is more than just work, life is more than money. (IP 8, CEO)

C. Resources

C1. Change in organizational resources

In the managers’ descriptions of their current and previous working conditions, they often referred to changes in the available organizational resources due to the growth of the company. They described that previously, when the company had been small, they had had multiple roles and had done almost everything in the company: operative work, administration, and management. All activities in the business had been theirs. They had felt they needed to be present all the time to ensure that everything ran smoothly, and had done as much as they could themselves to save costs and build a stronger financial base for the company.

When the company had expanded, they had acquired financial and personnel resources. A new group of managers (at a lower level) had been hired, who had taken over some of the responsibility for staff and daily operative leadership. Extra staff had been hired to take care of finance and administration, relieving the managers of these tasks.

The acquisition of additional job resources was particularly prominent in CEOs’ perceptions of wellbeing. They felt that their workload decreased as they could delegate responsibility and tasks to lower-level managers, technical–administrative staff, and other employees. The CEOs could work more purposefully on overall leadership, and more proactively with development and seeking new clients, which, in their eyes, meant a purer leadership role, focusing on managerial tasks.

A few years ago, you were more of a salesperson and then you would get into a new suit and then you would go in and manage people. It was completely new … You do not do everything, you do [only] your thing. (IP 1, CEO) I had more to do and then it simply took longer. Now I have less to do. My work tasks are now shared by more people. (IP 4, CEO)

Some companies assigned both lower-level managers and other staff to take care of improvements in certain key areas, such as optimization of organization and processes, the work environment and safety, quality, certification, documenting routines, etc. The CEOs reported that they had not had the time to take care of these issues before. Finally, the process of growth required changes in the organization. The managers described that when the company expanded, there was an increase in specialization and division into departments or groups. The companies developed a clearer organization, roles and routines, which, according to the managers, contributed to smoother processes and more effective problem solving.

We’ve made a lot of changes over the years, from chaos to organized chaos to order. Now that we have an organization, I work much less. (IP 1, CEO)

While most companies in the study showed continuous growth, three companies did not. The managers of these companies did not mention gaining organizational resources, but instead described the vulnerability of being a small business that was related to lack of financial and personnel resources.

c2. Change in individual resources

Many managers said that they had come to the insight that their work situation was not sustainable in the long run and needed to be changed.

To have that pace forever, then you give up in the end … but if you enjoy it and want to continue working then you have to try to find a sustainable work situation that works both at work and at home. … because otherwise you end up as a human being that you won’t be able to bear. You have nothing more to give … and it’s certain that you will burn yourself and others out. (IP 16, lower manager)

Two factors had led to this insight: ill health and the family situation. Some managers realized the importance of wellbeing and a sustainable work situation after having problems with their health and work–life balance. Those who developed health problems described how this had become a strong warning signal.

I got burned out 10 years ago. … And there it stopped. So I learned then. It was absolutely the most useful lesson I could have received. (IP 13, CEO)

Several managers expressed that they now prioritized health more and strove for a better work–life balance and a more sustainable work situation. Some worked intently on changing their situation and reducing their own working time. Some also maintained that they kept the balance over a long period of time, meaning that they worked overtime some days but compensated for it by working less on other days.

Some managers also mentioned a change in their family situation and their relationship with their partner and children as factors that had made them aware of the importance of wellbeing and work–life balance and had convinced them to make changes in the working situation. One participant talked about age as playing a role in this context. He emphasized that now, closer to retirement age, he did not want to work too much. He wanted more free time.

I work less now. I worked a lot more in the past! I don’t want to work as much. I’ve handed over things like administration, preparation of orders, and so on to the deputy manager. (IP 20, CEO)

Several managers especially highlighted the importance of accumulated managerial experience. They described that they had become more secure in their role and had reached a better understanding of the situation and the yearly work cycle. Based on this, they made quicker decisions and did not spend as much time seeking information. They also described that they had learned to cope better with the work situation, e.g. through planning, prioritizing, working in a more structured way, accomplishing work bit by bit, not promising too much, and accepting that stress and a high workload are part of a manager’s job.

Trajectories of managers’ wellbeing and work conditions in the context of the growth of small businesses

The trajectory analysis showed that the changes in the managers’ wellbeing and working conditions occurred in different ways. Despite large variation in experiences, individual and firm-level characteristics and circumstances, several groups of trajectories of participants’ wellbeing and working conditions were identified.

a. Changes in wellbeing due to organizational and individual resources

This group consisted of owner–managers of growing companies who experienced changes in wellbeing as well as in organizational and individual factors. The managers in this group reported that, initially, when their companies had been smaller, they had experienced a deterioration in wellbeing because of a high workload, fast work pace and long working hours. However, enhanced organizational and individual resources had led to an improved work situation and wellbeing for this group. As companies had grown, managers had been able to hire more staff who could relieve them or take over some of their tasks. According to the managers in this group, their wellbeing had improved over time, from having been stressed to a new experience of feeling good. Challenges to wellbeing and disruption of the work–life balance had provided them with increased awareness of the importance of a sustainable working life and their own wellbeing. Several managers described that they had specifically worked on changing both their own work environment and the organization to make the company less dependent on the owner–manager’s availability all the time.

b. Unchanged wellbeing

The managers in this group had a stable wellbeing and had not experienced any significant changes in their wellbeing due to their work. Some managers noted that owing to their coping strategies (positive personality and taking things as they come without judging them as tough, and seeing all problems as challenges and tasks to be solved) they were not affected by high workload and stress.

The managers in this group mentioned their high resilience, positive personality, and active coping strategies. We also observed that some of the companies had several owner–managers, meaning managers’ tasks were shared by several persons.

c. Aware of the importance of sustainable working life from the beginning

Findings showed that managers in this group described that, from the beginning, they had had high awareness of the importance of sustainability at work and of maintaining a work–life balance. They intentionally strove to keep working hours at a moderate level, set clear boundaries between work and free time, and avoid overtime. They described their health as stable and good and did not experience any change in wellbeing. Some managers mentioned that they had experienced work-related ill health, stress, and poor balance between their job and private life in previous jobs. They felt that this experience had helped them realize the importance of health. One manager learned from the example of his entrepreneur parents, who had worked long hours. From the beginning, managers in this group had a high awareness of the importance of a sustainable work life (and a high level of individual resources), which protected them from overworking and helped them maintain good levels of wellbeing.

d. Small companies with low organizational resources

The common feature of managers in this group was that their work situation was constrained by the vulnerability characteristic of small businesses, due to insufficient personnel and financial resources. These managers needed to work overtime to fill personnel gaps and work operatively to earn their salary. They had to do administrative work, were unable to delegate tasks to others, and could not invest time in the company’s development. Managers of these smaller companies also described some organizational adjustments, although to a lesser extent. For example, they might hire a lower manager, or get help with finance, support systems, and developing improved routines. The managers also talked about trying to keep working hours at a moderate level. The managers in this group had low organizational resources but still felt well. Their working situation was constrained by the small size of their companies, but this did not translate into low wellbeing.

e. New in the manager role

This group consisted of managers who were new in their role of owner–manager or lower manager. Some had experienced heavy work demands when entering the manager role, especially during the first period. After having problems with their health, these managers had gained insight into the importance of wellbeing and had started to work intently on attaining and preserving a balance between work and life. They had also become more secure in their role after acquiring experience in a managerial position, and had learned to delegate responsibilities to others, create better routines, prioritize actions, and not dwell too long on decisions. At the time of the interviews, the managers reported a clear improvement in their wellbeing compared with the first years of being in management.

Some of the newly promoted lower-level managers felt that their wellbeing had improved in their new position. They linked this to increased resources related to achieving larger responsibility, greater possibility to influence company development, more control over work and time, additional variation in work, and stimulating work.

Discussion

The purpose of this study was to explore perceived changes in working conditions and wellbeing among managers of growing small businesses. To show how the results lead to conclusions regarding the purpose of the study, we first give a brief summary of the main findings and then discuss the observed changes in the managers’ wellbeing, their demands and resources, as well as changes in the small business context itself in the process of growth. This is done by interpreting the findings and setting them in relation to the theoretical framework of the study.

The results indicate that managers’ working conditions in small companies evolve during periods of company growth. This leads to variations over time in managers’ experiences of wellbeing and work–life balance as well as to changes in job demands and resources. Managers’ working situation becomes less demanding and more manageable with a reduction in workload and working hours and a better work–life balance. The findings suggest that this perceived improvement may be due to changes in organizational factors, such as increased company resources, but also to managers’ personal insight based on their experiences, and to increasing awareness of the importance of a sustainable work situation. However, the analysis also showed that there were different trajectories in the way the perceived working conditions and wellbeing changed over time and how organizational and individual resources mattered for the managers’ wellbeing.

As mentioned previously, the basic assumption of the JD–R model is that specific job demands and resources are rooted in specific occupational settings, i.e. they vary depending on the work setting and the context of the organization [ 27 ]. The present study, building on the JD–R model’s assumptions, further shows that the specific context of small companies is itself subject to change when a company expands and evolves. In other words, the results of this study illustrate that change occurs in a company over time because of growth, reflecting an aspect of dynamism in the small business context. Changes in managers’ wellbeing, job demands, and resources in the context of small business growth are explicated below.

Concerning wellbeing, previous research reported good health and job satisfaction with regard to both managers [ 45 , 48 ] and entrepreneurs [ 9 , 14 , 69 , 70 , 71 ], although a few studies showed the opposite (e.g. [ 49 , 51 , 53 , 54 ]). This study provides a more nuanced understanding of managers’ wellbeing in the context of small businesses. In line with previous research, the findings of this study show that managers felt well and experienced job satisfaction and good work–life balance despite the high demands they faced. Although they felt well at the time of the interview, many owner–managers had also experienced impaired wellbeing in previous periods when their company had been smaller and weaker, as shown in the description of the first trajectory group. Thus, the findings suggest that owner–managers in small businesses risk impaired wellbeing due to high workload, long working hours, and work–life conflict when the company is particularly small and when managers lead the growing company mostly by themselves. Also, new managers at low and higher levels, as demonstrated by trajectory group 5, seem to be at risk of diminished wellbeing due to increased job demands, especially during the first years of their managerial career. Increased demands due to transition to a managerial position have also been shown in previous studies [ 47 , 72 ].

Moreover, our results indicate that companies’ increased resources due to growth had implications for managers’ working conditions and wellbeing. First, the managers’ workload decreased because of increased possibilities to delegate a part of their tasks to lower-level managers and because of the increased number of personnel. Second, larger resources, better organization and routines reduced the degree of uncertainty and increased the preparedness and capacity to tackle arising problems, and thus increased the sense of manageability and reduced the intensity of the demands. The study shows that the decreased demands and increased organizational resources led to improved wellbeing for managers, as illustrated in the first trajectory group. Therefore, growth may have a positive effect on managers’ working conditions, primarily for higher managers in small growing companies. However, results from the study also indicate that growth can itself be a stressor, requiring constant adjustments and changes in the organization. If not well handled, growth can result in problems and tensions. Company growth, therefore, creates a changed situation that requires new strategies, new ways of working, and adjustments in an organization.

In relation to the organizational context, the present study distinctly points to the changing nature of the organization undergoing growth. More specifically, the study suggests that, during the process of growth, there is an increase in the degree to which an owner delegates their responsibilities as well as in the complexity of organizational structure (such as management levels) and operational systems (such as financial and production management systems). There also is a decrease in an owner’s involvement in business activities and daily decisions. Increased labour specialization, formalization, standardization (e.g. work with routines), planning and control as well as reduced proximity in relations with employees were some of the transformations that companies went through. Transformations may mean changes in the content of managers’ work, demands (e.g. decreased demands related to managers’ daily work, lower involvement in operational activities and lower working hours) and resources (e.g. in the form of a larger staff, personnel with special competence, higher use of operational systems, formalization and routines, and greater financial security due to larger resources). The described transformations could be traced to all the companies in the study that were growing, and thus represent a background characteristic of all the trajectory groups except for the fourth group (companies that did not continue to grow). The changes are generally in line with the transformations described in Churchill and Lewis’ [ 62 ] model, but also with Torrès and Julien’s [ 64 ] discussion on the denaturing of small business, as presented above.

According to the findings in the present study, even the features of small companies that give specificity to the management modes in this context are subject to change when a company expands. Applying Torrès and Julien’s [ 64 ] view, the current findings may indicate that companies in the process of growth “denature” and lose their small business specificity. Thus, businesses transition from simpler, more intuitive, and informal approaches to management, which are characterized by close relationships, to more complex, structured, and formalized modes that focus on long-term planning and less personal interaction [ 64 ]. This may even apply to managers’ work. As mentioned previously, managers in the smallest of small companies have a special position combining the managerial roles of several different levels: being the owner, the entrepreneur, the operative worker, the administrator, etc. (referring to the fourth trajectory group and the initial situation for the first, second and third groups). When a company expands, the owner–manager’s work and role transform and become more like those of managers in larger companies (as described for the first trajectory group). The findings thus point to the special working conditions of owner–managers of small companies (characterized by a combination of different roles, resource constraints and the changing nature of their work in the process of business growth), while middle managers’ working conditions and wellbeing in these companies are more in line with what previous research has shown about managers in general.

In the current study, the smallest companies were vulnerable because of poor financial and personnel resources, while the larger small companies did not experience this vulnerability. The growing companies were able to enhance their personnel, financial, and organizational resources thanks to growth, which allowed them to overcome the vulnerability related to small business size. This means that these companies built up a stronger reserve pool, which led to higher resilience, allowing them to endure acute and chronic stressors, prevent resource loss, and ensure future resource gain [ 73 , 74 , 75 ]. The companies in the study that continued to grow seemed to have a resource surplus, developing in positive spirals as they continued to grow steadily. Having a resource surplus or strong resource reservoirs can clearly serve as a protective factor and a resource for managers’ wellbeing.

Interestingly, the results showed that managers in the smallest companies (companies that had had a short period of growth and did not continue to grow, as shown in the fourth trajectory group) experienced good wellbeing despite high demands. Two possible explanations might help understand these findings. First, as described above, it seems that the available personal resources had a protective effect. Second, it is possible that these companies may have attained the size and mode of operation that allowed a manageable working situation for managers. These companies experienced small business vulnerability due to low resources but remained stable. They were able to engage in reactive coping with daily stressors (e.g. sickness among staff, or machine breakdowns) and handle the situation and keep the balance, even though they were currently not able to invest in growth. Their lack of resources did not seem to lead to negative spirals; however, vulnerability remained. In other words, in case the external environment changes, e.g. in an economic recession, they may be at risk of escalating resource depletion.

An interesting finding of the study is that managers in general seemed satisfied with their job despite high workload both previously and currently. In relation to owner–managers, an entrepreneurial dimension in their job should be noted. Entrepreneurs’ work is self-chosen and the workload is quite often self-inflicted as well. Having a lot of work and solving problems can be the source of motivation, wellbeing, and work satisfaction for an entrepreneur. Entrepreneurs choose to have a lot of work and see this as a sign that everything is going well for the business. At the same time, demands related to high workload and pace may lead to lower wellbeing in the long term [ 14 ]. There seemed to be dual experiences of workload in the owner–managers’ work.

Finally, the study indicates that individual resources may affect managers’ working conditions. Firstly, these relate to managers’ awareness of the importance of health for their own and their companies’ sustainable working life. Secondly, the findings showed the significance of acquiring managerial experience as well as learning one’s own profession, the work content, and the specific situation in the company.

It should be noted that we observed an increase in organizational resources in all growing companies, and the participants from these companies confirmed that this increase had improved their working conditions and reduced their workload. However, it seems that this had the most pronounced effect on the improvement of wellbeing of owner–managers in the first trajectory group. It appears that the first group differed from the other groups of managers in growing companies (the second and third trajectory groups) in that they initially lacked individual resources in terms of awareness of the importance of wellbeing and a sustainable working life. Those managers who initially enjoyed large individual resources did not overwork and therefore did not experience deterioration in wellbeing. This may suggest that individual resources can have a protective effect [ 16 ]. The findings further show that several managers developed a greater understanding of the importance of a sustainable working life after having had problems with wellbeing and work–life balance (e.g. in the first and third trajectory groups). Thanks to this increased understanding, managers changed their behaviours (e.g. by keeping working hours to a moderate level or taking some time off after a period of hard work), which contributed to a reduction in their workload, which in turn had a positive impact on their wellbeing. Therefore, the study’s results suggest that there is feedback from managers’ wellbeing to their personal resources. Before concluding the paper, the study’s limitations and the practical implications of the findings are outlined in the sections below.

Theoretical and practical implications

This study responds to calls to deepen the understanding of the occupational health of small business managers [ 76 , 77 ] and to pay more attention to variability and temporal aspects in the work and wellbeing of small business managers [ 8 , 14 ].

The main contribution of this study is that it brings attention to the dynamic, fluid, and contextually conditioned nature of managers’ work in small growing companies and its implications for their wellbeing, as well as the interconnectedness of managers’ work, work organization, and wellbeing. This adds to previous research, which has largely offered a static view of managers’ wellbeing. Additionally, the study employed an interdisciplinary approach, integrating theoretical perspectives and empirical research from occupational health, management studies, business growth, and entrepreneurship. This, together with the use of a qualitative approach, contributes to a deeper and more nuanced understanding of managers’ working conditions and wellbeing in the particularly under-researched context of small growing firms, adding to previous research characterized by predominantly quantitative approaches and largely confined to a single research domain.

In terms of practical implications, the study’s results can support leaders in maintaining their own health while running small businesses and pursuing growth and economic effectiveness. Small business managers, particularly at the beginning of their careers, would benefit from developing an awareness of the role of wellbeing for their work and their organization. The study also delineates sources of occupational stress that may be detrimental to their wellbeing, and available resources that may help to support and strengthen it. Thus, the study draws attention to the importance of promoting a healthy work environment for both owner–managers and lower managers in small businesses. Managers should also be aware that a high workload and long working hours constitute a risk to their wellbeing, with potentially negative consequences for their companies. Information about factors important for the wellbeing of small business managers can be used in training programs for this group. Managers should also be encouraged to participate in professional peer networks, where they can discuss their working situation, receive support, share experiences, and learn how to set clearly defined boundaries between work and non-work to ensure that they do not overwork.

The study may also inspire relevant stakeholders, such as politicians, trade unions, employers’ associations, and other decision makers, to develop appropriate and feasible structures (e.g. education kits for entrepreneurs, mentorship, or shared resource pools for administrative work and human resources management across several businesses) to reduce the vulnerability that start-ups and small businesses live with and to increase managers’ and companies’ resources, thereby improving managers’ working conditions and wellbeing.

Limitations and future research

Altogether, it seems that the different pathways described in the trajectories led to higher resilience and a more sustainable working situation for managers, thanks to reduced demands and increased resources. However, we are aware that the study sample, consisting of growing, successful companies that had survived for several years in a row, may have implications for our conclusions, since companies that had not survived, and in which managers’ wellbeing may have led to entrepreneurial exit (i.e. when an owner–manager leaves or closes the firm), were not included. Previous research has indicated that most small businesses do not survive their first years of operation [ 78 ] and that owner–managers’ wellbeing is associated with their exit intentions [ 79 ]. Future research should explore, qualitatively and quantitatively, cases where managers’ wellbeing status led to entrepreneurial exit.

Another question that should be addressed in future studies is whether growth initially demands extra investment of resources from managers to ensure continuing growth. When managers lack the necessary resources (which is the case in many small businesses) they often need to work extra to save costs, or earn more to create the necessary surplus to ensure growth.

Presumably, if the sample had had an even gender distribution, the results might have looked different, for instance in relation to work–life balance, as women often do not have the same possibility to work extremely long hours as the men in our study did.

A possible limitation of the study is that the categories “currently” and “previously” may differ between individual managers. “Currently” could imply today but could also cover the last few years. Similarly, “previously” could mean last year, but it could also mean 10 years ago. These discrepancies are due to the fact that the companies were in different stages of growth and the managers had varied lengths of managerial experience. This has implications for the granularity of the trajectory analysis.

Furthermore, although the study results indicated changes in perceived wellbeing over time, these findings need to be interpreted with caution because of the small sample size. Additionally, the study relied on a qualitative design. Therefore, future research is warranted using other methodologies (e.g. quantitative). The trajectory analysis did not aim to identify general patterns in managers’ evolving wellbeing, demands, and resources in relation to small companies’ growth; it was merely an attempt to illustrate that participants perceived that those changes occurred in different ways because of an interplay between organizational and individual resources.

Finally, this study relied on managers’ subjective experiences and perceptions of their working conditions and wellbeing, which they felt reflected their current situation. Nevertheless, it is important to be aware of other perspectives which could see managers’ narratives as socially constructed. The findings of the study, for instance, show the importance of managers’ individual resources. This could be discussed in relation to the view of managers as either doers or heroes in the research streams that oppose exaggerating managers’ role in a company’s success and failures [ 80 ]. It could be argued that what managers share regarding their experiences and perceptions can be seen as an expression of their socially constructed identity of strong and action-oriented entrepreneurs whose actions are decisive for business success; and that they perhaps overemphasize their own individual contribution. Therefore, research capturing these experiences using other methods such as observations or discourse analysis is warranted.

Also, it can be assumed that those who felt satisfied with their job, wanted to share their success story, and had more time were more inclined to take part in the study. Those who could barely keep their heads above water may have been more likely to decline participation – both because of stress and because they could not live up to the narrative.

This study shows a dynamic picture of small business managers’ working conditions and wellbeing, resulting from growth-related changes in the company and in the managers’ work. Managers’ experiences of their own wellbeing, the demands posed on them, and the available resources changed over time in the process of the companies’ growth. When the companies were small, there was a risk of impaired wellbeing among owner–managers because of high workload, long working hours, and work–life imbalance. In addition, the study shows a positive impact of the increased organizational resources brought by the company’s growth, leading to reduced workload, improved wellbeing, and better work–life balance for managers. Furthermore, the perceived improvements were due not only to changes in organizational factors but also to managers’ personal insights and an increased awareness of the importance of a sustainable work situation. Finally, the results showed that the perceived changes in managers’ working conditions and wellbeing followed different trajectories over time because of the interaction between organizational and personal factors.

Data availability

The data presented in the study are available on reasonable request from the corresponding author. The data are not publicly available owing to restrictions in the ethical approval of this study.

Barbosa C, Azevedo R, Rodrigues MA. Occupational safety and health performance indicators in SMEs: a literature review. WORK- J Prev Assess Rehabil. 2019;64(2):217–27.

Owalla B, Gherhes C, Vorley T, Brooks C. Mapping SME productivity research: a systematic review of empirical evidence and future research agenda. Small Bus Econ. 2022;58(3):1285–307.

Bureau S, Salvador E, Fendt J. Small firms and the growth stage: can Entrepreneurship Education Programmes be supportive? Ind High Educ. 2012;26(2):79–100.

Buttner EH. Entrepreneurial stress: is it hazardous to your health? J Manag Issues. 1992;4(2):223–40.

Pasanen M. Sme growth strategies: organic or non-organic? J Enterprising Cult. 2007;15(04):317–38.

Kaluza AJ, Boer D, Buengeler C, van Dick R. Leadership behaviour and leader self-reported well-being: a review, integration and meta-analytic examination. Work Stress. 2020;34(1):34–56.

Skakon J, Nielsen K, Borg V, Guzman J. Are leaders’ well-being, behaviours and style associated with the affective well-being of their employees? A systematic review of three decades of research. Work Stress. 2010;24(2):107–39.

Torrès O, Thurik R. Small business owners and health. Small Bus Econ Entrep J. 2019;53(2):311–21.

Hessels J, Rietveld CA, Thurik AR, Van der Zwan P. Depression and Entrepreneurial exit. Acad Manag Perspect. 2018;32(3):323–39.

Lundqvist D. Psychosocial Work Conditions, Health, and Leadership of Managers. 2013 [cited 2022 Jan 21]; http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96787

Kuoppala J, Lamminpää A, Liira J, Vainio H, Leadership. Job Well-Being, and Health Effects—A systematic review and a Meta-analysis. J Occup Environ Med. 2008;50(8):904–15.

Article   PubMed   Google Scholar  

Montano D, Reeske A, Franke F, Hüffmeier J. Leadership, followers’ mental health and job performance in organizations: a comprehensive meta-analysis from an occupational health perspective. J Organ Behav. 2017;38(3):327–50.

Lundqvist D, Wallo A, Reineholm C. Leadership and well-being of employees in the nordic countries: a literature review. Work. 2023;74(4):1331–52.

Article   PubMed   PubMed Central   Google Scholar  

Stephan U. Entrepreneurs’ Mental Health and Well-Being: a review and research agenda. Acad Manag Perspect. 2018;32(3):290–322.

Bakker A, Demerouti E. The job demands-resources model: state of the art. J Manag Psychol. 2007;22(3):309–28.

Bakker A, Demerouti E. Job demands–resources theory: taking stock and looking forward. J Occup Health Psychol. 2017;22:273–85.

Demerouti E, Bakker A, Nachreiner F, Schaufeli WB. The job demands-resources model of burnout. J Appl Psychol. 2001;86:499–512.

Article   CAS   PubMed   Google Scholar  

Bakker A, Demerouti E, Verbeke W. Using the job demands-resources model to predict burnout and performance. Hum Resour Manage. 2004;43(1):83–104.

Karasek R, Theorell T, Healthy Work. Stress, Productivity, and the Reconstruction of Working Life. New York: Basic Books; 1990.

Crawford ER, LePine JA, Rich BL. Linking job demands and resources to employee engagement and burnout: a theoretical extension and meta-analytic test. J Appl Psychol. 2010;95:834–48.

Demerouti E, Sanz Vergel A. Burnout and work Engagement: the JD-R Approach. Annu Rev Organ Psychol Organ Behav. 2014;1.

Häusser JA, Mojzisch A, Niesel M, Schulz-Hardt S. Ten years on: a review of recent research on the Job demand–control (-Support) model and psychological well-being. Work Stress. 2010;24(1):1–35.

Johnson JV, Hall EM. Job strain, work place social support, and cardiovascular disease: a cross-sectional study of a random sample of the Swedish working population. Am J Public Health. 1988;78(10):1336–42.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Karasek RA. Job demands, job decision latitude, and Mental strain: implications for job redesign. Adm Sci Q. 1979;24(2):285–308.

Siegrist J. Adverse health effects of high-effort/low-reward conditions. J Occup Health Psychol. 1996;1:27–41.

Siegrist J. Effort-reward imbalance at work and health. In: L. Perrewe P, C. Ganster D, editors. Historical and Current Perspectives on Stress and Health [Internet]. Emerald Group Publishing Limited; 2002 [cited 2022 Dec 3]. pp. 261–91. (Research in Occupational Stress and Well Being; vol. 2). https://doi.org/10.1016/S1479-3555(02)02007-3

Schaufeli WB, Taris TW. A Critical Review of the Job Demands-Resources Model: Implications for Improving Work and Health. In: Bauer GF, Hämmig O, editors. Bridging Occupational, Organizational and Public Health: A Transdisciplinary Approach [Internet]. Dordrecht: Springer Netherlands; 2014 [cited 2022 Dec 3]. pp. 43–68. https://doi.org/10.1007/978-94-007-5640-3_4

Kattenbach R, Fietze S. Entrepreneurial orientation and the job demands-resources model. Pers Rev. 2018;47(3):745–64.

Bakker A, Demerouti E, Schaufeli W. Dual processes at work in a call centre: an application of the job demands – resources model. Eur J Work Organ Psychol. 2003;12(4):393–417.

Hakanen J, Bakker A, Schaufeli W. Burnout and work engagement among teachers. J Sch Psychol. 2006;43(6):495–513.

Llorens S, Bakker A, Schaufeli W, Salanova M. Testing the robustness of the job demands-resources model. Int J Stress Manag. 2006;13:378–91.

Diener E. Subjective well-being. Psychol Bull. 1984;95(3):542–75.

Warr P, Nielsen K. Wellbeing and Work Performance. In: Diener E, Oishi S, Tay L, editors. Handbook of well-being [Internet]. DEF Publishers; 2018. https://nobascholar.com

Grant AM, Christianson MK, Price RH. Happiness, Health, or relationships? Managerial practices and Employee Well-being tradeoffs. Acad Manag Perspect. 2007;21(3):51–63.

De Simone S. Conceptualizing wellbeing in the Workplace. Int J Bus Soc Sci. 2014;5(12).

Danna K, Griffin RW. Health and well-being in the workplace: a review and synthesis of the literature. J Manag. 1999;25(3):357–84.

Sonnentag S. Dynamics of Well-Being. Annu Rev Organ Psychol Organ Behav. 2015;2(1):261–93.

Warr P. How to think about and measure Psychological Well-Being. Research Methods in Occupational Health Psychology. Routledge; 2012.

Ryan RM, Deci EL. On happiness and human potentials: a review of Research on Hedonic and Eudaimonic Well-Being. Annu Rev Psychol. 2001;52(1):141–66.

Ahmadi E, Macassa G, Larsson J. Managers’ work and behaviour patterns in profitable growth SMEs. Small Bus Econ. 2021;57(2):849–63.

Bernin P, Theorell T. Demand–control–support among female and male managers in eight Swedish companies. Stress Health. 2001;17(4):231–43.

Carlson S. Executive Behaviour [Internet]. Reprinted with contribution by Henry Mintzberg and Rosemary Stewart; 1991 [cited 2022 Dec 3]. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-81170

Lundqvist D, Eriksson AF, Ekberg K. Exploring the relationship between managers’ leadership and their health. Work. 2012;42(3):419–27.

Mintzberg H. nature of managerial work [Internet]. Harper & Row; 1973 [cited 2022 Dec 3]. https://scholar.google.com/scholar_lookup?title=nature+of+managerial+work&author=Mintzberg%2C+Henry.&publication_year=1973

Nyberg A, Leineweber C, Magnusson Hanson L. Gender differences in psychosocial work factors, work–personal life interface, and well-being among Swedish managers and non-managers. Int Arch Occup Environ Health. 2015;88(8):1149–64.

Tengblad S. Is there a ‘New managerial work’? A comparison with Henry Mintzberg’s Classic Study 30 years Later*. J Manag Stud. 2006;43(7):1437–61.

Li WD, Schaubroeck JM, Xie JL, Keller AC. Is being a leader a mixed blessing? A dual-pathway model linking leadership role occupancy to well-being. J Organ Behav. 2018;39(8):971–89.

Marmot MG, Shipley MJ. Do socioeconomic differences in mortality persist after retirement? 25 year follow up of civil servants from the first Whitehall study. BMJ. 1996;313(7066):1177–80.

Boyce CJ, Oswald AJ. Do people become healthier after being promoted? Health Econ. 2012;21(5):580–96.

Ikesu R, Miyawaki A, Svensson AK, Svensson T, Kobayashi Y, Chung UI. Association of managerial position with cardiovascular risk factors: a fixed-effects analysis for Japanese employees. Scand J Work Environ Health. 2021;47(6):425–34.

Johnston DW, Lee WS. Extra Status and Extra stress: are promotions Good for us? ILR Rev. 2013;66(1):32–54.

Nyberg A, Peristera P, Westerlund H, Johansson G, Hanson LLM. Does job promotion affect men’s and women’s health differently? Dynamic panel models with fixed effects. Int J Epidemiol. 2017;46(4):1137–46.

PubMed   Google Scholar  

Jamal M. Job stress, satisfaction, and mental health: an empirical examination of self-employed and non-self-employed Canadians. J Small Bus Manag [Internet]. 1997 Oct 1 [cited 2023 Jun 20];v35(n4). https://search.ebscohost.com/login.aspx?direct=true&AuthType=shib&db=edsbig&AN=edsbig.A20240695=sv&site=eds-live&custid=s3912055

Boyd DP, Gumpert DE. Coping with entrepreneurial stress. Harv Bus Rev. 1983;61(2):44–64.

Björklund C, Lohela-Karlsson M, Jensen I, Bergström G. Hierarchies of Health: Health and work-related stress of managers in municipalities and County councils in Sweden. J Occup Environ Med. 2013;55(7):752–60.

Lundqvist D, Reineholm C, Gustavsson M, Ekberg K. Investigating work conditions and burnout at three hierarchical levels. J Occup Environ Med. 2013;55(10):1157–63.

Achtenhagen L, Naldi L, Melin L. Business Growth—Do practitioners and scholars really talk about the same thing? Entrep Theory Pract. 2010;34(2):289–316.

Dobbs M, Hamilton RT. Small business growth: recent evidence and new directions. Int J Entrep Behav Res. 2007;13(5):296–322.

Leitch C, Hill F, Neergaard H. Entrepreneurial and business growth and the Quest for a comprehensive theory: tilting at Windmills? Entrep Theory Pract. 2010;34(2):249–60.

Lester DL, Parnell JA, Carraher S, ORGANIZATIONAL LIFE CYCLE:. A FIVE-STAGE EMPIRICAL SCALE. Int J Organ Anal. 2003;11(4):339–54.

Phelps R, Adams R, Bessant J. Life cycles of growing organizations: a review with implications for knowledge and learning. Int J Manag Rev. 2007;9(1):1–30.

Churchill NC, Lewis VL. The five stages of small business growth. Harv Bus Rev. 1983;61(3):30–50.

Shim S, Eastlick MA, Lotz S. Examination of US Hispanic-owned, small retail and service businesses: an organizational life cycle approach. J Retail Consum Serv. 2000;7(1):19–32.

Torrès O, Julien PA. Specificity and denaturing of Small Business. Int Small Bus J. 2005;23(4):355–77.

Kvale S, Brinkmann S, InterViews. Learning the craft of qualitative research interviewing. SAGE; 2009. p. 377.

Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62(1):107–15.

Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Mäkiniemi JP, Ahola S, Nuutinen S, Laitinen J, Oksanen T. Factors associated with job burnout, job satisfaction and work engagement among entrepreneurs. A systematic qualitative review. J Small Bus Entrep. 2021;33(2):219–47.

Toivanen S, Griep RH, Mellner C, Vinberg S, Eloranta S. Mortality differences between self-employed and paid employees: a 5-year follow-up study of the working population in Sweden. Occup Environ Med. 2016;73(9):627–36.

Rietveld CA, Bailey H, Hessels J, van der Zwan P. Health and entrepreneurship in four Caribbean Basin countries. Econ Hum Biol. 2016;21:84–9.

Lundqvist D. Psychosocial work environment and health when entering or leaving a managerial position. Work. 2022;73(2):505–15.

Hobfoll SE. The influence of Culture, Community, and the Nested-Self in the stress process: advancing conservation of resources Theory. Appl Psychol. 2001;50(3):337–421.

Hobfoll SE. Conservation of resources theory: its implication for stress, health, and resilience. The Oxford handbook of stress, health, and coping. New York, NY, US: Oxford University Press; 2011. pp. 127–47. (Oxford library of psychology).

Hobfoll SE, Stevens NR, Zalta AK. Expanding the Science of Resilience: conserving resources in the aid of adaptation. Psychol Inq. 2015;26(2):174–80.

Cocker F, Martin A, Scott J, Venn A, Sanderson K. Psychological distress, related work attendance, and Productivity loss in small-to-medium enterprise Owner/Managers. Int J Environ Res Public Health. 2013;10(10):5062–82.

Visentin DC, Cleary M, Minutillo S. Small Business Ownership and Mental Health. Issues Ment Health Nurs. 2020;41(5):460–3.

Gupta R. Entrepreneurship and firm growth: review of literature on firm-level entrepreneurship and small-firm growth. South Asian Surv. 2015;22(1):1–14.

Sardeshmukh SR, Goldsby M, Smith RM. Are work stressors and emotional exhaustion driving exit intentions among business owners? J Small Bus Manag. 2021;59(4):544–74.

Alvesson M, Spicer A. Critical perspectives on leadership. In: The Oxford Handbook of Leadership and Organizations. Oxford Library of Psychology; p. 40–56.

Download references

This research received no external funding.

Open access funding provided by University of Gävle.

Author information

Authors and Affiliations

Department of Occupational Health, Psychology and Sports Sciences, Faculty of Health and Occupational Studies, University of Gävle, Gävle, 80176, Sweden

Elena Ahmadi & Gunnar Bergström

Department of Business and Economic Studies, Faculty of Education and Business Studies, University of Gävle, Gävle, Sweden

Elena Ahmadi

Department of Behavioural Sciences and Learning, Division of Education and Sociology, Linköping University, Linköping, Sweden

Daniel Lundqvist

Unit of Intervention and Implementation Research for Worker Health, Institute of Environmental Medicine, Karolinska Institute, Box 210, Stockholm, 171 77, Sweden

Gunnar Bergström

Department of Social Work, Criminology and Public Health Sciences, Faculty of Health and Occupational Studies, University of Gävle, Gävle, Sweden

Gloria Macassa

EPIUnit–Instituto de Saude Publica, Universidade do Porto, Porto, Portugal


Contributions

All authors (E.A., D.L., G.B., G.M.) planned and designed the study. E.A. collected and analyzed the data and discussed the analysis with D.L. E.A. wrote the main manuscript. All authors (E.A., D.L., G.B., G.M.) reviewed the manuscript.

Corresponding author

Correspondence to Elena Ahmadi.

Ethics declarations

Ethics approval and consent to participate.

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Swedish Ethical Review Authority (protocol code 2019-00314). Informed consent was obtained from all subjects involved in the study.

Consent for publication

All authors have agreed on the order of authorship and to the submission and publication of the manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Ahmadi, E., Lundqvist, D., Bergström, G. et al. Managers in the context of small business growth: a qualitative study of working conditions and wellbeing. BMC Public Health 24, 2075 (2024). https://doi.org/10.1186/s12889-024-19578-4


Received : 24 February 2024

Accepted : 24 July 2024

Published : 31 July 2024

DOI : https://doi.org/10.1186/s12889-024-19578-4


  • Small businesses
  • Business growth
  • Psychosocial working conditions
  • Job demands
  • Job resources

BMC Public Health

ISSN: 1471-2458



Compressed SENSitivity Encoding (SENSE): Qualitative and Quantitative Analysis


1. Introduction

2.1. Population

2.2. Protocol Optimisation

2.3. MRI Protocol

2.4. Qualitative Image Analysis

  • Score 5: Excellent; acceptable for diagnostic use, complete absence of artefacts;
  • Score 4: Good; acceptable for diagnostic use (only minor artefacts);
  • Score 3: Fair; acceptable for diagnostic use but with minor issues;
  • Score 2: Sufficient; acceptable for diagnostic use but severely mixed with the background;
  • Score 1: Insufficient; not acceptable for diagnostic use.

2.5. Quantitative Image Analysis

2.6. Statistical Analysis

3.1. Qualitative Image Analysis

3.2. Quantitative Image Analysis

3.3. Subgroup Qualitative and Quantitative Image Analysis

4. Discussion

5. Conclusions

Supplementary Materials

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

Abbreviations

FLAIR: Fluid Attenuated Inversion Recovery
C-SENSE: Compressed SENsing–Sensitivity Encoding
C: contrast
CNR: contrast-to-noise ratio
SNR: signal-to-noise ratio
MRI: magnetic resonance imaging
CNS: central nervous system
WM: white matter
GM: grey matter
CSF: cerebrospinal fluid
TSE: Turbo Spin Echo
SENSE: sensitivity encoding
CS: Compressed Sensing
ROI: Regions Of Interest
SWI: Susceptibility Weighted Imaging-phase
SAR: Specific Absorption Rate
TFE: Turbo Field Echo
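The quantitative metrics abbreviated above (C, SNR, CNR) are conventionally computed from ROI signal statistics. The following is a minimal sketch using the standard textbook definitions — contrast as a normalized signal difference, SNR as mean signal over the noise standard deviation, and CNR as the signal difference over the noise standard deviation; the paper’s exact formulas may differ:

```python
def contrast(s_a: float, s_b: float) -> float:
    """Normalized contrast between two tissue signal means,
    e.g. GM vs WM: (Sa - Sb) / (Sa + Sb)."""
    return (s_a - s_b) / (s_a + s_b)

def snr(signal_mean: float, noise_sd: float) -> float:
    """Signal-to-noise ratio for a single ROI."""
    return signal_mean / noise_sd

def cnr(s_a: float, s_b: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio between two ROIs."""
    return (s_a - s_b) / noise_sd
```

With these definitions, two tissues with mean signals 3 and 1 have a contrast of 0.5, and their CNR depends only on the same signal difference scaled by the noise level.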


| Parameter | T1-TSE (C-SENSE) | T2-TSE (C-SENSE) | 3D T2-FLAIR (C-SENSE) | T1-TSE (no C-SENSE) | T2-TSE (no C-SENSE) | 3D T2-FLAIR (no C-SENSE) |
|---|---|---|---|---|---|---|
| Acquisition matrix | 308 × 257 | 420 × 322 | 252 × 251 | 308 × 226 | 420 × 350 | 228 × 228 |
| Field of view (cm) | 23 | 23 | 25 | 23 | 23 | 25 |
| Repetition time (ms) | 2000 | 6200 | 6000 | 2000 | 3000 | 4800 |
| Echo time (ms) | 20 | 90 | 340 | 20 | 80 | 280 |
| Slice thickness (mm) | 3 | 1.5 | 1 | 4 | 4 | 1.1 |
| Intersection gap (mm) | 1 | 1 | −0.5 | 1 | 1 | −0.55 |
| Number of averages | 1 | 2 | 1 | 1 | 1 | 2 |
| Bandwidth (kHz) | 165.7 | 217.2 | 318.7 | 169.8 | 195.8 | 1166.5 |
| C-SENSE factor | 3 | 2 | 9 | – | – | – |
| Acquisition time | 2′34″ | 3′08″ | 3′50″ | 3′00″ | 2′42″ | 4′34″ |
| Sequence | Reader 1 | Reader 2 |
|---|---|---|
| T1-TSE Compressed-SENSE | 4.93 [4–5] | 4.78 [3–5] |
| T1-TSE No Compressed-SENSE | 4.95 [4–5] | 4.84 [4–5] |
| T2-TSE Compressed-SENSE | 4.93 [4–5] | 4.77 [4–5] |
| T2-TSE No Compressed-SENSE | 4.82 [4–5] | 4.70 [4–5] |
| 3D T2 FLAIR Compressed-SENSE | 4.78 [4–5] | 3.97 [3–5] |
| 3D T2 FLAIR No Compressed-SENSE | 4.89 [4–5] | 4.78 [4–5] |
| Sequence | Tissue pair | Median (C-SENSE) | 25th | 75th | Median (no C-SENSE) | 25th | 75th | p-Value |
|---|---|---|---|---|---|---|---|---|
| FLAIR | GM-WM | 0.09 | 0 | 0.17 | 0.08 | 0.01 | 0.15 | 0.130 |
| FLAIR | GM-CSF | 0.64 | 0.56 | 0.69 | 0.77 | 0.71 | 0.82 | <0.001 * |
| FLAIR | WM-CSF | 0.58 | 0.51 | 0.64 | 0.74 | 0.66 | 0.79 | <0.001 * |
| T1 | GM-WM | −0.17 | −0.24 | −0.13 | −0.19 | −0.25 | −0.13 | 0.009 * |
| T1 | GM-CSF | 0.68 | 0.64 | 0.71 | 0.65 | 0.59 | 0.7 | <0.001 * |
| T1 | WM-CSF | 0.76 | 0.74 | 0.79 | 0.75 | 0.71 | 0.77 | <0.001 * |
| T2 | GM-WM | 0.11 | 0.05 | 0.17 | 0.1 | 0.05 | 0.16 | 0.849 |
| T2 | GM-CSF | −0.52 | −0.55 | −0.47 | −0.39 | −0.43 | −0.33 | <0.001 * |
| T2 | WM-CSF | −0.59 | −0.62 | −0.56 | −0.48 | −0.51 | −0.44 | <0.001 * |
| Sequence | Tissue pair | Median (C-SENSE) | 25th | 75th | Median (no C-SENSE) | 25th | 75th | p-Value |
|---|---|---|---|---|---|---|---|---|
| FLAIR | GM-WM | 2.32 | 0.09 | 4.73 | 1.95 | 0.33 | 3.99 | 0.150 |
| FLAIR | GM-CSF | 11.38 | 8.81 | 14.51 | 12.82 | 10.10 | 15.46 | 0.002 * |
| FLAIR | WM-CSF | 9.05 | 7.00 | 11.68 | 10.66 | 8.30 | 12.93 | <0.001 * |
| T1 | GM-WM | −9.03 | −11.99 | −6.38 | −8.79 | −11.94 | −6.22 | 0.633 |
| T1 | GM-CSF | 17.05 | 13.85 | 20.63 | 15.03 | 12.03 | 18.65 | <0.001 * |
| T1 | WM-CSF | 25.71 | 22.37 | 31.31 | 24.69 | 19.30 | 29.24 | 0.007 * |
| T2 | GM-WM | 3.85 | 1.59 | 6.29 | 4.72 | 2.07 | 8.07 | <0.001 * |
| T2 | GM-CSF | −43.52 | −52.04 | −35.44 | −33.35 | −40.00 | −27.02 | <0.001 * |
| T2 | WM-CSF | −47.30 | −57.74 | −39.84 | −38.82 | −45.99 | −31.38 | <0.001 * |
Compressed-SENSE vs No Compressed-SENSE, median [25th–75th percentile]:
FLAIR FC: 18.45 [15.48–21.70] vs 18.08 [14.91–20.24], p = 0.207
FLAIR Ge: 11.64 [9.63–13.46] vs 12.40 [9.85–13.74], p = 0.235
FLAIR CSF: 3.26 [2.90–3.81] vs 1.90 [1.51–2.39], p < 0.001 *
FLAIR Sp: 10.73 [9.08–13.65] vs 11.39 [9.26–13.04], p = 0.797
FLAIR CS: 14.93 [13.10–18.29] vs 15.17 [12.44–17.47], p = 0.540
FLAIR OC: 14.02 [11.95–16.73] vs 14.62 [12.13–16.43], p = 0.803
FLAIR Th: 12.32 [10.95–15.94] vs 12.99 [11.21–15.38], p = 0.841
T1 FC: 18.83 [14.40–21.91] vs 16.51 [13.63–19.49], p = 0.025 *
T1 Ge: 29.91 [23.44–36.06] vs 28.48 [23.01–33.30], p = 0.269
T1 CSF: 4.00 [3.44–4.87] vs 4.21 [3.62–5.00], p = 0.331
T1 Sp: 29.91 [25.83–35.78] vs 29.16 [23.77–34.13], p = 0.232
T1 CS: 30.68 [26.07–36.10] vs 28.39 [23.63–32.54], p = 0.084
T1 OC: 21.19 [17.73–25.15] vs 19.48 [16.84–22.67], p = 0.028 *
T1 Th: 23.69 [20.56–28.08] vs 22.27 [18.82–26.44], p = 0.201
T2 FC: 23.45 [20.07–27.88] vs 30.32 [27.07–36.43], p < 0.001 *
T2 Ge: 15.76 [13.46–17.78] vs 20.57 [17.36–23.45], p < 0.001 *
T2 CSF: 64.21 [53.73–77.68] vs 61.58 [51.03–71.94], p = 0.073
T2 Sp: 15.97 [13.12–18.84] vs 20.29 [16.16–23.71], p < 0.001 *
T2 CS: 18.53 [16.22–21.69] vs 23.59 [20.86–28.34], p < 0.001 *
T2 OC: 18.39 [15.42–21.01] vs 24.05 [19.91–27.95], p < 0.001 *
T2 Th: 20.96 [17.25–23.73] vs 25.53 [22.33–29.46], p < 0.001 *
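Per the study's abstract, the quantitative comparison is based on contrast (C), contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR). The paper's exact ROI-based definitions are not reproduced here; as an illustration only, the standard textbook formulas can be sketched with hypothetical helpers:

```python
def contrast(s_a, s_b):
    """Michelson-style contrast between two tissue signal intensities."""
    return (s_a - s_b) / (s_a + s_b)

def cnr(s_a, s_b, noise_sd):
    """Contrast-to-noise ratio: signal difference over noise standard deviation."""
    return (s_a - s_b) / noise_sd

def snr(s, noise_sd):
    """Signal-to-noise ratio for a single region of interest."""
    return s / noise_sd
```

Note the sign conventions: a tissue pair where the second region is brighter yields negative C and CNR, which is why, for example, the T2 GM-CSF rows above are negative.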

Share and Cite

Picchi, E.; Minosse, S.; Pucci, N.; Di Pietro, F.; Serio, M.L.; Ferrazzoli, V.; Da Ros, V.; Giocondo, R.; Garaci, F.; Di Giuliano, F. Compressed SENSitivity Encoding (SENSE): Qualitative and Quantitative Analysis. Diagnostics 2024, 14, 1693. https://doi.org/10.3390/diagnostics14151693



Supplementary material: ZIP-Document (ZIP, 121 KiB)


IMAGES

  1. Qualitative vs Quantitative Research: Differences and Examples

  2. Qualitative vs Quantitative Research: Differences and Examples

  3. Choosing Between Quantitative vs Qualitative Research

  4. Qualitative Versus Quantitative Research

  5. Qualitative vs. Quantitative Research

  6. conclusion in research format

VIDEO

  1. Quantitative Research: Its Characteristics, Strengths, and Weaknesses

  2. Quantitative, Qualitative, and Mixed Methods Research: What's the difference?

  3. Quantitative & Qualitative Research #quantitativeresearch #research

  4. difference between quantitative and qualitative research

  5. Difference between Qualitative and Quantitative Research

  6. Qualitative and Quantitative Data

COMMENTS

  1. A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

    INTRODUCTION. Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  2. Qualitative vs. Quantitative Research

    When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge. Quantitative research. Quantitative research is expressed in numbers and graphs. It is used to test or confirm theories and assumptions.

  3. Qualitative vs Quantitative Research: What's the Difference?

    Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings. Quantitative data is information about quantities, and therefore numbers, and qualitative data is descriptive, and regards phenomenon which can be observed but not measured, such as language.

  4. Chapter 21. Conclusion: The Value of Qualitative Research

    That said, qualitative research can help demonstrate the causal mechanisms by which something happens. Qualitative research is also helpful in exploring alternative explanations and counterfactuals. If you want to know more about qualitative research and causality, I encourage you to read chapter 10 of Rubin's text.

  5. Writing a Research Paper Conclusion

    Step 1: Restate the problem. The first task of your conclusion is to remind the reader of your research problem. You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture. While you are restating a problem you've already introduced, you should avoid phrasing ...

  6. What Is Qualitative Research? An Overview and Guidelines

    Research methodology in doctoral research: Understanding the meaning of conducting qualitative research [Conference session]. Association of Researchers in Construction Management (ARCOM) Doctoral Workshop (pp. 48-57). Association of Researchers in Construction Management.

  7. Qualitative vs. Quantitative Research

    However, qualitative research can be time-consuming, and data analysis may be subjective. In contrast, quantitative research provides objective and quantifiable data, making it easier to draw conclusions and establish causation. It enables researchers to collect data from large samples, increasing the generalizability of findings.

  8. Qualitative vs Quantitative Research

    Qualitative vs Quantitative Research. Quantitative research deals with quantity; hence, this research type is concerned with numbers and statistics to prove or disprove theories or hypotheses. In contrast, qualitative research is all about quality: characteristics, unquantifiable features, and meanings that seek a deeper understanding of behavior and phenomena.

  9. Qualitative vs Quantitative Research: Differences and Examples

    LEARN ABOUT: Research Process Steps. Whereas qualitative research uses conversational methods to gather relevant information on a given subject. 4. Post-research response analysis and conclusions. Quantitative research uses a variety of statistical analysis methods to derive quantifiable research conclusions.

  10. Quantitative vs. Qualitative Research

    Qualitative research is based upon data that is gathered by observation. Qualitative research articles will attempt to answer questions that cannot be measured by numbers but rather by perceived meaning. Qualitative research will likely include interviews, case studies, ethnography, or focus groups. Indicators of qualitative research include:

  11. Quantitative and Qualitative Research: An Overview of Approaches

    Abstract. In Chap. 1, the nature and scope of research were outlined and included an overview of quantitative and qualitative research and a brief description of research designs. In this chapter, both quantitative and qualitative research will be described in a little more detail with respect to essential features and characteristics.

  12. PDF CHAPTER 4 Quantitative and Qualitative Research

    Quantitative research is an inquiry into an identified problem, based on testing a theory, measured with numbers, and analyzed using statistical techniques. The goal of quantitative methods is to determine whether the predictive generalizations of a theory hold true. By contrast, a study based upon a qualitative process of inquiry has the goal ...

  13. Qualitative vs. Quantitative Research: Comparing the Methods and

    In this example, qualitative and quantitative methodologies can lead to similar conclusions, but the research will differ in intent, design, and form. Taking a look at behavioral observation, another common method used for both qualitative and quantitative research, qualitative data may consider a variety of factors, such as facial expressions ...

  14. What Is Quantitative Research?

    Revised on June 22, 2023. Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations. Quantitative research is the opposite of qualitative research, which involves collecting and analyzing ...

  15. Difference Between Qualitative and Quantitative Research

    While qualitative research relies on verbal narrative like spoken or written data, quantitative research uses logical or statistical observations to draw conclusions. In qualitative research, only a few non-representative cases are used as a sample to develop an initial understanding.

  16. Q: How is the conclusion drawn in qualitative research?

    Having said that, the conclusion of a qualitative study can at times be quite detailed. This would depend on the complexity of the study. A questionnaire about likes and dislikes is simpler to score, interpret, and infer than a focus group, interview, or case study. In the case of a simpler study, you may reiterate the key findings of the study ...

  17. Quantitative and Qualitative Research

    Qualitative research is a process of naturalistic inquiry that seeks an in-depth understanding of social phenomena within their natural setting. It focuses on the "why" rather than the "what" of social phenomena and relies on the direct experiences of human beings as meaning-making agents in their everyday lives.

  18. Strengths and Limitations of Qualitative and Quantitative Research Methods

    Scientific research adopts qualitative and quantitative methodologies in the modeling and analysis of numerous phenomena. The qualitative methodology intends to understand a complex reality and ...

  19. Quantitative and qualitative research

    As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. Albert Einstein 1. Some clinicians still believe that qualitative research is a "soft" science and of lesser value to clinical decision making, but this position is no longer tenable. 2-4 A quick search using the key word qualitative on the Canadian Family ...

  20. (PDF) CHAPTER 5 SUMMARY, CONCLUSIONS, IMPLICATIONS AND ...

    The conclusions are as stated below: i. Students' use of language in the oral sessions depicted their beliefs and values based on their intentions. The oral sessions prompted the students to be ...

  21. What's the difference between quantitative and qualitative methods?

    Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is ...

  22. Qualitative vs. Quantitative Data: 7 Key Differences

    Whether you're dealing with qualitative or quantitative data, transparency, accuracy, and validity are crucial. Focus on sourcing (or conducting) quantitative research that's easy to replicate and qualitative research that's been peer-reviewed. With rock-solid data like this, you can make critical business decisions with confidence.

  23. Quantitative and Qualitative Research Methods: Similarities and

    Qualitative research uses unstructured or semi-structured data collection techniques such as focus group discussions, whereas quantitative research uses structured techniques such as questionnaires. Moreover, qualitative research uses non-statistical data analysis techniques, whereas quantitative uses statistical methods to analyze data.

  24. A qualitative study of the barriers and facilitators impacting the

    Data were collected prior to the implementation of SurgeCon, by means of qualitative and quantitative methods consisting of semi-structured interviews with 31 clinicians (e.g., physicians, nurses, and managers), telephone surveys with 341 patients, and structured observations from four EDs. ... Conclusion. Improving our understanding of the ...

  25. Quantitative Data: Definition, Examples, Types, Methods, & Analysis

    Quantitative data is objective, handles large datasets, and enables easy comparisons, providing clear insights and generalized conclusions in various fields. However, quantitative data analysis lacks contextual understanding, requires analytical expertise, and is influenced by data collection quality that may affect result validity.

  26. What's the difference between quantitative and qualitative ...

    Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is ...

  27. Agricultural drought risk assessments: a comprehensive review of

    5. Conclusion. This systematic review has provided a quantitative and qualitative analysis of empirical research papers in quantifying drought risk using indicators of hazard, exposure, and vulnerability. Several efforts have been made to review DRA research.

  28. Managers in the context of small business growth: a qualitative study

    Small businesses play a significant role in global economies [1, 2] and growing businesses are especially important in creating jobs and contributing to economic growth [3,4,5].Previous research has shown the importance of managers' wellbeing for leadership behaviours [], employee health [] and business survival and effectiveness [8, 9].Managers' working conditions influence their ...

  29. Compressed SENSitivity Encoding (SENSE): Qualitative and Quantitative

    Background. This study aimed to qualitatively and quantitatively evaluate T1-TSE, T2-TSE and 3D FLAIR sequences obtained with and without Compressed-SENSE technique by assessing the contrast (C), the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR). Methods. A total of 142 MRI images were acquired: 69 with Compressed-SENSE and 73 without Compressed-SENSE. All the MRI images ...

  30. Solved Know what is a variable2. Know what the difference

    Know what the difference is between quantitative and qualitative research 3. Know the difference between exploratory and descriptive research 4. Know the types of exploratory and descriptive (and their pros and cons) Here's the best way to solve it. 1. **Variable**: - A variable is any characteri...