cases that meet some predetermined criterion of importance
Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. However, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, much as quantitative measures are used to describe the variability or dispersion of values for a particular variable or variables. Other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities; these resemble quantitative measures of central tendency (e.g., mean, median, and mode). Moreover, certain strategies, such as stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002, p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”
Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in the population from which a purposeful sample is to be drawn is often not known at the outset of a study. Setting as the goal the sampling of information-rich informants who cover that range of variation assumes one already knows the range. Consequently, an iterative approach of sampling and re-sampling is usually recommended to ensure that theoretical saturation occurs (Miles & Huberman, 1994). That saturation point may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach (Glaser & Strauss, 1967). Second, a not insignificant number of qualitative methodologists resist or refuse systematic sampling of any kind, rejecting the limiting nature of such realist, systematic, or positivist approaches; this includes critics of interventions and advocates of “bottom up” case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, neither such sampling nor the need for it is inherently limited to post-positivist qualitative approaches (Patton, 2002).
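The iterative sample-and-resample logic described above can be sketched as a simple loop. This is a hypothetical illustration only: the `collect_codes` function, the theme sets, and the stopping rule are invented placeholders, not procedures prescribed by Miles and Huberman or Glaser and Strauss.

```python
import random

def collect_codes(participant):
    # Hypothetical stand-in for qualitative coding: returns the set of
    # themes identified in one participant's interview transcript.
    return participant["themes"]

def sample_until_saturation(pool, batch_size=5, stall_rounds=2, seed=0):
    """Draw successive batches of informants until no new themes emerge
    for `stall_rounds` consecutive batches, i.e., theoretical saturation."""
    rng = random.Random(seed)
    remaining = list(pool)
    sample, themes, stalled = [], set(), 0
    while remaining and stalled < stall_rounds:
        k = min(batch_size, len(remaining))
        batch = [remaining.pop(rng.randrange(len(remaining))) for _ in range(k)]
        new = set().union(*(collect_codes(p) for p in batch)) - themes
        sample.extend(batch)
        themes |= new
        stalled = 0 if new else stalled + 1
    return sample, themes

# Toy pool: most informants yield the same theme, a few add another.
pool = ([{"themes": {"cost"}}] * 20) + ([{"themes": {"fit"}}] * 2)
sample, themes = sample_until_saturation(pool, batch_size=4)
```

The point of the sketch is that the sample size is not fixed in advance; it is an outcome of the interplay between sampling and analysis.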
Characteristics of implementation research.
In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially. They may be combined to answer the same question through convergence of results from different sources; to answer related questions in a complementary fashion; to use one set of methods to expand or explain the results obtained from the other; to use one set of methods to develop questionnaires or conceptual models that inform the use of the other; or to use one set of methods to identify the sample for analysis with the other (Palinkas et al., 2011). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data. However, this review considered neither the sampling strategies involved in the quantitative and qualitative methods common to implementation research, nor the consequences of the sampling strategy selected for one method or set of methods for the choice of sampling strategy for the other. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions of a subsequent quantitative analysis. Conversely, when the initial method is quantitative, the sample selected may be too large for each individual to be included in the qualitative inquiry and lack the purposeful selection needed to reduce it to a size more appropriate for qualitative research.
The fact that potential participants were recruited and selected at random does not necessarily make them information rich.
A re-examination of the 22 studies, plus an additional 6 studies published since 2009, revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis, although a rationale could be inferred from a description of who was recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion was related to the individual’s role, either in the research project (i.e., trainer, team leader) or in the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to sampling cases external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, data collection included semi-structured interviews with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts.
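In code terms, criterion-i sampling amounts to a filter over the roster of the larger quantitative study, with criterion-e as its complement. The roles, ids, and roster below are illustrative inventions, not data from any of the cited studies.

```python
# Hypothetical roster from the quantitative arm of an implementation study.
roster = [
    {"id": 1, "role": "program director"},
    {"id": 2, "role": "clinician"},
    {"id": 3, "role": "clinical supervisor"},
    {"id": 4, "role": "consumer"},
    {"id": 5, "role": "clinician"},
]

# Criterion-i: include every case that meets the role criterion.
CRITERION_ROLES = {"program director", "clinical supervisor", "clinician"}
criterion_i = [p for p in roster if p["role"] in CRITERION_ROLES]

# Criterion-e: the complementary cases, external to the criterion.
criterion_e = [p for p in roster if p["role"] not in CRITERION_ROLES]
```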
Aarons and Palinkas (2007), for example, purposefully selected 15 child welfare case managers representing those with the most positive and those with the most negative views of SafeCare, an evidence-based prevention intervention, based on the results of a web-based quantitative survey about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy, asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.
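The kind of selection Aarons and Palinkas describe can be sketched as ranking respondents by a quantitative attitude score and keeping both tails. The scores and ids here are invented for illustration and do not reproduce their survey.

```python
def extreme_tails(survey, k):
    """Intensity/extreme-case selection: take the k lowest- and k
    highest-scoring respondents from a quantitative attitude measure."""
    ranked = sorted(survey, key=lambda r: r["score"])
    return ranked[:k], ranked[-k:]

# Hypothetical survey scores on perceived value of an intervention.
survey = [{"id": i, "score": s}
          for i, s in enumerate([4.5, 1.2, 3.3, 4.9, 2.0, 1.1, 3.8, 4.7])]
most_negative, most_positive = extreme_tails(survey, 2)
```

Respondents in the middle of the distribution are deliberately excluded; the contrast between the two tails is the information the qualitative follow-up is after.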
Table 2 below describes the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous structure in which the qualitative method was secondary to the quantitative method, or in which the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set (n = 10), or to seek convergence through triangulation of results or quantifying of qualitative data (n = 8). In the large majority of these studies (n = 18), the process of mixing methods involved embedding the qualitative study within the larger quantitative study. In one study (Gioia & Dziadosz, 2008), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged in a complementary fashion; in two studies (Aarons et al., 2012; Zazzali et al., 2008), quantitative and qualitative data were connected, one in a sequential design for the purpose of developing a conceptual model (Zazzali et al., 2008) and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure, with quantitative methods taking priority over qualitative methods, and a process of embedding the qualitative methods in a larger quantitative study (Henke et al., 2008; Palinkas et al., 2010; Slade et al., 2008).
Two of the six studies used maximum variation sampling in a sequential design (Aarons et al., 2009; Zazzali et al., 2008) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity (Bachman et al., 2009; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both embedded a qualitative study in a larger quantitative study (Aarons & Palinkas, 2007; Kramer & Burns, 2008). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity (Hoagwood et al., 2007). The snowball/maximum variation study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development (Green & Aarons, 2011). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is criterion sampling of the larger quantitative sample to produce a second-stage qualitative sample, in a manner similar to maximum variation sampling except that the former narrows the range of variation while the latter expands it.
Table 2. Purposeful sampling strategies and mixed method designs in implementation research
| Sampling strategy | Structure | Design | Function |
| --- | --- | --- | --- |
| **Single stage sampling (n = 22)** | | | |
| Criterion (n = 18) | Simultaneous (n = 17), Sequential (n = 6) | Merged (n = 9), Connected (n = 9), Embedded (n = 14) | Convergence (n = 6), Complementarity (n = 12), Expansion (n = 10), Development (n = 3), Sampling (n = 4) |
| Maximum variation (n = 4) | Simultaneous (n = 3), Sequential (n = 1) | Merged (n = 1), Connected (n = 1), Embedded (n = 2) | Convergence (n = 1), Complementarity (n = 2), Expansion (n = 1), Development (n = 2) |
| Intensity (n = 1) | Simultaneous, Sequential | Merged, Connected, Embedded | Convergence, Complementarity, Expansion, Development |
| Typical case (n = 1) | Simultaneous | Embedded | Complementarity |
| **Multistage sampling (n = 4)** | | | |
| Criterion/maximum variation (n = 2) | Simultaneous, Sequential | Embedded, Connected | Complementarity, Development |
| Criterion/intensity (n = 1) | Simultaneous | Embedded | Convergence, Complementarity, Expansion |
| Criterion/snowball (n = 1) | Sequential | Connected | Convergence, Development |
Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations, or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge of and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (e.g., recruiting 15 providers for semi-structured interviews out of an available sample of 25 providers at an agency). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.
However, criterion sampling may not be the most appropriate strategy for implementation research, because by attempting to capture both breadth and depth of understanding it may accomplish neither. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, achieving theoretical saturation with as much detail as possible involves selecting individuals or cases that ensure all aspects of the phenomenon are included in the examination and that each aspect is thoroughly examined. This goal therefore requires an approach that sequentially or simultaneously expands and narrows the field of view. By selecting only individuals who meet a specific criterion defined by their role in the implementation process, or who have had a specific experience (e.g., only an implementation defined as successful, or only one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may miss the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders, limiting the breadth of understanding of the implementation process. On the other hand, selecting participants simply on the basis of whether they are a practitioner, consumer, director, or staff member may fail to identify those with the greatest experience, the most knowledge, or the greatest ability to communicate what they know and have experienced, limiting the depth of understanding of the implementation process.
To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited to an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited to an examination of commonalities, even though both types of strategies attend to both differences and similarities. Alternatives to criterion sampling may also be better matched to the specific functions of mixed methods. For instance, using qualitative methods for the purpose of complementarity may require a sampling strategy that emphasizes similarity if it is to achieve depth of understanding or to explore and develop hypotheses that complement a quantitative probability sampling strategy aimed at breadth of understanding and hypothesis testing (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity to complement probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that “complements” a more broadly focused probability sample for quantitative analysis may help to balance inference quality/trustworthiness (internal validity) against generalizability/transferability (external validity), whereas a single method focused only on a broad view may sacrifice internal validity for the sake of external validity (Kemper et al., 2003).
On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth that parallels the quantitative probability sampling strategy.
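The contrast between breadth- and depth-oriented strategies can be made concrete with two toy selectors over a single case attribute. The greedy farthest-point rule below is one plausible operationalization of maximum variation sampling, not a method prescribed by the sources, and the caseload-size attribute is invented.

```python
def max_variation(cases, distance, k):
    """Greedy farthest-point selection: each new pick maximizes its
    minimum distance to the cases already chosen (breadth/variation)."""
    selected = [cases[0]]
    while len(selected) < k:
        candidates = [c for c in cases if c not in selected]
        selected.append(max(candidates,
                            key=lambda c: min(distance(c, s) for s in selected)))
    return selected

def homogeneous(cases, distance, anchor, k):
    """Pick the k cases most similar to an anchor case (depth/similarity)."""
    return sorted(cases, key=lambda c: distance(c, anchor))[:k]

# Toy attribute: agency caseload size.
sizes = [2, 3, 50, 51, 100]
dist = lambda a, b: abs(a - b)
varied = max_variation(sizes, dist, 3)      # spans the range
similar = homogeneous(sizes, dist, 50, 2)   # clusters near the anchor
```

The same roster yields very different qualitative samples depending on which goal, variation or similarity, the strategy encodes.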
Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies
Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated in Figure 1 below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a “funnel approach,” this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997): it begins with a broad view of the topic and then narrows the conversation to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the range of variation may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2002) (Multistage II). Similarly, multistage purposeful sampling designs such as opportunistic or emergent sampling allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has begun (Patton, 2002, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating from sampling for variation to sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of “practical clinical trials” (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2), or give priority to the testing of treatment effectiveness (Hybrid Type 1) or of the implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of an intervention’s effectiveness will require or involve quantitative measures, while evaluation of the implementation process will require or involve mixed methods. When conducting a Hybrid Type 1 design (a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data could be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy, such as sampling for disconfirming cases, to account for that variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites in order to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for a lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis, which is based on a probability sampling strategy.
In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison, to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial); in effect, these process measures could be seen as modifiers of innovation/EBP outcome. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth, to understand the core features of successful outcomes only.
Finally, multistage sampling strategies may be more consistent with innovations in experimental design that offer alternatives to the classic randomized controlled trial in community-based settings, with greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” (Glasgow et al., 2005, p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” (Brown et al., 2009, p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings,” or “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue). Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as the individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation, as in an extreme case, intensity sampling, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then samples for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.
Another alternative to the classic RCT is the family of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research; they use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, in a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in nature, in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy in the early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used to identify the optimal dose of treatment, limit sample size, randomize participants into different enrollment procedures, determine who should be eligible for random assignment (as in the optimal design) so as to maximize treatment adherence and minimize dropout, or identify incentives and motives that may encourage participation in the trial itself.
Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and “local knowledge” of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.
On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study ( Wisdom et al., 2011 ), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.
Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.
Third, the field of implementation research is itself at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus a multistage strategy for purposeful sampling should begin with a broad view emphasizing variation or dispersion and then move to a narrow view emphasizing similarity or central tendencies. Such a strategy is necessary for finding the optimal balance between internal and external validity.
Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single-stage or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: either for the purpose of answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).
Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to generating new knowledge through comparison and contrast. Selecting a strategy that emphasizes one does not mean it cannot be used for the other. That said, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity on the other. While there may be some merit to that assumption, depth of understanding requires attention to both variation and common elements.
Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.
Multistage Purposeful Sampling Strategies
This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).
Authors and Affiliations
Assistant Professor of Marketing, Southampton Malaysia Business School, University of Southampton Malaysia, Johor Bahru, Malaysia
Moniruzzaman Sarker
Faculty of Business and Accountancy, University of Malaya, Kuala Lumpur, Malaysia
Mohammed Abdulmalek AL-Muaalemi
Correspondence to Moniruzzaman Sarker .
Editors and Affiliations
Centre for Family and Child Studies, Research Institute of Humanities and Social Sciences, University of Sharjah, Sharjah, United Arab Emirates
M. Rezaul Islam
Department of Development Studies, University of Dhaka, Dhaka, Bangladesh
Niaz Ahmed Khan
Department of Social Work, School of Humanities, University of Johannesburg, Johannesburg, South Africa
Rajendra Baikady
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Sarker, M., & AL-Muaalemi, M. A. (2022). Sampling Techniques for Quantitative Research. In: Islam, M. R., Khan, N. A., & Baikady, R. (Eds.), Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_15
Published : 27 October 2022
Publisher Name : Springer, Singapore
Print ISBN : 978-981-19-5219-7
Online ISBN : 978-981-19-5441-2
The selection of a sample in quantitative and qualitative research is guided by two opposing philosophies. In quantitative research you attempt to select a sample in such a way that it is unbiased and represents the population from which it is selected. In qualitative research, a number of considerations may influence the selection of a sample, such as: the ease of access to potential respondents; your judgement that the person has extensive knowledge about an episode, an event or a situation of interest to you; how typical the case is of a category of individuals; or simply that it is totally different from the others. You make every effort to select either a case that is similar to the rest of the group or one that is totally different. Such considerations are not acceptable in quantitative research.
The purpose of sampling in quantitative research is to draw inferences about the group from which you have selected the sample, whereas in qualitative research it is designed either to gain in-depth knowledge about a situation/event/episode or to know as much as possible about different aspects of an individual on the assumption that the individual is typical of the group and hence will provide insight into the group.
Similarly, the determination of sample size in quantitative and qualitative research is based upon the two different philosophies. In quantitative research you are guided by a predetermined sample size that is based upon a number of other considerations in addition to the resources available. However, in qualitative research you do not have a predetermined sample size but during the data collection phase you wait to reach a point of data saturation. When you are not getting new information or it is negligible, it is assumed you have reached a data saturation point and you stop collecting additional information.
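The saturation rule described above can be sketched as a simple stopping condition. This is an illustrative toy of our own, not a formal procedure; the themes and interviews are invented. Collection stops once a run of consecutive interviews yields no new themes:

```python
def saturation_point(theme_batches, patience=2):
    """Return the 1-based index of the batch at which collection could stop:
    the point where `patience` consecutive batches have added no new themes."""
    seen = set()
    quiet = 0
    for i, batch in enumerate(theme_batches, start=1):
        new = set(batch) - seen   # themes not heard in any earlier batch
        seen |= set(batch)
        quiet = 0 if new else quiet + 1
        if quiet >= patience:
            return i
    return None   # saturation not reached; keep collecting

# Invented themes surfacing across five successive interviews
interviews = [{"cost", "access"}, {"cost", "trust"}, {"trust"}, {"access"}, {"cost"}]
```

Here the fourth interview is the second in a row to add nothing new, so under this rule collection could stop there. Real saturation judgements are of course richer than set membership, but the logic of "stop when new information becomes negligible" is the same.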
Considerable importance is placed on the sample size in quantitative research, depending upon the type of study and the possible use of the findings. Studies which are designed to formulate policies, to test associations or relationships, or to establish impact assessments place a considerable emphasis on large sample size. This is based upon the principle that a larger sample size will ensure the inclusion of people with diverse backgrounds, thus making the sample representative of the study population. The sample size in qualitative research does not play any significant role as the purpose is to study only one or a few cases in order to identify the spread of diversity and not its magnitude. In such situations the data saturation stage during data collection determines the sample size.
In quantitative research, randomisation is used to avoid bias in the selection of a sample, which is selected in such a way that it represents the study population. In qualitative research no such attempt is made. You purposely select 'information-rich' respondents who will provide the information you need. In quantitative research, this would be considered a biased sample.
Most of the sampling strategies described in this chapter, including some non-probability ones, can be used when undertaking a quantitative study, provided the requirements of the design are met. However, when conducting a qualitative study, only the non-probability sampling designs can be used.
Source: Kumar, Ranjit (2012), Research Methodology: A Step-by-Step Guide for Beginners, 3rd ed., SAGE Publications Ltd.
Quantitative Data – Types, Methods and Examples
Definition:
Quantitative data refers to numerical data that can be measured or counted. This type of data is often used in scientific research and is typically collected through methods such as surveys, experiments, and statistical analysis.
There are two main types of quantitative data: discrete (countable values that take whole numbers, such as the number of survey respondents) and continuous (measurable values that can fall anywhere in a range, such as height or temperature).
There are several common methods for collecting quantitative data, including surveys with closed-ended questions, controlled experiments, structured observations, and the use of existing records or datasets.
There are several methods for analyzing quantitative data, including descriptive statistics (e.g., mean, median, and standard deviation) and inferential statistics (e.g., hypothesis tests, correlation, and regression).
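As a minimal illustration of the descriptive side of such analysis, here is a sketch using Python's standard statistics module (the scores themselves are invented for the example):

```python
import statistics

scores = [4, 8, 15, 16, 23, 42]   # invented survey scores

summary = {
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "stdev": round(statistics.stdev(scores), 2),  # sample standard deviation
}
```

The mean and median summarize the center of the data, while the standard deviation summarizes its spread, which is the same centrality-versus-dispersion distinction drawn earlier in this document.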
Quantitative data can be represented in different formats, depending on the nature of the data and the purpose of the analysis. Common formats include tables, bar and line charts, histograms, and frequency distributions.
A basic guide for gathering quantitative data: define the research question, choose a sampling method, select a measurement instrument, collect the data, and then clean and analyze it.
Examples of quantitative data include test scores, heights and weights, temperatures, sales figures, and counts of survey responses.
Quantitative data has a wide range of applications across various fields, including science, business, healthcare, education, and the social sciences.
The purpose of quantitative data is to provide a numerical representation of a phenomenon or observation. Quantitative data is used to measure and describe the characteristics of a population or sample, and to test hypotheses and draw conclusions based on statistical analysis. Some of the key purposes of quantitative data include description, comparison, prediction, and hypothesis testing.
Quantitative data is appropriate to use when you want to collect and analyze numerical data that can be measured and analyzed using statistical methods. Typical situations include measuring outcomes, comparing groups, testing hypotheses, and tracking change over time.
Quantitative data is characterized by several key features, including measurability, objectivity, precision, and suitability for statistical analysis.
Some advantages of quantitative data are objectivity, replicability, precision, and the ability to generalize findings from a sample to a population.
Some limitations of quantitative data are as follows: it struggles to capture subjective experiences and qualitative aspects of human behavior, such as emotions, attitudes, and motivations; and its collection and interpretation can be influenced by biases, such as sampling bias and measurement bias.
Qualitative data is information you can describe with words rather than numbers.
Quantitative data is information represented in a measurable way using numbers.
One type of data isn’t better than the other.
To conduct thorough research, you need both. But knowing the difference between them is important if you want to harness the full power of both qualitative and quantitative data.
In this post, we’ll explore seven key differences between these two types of data.
The single biggest difference between quantitative and qualitative data is that one deals with numbers, and the other deals with concepts and ideas.
The words “qualitative” and “quantitative” are really similar, which can make it hard to keep track of which one is which. I like to think of them this way:
Qualitative data—the descriptive one—usually involves written or spoken words, images, or even objects. It’s collected in all sorts of ways: video recordings, interviews, open-ended survey responses, and field notes, for example.
I like how researcher James W. Crick defines qualitative research in a 2021 issue of the Journal of Strategic Marketing : “Qualitative research is designed to generate in-depth and subjective findings to build theory.”
In other words, qualitative research helps you learn more about a topic—usually from a primary, or firsthand, source—so you can form ideas about what it means. This type of data is often rich in detail, and its interpretation can vary depending on who’s analyzing it.
Here’s what I mean: if you ask five different people to observe how 60 kittens behave when presented with a hamster wheel, you’ll get five different versions of the same event.
Quantitative data, on the other hand, is all about numbers and statistics. There’s no wiggle room when it comes to interpretation. In our kitten scenario, quantitative data might show us that of the 60 kittens presented with a hamster wheel, 40 pawed at it, 5 jumped inside and started spinning, and 15 ignored it completely.
There’s no ifs, ands, or buts about the numbers. They just are.
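Using the tallies from that kitten example, a few lines of Python make the "no wiggle room" point concrete (the counts come from the scenario above; the variable names are our own):

```python
# Tallies from the kitten study above: 60 kittens in total
counts = {"pawed": 40, "jumped_in": 5, "ignored": 15}
total = sum(counts.values())

# Share of kittens that touched or actively used the wheel
engaged_pct = round(100 * (counts["pawed"] + counts["jumped_in"]) / total)
```

Anyone who runs this gets the same 75% figure; that determinism is exactly what separates the quantitative side of the study from the five differing observer accounts.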
You should use both quantitative and qualitative data to make decisions for your business.
Quantitative data helps you get to the what . Qualitative data unearths the why .
Quantitative data collects surface information, like numbers. Qualitative data dives deep beneath these same numbers and fleshes out the nuances there.
Research projects can often benefit from both types of data, which is why you’ll see the term “mixed-method” research in peer-reviewed journals. The term “mixed-method” refers to using both quantitative and qualitative methods in a study.
So, maybe you’re diving into original research. Or maybe you’re looking at other peoples’ studies to make an important business decision. In either case, you can use both quantitative and qualitative data to guide you.
Imagine you want to start a company that makes hamster wheels for cats. You run that kitten experiment, only to learn that most kittens aren’t all that interested in the hamster wheel. That’s what your quantitative data seems to say. Of the 60 kittens who participated in the study, only 5 hopped into the wheel.
But 40 of the kittens pawed at the wheel. According to your quantitative data, these 40 kittens touched the wheel but did not get inside.
This is where your qualitative data comes into play. Why did these 40 kittens touch the wheel but stop exploring it? You turn to the researchers’ observations. Since there were five different researchers, you have five sets of detailed notes to study.
From these observations, you learn that many of the kittens seemed frightened when the wheel moved after they pawed it. They grew suspicious of the structure, meowing and circling it, agitated.
One researcher noted that the kittens seemed desperate to enjoy the wheel, but they didn’t seem to feel it was safe.
So your idea isn’t a flop, exactly.
It just needs tweaking.
According to your quantitative data, 75% of the kittens studied either touched or actively participated in the hamster wheel. Your qualitative data suggests more kittens would have jumped into the wheel if it hadn’t moved so easily when they pawed at it.
You decide to make your kitten wheel sturdier and try the whole test again with a new set of kittens. Hopefully, this time a higher percentage of your feline participants will hop in and enjoy the fun.
This is a very simplistic and fictional example of how a mixed-method approach can help you make important choices for your business.
When you can swing it, you should look at both qualitative and quantitative data before you make any big decisions.
But this is where we come to another big difference between quantitative vs. qualitative data: it’s a lot easier to source qualitative data than quantitative data.
Why? Because it’s easy to run a survey, host a focus group, or conduct a round of interviews. All you have to do is hop on SurveyMonkey or Zoom and you’re on your way to gathering original qualitative data.
And yes, you can get some quantitative data here. If you run a survey and 45 customers respond, you can collect demographic data and yes/no answers for that pool of 45 respondents.
But this is a relatively small sample size. (More on why this matters in a moment.)
To tell you anything meaningful, quantitative data must achieve statistical significance.
If it’s been a while since your college statistics class, here’s a refresher: statistical significance is a measuring stick. It tells you whether the results you get are due to a specific cause or whether they can be attributed to random chance.
To achieve statistical significance in a study, you have to be really careful to set the study up the right way and with a meaningful sample size.
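For instance, a pooled two-proportion z-test, a standard textbook formula rather than anything specific to this article, fits in a few lines of stdlib Python. It asks whether the gap between two observed proportions is larger than chance alone would plausibly produce:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-sided pooled z-test for a difference between two proportions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With identical groups the test returns z = 0 and p = 1; the wider the gap between the proportions (at fixed sample sizes), the smaller the p-value, and a small p-value is what "statistically significant" refers to.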
This doesn’t mean it’s impossible to get quantitative data. But unless you have someone on your team who knows all about null hypotheses and p-values and statistical analysis, you might need to outsource quantitative research.
Plenty of businesses do this, but it’s pricey.
When you’re just starting out or you’re strapped for cash, qualitative data can get you valuable information—quickly and without gouging your wallet.
Another reason qualitative data is more accessible? It requires a smaller sample size to achieve meaningful results.
Even one person’s perspective brings value to a research project—ever heard of a case study?
The sweet spot depends on the purpose of the study, but for qualitative market research, somewhere between 10 and 40 respondents is a good number.
Any more than that and you risk reaching saturation. That’s when you keep getting results that echo each other and add nothing new to the research.
Quantitative data needs enough respondents to reach statistical significance without veering into saturation territory.
The ideal sample size number is usually higher than it is for qualitative data. But as with qualitative data, there’s no single, magic number. It all depends on statistical values like confidence level, population size, and margin of error.
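One common way those values combine is Cochran's sample-size formula for estimating a proportion. Here is a sketch; the defaults of a 95% confidence level (z = 1.96) and a 5% margin of error are conventional textbook choices, not figures from this article:

```python
import math

def required_sample_size(z=1.96, p=0.5, margin=0.05, population=None):
    """Cochran's formula for estimating a proportion, with an optional
    finite-population correction when the population size is known."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)   # finite-population correction
    return math.ceil(n0)
```

With the defaults this gives the familiar 385 respondents; knowing the population is only 1,000 shrinks the requirement to 278, which illustrates why "the ideal number" shifts with confidence level, margin of error, and population size.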
Because it often requires a larger sample size, quantitative research can be more difficult for the average person to do on their own.
Running a study is just the first part of conducting qualitative and quantitative research.
After you’ve collected data, you have to study it. Find themes, patterns, consistencies, inconsistencies. Interpret and organize the numbers or survey responses or interview recordings. Tidy it all up into something you can draw conclusions from and apply to various situations.
This is called data analysis, and it’s done in completely different ways for qualitative vs. quantitative data.
For qualitative data, analysis includes coding responses, identifying themes, and interpreting patterns (for example, thematic, content, or narrative analysis).
You can often do qualitative data analysis manually or with tools like NVivo and ATLAS.ti. These tools help you organize, code, and analyze your subjective qualitative data.
Quantitative data analysis is a lot less subjective. Here’s how it generally goes: clean and validate the dataset, run descriptive statistics to summarize it, then apply inferential tests to draw conclusions.
Researchers generally use sophisticated data analysis tools like RapidMiner and Tableau to help them do this work.
Quantitative research tends to be less flexible than qualitative research. It relies on structured data collection methods, which researchers must set up well before the study begins.
This rigid structure is part of what makes quantitative data so reliable. But the downside here is that once you start the study, it’s hard to change anything without negatively affecting the results. If something unexpected comes up—or if new questions arise—researchers can’t easily change the scope of the study.
Qualitative research is a lot more flexible. This is why qualitative data can go deeper than quantitative data. If you’re interviewing someone and an interesting, unexpected topic comes up, you can immediately explore it.
Other qualitative research methods offer flexibility, too. Most big survey software brands allow you to build flexible surveys using branching and skip logic. These features let you customize which questions respondents see based on the answers they give.
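Skip logic of the kind these survey tools offer can be modeled as a tiny routing table. This is our own illustrative sketch, with made-up question ids, not how any particular survey platform implements it:

```python
# Hypothetical survey flow: each question maps an answer to the next
# question id (None ends the survey).
FLOW = {
    "q1_owns_cat": lambda a: "q2_wheel_interest" if a == "yes" else None,
    "q2_wheel_interest": lambda a: "q3_budget" if a == "yes" else "q4_why_not",
    "q3_budget": lambda a: None,
    "q4_why_not": lambda a: None,
}

def route(answers, start="q1_owns_cat"):
    """Return the sequence of questions a given respondent actually sees."""
    path, q = [], start
    while q is not None:
        path.append(q)
        q = FLOW[q](answers.get(q))
    return path
```

Two respondents with different answers walk different paths through the same survey, which is the flexibility being described here.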
This flexibility is unheard of in quantitative research. But even though it’s as flexible as an Olympic gymnast, qualitative data can be less reliable—and harder to validate.
Quantitative data is more reliable than qualitative data. Numbers can’t be massaged to fit a certain bias. If you replicate the study—in other words, run the exact same quantitative study two or more times—you should get nearly identical results each time. The same goes if another set of researchers runs the same study using the same methods.
This is what gives quantitative data that reliability factor.
There are a few key benefits here. First, reliable data means you can confidently make generalizations that apply to a larger population. It also means the data is valid and accurately measures whatever it is you’re trying to measure.
And finally, reliable data is trustworthy. Big industries like healthcare, marketing, and education frequently use quantitative data to make life-or-death decisions. The more reliable and trustworthy the data, the more confident these decision-makers can be when it’s time to make critical choices.
Unlike quantitative data, qualitative data isn’t overtly reliable. It’s not easy to replicate. If you send out the same qualitative survey on two separate occasions, you’ll get a new mix of responses. Your interpretations of the data might look different, too.
There’s still incredible value in qualitative data, of course, and there are ways to make sure the data is valid. These include triangulating across multiple sources, member checking (asking participants to confirm your interpretations), and peer review of your analysis.
Whether you’re dealing with qualitative or quantitative data, transparency, accuracy, and validity are crucial. Focus on sourcing (or conducting) quantitative research that’s easy to replicate and qualitative research that’s been peer-reviewed.
With rock-solid data like this, you can make critical business decisions with confidence.
Last Updated on January 30, 2019