Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e., text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate the language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.

Description

Sources of data could be interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, or historical documents). A single study may analyze several forms of text. To analyze the text using content analysis, the text must be coded, or broken down, into manageable units for analysis (i.e., “codes”). Once the text is coded, the codes can then be grouped into broader “code categories” to summarize the data even further.

Three different definitions of content analysis are provided below.

Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)

Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)

Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)

Uses of Content Analysis

Identify the intentions, focus or communication trends of an individual, group or institution

Describe attitudinal and behavioral responses to communications

Determine the psychological or emotional state of persons or groups

Reveal international differences in communication content

Reveal patterns in communication content

Pre-test and improve an intervention or survey prior to launch

Analyze focus group interviews and open-ended questions to complement quantitative data

Types of Content Analysis

There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.

Conceptual Analysis

Typically, people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding of implicit terms is more complicated: the researcher must decide the level of implication and make judgments that are inherently subjective (an issue for reliability and validity). Coding of implicit terms therefore involves using a dictionary, contextual translation rules, or both.

To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.

General steps for conducting a conceptual content analysis:

1. Decide the level of analysis: word, word sense, phrase, sentence, themes

2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.

Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.

Option B allows the researcher to stay focused and examine the data for specific concepts.

3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.

When coding for the existence of a concept, the researcher counts a concept only once, no matter how many times it appears in the data.

When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.
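To make the difference concrete, here is a minimal sketch in Python of coding the same text for existence versus frequency. The concept dictionary and keywords are invented for illustration; a real study would use its own codebook.

```python
from collections import Counter

# Hypothetical concept dictionary: each concept is defined by a set of keywords.
CONCEPTS = {
    "danger": {"dangerous", "harm", "threat"},
    "safety": {"safe", "protect", "secure"},
}

def code_text(text, mode="frequency"):
    """Code a text for each concept, either by existence (0/1) or by frequency."""
    tokens = [tok.strip(".,!?;") for tok in text.lower().split()]
    counts = Counter()
    for concept, keywords in CONCEPTS.items():
        hits = sum(1 for tok in tokens if tok in keywords)
        counts[concept] = hits if mode == "frequency" else int(hits > 0)
    return dict(counts)

text = "The dog seemed dangerous, a real threat, but the yard was safe."
print(code_text(text, mode="existence"))  # {'danger': 1, 'safety': 1}
print(code_text(text, mode="frequency"))  # {'danger': 2, 'safety': 1}
```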

4. Decide on how you will distinguish among concepts:

Should words be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or the rules could be formulated so that the researcher can distinguish these word segments into separate codes.

What level of implication is to be allowed? Words that imply the concept, or only words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.
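One lightweight way to implement such decisions in software is with regular-expression “translation rules” that map different surface forms, and any agreed-upon implicit phrasings, to the same code. The code name and patterns below are purely illustrative.

```python
import re

# Illustrative translation rules: each code is defined by regular-expression
# patterns covering different surface forms and agreed-upon implicit phrasings.
TRANSLATION_RULES = {
    "danger": [
        r"\bdangerous(ness)?\b",   # explicit forms: "dangerous", "dangerousness"
        r"\bcould cause harm\b",   # implicit phrasing allowed by the coding rules
        r"\bis scary\b",
    ],
}

def apply_rules(text):
    """Return the set of codes whose rules match the text."""
    return {
        code
        for code, patterns in TRANSLATION_RULES.items()
        if any(re.search(p, text, flags=re.IGNORECASE) for p in patterns)
    }

print(apply_rules("That person could cause harm to me."))  # {'danger'}
```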

5. Develop rules for coding your texts. After the decisions in steps 1-4 are complete, a researcher can begin developing rules for translating text into codes. This keeps the coding process organized and consistent, and the researcher can code for exactly what he or she wants to code. Validity of the coding process is ensured when the researcher is consistent and coherent in applying the codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.

6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?

7. Code the text: This can be done by hand or by using software. With software, researchers can input categories and have the coding done automatically, quickly, and efficiently. When coding is done by hand, a researcher can recognize errors far more easily (e.g., typos, misspellings). If using computer coding, the text should be cleaned of errors so that all available data are included. The decision between hand and computer coding is most relevant for implicit information, where category preparation is essential for accurate coding.

8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.

Relational Analysis

Relational analysis begins like conceptual analysis, with a concept chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning; rather, meaning is a product of the relationships among concepts.

To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select text for analysis. Select the text carefully, balancing having enough information for a thorough analysis (so that results are not limited) against having information so extensive that the coding process becomes too arduous to yield meaningful and worthwhile results.

There are three subcategories of relational analysis to choose from prior to going on to the general steps.

Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.

Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.

Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.
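As a concrete sketch of the proximity-analysis subcategory described above, the snippet below scans fixed, non-overlapping windows of text and counts co-occurring concept pairs. The concept keywords and the example sentence are invented; a real study would draw these from its codebook.

```python
from collections import Counter
from itertools import combinations

# Hypothetical concept keywords; a real study would use its own coding scheme.
CONCEPT_WORDS = {"economy": "economy", "jobs": "jobs", "crime": "crime"}

def cooccurrence_counts(tokens, window=10):
    """Scan fixed windows of tokens and count co-occurring concept pairs."""
    pairs = Counter()
    for start in range(0, len(tokens), window):
        window_tokens = set(tokens[start:start + window])
        present = sorted(c for c, w in CONCEPT_WORDS.items() if w in window_tokens)
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs

speech = "the economy needs jobs and more jobs while crime falls as the economy grows"
print(cooccurrence_counts(speech.lower().split(), window=6))
```

The resulting pair counts can be arranged as a concept matrix whose cells record how often each pair of concepts appears in the same window.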

General steps for conducting a relational content analysis:

1. Determine the type of analysis: Once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, themes.

2. Reduce the text to categories and code for words or patterns. A researcher can code for existence of meanings or words.

3. Explore the relationship between concepts: once the words are coded, the text can be analyzed for the following:

Strength of relationship: degree to which two or more concepts are related.

Sign of relationship: are concepts positively or negatively related to each other?

Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y,” “X occurs before Y,” “if X then Y,” or “X is the primary motivator of Y.”

4. Code the relationships: a key difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.

5. Perform statistical analyses: explore differences or look for relationships among the variables identified during coding.

6. Map out the representations: such as decision mapping and mental models.
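As a minimal sketch of steps 4 through 6, coded relationship statements can be tallied into a simple map of strength and sign. The coded statements below are invented for illustration; a full cognitive map would usually be drawn as a network diagram.

```python
from collections import defaultdict

# Hypothetical coded relationship statements extracted from the texts:
# (concept A, concept B, sign), where sign is +1 (positive) or -1 (negative).
coded_relationships = [
    ("unemployment", "crime", +1),
    ("unemployment", "crime", +1),
    ("education", "crime", -1),
]

# Aggregate into a simple map: strength = number of coded statements,
# net sign = sum of the coded signs for that pair.
edges = defaultdict(lambda: {"strength": 0, "sign": 0})
for a, b, sign in coded_relationships:
    key = tuple(sorted((a, b)))
    edges[key]["strength"] += 1
    edges[key]["sign"] += sign

for (a, b), stats in edges.items():
    print(f"{a} -- {b}: strength={stats['strength']}, net sign={stats['sign']:+d}")
```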

Reliability and Validity

Reliability: Because of the human nature of researchers, coding errors can never be eliminated, only minimized. As a general rule, an agreement level of 80% or higher among coders is considered acceptable. Three criteria comprise the reliability of a content analysis:

Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.

Reproducibility: the tendency for a group of coders to classify category membership in the same way.

Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
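These criteria are commonly checked with inter-coder agreement statistics. Below is a minimal sketch, with invented coder data, of computing simple percent agreement (the 80% rule of thumb above) and Cohen's kappa, which corrects for agreement expected by chance.

```python
from collections import Counter

# Two coders' category assignments for the same ten text units (hypothetical data).
coder_1 = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
coder_2 = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]

# Simple percent agreement.
n = len(coder_1)
p_o = sum(a == b for a, b in zip(coder_1, coder_2)) / n
print(f"Percent agreement: {p_o:.0%}")  # 80%

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
c1, c2 = Counter(coder_1), Counter(coder_2)
p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
kappa = (p_o - p_e) / (1 - p_e)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.68
```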

Validity: Three criteria comprise the validity of a content analysis:

Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.

Conclusions: What level of implication is allowable? Do conclusions correctly follow from the data? Are the results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine” variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrence and frequency, but it may not be able to produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.

Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.

Advantages of Content Analysis

Directly examines communication using text

Allows for both qualitative and quantitative analysis

Provides valuable historical and cultural insights over time

Allows a closeness to data

Coded form of the text can be statistically analyzed

Unobtrusive means of analyzing interactions

Provides insight into complex models of human thought and language use

When done well, is considered a relatively “exact” research method

Is a readily understood and inexpensive research method

A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.

Disadvantages of Content Analysis

Can be extremely time consuming

Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation

Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study

Is inherently reductive, particularly when dealing with complex texts

Tends too often to simply consist of word counts

Often disregards the context that produced the text, as well as the state of things after the text is produced

Can be difficult to automate or computerize

Textbooks & Chapters  

Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.

Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation. New York: Academic Press, 1980.

de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.

Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.

Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)

Methodological Articles  

Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.

Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.

Application Articles  

Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.

Ullstrom S, Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. British Medical Journal, Quality & Safety Issue. 23:325-331.

Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.

Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.

QSR NVivo:  http://www.qsrinternational.com/products.aspx

Atlas.ti:  http://www.atlasti.com/webinars.html

R- RQDA package:  http://rqda.r-forge.r-project.org/

Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU. Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .

Written by Michael Palmquist as an introduction to content analysis, this is the main resource on content analysis on the Web. It is comprehensive, yet succinct, and includes examples and an annotated bibliography. The information contained in the narrative above draws heavily from and summarizes Michael Palmquist’s excellent resource on content analysis, but it has been streamlined for doctoral students and junior researchers in epidemiology.

At the Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences (P8785 Qualitative Research Methods).

Content Analysis | Guide, Methods & Examples

Published on July 18, 2019 by Amy Luo. Revised on June 22, 2023.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding).  In both types, you categorize or “code” words, themes, and concepts within the texts and then analyze the results.


What is content analysis used for?

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyze.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects or concepts in a set of historical or contemporary texts.

Quantitative content analysis example

To research the importance of employment issues in political campaigns, you could analyze campaign speeches for the frequency of terms such as unemployment, jobs, and work, and use statistical analysis to find differences over time or between candidates.
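A minimal sketch of how such frequencies could be tallied, with an invented two-speech corpus standing in for real campaign data:

```python
import re
from collections import Counter

TERMS = ["unemployment", "jobs", "work"]

# Hypothetical corpus: {(candidate, year): speech text}.
speeches = {
    ("Candidate A", 2016): "We will bring back jobs, jobs for everyone who wants work.",
    ("Candidate B", 2020): "Unemployment is falling, but too many people still look for work.",
}

def term_counts(text):
    """Count how often each target term appears in one speech."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {term: counts[term] for term in TERMS}

for (candidate, year), text in speeches.items():
    print(candidate, year, term_counts(text))
```

The per-speech counts could then be compared across candidates or years with standard statistical tests.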

In addition, content analysis can be used to make qualitative inferences by analyzing the meaning and semantic relationship of words and concepts.

Qualitative content analysis example

To gain a more qualitative understanding of employment issues in political campaigns, you could locate the word unemployment in speeches, identify what other words or phrases appear next to it (such as economy, inequality, or laziness), and analyze the meanings of these relationships to better understand the intentions and targets of different campaigns.
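One common way to inspect the words that appear around a term is a simple keyword-in-context (concordance) listing; the sketch below uses an invented sentence.

```python
import re

def concordance(text, keyword, window=4):
    """Return snippets showing `window` words on each side of every keyword hit."""
    tokens = re.findall(r"\w+", text.lower())
    snippets = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            snippets.append(f"... {left} [{keyword}] {right} ...")
    return snippets

speech = "Rising unemployment reflects a weak economy, and unemployment breeds inequality."
for line in concordance(speech, "unemployment"):
    print(line)
```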

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analyzing the consequences of communication content, such as the flow of information or audience responses


Advantages of content analysis

  • Unobtrusive data collection

You can analyze communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability .

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost – all you need is access to the appropriate sources.

Disadvantages of content analysis

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions, leading to various types of research bias and cognitive bias .

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

How to conduct content analysis

If you want to use content analysis in your research, you need to start with a clear, direct research question.

Example research question for content analysis

Is there a difference in how the US media represents younger politicians compared to older ones in terms of trustworthiness?

Next, you follow these five steps.

1. Select the content you will analyze

Based on your research question, choose the texts that you will analyze. You need to decide:

  • The medium (e.g. newspapers, speeches or websites) and genre (e.g. opinion pieces, political campaign speeches, or marketing copy)
  • The inclusion and exclusion criteria (e.g. newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If there are only a small number of texts that meet your criteria, you might analyze all of them. If there is a large volume of texts, you can select a sample.

2. Define the units and categories of analysis

Next, you need to determine the level at which you will analyze your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g. aged 30-40, lawyer, parent) or more conceptual (e.g. trustworthy, corrupt, conservative, family oriented).

Your units of analysis are the politicians who appear in each article and the words and phrases that are used to describe them. Based on your research question, you have to categorize based on age and the concept of trustworthiness. To get more detailed data, you also code for other categories such as their political party and the marital status of each politician mentioned.

3. Develop a set of rules for coding

Coding involves organizing the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

In considering the category “younger politician,” you decide which titles will be coded with this category (senator, governor, counselor, mayor). With “trustworthy,” you decide which specific words or phrases related to trustworthiness (e.g. honest and reliable) will be coded in this category.
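If coding is assisted by software, rules like these can be written down as a small, explicit coding scheme. The word lists below simply mirror the illustrative example above and are not a real codebook.

```python
# Illustrative coding scheme mirroring the example above.
CODING_RULES = {
    "younger_politician": {"senator", "governor", "counselor", "mayor"},
    "trustworthy": {"honest", "reliable"},
}

def code_sentence(sentence):
    """Return the codes whose word lists match the sentence."""
    words = {w.strip(".,;") for w in sentence.lower().split()}
    return [code for code, terms in CODING_RULES.items() if words & terms]

print(code_sentence("The young mayor was praised as honest and reliable."))
# ['younger_politician', 'trustworthy']
```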

4. Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorizing words and phrases.

Following your coding rules, you examine each newspaper article in your sample. You record the characteristics of each politician mentioned, along with all words and phrases related to trustworthiness that are used to describe them.

5. Analyze the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context and audience of the texts.

Let’s say the results reveal that words and phrases related to trustworthiness appeared in the same sentence as an older politician more frequently than they did in the same sentence as a younger politician. From these results, you conclude that national newspapers present older politicians as more trustworthy than younger politicians, and infer that this might have an effect on readers’ perceptions of younger people in politics.
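A minimal sketch of how that final comparison might be checked, using invented counts; the chi-square test assumes the scipy package is available.

```python
# Hypothetical coded counts: sentences mentioning a politician, split by whether
# a trustworthiness-related word appeared in the same sentence.
#                          [with term, without term]
older_politicians = [64, 136]    # 200 sentences, 32% contain trustworthiness terms
younger_politicians = [30, 170]  # 200 sentences, 15% contain trustworthiness terms

for label, (yes, no) in [("older", older_politicians), ("younger", younger_politicians)]:
    print(f"{label}: {yes / (yes + no):.0%} of sentences contain trustworthiness terms")

# A chi-square test of independence checks whether the difference is larger
# than chance alone would suggest (requires scipy).
from scipy.stats import chi2_contingency
chi2, p, dof, expected = chi2_contingency([older_politicians, younger_politicians])
print(f"chi-square = {chi2:.1f}, p = {p:.4f}")
```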

Content Analysis – Methods, Types and Examples

Definition:

Content analysis is a research method used to analyze and interpret the characteristics of various forms of communication, such as text, images, or audio. It involves systematically analyzing the content of these materials, identifying patterns, themes, and other relevant features, and drawing inferences or conclusions based on the findings.

Content analysis can be used to study a wide range of topics, including media coverage of social issues, political speeches, advertising messages, and online discussions, among others. It is often used in qualitative research and can be combined with other methods to provide a more comprehensive understanding of a particular phenomenon.

Types of Content Analysis

There are generally two types of content analysis:

Quantitative Content Analysis

This type of content analysis involves the systematic and objective counting and categorization of the content of a particular form of communication, such as text or video. The data obtained is then subjected to statistical analysis to identify patterns, trends, and relationships between different variables. Quantitative content analysis is often used to study media content, advertising, and political speeches.

Qualitative Content Analysis

This type of content analysis is concerned with the interpretation and understanding of the meaning and context of the content. It involves the systematic analysis of the content to identify themes, patterns, and other relevant features, and to interpret the underlying meanings and implications of these features. Qualitative content analysis is often used to study interviews, focus groups, and other forms of qualitative data, where the researcher is interested in understanding the subjective experiences and perceptions of the participants.

Methods of Content Analysis

There are several methods of content analysis, including:

Conceptual Analysis

This method involves analyzing the meanings of key concepts used in the content being analyzed. The researcher identifies key concepts and analyzes how they are used, defining them and categorizing them into broader themes.

Content Analysis by Frequency

This method involves counting and categorizing the frequency of specific words, phrases, or themes that appear in the content being analyzed. The researcher identifies relevant keywords or phrases and systematically counts their frequency.

Comparative Analysis

This method involves comparing the content of two or more sources to identify similarities, differences, and patterns. The researcher selects relevant sources, identifies key themes or concepts, and compares how they are represented in each source.

Discourse Analysis

This method involves analyzing the structure and language of the content being analyzed to identify how the content constructs and represents social reality. The researcher analyzes the language used and the underlying assumptions, beliefs, and values reflected in the content.

Narrative Analysis

This method involves analyzing the content as a narrative, identifying the plot, characters, and themes, and analyzing how they relate to the broader social context. The researcher identifies the underlying messages conveyed by the narrative and their implications for the broader social context.

Content Analysis Conducting Guide

Here is a basic guide to conducting a content analysis:

  • Define your research question or objective: Before starting your content analysis, you need to define your research question or objective clearly. This will help you to identify the content you need to analyze and the type of analysis you need to conduct.
  • Select your sample: Select a representative sample of the content you want to analyze. This may involve selecting a random sample, a purposive sample, or a convenience sample, depending on the research question and the availability of the content.
  • Develop a coding scheme: Develop a coding scheme or a set of categories to use for coding the content. The coding scheme should be based on your research question or objective and should be reliable, valid, and comprehensive.
  • Train coders: Train coders to use the coding scheme and ensure that they have a clear understanding of the coding categories and procedures. You may also need to establish inter-coder reliability to ensure that different coders are coding the content consistently.
  • Code the content: Code the content using the coding scheme. This may involve manually coding the content, using software, or a combination of both.
  • Analyze the data: Once the content is coded, analyze the data using appropriate statistical or qualitative methods, depending on the research question and the type of data.
  • Interpret the results: Interpret the results of the analysis in the context of your research question or objective. Draw conclusions based on the findings and relate them to the broader literature on the topic.
  • Report your findings: Report your findings in a clear and concise manner, including the research question, methodology, results, and conclusions. Provide details about the coding scheme, inter-coder reliability, and any limitations of the study.

Applications of Content Analysis

Content analysis has numerous applications across different fields, including:

  • Media Research: Content analysis is commonly used in media research to examine the representation of different groups, such as race, gender, and sexual orientation, in media content. It can also be used to study media framing, media bias, and media effects.
  • Political Communication: Content analysis can be used to study political communication, including political speeches, debates, and news coverage of political events. It can also be used to study political advertising and the impact of political communication on public opinion and voting behavior.
  • Marketing Research: Content analysis can be used to study advertising messages, consumer reviews, and social media posts related to products or services. It can provide insights into consumer preferences, attitudes, and behaviors.
  • Health Communication: Content analysis can be used to study health communication, including the representation of health issues in the media, the effectiveness of health campaigns, and the impact of health messages on behavior.
  • Education Research: Content analysis can be used to study educational materials, including textbooks, curricula, and instructional materials. It can provide insights into the representation of different topics, perspectives, and values.
  • Social Science Research: Content analysis can be used in a wide range of social science research, including studies of social media, online communities, and other forms of digital communication. It can also be used to study interviews, focus groups, and other qualitative data sources.

Examples of Content Analysis

Here are some examples of content analysis:

  • Media Representation of Race and Gender: A content analysis could be conducted to examine the representation of different races and genders in popular media, such as movies, TV shows, and news coverage.
  • Political Campaign Ads : A content analysis could be conducted to study political campaign ads and the themes and messages used by candidates.
  • Social Media Posts: A content analysis could be conducted to study social media posts related to a particular topic, such as the COVID-19 pandemic, to examine the attitudes and beliefs of social media users.
  • Instructional Materials: A content analysis could be conducted to study the representation of different topics and perspectives in educational materials, such as textbooks and curricula.
  • Product Reviews: A content analysis could be conducted to study product reviews on e-commerce websites, such as Amazon, to identify common themes and issues mentioned by consumers.
  • News Coverage of Health Issues: A content analysis could be conducted to study news coverage of health issues, such as vaccine hesitancy, to identify common themes and perspectives.
  • Online Communities: A content analysis could be conducted to study online communities, such as discussion forums or social media groups, to understand the language, attitudes, and beliefs of the community members.

Purpose of Content Analysis

The purpose of content analysis is to systematically analyze and interpret the content of various forms of communication, such as written, oral, or visual, to identify patterns, themes, and meanings. Content analysis is used to study communication in a wide range of fields, including media studies, political science, psychology, education, sociology, and marketing research. The primary goals of content analysis include:

  • Describing and summarizing communication: Content analysis can be used to describe and summarize the content of communication, such as the themes, topics, and messages conveyed in media content, political speeches, or social media posts.
  • Identifying patterns and trends: Content analysis can be used to identify patterns and trends in communication, such as changes over time, differences between groups, or common themes or motifs.
  • Exploring meanings and interpretations: Content analysis can be used to explore the meanings and interpretations of communication, such as the underlying values, beliefs, and assumptions that shape the content.
  • Testing hypotheses and theories: Content analysis can be used to test hypotheses and theories about communication, such as the effects of media on attitudes and behaviors or the framing of political issues in the media.

When to use Content Analysis

Content analysis is a useful method when you want to analyze and interpret the content of various forms of communication, such as written, oral, or visual. Here are some specific situations where content analysis might be appropriate:

  • When you want to study media content: Content analysis is commonly used in media studies to analyze the content of TV shows, movies, news coverage, and other forms of media.
  • When you want to study political communication: Content analysis can be used to study political speeches, debates, news coverage, and advertising.
  • When you want to study consumer attitudes and behaviors: Content analysis can be used to analyze product reviews, social media posts, and other forms of consumer feedback.
  • When you want to study educational materials: Content analysis can be used to analyze textbooks, instructional materials, and curricula.
  • When you want to study online communities: Content analysis can be used to analyze discussion forums, social media groups, and other forms of online communication.
  • When you want to test hypotheses and theories: Content analysis can be used to test hypotheses and theories about communication, such as the framing of political issues in the media or the effects of media on attitudes and behaviors.

Characteristics of Content Analysis

Content analysis has several key characteristics that make it a useful research method. These include:

  • Objectivity: Content analysis aims to be an objective method of research, meaning that the researcher does not introduce their own biases or interpretations into the analysis. This is achieved by using standardized and systematic coding procedures.
  • Systematic: Content analysis involves the use of a systematic approach to analyze and interpret the content of communication. This involves defining the research question, selecting the sample of content to analyze, developing a coding scheme, and analyzing the data.
  • Quantitative: Content analysis often involves counting and measuring the occurrence of specific themes or topics in the content, making it a quantitative research method. This allows for statistical analysis and generalization of findings.
  • Contextual: Content analysis considers the context in which the communication takes place, such as the time period, the audience, and the purpose of the communication.
  • Iterative: Content analysis is an iterative process, meaning that the researcher may refine the coding scheme and analysis as they analyze the data, to ensure that the findings are valid and reliable.
  • Reliability and validity: Content analysis aims to be a reliable and valid method of research, meaning that the findings are consistent and accurate. This is achieved through inter-coder reliability tests and other measures to ensure the quality of the data and analysis.

Advantages of Content Analysis

There are several advantages to using content analysis as a research method, including:

  • Objective and systematic: Content analysis aims to be an objective and systematic method of research, which reduces the likelihood of bias and subjectivity in the analysis.
  • Large sample size: Content analysis allows for the analysis of a large sample of data, which increases the statistical power of the analysis and the generalizability of the findings.
  • Non-intrusive: Content analysis does not require the researcher to interact with the participants or disrupt their natural behavior, making it a non-intrusive research method.
  • Accessible data: Content analysis can be used to analyze a wide range of data types, including written, oral, and visual communication, making it accessible to researchers across different fields.
  • Versatile: Content analysis can be used to study communication in a wide range of contexts and fields, including media studies, political science, psychology, education, sociology, and marketing research.
  • Cost-effective: Content analysis is a cost-effective research method, as it does not require expensive equipment or participant incentives.

Limitations of Content Analysis

While content analysis has many advantages, there are also some limitations to consider, including:

  • Limited contextual information: Content analysis is focused on the content of communication, which means that contextual information may be limited. This can make it difficult to fully understand the meaning behind the communication.
  • Limited ability to capture nonverbal communication: Content analysis is limited to analyzing the content of communication that can be captured in written or recorded form. It may miss out on nonverbal communication, such as body language or tone of voice.
  • Subjectivity in coding: While content analysis aims to be objective, there may be subjectivity in the coding process. Different coders may interpret the content differently, which can lead to inconsistent results.
  • Limited ability to establish causality: Content analysis is a correlational research method, meaning that it cannot establish causality between variables. It can only identify associations between variables.
  • Limited generalizability: Content analysis is limited to the data that is analyzed, which means that the findings may not be generalizable to other contexts or populations.
  • Time-consuming: Content analysis can be a time-consuming research method, especially when analyzing a large sample of data. This can be a disadvantage for researchers who need to complete their research in a short amount of time.


Chapter 17. Content Analysis

Introduction

Content analysis is a term that is used to mean both a method of data collection and a method of data analysis. Archival and historical works can be the source of content analysis, but so too can the contemporary media coverage of a story, blogs, comment posts, films, cartoons, advertisements, brand packaging, and photographs posted on Instagram or Facebook. Really, almost anything can be the “content” to be analyzed. This is a qualitative research method because the focus is on the meanings and interpretations of that content rather than strictly numerical counts or variables-based causal modeling. [1] Qualitative content analysis (sometimes referred to as QCA) is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest—in other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue. This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis. It is also a nice segue between our data collection methods (e.g., interviewing, observation) chapters and chapters 18 and 19, whose focus is on coding, the primary means of data analysis for most qualitative data. In many ways, the methods of content analysis are quite similar to the method of coding.

Although the body of material (“content”) to be collected and analyzed can be nearly anything, most qualitative content analysis is applied to forms of human communication (e.g., media posts, news stories, campaign speeches, advertising jingles). The point of the analysis is to understand this communication, to systematically and rigorously explore its meanings, assumptions, themes, and patterns. Historical and archival sources may be the subject of content analysis, but there are other ways to analyze (“code”) this data when not overly concerned with the communicative aspect (see chapters 18 and 19). This is why we tend to consider content analysis its own method of data collection as well as a method of data analysis. Still, many of the techniques you learn in this chapter will be helpful to any “coding” scheme you develop for other kinds of qualitative data. Just remember that content analysis is a particular form with distinct aims and goals and traditions.

An Overview of the Content Analysis Process

The First Step: Selecting Content

Figure 17.1 is a display of possible content for content analysis. The first step in content analysis is making smart decisions about what content you want to analyze and clearly connecting this content to your research question or general focus of research. Why are you interested in the messages conveyed in this particular content? What will the identification of patterns here help you understand? Content analysis can be fun to do, but in order to make it research, you need to fit it into a research plan.

News stories; blogs; comment posts; lyrics; letters to the editor; films; cartoons; advertisements; brand packaging; logos; Instagram photos; tweets; photographs; graffiti; street signs; personalized license plates; avatars (names, shapes, presentations); nicknames; band posters; building names

Figure 17.1. A Non-exhaustive List of "Content" for Content Analysis

To take one example, let us imagine you are interested in gender presentations in society and how presentations of gender have changed over time. There are various forms of content out there that might help you document changes. You could, for example, begin by creating a list of magazines that are coded as being for “women” (e.g., Women’s Daily Journal ) and magazines that are coded as being for “men” (e.g., Men’s Health ). You could then select a date range that is relevant to your research question (e.g., 1950s–1970s) and collect magazines from that era. You might create a “sample” by deciding to look at three issues for each year in the date range and a systematic plan for what to look at in those issues (e.g., advertisements? Cartoons? Titles of articles? Whole articles?). You are not just going to look at some magazines willy-nilly. That would not be systematic enough to allow anyone to replicate or check your findings later on. Once you have a clear plan of what content is of interest to you and what you will be looking at, you can begin, creating a record of everything you are including as your content. This might mean a list of each advertisement you look at or each title of stories in those magazines along with its publication date. You may decide to have multiple “content” in your research plan. For each content, you want a clear plan for collecting, sampling, and documenting.

The Second Step: Collecting and Storing

Once you have a plan, you are ready to collect your data. This may entail downloading from the internet, creating a Word document or PDF of each article or picture, and storing these in a folder designated by the source and date (e.g., “ Men’s Health advertisements, 1950s”). Sølvberg ( 2021 ), for example, collected posted job advertisements for three kinds of elite jobs (economic, cultural, professional) in Sweden. But collecting might also mean going out and taking photographs yourself, as in the case of graffiti, street signs, or even what people are wearing. Chaise LaDousa, an anthropologist and linguist, took photos of “house signs,” which are signs, often creative and sometimes offensive, hung by college students living in communal off-campus houses. These signs were a focal point of college culture, sending messages about the values of the students living in them. Some of the names will give you an idea: “Boot ’n Rally,” “The Plantation,” “Crib of the Rib.” The students might find these signs funny and benign, but LaDousa ( 2011 ) argued convincingly that they also reproduced racial and gender inequalities. The data here already existed—they were big signs on houses—but the researcher had to collect the data by taking photographs.

In some cases, your content will be in physical form but not amenable to photographing, as in the case of films or unwieldy physical artifacts you find in the archives (e.g., undigitized meeting minutes or scrapbooks). In this case, you need to create some kind of detailed log (fieldnotes even) of the content that you can reference. In the case of films, this might mean watching the film and writing down details for key scenes that become your data. [2] For scrapbooks, it might mean taking notes on what you are seeing, quoting key passages, describing colors or presentation style. As you might imagine, this can take a lot of time. Be sure you budget this time into your research plan.

Researcher Note

A note on data scraping : Data scraping, sometimes known as screen scraping or frame grabbing, is a way of extracting data generated by another program, as when a scraping tool grabs information from a website. This may help you collect data that is on the internet, but you need to be ethical in how to employ the scraper. A student once helped me scrape thousands of stories from the Time magazine archives at once (although it took several hours for the scraping process to complete). These stories were freely available, so the scraping process simply sped up the laborious process of copying each article of interest and saving it to my research folder. Scraping tools can sometimes be used to circumvent paywalls. Be careful here!
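For readers unfamiliar with what a scraping script looks like, here is a minimal sketch of collecting web pages for later coding. The URL, CSS selector, and output folder are placeholders, not a real archive; always check a site's terms of service and robots.txt before scraping.

```python
import pathlib
import requests
from bs4 import BeautifulSoup

ARCHIVE_URL = "https://example.org/archive/2024"   # placeholder URL
OUT_DIR = pathlib.Path("research_folder")
OUT_DIR.mkdir(exist_ok=True)

response = requests.get(ARCHIVE_URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

# Save the text of each article-like element so it can be coded later.
for i, article in enumerate(soup.select("article")):   # placeholder selector
    (OUT_DIR / f"article_{i:03d}.txt").write_text(article.get_text(" ", strip=True))
```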

The Third Step: Analysis

There is often an assumption among novice researchers that once you have collected your data, you are ready to write about what you have found. Actually, you haven’t yet found anything, and if you try to write up your results, you will probably be staring sadly at a blank page. Between the collection and the writing comes the difficult task of systematically and repeatedly reviewing the data in search of patterns and themes that will help you interpret the data, particularly its communicative aspect (e.g., What is it that is being communicated here, with these “house signs” or in the pages of Men’s Health?).

The first time you go through the data, keep an open mind on what you are seeing (or hearing), and take notes about your observations that link up to your research question. In the beginning, it can be difficult to know what is relevant and what is extraneous. Sometimes, your research question changes based on what emerges from the data. Use the first round of review to consider this possibility, but then commit yourself to following a particular focus or path. If you are looking at how gender gets made or re-created, don’t follow the white rabbit down a hole about environmental injustice unless you decide that this really should be the focus of your study or that issues of environmental injustice are linked to gender presentation. In the second round of review, be very clear about emerging themes and patterns. Create codes (more on these in chapters 18 and 19) that will help you simplify what you are noticing. For example, “men as outdoorsy” might be a common trope you see in advertisements. Whenever you see this, mark the passage or picture. In your third (or fourth or fifth) round of review, begin to link up the tropes you’ve identified, looking for particular patterns and assumptions. You’ve drilled down to the details, and now you are building back up to figure out what they all mean. Start thinking about theory—either theories you have read about and are using as a frame of your study (e.g., gender as performance theory) or theories you are building yourself, as in the Grounded Theory tradition. Once you have a good idea of what is being communicated and how, go back to the data at least one more time to look for disconfirming evidence. Maybe you thought “men as outdoorsy” was of importance, but when you look hard, you note that women are presented as outdoorsy just as often. You just hadn’t paid attention. It is very important, as any kind of researcher but particularly as a qualitative researcher, to test yourself and your emerging interpretations in this way.

The Fourth and Final Step: The Write-Up

Only after you have fully completed your analysis, with its many rounds of review, will you be able to write about what you found. The interpretation exists not in the data but in your analysis of the data. Before writing your results, you will want to very clearly describe how you chose your data and all the possible limitations of this data (e.g., historical-trace problem or power problem; see chapter 16). Acknowledge any limitations of your sample. Describe the audience for the content, and discuss the implications of this. Once you have done all of this, you can put forth your interpretation of the communication of the content, linking to theory where doing so would help your readers understand your findings and what they mean more generally for our understanding of how the social world works. [3]

Analyzing Content: Helpful Hints and Pointers

Although every data set is unique and each researcher will have a different and unique research question to address with that data set, there are some common practices and conventions. When reviewing your data, what do you look at exactly? How will you know if you have seen a pattern? How do you note or mark your data?

Let’s start with the last question first. If your data is stored digitally, there are various ways you can highlight or mark up passages. You can, of course, do this with literal highlighters, pens, and pencils if you have print copies. But there are also qualitative software programs to help you store the data, retrieve the data, and mark the data. This can simplify the process, although it cannot do the work of analysis for you.

Qualitative software can be very expensive, so the first thing to do is to find out if your institution (or program) has a universal license its students can use. If they do not, most programs have special student licenses that are less expensive. The two most used programs at this moment are probably ATLAS.ti and NVivo. Both can cost more than $500 [4] but provide everything you could possibly need for storing data, content analysis, and coding. They also have a lot of customer support, and you can find many official and unofficial tutorials on how to use the programs’ features on the web. Dedoose, created by academic researchers at UCLA, is a decent program that lacks many of the bells and whistles of the two big programs. Instead of paying all at once, you pay monthly, as you use the program. The monthly fee is relatively affordable (less than $15), so this might be a good option for a small project. HyperRESEARCH is another basic program created by academic researchers, and it is free for small projects (those that have limited cases and material to import). You can pay a monthly fee if your project expands past the free limits. I have personally used all four of these programs, and they each have their pluses and minuses.

Regardless of which program you choose, you should know that none of them will actually do the hard work of analysis for you. They are incredibly useful for helping you store and organize your data, and they provide abundant tools for marking, comparing, and coding your data so you can make sense of it. But making sense of it will always be your job alone.

So let’s say you have some software, and you have uploaded all of your content into the program: video clips, photographs, transcripts of news stories, articles from magazines, even digital copies of college scrapbooks. Now what do you do? What are you looking for? How do you see a pattern? The answers to these questions will depend partially on the particular research question you have, or at least the motivation behind your research. Let’s go back to the idea of looking at gender presentations in magazines from the 1950s to the 1970s. Here are some things you can look at and code in the content: (1) actions and behaviors, (2) events or conditions, (3) activities, (4) strategies and tactics, (5) states or general conditions, (6) meanings or symbols, (7) relationships/interactions, (8) consequences, and (9) settings. Table 17.1 lists these with examples from our gender presentation study.

Table 17.1. Examples of What to Note During Content Analysis

What can be noted/coded | Example from Gender Presentation Study
Actions and behaviors
Events or conditions
Activities
Strategies and tactics
States/conditions
Meanings/symbols
Relationships/interactions
Consequences
Settings

One thing to note about the examples in table 17.1: sometimes we note (mark, record, code) a single example, while other times, as in “settings,” we are recording a recurrent pattern. To help you spot patterns, it is useful to mark every setting, including a notation on gender. Using software can help you do this efficiently. You can then call up “setting by gender” and note this emerging pattern. There’s an element of counting here, which we normally think of as quantitative data analysis, but we are using the count to identify a pattern that will be used to help us interpret the communication. Content analyses often include counting as part of the interpretive (qualitative) process.
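If you are comfortable with a little scripting, the “setting by gender” tally described above can also be done outside of dedicated qualitative software. Here is a minimal sketch in Python, assuming you have already hand-coded each advertisement and stored the codes as simple records; the field names and values are hypothetical.

    from collections import Counter

    # Hypothetical hand-coded records: one dict per advertisement.
    coded_ads = [
        {"year": 1955, "gender": "men", "setting": "outdoors"},
        {"year": 1955, "gender": "women", "setting": "kitchen"},
        {"year": 1962, "gender": "men", "setting": "office"},
        {"year": 1962, "gender": "women", "setting": "kitchen"},
        {"year": 1971, "gender": "women", "setting": "outdoors"},
    ]

    # Tally how often each setting co-occurs with each gender code.
    setting_by_gender = Counter((ad["setting"], ad["gender"]) for ad in coded_ads)

    for (setting, gender), count in setting_by_gender.most_common():
        print(f"{setting:10} {gender:6} {count}")

The count itself is not the finding; it simply points you toward a pattern that still has to be interpreted.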

In your own study, you may not need or want to look at all of the elements listed in table 17.1. Even in our imagined example, some are more useful than others. For example, “strategies and tactics” is a bit of a stretch here. In studies that are looking specifically at, say, policy implementation or social movements, this category will prove much more salient.

Another way to think about “what to look at” is to consider aspects of your content in terms of units of analysis. You can drill down to the specific words used (e.g., the adjectives commonly used to describe “men” and “women” in your magazine sample) or move up to the more abstract level of concepts used (e.g., the idea that men are more rational than women). Counting for the purpose of identifying patterns is particularly useful here. How many times is that idea of women’s irrationality communicated? How is it communicated (in comic strips, fictional stories, editorials, etc.)? Does the incidence of the concept change over time? Perhaps the “irrational woman” was everywhere in the 1950s, but by the 1970s, it is no longer showing up in stories and comics. By tracing its usage and prevalence over time, you might come up with a theory or story about gender presentation during the period. Table 17.2 provides more examples of using different units of analysis for this work along with suggestions for effective use.

Table 17.2. Examples of Unit of Analysis in Content Analysis

Unit of Analysis | How Used...
Words
Themes
Characters
Paragraphs
Items
Concepts
Semantics
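As a small illustration of the over-time counting described just before table 17.2, here is a sketch in Python that traces how often one concept code appears per decade. It assumes each coded item records its publication year and the codes applied to it; the codes themselves are hypothetical.

    from collections import Counter

    # Hypothetical coded items: publication year plus the concept codes applied to each.
    coded_items = [
        {"year": 1953, "codes": {"irrational woman", "domestic setting"}},
        {"year": 1958, "codes": {"irrational woman"}},
        {"year": 1964, "codes": {"irrational woman", "career woman"}},
        {"year": 1972, "codes": {"career woman", "outdoors"}},
    ]

    def decade(year):
        """Return the decade label for a year, e.g. 1953 -> '1950s'."""
        return f"{(year // 10) * 10}s"

    # Count how often the concept of interest was coded in each decade.
    concept = "irrational woman"
    trend = Counter(decade(item["year"]) for item in coded_items if concept in item["codes"])

    for label in sorted(trend):
        print(label, trend[label])

A falling or rising count across decades is, again, only the starting point for the interpretive work of explaining why the presentation changed.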

Every qualitative content analysis is unique in its particular focus and the particular data used, so there is no single correct way to approach analysis. You should have a better idea, however, of what kinds of things to look for and how to look for them. The next two chapters will take you further into the coding process, the primary analytical tool for qualitative research in general.

Further Readings

Cidell, Julie. 2010. “Content Clouds as Exploratory Qualitative Data Analysis.” Area 42(4):514–523. A demonstration of using visual “content clouds” as a form of exploratory qualitative data analysis using transcripts of public meetings and content of newspaper articles.

Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15(9):1277–1288. Distinguishes three distinct approaches to QCA: conventional, directed, and summative. Uses hypothetical examples from end-of-life care research.

Jackson, Romeo, Alex C. Lange, and Antonio Duran. 2021. “A Whitened Rainbow: The In/Visibility of Race and Racism in LGBTQ Higher Education Scholarship.” Journal Committed to Social Change on Race and Ethnicity (JCSCORE) 7(2):174–206.* Using a “critical summative content analysis” approach, examines research published on LGBTQ people between 2009 and 2019.

Krippendorff, Klaus. 2018. Content Analysis: An Introduction to Its Methodology. 4th ed. Thousand Oaks, CA: SAGE. A very comprehensive textbook on both quantitative and qualitative forms of content analysis.

Mayring, Philipp. 2022. Qualitative Content Analysis: A Step-by-Step Guide. Thousand Oaks, CA: SAGE. Formulates an eight-step approach to QCA.

Messinger, Adam M. 2012. “Teaching Content Analysis through ‘Harry Potter.’” Teaching Sociology 40(4):360–367. This is a fun example of a relatively brief foray into content analysis using the music found in Harry Potter films.

Neuendorf, Kimberly A. 2002. The Content Analysis Guidebook. Thousand Oaks, CA: SAGE. Although a helpful guide to content analysis in general, be warned that this textbook definitely favors quantitative over qualitative approaches to content analysis.

Schreier, Margrit. 2012. Qualitative Content Analysis in Practice. Thousand Oaks, CA: SAGE. Arguably the most accessible guidebook for QCA, written by a professor based in Germany.

Weber, Matthew A., Shannon Caplan, Paul Ringold, and Karen Blocksom. 2017. “Rivers and Streams in the Media: A Content Analysis of Ecosystem Services.” Ecology and Society 22(3).* Examines the content of a blog hosted by National Geographic and articles published in The New York Times and the Wall Street Journal for stories on rivers and streams (e.g., water quality, flooding).

  • There are ways of handling content analysis quantitatively, however. Some practitioners therefore specify qualitative content analysis (QCA). In this chapter, all content analysis is QCA unless otherwise noted. ↵
  • Note that some qualitative software allows you to upload whole films or film clips for coding. You will still have to get access to the film, of course. ↵
  • See chapter 20 for more on the final presentation of research. ↵
  • Actually, ATLAS.ti is an annual license, while NVivo is a perpetual license, but both are going to cost you at least $500 to use. Student rates may be lower. And don’t forget to ask your institution or program if they already have a software license you can use. ↵

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns and assumptions.  Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.    

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


How to do a content analysis


What is content analysis?

Contents:

  • Why would you use a content analysis?
  • Types of content analysis
  • Conceptual content analysis
  • Relational content analysis
  • Reliability and validity
  • Reliability
  • The advantages and disadvantages of content analysis
  • A step-by-step guide to conducting a content analysis
  • Step 1: Develop your research questions
  • Step 2: Choose the content you’ll analyze
  • Step 3: Identify your biases
  • Step 4: Define the units and categories of coding
  • Step 5: Develop a coding scheme
  • Step 6: Code the content
  • Step 7: Analyze the results
  • Frequently asked questions about content analysis
  • Related articles

In research, content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. Simply put, content analysis is a research method that aims to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data, depending on the specific use case.

As such, some of the objectives of content analysis include:

  • Simplifying complex, unstructured content.
  • Identifying trends, patterns, and relationships in the content.
  • Determining the characteristics of the content.
  • Identifying the intentions of individuals through the analysis of the content.
  • Identifying the implied aspects in the content.

Typically, when doing a content analysis, you’ll gather data not only from written text sources like newspapers, books, journals, and magazines but also from a variety of other oral and visual sources of content like:

  • Voice recordings, speeches, and interviews.
  • Web content, blogs, and social media content.
  • Films, videos, and photographs.

One of content analysis’s distinguishing features is that you'll be able to gather data for research without physically gathering data from participants. In other words, when doing a content analysis, you don't need to interact with people directly.

The process of doing a content analysis usually involves categorizing or coding concepts, words, and themes within the content and analyzing the results. We’ll look at the process in more detail below.

Typically, you’ll use content analysis when you want to:

  • Identify the intentions, communication trends, or communication patterns of an individual, a group of people, or even an institution.
  • Analyze and describe the behavioral and attitudinal responses of individuals to communications.
  • Determine the emotional or psychological state of an individual or a group of people.
  • Analyze the international differences in communication content.
  • Analyze audience responses to content.

Keep in mind, though, that these are just some examples of use cases where a content analysis might be appropriate and there are many others.

The key thing to remember is that content analysis will help you quantify the occurrence of specific words, phrases, themes, and concepts in content. Moreover, it can also be used when you want to make qualitative inferences out of the data by analyzing the semantic meanings and interrelationships between words, themes, and concepts.

In general, there are two types of content analysis: conceptual and relational analysis. Although these two types follow largely similar processes, their outcomes differ. As such, each of these types can provide different results, interpretations, and conclusions. With that in mind, let’s now look at these two types of content analysis in more detail.

With conceptual analysis, you’ll determine the existence of certain concepts within the content and identify their frequency. In other words, conceptual analysis involves counting the number of times a specific concept appears in the content.

Conceptual analysis is typically focused on explicit data, which means you’ll focus your analysis on a specific concept to identify its presence in the content and determine its frequency.

However, when conducting a content analysis, you can also use implicit data. This approach is more involved and complicated, and it requires the use of a dictionary, contextual translation rules, or a combination of both.

No matter what type you use, conceptual analysis brings an element of quantitative analysis into a qualitative approach to research.

Relational content analysis takes conceptual analysis a step further. So, while the process starts in the same way by identifying concepts in content, it doesn’t focus on finding the frequency of these concepts, but rather on the relationships between the concepts, the context in which they appear in the content, and their interrelationships.

Before starting with a relational analysis, you’ll first need to decide on which subcategory of relational analysis you’ll use:

  • Affect extraction: With this relational content analysis approach, you’ll evaluate concepts based on their emotional attributes. You’ll typically assess these emotions on a rating scale with higher values assigned to positive emotions and lower values to negative ones. In turn, this allows you to capture the emotions of the writer or speaker at the time the content is created. The main difficulty with this approach is that emotions can differ over time and across populations.
  • Proximity analysis: With this approach, you’ll identify concepts as in conceptual analysis, but you’ll evaluate the way in which they occur together in the content. In other words, proximity analysis allows you to analyze the relationship between concepts and derive a concept matrix from which you’ll be able to develop meaning. Proximity analysis is typically used when you want to extract facts from the content rather than contextual, emotional, or cultural factors. (A short code sketch of this approach follows this list.)
  • Cognitive mapping: Finally, cognitive mapping can be used with affect extraction or proximity analysis. It’s a visualization technique that allows you to create a model that represents the overall meaning of content and presents it as a graphic map of the relationships between concepts. As such, it’s also commonly used when analyzing the changes in meanings, definitions, and terms over time.
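Proximity analysis, in particular, is often operationalized as counting how often pairs of concept terms occur within a fixed window of words. The sketch below shows one minimal way to do this in Python; the concept list, window size, and sample sentence are all hypothetical, and a real study would refine this with stemming, synonym lists, and human review.

    from collections import Counter

    # Hypothetical concept terms and window size; real studies would refine both.
    CONCEPTS = {"stress", "sleep", "exercise", "diet"}
    WINDOW = 5  # look this many words ahead of each concept term

    def proximity_counts(text, concepts=CONCEPTS, window=WINDOW):
        """Count how often pairs of concept terms occur within `window` words of each other."""
        words = text.lower().split()
        pairs = Counter()
        for i, word in enumerate(words):
            if word not in concepts:
                continue
            for other in words[i + 1 : i + 1 + window]:  # forward-only window avoids double counting
                if other in concepts and other != word:
                    pairs[tuple(sorted((word, other)))] += 1
        return pairs

    sample = "poor sleep raises stress while regular exercise and a balanced diet lower stress"
    print(proximity_counts(sample))

The resulting pair counts form a simple concept matrix, which you can then interpret, or visualize with cognitive mapping as described above.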

Now that we’ve seen what content analysis is and looked at the different types of content analysis, it’s important to understand how reliable it is as a research method. We’ll also look at what criteria impact the validity of a content analysis.

There are three criteria that determine the reliability of a content analysis:

  • Stability . Stability refers to the tendency of coders to consistently categorize or code the same data in the same way over time.
  • Reproducibility. This criterion refers to the tendency of coders to classify category membership in the same way.
  • Accuracy . Accuracy refers to the extent to which the classification of content corresponds to a specific standard.

Keep in mind, though, that because you’ll need to code or categorize the concepts you’ll aim to identify and analyze manually, you’ll never be able to eliminate human error. However, you’ll be able to minimize it.
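In practice, reliability is usually checked by having two or more coders categorize the same units and comparing the results. The criteria above do not prescribe a particular statistic, but percent agreement and Cohen’s kappa are common choices; here is a minimal sketch with hypothetical codes.

    from collections import Counter

    # Hypothetical codes assigned to the same ten text units by two coders.
    coder_a = ["theme1", "theme2", "theme1", "theme3", "theme1",
               "theme2", "theme2", "theme1", "theme3", "theme1"]
    coder_b = ["theme1", "theme2", "theme1", "theme1", "theme1",
               "theme2", "theme3", "theme1", "theme3", "theme1"]

    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # percent agreement

    # Chance agreement for Cohen's kappa, from each coder's marginal proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

    kappa = (observed - expected) / (1 - expected)
    print(f"Percent agreement: {observed:.2f}")
    print(f"Cohen's kappa:     {kappa:.2f}")

Low agreement is a signal to clarify category definitions and coding rules before continuing, which speaks directly to the stability and reproducibility criteria above.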

In turn, three criteria determine the validity of a content analysis:

  • Closeness of categories . This is achieved by using multiple classifiers to get an agreed-upon definition for a specific category by using either implicit variables or synonyms. In this way, the category can be broadened to include more relevant data.
  • Conclusions . Here, it’s crucial to decide what level of implication will be allowable. In other words, it’s important to consider whether the conclusions are valid based on the data or whether they can be explained using some other phenomena.
  • Generalizability of the results of the analysis to a theory. Generalizability comes down to how you determine your categories as mentioned above and how reliable those categories are. In turn, this relies on how accurate the categories are at measuring the concepts or ideas that you’re looking to measure.

Considering everything mentioned above, there are definite advantages and disadvantages when it comes to content analysis:

  • Advantage: It doesn’t require physical interaction with any participant, or, in other words, it’s unobtrusive. This means that the presence of a researcher is unlikely to influence the results. As a result, there are also fewer ethical concerns compared to some other analysis methods.
  • Disadvantage: It always involves an element of subjective interpretation. In many cases, it’s criticized for being too subjective and not scientifically rigorous enough. Fortunately, when applying the criteria of reliability and validity, researchers can produce accurate results with content analysis.
  • Advantage: It uses a systematic and transparent approach to gathering data. When done correctly, content analysis is easily repeatable by other researchers, which, in turn, leads to more reliable results.
  • Disadvantage: It’s inherently reductive. In other words, by focusing only on specific concepts, words, or themes, researchers will often disregard any context, nuances, or deeper meaning to the content.
  • Advantage: Because researchers are able to conduct content analysis in any location, at any time, and at a lower cost compared to many other analysis methods, it’s typically more flexible.
  • Disadvantage: Although it offers researchers an inexpensive and flexible approach to gathering and analyzing data, coding or categorizing a large number of concepts is time-consuming.
  • Advantage: It allows researchers to effectively combine quantitative and qualitative analysis into one approach, which then results in a more rigorous scientific analysis of the data.
  • Disadvantage: Coding can be challenging to automate, which means the process largely relies on manual processes.

Let’s now look at the steps you’ll need to follow when doing a content analysis.

The first step will always be to formulate your research questions. This is simply because, without clear and defined research questions, you won’t know what question to answer and, by implication, won’t be able to code your concepts.

Based on your research questions, you’ll then need to decide what content you’ll analyze. Here, you’ll use three factors to find the right content:

  • The type of content . Here you’ll need to consider the various types of content you’ll use and their medium like, for example, blog posts, social media, newspapers, or online articles.
  • What criteria you’ll use for inclusion . Here you’ll decide what criteria you’ll use to include content. This can, for instance, be the mentioning of a certain event or advertising a specific product.
  • Your parameters . Here, you’ll decide what content you’ll include based on specified parameters in terms of date and location.

The next step is to consider your own preconceptions of the questions and identify your biases. This process is referred to as bracketing and allows you to be aware of your biases before you start your research, with the result that they’ll be less likely to influence the analysis.

Your next step would be to define the units of meaning that you’ll code. This will, for example, be the number of times a concept appears in the content or the treatment of concepts, words, or themes in the content. You’ll then need to define the set of categories you’ll use for coding, which can be either objective or more conceptual.

Based on the above, you’ll then organize the units of meaning into your defined categories. Apart from this, your coding scheme will also determine how you’ll analyze the data.

The next step is to code the content. During this process, you’ll work through the content and record the data according to your coding scheme. It’s also here where conceptual and relational analysis starts to deviate in relation to the process you’ll need to follow.

As mentioned earlier, conceptual analysis aims to identify the number of times a specific concept, idea, word, or phrase appears in the content. So, here, you’ll need to decide what level of analysis you’ll implement.

In contrast, with relational analysis, you’ll need to decide what type of relational analysis you’ll use. So, you’ll need to determine whether you’ll use affect extraction, proximity analysis, cognitive mapping, or a combination of these approaches.

Once you’ve coded the data, you’ll be able to analyze it and draw conclusions from the data based on your research questions.
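To make the unit-definition, coding-scheme, coding, and analysis steps concrete, here is a minimal sketch of a purely keyword-based conceptual analysis in Python. Real coding schemes normally also involve human judgment, and the categories, keywords, and responses below are hypothetical.

    from collections import Counter

    # A hypothetical coding scheme mapping categories to keyword indicators.
    coding_scheme = {
        "price concern": ["expensive", "cost", "price"],
        "service praise": ["helpful", "friendly", "quick"],
        "service complaint": ["rude", "slow", "ignored"],
    }

    # The content: one string per unit of analysis (e.g., an open-ended survey answer).
    units = [
        "The staff were friendly and quick, but the product felt expensive.",
        "Delivery was slow and the support agent was rude.",
        "Great price for what you get.",
    ]

    # Coding: check which category keywords each unit contains.
    def code_unit(text, scheme):
        text = text.lower()
        return {category for category, keywords in scheme.items()
                if any(keyword in text for keyword in keywords)}

    coded = [code_unit(u, coding_scheme) for u in units]

    # Analysis: here, simple category frequencies across all units.
    frequencies = Counter(category for codes in coded for category in codes)
    for category, count in frequencies.most_common():
        print(category, count)

A relational analysis would go a step further and examine which categories co-occur in the same units rather than stopping at their frequencies.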

Content analysis offers an inexpensive and flexible way to identify trends and patterns in communication content. In addition, it’s unobtrusive, which eliminates many ethical concerns and inaccuracies in research data. However, to be most effective, a content analysis must be planned and used carefully in order to ensure reliability and validity.

Frequently asked questions about content analysis

The two general types of content analysis are conceptual and relational analysis. Although these two types follow largely similar processes, their outcomes differ. As such, each of these types can provide different results, interpretations, and conclusions.

In qualitative research coding means categorizing concepts, words, and themes within your content to create a basis for analyzing the results. While coding, you work through the content and record the data according to your coding scheme.

Content analysis is the process of analyzing content and its features with the aim of identifying patterns and the presence of words, themes, and concepts within the content. The goal of a content analysis is to present the trends, patterns, concepts, and ideas in content as objective, quantitative or qualitative data, depending on the specific use case.

Content analysis is a qualitative method of data analysis and can be used in many different fields. It is particularly popular in the social sciences.

It is possible to do qualitative analysis without coding, but content analysis as a method of qualitative analysis requires coding or categorizing data to then analyze it according to your coding scheme in the next step.


The Practical Guide to Qualitative Content Analysis


What is Qualitative Content Analysis?

Qualitative content analysis is a research method used to analyze and interpret the content of textual data, such as written documents, interview transcripts, or other forms of communication. 

It provides a systematic way to identify patterns, concepts, and larger themes within the data to gain insight into the meaning and context of the content .

This guide introduces qualitative content analysis. We also cover many types of qualitative content analysis and address the conflicting ways this topic is presented in the research literature. Lastly, we will provide a step-by-step guide to how to conduct qualitative content analysis.

Qualitative vs. Quantitative Content Analysis

What is content analysis?

At a top level, content analysis in research allows you to examine and understand the content of textual data. There are two types of methodological approaches to content analysis: quantitative content analysis and qualitative content analysis. 

The term “qualitative content analysis” can be misleading because it often uses many quantitative elements. For this reason, it is helpful to clearly define each approach to show where this overlap in qualitative content analysis occurs.  

What is quantitative content analysis?

Quantitative content analysis is a research method that systematically measures the presence and frequency of specific words, phrases, or themes in a large sample of texts. It uses a numbers-based method to identify patterns that can answer “how much”, “how many” or “how often”. That is to say that the process is purely empirical. 

What is qualitative content analysis?

Qualitative content analysis answers “why”, “how”, or “what”. Through an iterative process of coding, counting, and interpretation, it explores the subtleties of data in a way the quantitative method does not. That said, qualitative content analysis often relies on quantifying the frequency of words, phrases, and concepts to provide such answers.


Framing the relevance of frequency

Frequency refers to how often keywords, concepts, or themes are used within the analyzed content. That frequency is often used to signify relevance within a data set. 

Most types of qualitative content analysis utilize frequency to identify which concepts may warrant further exploration. However, unlike quantitative content analysis, frequency is not considered a final result in this method. It only indicates a pattern that may deserve probing.

Research that stops at frequency is quantitative content analysis. But in qualitative content analysis, you go “beyond merely counting words to examining language … for the purpose of classifying large amounts of text into … categories that represent similar meanings.” [1]

Now that we’ve differentiated these approaches to content analysis, let’s explore how qualitative content analysis differs from other types of qualitative research methods.

Qualitative Content Analysis vs. Other Qualitative Research Types

As we have discussed, frequency is a standard tool in qualitative content analysis. This is not the case in other qualitative research methodologies, such as thematic analysis and grounded theory.

For example, when conducting thematic analysis, novice researchers are often warned not to equate frequency with relevance. In contrast, qualitative content analysis papers often include frequency tables or statistical graphics in the final analysis and write-up.

You can read more on this topic in our blog post contrasting content analysis with thematic analysis.

Defining Qualitative Content Analysis, According to the Experts

Perhaps you landed on this guide because you could not find a satisfactory definition of qualitative content analysis. In fact, one of the reasons we wrote this guide was our frustration at the lack of clarity on this topic—even among the most cited experts on this subject.

No single source of information offers a universal definition for qualitative content analysis. For such a widely practiced research method that is growing in popularity, there is an enormous void of information on what, when, or how to use it. We aim to offer clarity through this guide.

Researchers agree that identifying patterns within data adds context, making it easier to interpret large amounts of information. As a result, they get a stronger grasp of the content they are analyzing. That might be the simplest definition that captures all of the following definitions. 

Frequently cited definitions of qualitative content analysis:

“Allows researchers to understand social reality in a subjective but scientific manner.” (Zhang & Wildemuth, 2009)

“A research method for the subjective interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns.” (Hsieh & Shannon, 2005)

“A flexible method for making valid inferences from data in order to provide new insight, describe a phenomenon through concepts or categories, and develop an understanding of the meaning of communications with a concern for intentions, consequences, and context.” (Elo & Kyngäs, 2008)

“Represents a systematic and objective means of describing and quantifying phenomena.“ (Downe-Wamboldt, 1992)

“An approach of empirical, methodological controlled analysis of texts within their context of communication, following content analytic rules and step-by-step models, without rash quantification.” (Mayring, 2000)

“Any qualitative data reduction and sense-making effort that takes a volume of qualitative material and attempts to identify core consistencies and meanings.” (Patton, 2002)

After exhaustively researching this topic, we feel our initial definition offers a general consensus of what these researchers have offered. We included this section because we felt it important to convey the many different ways experts define and discuss qualitative content analysis.


When should I use qualitative content analysis?

Qualitative content analysis can be used in various research contexts, including social science, psychology, marketing research, education, and business. 

It is often used to explore complex phenomena, such as attitudes, beliefs, and social interactions. By exploring these topics, researchers gain a deeper understanding of the perspectives and experiences of individuals, groups, and even institutions.

Qualitative content analysis is a notoriously flexible research method with no strict guidelines for when or how to employ it. An oft-cited article from Columbia’s Mailman School of Public Health suggests using content analysis to:

Study and sample data from large amounts of text.

Figure out what a person, group, or organization is trying to achieve, what they are talking about, and how they communicate their ideas.

Explain how people react to messages in terms of their attitudes and behaviors.

Determine the psychological or emotional state of persons or groups.

Reveal international differences in communication content.

Analyze interviews and open-ended questions to complement quantitative data.

When you don’t have the time or resources to conduct focus groups or interviews. 

Source Materials for Qualitative Content Analysis


Source materials used to conduct qualitative content analysis can be any text-based communication, including:

Transcribed interviews

Interviews and focus groups 

Transcribed news stories

Transcribed speeches

Historical documents

Web-based content (including social media posts)

Transcribed films and documentaries

Field research notes

Essays 

A Brief History of Qualitative Content Analysis

The origins of qualitative content analysis can be traced back to the early 20th century when social scientists began using content analysis to study media messages. For instance, when researchers wanted to study newspapers or propaganda en masse. 

However, the formal development of content analysis as a qualitative research method really began in the 1950s, when researchers in various social sciences began using it to analyze various types of texts that could be any of the sources mentioned above.

Content Analysis: Then & now 

In the early days of content analysis, the focus was mainly on counting and categorizing the frequency of specific words or concepts in texts. As the method developed, researchers began to use content analysis more interpretively, analyzing not just the frequency of specific words or concepts but also the meanings and contexts in which they were used.

Over the years, various approaches and submethods have been developed from this original format. 

Today, content analysis remains a popular qualitative research method used by researchers in a wide range of fields to study various textual materials.

Inductive Versus Deductive Content Analysis

When you use qualitative content analysis for your research, the first step is generally to collect the data you want to analyze.

After you have collected your data, there are different ways to analyze it. Depending on your research question, the data available, and your research goals, you will likely choose an inductive approach, a deductive approach, or a combination of both.

Inductive content analysis

Inductive content analysis is a bottom-up approach to meaning-making that starts with no preconceived codes or theories. Instead of using a preexisting framework or previous research, you develop a theory from scratch (the bottom) as you analyze the entire data set. 


With inductive content analysis, you develop your codebook by immersing yourself in your data. It offers flexibility to adjust codes and theories as you progress through your analysis. You can then refine your understanding of the data and explore unexpected findings. 

This approach is often better suited for identifying latent content—or meaning that is not immediately apparent “on the surface” of the text.

[Related readings: Latent Content Analysis vs Manifest Content Analysis ]

✅ Advantages of inductive content analysis

Offers an exploratory way to answer research questions.

It is helpful when there is little existing literature on the topic.

It helps explore multiple perspectives and viewpoints on a topic.

It offers a flexible approach to data analysis.

❌ Disadvantages of inductive content analysis

It can be hard to balance immersion in the data and analyzing the data. 

Struggling to strike this balance can be a time-consuming process.

Inter-coder reliability can be challenging as the coding categories are not predefined.

Analysis may be influenced by a researcher’s personal biases and preconceptions.

Types of inductive content analysis

Conventional Content Analysis - With Conventional Content Analysis, you derive codes, categories, and themes from textual data, rather than preexisting theories. This method involves immersing oneself in the content through iterative readings to identify patterns and trends. Frequency counts are a core aspect that is used to gain a deeper understanding and infer new insights into the phenomenon being studied. 

Thematic Content Analysis - With this method, you identify story-like "thematic units" (McClelland et al., 1975) that may not be obvious in the data and need inductive analysis to discover. Unitizing and coding data in this way requires deep interpretation. It differs greatly from summative content analysis, explained below, which codes units with a keyword through a mostly deductive process.

Deductive content analysis

On the other hand, deductive content analysis is a top-down approach to data that involves a more structured and rigid approach to meaning-making. 


You start with predetermined research questions and code based on previous research. You focus on building upon or attempting to refute those preexisting theories that guided your initial hypothesis and coding structure. As a result, this approach is often better suited for identifying manifest content—or data that is easily apparent “on the surface” of the text.

Deductive content analysis develops its codebook from existing theories or domain experts. That codebook is often applied to the larger dataset using automated methods such as keyword search.

✅ Advantages of deductive content analysis:

Offers a confirmatory way to answer research questions.

It is a good way to test existing theories and hypotheses.

Supports inter-coder reliability (predefined coding categories tend to be easier to code).

It is helpful when you want a more rigid approach to avoid biases.

❌ Disadvantages of deductive content analysis:

It limits the identification of new patterns or themes in the data.

It may not accommodate multiple perspectives and viewpoints.

It is difficult to answer exploratory research questions.

The quality of research is only as strong as the preexisting theory used.

Types of deductive content analysis

Directed Content Analysis - Also referred to as DQCA, this method is used to test or corroborate the theory guiding your study and codebook. Alternatively, it can extend a theory to contexts other than those in which it was developed. [5] Your initial code framework is derived from the theory guiding your study and is applied deductively to your data. 

Summative Content Analysis - Summative content analysis identifies and quantifies the frequency of keywords in textual data. Through a deductive approach, pre-existing codes or categories are applied to the data. You can identify patterns of meaning by analyzing the frequency of keywords appearing in the text and providing a statistical summary.

How to choose between inductive or deductive content analysis?

Long story short, you don’t always need to choose one or the other.

The types of qualitative content analysis referenced above lean either deductive or inductive. But as mentioned, you can also combine approaches. This allows you to leverage the strengths of each approach and can provide a more comprehensive understanding of the content.

For example, relational content analysis explores relationships between concepts and tests theoretical assumptions. You get the flexibility of inductive analysis with the rigor of deductive analysis to explore the complex relationships between different concepts in the data.

[Related readings: Inductive Content Analysis vs. Deductive Content Analysis]

Content Analysis Steps: How to Conduct Content Analysis in Qualitative Research?

Now that you have a firm grasp of how to approach the data you initially collected, your qualitative content analysis can begin. 

After data collection, the first decision to make is whether to use an inductive, deductive, or a combination of both approaches to analyze that data, which we have discussed above. 

With that decision aside, here is an outline of the general steps to follow:

How to conduct content analysis - deductive

Data collection. 

Define your codebook. 

Your codebook will be predefined, based on existing theories or from speaking with an expert. 

Determine your coding rules.

Once you have defined your codebook, you will need to come up with rules on how that codebook will be applied to your data.

If you are coding automatically based on keywords, these rules dictate which keywords map to which codes (see the sketch after these steps).

If you are coding based on patterns or themes, these rules will be instructions to your research team (or yourself) on how the codes are applied.

Code your data by applying your coding rules.

You may find that the coding rules are not coding your data correctly. 

If you are conducting automatic keyword coding, you may find that certain keywords code too much or don’t capture everything. You may need to adjust the rules based on what you are seeing.

If you are coding with a group, you may find that your research team is not applying the codes as expected. In this case, it is best to meet as a group and discuss the rules. Then, take another pass. 

In either case, you may need to return to step 3 and adjust your coding rules as needed. 

Analyze your results.
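As a rough illustration of the keyword rules mentioned in step 3, the sketch below applies a predefined codebook to a handful of responses and flags units that the rules under- or over-code, which is the signal to go back and adjust the rules. Everything here is hypothetical: the codebook, the keywords, and the responses.

    # Hypothetical predefined codebook: code -> keyword rules.
    coding_rules = {
        "burnout": ["exhausted", "burned out", "overwhelmed"],
        "flexibility": ["remote", "flexible hours", "work from home"],
        "pay": ["salary", "raise", "underpaid"],
    }

    responses = [
        "I feel exhausted and underpaid.",
        "Remote work and flexible hours keep me going.",
        "Management never listens to us.",
    ]

    def apply_rules(text, rules):
        """Return every code whose keyword rules match the text."""
        text = text.lower()
        return [code for code, keywords in rules.items()
                if any(keyword in text for keyword in keywords)]

    for response in responses:
        codes = apply_rules(response, coding_rules)
        if not codes:
            print("NO CODE (consider adjusting the rules):", response)
        elif len(codes) > 2:
            print("MANY CODES (rules may be too broad):", codes, response)
        else:
            print(codes, response)

The uncoded third response is exactly the kind of result that sends you back to refine your coding rules before analyzing anything.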

How to conduct content analysis - inductive

Data collection.

Immerse yourself in the data.

Develop your codebook from the data and generate codes.

Unlike deductive content analysis, your codebook will be generated as part of the analysis. 

You may find that you can develop a codebook after you have immersed yourself in the data.

Alternatively, you may develop your codebook by simply starting to code, and refining and grouping those codes as you go.

At a certain point in conducting inductive qualitative analysis, you may find that your codebook is not changing that much. 

You may still, however, have data you want to apply that codebook to.

You can now develop coding rules so your codebook can be more systematically applied to the rest of your data, either by a research team or an automated keyword coding system.

Code the rest of your data.

Now, with your codebook and coding rules defined, you can more systematically apply them to your data. This process will be similar to step 4 of deductive qualitative content analysis. 

Like deductive qualitative content analysis, you should iterate on your coding rules (step 4) as you find what works and what does not work.

Unlike deductive qualitative analysis, you may also find yourself iterating on the codebook itself as/if you find new patterns and themes.

Want more information on the coding process for specific subtypes of qualitative content analysis? You can find that information within most of their respective links above. 


Wrapping Up

Qualitative content analysis is a powerful research method for examining and interpreting textual data. While it shares similarities with quantitative content analysis, it differs in its focus on exploring the meaning and context of the content beyond statistical significance.

Despite the varying definitions of this method that exist, it remains a valuable and easily accessible tool for researchers. A tool that helps make sense of complex phenomena and encapsulates the perspectives and experiences of many individuals, groups, and institutions. 

As such, it continues to grow in popularity among social science researchers, particularly in fields such as psychology, sociology, and communication studies.

Use Qualitative Content Analysis Software

Qualitative content analysis can be conducted using various tools, such as pen and paper, word processors like Microsoft Word, or specialized CAQDAS (Computer Assisted Qualitative Data Analysis Software) like Delve .

Ultimately, the choice of which approach to use for qualitative content analysis coding will depend on various factors, including the size of the data set, the available resources, and the specific research question being investigated. 

By understanding the pros and cons of each option, researchers can make an informed decision about which approach is best for them. 

Benefits of coding with Delve

Delve provides advanced code frequency reporting through code co-occurrence matrices. These matrices show how frequently codes overlap and automate code counts. 

With Delve, researchers can easily organize and search through data, eliminating the need for manual searches and reducing the risk of overlooking important information.

Delve's intuitive interface and easy-to-use features eliminate the need for extensive training, reducing the overall time commitment of using the software.

Coding with Delve helps eliminate errors and inconsistencies that can occur with manual coding, leading to more accurate analysis and results.

Delve provides a centralized platform for coding and analysis, reducing the risk of data loss or misplacement. As a cloud-based software with auto-save, your work is never lost!

Collaboration

Researchers can easily share memos with colleagues or with peer debriefers , facilitating collaboration and making it easier to work on large-scale research projects.

The software allows multiple users to work on the same data set simultaneously, reducing the time required for data analysis and speeding up the research process.

Researchers can see who applied codes to each portion of the content, allowing for code alignment and discussion.

Delve's cloud-based software allows researchers to work remotely without the need to use the same hardware or install the software.

Customizability

Users can create custom tags, categories, and themes, and apply them to their data to create a nuanced analysis that reflects the specific research focus.

It also allows researchers to easily import and export data in a variety of file formats, making it possible to work with data from a wide range of sources.

Delve also offers extensive documentation and easy-to-understand video tutorials to help you learn how to use the software effectively. Check out what our customers have to say .

Cost-Effectiveness

Last but not least, Delve provides an affordable option for researchers and students with limited financial resources who want to use CAQDAS. With industry-leading pricing options, Delve is a cost-effective solution that offers the benefits of advanced software without breaking the bank!

Qualitative Content Analysis With Delve

Overall, Delve offers numerous benefits for researchers. From increased efficiency and accuracy to improved collaboration and customizability, the software can help researchers streamline their work and achieve more robust and meaningful analysis results.

Zhang, Y., & Wildemuth, B. M. (2009). Qualitative analysis of content. In B. Cronin (Ed.), Annual Review of Information Science and Technology (Vol. 43, pp. 1-52). Medford, NJ: Information Today, Inc.

Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative health research, 15(9), 1277-1288. https://www.researchgate.net/publication/7561647_Three_Approaches_to_Qualitative_Content_Analysis

Elo, S., Kääriäinen, M., Kanste, O., Pölkki, T., Utriainen, K., & Kyngäs, H. (2014). Qualitative content analysis: A focus on trustworthiness. SAGE Open, 4(1), 2158244014522633. https://journals.sagepub.com/doi/10.1177/2158244014522633

McClelland, D. C. (1975). Power: The Inner Experience. Irvington Publishers.

Kibiswa, N. K. (2019). Directed Qualitative Content Analysis (DQlCA): A Tool for Conflict Analysis. The Qualitative Report, 24(8), 2059-2079. https://doi.org/10.46743/2160-3715/2019.3778

Downe-Wamboldt, B. (1992). Content analysis: Method, applications, and issues. Health care for women international, 13(3), 313-321.

Mayring, P. (2000). Qualitative content analysis. Forum qualitative Sozialforschung/Forum: Qualitative Social Research, 1(2), 20.

Patton, M. Q. (2002). Qualitative research and evaluation methods (Vol. 3). Sage publications.

Columbia University Mailman School of Public Health. (n.d.). Content analysis. Retrieved from https://www.publichealth.columbia.edu/research/population-health-methods/content-analysis

Cite This Article

Delve, Ho, L., & Limpaecher, A. (2023c, March 24). The Practical Guide to Qualitative Content Analysis https://delvetool.com/blog/qualitative-content-analysis

Analyst Answers


Qualitative Content Analysis: a Simple Guide with Examples

Content analysis is a type of qualitative research (as opposed to quantitative research) that focuses on analyzing content in various mediums, the most common of which is written words in documents.

It’s a very common technique used in academia, especially for students working on theses and dissertations, but here we’re going to talk about how companies can use qualitative content analysis to improve their processes and increase revenue.

Whether you’re new to content analysis or a seasoned professor, this article provides all you need to know about how data analysts use content analysis to improve their business. It will also help you understand the relationship between content analysis and natural language processing — what some even call natural language content analysis.


What is qualitative content analysis, and what is it used for?

Any content analysis definition must consist of at least these three things: qualitative language, themes, and quantification.

In short, content analysis is the process of examining preselected words in video, audio, or written mediums and their context to identify themes, then quantifying them for statistical analysis in order to draw conclusions. More simply, it’s counting how often you see two words close to each other.

For example, let’s say I place in front of you an audio clip, an old video with a static image, and a document with lots of text but no titles or descriptions. At the start, you would have no idea what any of it was about.

Let’s say you transcribe the video and audio recordings onto paper. Then you use counting software to count the top ten most used words, excluding prepositions (of, over, to, by), articles (the, a), conjunctions (and, but, or), and other common words like “very.”

Your results are that the top words are “candy,” “snow,” “cold,” and “sled.” These words appear at least 25 times each, and the next highest word appears only 4 times. You also find that the words “snow” and “sled” appear adjacent to each other 95% of the time that “snow” appears.
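If you want to reproduce this kind of count yourself rather than rely on counting software, a few lines of Python will do it. This is a minimal sketch that assumes you have already combined the transcripts into one plain-text file; the file name and the tiny stop-word list are placeholders.

    import re
    from collections import Counter

    # Deliberately tiny stop-word list; a real analysis would use a longer one.
    STOPWORDS = {"the", "a", "an", "and", "but", "or", "of", "over", "to", "by", "very"}

    text = open("combined_transcripts.txt").read().lower()  # hypothetical file name
    words = re.findall(r"[a-z']+", text)

    # Top ten most used words, excluding common function words.
    content_words = [w for w in words if w not in STOPWORDS]
    print(Counter(content_words).most_common(10))

    # How often does "sled" sit right next to "snow" when "snow" appears?
    snow_positions = [i for i, w in enumerate(words) if w == "snow"]
    adjacent = sum(1 for i in snow_positions if "sled" in words[max(i - 1, 0):i + 2])
    if snow_positions:
        print("snow/sled adjacency:", adjacent / len(snow_positions))

The numbers are only the raw material; deciding that “snow sleds” amount to a theme is still the researcher’s interpretive move.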

Well, now you have performed a very elementary qualitative content analysis .

This means that you’re probably dealing with a text in which snow sleds are important. Snow sleds, thus, become a theme in these documents, which goes to the heart of qualitative content analysis.

The goal of qualitative content analysis is to organize text into a series of themes . This is opposed to quantitative content analysis, which aims to organize the text into categories .

Types of qualitative content analysis

If you’ve heard about content analysis, it was most likely in an academic setting. The term itself is common among PhD students and Masters students writing their dissertations and theses. In that context, the most common type of content analysis is document analysis.

There are many types of content you can analyze, including:

  • Short- and long-form survey questions
  • Focus group transcripts
  • Interview transcripts
  • Legislation
  • Public records
  • Comments sections
  • Messaging platforms

This list gives you an idea of the possibilities and the industries in which qualitative content analysis can be applied.

For example, marketing departments or public relations groups in major corporations might collect surveys, focus groups, and interviews, then hand the information off to a data analyst who performs the content analysis.

A political analysis institution or think tank might look at legislation over time to identify potential emerging themes based on their slow introduction into policy margins. Perhaps it’s possible to identify certain beliefs in the Senate and House of Representatives before they enter the public discourse.

Non-governmental organizations (NGOs) might perform an analysis on public records to see how to better serve their constituents. If they have access to public records, it would be possible to identify citizen characteristics that align with their goal.

Analysis logic: inductive vs deductive

There are two types of logic we can apply to qualitative content analysis: inductive and deductive. Inductive content analysis is more of an exploratory approach. We don’t know what patterns or ideas we’ll discover, so we go in with an open mind.

On the other hand, deductive content analysis involves starting with an idea and identifying how it appears in the text. For example, we may approach legislation on wildlife by looking for rules on hunting. Perhaps we think hunting with a knife is too dangerous, and we want to identify trends in the text.

Neither one is better per se, and they each carry value in different contexts. For example, inductive content analysis is advantageous in situations where we want to identify author intent. Going in with a hypothesis can bias the way we look at the data, so the inductive method is better.

Deductive content analysis is better when we want to target a term. For example, if we want to see how important knife hunting is in the legislation, we’re doing deductive content analysis.

Measurements: idea coding vs word frequency

Two main methodologies exist for analyzing the text itself: coding and word frequency. Idea coding is the manual process of reading through a text and “coding” ideas in a column on the right. The reason we call this coding is because we take ideas and themes expressed in many words and turn them into one common phrase. This allows researchers to better understand how those ideas evolve. We will look at how to do this in Word below.

In short, coding in the context of qualitative content analysis follows two steps:

  • Reading through the text once
  • Adding a 2-5 word summary each time a significant theme or idea appears

Word frequency is simply counting the number of times a word appears in a text, as well as its proximity to other words. In our “snow sled” example above, we counted the number of times a word appeared, as well as how often it appeared next to other words. There are online tools for this that we’ll look at below.

In short, word frequency in the context of content analysis follows two steps:

  • Decide whether you want to find a specific word or just look at the most common words
  • Use Word’s Replace function for the first, or an online tool such as Text Analyzer for the second (we’ll look at these in more detail below)

Many data scientists consider coding to be the only truly qualitative content analysis, since word frequency comes down to counting the number of times a word appears, making it quantitative.

While there is merit to this claim, I personally do not consider word frequency a part of quantitative content analysis. The fact that we count the frequency of a word does not mean we can draw direct conclusions from it. In fact, without a researcher to provide context on the number of times a word appears, word frequency is useless. True quantitative research carries conclusive value on its own.

Measurements AND analysis logic

There are four ways to approach qualitative content analysis given our two measurement types and inductive/deductive logical approaches. You could do inductive coding, inductive word frequency, deductive coding, and deductive word frequency.

The two best combinations are inductive coding and deductive word frequency. If you want to discover what a document is about, simply counting or searching for words will not tell you much on its own, so inductive word frequency is only moderately insightful.

Likewise, if you’re looking for the presence of a specific idea, you do not want to code the whole document just to find it, so deductive coding is not insightful. Here’s a simple matrix to illustrate:

  • Coding (summarizing ideas) + Inductive (discovery): GOOD. Example: discovering author intent in a passage.
  • Coding (summarizing ideas) + Deductive (locating): BAD. Example: coding an entire document to locate one idea.
  • Word frequency (counting word occurrences) + Inductive (discovery): OK. Example: trying to understand author intent by pulling the top 10% of words.
  • Word frequency (counting word occurrences) + Deductive (locating): GOOD. Example: locating and comparing a specific term in a text.

Qualitative content analysis example

We looked at a small example above, but let’s play out all of the above information in a real-world example. I will post the link to the source text at the bottom of the article, but don’t look at it yet. Let’s jump in with a discovery mentality, meaning let’s use an inductive approach and code our way through each paragraph.


How to do qualitative content analysis

We could use word frequency analysis to find out which are the most common x% of words in the text (inductive word frequency), but this takes some time because we need to build a formula that excludes words that are common but don’t carry any value (“a,” “the,” “but,” “and,” etc.).
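If you’d rather script this step than build a spreadsheet formula, a few lines of Python can do the same job. This is only a minimal sketch, assuming your text is saved in a plain-text file; the file name sample_text.txt and the short stop-word list are placeholders you would replace with your own.

```python
# Minimal sketch: inductive word frequency with low-value words excluded.
# "sample_text.txt" is a placeholder name for your own text file.
import re
from collections import Counter

STOP_WORDS = {"a", "an", "the", "and", "but", "or", "of", "to", "in", "is", "it", "that"}

with open("sample_text.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Split into lowercase word tokens and drop the stop words
words = [w for w in re.findall(r"[a-z']+", text) if w not in STOP_WORDS]

# Print the 10 most common remaining words and their counts
for word, count in Counter(words).most_common(10):
    print(f"{word}: {count}")
```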

As a shortcut, you can use online tools such as Text Analyzer and WordCounter, which will give you breakdowns by phrase length (6 words, 5 words, 4 words, etc.) without excluding common terms. Here are a few insightful examples using our text with 7-word phrases:

[Image: Text Analyzer results showing the most common 7-word phrases in the text]

Perhaps more insightfully, here is a list of 5-word combinations, which are much more common:

[Image: Text Analyzer results showing the most common 5-word phrases in the text]

The downside to these tools is that you cannot get useful 2-word and 1-word counts, because common words are not excluded at those lengths. This is a limitation, but the work required to get around it is unlikely to be worth the value it brings.

OK. Now that we’ve seen how to go about coding our text into quantifiable data, let’s look at the deductive approach and try to figure out if the text contains a single word we’re looking for. (This is my favorite.)

Deductive word frequency

We know the text now because we’ve already looked through it. It’s about the process of becoming literate, namely, the elements that impact our ability to learn to read. But we only looked at the first four sections of the article, so there’s more to explore.

Let’s say we want to know how a household situation might impact a student’s ability to read. Instead of coding the entire article, we can simply look for this term and its synonyms. The process for deductive word frequency is the following:

  • Identify your term
  • Think of all the possible synonyms
  • Use Word’s Find function to see how many times they appear
  • If you suspect that this word often comes in connection with others, try searching for both of them

In my example, the process would be:

  • Possible synonyms: parents, parent, home, house, household situation, household influence, parental, parental situation, at home, home situation
  • Go to “Edit > Find > Replace…”. This will enable you to locate the number of instances in which your word or word combinations appear. We use the Replace window instead of the simple Find bar because it allows us to visualize the information.
  • Combinations with other words are already accounted for in the list of possible synonyms

The results: 0! None of these words appeared in the text, so we can conclude that this text has nothing to do with a child’s home life and its impact on his/her ability to learn to read. Here’s a picture:

[Image: Word’s Find and Replace window showing zero matches for the search terms]
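If you’re working outside of Word, the same deductive check can be scripted. The sketch below is an illustration of the approach rather than the author’s workflow: it assumes the article has been saved to a plain-text file named article.txt (a placeholder) and simply counts how often each synonym appears.

```python
# Minimal sketch: deductive word frequency for a term and its synonyms.
# "article.txt" is a placeholder name for the source text file.
import re

SYNONYMS = [
    "parents", "parent", "home", "house", "household situation",
    "household influence", "parental", "parental situation",
    "at home", "home situation",
]

with open("article.txt", encoding="utf-8") as f:
    text = f.read().lower()

total = 0
for term in SYNONYMS:
    # \b word boundaries keep "home" from matching inside unrelated words;
    # note that short terms like "home" will still match inside longer phrases
    # such as "at home", so refine the list if double counting matters.
    count = len(re.findall(r"\b" + re.escape(term) + r"\b", text))
    print(f"{term}: {count}")
    total += count

print("Total occurrences across all synonyms:", total)
```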

Don’t Be Afraid of Content Analysis

Content analysis can be intimidating because it uses data analysis to quantify words. This article provides a starting point for your analysis, but to ensure you get 90% reliability in word coding, sign up to receive our eBook Beginner Content Analysis. I went from philosophy student to a data-heavy finance career, and I created it to cater to research and dissertation use cases.


Content analysis vs natural language processing

While similar, content analysis (even the deductive word frequency approach) and natural language processing (NLP) are not the same. The relationship is hierarchical. Natural language processing is a field of linguistics and data science concerned with understanding the meaning behind language.

Content analysis, on the other hand, is a branch of natural language processing that focuses on the methodologies we discussed above: discovery-style coding (sometimes called “tokenization”) and word frequency (sometimes called the “bag of words” technique).

For example, we would use natural language processing to quantify huge amounts of linguistic information, turn it into row-and-column data, and run tests on it. NLP is incredibly complex in the details, which is why it’s nearly impossible to provide a synopsis or example technique here (we’ll provide them in coursework on AnalystAnswers.com). However, content analysis focuses on only a few manual techniques.
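To make the row-and-column idea concrete, here is a minimal bag-of-words sketch. The three short comments are invented for illustration; each row of the output is a document and each column counts one word.

```python
# Minimal bag-of-words sketch: turn a few texts into row-and-column count data.
from collections import Counter

docs = [                                   # invented example comments
    "I like the new product",
    "the product is terrible",
    "I like it, the price is great",
]

tokenized = [d.lower().replace(",", "").split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})   # column headers

print("\t".join(vocab))
for doc in tokenized:
    counts = Counter(doc)
    print("\t".join(str(counts[w]) for w in vocab))
```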

Content analysis in marketing

Content analysis in marketing is the use of content analysis to improve marketing reach and conversions. It has grown in importance over the past ten years. As digital platforms become more central to our understanding of and interaction with others, we use them more.

We write out ideas and small texts. We post our thoughts on Facebook and Twitter, and we write blog posts like this one. But we also post videos on YouTube and express ourselves in podcasts.

All of these mediums contain valuable information about who we are and what we might want to buy. A good marketer aims to leverage this information in four ways:

  • Collect the data
  • Analyze the data
  • Modify his/her marketing messaging to better serve the consumer
  • Pretend, with bots or employees, to be a consumer and craft messages that influence potential buyers

The challenge for marketers doing this is getting the rights to access this data. Indeed, data privacy laws have gone into effect in the European Union (the General Data Protection Regulation, or GDPR) as well as in Brazil (the General Data Protection Law, or LGPD).

Content analysis vs narrative analysis

Content analysis is concerned with themes and ideas, whereas narrative analysis is concerned with the stories people express about themselves or others. Narrative analysis uses the same tools as content analysis, namely coding (or tokenization) and word frequency, but its focus is on narrative relationships rather than themes. This is easier to understand with an example. Let’s look at how we might code the following paragraph from the two perspectives:

I do not like green eggs and ham. I do not like them, Sam-I-Am. I do not like them here or there. I do not like them anywhere!

Content analysis: the ideas expressed include green eggs and ham; the narrator does not like them.

Narrative analysis: the narrator speaks in the first person. He has a relationship with Sam-I-Am. He orients himself with regard to time and space. He does not like green eggs and ham, and may be willing to act on that feeling.

Content analysis vs document analysis

Content analysis and document analysis are very similar, which explains why many people use them interchangeably. The core difference is that content analysis examines all mediums in which words appear, whereas document analysis only examines written documents.

For example, if I want to carry out content analysis on a master’s thesis in education, I would consult documents, videos, and audio files. I may transcribe the video and audio files into a document, but I wouldn’t exclude them from the beginning.

On the other hand, if I want to carry out document analysis on a master’s thesis, I would only use documents, excluding the other mediums from the start. The methodology is the same, but the scope is different. This dichotomy also explains why most academic researchers performing qualitative content analysis refer to the process as “document analysis.” They rarely look at other mediums.

Content Gap Analysis

Content gap analysis is a term common in the field of content marketing, but it applies to the analytical fields as well. In a sentence, content gap analysis is the process of examining a document or text and identifying the missing pieces, or “gaps,” that it needs in order to be complete.

As you can imagine, a content marketer uses gap analysis to determine how to improve blog content. An analyst uses it for other reasons. For example, he/she may have a standard for documents that merit analysis. If a document does not meet the criteria, it must be rejected until it’s improved.

The key message here is that content gap analysis is not content analysis. It’s a way of measuring how far an underperforming document is from an acceptable one. It is sometimes, but not always, used in a qualitative content analysis context.

  • Link to Source Text

About the Author

Noah is the founder & Editor-in-Chief at AnalystAnswers. He is a transatlantic professional and entrepreneur with 5+ years of corporate finance and data analytics experience, as well as 3+ years in consumer financial products and business software. He started AnalystAnswers to provide aspiring professionals with accessible explanations of otherwise dense finance and data concepts. Noah believes everyone can benefit from an analytical mindset in a growing digital world. When he's not busy at work, Noah likes to explore new European cities, exercise, and spend time with friends and family.


10 Content Analysis Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Content analysis is a research method and type of textual analysis that analyzes the meanings of content, which could take the form of textual, visual, aural, and otherwise multimodal texts.

Generally, a content analysis will seek meanings and relationships of certain words and concepts within the text or corpus of texts, and generate thematic data that reveals deeper insights into the text’s meanings.

Prasad (2008) defines it as:

“…the study of the content with reference to the meanings, contexts and intentions contained in messages.” (p. 174)

Content analyses can involve deductive coding, where themes and concepts are asserted before the analysis begins; or they can involve inductive coding, where themes and concepts emerge during a close reading of the text.

An example of a content analysis would be a study that analyzes the presence of ideological words and phrases in newspapers to ascertain the editorial team’s political biases.

Content Analysis Examples

1. Conceptual Analysis

Also called semantic content analysis, a conceptual analysis selects a concept and tries to count its occurrence within a text (Kosterec, 2016).

An example of a concept that you might examine is sentiment, such as positive, negative, and neutral sentiment. Here, you would need to conduct a semantic study of the text to find instances of words like ‘bad’, ‘terrible’, etc. for negative sentiment, and ‘good’, ‘great’, etc. for positive sentiment. Comparing and contrasting the counts will show the balance of sentiment within the text.

A basic conceptual analysis has the weakness of lacking the capacity to read words in context; doing so would require a deeper qualitative analysis of paragraphs. This weakness is offset by other types of analysis in this list.

Example of Conceptual Analysis

A company launches a new product and wants to understand the public’s initial reactions to it. They use conceptual analysis to analyze comments on their social media posts about the product. They could choose specific concepts such as “like”, “dislike”, “awesome”, “terrible”, etc. The frequency of these words in the comments gives them an idea of the public’s sentiment towards the product.
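A hypothetical sketch of that counting step is shown below. The comment strings and the two word lists are invented for illustration; a real study would use a much fuller sentiment dictionary.

```python
# Minimal sketch of conceptual analysis: counting sentiment words in comments.
import re

POSITIVE = {"like", "awesome", "great", "good", "love"}
NEGATIVE = {"dislike", "terrible", "bad", "hate", "awful"}

comments = [                          # invented example comments
    "I love the new product, it is awesome",
    "Terrible battery life, I hate it",
    "Good value, but the app is bad",
]

pos_count = neg_count = 0
for comment in comments:
    words = re.findall(r"[a-z']+", comment.lower())
    pos_count += sum(w in POSITIVE for w in words)
    neg_count += sum(w in NEGATIVE for w in words)

print("positive mentions:", pos_count)   # 3 -> love, awesome, good
print("negative mentions:", neg_count)   # 3 -> terrible, hate, bad
```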

2. Relational Analysis

Relational analysis addresses the above-mentioned weakness of conceptual analysis (i.e. that a mere counting of instances of terms lacks context) by examining how concepts in a text relate to one another.

Here, a scholar might analyze the overlap or sequences between certain concepts and sentiments in language (Kosterec, 2016). To combine the two examples from the above conceptual analysis, a scholar might examine all of a particular masthead newspaper’s columns on global warming. In the study, they would examine the proximity between the term ‘global warming’ and positive, negative, and neutral sentiment words (‘good’, ‘bad’, ‘great’, etc.) to ascertain the newspaper’s sentiment toward that specific concept.

Example of Relational Analysis

A political scientist wants to understand the relationship between the use of emotional rhetoric and audience reaction in political speeches. They carry out a relational analysis on a corpus of speeches and corresponding audience feedback. By exploring the co-occurrence of emotive words (“hope”, “fear”, “pride”) and audience responses (“applause”, “boos”, “silence”), they discover patterns in how different types of emotional language affect audience reactions.
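The co-occurrence step of such a relational analysis could be sketched roughly as follows. The speech segments and audience reactions are invented; each pairing of an emotive word with the reaction recorded for the same segment is tallied so the patterns can be compared.

```python
# Minimal sketch of relational analysis: co-occurrence of emotive words
# and the audience reaction recorded for the same speech segment.
from collections import Counter

EMOTIVE = {"hope", "fear", "pride"}

segments = [                                  # invented (text, reaction) pairs
    ("we have hope for a better future", "applause"),
    ("fear is spreading in our towns", "silence"),
    ("take pride in what we built", "applause"),
    ("hope will carry us through", "applause"),
]

pairs = Counter()
for text, reaction in segments:
    for word in text.lower().split():
        if word in EMOTIVE:
            pairs[(word, reaction)] += 1

for (word, reaction), count in pairs.most_common():
    print(f"{word} + {reaction}: {count}")
```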

3. Thematic Analysis

A thematic analysis focuses on identifying themes or major ideas running throughout the text.

This can follow a range of strategies, spanning from highly quantitative – such as using statistical software to thematically group words and terms – through to highly qualitative, where trained researchers take notes on each paragraph to extract key ideas that can be thematicized.

Many literature reviews take the form of a thematic analysis, where the scholar reads all recent studies on a topic and tries to ascertain themes, as well as gaps, across the recent literature.

Example of Thematic Analysis

A scholar searches research databases for all published academic papers containing the keyword “back pain” from the past 10 years. She then uses inductive coding to generate themes that span the studies. From this thematic analysis, she produces a literature review on key emergent themes from the literature on back pain, as well as gaps in the research.

4. Narrative Analysis

This involves a close reading of the framing and structure of narrative elements within content. It can examine personal life stories, biographies, journals, and so on.

In literary research, this method generally explores the elements of the story , such as characters, plot, literary themes , and settings. But in life history research, it will generally involve deconstructing a real person’s life story, analyzing their perspectives and worldview to develop insights into their unique situation, life circumstances, or personality.

The focus generally expands out from the story itself to what it can tell us about the individuals or culture from which it originates.

Example of Narrative Analysis

A social work researcher takes a group of their patients’ personal journals and, after obtaining ethics clearance and permission from the patients, deconstructs the underlying messages in their journals in order to extract an understanding of the core mental hurdles each patient faces, which are then analyzed through the lens of Jungian psychoanalysis.

5. Discourse Analysis

Discourse analysis, the research methodology with which I conducted my PhD studies, involves the study of how language can create and reproduce social realities.

Based on the work of postmodern scholars such as Michel Foucault and Jacques Derrida, it attempts to deconstruct how texts normalize ways of thinking within specific historical, cultural, and social contexts.

Foucault, the most influential scholar in discourse analytic research, demonstrated through the study of how society spoke about madness that different societies constructed madness in different ways: in the Renaissance era, mad people were spoken of as wise; during the classical era, the language changed, and they were framed as pariahs; finally, in the modern era, they were spoken about as if they were sick.

Following Foucault (1988), many content analysis scholars now look at the differing ways societies frame different identities (gender, race, social class, etc.) in different times – and this can be revealed by looking at the language used in the content (i.e. the texts) produced throughout different eras (Johnstone, 2017).

Example of Discourse Analysis

A scholar examines a corpus of immigration speeches from a specific political party from the past 10 years and examines how refugees are discussed in the speeches, with a focus on how language constructs and defines them. The study finds that refugees tend to be constructed as threats, dirty, and nefarious.


6. Multimodal Analysis 

As audiovisual texts became more important in society, many scholars began to critique the fact that content analysis tends to only look at written texts. In response, a methodology called multimodal analysis emerged.

In multimodal analysis, scholars don’t just decode the meanings in written texts, but also in multimodal texts . This involves the study of the signs, symbols, movements, and sounds that are within the text.

This opens up space for the analysis of television advertisements, billboards, and so forth.

For example, a multimodal analysis of a television advertisement might not just study what is said; it will also explore how the camera angles frame some people as powerful (low to high angle) and some people as weak (high to low angle). Similarly, it may examine the colors to see whether a character is positioned as sad (dark colors, walking through rain) or joyful (bright colors, sunshine).

Example of Multimodal Analysis

A cultural studies scholar examines the representation of gender in Disney films, looking not only at the spoken words, but also the dresses worn, the camera angles, and the princesses’ tone of voice when speaking to other characters, to assess how Disney’s construction of gender has changed over time.

7. Semiotic Analysis

Semiotic analysis takes multimodal analysis to the next step by providing the specific methods for the analysis of multimodal texts.

Seminal scholars Kress and van Leeuwen (2006) have created a significant repertoire of texts demonstrating how semiotics shape meaning. In their works, they present deconstructions of various modes of address:

  • Visual: How images, signs, and symbols create meaning in social contexts. For example, in our modern world, a red octagon has a specific social meaning: stop!
  • Textual: How words shape meaning, such as through a sentiment analysis as discussed earlier.
  • Motive: How movement can create a sense of pace, distance, the movement of time, and so forth, which shapes meaning.
  • Aural: How sounds shape meaning. For example, the words spoken are not the only way we interpret a speech, but also how they’re spoken (shakily, confidently, assertively, etc.)

Example of Semiotic Analysis

A communications studies scholar examines the body language of leaders during meetings at an international political event, using it to explore how the leaders subtly send messages about who they are allied with, where they view themselves in geopolitical terms, and their attitudes toward the event overall.

8. Latent Content Analysis

This involves the interpretation of the underlying, inferred meanings of the words or visuals. The focus here is on what is being implied by the content rather than just what is explicitly said.

For example, in the context of the same newspaper articles, a latent content analysis might examine the way the event is framed, the language or rhetoric used, the themes or narratives that are implied, or the attitudes and ideologies that are expressed or endorsed, either overtly or covertly .

Returning to the work of Foucault, he demonstrated how silence also constructs meaning. The question emerges: what is left unsaid in the content, and how does this shape our understanding of the biases and assumptions of the author?

Example of Latent Content Analysis

A sociologist studying gender roles in films watches the top 10 movies from last year and doesn’t just count instances of words; rather, they analyze the underlying, implicit messages about gender roles. This could include exploring how female characters are portrayed (do they tend to be passive and in need of rescue, or are they active, independent and resourceful?), how male characters are portrayed (emotional or unemotional?), and what kinds of occupations characters of each gender typically have.

9. Manifest Content Analysis

A manifest content analysis is the counterpoint to latent content analysis. It involves a direct and surface-level reading of the visible aspects of the content.

It concerns itself primarily with what is visible, obvious and countable. This approach asserts that we should not read too deeply into anything beyond what is manifest (i.e. present), because the deeper we try to read into the missing or latent elements, the more we stray into the realm of guessing and assuming.

Scholars will often do both latent and manifest content analyses side-by-side, exploring how each type of analysis might reveal different interpretations or insights.

Example of Manifest Content Analysis

A researcher is interested in studying bias in media coverage of a particular political event. They might conduct a conceptual analysis where the concept is the tone of language used – positive, neutral, or negative. They would examine a number of articles from different newspapers, tallying up instances of positive, negative, or neutral language to see if there is a bias towards positivity or negativity in coverage of the event.

10. Longitudinal Content Analysis

A longitudinal content analysis analyzes trends in content over a long period of time.

Earlier, I explored the idea in discourse analysis that different eras have different ideas about terms and concepts (consider, for example, evolving ideas of gender and race). A longitudinal analysis would be very useful here. It would involve collecting cross-sectional snapshots at varying points in time, which would then be compared and contrasted for the representation of varying concepts and terms.

Example of Longitudinal Content Analysis

A scholar might look at newspaper reports from each decade over 100 years, examining environmental terms (‘global warming’, ‘climate change’, ‘recycling’) to identify when and how environmental concepts entered public discourse.
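The counting behind such a longitudinal comparison might look like the sketch below. The decade labels and article snippets are invented placeholders; a real study would load one corpus of articles per decade.

```python
# Minimal sketch of longitudinal content analysis: term counts per decade.
import re

TERMS = ["global warming", "climate change", "recycling"]

corpora = {                                  # invented placeholder snippets
    "1980s": "recycling programs began in some cities",
    "1990s": "scientists warned of global warming as recycling expanded",
    "2000s": "climate change and global warming dominated the headlines",
}

for decade, text in corpora.items():
    text = text.lower()
    counts = {term: len(re.findall(re.escape(term), text)) for term in TERMS}
    print(decade, counts)
```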


Content analysis is a form of empirical research that uses texts rather than interviews or naturalistic observation to gather data that can then be analyzed. There are a range of methods and approaches to the analysis of content, but their unifying feature is that they involve close readings of texts to identify concepts and themes that might be revealing of core or underlying messages within the content.

The above examples are not mutually exclusive types, but rather different approaches that researchers can use based on their specific goals and the nature of the data they are working with.

Foucault, M. (1988). Madness and civilization: A history of insanity in the age of reason. London: Vintage.

Johnstone, B. (2017). Discourse analysis. London: John Wiley & Sons.

Kosterec, M. (2016). Methods of conceptual analysis. Filozofia, 71(3).

Kress, G., & van Leeuwen, T. (2006). Reading images: The grammar of visual design. London and New York: Routledge.

Prasad, B. D. (2008). Content analysis: A method of social science research. In D. K. Lal Das (Ed.), Research methods for social work (pp. 174-193). New Delhi: Rawat Publications.



Using Content Analysis

This guide provides an introduction to content analysis, a research methodology that examines words or phrases within a wide range of texts.

  • Introduction to Content Analysis : Read about the history and uses of content analysis.
  • Conceptual Analysis : Read an overview of conceptual analysis and its associated methodology.
  • Relational Analysis : Read an overview of relational analysis and its associated methodology.
  • Commentary : Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology.
  • Examples : View examples of real and hypothetical studies that use content analysis.
  • Annotated Bibliography : Complete list of resources used in this guide and beyond.

An Introduction to Content Analysis

Content analysis is a research tool used to determine the presence of certain words or concepts within texts or sets of texts. Researchers quantify and analyze the presence, meanings and relationships of such words and concepts, then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time of which these are a part. Texts can be defined broadly as books, book chapters, essays, interviews, discussions, newspaper headlines and articles, historical documents, speeches, conversations, advertising, theater, informal conversation, or really any occurrence of communicative language. Texts in a single study may also represent a variety of different types of occurrences, such as Palmquist's 1990 study of two composition classes, in which he analyzed student and teacher interviews, writing journals, classroom discussions and lectures, and out-of-class interaction sheets. To conduct a content analysis on any such text, the text is coded, or broken down, into manageable categories on a variety of levels--word, word sense, phrase, sentence, or theme--and then examined using one of content analysis' basic methods: conceptual analysis or relational analysis.

A Brief History of Content Analysis

Historically, content analysis was a time consuming process. Analysis was done manually, or slow mainframe computers were used to analyze punch cards containing data punched in by human coders. Single studies could employ thousands of these cards. Human error and time constraints made this method impractical for large texts. However, despite its impracticality, content analysis was already an often utilized research method by the 1940's. Although initially limited to studies that examined texts for the frequency of the occurrence of identified terms (word counts), by the mid-1950's researchers were already starting to consider the need for more sophisticated methods of analysis, focusing on concepts rather than simply words, and on semantic relationships rather than just presence (de Sola Pool 1959). While both traditions still continue today, content analysis now is also utilized to explore mental models, and their linguistic, affective, cognitive, social, cultural and historical significance.

Uses of Content Analysis

Perhaps due to the fact that it can be applied to examine any piece of writing or occurrence of recorded communication, content analysis is currently used in a dizzying array of fields, ranging from marketing and media studies, to literature and rhetoric, ethnography and cultural studies, gender and age issues, sociology and political science, psychology and cognitive science, and many other fields of inquiry. Additionally, content analysis reflects a close relationship with socio- and psycholinguistics, and is playing an integral role in the development of artificial intelligence. The following list (adapted from Berelson, 1952) offers more possibilities for the uses of content analysis:

  • Reveal international differences in communication content
  • Detect the existence of propaganda
  • Identify the intentions, focus or communication trends of an individual, group or institution
  • Describe attitudinal and behavioral responses to communications
  • Determine psychological or emotional state of persons or groups

Types of Content Analysis

In this guide, we discuss two general categories of content analysis: conceptual analysis and relational analysis. Conceptual analysis can be thought of as establishing the existence and frequency of concepts most often represented by words or phrases in a text. For instance, say you have a hunch that your favorite poet often writes about hunger. With conceptual analysis you can determine how many times words such as hunger, hungry, famished, or starving appear in a volume of poems. In contrast, relational analysis goes one step further by examining the relationships among concepts in a text. Returning to the hunger example, with relational analysis, you could identify what other words or phrases hunger or famished appear next to and then determine what different meanings emerge as a result of these groupings.

Conceptual Analysis

Traditionally, content analysis has most often been thought of in terms of conceptual analysis. In conceptual analysis, a concept is chosen for examination, and the analysis involves quantifying and tallying its presence. Also known as thematic analysis [although this term is somewhat problematic, given its varied definitions in current literature--see Palmquist, Carley, & Dale (1997) vis-a-vis Smith (1992)], the focus here is on looking at the occurrence of selected terms within a text or texts, although the terms may be implicit as well as explicit. While explicit terms obviously are easy to identify, coding for implicit terms and deciding their level of implication is complicated by the need to base judgments on a somewhat subjective system. To attempt to limit the subjectivity, then (as well as to limit problems of reliability and validity ), coding such implicit terms usually involves the use of either a specialized dictionary or contextual translation rules. And sometimes, both tools are used--a trend reflected in recent versions of the Harvard and Lasswell dictionaries.

Methods of Conceptual Analysis

Conceptual analysis begins with identifying research questions and choosing a sample or samples. Once chosen, the text must be coded into manageable content categories. The process of coding is basically one of selective reduction . By reducing the text to categories consisting of a word, set of words or phrases, the researcher can focus on, and code for, specific words or patterns that are indicative of the research question.

An example of a conceptual analysis would be to examine several Clinton speeches on health care, made during the 1992 presidential campaign, and code them for the existence of certain words. In looking at these speeches, the research question might involve examining the number of positive words used to describe Clinton's proposed plan, and the number of negative words used to describe the current status of health care in America. The researcher would be interested only in quantifying these words, not in examining how they are related, which is a function of relational analysis. In conceptual analysis, the researcher simply wants to examine presence with respect to his/her research question, i.e. is there a stronger presence of positive or negative words used with respect to proposed or current health care plans, respectively.

Once the research question has been established, the researcher must make his/her coding choices with respect to the eight category coding steps indicated by Carley (1992).

Steps for Conducting Conceptual Analysis

The following discussion of steps that can be followed to code a text or set of texts during conceptual analysis uses campaign speeches made by Bill Clinton during the 1992 presidential campaign as an example. Each step is described in the list below:

  • Decide the level of analysis.

First, the researcher must decide upon the level of analysis . With the health care speeches, to continue the example, the researcher must decide whether to code for a single word, such as "inexpensive," or for sets of words or phrases, such as "coverage for everyone."

  • Decide how many concepts to code for.

The researcher must now decide how many different concepts to code for. This involves developing a pre-defined or interactive set of concepts and categories. The researcher must decide whether or not to code for every single positive or negative word that appears, or only certain ones that the researcher determines are most relevant to health care. Then, with this pre-defined number set, the researcher has to determine how much flexibility he/she allows him/herself when coding. The question of whether the researcher codes only from this pre-defined set, or allows him/herself to add relevant categories not included in the set as he/she finds them in the text, must be answered. Determining a certain number and set of concepts allows a researcher to examine a text for very specific things, keeping him/her on task. But introducing a level of coding flexibility allows new, important material to be incorporated into the coding process that could have significant bearings on one's results.

  • Decide whether to code for existence or frequency of a concept.

After a certain number and set of concepts are chosen for coding, the researcher must answer a key question: is he/she going to code for existence or frequency? This is important, because it changes the coding process. (A short code sketch after these steps illustrates the difference.) When coding for existence, "inexpensive" would only be counted once, no matter how many times it appeared. This would be a very basic coding process and would give the researcher a very limited perspective of the text. However, the number of times "inexpensive" appears in a text might be more indicative of importance. Knowing that "inexpensive" appeared 50 times, for example, compared to 15 appearances of "coverage for everyone," might lead a researcher to interpret that Clinton is trying to sell his health care plan based more on economic benefits, not comprehensive coverage. Knowing that "inexpensive" appeared, but not that it appeared 50 times, would not allow the researcher to make this interpretation, regardless of whether it is valid or not.

  • Decide on how you will distinguish among concepts.

The researcher must next decide on the level of generalization, i.e. whether concepts are to be coded exactly as they appear, or if they can be recorded as the same even when they appear in different forms. For example, "expensive" might also appear as "expensiveness." The researcher needs to determine if the two words mean radically different things to him/her, or if they are similar enough that they can be coded as being the same thing, i.e. "expensive words." In line with this is the need to determine the level of implication one is going to allow. This entails more than subtle differences in tense or spelling, as with "expensive" and "expensiveness." Determining the level of implication would allow the researcher to code not only for the word "expensive," but also for words that imply "expensive." This could perhaps include technical words, jargon, or political euphemisms, such as "economically challenging," that the researcher decides do not merit a separate category, but are better represented under the category "expensive," due to their implicit meaning of "expensive."

  • Develop rules for coding your texts.

After taking the generalization of concepts into consideration, a researcher will want to create translation rules that will allow him/her to streamline and organize the coding process so that he/she is coding for exactly what he/she wants to code for. Developing a set of rules helps the researcher ensure that he/she is coding things consistently throughout the text, in the same way every time. If a researcher coded "economically challenging" as a separate category from "expensive" in one paragraph, then coded it under the umbrella of "expensive" when it occurred in the next paragraph, his/her data would be invalid. The interpretations drawn from that data will subsequently be invalid as well. Translation rules protect against this and give the coding process a crucial level of consistency and coherence.

  • Decide what to do with "irrelevant" information.

The next choice a researcher must make involves irrelevant information . The researcher must decide whether irrelevant information should be ignored (as Weber, 1990, suggests), or used to reexamine and/or alter the coding scheme. In the case of this example, words like "and" and "the," as they appear by themselves, would be ignored. They add nothing to the quantification of words like "inexpensive" and "expensive" and can be disregarded without impacting the outcome of the coding.

  • Code the texts.

Once these choices about irrelevant information are made, the next step is to code the text. This is done either by hand, i.e. reading through the text and manually writing down concept occurrences, or through the use of various computer programs. Coding with a computer is one of contemporary conceptual analysis' greatest assets. By inputting one's categories, content analysis programs can easily automate the coding process and examine huge amounts of data, and a wider range of texts, quickly and efficiently. But automation is very dependent on the researcher's preparation and category construction. When coding is done manually, a researcher can recognize errors far more easily. A computer is only a tool and can only code based on the information it is given. This problem is most apparent when coding for implicit information, where category preparation is essential for accurate coding.

  • Analyze your results.

Once the coding is done, the researcher examines the data and attempts to draw whatever conclusions and generalizations are possible. Of course, before these can be drawn, the researcher must decide what to do with the information in the text that is not coded. One's options include either deleting or skipping over unwanted material, or viewing all information as relevant and important and using it to reexamine, reassess and perhaps even alter one's coding scheme. Furthermore, given that the conceptual analyst is dealing only with quantitative data, the levels of interpretation and generalizability are very limited. The researcher can only extrapolate as far as the data will allow. But it is possible to see trends, for example, that are indicative of much larger ideas. Using the example from step three, if the concept "inexpensive" appears 50 times, compared to 15 appearances of "coverage for everyone," then the researcher can pretty safely extrapolate that there does appear to be a greater emphasis on the economics of the health care plan, as opposed to its universal coverage for all Americans. It must be kept in mind that conceptual analysis, while extremely useful and effective for providing this type of information when done right, is limited by its focus and the quantitative nature of its examination. To more fully explore the relationships that exist between these concepts, one must turn to relational analysis.
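As flagged in step three above, here is a minimal sketch of how translation rules and the existence-versus-frequency choice play out in practice. The snippet of speech text and the translation rules are invented placeholders, not Clinton's actual words; the point is only to show how pre-defined rules feed a frequency (or existence) count.

```python
# Minimal sketch: conceptual analysis with translation rules,
# coding for frequency and for mere existence of each concept.
import re
from collections import Counter

# Invented translation rules: implied or variant terms mapped to a concept.
TRANSLATION_RULES = {
    "economically challenging": "expensive",
    "expensiveness": "expensive",
    "coverage for everyone": "universal coverage",
}

CONCEPTS = ["inexpensive", "expensive", "universal coverage"]

# Invented stand-in for a passage of speech text.
text = (
    "The current system is economically challenging. "
    "My plan is inexpensive and offers coverage for everyone. "
    "An inexpensive plan beats an expensive one."
).lower()

# Apply translation rules first so implied terms are coded consistently.
for phrase, concept in TRANSLATION_RULES.items():
    text = text.replace(phrase, concept)

frequency = Counter()
for concept in CONCEPTS:
    frequency[concept] = len(re.findall(r"\b" + re.escape(concept) + r"\b", text))

existence = {concept: count > 0 for concept, count in frequency.items()}

print("frequency:", dict(frequency))  # e.g. {'inexpensive': 2, 'expensive': 2, 'universal coverage': 1}
print("existence:", existence)        # every concept appears at least once
```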

Relational Analysis

Relational analysis, like conceptual analysis, begins with the act of identifying concepts present in a given text or set of texts. However, relational analysis seeks to go beyond presence by exploring the relationships between the concepts identified. Relational analysis has also been termed semantic analysis (Palmquist, Carley, & Dale, 1997). In other words, the focus of relational analysis is to look for semantic, or meaningful, relationships. Individual concepts, in and of themselves, are viewed as having no inherent meaning. Rather, meaning is a product of the relationships among concepts in a text. Carley (1992) asserts that concepts are "ideational kernels;" these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.

Theoretical Influences on Relational Analysis

The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.

Linguistic approaches to content analysis focus analysis of texts on the level of a linguistic unit, typically single clause units. One example of this type of research is Gottschalk (1975), who developed an automated procedure which analyzes each clause in a text and assigns it a numerical score based on several emotional/psychological scales. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley, 1990).

Approaches that derive from cognitive science include the creation of decision maps and mental models. Decision maps attempt to represent the relationship(s) between ideas, beliefs, attitudes, and information available to an author when making a decision within a text. These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise (1987) used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.

Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality. According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific approach to mapping because, beyond extraction and comparison, they can be numerically and graphically analyzed. Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps:

  • Identifying concepts
  • Defining relationship types
  • Coding the text on the basis of 1 and 2
  • Coding the statements
  • Graphically displaying and numerically analyzing the resulting maps

To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley (1990) asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
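A rough sketch of that concepts-and-statements representation might look like the following. The coded statements are invented; each statement is a (concept, relationship, concept) triple, and the map is simply the collection of those triples, which can then be compared across texts.

```python
# Minimal sketch: a coded text represented as a map of concepts and statements,
# where each statement is a (concept, relationship, concept) triple.
from collections import Counter

statements = [                       # invented coded statements
    ("health care", "implies", "cost"),
    ("cost", "occurs before", "reform"),
    ("reform", "implies", "coverage"),
]

concepts = sorted({c for s in statements for c in (s[0], s[2])})
print("concepts in the map:", concepts)

# A simple comparison measure: how many statements each concept takes part in.
degree = Counter()
for a, _, b in statements:
    degree[a] += 1
    degree[b] += 1
print("connections per concept:", dict(degree))
```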

Relational Analysis: Overview of Methods

As with other sorts of inquiry, initial choices with regard to what is being studied and/or coded for often determine the possibilities of that particular study. For relational analysis, it is important to first decide which concept type(s) will be explored in the analysis. Studies have been conducted with as few as one and as many as 500 concept categories. Obviously, too many categories may obscure your results and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.

The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are but only a few of many possibilities. The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time. The process of relational analysis has achieved a high degree of computer automation but still is, like most forms of research, time consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in even more qualitative methods.

Three Subcategories of Relational Analysis

Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text. It is problematic because emotion may vary across time and populations. Nevertheless, when extended it can be a potent means of exploring the emotional/psychological state of the speaker and/or writer. Gottschalk (1995) provides an example of this type of analysis. By assigning concepts identified a numeric value on corresponding emotional/psychological scales that can then be statistically examined, Gottschalk claims that the emotional/psychological state of the speaker or writer can be ascertained via their verbal behavior.

Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across the text to check for the co-occurrence of concepts. The result is the creation of a concept matrix. In other words, a matrix, or a group of interrelated, co-occurring concepts, might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence. Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis.
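A bare-bones sketch of that window-scanning step is shown below. The window size, concept list, and sentence are invented for illustration; each position of the window records which concepts co-occur inside it, and the tallies form the raw material for a concept matrix.

```python
# Minimal sketch of proximity analysis: scan a fixed-size window across the
# text and tally which concepts co-occur inside the same window.
from collections import Counter
from itertools import combinations

CONCEPTS = {"snow", "sled", "cold", "candy"}
WINDOW = 5                                   # window length in words (assumed)

text = "the children took the sled out into the cold snow after eating candy"
words = text.split()

co_occurrence = Counter()
for start in range(len(words) - WINDOW + 1):
    window = words[start:start + WINDOW]
    present = sorted(set(window) & CONCEPTS)
    for a, b in combinations(present, 2):
        co_occurrence[(a, b)] += 1

print(co_occurrence.most_common())
```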

Cognitive mapping: This approach is one that allows for further analysis of the results from the two previous approaches. It attempts to take the above processes one step further by representing these relationships visually for comparison. Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map that represents the relationships between concepts.

In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts. This is known as map analysis which allows for comparisons to explore "how meanings and definitions shift across people and time" (Palmquist, Carley, & Dale, 1997). Maps can depict a variety of different mental models (such as that of the text, the writer/speaker, or the social group/period), according to the focus of the researcher. This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley, 1990). Given these assumptions, it's not surprising to see how closely this technique reflects the cognitive concerns of socio-and psycholinguistics, and lends itself to the development of artificial intelligence models.

Steps for Conducting Relational Analysis

The following discussion presents the steps (or, perhaps more accurately, strategies) that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the 1998 hearings.

  • Identify the Question.

The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless and therefore the analysis difficult to complete. Possibilities for the Hairy Hearings of 1998 might be:

What did Bill Clinton say in the speech? OR What concrete information did he present to the public?
  • Choose a sample or samples for analysis.

Once the question has been identified, the researcher must select sections of text/speech from the hearings in which Bill Clinton may have not told the entire truth or is obviously holding back information. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by doing so, but the researcher must also take special care not to take on so much that the coding process becomes too heavy and extensive to supply worthwhile results.

  • Determine the type of analysis.

Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts.

In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction because we are trying to get to the hard facts of what exactly was said rather than determining the emotional considerations of speaker and receivers surrounding the speech which may be unrecoverable.

Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten."

  • Reduce the text to categories and code for words or patterns.

At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy. For example, Palmquist (1990) did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end. On the other hand, the requirement of one's specific research question may necessitate deeper levels of coding to preserve greater detail for analysis.

In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." The researcher might also choose to code for what words he used that have such an ambiguous nature in relation to the importance of the information directly related to those words.

  • Explore the relationships between concepts (Strength, Sign & Direction).

Once words are coded, the text can be analyzed for the relationships among the concepts set forth. There are three concepts which play a central role in exploring the relations among concepts in content analysis.

  • Strength of Relationship: Refers to the degree to which two or more concepts are related. These relationships are easiest to analyze, compare, and graph when all relationships between concepts are considered to be equal. However, assigning strength to relationships retains a greater degree of the detail found in the original text. Identifying strength of a relationship is key when determining whether or not words like unless, perhaps, or maybe are related to a particular section of text, phrase, or idea.
  • Sign of a Relationship: Refers to whether or not the concepts are positively or negatively related. To illustrate, the concept "bear" is negatively related to the concept "stock market" in the same sense as the concept "bull" is positively related. Thus "it's a bear market" could be coded to show a negative relationship between "bear" and "market". Another approach to coding for sign entails the creation of separate categories for binary oppositions. The above example emphasizes "bull" as the negation of "bear," but the two could be coded as separate categories, one positive and one negative. There has been little research to determine the benefits and liabilities of these differing strategies. One use of sign coding for relationships in regard to the hearings may be to find out whether or not the words under observation or in question were used adversely or in favor of the concepts (this is tricky, but important to establishing meaning).
  • Direction of the Relationship: Refers to the type of relationship categories exhibit. Coding for this sort of information can be useful in establishing, for example, the impact of new information in a decision making process. Various types of directional relationships include, "X implies Y," "X occurs before Y" and "if X then Y," or quite simply the decision whether concept X is the "prime mover" of Y or vice versa. In the case of the 1998 hearings, the researcher might note that, "maybe implies doubt," "perhaps occurs before statements of clarification," and "if possibly exists, then there is room for Clinton to change his stance." In some cases, concepts can be said to be bi-directional, or having equal influence. This is equivalent to ignoring directionality. Both approaches are useful, but differ in focus. Coding all categories as bi-directional is most useful for exploratory studies where pre-coding may influence results, and is also most easily automated, or computer coded.
  • Code the relationships.

One of the main differences between conceptual analysis and relational analysis is that the statements or relationships between concepts are coded. At this point, to continue our extended example, it is important to take special care with assigning value to the relationships in an effort to determine whether the ambiguous words in Bill Clinton's speech are just fillers, or hold information about the statements he is making.

  • Perform Statistical Analyses.

This step involves conducting statistical analyses of the data you've coded during your relational analysis. This may involve exploring for differences or looking for relationships among the variables you've identified in your study.

  • Map out the Representations.

In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text (or across texts) in a graphical -- or map -- form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models.

The authors of this guide have created the following commentaries on content analysis.

Issues of Reliability & Validity

The issues of reliability and validity are concurrent with those addressed in other research methods. The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds to a standard or norm statistically. Gottschalk (1995) points out that the issue of reliability may be further complicated by the inescapably human nature of researchers. For this reason, he suggests that coding errors can only be minimized, and not eliminated (he shoots for 80% as an acceptable margin for reliability).
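As a rough illustration of checking reliability against a threshold like the 80% Gottschalk mentions, the sketch below computes simple percent agreement between two coders. The coded labels are invented, and in practice chance-corrected indices such as Cohen's kappa or Krippendorff's alpha are often reported as well.

```python
# A minimal sketch of intercoder agreement as simple percent agreement.
# The labels below are invented purely for illustration.
coder_a = ["doubt", "doubt", "clarification", "denial", "doubt", "denial"]
coder_b = ["doubt", "denial", "clarification", "denial", "doubt", "denial"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"percent agreement: {agreement:.0%}")   # 83%, above an 80% threshold
# Note: raw percent agreement ignores agreement expected by chance, which is
# why chance-corrected coefficients are usually preferred for formal reporting.
```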

On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory.

The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category. For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." "Communist" is held to be the explicit variable, while "red," etc. are the implicit variables.
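A sketch of how such an agreed-upon category might be operationalised is shown below: a small synonym dictionary maps the implicit terms from the example to the explicit concept "communist" and counts matches. The dictionary contents, function name, and sample sentence are assumptions made for illustration only.

```python
# A minimal sketch of dictionary-based coding for an implicit concept category.
from collections import Counter
import re

concept_dictionary = {
    "communist": ["communist", "red", "soviet threat",
                  "pinko", "godless infidel", "marxist sympathizer"],
}

def count_concepts(text, dictionary):
    """Count explicit and implicit occurrences of each concept category."""
    text = text.lower()
    counts = Counter()
    for concept, terms in dictionary.items():
        for term in terms:
            # Word boundary only at the start, so plural forms also match.
            counts[concept] += len(re.findall(r"\b" + re.escape(term), text))
    return counts

speech = "The Soviet threat looms, and Marxist sympathizers aid the communists."
print(count_concepts(speech, concept_dictionary))   # Counter({'communist': 3})
```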

The overarching problem of concept analysis research is the challengeable nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable, i.e. do the conclusions follow from the data or are they explainable due to some other phenomenon? For occurrence-specific studies, for example, can the second occurrence of a word carry equal weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered.

This problem is again best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one's results, invalidating any conclusions one infers from the results. The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word's occurrence and frequency, but not have an accurate accounting of the meaning inherent in each particular usage. For example, one may find 50 occurrences of the word "mine." But if one is looking specifically for "mine" as an explosive device, and 17 of those occurrences are actually personal pronouns, the count of 50 is inaccurate, and any conclusion drawn from it would be invalid.
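The sketch below reproduces this counting problem in miniature: a naive frequency count treats every occurrence of "mine" identically, while isolating the explosive-device sense needs some sense-aware step (stood in for here by a toy phrase match, though part-of-speech tagging or manual review would be more realistic). The sample text and the rule are invented.

```python
# A minimal sketch of why raw word counts can mislead when senses differ.
from collections import Counter
import re

# Invented sample text containing three different senses of "mine".
text = ("The workers spent the day in the coal mine. That mine is dangerous. "
        "This hard hat is mine. The patrol cleared a land mine from the road.")

# Naive count: every surface occurrence of "mine" is treated the same.
tokens = re.findall(r"[a-z']+", text.lower())
naive_count = Counter(tokens)["mine"]
print("naive count of 'mine':", naive_count)            # 4, conflating senses

# Separating senses requires context; here a toy phrase match stands in for
# part-of-speech tagging or a human coder.
explosive_count = text.lower().count("land mine")
print("coded as 'explosive device':", explosive_count)  # 1
```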

The generalizability of one's conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. It is imperative that one define categories that accurately measure the idea and/or items one is seeking to measure. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis. Reproducibility, not only of specific categories, but of general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound. A study which does this, i.e. in which the classification of a text corresponds to a standard or norm, is said to have accuracy.

Advantages of Content Analysis

Content analysis offers several advantages to researchers who consider using it. In particular, content analysis:

  • looks directly at communication via texts or transcripts, and hence gets at the central aspect of social interaction
  • can allow for both quantitative and qualitative operations
  • can provide valuable historical/cultural insights over time through analysis of texts
  • allows a closeness to the text, alternating between specific categories and relationships while also statistically analyzing the coded form of the text
  • can be used to interpret texts for purposes such as the development of expert systems (since knowledge and rules can both be coded in terms of explicit statements about the relationships among concepts)
  • is an unobtrusive means of analyzing interactions
  • provides insight into complex models of human thought and language use

Disadvantages of Content Analysis

Content analysis suffers from several disadvantages, both theoretical and procedural. In particular, content analysis:

  • can be extremely time consuming
  • is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation
  • is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study
  • is inherently reductive, particularly when dealing with complex texts
  • tends too often to simply consist of word counts
  • often disregards the context that produced the text, as well as the state of things after the text is produced
  • can be difficult to automate or computerize

The Palmquist, Carley and Dale study, summarized below from "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts" (1997), is an example of two studies conducted using both conceptual and relational analysis. The problematic text for content analysis, also below, shows the differences in results obtained by a conceptual and a relational approach to a study.

Related Information: Example of a Problematic Text for Content Analysis

In this example, both students observed a scientist and were asked to write about the experience.

Student A: I found that scientists engage in research in order to make discoveries and generate new ideas. Such research by scientists is hard work and often involves collaboration with other scientists which leads to discoveries which make the scientists famous. Such collaboration may be informal, such as when they share new ideas over lunch, or formal, such as when they are co-authors of a paper.
Student B: It was hard work to research famous scientists engaged in collaboration and I made many informal discoveries. My research showed that scientists engaged in collaboration with other scientists are co-authors of at least one paper containing their new ideas. Some scientists make formal discoveries and have new ideas.

Content analysis coding for explicit concepts may not reveal any significant differences. For example, concepts such as "I, scientist, research, hard work, collaboration, discoveries, new ideas, etc." are explicit in both texts, occur the same number of times, and have the same emphasis. Relational analysis or cognitive mapping, however, reveals that while the two texts share the same concepts, only five of the relationships between those concepts are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborates the notion of "scientists" doing "research." Student B focuses on what "I's" research was and sees scientists as "making discoveries" without emphasis on research.

Related Information: The Palmquist, Carley and Dale Study

Consider these two questions: How has the depiction of robots changed over more than a century's worth of writing? And, do students and writing instructors share the same terms for describing the writing process? Although these questions seem totally unrelated, they do share a commonality: in the Palmquist, Carley & Dale study, their answers rely on computer-aided text analysis to demonstrate how different texts can be analyzed.

Literary texts

One half of the study explored the depiction of robots in 27 science fiction texts written between 1818 and 1988. After the texts were divided into three historically defined groups, readers looked at how the depiction of robots changed over time. To do this, researchers had to create concept lists and relationship types, create maps using computer software, modify those maps, and then ultimately analyze them. The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.

Non-literary texts

The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken. The purpose behind the study was to determine if, in fact, over time teachers and students would begin to share a similar vocabulary about the writing process. Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. Maps were also created and analyzed in this study.

Annotated Bibliography

Resources On How To Conduct Content Analysis

Beard, J., & Yaprak, A. (1989). Language implications for advertising in international markets: A model for message content and message execution. A paper presented at the 8th International Conference on Language Communication for World Business and the Professions. Ann Arbor, MI.

This report discusses the development and testing of a content analysis model for assessing advertising themes and messages aimed primarily at U.S. markets which seeks to overcome barriers in the cultural environment of international markets. Texts were categorized under 3 headings: rational, emotional, and moral. The goal here was to teach students to appreciate differences in language and culture.

Berelson, B. (1971). Content analysis in communication research . New York: Hafner Publishing Company.

While this book provides an extensive outline of the uses of content analysis, it is far more concerned with conveying a critical approach to current literature on the subject. In this respect, it assumes a bit of prior knowledge, but is still accessible through the use of concrete examples.

Budd, R. W., Thorp, R.K., & Donohew, L. (1967). Content analysis of communications . New York: Macmillan Company.

Although published in 1967, the decision of the authors to focus on recent trends in content analysis keeps their insights relevant even to modern audiences. The book focuses on specific uses and methods of content analysis with an emphasis on its potential for researching human behavior. It is also geared toward the beginning researcher and breaks down the process of designing a content analysis study into 6 steps that are outlined in successive chapters. A useful annotated bibliography is included.

Carley, K. (1992). Coding choices for textual analysis: A comparison of content analysis and map analysis. Unpublished Working Paper.

Comparison of the coding choices necessary to conceptual analysis and relational analysis, especially focusing on cognitive maps. Discusses concept coding rules needed for sufficient reliability and validity in a Content Analysis study. In addition, several pitfalls common to texts are discussed.

Carley, K. (1990). Content analysis. In R.E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Edinburgh: Pergamon Press.

Quick, yet detailed, overview of the different methodological kinds of Content Analysis. Carley breaks down her paper into five sections, including: Conceptual Analysis, Procedural Analysis, Relational Analysis, Emotional Analysis and Discussion. Also included is an excellent and comprehensive Content Analysis reference list.

Carley, K. (1989). Computer analysis of qualitative data . Pittsburgh, PA: Carnegie Mellon University.

Presents graphic, illustrated representations of computer based approaches to content analysis.

Carley, K. (1992). MECA . Pittsburgh, PA: Carnegie Mellon University.

A resource guide explaining the fifteen routines that compose the Map Extraction Comparison and Analysis (MECA) software program. Lists the source file, input and out files, and the purpose for each routine.

Carney, T. F. (1972). Content analysis: A technique for systematic inference from communications . Winnipeg, Canada: University of Manitoba Press.

This book introduces and explains in detail the concept and practice of content analysis. Carney defines it; traces its history; discusses how content analysis works and its strengths and weaknesses; and explains through examples and illustrations how one goes about doing a content analysis.

de Sola Pool, I. (1959). Trends in content analysis . Urbana, Ill: University of Illinois Press.

The 1959 collection of papers begins by differentiating quantitative and qualitative approaches to content analysis, and then details facets of its uses in a wide variety of disciplines: from linguistics and folklore to biography and history. Includes a discussion on the selection of relevant methods and representational models.

Duncan, D. F. (1989). Content analysis in health education research: An introduction to purposes and methods. Health Education, 20 (7).

This article proposes using content analysis as a research technique in health education. A review of literature relating to applications of this technique and a procedure for content analysis are presented.

Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillside, NJ: Lawrence Erlbaum Associates, Inc.

This book primarily focuses on the Gottschalk-Gleser method of content analysis, and its application as a method of measuring psychological dimensions of children and adults via the content and form analysis of their verbal behavior, using the grammatical clause as the basic unit of communication for carrying semantic messages generated by speakers or writers.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.

This is one of the most widely quoted resources in many of the current studies of Content Analysis. Recommended as another good, basic resource, as Krippendorff presents the major issues of Content Analysis in much the same way as Weber (1975).

Moeller, L. G. (1963). An introduction to content analysis--including annotated bibliography . Iowa City: University of Iowa Press.

A good reference for basic content analysis. Discusses the options of sampling, categories, direction, measurement, and the problems of reliability and validity in setting up a content analysis. Perhaps better as a historical text due to its age.

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.

Billed by its authors as "the first book to be devoted primarily to content analysis systems for assessment of the characteristics of individuals, groups, or historical periods from their verbal materials." The text includes manuals for using various systems, theory, and research regarding the background of systems, as well as practice materials, making the book both a reference and a handbook.

Solomon, M. (1993). Content analysis: a potent tool in the searcher's arsenal. Database, 16 (2), 62-67.

Online databases can be used to analyze data, as well as to simply retrieve it. Online-media-source content analysis represents a potent but little-used tool for the business searcher. Content analysis benchmarks useful to advertisers include prominence, offspin, sponsor affiliation, verbatims, word play, positioning and notational visibility.

Weber, R. P. (1990). Basic content analysis, second edition . Newbury Park, CA: Sage Publications.

Good introduction to Content Analysis. The first chapter presents a quick overview of Content Analysis. The second chapter discusses content classification and interpretation, including sections on reliability, validity, and the creation of coding schemes and categories. Chapter three discusses techniques of Content Analysis, using a number of tables and graphs to illustrate the techniques. Chapter four examines issues in Content Analysis, such as measurement, indication, representation and interpretation.

Examples of Content Analysis

Adams, W., & Shriebman, F. (1978). Television network news: Issues in content research . Washington, DC: George Washington University Press.

A fairly comprehensive application of content analysis to the field of television news reporting. The book's tripartite division discusses current trends and problems with news criticism from a content analysis perspective, presents four different content analysis studies of news media, and makes recommendations for future research in the area. Worth a look by anyone interested in mass communication research.

Auter, P. J., & Moore, R. L. (1993). Buying from a friend: a content analysis of two teleshopping programs. Journalism Quarterly, 70 (2), 425-437.

A preliminary study was conducted to content-analyze random samples of two teleshopping programs, using a measure of content interactivity and a locus of control message index.

Barker, S. P. (???) Fame: A content analysis study of the American film biography. Ohio State University. Thesis.

Barker examined thirty Oscar-nominated films dating from 1929 to 1979 using the O.J. Harvey Belief System and Kohlberg's Moral Stages to determine whether cinema heroes were positive role models for fame and success or morally ambiguous celebrities. Content analysis was successful in determining several trends relative to the frequency and portrayal of women in film, the generally high ethical character of the protagonists, and the dogmatic, close-minded nature of film antagonists.

Bernstein, J. M. & Lacy, S. (1992). Contextual coverage of government by local television news. Journalism Quarterly, 69 (2), 329-341.

This content analysis of 14 local television news operations in five markets looks at how local TV news shows contribute to the marketplace of ideas. Performance was measured as the allocation of stories to types of coverage that provide the context about events and issues confronting the public.

Blaikie, A. (1993). Images of age: a reflexive process. Applied Ergonomics, 24 (1), 51-58.

Content analysis of magazines provides a sharp instrument for reflecting the change in stereotypes of aging over past decades.

Craig, R. S. (1992). The effect of day part on gender portrayals in television commercials: a content analysis. Sex Roles: A Journal of Research, 26 (5-6), 197-213.

Gender portrayals in 2,209 network television commercials were content analyzed. To compare differences between three day parts, the sample was chosen from three time periods: daytime, evening prime time, and weekend afternoon sportscasts. The results indicate large and consistent differences in the way men and women are portrayed in these three day parts, with almost all comparisons reaching significance at the .05 level. Although ads in all day parts tended to portray men in stereotypical roles of authority and dominance, those on weekends tended to emphasize escape from home and family. The findings of earlier studies which did not consider day part differences may now have to be reevaluated.

Dillon, D. R. et al. (1992). Article content and authorship trends in The Reading Teacher, 1948-1991. The Reading Teacher, 45 (5), 362-368.

The authors explore changes in the focus of the journal over time.

Eberhardt, EA. (1991). The rhetorical analysis of three journal articles: The study of form, content, and ideology. Ft. Collins, CO: Colorado State University.

Eberhardt uses content analysis in this thesis paper to analyze three journal articles that reported on President Ronald Reagan's address in which he responded to the Tower Commission report concerning the Iran-Contra Affair. The reports concentrated on three rhetorical elements: idea generation or content; linguistic style or choice of language; and the potential societal effect of both, which Eberhardt analyzes, along with the particular ideological orientation espoused by each magazine.

Ellis, B. G. & Dick, S. J. (1996). Who was 'Shadow'? The computer knows: Applying grammar-program statistics in content analyses to solve mysteries about authorship. Journalism & Mass Communication Quarterly, 73 (4), 947-963.

This study's objective was to employ the statistics-documentation portion of a word-processing program's grammar-check feature as a final, definitive, and objective tool for content analyses - used in tandem with qualitative analyses - to determine authorship. Investigators concluded there was significant evidence from both modalities to support their theory that Henry Watterson, long-time editor of the Louisville Courier-Journal, probably was the South's famed Civil War correspondent "Shadow" and to rule out another prime suspect, John H. Linebaugh of the Memphis Daily Appeal. Until now, this Civil War mystery has never been conclusively solved, puzzling historians specializing in Confederate journalism.

Gottschalk, L. A., Stein, M. K. & Shapiro, D.H. (1997). The application of computerized content analysis in a psychiatric outpatient clinic. Journal of Clinical Psychology, 53 (5) , 427-442.

Twenty-five new psychiatric outpatients were clinically evaluated and were administered a brief psychological screening battery which included measurements of symptoms, personality, and cognitive function. Included in this assessment procedure were the Gottschalk-Gleser Content Analysis Scales on which scores were derived from five minute speech samples by means of an artificial intelligence-based computer program. The use of this computerized content analysis procedure for initial, rapid diagnostic neuropsychiatric appraisal is supported by this research.

Graham, J. L., Kamins, M. A., & Oetomo, D. S. (1993). Content analysis of German and Japanese advertising in print media from Indonesia, Spain, and the United States. Journal of Advertising , 22 (2), 5-16.

The authors analyze informational and emotional content in print advertisements in order to consider how home-country culture influences firms' marketing strategies and tactics in foreign markets. Research results provided evidence contrary to the original hypothesis that home-country culture would influence ads in each of the target countries.

Herzog, A. (1973). The B.S. Factor: The theory and technique of faking it in America . New York: Simon and Schuster.

Herzog takes a look at the rhetoric of American culture using content analysis to point out discrepancies between intention and reality in American society. The study reveals, albeit in a comedic tone, how double talk and "not quite lies" are pervasive in our culture.

Horton, N. S. (1986). Young adult literature and censorship: A content analysis of seventy-eight young adult books . Denton, TX: North Texas State University.

The purpose of Horton's content analysis was to analyze a representative sample of seventy-eight current young adult books to determine the extent to which they contain items which are objectionable to would-be censors. Seventy-eight books were identified which fit the criteria of popularity and literary quality. Each book was analyzed for, and tallied for occurrence of, six categories, including profanity, sex, violence, parent conflict, drugs and condoned bad behavior.

Isaacs, J. S. (1984). A verbal content analysis of the early memories of psychiatric patients . Berkeley: California School of Professional Psychology.

Isaacs did a content analysis investigation on the relationship between words and phrases used in early memories and clinical diagnosis. His hypothesis was that in conveying their early memories schizophrenic patients tend to use an identifiable set of words and phrases more frequently than do nonpatients and that schizophrenic patients use these words and phrases more frequently than do patients with major affective disorders.

Jean Lee, S. K. & Hwee Hoon, T. (1993). Rhetorical vision of men and women managers in Singapore. Human Relations, 46 (4), 527-542.

A comparison of media portrayal of male and female managers' rhetorical vision in Singapore is made. Content analysis of newspaper articles used to make this comparison also reveals the inherent conflicts that women managers have to face. Purposive and multi-stage sampling of articles are utilized.

Kaur-Kasior, S. (1987). The treatment of culture in greeting cards: A content analysis . Bowling Green, OH: Bowling Green State University.

Using six historical periods dating from 1870 to 1987, this content analysis study attempted to determine what structural/cultural aspects of American society were reflected in greeting cards. The study determined that the size of cards increased over time, included more pages, and had animals and flowers as their most dominant symbols. In addition, white was the most common color used. Due to habituation and specialization, says the author, greeting cards have become institutionalized in American culture.

Koza, J. E. (1992). The missing males and other gender-related issues in music education: A critical analysis of evidence from the Music Supervisor's Journal, 1914-1924. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

The goal of this study was to identify all educational issues that would today be explicitly gender related and to analyze the explanations past music educators gave for the existence of gender-related problems. A content analysis of every gender-related reference was undertaken, finding that the current preoccupation with males in music education has a long history and that little has changed since the early part of this century.

Laccinole, M. D. (1982). Aging and married couples: A language content analysis of a conversational and expository speech task . Eugene, OR: University of Oregon.

Using content analysis, this paper investigated the relationship of age to the use of the grammatical categories, and described the differences in the usage of these grammatical categories in a conversation and expository speech task by fifty married couples. The subjects Laccinole used in his analysis were Caucasian, English speaking, middle class, ranged in ages from 20 to 83 years of age, were in good health and had no history of communication disorders.

Laffal, J. (1995). A concept analysis of Jonathan Swift's 'A Tale of a Tub' and 'Gulliver's Travels.' Computers and Humanities, 29 (5), 339-362.

In this study, comparisons of concept profiles of "Tub," "Gulliver," and Swift's own contemporary texts, as well as a composite text of 18th century writers, reveal that "Gulliver" is conceptually different from "Tub." The study also discovers that the concepts and words of these texts suggest two strands in Swift's thinking.

Lewis, S. M. (1991). Regulation from a deregulatory FCC: Avoiding discursive dissonance. Masters Thesis, Fort Collins, CO: Colorado State University.

This thesis uses content analysis to examine inconsistent statements made by the Federal Communications Commission (FCC) in its policy documents during the 1980s. Lewis analyzes positions set forth by the FCC in its policy statements and catalogues different strategies that can be used by speakers to be or to appear consistent, as well as strategies to avoid inconsistent speech or discursive dissonance.

Norton, T. L. (1987). The changing image of childhood: A content analysis of Caldecott Award books. Los Angeles: University of South Carolina.

Content analysis was conducted on 48 Caldecott Medal Recipient books dating from 1938 to 1985 to determine whether they reflect the idea that the social perception of childhood has altered since the early 1960s. The results revealed an increasing "loss of childhood innocence," as well as a general sentimentality for childhood pervasive in the texts. Suggests further study of children's literature to confirm the validity of such study.

O'Dell, J. W. & Weideman, D. (1993). Computer content analysis of the Schreber case. Journal of Clinical Psychology, 49 (1), 120-125.

An example of the application of content analysis as a means of recreating a mental model of the psychology of an individual.

Pratt, C. A. & Pratt, C. B. (1995). Comparative content analysis of food and nutrition advertisements in Ebony, Essence, and Ladies' Home Journal. Journal of Nutrition Education, 27 (1), 11-18.

This study used content analysis to measure the frequencies and forms of food, beverage, and nutrition advertisements and their associated health-promotional message in three U.S. consumer magazines during two 3-year periods: 1980-1982 and 1990-1992. The study showed statistically significant differences among the three magazines in both frequencies and types of major promotional messages in the advertisements. Differences between the advertisements in Ebony and Essence, the readerships of which were primarily African-American, and those found in Ladies Home Journal were noted, as were changes in the two time periods. Interesting tie in to ethnographic research studies?

Riffe, D., Lacy, S., & Drager, M. W. (1996). Sample size in content analysis of weekly news magazines. Journalism & Mass Communication Quarterly, 73 (3), 635-645.

This study explores a variety of approaches to deciding sample size in analyzing magazine content. Having tested random samples of size six, eight, ten, twelve, fourteen, and sixteen issues, the authors show that a monthly stratified sample of twelve issues is the most efficient method for inferring to a year's issues.

Roberts, S. K. (1987). A content analysis of how male and female protagonists in Newbery Medal and Honor books overcome conflict: Incorporating a locus of control framework. Fayetteville, AR: University of Arkansas.

The purpose of this content analysis was to analyze Newbery Medal and Honor books in order to determine how male and female protagonists were assigned behavioral traits in overcoming conflict as it relates to an internal or external locus of control schema. Roberts used all, instead of just a sample, of the fictional Newbery Medal and Honor books which met his study's criteria. A total of 120 male and female protagonists were categorized, from Newbery books dating from 1922 to 1986.

Schneider, J. (1993). Square One TV content analysis: Final report . New York: Children's Television Workshop.

This report summarizes the mathematical and pedagogical content of the 230 programs in the Square One TV library after five seasons of production, relating that content to the goals of the series which were to make mathematics more accessible, meaningful, and interesting to the children viewers.

Smith, T. E., Sells, S. P., and Clevenger, T. Ethnographic content analysis of couple and therapist perceptions in a reflecting team setting. The Journal of Marital and Family Therapy, 20 (3), 267-286.

An ethnographic content analysis was used to examine couple and therapist perspectives about the use and value of reflecting team practice. Postsession ethnographic interviews from both couples and therapists were examined for the frequency of themes in seven categories that emerged from a previous ethnographic study of reflecting teams. Ethnographic content analysis is briefly contrasted with conventional modes of quantitative content analysis to illustrate its usefulness and rationale for discovering emergent patterns, themes, emphases, and process using both inductive and deductive methods of inquiry.

Stahl, N. A. (1987). Developing college vocabulary: A content analysis of instructional materials. Reading, Research and Instruction , 26 (3).

This study investigates the extent to which the content of 55 college vocabulary texts is consistent with current research and theory on vocabulary instruction. It recommends less reliance on memorization and more emphasis on deep understanding and independent vocabulary development.

Swetz, F. (1992). Fifteenth and sixteenth century arithmetic texts: What can we learn from them? Science and Education, 1 (4).

Surveys the format and content of 15th and 16th century arithmetic textbooks, discussing the types of problems that were most popular in these early texts and briefly analyses problem contents. Notes the residual educational influence of this era's arithmetical and instructional practices.

Walsh, K., et al. (1996). Management in the public sector: a content analysis of journals. Public Administration, 74 (2), 315-325.

The popularity and implementation of managerial ideas from 1980 to 1992 are examined through the content of five journals focusing on local government, health, education and social services. Contents were analyzed according to commercialism, user involvement, performance evaluation, staffing, strategy and involvement with other organizations. Overall, local government journals showed the greatest concern with commercialism, while health and social care articles were most concerned with user involvement.

For Further Reading

Abernethy, A. M., & Franke, G. R. (1996). The information content of advertising: A meta-analysis. Journal of Advertising, Summer 25 (2), 1-18.

Carley, K., & Palmquist, M. (1992). Extracting, representing and analyzing mental models. Social Forces, 70 (3), 601-636.

Fan, D. (1988). Predictions of public opinion from the mass media: Computer content analysis and mathematical modeling. New York, NY: Greenwood Press.

Franzosi, R. (1990). Computer-assisted coding of textual data: An application to semantic grammars. Sociological Methods and Research, 19 (2), 225-257.

McTavish, D.G., & Pirro, E. (1990). Contextual content analysis. Quality and Quantity, 24, 245-265.

Palmquist, M. E. (1990). The lexicon of the classroom: Language and learning in writing classrooms. Doctoral dissertation, Carnegie Mellon University, Pittsburgh, PA.

Palmquist, M. E., Carley, K.M., and Dale, T.A. (1997). Two applications of automated text analysis: Analyzing literary and non-literary texts. In C. Roberts (Ed.), Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts. Hillsdale, NJ: Lawrence Erlbaum Associates.

Roberts, C.W. (1989). Other than counting words: A linguistic approach to content analysis. Social Forces, 68 , 147-177.

Issues in Content Analysis

Jolliffe, L. (1993). Yes! More content analysis! Newspaper Research Journal , 14 (3-4), 93-97.

The author responds to an editorial essay by Barbara Luebke which criticizes excessive use of content analysis in newspaper content studies. The author points out the positive applications of content analysis when it is theory-based and utilized as a means of suggesting how or why the content exists, or what its effects on public attitudes or behaviors may be.

Kang, N., Kara, A., Laskey, H. A., & Seaton, F. B. (1993). A SAS MACRO for calculating intercoder agreement in content analysis. Journal of Advertising, 22 (2), 17-28.

A key issue in content analysis is the level of agreement across the judgments which classify the objects or stimuli of interest. A review of articles published in the Journal of Advertising indicates that many authors are not fully utilizing recommended measures of intercoder agreement and thus may not be adequately establishing the reliability of their research. This paper presents a SAS MACRO which facilitates the computation of frequently recommended indices of intercoder agreement in content analysis.

Lacy, S. & Riffe, D. (1996). Sampling error and selecting intercoder reliability samples for nominal content categories. Journalism & Mass Communication Quarterly, 73 (4), 693-704.

This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Riffe, D., Aust, C. F., & Lacy, S. R. (1993). The effectiveness of random, consecutive day and constructed week sampling in newspaper content analysis. Journalism Quarterly, 70 (1), 133-139.

This study compares 20 sets each of samples for four different sizes using simple random, constructed week and consecutive day samples of newspaper content. Comparisons of sample efficiency, based on the percentage of sample means in each set of 20 falling within one or two standard errors of the population mean, show the superiority of constructed week sampling.

Thomas, S. (1994). Artifactual study in the analysis of culture: A defense of content analysis in a postmodern age. Communication Research, 21 (6), 683-697.

Although both modern and postmodern scholars have criticized the method of content analysis with allegations of reductionism and other epistemological limitations, it is argued here that these criticisms are ill founded. In building an argument for the validity of content analysis, the general value of artifact or text study is first considered.

Zollars, C. (1994). The perils of periodical indexes: Some problems in constructing samples for content analysis and culture indicators research. Communication Research, 21 (6), 698-714.

The author examines problems in using periodical indexes to construct research samples via the use of content analysis and culture indicator research. Historical and idiosyncratic changes in index subject category headings and subheadings make article headings potentially misleading indicators. Index subject categories are not necessarily invalid as a result; nevertheless, the author discusses the need to test for category longevity, coherence, and consistency over time, and suggests the use of oversampling, cross-references, and other techniques as a means of correcting and/or compensating for hidden inaccuracies in classification, and as a means of constructing purposive samples for analytic comparisons.

Busch, Carol, Paul S. De Maret, Teresa Flynn, Rachel Kellum, Sheri Le, Brad Meyers, Matt Saunders, Robert White, and Mike Palmquist. (2005). Content Analysis. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=61


Qualitative Data Analysis Methods 101:

The “big 6” methods + examples.

By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)

Qualitative data analysis methods. Wow, that’s a mouthful. 

If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!

Don’t worry – in this post, we’ll unpack the most popular analysis methods, one at a time, so that you can approach your analysis with confidence and competence – whether that’s for a dissertation, thesis or really any kind of research project.


What (exactly) is qualitative data analysis?

To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.

Qualitative data refers to pretty much any data that’s “not numbers”. In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.

So, if it’s not numbers, what is it?

Words, you guessed? Well… sometimes, yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative isn’t just limited to text-based data.

So, how’s that different from quantitative data, you ask?

Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here.


So, qualitative analysis is easier than quantitative, right?

Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.

Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.


In this post, we’ll explore qualitative data analysis by looking at some of the most common analysis methods we encounter. We’re not going to cover every possible qualitative method and we’re not going to go into heavy detail – we’re just going to give you the big picture. That said, we will of course include links to loads of extra resources so that you can learn more about whichever analysis method interests you.

Without further delay, let’s get into it.

The “Big 6” Qualitative Analysis Methods 

There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses. We’ll start by outlining the analysis methods and then we’ll dive into the details for each.

The 6 most popular methods (or at least the ones we see at Grad Coach) are:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Thematic analysis
  • Grounded theory (GT)
  • Interpretive phenomenological analysis (IPA)

Let’s take a look at each of them…

QDA Method #1: Qualitative Content Analysis

Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.

With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.

Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes, summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.
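If it helps to see that flow concretely, here’s a rough Python sketch of the code-category-tabulate idea, using a handful of made-up tweet snippets and an invented code-to-category mapping. Real coding is, of course, far more careful and iterative than this.

```python
# A toy sketch: text segments -> codes -> categories -> frequency table.
from collections import Counter

# Hypothetical coded segments (both the snippets and the codes are invented).
coded_segments = [
    {"text": "Kim launched a new brand today",  "code": "business venture"},
    {"text": "Did you see Kylie's post?",        "code": "social media buzz"},
    {"text": "Another Kardashian product drop",  "code": "business venture"},
    {"text": "That interview was everywhere",    "code": "social media buzz"},
    {"text": "The family's new show premieres",  "code": "entertainment"},
]

# Codes rolled up into broader categories (mapping is illustrative only).
code_to_category = {
    "business venture": "commercial activity",
    "social media buzz": "online presence",
    "entertainment": "media appearances",
}

category_counts = Counter(code_to_category[s["code"]] for s in coded_segments)
print(category_counts)
# Counter({'commercial activity': 2, 'online presence': 2, 'media appearances': 1})
```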

Naturally, while content analysis is widely useful, it’s not without its drawbacks. One of the main issues with content analysis is that it can be very time-consuming, as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.

Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations, so don’t be put off by these – just be aware of them!

QDA Method #2: Narrative Analysis 

As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means. Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.

You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives. Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.

Of course, the narrative approach has its weaknesses, too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.

Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions.

QDA Method #3: Discourse Analysis 

Discourse is simply a fancy word for written or spoken language or debate. So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation, a speech, etc – within the culture and society in which it takes place. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.

To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.

So, as you can see, by using discourse analysis, you can identify how culture, history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.

Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast. Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might land up going down a winding rabbit hole.

Discourse analysis can also be very time-consuming, as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method.

QDA Method #4: Thematic Analysis

Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes. These themes help us make sense of the content and derive meaning from it.

Let’s take a look at an example.

With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.
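Just to make that tangible, here’s a very rough sketch that tallies themes across a few made-up reviews. Keep in mind that real thematic analysis relies on interpretive coding of meaning rather than keyword matching, so the keyword lists below are purely a stand-in for a coder’s judgement.

```python
# A toy sketch of tallying themes across (invented) restaurant reviews.
from collections import Counter

reviews = [
    "Fresh ingredients and friendly wait staff!",
    "The wait staff were so friendly, and the fish tasted fresh.",
    "A bit pricey, but everything was fresh.",
]

# Hypothetical themes and the keywords a coder might associate with them.
themes = {
    "fresh ingredients": ["fresh"],
    "friendly wait staff": ["friendly", "wait staff"],
    "price": ["pricey", "expensive"],
}

counts = Counter()
for review in reviews:
    text = review.lower()
    for theme, keywords in themes.items():
        if any(kw in text for kw in keywords):
            counts[theme] += 1          # count reviews mentioning the theme

print(counts)
# Counter({'fresh ingredients': 3, 'friendly wait staff': 2, 'price': 1})
```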

So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences, views, and opinions. Therefore, if your research aims and objectives involve understanding people’s experience or view of something, thematic analysis can be a great choice.

Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop, or even change, as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.


QDA Method #5: Grounded theory (GT) 

Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “tests” and “revisions”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.

What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name). 

Let’s look at an example of GT in action.

Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using grounded theory, you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.

After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to watch a video about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.

From here, you’d look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’d look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop. As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.

So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature. In other words, it’s a bit of a “chicken or the egg” situation.

Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way from the ground up.


QDA Method #6: Interpretive Phenomenological Analysis (IPA)

Interpretive. Phenomenological. Analysis. IPA. Try saying that three times fast…

Let’s just stick with IPA, okay?

IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation. This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.

It’s important to remember that IPA is subject-centred. In other words, it’s focused on the experiencer. This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay as long as it aligns with your research aims and objectives.

Another thing to be aware of with IPA is personal bias. While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.


How to choose the right analysis method

In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “How do I choose the right one?”

Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions. In other words, the best tool for the job depends on what you’re trying to build. For example:

  • Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
  • Perhaps your research aims to develop an understanding of the unique personal experiences of people that have experienced a certain event, or
  • Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.

As you can probably see, each of these research aims is distinctly different, and therefore different analysis methods would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant.

It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect. So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation). Keep in mind though that this will of course be quite time-consuming.

As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.

Start by reviewing your research aims , objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.

No single analysis method is perfect, so it can often make sense to adopt more than one  method (this is called triangulation).

Let’s recap on QDA methods…

In this post, we looked at six popular qualitative data analysis methods:

  • First, we looked at content analysis, a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
  • Then we looked at narrative analysis, which is about analysing how stories are told.
  • Next up was discourse analysis – which is about analysing conversations and interactions.
  • Then we moved on to thematic analysis – which is about identifying themes and patterns.
  • From there, we turned to grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
  • And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.

Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.

If you’re still feeling a bit confused, consider our private coaching service, where we hold your hand through the research process to help you develop your best work.





What is Content Analysis – Steps & Examples

Published by Alvin Nicolas on August 16th, 2021; revised on August 29, 2023

“The content analysis identifies specific words, patterns, concepts, themes, phrases, characters, or sentences within the recorded communication content.”

To conduct content analysis, you need to gather data from multiple sources; the data can take almost any form, including text, audio, or video.

Depending on the requirements of your analysis, you may have to use a primary or secondary form of data, including:

  • Videos
  • Transcripts
  • Images
  • Newspapers
  • Books
  • Literature
  • Biographies
  • Documents
  • Oral statements/conversations
  • Textbooks
  • Encyclopedias
  • Periodicals
  • Social media posts
  • Articles

The Purpose of Content Analysis

Content analysis can serve many objectives; some fundamental ones are given below.

  • To simplify the content.
  • To get a clear, in-depth meaning of the language.
  • To identify the uses of language.
  • To know the impact of language on society.
  • To find out the association of the language with cultures, interpersonal relationships, and communication.
  • To gain an in-depth understanding of the concept.
  • To find out the context, behaviour, and response of the speaker.
  • To analyse the trends and association between the text and multimedia.

When to Use Content Analysis? 

There are many uses of content analysis; some of them are listed below.

Content analysis is used:

  • To represent the content precisely by breaking it into short form.
  • To describe the characteristics of the content.
  • To support an argument.
  • In many walks of life, including marketing, media, and literature.
  • To extract essential information from a large amount of data.

Types of Content Analysis

Content analysis is a broad concept with several types that vary across fields, and researchers from many disciplines adapt it to their needs. Some of the popular types are given below:

1. Relational analysis
   Definition: It helps to understand the associations between concepts in human communication.
   Example: What other words are used next to a given word (or its synonyms) in the communication? What kind of meaning is produced by this group of words?

2. Unobtrusive research
   Definition: A method of studying social behaviour without collecting data directly from the subject group.
   Example: Durkheim’s analysis of suicide.

3. Conceptual analysis
   Definition: It analyses the existence and frequency of concepts in human communication.
   Example: “Smoking can have adverse effects on your health.” Here you can find out how many times the word “smoking” or its synonyms appear in the communication.
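To make the distinction between the first and third types concrete, here is a minimal, hypothetical Python sketch; the sample text, concept term and synonym list are invented for illustration. It counts how often a concept appears (conceptual analysis) and which words occur near it (a rough proxy for relational analysis).

```python
from collections import Counter
import re

# Hypothetical example: how often does "smoking" (or a synonym) appear, and
# which words tend to occur near it? The text and synonym set are invented.
text = (
    "Smoking can have adverse effects on your health. "
    "Many participants said tobacco use helped them cope with stress, "
    "although they knew smoking was harmful."
)
concept_terms = {"smoking", "tobacco", "cigarette", "cigarettes"}

tokens = re.findall(r"[a-z']+", text.lower())

# Conceptual analysis: existence and frequency of the concept.
frequency = sum(1 for t in tokens if t in concept_terms)
print("Concept frequency:", frequency)

# Relational analysis (rough proxy): words within a +/- 3-word window of the concept.
window = 3
co_occurring = Counter()
for i, t in enumerate(tokens):
    if t in concept_terms:
        neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        co_occurring.update(w for w in neighbours if w not in concept_terms)
print("Most common co-occurring words:", co_occurring.most_common(5))
```

In real studies the counting is only a starting point; interpreting what the frequencies and co-occurrences mean remains a qualitative judgement.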


Advantages and Disadvantages of Content Analysis

Content analysis has many benefits, which are given below.

Content analysis:

  • Offers both qualitative and quantitative analysis of the communication.
  • Provides an in-depth understanding of the content by making it precise.
  • Enables us to understand the context and perception of the speaker.
  • Provides insight into complex models of human thought and language use.
  • Provides historical/cultural insight.
  • Can be applied to any time period, place, or group of people.
  • Helps in learning a language, its origins, and its association with society and culture.

Disadvantages

There are also some disadvantages to using content analysis, which are given below:

  • It is very time-consuming.
  • It cannot interpret a large amount of data accurately and is subject to increased error.
  • It cannot be computerised easily.

How to Conduct a Content Analysis?

If you want to conduct a content analysis, here are the steps you need to follow.

Develop a Research Question and Select the Content

It’s essential to have a research question to proceed with your study. After selecting your research question, you need to find the relevant resources to analyse.

Example: Suppose you want to find out the impact of plagiarism on the credibility of authors. You can examine the relevant materials available on the topic from the internet, newspapers, and books published during the past 5–10 years.

Read the Content Thoroughly

At this point, you have to read the content thoroughly until you understand it. 

Condensation

You should break the text into smaller portions for clear interpretation. In short, you have to create categories, or smaller units of text, from a large amount of given data.

The unit of analysis is the basic unit of text to be classified. It can be a word, a phrase, a theme, a plot, or a newspaper article.

Code the Content

It takes a long time to go through textual data. Coding is a way of tagging the data and organising it into a sequence of symbols, numbers, and letters to highlight the relevant points. At this point, you have to draw meanings from those condensed parts, and you need to understand the meaning and context of both the text and the speaker clearly.
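As a purely illustrative sketch (the codebook, keywords and text segments below are invented, and real coding is an interpretive act rather than keyword matching), coded data can be organised as tags attached to text segments:

```python
# Hypothetical codebook and coding pass: each text segment is tagged with the
# codes whose indicative keywords appear in it. This only shows how coded data
# can be organised, not how interpretive coding is actually done.
codebook = {
    "PLAG_AWARENESS": ["plagiarism", "copied", "duplicate"],
    "CREDIBILITY": ["credibility", "trust", "reputation"],
}

segments = [
    "Several authors lost reader trust after plagiarism was detected.",
    "Journals now screen submissions for duplicate text.",
]

coded = []
for seg in segments:
    tags = [code for code, keywords in codebook.items()
            if any(k in seg.lower() for k in keywords)]
    coded.append({"segment": seg, "codes": tags})

for row in coded:
    print(row["codes"], "->", row["segment"])
```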

Analyse and Interpret the Data

You can use statistical analysis to analyse the coded data, that is, to collect, analyse, and interpret it in order to discover underlying patterns and details. Statistics are used in every field to support better decisions. Throughout, aim to retain the meaning of the content while making it precise.
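Continuing the invented example from the previous sketch, a common first analytical step is a simple frequency count of codes across segments:

```python
from collections import Counter

# Continuing the invented coded data from the previous sketch.
coded = [
    {"segment": "Several authors lost reader trust after plagiarism was detected.",
     "codes": ["PLAG_AWARENESS", "CREDIBILITY"]},
    {"segment": "Journals now screen submissions for duplicate text.",
     "codes": ["PLAG_AWARENESS"]},
]

code_counts = Counter(code for row in coded for code in row["codes"])
total_segments = len(coded)

# Simple descriptive statistics: how often each code occurs and in what share of segments.
for code, n in code_counts.most_common():
    print(f"{code}: {n} occurrence(s), {n / total_segments:.0%} of segments")
```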

Frequently Asked Questions

How do you perform content analysis?

To perform content analysis:

  • Define research objectives.
  • Select a representative sample.
  • Develop coding categories.
  • Analyze content systematically.
  • Apply coding to data.
  • Interpret results to draw insights about themes, patterns, and meanings.


Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process

Qualitative content analysis consists of conventional, directed and summative approaches to data analysis. These approaches are used to provide descriptive knowledge and understanding of the phenomenon under study. However, the method underpinning directed qualitative content analysis is insufficiently delineated in the international literature. This paper aims to describe and integrate the process of data analysis in directed qualitative content analysis. Various international databases were used to retrieve articles related to directed qualitative content analysis. A review of the literature led to the integration and elaboration of a stepwise method of data analysis for directed qualitative content analysis. The proposed 16-step method of data analysis in this paper is a detailed description of the analytical steps to be taken in directed qualitative content analysis and addresses the current gap in the international literature regarding the practical process of qualitative data analysis. An example of “the resuscitation team members' motivation for cardiopulmonary resuscitation” based on Victor Vroom's expectancy theory is also presented. The directed qualitative content analysis method proposed in this paper is a reliable, transparent, and comprehensive method for qualitative researchers. It can increase the rigour of qualitative data analysis, make the comparison of the findings of different studies possible and yield practical results.

Introduction

Qualitative content analysis (QCA) is a research approach for the description and interpretation of textual data using the systematic process of coding. The final product of data analysis is the identification of categories, themes and patterns ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ; Zhang and Wildemuth, 2009 ). Researchers in the field of healthcare commonly use QCA for data analysis ( Berelson, 1952 ). QCA has been described and used in the first half of the 20th century ( Schreier, 2014 ). The focus of QCA is the development of knowledge and understanding of the study phenomenon. QCA, as the application of language and contextual clues for making meanings in the communication process, requires a close review of the content gleaned from conducting interviews or observations ( Downe-Wamboldt, 1992 ; Hsieh and Shannon, 2005 ).

QCA is classified into conventional (inductive), directed (deductive) and summative methods ( Hsieh and Shannon, 2005 ; Mayring, 2000 , 2014 ). Inductive QCA, as the most popular approach in data analysis, helps with the development of theories, schematic models or conceptual frameworks ( Elo and Kyngäs, 2008 ; Graneheim and Lundman, 2004 ; Vaismoradi et al., 2013 , 2016 ), which should be refined, tested or further developed by using directed QCA ( Elo and Kyngäs, 2008 ). Directed QCA is a common method of data analysis in healthcare research ( Elo and Kyngäs, 2008 ), but insufficient knowledge is available about how this method is applied ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ). This may hamper the use of directed QCA by novice qualitative researchers and account for a low application of this method compared with the inductive method ( Elo and Kyngäs, 2008 ; Mayring, 2000 ). Therefore, this paper aims to describe and integrate methods applied in directed QCA.

International databases such as PubMed (including Medline), Scopus, Web of Science and ScienceDirect were searched for retrieval of papers related to QCA and directed QCA. Use of keywords such as ‘directed content analysis’, ‘deductive content analysis’ and ‘qualitative content analysis’ led to 13,738 potentially eligible papers. Applying inclusion criteria such as ‘focused on directed qualitative content analysis’ and ‘published in peer-reviewed journals’; and removal of duplicates resulted in 30 papers. However, only two of these papers dealt with the description of directed QCA in terms of the methodological process. Ancestry and manual searches within these 30 papers revealed the pioneers of the description of this method in international literature. A further search for papers published by the method's pioneers led to four more papers and one monograph dealing with directed QCA ( Figure 1 ).

Figure 1. The search strategy for the identification of papers.

Finally, the authors of this paper integrated and elaborated a comprehensive and stepwise method of directed QCA based on the commonalities of methods discussed in the included papers. Also, the experiences of the current authors in the field of qualitative research were incorporated into the suggested stepwise method of data analysis for directed QCA ( Table 1 ).

Table 1. The suggested steps for directed content analysis.

Preparation phase
 1. Acquiring the necessary general skills
 2. Selecting the appropriate sampling strategy (inferred by the authors of the present paper)
 3. Deciding on the analysis of manifest and/or latent content
 4. Developing an interview guide (inferred by the authors of the present paper)
 5. Conducting and transcribing interviews
 6. Specifying the unit of analysis
 7. Being immersed in data

Organisation phase
 8. Developing a formative categorisation matrix (inferred by the authors of the present paper)
 9. Theoretically defining the main categories and subcategories
 10. Determining coding rules for main categories
 11. Pre-testing the categorisation matrix (inferred by the authors of the present paper)
 12. Choosing and specifying the anchor samples for each main category
 13. Performing the main data analysis
 14. Inductive abstraction of main categories from preliminary codes
 15. Establishment of links between generic categories and main categories (suggested by the authors of the present paper)

Reporting phase
 16. Reporting all steps of directed content analysis and findings

While the included papers about directed QCA were among the most cited in the international literature, none of them provided sufficient detail on how to conduct the data analysis process. This might hamper the use of this method by novice qualitative researchers and hinder its application by nurse researchers compared with inductive QCA. As can be seen in Figure 1, the search resulted in five articles that explain the directed QCA method. A description of these articles follows, along with their strengths and weaknesses. The authors used the strengths in their suggested method, as shown in Table 1.

The methods suggested for directed QCA in the international literature

The method suggested by Hsieh and Shannon (2005)

Hsieh and Shannon (2005) developed two strategies for conducting directed QCA. The first strategy consists of reading textual data and highlighting those parts of the text that, on first impression, appeared to be related to the predetermined codes dictated by a theory or prior research findings. Next, the highlighted texts would be coded using the predetermined codes.

As for the second strategy, the only difference lay in starting the coding process without first highlighting the text. In both analysis strategies, the qualitative researcher should return to the text and perform reanalysis after the initial coding process ( Hsieh and Shannon, 2005 ). However, the current authors believe that this second strategy provides an opportunity for recognising missing texts related to the predetermined codes and also newly emerged ones. It also enhances the trustworthiness of findings.
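As a rough, hypothetical illustration of the first (highlight-then-code) strategy, a first pass might be organised as below; the predetermined codes, keyword cues and transcript lines are invented, and actual directed coding relies on researcher judgement rather than keyword matching.

```python
# Hypothetical predetermined codes derived from a prior theory, with indicative
# keyword cues used only to flag ("highlight") candidate passages for a human coder.
predetermined_codes = {
    "EXPECTANCY": ["expect", "chance of success", "probability"],
    "VALENCE": ["value", "matters to me", "important to me"],
}

transcript = [
    "I expect the resuscitation to fail when the patient has advanced cancer.",
    "Saving a young patient is something that really matters to me.",
    "We were short-staffed that night.",
]

highlighted, uncoded = [], []
for line in transcript:
    hits = [code for code, cues in predetermined_codes.items()
            if any(cue in line.lower() for cue in cues)]
    if hits:
        highlighted.append((line, hits))   # candidate passages for directed coding
    else:
        uncoded.append(line)               # set aside; may suggest new categories later

print("Highlighted for coding:", highlighted)
print("To revisit for possible new categories:", uncoded)
```

Keeping the uncoded passages, as in the second list above, mirrors the reanalysis step that both strategies require.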

As an important part of the method suggested by Hsieh and Shannon (2005) , the term ‘code’ was used for the different levels of abstraction, but a more precise definition of this term seems to be crucial. For instance, they stated that ‘data that cannot be coded are identified and analyzed later to determine if they represent a new category or a subcategory of an existing code’ (2005: 1282).

It seems that the first ‘code’ in the above sentence indicates the lowest level of abstraction that could be achieved instantly from raw data. However, the ‘code’ at the end of the sentence refers to a higher level of abstraction, because it denotes a category or subcategory.

Furthermore, the interchangeable and inconsistent use of the words ‘predetermined code’ and ‘category’ could be confusing to novice qualitative researchers. Moreover, Hsieh and Shannon (2005) did not specify exactly which parts of the text, whether highlighted, coded or the whole text, should be considered during the reanalysis of the text after the initial coding process. Such a lack of specification runs the risk of missing the content during the initial coding process, especially if the second review of the text is restricted to highlighted sections. One final important omission in this method is the lack of an explicit description of the process through which new codes emerge during the reanalysis of the text. Such a clarification is crucial, because the detection of subtle links between newly emerging codes and the predetermined ones is not straightforward.

The method suggested by Elo and Kyngäs (2008)

Elo and Kyngäs (2008) suggested ‘structured’ and ‘unconstrained’ methods or paths for directed QCA. Accordingly, after determining the ‘categorisation matrix’ as the framework for data collection and analysis during the study process, the whole content would be reviewed and coded. The use of the unconstrained matrix allows the development of some categories inductively by using the steps of ‘grouping’, ‘categorisation’ and ‘abstraction’. The use of a structured method requires a structured matrix upon which data are strictly coded. Hypotheses suggested by previous studies are often tested using this method ( Elo and Kyngäs, 2008 ).
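The practical difference between the two paths can be sketched with a small, hypothetical data structure (the category names and text segments are invented): a structured matrix only admits text that fits the predefined categories, while an unconstrained matrix lets new categories be added inductively.

```python
# Hypothetical categorisation matrix with two modes of use.
matrix = {"expectancy": [], "instrumentality": [], "valence": []}

def assign(segment, category, structured=True):
    """Place a coded segment into the matrix.

    structured=True  -> segments that do not fit an existing category are not chosen
    structured=False -> a new, inductively derived category is created instead
    """
    if category in matrix:
        matrix[category].append(segment)
    elif not structured:
        matrix[category] = [segment]   # unconstrained path: the matrix grows

assign("I doubt CPR will work for terminal patients", "expectancy")
assign("My colleagues' respect matters a lot to me", "team recognition", structured=False)
print(matrix)
```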

The current authors believe that the label of ‘data gathering by the content’ (p. 110) in the unconstrained matrix path can be misleading. It refers to the data coding step rather than data collection. Also, in the description of the structured path there is an obvious discrepancy with regard to the selection of the portions of the content that fit or do not fit the matrix: ‘… if the matrix is structured, only aspects that fit the matrix of analysis are chosen from the data …’; ‘… when using a structured matrix of analysis, it is possible to choose either only the aspects from the data that fit the categorization frame or, alternatively, to choose those that do not’ ( Elo and Kyngäs, 2008 : 111–112).

Figure 1 in Elo and Kyngäs's paper ( 2008 : 110) clearly distinguished between the structured and unconstrained paths. On the other hand, the first sentence in the above quotation clearly explained the use of the structured matrix, but it was not clear whether the second sentence referred to the use of the structured or unconstrained matrix.

The method suggested by Zhang and Wildemuth (2009)

Considering the method suggested by Hsieh and Shannon (2005) , Zhang and Wildemuth (2009) suggested an eight-step method as follows: (1) preparation of data, (2) definition of the unit of analysis, (3) development of categories and the coding scheme, (4) testing the coding scheme in a text sample, (5) coding the whole text, (6) assessment of the coding's consistency, (7) drawing conclusions from the coded data, and (8) reporting the methods and findings ( Zhang and Wildemuth, 2009 ). Only in the third step of this method, the description of the process of category development, did Zhang and Wildemuth (2009) briefly make a distinction between the inductive versus deductive content analysis methods. On first impression, the only difference between the two approaches seems to be the origin from which categories are developed. In addition, the process of connecting the preliminary codes extracted from raw data with predetermined categories is not clearly described. Furthermore, it is not clear whether this linking should be established from categories to primary codes, or vice versa.

The method suggested by Mayring ( 2000 , 2014 )

Mayring ( 2000 , 2014 ) suggested a seven-step method for directed QCA that distinctively differentiated between inductive and deductive methods as follows: (1) determination of the research question and theoretical background, (2) definition of the category system, such as main categories and subcategories, based on the previous theory and research, (3) establishing a guideline for coding, considering definitions, anchor examples and coding rules, (4) reading the whole text, determining preliminary codes, adding anchor examples and coding rules, (5) revision of the category and coding guideline after working through 10–50% of the data, (6) reworking data if needed, or listing the final category, and (7) analysing and interpreting based on the category frequencies and contingencies.

Mayring suggested that coding rules should be defined to distinctly assign parts of the text to a particular category. Furthermore, it was recommended that, when describing each category, researchers indicate which concrete parts of the text serve as typical examples of it, also known as ‘anchor samples’ ( Mayring, 2000 , 2014 ). The current authors believe that these suggestions help clarify directed QCA and enhance its trustworthiness.
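One hypothetical way to keep such a coding guideline explicit is a simple record per category; the definition, coding rule and anchor sample below paraphrase the CPR example used later in this paper and are illustrative only.

```python
from dataclasses import dataclass

# Hypothetical structure for one entry of a Mayring-style coding guideline.
@dataclass
class CodingGuidelineEntry:
    category: str
    definition: str
    coding_rule: str
    anchor_sample: str

expectancy = CodingGuidelineEntry(
    category="Expectancy",
    definition=("Subjective probability that an individual's effort leads to an "
                "acceptable level of performance or to the desired outcome."),
    coding_rule=("Assign only if the rescuer expresses a perceived effort-performance "
                 "or effort-outcome association."),
    anchor_sample=("'... the patient with advanced metastatic cancer who requires CPR "
                   "... I do not envision a successful resuscitation for him.'"),
)

print(expectancy.category, "-", expectancy.coding_rule)
```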

When the term ‘preliminary coding’ was used, however, Mayring ( 2000 , 2014 ) did not clarify whether these codes are created inductively or deductively. In addition, Mayring was inclined to apply the quantitative approach implicitly in steps 5 and 7, which is incongruent with the qualitative paradigm. Furthermore, nothing was stated about the possibility of the development of new categories from the textual material: ‘… theoretical considerations can lead to a further categories or rephrasing of categories from previous studies, but the categories are not developed out of the text material like in inductive category formation …’ ( Mayring, 2014 : 97).

Integration and clarification of methods for directed QCA

Directed QCA took different paths when the categorisation matrix contained concepts with higher-level versus lower-level abstractions. In matrices with low abstraction levels, linking raw data to predetermined categories was not difficult, and suggested methods in international nursing literature seem appropriate and helpful. For instance, Elo and Kyngäs (2008) introduced ‘mental well-being threats’ based on the categories of ‘dependence’, ‘worries’, ‘sadness’ and ‘guilt’. Hsieh and Shannon (2005) developed the categories of ‘denial’, ‘anger’, ‘bargaining’, ‘depression’ and ‘acceptance’ when elucidating the stages of grief. Therefore, the low-level abstractions easily could link raw data to categories. The predicament of directed QCA began when the categorisation matrix contained the concepts with high levels of abstraction. The gap regarding how to connect the highly abstracted categories to the raw data should be bridged by using a transparent and comprehensive analysis strategy. Therefore, the authors of this paper integrated the methods of directed QCA outlined in the international literature and elaborated them using the phases of ‘preparation’, ‘organization’ and ‘reporting’ proposed by Elo and Kyngäs (2008) . Also, the experiences of the current authors in the field of qualitative research were incorporated into their suggested stepwise method of data analysis. The method was presented using the example of the “team members’ motivation for cardiopulmonary resuscitation (CPR)” based on Victor Vroom's expectancy theory ( Assarroudi et al., 2017 ). In this example, interview transcriptions were considered as the unit of analysis, because interviews are the most common method of data collection in qualitative studies ( Gill et al., 2008 ).

Suggested method of directed QCA by the authors of this paper

This method consists of 16 steps and three phases, described below: preparation phase (steps 1–7), organisation phase (steps 8–15), and reporting phase (step 16).

The preparation phase:

  • The acquisition of general skills . In the first step, qualitative researchers should develop skills including self-critical thinking, analytical abilities, continuous self-reflection, sensitive interpretive skills, creative thinking, scientific writing, data gathering and self-scrutiny ( Elo et al., 2014 ). Furthermore, they should attain sufficient scientific and content-based mastery of the method chosen for directed QCA. In the proposed example, qualitative researchers can achieve this mastery through conducting investigations in original sources related to Victor Vroom's expectancy theory. Main categories pertaining to Victor Vroom's expectancy theory were ‘expectancy’, ‘instrumentality’ and ‘valence’. This theory defined ‘expectancy’ as the perceived probability that efforts could lead to good performance. ‘Instrumentality’ was the perceived probability that good performance led to desired outcomes. ‘Valence’ was the value that the individual personally placed on outcomes ( Vroom, 1964 , 2005 ).
  • Selection of the appropriate sampling strategy . Qualitative researchers need to select the proper sampling strategies that facilitate an access to key informants on the study phenomenon ( Elo et al., 2014 ). Sampling methods such as purposive, snowball and convenience methods ( Coyne, 1997 ) can be used with the consideration of maximum variations in terms of socio-demographic and phenomenal characteristics ( Sandelowski, 1995 ). The sampling process ends when information ‘redundancy’ or ‘saturation’ is reached. In other words, it ends when all aspects of the phenomenon under study are explored in detail and no additional data are revealed in subsequent interviews ( Cleary et al., 2014 ). In line with this example, nurses and physicians who are the members of the CPR team should be selected, given diversity in variables including age, gender, the duration of work, number of CPR procedures, CPR in different patient groups and motivation levels for CPR.
  • Deciding on the analysis of manifest and/or latent content . Qualitative researchers decide whether the manifest and/or latent contents should be considered for analysis based on the study's aim. The manifest content is limited to the transcribed interview text, but latent content includes both the researchers' interpretations of available text, and participants' silences, pauses, sighs, laughter, posture, etc. ( Elo and Kyngäs, 2008 ). Both types of content are recommended to be considered for data analysis, because a deep understanding of data is preferred for directed QCA ( Thomas and Magilvy, 2011 ).
  • Developing an interview guide . The interview guide contains open-ended questions based on the study's aims, followed by directed questions about main categories extracted from the existing theory or previous research ( Hsieh and Shannon, 2005 ). Directed questions guide how to conduct interviews when using directed or conventional methods. The following open-ended and directed questions were used in this example: An open-ended question was ‘What is in your mind when you are called for performing CPR?’ The directed question for the main category of ‘expectancy’ could be ‘How does the expectancy of the successful CPR procedure motivate you to resuscitate patients?’
  • Conducting and transcribing interviews . An interview guide is used to conduct interviews for directed QCA. After each interview session, the entire interview is transcribed verbatim immediately ( Poland, 1995 ) and with utmost care ( Seidman, 2013 ). Two recorders should be used to ensure data backup ( DiCicco-Bloom and Crabtree, 2006 ). (For more details concerning skills required for conducting successful qualitative interviews, see Edenborough, 2002 ; Kramer, 2011 ; Schostak, 2005 ; Seidman, 2013 ).
  • Specifying the unit of analysis . The unit of analysis may include the person, a program, an organisation, a class, community, a state, a country, an interview, or a diary written by the researchers ( Graneheim and Lundman, 2004 ). The transcriptions of interviews are usually considered units of analysis when data are collected using interviews. In this example, interview transcriptions and field notes are considered as the units of analysis.
  • Immersion in data . The transcribed interviews are read and reviewed several times with the consideration of the following questions: ‘Who is telling?’, ‘Where is this happening?’, ‘When did it happen?’, ‘What is happening?’, and ‘Why?’ ( Elo and Kyngäs, 2008 ). These questions help researchers get immersed in data and become able to extract related meanings ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ).

The organisation phase:

Table 2. The categorisation matrix of the team members' motivation for CPR.

Motivation for CPR: Expectancy | Instrumentality | Valence | Other inductively emerged categories

CPR: cardiopulmonary resuscitation.

  • Theoretical definition of the main categories and subcategories . Derived from the existing theory or previous research, the theoretical definitions of categories should be accurate and objective ( Mayring, 2000 , 2014 ). As for this example, ‘expectancy’ as a main category could be defined as the “subjective probability that the efforts by an individual led to an acceptable level of performance (effort–performance association) or to the desired outcome (effort–outcome association)” ( Van Eerde and Thierry, 1996 ; Vroom, 1964 ).
  • – Expectancy in the CPR was a subjective probability formed in the rescuer's mind.
  • – This subjective probability should be related to the association between the effort–performance or effort–outcome relationship perceived by the rescuer.
  • The pre-testing of the categorisation matrix . The categorisation matrix should be tested using a pilot study. This is an essential step, particularly if more than one researcher is involved in the coding process. In this step, qualitative researchers should independently and tentatively encode the text, and discuss the difficulties in the use of the categorisation matrix and differences in the interpretations of the unit of analysis. The categorisation matrix may be further modified as a result of such discussions ( Elo et al., 2014 ). This also can increase inter-coder reliability ( Vaismoradi et al., 2013 ) and the trustworthiness of the study (a simple agreement check is sketched after this list).
  • Choosing and specifying the anchor samples for each main category . An anchor sample is an explicit and concise exemplification, or the identifier of a main category, selected from meaning units ( Mayring, 2014 ). An anchor sample for ‘expectancy’ as the main category of this example could be as follows: ‘… the patient with advanced metastatic cancer who requires CPR … I do not envision a successful resuscitation for him.’
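As referenced in the pre-testing step above, a minimal, hypothetical check of inter-coder agreement during the pilot might look like this; the coded segments are invented, and percentage agreement is only a rough indicator (more robust coefficients such as Cohen's kappa are not shown).

```python
# Hypothetical pre-test: two researchers independently code the same five
# segments against the categorisation matrix and compare their assignments.
coder_a = ["expectancy", "valence", "expectancy", "instrumentality", "valence"]
coder_b = ["expectancy", "valence", "instrumentality", "instrumentality", "valence"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
agreement_rate = agreements / len(coder_a)
print(f"Percentage agreement: {agreement_rate:.0%}")   # 80% in this invented example
```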

Table 3. An example of steps taken for the abstraction of the phenomenon of expectancy (main category).

Columns: Meaning unit | Summarised meaning unit | Preliminary code | Group of codes | Subcategory | Generic category | Main category

Worked row:
  • Meaning unit: “The patient with advanced heart failure: I do not envisage a successful resuscitation for him.”
  • Summarised meaning unit: No expectation for the resuscitation of those with advanced heart failure.
  • Preliminary code: Cardiovascular conditions that decrease the chance of successful resuscitation.
  • Group of codes: Estimation of the functional capacity of vital organs.
  • Subcategory: Scientific estimation of life capacity.
  • Generic category: Estimation of the chances of successful CPR.
  • Main category: Expectancy.

Further meaning units contributing to the same main category:
  • “Patients are rarely resuscitated, especially those who experience a cardiogenic shock following a heart attack” – Low possibility of resuscitation of patients with a cardiogenic shock.
  • “When ventricular fibrillation is likely, a chance of resuscitation still exists even after performing CPR for 30 minutes” – The higher chance of resuscitation among patients with ventricular fibrillation (cardiovascular conditions that increase the chance of successful resuscitation).
  • “Patients with sudden cardiac arrest are more likely to be resuscitated through CPR” – The higher chance of resuscitation among patients with sudden cardiac arrest.

Other groups of codes and subcategories feeding into ‘Expectancy’: estimation of the severity of the patient's complications; estimation of remaining life span; intuitive estimation of the chances of successful resuscitation; uncertainty in the estimation; time considerations in resuscitation; estimation of self-efficacy.

CPR: cardiopulmonary resuscitation.

  • The inductive abstraction of main categories from preliminary codes . Preliminary codes are grouped and categorised according to their meanings, similarities and differences. The products of this categorisation process are known as ‘generic categories’ ( Elo and Kyngäs, 2008 ) ( Table 3 ).
  • The establishment of links between generic categories and main categories . The constant comparison of generic categories and main categories results in the development of a conceptual and logical link between generic and main categories, nesting generic categories into the pre-existing main categories and creating new main categories. The constant comparison technique is applied to data analysis throughout the study ( Zhang and Wildemuth, 2009 ) ( Table 3 ).

The reporting phase:

  • Reporting all steps of directed QCA and findings . This includes a detailed description of the data analysis process and the enumeration of findings ( Elo and Kyngäs, 2008 ). Findings should be systematically presented in such a way that the association between the raw data and the categorisation matrix is clearly shown and easily followed. Detailed descriptions of the sampling process, data collection, analysis methods and participants' characteristics should be presented. The trustworthiness criteria adopted along with the steps taken to fulfil them should also be outlined. Elo et al. (2014) developed a comprehensive and specific checklist for reporting QCA studies.

Trustworthiness

Multiple terms are used in the international literature regarding the validation of qualitative studies ( Creswell, 2013 ). The terms ‘validity’, ‘reliability’, and ‘generalizability’ in quantitative studies are equivalent to ‘credibility’, ‘dependability’, and ‘transferability’ in qualitative studies, respectively ( Polit and Beck, 2013 ). These terms, along with the additional concept of confirmability, were introduced by Lincoln and Guba (1985) . Polit and Beck added the term ‘authenticity’ to the list. Collectively, they are the different aspects of trustworthiness in all types of qualitative studies ( Polit and Beck, 2013 ).

To enhance the trustworthiness of the directed QCA study, researchers should thoroughly delineate the three phases of ‘preparation’, ‘organization’, and ‘reporting’ ( Elo et al., 2014 ). Such phases are needed to show in detail how categories are developed from data ( Elo and Kyngäs, 2008 ; Graneheim and Lundman, 2004 ; Vaismoradi et al., 2016 ). To accomplish this, appendices, tables and figures may be used to depict the reduction process ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ). Furthermore, an honest account of different realities during data analysis should be provided ( Polit and Beck, 2013 ). The authors of this paper believe that adopting this 16-step method can enhance the trustworthiness of directed QCA.

Directed QCA is used to validate, refine and/or extend a theory or theoretical framework in a new context ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ). The purpose of this paper is to provide a comprehensive, systematic, yet simple and applicable method for directed QCA to facilitate its use by novice qualitative researchers.

Despite the current misconceptions regarding the simplicity of QCA and directed QCA, knowledge development is required for conducting them ( Elo and Kyngäs, 2008 ). Directed QCA is often performed on a considerable amount of textual data ( Pope et al., 2000 ). Nevertheless, few studies have discussed the multiple steps that need to be taken to conduct it. In this paper, we have integrated and elaborated the essential steps pointed to by international qualitative researchers on directed QCA, such as ‘preliminary coding’, ‘theoretical definition’ ( Mayring, 2000 , 2014 ), ‘coding rule’, ‘anchor sample’ ( Mayring, 2014 ), ‘inductive analysis in directed qualitative content analysis’ ( Elo and Kyngäs, 2008 ), and ‘pretesting the categorization matrix’ ( Elo et al., 2014 ). Moreover, the authors have added a detailed discussion regarding ‘the use of inductive abstraction’ and ‘linking between generic categories and main categories’.

The importance of directed QCA has increased owing to the development of knowledge and theories derived from QCA using the inductive approach, and the growing need to test these theories. The directed QCA method proposed in this paper is a reliable, transparent and comprehensive method that may increase the rigour of data analysis, allow the comparison of the findings of different studies, and yield practical results.

Abdolghader Assarroudi (PhD, MScN, BScN) is Assistant Professor in Nursing, Department of Medical‐Surgical Nursing, School of Nursing and Midwifery, Sabzevar University of Medical Sciences, Sabzevar, Iran. His main areas of research interest are qualitative research, instrument development study and cardiopulmonary resuscitation.

Fatemeh Heshmati Nabavi (PhD, MScN, BScN) is Assistant Professor in nursing, Department of Nursing Management, School of Nursing and Midwifery, Mashhad University of Medical Sciences, Mashhad, Iran. Her main areas of research interest are medical education, nursing management and qualitative study.

Mohammad Reza Armat (MScN, BScN) graduated from the Mashhad University of Medical Sciences in 1991 with a Bachelor of Science degree in nursing. He completed his Master of Science degree in nursing at Tarbiat Modarres University in 1995. He is an instructor in North Khorasan University of Medical Sciences, Bojnourd, Iran. Currently, he is a PhD candidate in nursing at the Mashhad School of Nursing and Midwifery, Mashhad University of Medical Sciences, Iran.

Abbas Ebadi (PhD, MScN, BScN) is professor in nursing, Behavioral Sciences Research Centre, School of Nursing, Baqiyatallah University of Medical Sciences, Tehran, Iran. His main areas of research interest are instrument development and qualitative study.

Mojtaba Vaismoradi (PhD, MScN, BScN) is a doctoral nurse researcher at the Faculty of Nursing and Health Sciences, Nord University, Bodø, Norway. He works in Nord’s research group ‘Healthcare Leadership’ under the supervision of Prof. Terese Bondas. For now, this team has focused on conducting meta‐synthesis studies with the collaboration of international qualitative research experts. His main areas of research interests are patient safety, elderly care and methodological issues in qualitative descriptive approaches. Mojtaba is the associate editor of BMC Nursing and journal SAGE Open in the UK.

Key points for policy, practice and/or research

  • In this paper, essential steps pointed to by international qualitative researchers in the field of directed qualitative content analysis were described and integrated.
  • A detailed discussion regarding the use of inductive abstraction, and linking between generic categories and main categories, was presented.
  • A 16-step method of directed qualitative content analysis proposed in this paper is a reliable, transparent, comprehensive, systematic, yet simple and applicable method. It can increase the rigour of data analysis and facilitate its use by novice qualitative researchers.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

The author(s) received no financial support for the research, authorship, and/or publication of this article.

  • Assarroudi A, Heshmati Nabavi F, Ebadi A, et al.(2017) Professional rescuers' experiences of motivation for cardiopulmonary resuscitation: A qualitative study . Nursing & Health Sciences . 19(2): 237–243. [ PubMed ] [ Google Scholar ]
  • Berelson B. (1952) Content Analysis in Communication Research , Glencoe, IL: Free Press. [ Google Scholar ]
  • Cleary M, Horsfall J, Hayter M. (2014) Data collection and sampling in qualitative research: Does size matter? Journal of Advanced Nursing 70 ( 3 ): 473–475. [ PubMed ] [ Google Scholar ]
  • Coyne IT. (1997) Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? Journal of Advanced Nursing 26 ( 3 ): 623–630. [ PubMed ] [ Google Scholar ]
  • Creswell JW. (2013) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches , 4th edn. Thousand Oaks, CA: SAGE Publications. [ Google Scholar ]
  • DiCicco-Bloom B, Crabtree BF. (2006) The qualitative research interview . Medical Education 40 ( 4 ): 314–321. [ PubMed ] [ Google Scholar ]
  • Downe-Wamboldt B. (1992) Content analysis: Method, applications, and issues . Health Care for Women International 13 ( 3 ): 313–321. [ PubMed ] [ Google Scholar ]
  • Edenborough R. (2002) Effective Interviewing: A Handbook of Skills and Techniques , 2nd edn. London: Kogan Page. [ Google Scholar ]
  • Elo S, Kyngäs H. (2008) The qualitative content analysis process . Journal of Advanced Nursing 62 ( 1 ): 107–115. [ PubMed ] [ Google Scholar ]
  • Elo S, Kääriäinen M, Kanste O, et al.(2014) Qualitative content analysis: A focus on trustworthiness . SAGE Open 4 ( 1 ): 1–10. [ Google Scholar ]
  • Gill P, Stewart K, Treasure E, et al.(2008) Methods of data collection in qualitative research: Interviews and focus groups . British Dental Journal 204 ( 6 ): 291–295. [ PubMed ] [ Google Scholar ]
  • Graneheim UH, Lundman B. (2004) Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness . Nurse Education Today 24 ( 2 ): 105–112. [ PubMed ] [ Google Scholar ]
  • Hsieh H-F, Shannon SE. (2005) Three approaches to qualitative content analysis . Qualitative Health Research 15 ( 9 ): 1277–1288. [ PubMed ] [ Google Scholar ]
  • Kramer EP. (2011) 101 Successful Interviewing Strategies , Boston, MA: Course Technology, Cengage Learning. [ Google Scholar ]
  • Lincoln YS, Guba EG. (1985) Naturalistic Inquiry , Beverly Hills, CA: SAGE Publications. [ Google Scholar ]
  • Mayring P. (2000) Qualitative Content Analysis . Forum: Qualitative Social Research 1 ( 2 ): Available at: http://www.qualitative-research.net/fqs-texte/2-00/02-00mayring-e.htm (accessed 10 March 2005). [ Google Scholar ]
  • Mayring P. (2014) Qualitative content analysis: Theoretical foundation, basic procedures and software solution , Klagenfurt: Monograph. Available at: http://nbn-resolving.de/urn:nbn:de:0168-ssoar-395173 (accessed 10 May 2015). [ Google Scholar ]
  • Poland BD. (1995) Transcription quality as an aspect of rigor in qualitative research . Qualitative Inquiry 1 ( 3 ): 290–310. [ Google Scholar ]
  • Polit DF, Beck CT. (2013) Essentials of Nursing Research: Appraising Evidence for Nursing Practice , 7th edn. China: Lippincott Williams & Wilkins. [ Google Scholar ]
  • Pope C, Ziebland S, Mays N. (2000) Analysing qualitative data . BMJ 320 ( 7227 ): 114–116. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1995) Sample size in qualitative research . Research in Nursing & Health 18 ( 2 ): 179–183. [ PubMed ] [ Google Scholar ]
  • Schostak J. (2005) Interviewing and Representation in Qualitative Research , London: McGraw-Hill/Open University Press. [ Google Scholar ]
  • Schreier M. (2014) Qualitative content analysis . In: Flick U. (ed.) The SAGE Handbook of Qualitative Data Analysis , Thousand Oaks, CA: SAGE Publications Ltd, pp. 170–183. [ Google Scholar ]
  • Seidman I. (2013) Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences , 3rd edn. New York: Teachers College Press. [ Google Scholar ]
  • Thomas E, Magilvy JK. (2011) Qualitative rigor or research validity in qualitative research . Journal for Specialists in Pediatric Nursing 16 ( 2 ): 151–155. [ PubMed ] [ Google Scholar ]
  • Vaismoradi M, Jones J, Turunen H, et al.(2016) Theme development in qualitative content analysis and thematic analysis . Journal of Nursing Education and Practice 6 ( 5 ): 100–110. [ Google Scholar ]
  • Vaismoradi M, Turunen H, Bondas T. (2013) Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study . Nursing & Health Sciences 15 ( 3 ): 398–405. [ PubMed ] [ Google Scholar ]
  • Van Eerde W, Thierry H. (1996) Vroom's expectancy models and work-related criteria: A meta-analysis . Journal of Applied Psychology 81 ( 5 ): 575. [ Google Scholar ]
  • Vroom VH. (1964) Work and Motivation , New York: Wiley. [ Google Scholar ]
  • Vroom VH. (2005) On the origins of expectancy theory . In: Smith KG, Hitt MA. (eds) Great Minds in Management: The Process of Theory Development , Oxford: Oxford University Press, pp. 239–258. [ Google Scholar ]
  • Zhang Y, Wildemuth BM. (2009) Qualitative analysis of content . In: Wildemuth B. (ed.) Applications of Social Research Methods to Questions in Information and Library Science , Westport, CT: Libraries Unlimited, pp. 308–319. [ Google Scholar ]


Book Title: Graduate research methods in social work

Subtitle: A project-based approach

Authors: Matthew DeCarlo; Cory Cummings; and Kate Agnelli


Book Description: Our textbook guides graduate social work students step by step through the research process from conceptualization to dissemination. We center cultural humility, information literacy, pragmatism, and ethics and values as core components of social work research.

Book Information

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Social work


Iran is targeting the U.S. election with fake news sites and cyber operations, research says

Iran is stepping up its influence campaign aimed at the U.S., researchers at Microsoft said in a new report , adding to the ongoing efforts by Russia and China to sway American public opinion before the presidential election. 

Researchers identified websites that they attributed to the Iranian operation, aimed at voters on the political left and right. One website, Nio Thinker, bills itself as “your go-to destination for insightful, progressive news and analysis that challenges the status quo” and hosts articles that bash former President Donald Trump and hail Vice President Kamala Harris as “our unexpected, awkward savior.” 

Another site identified by researchers, Savannah Time, poses as a voicey conservative local alt-weekly. “We’re opinionated, we’re noisy, and we’re having a good time,” the about section of the site says. It hosts articles claiming to be written by “the spokeswoman for the International League for Women’s Rights,” arguing for more modest Olympics beach volleyball bathing suits, next to articles lauding Iran’s military might. 

The Microsoft Threat Analysis Center noted the sites were likely using artificial intelligence tools to lift content from legitimate U.S. news publications and repackage articles in a way that hides the content’s source. 

The group behind the sites, according to Microsoft, is part of a larger Iranian operation, active since 2020, that operates more than a dozen other fake news sites targeting English-, French-, Spanish- and Arabic-speaking audiences. The campaign has not found significant success with a U.S. audience, and the sites’ content has not been shared widely on social media, according to the researchers. But researchers say the sites could be used closer to the election. 

Beyond the effort to sow controversy and divide Americans before the vote, researchers said another group linked to the Islamic Revolutionary Guard Corps targeted a “high-ranking official on a presidential campaign” in June with a spear-phishing email from a compromised email account of a former senior adviser and attempted to access an account belonging to “a former presidential candidate.” The report did not name the people who had been targeted. 

Iran’s United Nations mission did not immediately respond to a request for comment, but it denied reports of meddling in a statement to The Associated Press: “Iran has been the victim of numerous offensive cyber operations targeting its infrastructure, public service centers, and industries. Iran’s cyber capabilities are defensive and proportionate to the threats it faces. Iran has neither the intention nor plans to launch cyber attacks. The U.S. presidential election is an internal matter in which Iran does not interfere.”

Microsoft’s report also noted continued activity by Russia, including an operation by a group researchers call Storm-1516, which produces propaganda videos in support of Trump and Russian interests and distributes them through a network of fake news websites connected to a former U.S. police officer. China-linked actors, the report said, had also pivoted increasingly to spreading propaganda via video and had leveraged a network of online accounts to stoke outrage around pro-Palestinian university protests.

The researchers reported an expectation that Iran, along with China and Russia, would intensify cyberattacks against candidates and institutions and increase efforts to divide Americans with propaganda and disinformation in the run-up to the election.


Brandy Zadrozny is a senior reporter for NBC News. She covers misinformation, extremism and the internet.

Volume 78, Issue 9

Estimated changes in free sugar consumption one year after the UK soft drinks industry levy came into force: controlled interrupted time series analysis of the National Diet and Nutrition Survey (2011–2019)

  • http://orcid.org/0000-0003-1857-2122 Nina Trivedy Rogers 1 ,
  • http://orcid.org/0000-0002-3957-4357 Steven Cummins 2 ,
  • Catrin P Jones 1 ,
  • Oliver Mytton 3 ,
  • Mike Rayner 4 ,
  • Harry Rutter 5 ,
  • Martin White 1 ,
  • Jean Adams 1
  • 1 MRC Epidemiology Unit, University of Cambridge School of Clinical Medicine, Institute of Metabolic Science, Cambridge Biomedical Campus, University of Cambridge, Cambridge, UK
  • 2 Department of Public Health, Environments & Society, London School of Hygiene & Tropical Medicine, London, UK
  • 3 Great Ormond Street Institute of Child Health, University College London, London, UK
  • 4 Nuffield Department of Population Health, University of Oxford, Oxford, UK
  • 5 Department of Social and Policy Sciences, University of Bath, Bath, UK
  • Correspondence to Dr Nina Trivedy Rogers, MRC Epidemiology Unit, University of Cambridge School of Clinical Medicine, Institute of Metabolic Science, Cambridge Biomedical Campus, University of Cambridge, Cambridge, CB2 1TN, UK; nina.rogers{at}mrc-epid.cam.ac.uk

Background The UK soft drinks industry levy (SDIL) was announced in March 2016 and implemented in April 2018, encouraging manufacturers to reduce the sugar content of soft drinks. This is the first study to investigate changes in individual-level consumption of free sugars in relation to the SDIL.

Methods We used controlled interrupted time series (2011–2019) to explore changes in the consumption of free sugars in the whole diet and from soft drinks alone 11 months after SDIL implementation in a nationally representative sample of adults (>18 years; n=7999) and children (1.5–19 years; n=7656) drawn from the UK National Diet and Nutrition Survey. Estimates were based on differences between observed data and a counterfactual scenario of no SDIL announcement/implementation. Models included protein consumption (control) and accounted for autocorrelation.

Results Accounting for trends prior to the SDIL announcement, there were absolute reductions in the daily consumption of free sugars from the whole diet in children and adults of 4.8 g (95% CI 0.6 to 9.1) and 10.9 g (95% CI 7.8 to 13.9), respectively. Comparable reductions in free sugar consumption from drinks alone were 3.0 g (95% CI 0.1 to 5.8) and 5.2 g (95% CI 4.2 to 6.1). The percentage of total dietary energy from free sugars declined over the study period but was not significantly different from the counterfactual.

Conclusion The SDIL led to significant reductions in dietary free sugar consumption in children and adults. Energy from free sugar as a percentage of total energy did not change relative to the counterfactual, which could be due to simultaneous reductions in total energy intake associated with reductions in dietary free sugar.

  • PUBLIC HEALTH

Data availability statement

Data are available in a public, open access repository. Data from the National Diet and Nutrition Survey years 1–11 (2008–09 to 2018–19) can be accessed on the UK Data Service ( https://ukdataservice.ac.uk/ ).

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/jech-2023-221051


WHAT IS ALREADY KNOWN ON THIS TOPIC

High intakes of free sugars are associated with a range of non-communicable diseases. Sugar sweetened beverages constitute a major source of dietary free sugars in children and adults.

The UK soft drinks industry levy (SDIL) led to a reduction in the sugar content of many sugar sweetened beverages and a reduction in household purchasing of sugar from drinks.

No previous study has examined the impact of the SDIL on total dietary consumption of free sugars at the individual level.

WHAT THIS STUDY ADDS

There were declining trends in the intake of dietary free sugar in adults and children prior to the UK SDIL.

Accounting for prior trends, 1 year after the UK SDIL came into force, children and adults had further reduced their free sugar intake from food and drink by approximately 5 g/day and 11 g/day, respectively. Children and adults reduced their free sugar intake from soft drinks alone by approximately 3 g/day and approximately 5 g/day, respectively.

Energy intake from free sugars as a proportion of total energy consumed did not change significantly following the UK SDIL, indicating energy intake from free sugar was reducing simultaneously with overall total energy intake.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

The UK SDIL was associated with significant reductions in consumption of free sugars from soft drinks and across the whole diet, reinforcing previous research indicating a reduction in purchasing. This evidence should be used to inform policy when extending the levy or considering other sugar reduction strategies.

Energy intake from free sugars has been falling but levels remain higher than the 5% recommendation set by the WHO. Reductions in dietary sugar in relation to the SDIL may have driven significant reductions in overall energy intake.

Introduction

High consumption of free sugars is associated with non-communicable diseases. 1 Guidelines from the World Health Organization (WHO) and the UK Scientific Advisory Committee on Nutrition (SACN) suggest limiting free sugar consumption to below 5% of total energy intake to achieve maximum health benefits, 1 2 equivalent to daily maximum amounts of 30 g for adults, 24 g for children (7–10 years) and 19 g for young children (4–6 years). In the UK, consumption of free sugar is well above the recommended daily maximum, although levels have fallen over the last decade. 3 For example, adolescents consume approximately 70 g/day 4 and obtain 12.3% of their energy from free sugars. 3 Sugar sweetened beverages (SSBs) constitute a major source of free sugar in the UK diet, 2 5 and are the largest single source for children aged 11–18 years where they make up approximately one-third of their daily sugar intake. 6 A growing body of evidence has shown a link between consumption of SSBs and higher risk of weight gain, type 2 diabetes, coronary heart disease and premature mortality, 7 such that the WHO recommends taxation of SSBs in order to reduce over-consumption of free sugars and to improve health. 8 To date, >50 countries have introduced taxation on SSBs, which has been associated with a reduction in sales and dietary intake of free sugar from SSBs. 9 Reductions in the prevalence of childhood obesity 10 11 and improvements in dental health outcomes 12 13 have also been reported.

In March 2016 the UK government announced the UK soft drinks industry levy (SDIL), a two-tier levy on manufacturers, importers and bottlers of soft drinks, which came into force in April 2018. 14 The levy was designed to incentivise manufacturers to reformulate and reduce the free sugar content of SSBs (see details in online supplemental text 1 ).

Supplemental material

One year after the UK SDIL was implemented there was evidence of a reduction in the sugar content of soft drinks 15 and households on average reduced the amount of sugar purchased from soft drinks by 8 g/week, with no evidence of substitution with confectionery or alcohol. 16 However, a lack of available data meant it was not possible to examine substitution towards other sugary foods and drinks, which has previously been suggested in some but not all studies. 17 18 Household purchasing only approximates individual consumption because it captures only those products brought into the home, products may be shared unequally between household members, and it does not account for waste.

To examine the effects of the SDIL on total sugar intake at the individual level, in this study we used surveillance data collected using 3- or 4-day food diaries as part of the UK National Diet and Nutrition Survey (NDNS). We aimed to examine changes in absolute and relative consumption of free sugars from soft drinks alone and from both food and drinks (allowing us to consider possible substitutions with other sugary food items), following the announcement and implementation of the UK SDIL.

Data source

We used 11 years of data (2008–2019) from the NDNS. Data collection, sampling design and information on response are described in full elsewhere. 19 In brief, NDNS is a continuous national cross-sectional survey capturing information on food consumption, nutritional status and nutrient intake inside and outside of the home in a representative annual sample of approximately 500 adults and 500 children (1.5–18 years) living in private households in the UK. Participants are sampled throughout the year, such that in a typical month about 40 adults and 40 children participate (further details are shown in online supplemental text 2 ).
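
For illustration only (this is not from the NDNS documentation or the authors' code), diary-level records of this kind could be collapsed into the quarterly series analysed below roughly as follows, assuming a hypothetical data frame ndns with columns person_id, diary_date, age_years and free_sugar_g, and ignoring the survey weights described later:

  library(dplyr)

  quarterly <- ndns %>%
    mutate(
      # Assign each diary record to a calendar quarter (eg, "2016 Q2")
      quarter = paste0(format(diary_date, "%Y"), " Q",
                       (as.integer(format(diary_date, "%m")) - 1) %/% 3 + 1),
      group = if_else(age_years >= 18, "adult", "child")
    ) %>%
    group_by(group, quarter) %>%
    summarise(
      mean_sugar_g = mean(free_sugar_g),    # mean free sugar, g/person/day
      participants = n_distinct(person_id),
      .groups = "drop"
    )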

Outcomes of interest

Outcomes of interest were absolute and relative changes in the total intake of dietary free sugar from (1) all food and soft drinks combined and (2) soft drinks alone. A definition of free sugar is given in online supplemental text 3 . Drink categories examined were those that fell within the following NDNS categories: soft drinks – not low calorie; soft drinks – low calorie; semi-skimmed milk; whole milk; skimmed milk; fruit juice; 1% fat milk; and other milk and cream. Additionally, we examined absolute and relative changes in percentage energy from free sugar in (1) food and soft drinks and (2) soft drinks alone. While examination of changes in sugar consumption and percentage energy from sugar across the whole diet (food and drink) captures overall substitutions with other sugar-containing products following the UK SDIL, examination of sugar consumption from soft drinks alone provides a higher level of specificity to the SDIL.

Protein intake was selected as a non-equivalent dependent control. It was not a nutritional component specifically targeted by the intervention or other government interventions and therefore is unlikely to be affected by the SDIL but could still be affected by confounding factors such as increases in food prices 20 (see online supplemental text 4 ).

Statistical analysis

Controlled interrupted time series (ITS) analyses were performed to examine changes in the outcomes in relation to the UK SDIL, separately in adults and children. We analysed data at the quarterly level over 11 years, with the first data point representing April to June 2008 and the last representing January to March 2019. Model specifications are shown in online supplemental text 5 . Where diary date entries extended over two quarters, the earlier quarter was designated as the time point for analysis. Generalised least squares models were used. Autocorrelation in the time series was assessed using Durbin–Watson tests and visualisations of autocorrelation and partial autocorrelation plots. An autoregressive moving average error structure was used, with the autoregressive order (p) and moving average order (q) parameters selected to minimise the Akaike information criterion in each model. Trends in free sugar consumption prior to the announcement of the SDIL in March 2016 were used to estimate counterfactual scenarios of what would have happened if the SDIL had not been announced or come into force; the interruption point was therefore the 3-month period beginning April 2016. Absolute and relative differences in consumption of free sugars/person/day were estimated by calculating the difference between the observed and counterfactual values at quarterly time point 45. To account for non-response and to ensure the sample distribution represented the UK distribution of sex and age, weights provided by NDNS were used and adapted for separate analyses of adults and children. 21 A study protocol has been published 22 and the study is registered ( ISRCTN18042742 ). For changes to the original protocol see online supplemental text 6 . All statistical analyses were performed in R version 4.1.0.
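
As a minimal sketch of the modelling approach described above (not the authors' published code), a segmented regression of this kind could be fitted in R with generalised least squares and an ARMA error structure. It assumes a hypothetical quarterly data frame dat with columns sugar_g (mean free sugar, g/person/day), time (1 to 45) and post (0 before the 3-month period beginning April 2016, 1 afterwards); the protein control series would be modelled analogously.

  library(nlme)    # gls() and corARMA() for correlated errors
  library(lmtest)  # dwtest() for a Durbin-Watson check on an ordinary least squares fit

  # Check autocorrelation in a simple OLS version of the segmented regression
  ols_fit <- lm(sugar_g ~ time + post + post:time, data = dat)
  dwtest(ols_fit)

  # Generalised least squares with ARMA(p, q) errors; in practice p and q would be
  # chosen by refitting over a small grid and keeping the model with the lowest AIC
  gls_fit <- gls(sugar_g ~ time + post + post:time,
                 data = dat,
                 correlation = corARMA(p = 1, q = 1, form = ~ time),
                 method = "ML")
  summary(gls_fit)
  AIC(gls_fit)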

Results

Data from 7999 adults and 7656 children were included across the 11 years, representing approximately 40 children and 40 adults each month. Table 1 gives descriptive values for the outcomes of interest. Compared with the pre-announcement period, free sugars consumed from all soft drinks fell by around one-half in children and one-third in adults in the post-announcement period. Total dietary free sugar consumption and the percentage of total dietary energy derived from free sugars also declined. Mean protein consumption was relatively stable over both periods in children and adults. The age and sex of the children and adults were very similar in the pre- and post-announcement periods.


Table 1 Mean amount of free sugar (g) consumed in children and adults per day during the study period before and after the announcement of the soft drinks industry levy (SDIL)

All estimates of change in free sugar consumption referred to below are based on g/individual/day in the 3-month period beginning January 2019 and compared with the counterfactual scenario of no UK SDIL announcement and implementation.
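
Continuing the illustrative sketch from the statistical analysis section (again, not the authors' code), the absolute and relative differences at quarterly time point 45 can be read off by comparing the model's fitted value with a counterfactual prediction in which the pre-announcement trend simply continues:

  # Modelled value at time point 45 versus the counterfactual (post set to 0,
  # ie, the pre-announcement trend projected forward)
  observed_45       <- predict(gls_fit, newdata = data.frame(time = 45, post = 1))
  counterfactual_45 <- predict(gls_fit, newdata = data.frame(time = 45, post = 0))

  absolute_change <- observed_45 - counterfactual_45             # g/person/day
  relative_change <- 100 * absolute_change / counterfactual_45   # %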

Change in free sugar consumption (soft drinks only)

In children, consumption of free sugars from soft drinks was approximately 27 g/day at the start of the study period but fell steeply throughout. By the end of the study period mean sugar consumption from soft drinks was approximately 10 g/day ( figure 1 ). Overall, relative to the counterfactual scenario, there was an absolute reduction in daily free sugar consumption from soft drinks of 3.0 g (95% CI 0.1 to 5.8) or a relative reduction of 23.5% (95% CI 46.0% to 0.9%) in children ( table 2 ). In adults, free sugar consumption at the beginning of the study was lower than that of children (approximately 17 g/day) and was declining prior to the SDIL announcement, although less steeply ( figure 1 ). Following the SDIL announcement, free sugar consumption from soft drinks appeared to decline even more steeply. There was an absolute reduction in free sugar consumption from soft drinks of 5.2 g (95% CI 4.2 to 6.1) or a relative reduction of 40.4% (95% CI 32.9% to 48.0%) in adults relative to the counterfactual ( figure 1 , table 2 ).


Figure 1 Observed and modelled daily consumption (g) of free sugar from drink products per adult/child from April 2008 to March 2019. Red points show observed data and solid red lines (with light red shadows) show modelled data (and 95% CIs) of free sugar consumed from drinks. The dashed red line indicates the counterfactual based on pre-announcement trends, ie, what was predicted had the announcement and implementation not happened. Modelled protein consumption from drinks (control group) was removed from the graph to improve resolution but is available in the supplementary section. The first and second dashed lines indicate the announcement and implementation of the soft drinks industry levy (SDIL), respectively.

Table 2 Change in free sugar consumption in food and drink and energy from free sugar as a proportion of total energy compared with the counterfactual scenario of no announcement and implementation of the UK soft drinks industry levy (SDIL)

Change in total dietary free sugar consumption (food and soft drinks combined)

Consumption of total dietary free sugars in children was approximately 70 g/day at the beginning of the study but fell to approximately 45 g/day by the end of the study ( figure 2 ). Relative to the counterfactual scenario, there was an absolute reduction in total dietary free sugar consumption of 4.8 g (95% CI 0.6 to 9.1) or a relative reduction of 9.7% (95% CI 18.2% to 1.2%) in children ( figure 2 ; table 2 ). In adults, total dietary free sugar consumption at the beginning of the study was approximately 60 g/day, falling to approximately 45 g/day by the end of the study ( figure 2 ). Relative to the counterfactual scenario there was an absolute reduction in total dietary free sugar consumption in adults of 10.9 g (95% CI 7.8 to 13.9) or a relative reduction of 19.8% (95% CI 25.4% to 14.2%). Online supplemental figures show that, relative to the counterfactual, dietary protein consumption and energy from protein were broadly stable across the study period (see online supplemental figures S3–S6 ).

Figure 2 Observed and modelled daily consumption (g) of free sugar from food and drink products per adult/child from April 2008 to March 2019. Red points show observed data and solid red lines (with light red shadows) show modelled data (and 95% CIs) of free sugar consumed from food and drinks. The dashed red line indicates the counterfactual based on pre-announcement trends, ie, what was predicted had the announcement and implementation not happened. Modelled protein consumption from food and drinks (control group) was removed from the graph to improve resolution but is available in the supplementary section. The first and second dashed lines indicate the announcement and implementation of the soft drinks industry levy (SDIL), respectively.

Change in energy from free sugar as a proportion of total energy

The percentage of energy from total dietary free sugar decreased across the study period but did not change significantly relative to the counterfactual scenario in children or adults, with relative changes of −7.6% (95% CI −41.7% to 26.5%) and −24.3% (95% CI −54.0% to 5.4%), respectively (see online supplemental figure S1 and table 2 ). Energy from free sugar in soft drinks as a proportion of total energy from soft drinks also decreased across the study period but did not change significantly relative to the counterfactual (see online supplemental figure S2 ).

Summary of main findings

This study is the first to examine individual-level consumption of free sugars in the total diet (and in soft drinks alone) in relation to the UK SDIL. Using nationally representative population samples, we found that, approximately 1 year after the UK SDIL came into force, total dietary free sugar consumed by children and adults was lower than would have been expected if the SDIL had not been announced and implemented. In children this was equivalent to a reduction of 4.8 g of free sugars/day from food and soft drinks, of which 3 g/day came from soft drinks alone, suggesting that the reduction of sugar in the diet was primarily due to a reduction of sugar from soft drinks. In adults, reductions in dietary sugar appeared to come roughly equally from food and drink, with an 11 g reduction in food and drink combined, of which 5.2 g was from soft drinks alone. There was no significant reduction compared with the counterfactual in the percentage of energy intake from free sugars in the total diet or from soft drinks alone in either children or adults, suggesting that energy intake from free sugar was falling simultaneously with overall total energy intake.

Comparison with other studies and interpretation of results

Our finding of a reduction in consumption of free sugars from soft drinks after accounting for pre-SDIL announcement trends is supported by previous research showing a large reduction in the proportion of available soft drinks with over 5 g of sugar/100 mL, the threshold at which soft drinks become levy liable. 15 Furthermore, efforts of the soft drink industry to reformulate soft drinks were found to have led to significant reductions in the volume and per capita sales of sugar from these soft drinks. 23

Our findings are consistent with recent research showing reductions in purchasing of sugar from soft drinks of approximately 8 g/household/week (equivalent to approximately 3 g/person/week or approximately 0.5 g/person/day) 1 year after the SDIL came into force. 16 The estimates from the current study suggest larger reductions in consumption (eg, 3 g free sugar/day from soft drinks in children) than previously reported for purchasing. Methodological differences may explain these differences in estimated effect sizes. Most importantly, the previous study used data on soft drink purchases that were for consumption in the home only. In contrast, we captured information on consumption (rather than purchasing) in and out of the home. Consumption of food and particularly soft drinks outside of the home in young people (1–21 years) increases with age and makes a substantial contribution to total free sugar intakes, highlighting the importance of recording both in-home and out-of-home sugar consumption. 4 Purchasing and consumption data also treat waste differently; purchase data record what comes into the home and therefore include waste, whereas consumption data specifically aim to capture leftovers and waste and exclude them from consumption estimates. While both studies use weights to make the population samples representative of the UK, there may be differences in participant characteristics between the two studies, which may contribute to the different estimates.

Consistent with other studies, 24 we observed a downward trend in free sugar and energy intake in adults and children across the 11-year study period. 3 A decline in consumption of free sugars was observed in the whole diet rather than just soft drinks, suggesting that consumption of free sugar from food was also declining from as early as 2008. One reason might be the steady transition from sugar in the diet to low-calorie artificial sweeteners, which globally have had annual growth of approximately 5.1% between 2008 and 2015. 25

Public health signalling around the time of the announcement of the levy may also have contributed to the changes we observed. Public acceptability and perceived effectiveness of the SDIL was reported to be high 4 months before and approximately 20 months after the levy came into force. 26 Furthermore, awareness of the SDIL was found to be high among parents of children living in the UK, with most supporting the levy and intending to reduce purchases of SSBs as a result. 27 Health signalling was also found following the implementation of the SSB tax in Mexico, with one study reporting that most adults (65%) were aware of the tax and that those aware of the tax were more likely to think the tax would reduce purchases of SSBs, 28 although a separate study found that adolescents in Mexico were mostly unaware of the tax, 29 suggesting that public health signalling may differ according to age.

In 2016 the UK government announced a voluntary sugar reduction programme as part of its childhood obesity plan (which also included the SDIL), with the aim of reducing sugar sold by industry by 5% by 2018 and by 20% by 2020 through both reformulation and portion size reduction. 30 While the programme only achieved overall sugar reductions of approximately 3.5%, this did include larger reductions in specific products such as yoghurts (−17%) and cereals (−13%) by 2018, which may have contributed to some of the observed reductions in total sugar consumption (particularly from foods) around the time of the SDIL. While there is strong evidence that the UK SDIL led to significant reformulation 15 and reductions in purchases of sugar from soft drinks, 16 the sugar reduction programme was voluntary, with no taxes or penalties if targets were not met, possibly giving manufacturers less incentive to reformulate products that were high in sugar. The 5-year duration of the voluntary sugar reduction programme also makes it challenging to attribute overall reductions in sugar using the ITS interruption points, which we aligned with the date of the SDIL announcement. The soft drinks categories in our study included levy liable and non-levy liable drinks because we wanted to examine whether individuals were likely to substitute levy liable drinks with high sugar non-liable options. The decline in sugar consumed overall and in soft drinks in relation to the levy suggests that individuals did not change their diets substantially by substituting more sugary foods and drinks. This is consistent with findings from a previous study that found no changes in relation to the levy in sugar purchased from fruit juice, powder used to make drinks or confectionery. 16

Consistent with previous analyses, 3 our findings showed a downward trend in energy intake from sugar as a proportion of total energy across the duration of the study. While there was no reduction compared with the counterfactual scenario (which was also decreasing), our estimates suggest that, by 2019, energy from sugar as a proportion of all energy was on average in line with the WHO recommendation of 10%, but not the more recent guideline of 5% which may bring additional health benefits. 1 31 This finding may indicate that energy intake from free sugar was falling in concert with overall energy intake, and indeed may have been driving it. However, the calorie reduction associated with the fall in free sugars, compared with the counterfactual scenario in both adults and children, was modest and thus potentially too small to produce significant changes in the percentage of energy from sugar. In children, a daily reduction of 4.8 g of sugar equates to approximately 19.2 kilocalories out of a daily intake of approximately 2000 kilocalories, equivalent to a reduction in energy intake of approximately 1%. Furthermore, overall measures of dietary energy are likely to involve a degree of error, reducing the precision of any estimates.
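
The back-of-envelope conversion above can be reproduced directly, assuming roughly 4 kilocalories per gram of sugar and a daily intake of about 2000 kilocalories for a child:

  sugar_reduction_g <- 4.8     # estimated daily reduction in free sugar (children)
  kcal_per_g_sugar  <- 4       # approximate energy content of sugar
  daily_intake_kcal <- 2000    # approximate total daily energy intake

  kcal_reduction  <- sugar_reduction_g * kcal_per_g_sugar        # about 19 kcal/day
  share_of_energy <- 100 * kcal_reduction / daily_intake_kcal    # about 1% of daily energy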

Our estimates of changes in sugar consumption in relation to the SDIL suggest that adults may have experienced a greater absolute reduction in sugar than children, which is not consistent with estimates of the distributional impact of the policy. 32 However, interpretation is aided by visual inspection of the ITS plots. Children’s consumption of sugar at the beginning of the study period, particularly from soft drinks, was higher than adults’ but was falling along a steeper trajectory, which will have influenced our estimated counterfactual scenario of what would have happened without the SDIL. This steep downward trajectory could not have continued indefinitely, as there is a lower limit to sugar consumption, and no account of this potential ‘floor effect’ was made in the counterfactual. Adults had a lower baseline of sugar consumption but a gentler downward trajectory, potentially allowing more scope for improvement over the longer run.

Reductions in the levels of sugar in food and drink may also have affected children, adults and different age groups differently. For example, the largest single contributor to free sugars in younger children aged 4–10 years is cereal and cereal products, followed by soft drinks and fruit juice. By the age of 11–18 years, soft drinks provide the largest single source (29%) of dietary free sugar. For adults the largest source of free sugars is sugar, preserves and confectionery, followed by non-alcoholic beverages. 5

Strengths and limitations

The main strengths of the study include the use of nationally representative data on individual consumption of food and drink in and out of the home, collected using a consistent food diary assessment over a 4-day period, setting it apart from other surveys which have used food frequency questionnaires, 24-hour recall, shortened dietary instruments or a mixture of these approaches across different survey years. 33 The continual collection of data using consistent methods enabled us to analyse dietary sugar consumption and energy quarterly over 11 years (or 45 time points), including the announcement and implementation period of the SDIL. Information on participant age allowed us to examine changes in sugar consumption in adults and children separately. Limited sample sizes restricted our use of weekly or monthly data and prevented us from examining differences between sociodemographic groups. At each time point we used protein consumption in food and drink as a non-equivalent control category, strengthening our ability to adjust for time-varying confounders such as contemporaneous events. The counterfactual scenarios for sugar consumption and for energy from free sugar as a proportion of total energy were based on trends from April 2008 to the announcement of the UK SDIL (March 2016); however, it is possible that, in the absence of the SDIL, the trajectory of sugar consumption would have changed course. Ascribing changes in free sugar consumption to the SDIL should include exploration of other possible interventions that might have led to a reduction in sugar across the population; we are only aware of the wider UK government’s voluntary sugar reduction programme, implemented across overlapping timelines (2015–2020) and leading to reductions in sugar consumption that were well below the targets set. 30 In addition, under-reporting of portion sizes and high energy foods, which may be increasingly seen as less socially acceptable, has been suggested as a common error in self-reported dietary intake, with some groups, including older teenagers and females, especially those living with obesity, more likely to underestimate energy intake. 34 35 However, there is no evidence to suggest this would have changed as a direct result of the SDIL. 36

Conclusions

Our findings indicate that the UK SDIL led to reductions in consumption of dietary free sugars in adults and children 1 year after it came into force. Energy from free sugar as a proportion of overall energy intake was falling prior to the UK SDIL but did not change in relation to the SDIL, suggesting that a reduction in sugar may have driven a simultaneous reduction in overall energy intake.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

For NDNS 2008–2013, ethical approval was obtained from the Oxfordshire A Research Ethics Committee (Reference number: 07/H0604/113). For NDNS 2014–2017, ethical approval was given from the Cambridge South NRES Committee (Reference number: 13/EE/0016). Participants gave informed consent to participate in the study before taking part.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

X @stevencjcummins

Contributors OM, SC, MR, HR, MW and JA conceptualised and acquired funding for the study. NTR carried out statistical analyses. NTR and JA drafted the manuscript. All authors contributed to the article and approved the submitted version.

As the guarantor, NTR had access to the data, controlled the decision to publish and accepts full responsibility for the work and the conduct of the study.

Funding NTR, OM, MW and JA were supported by the Medical Research Council (grant Nos MC_UU_00006/7). This project was funded by the NIHR Public Health Research programme (grant nos 16/49/01 and 16/130/01) to MW. The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR, or the Department of Health and Social Care, UK. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.



COMMENTS

  1. Content Analysis Method and Examples

    Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e. text). ... first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be ...

  2. Content Analysis

    Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual: Books, newspapers and magazines. Speeches and interviews. Web content and social media posts. Photographs and films.

  3. Qualitative Content Analysis 101 (+ Examples)

    Content analysis is a qualitative analysis method that focuses on recorded human artefacts such as manuscripts, voice recordings and journals. Content analysis investigates these written, spoken and visual artefacts without explicitly extracting data from participants - this is called unobtrusive research. In other words, with content ...

  4. Content Analysis

    Step 1: Select the content you will analyse. Based on your research question, choose the texts that you will analyse. You need to decide: The medium (e.g., newspapers, speeches, or websites) and genre (e.g., opinion pieces, political campaign speeches, or marketing copy)

  5. Content Analysis

    Content analysis is a research method used to analyze and interpret the characteristics of various forms of communication, such as text, images, or audio. It involves systematically analyzing the content of these materials, identifying patterns, themes, and other relevant features, and drawing inferences or conclusions based on the findings.

  6. Chapter 17. Content Analysis

    Content analyses often include counting as part of the interpretive (qualitative) process. In your own study, you may not need or want to look at all of the elements listed in table 17.1. Even in our imagined example, some are more useful than others. For example, "strategies and tactics" is a bit of a stretch here.

  7. A hands-on guide to doing content analysis

    They grapple with qualitative research terms and concepts, for example, differences between meaning units, codes, categories and themes, and increasing levels of abstraction from raw data to categories or themes. ... Content analysis, as in all qualitative analysis, is a reflective process. There is no "step 1, 2, 3, done!" linear ...

  8. How to do a content analysis [7 steps]

    A step-by-step guide to conducting a content analysis. Step 1: Develop your research questions. Step 2: Choose the content you'll analyze. Step 3: Identify your biases. Step 4: Define the units and categories of coding. Step 5: Develop a coding scheme. Step 6: Code the content. Step 7: Analyze the Results. In Closing.

  9. Qualitative Content Analysis 101: The What, Why & How (With Examples

    Learn about content analysis in qualitative research. We explain what it is, the strengths and weaknesses of content analysis, and when to use it. This video...

  10. The Practical Guide to Qualitative Content Analysis

    Qualitative content analysis is a research method used to analyze and interpret the content of textual data, such as written documents, interview transcripts, or other forms of communication. It provides a systematic way to identify patterns, concepts, and larger themes within the data to gain insight into the meaning and context of the content.

  11. Qualitative Content Analysis: a Simple Guide with Examples

    Here are a few insightful examples using our text with 7 words: 7 word strings, inductive word frequency, content analysis. Perhaps more insightfully, here is a list of 5 word combinations, which are much more common: 5 word strings, inductive word frequency, content analysis. The downside to these tools is that you cannot find 2- and 1-word ...

  12. How to plan and perform a qualitative study using content analysis

    In qualitative research, several analysis methods can be used, for example, phenomenology, hermeneutics, grounded theory, ethnography, phenomenographic and content analysis (Burnard, 1995). In contrast to other qualitative research methods, qualitative content analysis is not linked to any particular science, and there are fewer rules to follow.

  13. 10 Content Analysis Examples (2024)

    Content analysis is a research method and type of textual analysis that analyzes the meanings of content, which could take the form of textual, visual, aural, and otherwise multimodal texts. Generally, a content analysis will ... Example of Latent Content Analysis. A sociologist studying gender roles in films watches the top 10 movies from last ...

  14. PDF Some examples of qualitative content analysis

    They include making a great amount of money, being charitable, being a law-abiding citizen, making a good marriage and raising a large family. To get there, the cartoon suggests making a large amount of money (i.e. money features both as an end and a means), using force, and working hard.

  15. (PDF) Content Analysis: A Flexible Methodology

    Abstract. Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The ...

  16. Content Analysis

    Abstract. In this chapter, the focus is on ways in which content analysis can be used to investigate and describe interview and textual data. The chapter opens with a contextualization of the method and then proceeds to an examination of the role of content analysis in relation to both quantitative and qualitative modes of social research.

  17. Guide: Using Content Analysis

    Commentary: Read about issues of reliability and validity with regard to content analysis as well as the advantages and disadvantages of using content analysis as a research methodology. Examples: View examples of real and hypothetical studies that use content analysis. Annotated Bibliography: Complete list of resources used in this guide and ...

  18. Qualitative Data Analysis Methods: Top 6 + Examples

    QDA Method #1: Qualitative Content Analysis. Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.

  19. What is Content Analysis

    Content analysis: Offers both qualitative and quantitative analysis of the communication. Provides an in-depth understanding of the content by making it precise. Enables us to understand the context and perception of the speaker. Provides insight into complex models of human thoughts and language use.

  20. Content Analysis in Social Research

    It is a useful research tool that scholars use to examine human thoughts and actions. During content analysis, researchers compile qualitative data based on human language in written form or even ...

  21. Directed qualitative content analysis: the description and elaboration

    The proposed 16-step method of data analysis in this paper is a detailed description of analytical steps to be taken in directed qualitative content analysis that covers the current gap of knowledge in international literature regarding the practical process of qualitative data analysis. An example of "the resuscitation team members ...

  22. (PDF) Content Analysis

    Content analysis is the study of recorded human communications such as diary entries, books, newspapers, videos, text messages, tweets, Facebook updates, etc. Being the scientific study of the ...

  23. Content and Thematic Analysis Techniques in Qualitative Research

    Content analysis is also referred to as qualitative content analysis (Elo & Kyngäs, 2008) and ethnographic content analysis (Altheide, 1987). It is a systematic process of coding

  24. A hands-on guide to doing content analysis

    A common starting point for qualitative content analysis is often transcribed interview texts. The objective in qualitative content analysis is to systematically transform a large amount of text into a highly organised and concise summary of key results. Analysis of the raw data from verbatim transcribed interviews to form categories or themes ...

  25. Book Title: Graduate research methods in social work

    10.2 Sampling approaches for quantitative research; 10.3 Sample quality; 11. Quantitative measurement. 11.1 Conceptual definitions; 11.2 Operational definitions; ... 19.5 Content analysis 19.6 Grounded theory analysis; 19.7 Photovoice; 20. Quality in qualitative studies: Rigor in research design.

  26. Iran is targeting the U.S. election with fake news sites and cyber

    The Microsoft Threat Analysis Center noted the sites were likely using artificial intelligence tools to lift content from legitimate U.S. news publications and repackage articles in a way that ...

  27. Estimated changes in free sugar consumption one year after the UK soft

    Background The UK soft drinks industry levy (SDIL) was announced in March 2016 and implemented in April 2018, encouraging manufacturers to reduce the sugar content of soft drinks. This is the first study to investigate changes in individual-level consumption of free sugars in relation to the SDIL. Methods We used controlled interrupted time series (2011-2019) to explore changes in the ...
