Data Analysis in Research: Types & Methods


Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process researchers use to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large body of data into smaller, meaningful fragments.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which helps find patterns and themes in the data for easy identification and linking. The third is data analysis itself, which researchers carry out in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that data analysis and data interpretation represent the application of deductive and inductive logic to research.

Why analyze data in research?

Researchers rely heavily on data, as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But what if there is no question to ask? It is still possible to explore data without a problem; we call it 'Data Mining', and it often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience's vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Sometimes, data analysis tells the most unforeseen yet exciting stories that were not anticipated when the analysis began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data describes things by having a specific value assigned to it. For analysis, you need to organize these values, process them, and present them in a given context for them to be useful. Data can take different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze, especially for comparison. Example: data describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, and the like all produce this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups; an item included in categorical data cannot belong to more than one group. Example: a survey respondent describing their lifestyle, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data (a minimal sketch follows this list).
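
To make the chi-square test concrete, here is a minimal sketch in Python using scipy. The contingency counts are hypothetical, chosen only to illustrate the mechanics of the test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical survey counts: rows = marital status, columns = smoking habit.
# Each respondent falls into exactly one cell, as categorical data requires.
observed = np.array([
    [90, 60],   # single:  [smoker, non-smoker]
    [70, 110],  # married: [smoker, non-smoker]
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# A small p-value (e.g., < 0.05) suggests marital status and smoking
# habit are not independent in this sample.
```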


Data analysis in qualitative research

Qualitative data analysis works a little differently from numerical analysis, because qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is an involved process; hence it is typically used for exploratory research and data analysis.

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most relied upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and look for repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find that "food" and "hunger" are the most commonly used words and will highlight them for further analysis.
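
As a rough illustration of the word-based technique, the sketch below counts word frequencies across a handful of open-ended responses. The responses and the stop-word list are hypothetical; a real project would use a fuller stop-word list or an NLP library.

```python
import re
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "Hunger is the biggest problem; food prices keep rising.",
    "We need food aid and clean water.",
    "Food insecurity and hunger affect every family here.",
]

# Words too common to be informative, for this toy example.
stop_words = {"is", "the", "and", "we", "need", "keep", "every", "here", "a"}

words = []
for text in responses:
    # Lowercase and keep only alphabetic tokens.
    words += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]

# The most common words hint at candidate themes for further analysis.
print(Counter(words).most_common(5))
```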


Keyword context is another widely used word-based technique. In this method, the researcher tries to understand a concept by analyzing the context in which participants use a particular keyword.

For example, researchers studying the concept of 'diabetes' amongst respondents might analyze the context of when and how each respondent used or referred to the word 'diabetes.'
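
A keyword-in-context (KWIC) view is easy to sketch: show the words surrounding each occurrence of the keyword so its usage can be read in context. The transcript below is hypothetical.

```python
def keyword_in_context(text, keyword, window=4):
    """Print each occurrence of the keyword with `window` words on either side."""
    tokens = text.lower().split()
    for i, token in enumerate(tokens):
        if keyword in token:  # substring match tolerates trailing punctuation
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            print(f"... {left} [{token}] {right} ...")

# Hypothetical interview excerpt.
transcript = ("My mother managed her diabetes with diet alone, "
              "but I worry my diabetes will need medication.")
keyword_in_context(transcript, "diabetes")
```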

The scrutiny-based technique is another highly recommended text analysis method used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to examine how specific texts are similar to or different from each other.

For example: to study the "importance of a resident doctor in a company," the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations in enormous datasets.


Methods used for data analysis in qualitative research

There are several techniques to analyze qualitative data; here are some commonly used methods:

  • Content Analysis: This is the most widely accepted and most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. The research questions determine when and where to use this method (a toy coding sketch follows this list).
  • Narrative Analysis: This method is used to analyze content gathered from sources such as personal interviews, field observations, and surveys. Most of the time, the stories or opinions people share are analyzed to find answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this method considers the social context within which the communication between researcher and respondent takes place. Discourse analysis also takes the respondent's lifestyle and day-to-day environment into account when deriving any conclusion.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When using this method, researchers might alter explanations or produce new ones until they arrive at a conclusion.
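
As a toy illustration of the coding step behind content analysis, the sketch below tags responses with themes from a hypothetical keyword codebook. Real qualitative coding is iterative and far more nuanced; this only shows the mechanical idea.

```python
# Hypothetical codebook mapping themes to indicator keywords.
codebook = {
    "cost":    {"price", "expensive", "afford"},
    "quality": {"taste", "fresh", "quality"},
    "service": {"staff", "rude", "helpful", "wait"},
}

responses = [
    "The staff were helpful but the wait was long.",
    "Too expensive, I cannot afford it weekly.",
    "Great taste and always fresh.",
]

for response in responses:
    # Normalize to a set of lowercase tokens without basic punctuation.
    tokens = set(response.lower().replace(",", " ").replace(".", " ").split())
    # A response is tagged with every theme whose keywords it mentions.
    themes = [theme for theme, keywords in codebook.items() if tokens & keywords]
    print(f"{response!r} -> {themes}")
```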


Data analysis in quantitative research

Preparing data for analysis

The first stage in quantitative research and data analysis is to prepare the data for analysis, so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey or, in an interview, that the interviewer asked every question devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is the process wherein researchers confirm that the provided data is free of such errors. They need to conduct the necessary completeness and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to survey responses. If a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish respondents by age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.
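
Here is a minimal pandas sketch of the three preparation phases on a hypothetical survey extract: a completeness check (validation), an outlier check (editing), and age bracketing (coding). The column names and thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical raw survey responses.
df = pd.DataFrame({
    "respondent":   [1, 2, 3, 4, 5],
    "age":          [23, 41, None, 67, 240],  # None = skipped; 240 is a likely entry error
    "satisfaction": [4, 5, 3, None, 2],
})

# Phase I (validation): flag incomplete responses.
print("Incomplete responses:\n", df[df.isna().any(axis=1)])

# Phase II (editing): flag implausible values for review rather than silently fixing them.
print("Outlier ages:\n", df[(df["age"] < 0) | (df["age"] > 110)])

# Phase III (coding): group ages into brackets for easier analysis.
df["age_bracket"] = pd.cut(df["age"], bins=[17, 25, 45, 65, 110],
                           labels=["18-25", "26-45", "46-65", "66+"])
print(df[["respondent", "age", "age_bracket"]])
```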


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers can apply different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored approach for numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: 'descriptive statistics', used to describe data, and 'inferential statistics', which help in comparing data.

Descriptive statistics

This method is used to describe the basic features of the many types of data encountered in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not allow conclusions beyond the data analyzed; any conclusions are still based on the hypotheses researchers have formulated. Here are a few major types of descriptive analysis methods, followed by a combined code sketch.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to demonstrate distribution by various points.
  • Researchers use this method when they want to showcase the most common or the average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range equals the difference between the highest and lowest scores.
  • Variance and standard deviation capture how far observed scores deviate from the mean.
  • These measures are used to identify the spread of scores by stating intervals.
  • Researchers use this method to showcase how spread out the data is and how far that spread pulls on the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores, helping researchers identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
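
These measures map directly onto standard library calls. A combined sketch on hypothetical test scores, using numpy and scipy:

```python
import numpy as np
from scipy import stats

scores = np.array([62, 75, 75, 81, 88, 90, 94])

# Measures of frequency: how often each score occurs.
values, counts = np.unique(scores, return_counts=True)
print("Frequencies:", dict(zip(values.tolist(), counts.tolist())))

# Measures of central tendency.
print("Mean:", scores.mean(), "Median:", np.median(scores),
      "Mode:", stats.mode(scores, keepdims=False).mode)

# Measures of dispersion (sample variance / standard deviation).
print("Range:", scores.max() - scores.min(),
      "Variance:", scores.var(ddof=1), "Std dev:", scores.std(ddof=1))

# Measures of position: percentile rank of a score of 88.
print("Percentile rank of 88:", stats.percentileofscore(scores, 88))
```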

In quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are rarely sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to think about which method of research and data analysis suits your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students' average scores in a school. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it: for example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample collected to represent that population. For example, you can ask some 100-odd audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics, with a short scipy sketch after the list.

  • Estimating parameters: This takes statistics from the sample research data and uses them to demonstrate something about the population parameter.
  • Hypothesis tests: These use sampled research data to answer the survey research questions. For example, researchers might be interested in understanding whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.
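
A compact scipy sketch of both areas, using hypothetical movie ratings: a confidence interval estimates a population parameter, and a one-sample t-test checks a hypothesis against a benchmark.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings (on a 1-10 scale) from a sample of 100 viewers.
rng = np.random.default_rng(42)
ratings = rng.normal(loc=8.2, scale=1.1, size=100)

# Estimating parameters: 95% confidence interval for the population mean rating.
mean = ratings.mean()
sem = stats.sem(ratings)
ci_low, ci_high = stats.t.interval(0.95, df=len(ratings) - 1, loc=mean, scale=sem)
print(f"Mean rating {mean:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")

# Hypothesis test: is the mean rating different from a benchmark of 7.5?
t_stat, p_value = stats.ttest_1samp(ratings, popmean=7.5)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```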

These are sophisticated analysis methods used to showcase the relationship between different variables rather than describe a single variable. They are used when researchers want to go beyond absolute numbers and understand the relationships between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns; a two-dimensional cross-tabulation enables seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: For understanding the strength of the relationship between two variables, researchers rarely look beyond the primary and commonly used regression analysis method, which is also a type of predictive analysis (a minimal sketch follows this list). In this method, you have an essential factor called the dependent variable, along with one or more independent variables, and you set out to find the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free random manner.
  • Frequency tables: This statistical procedure displays how often each value of a variable occurs in a dataset, typically as counts and percentages, and offers a simple way to summarize and compare responses.
  • Analysis of variance: This statistical procedure is used for testing the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
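
A minimal sketch of correlation and simple linear regression with scipy, on hypothetical study-hours and exam-score data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours studied (independent) vs. exam score (dependent).
hours = np.array([2, 4, 5, 7, 8, 10, 12])
score = np.array([51, 58, 62, 70, 74, 83, 90])

# Correlation: strength and direction of the linear relationship.
r, p = stats.pearsonr(hours, score)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")

# Regression: model the dependent variable from the independent one.
fit = stats.linregress(hours, score)
print(f"score = {fit.slope:.2f} * hours + {fit.intercept:.2f}")
print(f"Predicted score for 9 hours: {fit.slope * 9 + fit.intercept:.1f}")
```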
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Research and data analytics projects usually differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps in designing the survey questionnaire, selecting data collection methods, and choosing samples.


  • The primary aim of research data analysis is to derive ultimate insights that are unbiased. Any mistake, or any bias, in collecting data, selecting an analysis method, or choosing an audience sample will lead to a biased inference.
  • No amount of sophistication in research data analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, and graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them with a medium to collect data by creating appealing surveys.

Data Analysis in Qualitative Research

Theertha Raj, August 30, 2024

While numbers tell us "what" and "how much," qualitative data reveals the crucial "why" and "how." But let's face it - turning mountains of text, images, and observations into meaningful insights can be daunting.

This guide dives deep into the art and science of how to analyze qualitative data. We'll explore cutting-edge techniques, free qualitative data analysis software, and strategies to make your analysis more rigorous and insightful. Expect practical, actionable advice on qualitative data analysis methods, whether you're a seasoned researcher looking to refine your skills or a team leader aiming to extract more value from your qualitative data.

What is qualitative data?

Qualitative data is non-numerical information that describes qualities or characteristics. It includes text, images, audio, and video. 

This data type captures complex human experiences, behaviors, and opinions that numbers alone can't express.

Examples of qualitative data include interview transcripts, open-ended survey responses, field notes from observations, social media posts, and customer reviews.

Importance of qualitative data

Qualitative data is vital for several reasons:

  • It provides a deep, nuanced understanding of complex phenomena.
  • It captures the 'why' behind behaviors and opinions.
  • It allows for unexpected discoveries and new research directions.
  • It puts people's experiences and perspectives at the forefront.
  • It enhances quantitative findings with depth and detail.

What is data analysis in qualitative research?

Data analysis in qualitative research is the process of examining and interpreting non-numerical data to uncover patterns, themes, and insights. It aims to make sense of rich, detailed information gathered through methods like interviews, focus groups, or observations.

This analysis moves beyond simple description. It seeks to understand the underlying meanings, contexts, and relationships within the data. The goal is to create a coherent narrative that answers research questions and generates new knowledge.

How is qualitative data analysis different from quantitative data analysis?

Qualitative and quantitative data analyses differ in several key ways:

  • Data type: Qualitative analysis uses non-numerical data (text, images), while quantitative analysis uses numerical data.
  • Approach: Qualitative analysis is inductive and exploratory. Quantitative analysis is deductive and confirmatory.
  • Sample size: Qualitative studies often use smaller samples. Quantitative studies typically need larger samples for statistical validity.
  • Depth vs. breadth: Qualitative analysis provides in-depth insights about a few cases. Quantitative analysis offers broader insights across many cases.
  • Subjectivity: Qualitative analysis involves more subjective interpretation. Quantitative analysis aims for objective, statistical measures.

What are the 3 main components of qualitative data analysis?

The three main components of qualitative data analysis are:

  • Data reduction: Simplifying and focusing the raw data through coding and categorization.
  • Data display: Organizing the reduced data into visual formats like matrices, charts, or networks.
  • Conclusion drawing/verification: Interpreting the displayed data and verifying the conclusions.

These components aren't linear steps. Instead, they form an iterative process where researchers move back and forth between them throughout the analysis.

How do you write a qualitative analysis?

Step 1: Organize your data

Start by bringing all your qualitative research data into one place. A repository can be of immense help here. Transcribe interviews, compile field notes, and gather all relevant materials.

Immerse yourself in the data. Read through everything multiple times.

Step 2: Code & identify themes

Identify and label key concepts, themes, or patterns. Group related codes into broader themes or categories. Try to connect themes to tell a coherent story that answers your research questions.

Pick out direct quotes from your data to illustrate key points.

Step 3: Interpret and reflect

Explain what your results mean in the context of your research and existing literature.

Also discuss, identify, and try to eliminate potential biases or limitations in your analysis.

Summarize main insights and their implications.

What are the 5 qualitative data analysis methods?

  • Thematic Analysis: Identifying, analyzing, and reporting patterns (themes) within data.
  • Content Analysis: Systematically categorizing and counting the occurrence of specific elements in text.
  • Grounded Theory: Developing theory from data through iterative coding and analysis.
  • Discourse Analysis: Examining language use and meaning in social contexts.
  • Narrative Analysis: Interpreting stories and personal accounts to understand experiences and meanings.

Each method suits different research goals and data types. Researchers often combine methods for comprehensive analysis.

What are the 4 data collection methods in qualitative research?

When it comes to collecting qualitative data, researchers primarily rely on four methods.

  • Interviews : One-on-one conversations to gather in-depth information.
  • Focus Groups : Group discussions to explore collective opinions and experiences.
  • Observations : Watching and recording behaviors in natural settings.
  • Document Analysis : Examining existing texts, images, or artifacts.

Researchers often use multiple methods to gain a comprehensive understanding of their topic.

How is qualitative data analysis measured?

Unlike quantitative data, qualitative data analysis isn't measured in traditional numerical terms. Instead, its quality is evaluated based on several criteria. 

Trustworthiness is key, encompassing the credibility, transferability, dependability, and confirmability of the findings. The rigor of the analysis - the thoroughness and care taken in data collection and analysis - is another crucial factor. 

Transparency in documenting the analysis process and decision-making is essential, as is reflexivity - acknowledging and examining the researcher's own biases and influences. 

Employing techniques like member checking and triangulation all contribute to the strength of qualitative analysis.

Benefits of qualitative data analysis

The benefits of qualitative data analysis are numerous. It uncovers rich, nuanced understanding of complex phenomena and allows for unexpected discoveries and new research directions. 

By capturing the 'why' behind behaviors and opinions, qualitative data analysis methods provide crucial context. 

Qualitative analysis can also lead to new theoretical frameworks or hypotheses and enhances quantitative findings with depth and detail. It's particularly adept at capturing cultural nuances that might be missed in quantitative studies.

Challenges of Qualitative Data Analysis

Researchers face several challenges when conducting qualitative data analysis. 

Managing and making sense of large volumes of rich, complex data can lead to data overload. Maintaining consistent coding across large datasets or between multiple coders can be difficult. 

There's a delicate balance to strike between providing enough context and maintaining focus on analysis. Recognizing and mitigating researcher biases in data interpretation is an ongoing challenge. 

The learning curve for qualitative data analysis software can be steep and time-consuming. Ethical considerations, particularly around protecting participant anonymity while presenting rich, detailed data, require careful navigation. Integrating different types of data from various sources can be complex. Time management is crucial, as researchers must balance the depth of analysis with project timelines and resources. Finally, communicating complex qualitative insights in clear, compelling ways can be challenging.

Best Software to Analyze Qualitative Data

Looppanel

G2 rating: 4.6/5

Pricing: Starts at $30 monthly.

Looppanel is an AI-powered research assistant and repository platform that can make it 5x faster to get to insights by automating all the manual, tedious parts of your job.

Here’s how Looppanel’s features can help with qualitative data analysis:

  • Automatic Transcription: Quickly turn speech into accurate text; it works across 8 languages and even heavy accents, with over 90% accuracy.
  • AI Note-Taking: The research assistant can join you on calls and take notes, as well as automatically sort your notes based on your interview questions.
  • Automatic Tagging: Easily tag and organize your data with free AI tools.
  • Insight Generation: Create shareable insights that fit right into your other tools.
  • Repository Search: Run Google-like searches within your projects and calls to find a data snippet/quote in seconds
  • Smart Summary: Ask the AI a question on your research, and it will give you an answer, using extracts from your data as citations.

Looppanel’s focus on automating research tasks makes it perfect for researchers who want to save time and work smarter.

ChatGPT

G2 rating: 4.7/5

Pricing: Free version available, with the Plus version costing $20 monthly.

ChatGPT, developed by OpenAI, offers a range of capabilities for qualitative data analysis including:

  • Document analysis : It can easily extract and analyze text from various file formats.
  • Summarization : GPT can condense lengthy documents into concise summaries.
  • Advanced Data Analysis (ADA): For paid users, ChatGPT offers quantitative analysis of data documents.
  • Sentiment analysis: Although not ChatGPT's specialty, it can still perform basic sentiment analysis on text data.

ChatGPT's versatility makes it valuable for researchers who need quick insights from diverse text sources.

How to use ChatGPT for qualitative data analysis

ChatGPT can be a handy sidekick in your qualitative analysis, if you do the following:

  • Use it to summarize long documents or transcripts
  • Ask it to identify key themes in your data
  • Use it for basic sentiment analysis
  • Have it generate potential codes based on your research questions
  • Use it to brainstorm interpretations of your findings

Atlas.ti

G2 rating: 4.7/5

Pricing: Custom

Atlas.ti is a powerful platform built for detailed qualitative and mixed-methods research, offering a lot of capabilities for running both quantitative and qualitative research.

Its key data analysis features include:

  • Multi-format Support: Analyze text, PDFs, images, audio, video, and geo data all within one platform.
  • AI-Powered Coding: Uses AI to suggest codes and summarize documents.
  • Collaboration Tools: Ideal for teams working on complex research projects.
  • Data Visualization: Create network views and other visualizations to showcase relationships in your data.

NVivo

G2 rating: 4.1/5

Pricing: Custom

NVivo is another powerful platform for qualitative and mixed-methods research. Its analysis features include:

  • Data Import and Organization: Easily manage different data types, including text, audio, and video.
  • AI-Powered Coding: Speeds up the coding process with machine learning.
  • Visualization Tools: Create charts, graphs, and diagrams to represent your findings.
  • Collaboration Features: Suitable for team-based research projects.

NVivo combines AI capabilities with traditional qualitative analysis tools, making it versatile for various research needs.

Can Excel do qualitative data analysis?

Excel can be a handy tool for qualitative data analysis, especially if you're just starting out or working on a smaller project. While it's not specialized qualitative data analysis software, you can use it to organize your data, maybe putting different themes in different columns. It's good for basic coding, where you label bits of text with keywords. You can use its filter feature to focus on specific themes. Excel can also create simple charts to visualize your findings. But for bigger or more complex projects, you might want to look into software designed specifically for qualitative data analysis. These tools often have more advanced features that can save you time and help you dig deeper into your data.

How do you show qualitative analysis?

Showing qualitative analysis is about telling the story of your data. Use quotes from interviews or documents to back up your points. Create charts or mind maps to show how different ideas connect. Group your findings into themes that make sense. Then, write it all up in a way that flows, explaining what you found and why it matters.

What is the best way to analyze qualitative data?

There's no one-size-fits-all approach to how to analyze qualitative data, but there are some tried-and-true steps. 

Start by getting your data in order. Then, read through it a few times to get familiar with it. As you go, start marking important bits with codes - this is a fundamental qualitative data analysis method. Group similar codes into bigger themes. Look for patterns in these themes - how do they connect? 

Finally, think about what it all means in the bigger picture of your research. Remember, it's okay to go back and forth between these steps as you dig deeper into your data. Qualitative data analysis software can be a big help in this process, especially for managing large amounts of data.

In qualitative methods of test analysis, what do test developers do to generate data?

Test developers in qualitative research might sit down with people for in-depth chats or run group discussions, which are key qualitative data analysis methods. They often use surveys with open-ended questions that let people express themselves freely. Sometimes, they'll observe people in their natural environment, taking notes on what they see. They might also dig into existing documents or artifacts that relate to their topic. The goal is to gather rich, detailed information that helps them understand the full picture, which is crucial in data analysis in qualitative research.

Which is not a purpose of reflexivity during qualitative data analysis?

Reflexivity in qualitative data analysis isn't about proving you're completely objective. That's not the goal. Instead, it's about being honest about who you are as a researcher. It's recognizing that your own experiences and views might influence how you see the data. By being upfront about this, you actually make your research more trustworthy. It's also a way to dig deeper into your data, seeing things you might have missed at first glance. This self-awareness is a crucial part of qualitative data analysis methods.

What is a qualitative data analysis example?

A simple example is analyzing customer feedback for a new product. You might collect feedback, read through responses, create codes like "ease of use" or "design," and group similar codes into themes. You'd then identify patterns and support findings with specific quotes. This process helps transform raw feedback into actionable insights.

How to analyze qualitative data from a survey?

First, gather all your responses in one place. Read through them to get a feel for what people are saying. Then, start labeling responses with codes - short descriptions of what each bit is about. This coding process is a fundamental qualitative data analysis method. Group similar codes into bigger themes. Look for patterns in these themes. Are certain ideas coming up a lot? Do different groups of people have different views? Use actual quotes from your survey to back up what you're seeing. Think about how your findings relate to your original research questions. 

Which one is better, NVivo or Atlas.ti?

NVivo is known for being user-friendly and great for team projects. Atlas.ti shines when it comes to visual mapping of concepts and handling geographic data. Both can handle a variety of data types and have powerful tools for qualitative data analysis. The best way to decide is to try out both if you can. 

While these are powerful tools, the core of qualitative data analysis still relies on your analytical skills and understanding of qualitative data analysis methods.

Do I need to use NVivo for qualitative data analysis?

You don't necessarily need NVivo for qualitative data analysis, but it can definitely make your life easier, especially for bigger projects. Think of it like using a power tool versus a hand tool - you can get the job done either way, but the power tool might save you time and effort. For smaller projects or if you're just starting out, you might be fine with simpler tools or even free qualitative data analysis software. But if you're dealing with lots of data, or if you need to collaborate with a team, or if you want to do more complex analysis, then specialized qualitative data analysis software like NVivo can be a big help. It's all about finding the right tool for your specific research needs and the qualitative data analysis methods you're using.


How to use NVivo for qualitative data analysis

First, you import all your data - interviews, documents, videos, whatever you've got. Then you start creating "nodes," which are like folders for different themes or ideas in your data. As you read through your material, you highlight bits that relate to these themes and file them under the right nodes. NVivo lets you easily search through all this organized data, find connections between different themes, and even create visual maps of how everything relates.

How much does NVivo cost?

NVivo's pricing isn't one-size-fits-all. They offer different plans for individuals, teams, and large organizations, but they don't publish their prices openly. Contact the team here for a custom quote.

What are the four steps of qualitative data analysis?

While qualitative data analysis is often iterative, it generally follows these four main steps:

1. Data Collection: Gathering raw data through interviews, observations, or documents.

2. Data Preparation: Organizing and transcribing the collected data.

3. Data Coding: Identifying and labeling important concepts or themes in the data.

4. Interpretation: Drawing meaning from the coded data and developing insights.

Data Analysis


What is Data Analysis?

According to the federal government, data analysis is "the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data" ( Responsible Conduct in Data Management ). Important components of data analysis include searching for patterns, remaining unbiased in drawing inference from data, practicing responsible  data management , and maintaining "honest and accurate analysis" ( Responsible Conduct in Data Management ). 

In order to understand data analysis further, it can be helpful to take a step back and ask "What is data?". Many of us associate data with spreadsheets of numbers and values; however, data can encompass much more than that. According to the federal government, data is "The recorded factual material commonly accepted in the scientific community as necessary to validate research findings" ( OMB Circular 110 ). This broad definition can include information in many formats.

Some examples of types of data are as follows:

  • Photographs 
  • Hand-written notes from field observation
  • Machine learning training data sets
  • Ethnographic interview transcripts
  • Sheet music
  • Scripts for plays and musicals 
  • Observations from laboratory experiments ( CMU Data 101 )

Thus, data analysis includes the processing and manipulation of these data sources in order to gain additional insight from data, answer a research question, or confirm a research hypothesis. 

Data analysis falls within the larger research data lifecycle, as illustrated in the University of Virginia's research data lifecycle diagram.

Why Analyze Data?

Through data analysis, a researcher can gain additional insight from data and draw conclusions to address the research question or hypothesis. Use of data analysis tools helps researchers understand and interpret data. 

What are the Types of Data Analysis?

Data analysis can be quantitative, qualitative, or mixed methods. 

Quantitative research typically involves numbers and "close-ended questions and responses" ( Creswell & Creswell, 2018 , p. 3). Quantitative research tests variables against objective theories, usually measured and collected on instruments and analyzed using statistical procedures ( Creswell & Creswell, 2018 , p. 4). Quantitative analysis usually uses deductive reasoning. 

Qualitative  research typically involves words and "open-ended questions and responses" ( Creswell & Creswell, 2018 , p. 3). According to Creswell & Creswell, "qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem" ( 2018 , p. 4). Thus, qualitative analysis usually invokes inductive reasoning. 

Mixed methods  research uses methods from both quantitative and qualitative research approaches. Mixed methods research works under the "core assumption... that the integration of qualitative and quantitative data yields additional insight beyond the information provided by either the quantitative or qualitative data alone" ( Creswell & Creswell, 2018 , p. 4). 



Data Analysis Techniques in Research – Methods, Tools & Examples


Varun Saharawat is a seasoned professional in the fields of SEO and content writing. With a profound knowledge of the intricate aspects of these disciplines, Varun has established himself as a valuable asset in the world of digital marketing and online content creation.

Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.


Data Analysis Techniques in Research : While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.

A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah . And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.


What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps:

  • Inspecting : Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning : Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming : Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting : Analyzing the transformed data to identify patterns, trends, and relationships.

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects (a short sketch follows this list).
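
As a quick sketch of ANOVA on hypothetical data, scipy's one-way test compares the means of several groups:

```python
from scipy.stats import f_oneway

# Hypothetical exam scores under three teaching methods.
online    = [78, 82, 88, 91, 85]
classroom = [72, 75, 80, 78, 74]
hybrid    = [84, 88, 90, 86, 92]

f_stat, p_value = f_oneway(online, classroom, hybrid)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests at least one group mean differs significantly.
```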

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes (a toy sketch follows this list).
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.
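
As a toy forecasting sketch with pandas, a moving average smooths hypothetical monthly sales and carries the trend forward; real forecasting would typically use a model such as ARIMA.

```python
import pandas as pd

# Hypothetical monthly sales figures.
sales = pd.Series(
    [110, 118, 125, 123, 135, 142, 150, 149, 158, 165, 172, 178],
    index=pd.date_range("2023-01-01", periods=12, freq="MS"),
)

# A 3-month moving average smooths short-term noise to expose the trend.
trend = sales.rolling(window=3).mean()

# Naive forecast: carry the last smoothed value forward one month.
print("Naive next-month forecast:", round(trend.iloc[-1], 1))
```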

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty (a short sketch follows this list).
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.
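
Of these, Monte Carlo simulation is the easiest to show in a few lines. The sketch below estimates the risk that a project exceeds its budget under assumed, hypothetical cost distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical cost components: normally distributed labor, uniform materials.
labor     = rng.normal(loc=50_000, scale=8_000, size=n_trials)
materials = rng.uniform(low=20_000, high=35_000, size=n_trials)
total = labor + materials

# Probability that total cost exceeds the budget, estimated by simulation.
budget = 90_000
print(f"Estimated overrun risk: {(total > budget).mean():.1%}")
```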


Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.


Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis .
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.


Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

1) Descriptive Statistics:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset (a small sketch follows this list).
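
A small pandas/matplotlib sketch of two staple EDA views on hypothetical data, a histogram and a scatter plot, plus a correlation matrix:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical customer data with a mild built-in age/spend relationship.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age":   rng.integers(18, 70, size=200),
    "spend": rng.normal(100, 30, size=200),
})
df["spend"] += df["age"] * 0.8

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(df["spend"], bins=20)            # distribution of a single variable
axes[0].set_title("Histogram of spend")
axes[1].scatter(df["age"], df["spend"], s=8)  # relationship between two variables
axes[1].set_title("Age vs. spend")

print(df.corr())                              # correlation matrix as a numeric check
plt.show()
```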

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.
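To give a flavour of the idea, here is a deliberately simple keyword-based sentiment sketch in plain Python; production text analytics would rely on proper NLP libraries, and the keyword sets and reviews here are hypothetical.

```python
# A toy sentiment-scoring sketch: count positive vs. negative keywords.
# The keyword sets and reviews are hypothetical; real systems use NLP libraries.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"poor", "bad", "terrible", "slow"}

reviews = [
    "Great product, I love it",
    "Terrible support and slow delivery",
    "Good value, excellent quality",
]

for review in reviews:
    words = review.lower().replace(",", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8} | {review}")
```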

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.

Also Read: Quantitative Data Analysis: Types, Analysis & Examples

Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language:

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization, business intelligence, and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.
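For example, Python ships with the `sqlite3` module, so a small end-to-end SQL query can be sketched without any external database server; the sales table and its rows below are hypothetical.

```python
# A minimal sketch of storing and aggregating data with SQL via Python's
# built-in sqlite3 module; the sales table and its rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 120.0), ("South", 85.5), ("North", 99.0)])

# Aggregate total sales per region with a GROUP BY query
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
conn.close()
```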

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.

Also Read: How to Analyze Survey Data: Methods & Examples

Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

Finally, it is important to remember that mastering these techniques requires practice and continuous learning.

Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis?

The five techniques for data analysis are: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, Prescriptive Analysis, and Qualitative Analysis.

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are: Qualitative Analysis, Quantitative Analysis, and Mixed-Methods Analysis.

What are the four types of data analysis techniques?

The four types of data analysis techniques are: Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, and Prescriptive Analysis.


Data Analysis in Quantitative Research

Quantitative data analysis serves as part of an essential process of evidence-making in the health and social sciences. It is adopted for any type of research question and design, whether descriptive, explanatory, or causal. However, compared with its qualitative counterpart, quantitative data analysis offers less flexibility. Conducting quantitative data analysis requires a prerequisite understanding of statistical knowledge and skills. It also requires rigor in the choice of an appropriate analysis model and in the interpretation of the analysis outcomes. Basically, the choice of appropriate analysis techniques is determined by the type of research question and the nature of the data. In addition, different analysis techniques require different assumptions about the data. This chapter provides introductory guides to assist readers with informed decision-making in choosing the correct analysis models. To this end, it begins with a discussion of the levels of measurement: nominal, ordinal, and scale. Some commonly used analysis techniques in univariate, bivariate, and multivariate data analysis are presented as practical examples. Example analysis outcomes are produced using SPSS (Statistical Package for the Social Sciences).

Source: Jung, Y. M. (2019). Data Analysis in Quantitative Research. In P. Liamputtong (Ed.), Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_109


Quantitative Data Analysis 101

The lingo, methods and techniques, explained simply.

By: Derek Jansen (MBA)  and Kerryn Warren (PhD) | December 2020

Quantitative data analysis is one of those things that often strikes fear in students. It’s totally understandable – quantitative analysis is a complex topic, full of daunting lingo, like medians, modes, correlation and regression. Suddenly we’re all wishing we’d paid a little more attention in math class…

The good news is that while quantitative data analysis is a mammoth topic, gaining a working understanding of the basics isn’t that hard, even for those of us who avoid numbers and math. In this post, we’ll break quantitative analysis down into simple, bite-sized chunks so you can approach your research with confidence.

Quantitative data analysis methods and techniques 101

Overview: Quantitative Data Analysis 101

  • What (exactly) is quantitative data analysis?
  • When to use quantitative analysis
  • How quantitative analysis works

The two “branches” of quantitative analysis

  • Descriptive statistics 101
  • Inferential statistics 101
  • How to choose the right quantitative methods
  • Recap & summary

What is quantitative data analysis?

Despite being a mouthful, quantitative data analysis simply means analysing data that is numbers-based – or data that can be easily “converted” into numbers without losing any meaning.

For example, category-based variables like gender, ethnicity, or native language could all be “converted” into numbers without losing meaning – for example, English could equal 1, French 2, etc.

This contrasts against qualitative data analysis, where the focus is on words, phrases and expressions that can’t be reduced to numbers. If you’re interested in learning about qualitative analysis, check out our post and video here .

What is quantitative analysis used for?

Quantitative analysis is generally used for three purposes.

  • Firstly, it’s used to measure differences between groups . For example, the popularity of different clothing colours or brands.
  • Secondly, it’s used to assess relationships between variables . For example, the relationship between weather temperature and voter turnout.
  • And third, it’s used to test hypotheses in a scientifically rigorous way. For example, a hypothesis about the impact of a certain vaccine.

Again, this contrasts with qualitative analysis , which can be used to analyse people’s perceptions and feelings about an event or situation. In other words, things that can’t be reduced to numbers.

How does quantitative analysis work?

Well, since quantitative data analysis is all about analysing numbers, it’s no surprise that it involves statistics. Statistical analysis methods form the engine that powers quantitative analysis, and these methods can vary from pretty basic calculations (for example, averages and medians) to more sophisticated analyses (for example, correlations and regressions).

Sounds like gibberish? Don’t worry. We’ll explain all of that in this post. Importantly, you don’t need to be a statistician or math wiz to pull off a good quantitative analysis. We’ll break down all the technical mumbo jumbo in this post.


As I mentioned, quantitative analysis is powered by statistical analysis methods. There are two main “branches” of statistical methods that are used – descriptive statistics and inferential statistics. In your research, you might only use descriptive statistics, or you might use a mix of both, depending on what you’re trying to figure out. In other words, depending on your research questions, aims and objectives. I’ll explain how to choose your methods later.

So, what are descriptive and inferential statistics?

Well, before I can explain that, we need to take a quick detour to explain some lingo. To understand the difference between these two branches of statistics, you need to understand two important words. These words are population and sample .

First up, population . In statistics, the population is the entire group of people (or animals or organisations or whatever) that you’re interested in researching. For example, if you were interested in researching Tesla owners in the US, then the population would be all Tesla owners in the US.

However, it’s extremely unlikely that you’re going to be able to interview or survey every single Tesla owner in the US. Realistically, you’ll likely only get access to a few hundred, or maybe a few thousand owners using an online survey. This smaller group of accessible people whose data you actually collect is called your sample .

So, to recap – the population is the entire group of people you’re interested in, and the sample is the subset of the population that you can actually get access to. In other words, the population is the full chocolate cake , whereas the sample is a slice of that cake.

So, why is this sample-population thing important?

Well, descriptive statistics focus on describing the sample , while inferential statistics aim to make predictions about the population, based on the findings within the sample. In other words, we use one group of statistical methods – descriptive statistics – to investigate the slice of cake, and another group of methods – inferential statistics – to draw conclusions about the entire cake. There I go with the cake analogy again…

With that out the way, let’s take a closer look at each of these branches in more detail.

Descriptive statistics vs inferential statistics

Branch 1: Descriptive Statistics

Descriptive statistics serve a simple but critically important role in your research – to describe your data set – hence the name. In other words, they help you understand the details of your sample . Unlike inferential statistics (which we’ll get to soon), descriptive statistics don’t aim to make inferences or predictions about the entire population – they’re purely interested in the details of your specific sample .

When you’re writing up your analysis, descriptive statistics are the first set of stats you’ll cover, before moving on to inferential statistics. But, that said, depending on your research objectives and research questions , they may be the only type of statistics you use. We’ll explore that a little later.

So, what kind of statistics are usually covered in this section?

Some common statistical tests used in this branch include the following:

  • Mean – this is simply the mathematical average of a range of numbers.
  • Median – this is the midpoint in a range of numbers when the numbers are arranged in numerical order. If the data set makes up an odd number, then the median is the number right in the middle of the set. If the data set makes up an even number, then the median is the midpoint between the two middle numbers.
  • Mode – this is simply the most commonly occurring number in the data set.
  • Standard deviation – this metric indicates how dispersed the numbers are; in other words, how close to (or far from) the mean they tend to sit (you can see how each of these statistics is computed in the short sketch after this list). In cases where most of the numbers are quite close to the average, the standard deviation will be relatively low. Conversely, in cases where the numbers are scattered all over the place, the standard deviation will be relatively high.
  • Skewness – as the name suggests, skewness indicates how symmetrical a range of numbers is. In other words, do they tend to cluster into a smooth bell curve shape in the middle of the graph, or do they skew to the left or right?
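Here’s a minimal sketch of how these statistics can be computed in Python, using the standard-library `statistics` module and SciPy’s `skew`; the ten bodyweights are hypothetical illustration values (not the exact data set discussed below).

```python
# A minimal sketch computing common descriptive statistics in Python;
# the ten bodyweights (kg) are hypothetical illustration values.
import statistics
from scipy.stats import skew

weights = [55, 61, 65, 68, 72, 74, 77, 80, 86, 90]

print("mean:  ", statistics.mean(weights))
print("median:", statistics.median(weights))
print("mode:  ", statistics.multimode(weights))  # all most-common values
print("stdev: ", statistics.stdev(weights))      # sample standard deviation
print("skew:  ", skew(weights))
```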

Feeling a bit confused? Let’s look at a practical example using a small data set.

Descriptive statistics example data

On the left-hand side is the data set. This details the bodyweight of a sample of 10 people. On the right-hand side, we have the descriptive statistics. Let’s take a look at each of them.

First, we can see that the mean weight is 72.4 kilograms. In other words, the average weight across the sample is 72.4 kilograms. Straightforward.

Next, we can see that the median is very similar to the mean (the average). This suggests that this data set has a reasonably symmetrical distribution (in other words, a relatively smooth, centred distribution of weights, clustered towards the centre).

In terms of the mode , there is no mode in this data set. This is because each number is present only once and so there cannot be a “most common number”. If there were two people who were both 65 kilograms, for example, then the mode would be 65.

Next up is the standard deviation . 10.6 indicates that there’s quite a wide spread of numbers. We can see this quite easily by looking at the numbers themselves, which range from 55 to 90, which is quite a stretch from the mean of 72.4.

And lastly, the skewness of -0.2 tells us that the data is very slightly negatively skewed. This makes sense since the mean and the median are slightly different.

As you can see, these descriptive statistics give us some useful insight into the data set. Of course, this is a very small data set (only 10 records), so we can’t read into these statistics too much. Also, keep in mind that this is not a list of all possible descriptive statistics – just the most common ones.

But why do all of these numbers matter?

While these descriptive statistics are all fairly basic, they’re important for a few reasons:

  • Firstly, they help you get both a macro and micro-level view of your data. In other words, they help you understand both the big picture and the finer details.
  • Secondly, they help you spot potential errors in the data – for example, if an average is way higher than you’d expect, or responses to a question are highly varied, this can act as a warning sign that you need to double-check the data.
  • And lastly, these descriptive statistics help inform which inferential statistical techniques you can use, as those techniques depend on the skewness (in other words, the symmetry and normality) of the data.

Simply put, descriptive statistics are really important , even though the statistical techniques used are fairly basic. All too often at Grad Coach, we see students skimming over the descriptives in their eagerness to get to the more exciting inferential methods, and then landing up with some very flawed results.

Don’t be a sucker – give your descriptive statistics the love and attention they deserve!

Examples of descriptive statistics

Branch 2: Inferential Statistics

As I mentioned, while descriptive statistics are all about the details of your specific data set – your sample – inferential statistics aim to make inferences about the population . In other words, you’ll use inferential statistics to make predictions about what you’d expect to find in the full population.

What kind of predictions, you ask? Well, there are two common types of predictions that researchers try to make using inferential stats:

  • Firstly, predictions about differences between groups – for example, height differences between children grouped by their favourite meal or gender.
  • And secondly, relationships between variables – for example, the relationship between body weight and the number of hours a week a person does yoga.

In other words, inferential statistics (when done correctly), allow you to connect the dots and make predictions about what you expect to see in the real world population, based on what you observe in your sample data. For this reason, inferential statistics are used for hypothesis testing – in other words, to test hypotheses that predict changes or differences.

Inferential statistics are used to make predictions about what you’d expect to find in the full population, based on the sample.

Of course, when you’re working with inferential statistics, the composition of your sample is really important. In other words, if your sample doesn’t accurately represent the population you’re researching, then your findings won’t necessarily be very useful.

For example, if your population of interest is a mix of 50% male and 50% female, but your sample is 80% male, you can’t make inferences about the population based on your sample, since it’s not representative. This area of statistics is called sampling, but we won’t go down that rabbit hole here (it’s a deep one!) – we’ll save that for another post.

What statistics are usually used in this branch?

There are many, many different statistical analysis methods within the inferential branch and it’d be impossible for us to discuss them all here. So we’ll just take a look at some of the most common inferential statistical methods so that you have a solid starting point.

First up are T-tests. T-tests compare the means (the averages) of two groups of data to assess whether they’re statistically significantly different. In other words, is the difference between the two group means large enough that it’s unlikely to have occurred by chance alone?

This type of testing is very useful for understanding just how similar or different two groups of data are. For example, you might want to compare the mean blood pressure between two groups of people – one that has taken a new medication and one that hasn’t – to assess whether they are significantly different.
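A t-test like the blood pressure example can be sketched in a few lines with SciPy; the two samples below are hypothetical readings, invented purely for illustration.

```python
# A minimal independent-samples t-test sketch with SciPy; the two
# blood-pressure samples are hypothetical.
from scipy import stats

medicated = [118, 122, 115, 120, 117, 119]
control   = [128, 131, 125, 130, 127, 129]

t_stat, p_value = stats.ttest_ind(medicated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference in group means is significant
```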

Kicking things up a level, we have ANOVA, which stands for “analysis of variance”. This test is similar to a t-test in that it compares the means of various groups, but ANOVA allows you to analyse multiple groups, not just two. So it’s basically a t-test on steroids…

Next, we have correlation analysis. This type of analysis assesses the relationship between two variables. In other words, if one variable increases, does the other variable also increase, decrease or stay the same? For example, if the average temperature goes up, do average ice cream sales increase too? We’d expect some sort of relationship between these two variables intuitively, but correlation analysis allows us to measure that relationship scientifically.

Lastly, we have regression analysis – this is quite similar to correlation in that it assesses the relationship between variables, but it goes a step further by modelling that relationship: it estimates how much one variable changes as another changes and, with careful study design, can help untangle cause and effect. In other words, does one variable actually drive the other, or do they just happen to move together thanks to another force? Just because two variables correlate doesn’t necessarily mean that one causes the other.
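The temperature and ice cream example translates directly into code. The sketch below computes a Pearson correlation and then fits a simple regression line with SciPy; all figures are hypothetical.

```python
# A minimal sketch contrasting correlation and simple linear regression;
# the temperature and ice cream sales figures are hypothetical.
from scipy import stats

temperature = [18, 21, 24, 27, 30, 33]
sales       = [120, 135, 160, 180, 210, 230]

r, p_value = stats.pearsonr(temperature, sales)   # strength of the relationship
print(f"correlation r = {r:.2f} (p = {p_value:.4f})")

result = stats.linregress(temperature, sales)     # line of best fit
print(f"sales ~= {result.slope:.1f} * temperature + {result.intercept:.1f}")
# Note: a strong fit alone doesn't prove temperature causes sales to rise
```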

Stats overload…

I hear you. To make this all a little more tangible, let’s take a look at an example of a correlation in action.

Here’s a scatter plot demonstrating the correlation (relationship) between weight and height. Intuitively, we’d expect there to be some relationship between these two variables, which is what we see in this scatter plot. In other words, the results tend to cluster together in a diagonal line from bottom left to top right.

Sample correlation

As I mentioned, these are just a handful of inferential techniques – there are many, many more. Importantly, each statistical method has its own assumptions and limitations.

For example, some methods only work with normally distributed (parametric) data, while other methods are designed specifically for non-parametric data. And that’s exactly why descriptive statistics are so important – they’re the first step to knowing which inferential techniques you can and can’t use.

Remember that every statistical method has its own assumptions and limitations,  so you need to be aware of these.

How to choose the right analysis method

To choose the right statistical methods, you need to think about two important factors :

  • The type of quantitative data you have (specifically, level of measurement and the shape of the data). And,
  • Your research questions and hypotheses

Let’s take a closer look at each of these.

Factor 1 – Data type

The first thing you need to consider is the type of data you’ve collected (or the type of data you will collect). By data types, I’m referring to the four levels of measurement – namely, nominal, ordinal, interval and ratio. If you’re not familiar with this lingo, check out the video below.

Why does this matter?

Well, because different statistical methods and techniques require different types of data. This is one of the “assumptions” I mentioned earlier – every method has its assumptions regarding the type of data.

For example, some techniques work with categorical data (for example, yes/no type questions, or gender or ethnicity), while others work with continuous numerical data (for example, age, weight or income) – and, of course, some work with multiple data types.

If you try to use a statistical method that doesn’t support the data type you have, your results will be largely meaningless . So, make sure that you have a clear understanding of what types of data you’ve collected (or will collect). Once you have this, you can then check which statistical methods would support your data types here .

If you haven’t collected your data yet, you can work in reverse and look at which statistical method would give you the most useful insights, and then design your data collection strategy to collect the correct data types.

Another important factor to consider is the shape of your data . Specifically, does it have a normal distribution (in other words, is it a bell-shaped curve, centred in the middle) or is it very skewed to the left or the right? Again, different statistical techniques work for different shapes of data – some are designed for symmetrical data while others are designed for skewed data.

This is another reminder of why descriptive statistics are so important – they tell you all about the shape of your data.

Factor 2: Your research questions

The next thing you need to consider is your specific research questions, as well as your hypotheses (if you have some). The nature of your research questions and research hypotheses will heavily influence which statistical methods and techniques you should use.

If you’re just interested in understanding the attributes of your sample (as opposed to the entire population), then descriptive statistics are probably all you need. For example, if you just want to assess the means (averages) and medians (centre points) of variables in a group of people.

On the other hand, if you aim to understand differences between groups or relationships between variables and to infer or predict outcomes in the population, then you’ll likely need both descriptive statistics and inferential statistics.

So, it’s really important to get very clear about your research aims and research questions, as well your hypotheses – before you start looking at which statistical techniques to use.

Never shoehorn a specific statistical technique into your research just because you like it or have some experience with it. Your choice of methods must align with all the factors we’ve covered here.

Time to recap…

You’re still with me? That’s impressive. We’ve covered a lot of ground here, so let’s recap on the key points:

  • Quantitative data analysis is all about  analysing number-based data  (which includes categorical and numerical data) using various statistical techniques.
  • The two main  branches  of statistics are  descriptive statistics  and  inferential statistics . Descriptives describe your sample, whereas inferentials make predictions about what you’ll find in the population.
  • Common  descriptive statistical methods include  mean  (average),  median , standard  deviation  and  skewness .
  • Common  inferential statistical methods include  t-tests ,  ANOVA ,  correlation  and  regression  analysis.
  • To choose the right statistical methods and techniques, you need to consider the  type of data you’re working with , as well as your  research questions  and hypotheses.


Data analysis

[Figure: Data analysis at the Armstrong Flight Research Center in Palmdale, California]

Data analysis is the process of systematically collecting, cleaning, transforming, describing, modeling, and interpreting data, generally employing statistical techniques. It is an important part of both scientific research and business, where demand has grown in recent years for data-driven decision making. Data analysis techniques are used to gain useful insights from datasets, which can then be used to make operational decisions or guide future research. With the rise of “big data,” the storage of vast quantities of data in large databases and data warehouses, there is an increasing need to apply data analysis techniques to generate insights about volumes of data too large to be manipulated by instruments of low information-processing capacity.

Datasets are collections of information. Generally, data and datasets are themselves collected to help answer questions, make decisions, or otherwise inform reasoning. The rise of information technology has led to the generation of vast amounts of data of many kinds, such as text, pictures, videos, personal information, account data, and metadata, the last of which provide information about other data. It is common for apps and websites to collect data about how their products are used or about the people using their platforms. Consequently, there is vastly more data being collected today than at any other time in human history. A single business may track billions of interactions with millions of consumers at hundreds of locations with thousands of employees and any number of products. Analyzing that volume of data is generally only possible using specialized computational and statistical techniques.

The desire for businesses to make the best use of their data has led to the development of the field of business intelligence , which covers a variety of tools and techniques that allow businesses to perform data analysis on the information they collect.

For data to be analyzed, it must first be collected and stored. Raw data must be processed into a format that can be used for analysis and be cleaned so that errors and inconsistencies are minimized. Data can be stored in many ways, but one of the most useful is in a database. A database is a collection of interrelated data organized so that certain records (collections of data related to a single entity) can be retrieved on the basis of various criteria. The most familiar kind of database is the relational database, which stores data in tables with rows that represent records (tuples) and columns that represent fields (attributes). A query is a command that retrieves a subset of the information in the database according to certain criteria. A query may retrieve only records that meet certain criteria, or it may join fields from records across multiple tables by use of a common field.

Frequently, data from many sources is collected into large archives of data called data warehouses. The process of moving data from its original sources (such as databases) to a centralized location (generally a data warehouse) is called ETL (which stands for extract, transform, and load), as sketched in the code example after the list below.

  • The extraction step occurs when you identify and copy or export the desired data from its source, such as by running a database query to retrieve the desired records.
  • The transformation step is the process of cleaning the data so that they fit the analytical need for the data and the schema of the data warehouse. This may involve changing formats for certain fields, removing duplicate records, or renaming fields, among other processes.
  • Finally, the clean data are loaded into the data warehouse, where they may join vast amounts of historical data and data from other sources.
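As a rough illustration, here is a minimal ETL sketch in Python using pandas and the built-in sqlite3 module; the source file, column names, and warehouse table are all hypothetical.

```python
# A minimal ETL sketch with pandas and sqlite3; the CSV source, column
# names, and warehouse table are hypothetical.
import sqlite3
import pandas as pd

# Extract: copy the desired records out of the source
df = pd.read_csv("orders.csv")                       # hypothetical source file

# Transform: clean the data to fit the warehouse schema
df = df.drop_duplicates()                            # remove duplicate records
df["order_date"] = pd.to_datetime(df["order_date"])  # normalize a date field
df = df.rename(columns={"amt": "amount"})            # rename a field

# Load: write the clean data into the warehouse
warehouse = sqlite3.connect("warehouse.db")
df.to_sql("orders", warehouse, if_exists="append", index=False)
warehouse.close()
```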

After data are effectively collected and cleaned, they can be analyzed with a variety of techniques. Analysis often begins with descriptive and exploratory data analysis. Descriptive data analysis uses statistics to organize and summarize data, making it easier to understand the broad qualities of the dataset. Exploratory data analysis looks for insights into the data that may arise from descriptions of distribution, central tendency, or variability for a single data field. Further relationships between data may become apparent by examining two fields together. Visualizations may be employed during analysis, such as histograms (graphs in which the length of a bar indicates a quantity) or stem-and-leaf plots (which divide data into buckets, or “stems,” with individual data points serving as “leaves” on the stem).


Data analysis frequently goes beyond descriptive analysis to predictive analysis, making predictions about the future using predictive modeling techniques. Predictive modeling uses machine learning, regression analysis methods (which mathematically calculate the relationship between an independent variable and a dependent variable), and classification techniques to identify trends and relationships among variables. Predictive analysis may involve data mining, which is the process of discovering interesting or useful patterns in large volumes of information. Data mining often involves cluster analysis, which tries to find natural groupings within data, and anomaly detection, which detects instances in data that are unusual and stand out from other patterns. It may also look for association rules within datasets: strong relationships among variables in the data.
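As a small illustration of data mining, the sketch below runs k-means cluster analysis with scikit-learn to find natural groupings in a hypothetical customer dataset; the spend and visit figures are invented.

```python
# A minimal cluster-analysis sketch: k-means with scikit-learn on a
# hypothetical dataset of customer spend and visit frequency.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [20, 1], [22, 2], [25, 1],     # low spend, infrequent visits
    [80, 9], [85, 10], [90, 8],    # high spend, frequent visits
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("cluster labels: ", kmeans.labels_)
print("cluster centres:", kmeans.cluster_centers_)
```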

8 Types of Data Analysis

The different types of data analysis include descriptive, diagnostic, exploratory, inferential, predictive, causal, mechanistic and prescriptive. Here’s what you need to know about each one.

Benedict Neo

Data analysis is an aspect of data science and  data analytics that is all about analyzing data for different kinds of purposes. The data analysis process involves inspecting, cleaning, transforming and  modeling data to draw useful insights from it.

Types of Data Analysis

  • Descriptive analysis
  • Diagnostic analysis
  • Exploratory analysis
  • Inferential analysis
  • Predictive analysis
  • Causal analysis
  • Mechanistic analysis
  • Prescriptive analysis

With its multiple facets, methodologies and techniques, data analysis is used in a variety of fields, including energy, healthcare and marketing, among others. As businesses thrive under the influence of technological advancements in data analytics, data analysis plays a huge role in decision-making, providing a better, faster and more effective system that minimizes risks and reduces human biases.

That said, there are different kinds of data analysis with different goals. We’ll examine each one below.

Two Camps of Data Analysis

Data analysis can be divided into two camps, according to the book R for Data Science :

  • Hypothesis Generation: This involves looking deeply at the data and combining your domain knowledge to generate  hypotheses about why the data behaves the way it does.
  • Hypothesis Confirmation: This involves using a precise mathematical model to generate falsifiable predictions with statistical sophistication to confirm your prior hypotheses.

More on Data Analysis: Data Analyst vs. Data Scientist: Similarities and Differences Explained

Data analysis can be separated and organized into types, arranged in an increasing order of complexity.  

1. Descriptive Analysis

The goal of descriptive analysis is to describe or summarize a set of data. Here’s what you need to know:

  • Descriptive analysis is the very first analysis performed in the data analysis process.
  • It generates simple summaries of samples and measurements.
  • It involves common, descriptive statistics like measures of central tendency, variability, frequency and position.

Descriptive Analysis Example

Take the Covid-19 statistics page on Google, for example. The line graph is a pure summary of the cases/deaths, a presentation and description of the population of a particular country infected by the virus.

Descriptive analysis is the first step in analysis where you summarize and describe the data you have using descriptive statistics, and the result is a simple presentation of your data.

2. Diagnostic Analysis  

Diagnostic analysis seeks to answer the question “Why did this happen?” by taking a more in-depth look at data to uncover subtle patterns. Here’s what you need to know:

  • Diagnostic analysis typically comes after descriptive analysis, taking initial findings and investigating why certain patterns in data happen. 
  • Diagnostic analysis may involve analyzing other related data sources, including past data, to reveal more insights into current data trends.  
  • Diagnostic analysis is ideal for further exploring patterns in data to explain anomalies .  

Diagnostic Analysis Example

A footwear store wants to review its  website traffic levels over the previous 12 months. Upon compiling and assessing the data, the company’s marketing team finds that June experienced above-average levels of traffic while July and August witnessed slightly lower levels of traffic. 

To find out why this difference occurred, the marketing team takes a deeper look. Team members break down the data to focus on specific categories of footwear. In the month of June, they discovered that pages featuring sandals and other beach-related footwear received a high number of views while these numbers dropped in July and August. 

Marketers may also review other factors like seasonal changes and company sales events to see if other variables could have contributed to this trend.    

3. Exploratory Analysis (EDA)

Exploratory analysis involves examining or  exploring data and finding relationships between variables that were previously unknown. Here’s what you need to know:

  • EDA helps you discover relationships between measures in your data. These relationships are not evidence of causation, as denoted by the phrase, “Correlation doesn’t imply causation.”
  • It’s useful for discovering new connections and forming hypotheses. It drives design planning and data collection .

Exploratory Analysis Example

Climate change is an increasingly important topic as the global temperature has gradually risen over the years. One example of an exploratory data analysis on climate change involves taking the rise in temperature from 1950 to 2020 alongside measures of human activity and industrialization to look for relationships in the data. For example, you might compare the growth in the number of factories, cars on the road and airplane flights with the rise in temperature to see how strongly they correlate.

Exploratory analysis explores data to find relationships between measures without identifying the cause. It’s most useful when formulating hypotheses. 

4. Inferential Analysis

Inferential analysis involves using a small sample of data to infer information about a larger population of data.

The goal of statistical modeling itself is all about using a small amount of information to extrapolate and generalize information to a larger group. Here’s what you need to know:

  • Inferential analysis involves using sample data that is representative of a population and attaching a measure of uncertainty, such as a standard deviation, to your estimation.
  • The accuracy of inference depends heavily on your sampling scheme: if the sample isn’t representative of the population, the generalization will be inaccurate (see the short sketch after this list).
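To make the sample-to-population leap concrete, the sketch below simulates a large synthetic population, draws a random sample of 500, and builds a confidence interval for the population mean; every number here is synthetic, invented only to illustrate the idea.

```python
# A minimal inference sketch: estimate a population mean from a sample.
# The population here is simulated, so every number is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
population = rng.normal(loc=7.5, scale=1.2, size=1_000_000)  # "true" sleep hours

sample = rng.choice(population, size=500, replace=False)     # the people surveyed
ci = stats.t.interval(0.95, df=len(sample) - 1,
                      loc=sample.mean(), scale=stats.sem(sample))
print("sample mean:", round(float(sample.mean()), 2))
print("95% CI for the population mean:", ci)
```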

Inferential Analysis Example

A psychological study on the benefits of sleep might have a total of 500 people involved. When the researchers followed up with the candidates, those who slept seven to nine hours reported better overall attention spans and well-being, while those who slept less or more than the given range suffered from reduced attention spans and energy. This study of 500 people covers just a tiny portion of the 7 billion people in the world, and its conclusions are thus an inference about the larger population.

Inferential analysis extrapolates and generalizes the information of the larger group with a smaller sample to generate analysis and predictions. 
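As a rough sketch of how such an inference is quantified, the following Python snippet estimates a population mean with a 95% confidence interval from a synthetic sample of 500 scores (the data are generated purely for illustration):

```python
# A minimal inferential sketch: estimate a population mean from a sample
# and attach a measure of uncertainty. The sample is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=7.8, scale=1.1, size=500)  # hypothetical sleep scores

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"sample mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```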

5. Predictive Analysis

Predictive analysis involves using historical or current data to find patterns and make predictions about the future. Here’s what you need to know:

  • The accuracy of the predictions depends on the input variables.
  • Accuracy also depends on the types of models. A linear model might work well in some cases, and in other cases it might not.
  • Using a variable to predict another one doesn’t denote a causal relationship.

Predictive Analysis Example

The 2020 United States election is a popular topic and many prediction models are built to predict the winning candidate. FiveThirtyEight did this to forecast the 2016 and 2020 elections. Prediction analysis for an election would require input variables such as historical polling data, trends and current polling data in order to return a good prediction. Something as large as an election wouldn’t just be using a linear model, but a complex model with certain tunings to best serve its purpose.
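Election forecasts rely on complex models, but the underlying idea can be shown with a toy sketch: fit a simple model to historical values and extrapolate. The monthly traffic figures below are invented for illustration:

```python
# A toy predictive sketch: fit a linear trend to 12 months of history and
# forecast month 13. The traffic figures are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)
visits = np.array([42, 44, 47, 45, 50, 58, 49, 48, 52, 54, 57, 60])

model = LinearRegression().fit(months, visits)
print("forecast for month 13:", model.predict([[13]])[0])
```

A real forecast would use many more input variables and a model chosen to fit them, as noted above.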

6. Causal Analysis

Causal analysis looks at the cause and effect of relationships between variables and is focused on finding the cause of a correlation. This way, researchers can examine how a change in one variable affects another. Here’s what you need to know:

  • To find the cause, you have to question whether the observed correlations driving your conclusion are valid. Just looking at the surface data won’t help you discover the hidden mechanisms underlying the correlations.
  • Causal analysis is applied in randomized studies focused on identifying causation.
  • Causal analysis is the gold standard in data analysis and scientific studies where the cause of a phenomenon is to be extracted and singled out, like separating wheat from chaff.
  • Good data is hard to find and requires expensive research and studies. These studies are analyzed in aggregate (multiple groups), and the observed relationships are just average effects (mean) of the whole population. This means the results might not apply to everyone.

Causal Analysis Example  

Say you want to test out whether a new drug improves human strength and focus. To do that, you perform randomized control trials for the drug to test its effect. You compare the sample of candidates for your new drug against the candidates receiving a mock control drug through a few tests focused on strength and overall focus and attention. This will allow you to observe how the drug affects the outcome. 
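A sketch of the analysis behind such a trial might compare the two groups with an independent-samples t-test. The scores below are synthetic stand-ins for real measurements:

```python
# A minimal causal-analysis sketch: compare treatment vs. control scores
# from a randomized trial. Both samples are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=105, scale=10, size=50)  # received the drug
control = rng.normal(loc=100, scale=10, size=50)    # received the placebo

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Because group assignment was randomized, a small p-value supports a
# causal reading of the difference, not just an association.
```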

7. Mechanistic Analysis

Mechanistic analysis is used to understand the exact changes in variables that lead to changes in other variables. In some ways, it is a predictive analysis, but it’s modified to tackle studies that require high precision and meticulous methodologies for physical or engineering science. Here’s what you need to know:

  • It’s applied in physical or engineering sciences, in situations that require high precision and leave little room for error, where the only noise in the data is measurement error.
  • It’s designed to understand a biological or behavioral process, the pathophysiology of a disease or the mechanism of action of an intervention. 

Mechanistic Analysis Example

Say an experiment is done to simulate safe and effective nuclear fusion to power the world. A mechanistic analysis of the study would entail a precise balance of controlling and manipulating variables with highly accurate measures of both variables and the desired outcomes. It’s this intricate and meticulous modus operandi toward these big topics that allows for scientific breakthroughs and advancement of society.

8. Prescriptive Analysis  

Prescriptive analysis compiles insights from other previous data analyses and determines actions that teams or companies can take to prepare for predicted trends. Here’s what you need to know: 

  • Prescriptive analysis may come right after predictive analysis, but it may involve combining many different data analyses. 
  • Companies need advanced technology and plenty of resources to conduct prescriptive analysis. Artificial intelligence systems that process data and adjust automated tasks are an example of the technology required to perform prescriptive analysis.  

Prescriptive Analysis Example

Prescriptive analysis is pervasive in everyday life, driving the curated content users consume on social media. On platforms like TikTok and Instagram, algorithms can apply prescriptive analysis to review past content a user has engaged with and the kinds of behaviors they exhibited with specific posts. Based on these factors, an algorithm seeks out similar content that is likely to elicit the same response and recommends it on a user’s personal feed.


When to Use the Different Types of Data Analysis  

  • Descriptive analysis summarizes the data at hand and presents your data in a comprehensible way.
  • Diagnostic analysis takes a more detailed look at data to reveal why certain patterns occur, making it a good method for explaining anomalies. 
  • Exploratory data analysis helps you discover correlations and relationships between variables in your data.
  • Inferential analysis is for generalizing the larger population with a smaller sample size of data.
  • Predictive analysis helps you make predictions about the future with data.
  • Causal analysis emphasizes finding the cause of a correlation between variables.
  • Mechanistic analysis is for measuring the exact changes in variables that lead to other changes in other variables.
  • Prescriptive analysis combines insights from different data analyses to develop a course of action teams and companies can take to capitalize on predicted outcomes. 

A few important tips to remember about data analysis include:

  • Correlation doesn’t imply causation.
  • EDA helps discover new connections and form hypotheses.
  • Accuracy of inference depends on the sampling scheme.
  • A good prediction depends on the right input variables.
  • A simple linear model with enough data usually does the trick.
  • Using a variable to predict another doesn’t denote causal relationships.
  • Good data is hard to find, and to produce it requires expensive research.
  • Results from studies are done in aggregate and are average effects and might not apply to everyone.

Frequently Asked Questions

What is an example of data analysis?

A marketing team reviews a company’s web traffic over the past 12 months. To understand why sales rise and fall during certain months, the team breaks down the data to look at shoe type, seasonal patterns and sales events. Based on this in-depth analysis, the team can determine variables that influenced web traffic and make adjustments as needed.

How do you know which data analysis method to use?

Selecting a data analysis method depends on the goals of the analysis and the complexity of the task, among other factors. It’s best to assess the circumstances and consider the pros and cons of each type of data analysis before moving forward with a particular method.


Basic statistical tools in research and data analysis

Zulfiqar Ali

Department of Anaesthesiology, Division of Neuroanaesthesiology, Sheri Kashmir Institute of Medical Sciences, Soura, Srinagar, Jammu and Kashmir, India

S Bala Bhaskar

1 Department of Anaesthesiology and Critical Care, Vijayanagar Institute of Medical Sciences, Bellary, Karnataka, India

Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

INTRODUCTION

Statistics is a branch of science that deals with the collection, organisation, analysis of data and drawing of inferences from the samples to the whole population.[ 1 ] This requires a proper design of the study, an appropriate selection of the study sample and choice of a suitable statistical test. An adequate knowledge of statistics is necessary for proper designing of an epidemiological study or a clinical trial. Improper statistical methods may result in erroneous conclusions which may lead to unethical practice.[ 2 ]

A variable is a characteristic that varies from one individual member of a population to another.[ 3 ] Variables such as height and weight are measured by some type of scale, convey quantitative information and are called quantitative variables. Sex and eye colour give qualitative information and are called qualitative variables[ 3 ] [ Figure 1 ].

[Figure 1: Classification of variables]

Quantitative variables

Quantitative or numerical data are subdivided into discrete and continuous measurements. Discrete numerical data are recorded as a whole number such as 0, 1, 2, 3,… (integer), whereas continuous data can assume any value. Observations that can be counted constitute the discrete data and observations that can be measured constitute the continuous data. Examples of discrete data are number of episodes of respiratory arrests or the number of re-intubations in an intensive care unit. Similarly, examples of continuous data are the serial serum glucose levels, partial pressure of oxygen in arterial blood and the oesophageal temperature.

A hierarchical scale of increasing precision can be used for observing and recording the data which is based on categorical, ordinal, interval and ratio scales [ Figure 1 ].

Categorical or nominal variables are unordered. The data are merely classified into categories and cannot be arranged in any particular order. If only two categories exist (as in gender male and female), it is called as a dichotomous (or binary) data. The various causes of re-intubation in an intensive care unit due to upper airway obstruction, impaired clearance of secretions, hypoxemia, hypercapnia, pulmonary oedema and neurological impairment are examples of categorical variables.

Ordinal variables have a clear ordering between the variables. However, the ordered data may not have equal intervals. Examples are the American Society of Anesthesiologists status or Richmond agitation-sedation scale.

Interval variables are similar to an ordinal variable, except that the intervals between the values of the interval variable are equally spaced. A good example of an interval scale is the Fahrenheit degree scale used to measure temperature. With the Fahrenheit scale, the difference between 70° and 75° is equal to the difference between 80° and 85°: The units of measurement are equal throughout the full range of the scale.

Ratio scales are similar to interval scales, in that equal differences between scale values have equal quantitative meaning. However, ratio scales also have a true zero point, which gives them an additional property. For example, the system of centimetres is an example of a ratio scale. There is a true zero point and the value of 0 cm means a complete absence of length. The thyromental distance of 6 cm in an adult may be twice that of a child in whom it may be 3 cm.

STATISTICS: DESCRIPTIVE AND INFERENTIAL STATISTICS

Descriptive statistics[ 4 ] try to describe the relationship between variables in a sample or population. Descriptive statistics provide a summary of data in the form of mean, median and mode. Inferential statistics[ 4 ] use a random sample of data taken from a population to describe and make inferences about the whole population. It is valuable when it is not possible to examine each member of an entire population. Examples of descriptive and inferential statistics are illustrated in Table 1 .

[Table 1: Example of descriptive and inferential statistics]

Descriptive statistics

The extent to which the observations cluster around a central location is described by the central tendency and the spread towards the extremes is described by the degree of dispersion.

Measures of central tendency

The measures of central tendency are mean, median and mode.[ 6 ] Mean (or the arithmetic average) is the sum of all the scores divided by the number of scores. The mean may be influenced profoundly by extreme values. For example, the average stay of organophosphorus poisoning patients in ICU may be influenced by a single patient who stays in ICU for around 5 months because of septicaemia. Such extreme values are called outliers. The formula for the mean is

$$\text{Mean} = \frac{\sum x}{n}$$

where x = each observation and n = number of observations. Median[ 6 ] is defined as the middle of a distribution in a ranked data (with half of the variables in the sample above and half below the median value) while mode is the most frequently occurring variable in a distribution.

Range defines the spread, or variability, of a sample.[ 7 ] It is described by the minimum and maximum values of the variables. If we rank the data and, after ranking, group the observations into percentiles, we can get better information on the pattern of spread of the variables. In percentiles, we rank the observations into 100 equal parts. We can then describe 25%, 50%, 75% or any other percentile amount. The median is the 50th percentile. The interquartile range will be the observations in the middle 50% of the observations about the median (25th-75th percentile).

Variance[ 7 ] is a measure of how spread out the distribution is. It gives an indication of how closely an individual observation clusters about the mean value. The variance of a population is defined by the following formula:

$$\sigma^2 = \frac{\sum (X_i - X)^2}{N}$$

where σ 2 is the population variance, X is the population mean, X i is the i th element from the population and N is the number of elements in the population. The variance of a sample is defined by a slightly different formula:

$$s^2 = \frac{\sum (x_i - x)^2}{n - 1}$$

where s 2 is the sample variance, x is the sample mean, x i is the i th element from the sample and n is the number of elements in the sample. The formula for the variance of a population has ‘ N ’ as the denominator, whereas the sample formula uses ‘ n −1’. The expression ‘ n −1’ is known as the degrees of freedom and is one less than the number of observations. Each observation is free to vary, except the last one, which must take a defined value. The variance is measured in squared units. To make the interpretation of the data simple and to retain the basic unit of observation, the square root of variance is used. The square root of the variance is the standard deviation (SD).[ 8 ] The SD of a population is defined by the following formula:

$$\sigma = \sqrt{\frac{\sum (X_i - X)^2}{N}}$$

where σ is the population SD, X is the population mean, X i is the i th element from the population and N is the number of elements in the population. The SD of a sample is defined by a slightly different formula:

$$s = \sqrt{\frac{\sum (x_i - x)^2}{n - 1}}$$

where s is the sample SD, x is the sample mean, x i is the i th element from the sample and n is the number of elements in the sample. An example for calculation of variation and SD is illustrated in Table 2 .

[Table 2: Example of mean, variance, standard deviation]
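A few lines of Python reproduce these measures with the standard library; the length-of-stay values below are invented, and the single outlier shows how the mean is pulled above the median:

```python
# Descriptive statistics on a small, invented sample of ICU stays (days).
import statistics

stays = [2, 3, 3, 4, 5, 6, 21]  # the 21-day outlier inflates the mean

print("mean:", statistics.mean(stays))      # influenced by the outlier
print("median:", statistics.median(stays))  # robust to the outlier
print("mode:", statistics.mode(stays))
print("sample variance (n - 1):", statistics.variance(stays))
print("sample SD:", statistics.stdev(stays))
```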

Normal distribution or Gaussian distribution

Most of the biological variables usually cluster around a central value, with symmetrical positive and negative deviations about this point.[ 1 ] The standard normal distribution curve is a symmetrical, bell-shaped curve. In a normal distribution curve, about 68% of the scores fall within 1 SD of the mean, around 95% within 2 SDs and about 99.7% within 3 SDs [ Figure 2 ].

[Figure 2: Normal distribution curve]
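These percentages can be verified numerically from the normal distribution; this small SciPy sketch computes the probability mass within 1, 2 and 3 SDs of the mean:

```python
# Probability mass within k standard deviations of the mean of a
# standard normal distribution (the 68/95/99.7 rule).
from scipy.stats import norm

for k in (1, 2, 3):
    mass = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {mass:.4f}")
# Prints approximately 0.6827, 0.9545 and 0.9973.
```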

Skewed distribution

It is a distribution with an asymmetry of the variables about its mean. In a negatively skewed distribution [ Figure 3 ], the mass of the distribution is concentrated on the right, producing a longer left tail. In a positively skewed distribution [ Figure 3 ], the mass of the distribution is concentrated on the left, producing a longer right tail.

[Figure 3: Curves showing negatively skewed and positively skewed distribution]

Inferential statistics

In inferential statistics, data are analysed from a sample to make inferences in the larger collection of the population. The purpose is to answer or test the hypotheses. A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. Hypothesis tests are thus procedures for making rational decisions about the reality of observed effects.

Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty).

In inferential statistics, the term ‘null hypothesis’ ( H 0 ‘ H-naught ,’ ‘ H-null ’) denotes that there is no relationship (difference) between the population variables in question.[ 9 ]

The alternative hypothesis ( H 1 or H a ) denotes that a relationship (difference) between the variables is expected to exist.[ 9 ]

The P value (or the calculated probability) is the probability of the event occurring by chance if the null hypothesis is true. The P value is a number between 0 and 1 and is interpreted by researchers in deciding whether to reject or retain the null hypothesis [ Table 3 ].

[Table 3: P values with interpretation]

If the P value is less than the arbitrarily chosen value (known as α or the significance level), the null hypothesis (H0) is rejected [ Table 4 ]. However, if the null hypothesis (H0) is incorrectly rejected, this is known as a Type I error.[ 11 ] Further details regarding the alpha error, beta error and sample size calculation, and the factors influencing them, are dealt with in another section of this issue by Das S et al .[ 12 ]

[Table 4: Illustration for null hypothesis]

PARAMETRIC AND NON-PARAMETRIC TESTS

Numerical data (quantitative variables) that are normally distributed are analysed with parametric tests.[ 13 ]

Two most basic prerequisites for parametric statistical analysis are:

  • The assumption of normality which specifies that the means of the sample group are normally distributed
  • The assumption of equal variance which specifies that the variances of the samples and of their corresponding population are equal.

However, if the distribution of the sample is skewed towards one side or the distribution is unknown due to the small sample size, non-parametric[ 14 ] statistical techniques are used. Non-parametric tests are used to analyse ordinal and categorical data.

Parametric tests

The parametric tests assume that the data are on a quantitative (numerical) scale, with a normal distribution of the underlying population. The samples have the same variance (homogeneity of variances). The samples are randomly drawn from the population, and the observations within a group are independent of each other. The commonly used parametric tests are the Student's t -test, analysis of variance (ANOVA) and repeated measures ANOVA.

Student's t -test

Student's t -test is used to test the null hypothesis that there is no difference between the means of the two groups. It is used in three circumstances:

  • To test if a sample mean (as an estimate of the population mean) differs significantly from a given population mean (the one-sample t -test). The test statistic is

$$t = \frac{X - u}{SE}$$

where X = sample mean, u = population mean and SE = standard error of the mean.

  • To test if the population means estimated by two independent samples differ significantly (the unpaired t -test). The test statistic is

$$t = \frac{X_1 - X_2}{SE}$$

where X 1 − X 2 is the difference between the means of the two groups and SE denotes the standard error of the difference.

  • To test if the population means estimated by two dependent samples differ significantly (the paired t -test). A usual setting for paired t -test is when measurements are made on the same subjects before and after a treatment.

The formula for paired t -test is:

$$t = \frac{d}{SE}$$

where d is the mean difference and SE denotes the standard error of this difference.

The group variances can be compared using the F -test. The F -test is the ratio of variances (var 1/var 2). If F differs significantly from 1.0, then it is concluded that the group variances differ significantly.
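In practice these tests are rarely computed by hand. A sketch using SciPy, with synthetic measurements invented for illustration, runs all three variants:

```python
# The three t-test variants in SciPy, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(120, 15, size=30)       # e.g., pre-treatment values
after = before - rng.normal(5, 8, size=30)  # paired post-treatment values
other = rng.normal(112, 15, size=30)        # an independent comparison group

print(stats.ttest_1samp(before, popmean=115))  # one-sample t-test
print(stats.ttest_ind(before, other))          # unpaired (independent) t-test
print(stats.ttest_rel(before, after))          # paired t-test
```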

Analysis of variance

The Student's t -test cannot be used for comparison of three or more groups. The purpose of ANOVA is to test if there is any significant difference between the means of two or more groups.

In ANOVA, we study two variances – (a) between-group variability and (b) within-group variability. The within-group variability (error variance) is the variation that cannot be accounted for in the study design. It is based on random differences present in our samples.

However, the between-group variability (or effect variance) is the result of our treatment. These two estimates of variance are compared using the F-test.

A simplified formula for the F statistic is:

$$F = \frac{MS_b}{MS_w}$$

where MS b is the mean squares between the groups and MS w is the mean squares within groups.
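As a minimal illustration, SciPy can run a one-way ANOVA on three small groups (the values below are invented):

```python
# One-way ANOVA comparing the means of three invented groups.
from scipy import stats

g1 = [23, 25, 27, 22, 26]
g2 = [30, 31, 29, 32, 28]
g3 = [24, 26, 25, 27, 23]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```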

Repeated measures analysis of variance

As with ANOVA, repeated measures ANOVA analyses the equality of means of three or more groups. However, repeated measures ANOVA is used when all variables of a sample are measured under different conditions or at different points in time.

As the variables are measured from a sample at different points of time, the measurement of the dependent variable is repeated. Using a standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures: The data violate the ANOVA assumption of independence. Hence, in the measurement of repeated dependent variables, repeated measures ANOVA should be used.

Non-parametric tests

When the assumptions of normality are not met and the sample means are not normally distributed, parametric tests can lead to erroneous results. Non-parametric tests (distribution-free tests) are used in such situations as they do not require the normality assumption.[ 15 ] Non-parametric tests may fail to detect a significant difference when compared with a parametric test. That is, they usually have less power.

As is done for the parametric tests, the test statistic is compared with known values for the sampling distribution of that statistic and the null hypothesis is accepted or rejected. The types of non-parametric analysis techniques and the corresponding parametric analysis techniques are delineated in Table 5 .

[Table 5: Analogue of parametric and non-parametric tests]

Median test for one sample: The sign test and Wilcoxon's signed rank test

The sign test and Wilcoxon's signed rank test are used for median tests of one sample. These tests examine whether one instance of sample data is greater or smaller than the median reference value.

This test examines the hypothesis about the median θ0 of a population. It tests the null hypothesis H0: θ = θ0. When the observed value (Xi) is greater than the reference value (θ0), it is marked with a + sign. If the observed value is smaller than the reference value, it is marked with a − sign. If the observed value is equal to the reference value (θ0), it is eliminated from the sample.

If the null hypothesis is true, there will be an equal number of + signs and − signs.

The sign test ignores the actual values of the data and only uses + or − signs. Therefore, it is useful when it is difficult to measure the values.

Wilcoxon's signed rank test

There is a major limitation of sign test as we lose the quantitative information of the given data and merely use the + or – signs. Wilcoxon's signed rank test not only examines the observed values in comparison with θ0 but also takes into consideration the relative sizes, adding more statistical power to the test. As in the sign test, if there is an observed value that is equal to the reference value θ0, this observed value is eliminated from the sample.

Wilcoxon's rank sum test ranks all data points in order, calculates the rank sum of each sample and compares the difference in the rank sums.

Mann-Whitney test

It is used to test the null hypothesis that two samples have the same median or, alternatively, whether observations in one sample tend to be larger than observations in the other.

Mann–Whitney test compares all data (xi) belonging to the X group and all data (yi) belonging to the Y group and calculates the probability of xi being greater than yi: P (xi > yi). The null hypothesis states that P (xi > yi) = P (xi < yi) = 1/2 while the alternative hypothesis states that P (xi > yi) ≠ 1/2.

Kolmogorov-Smirnov test

The two-sample Kolmogorov-Smirnov (KS) test was designed as a generic method to test whether two random samples are drawn from the same distribution. The null hypothesis of the KS test is that both distributions are identical. The statistic of the KS test is a distance between the two empirical distributions, computed as the maximum absolute difference between their cumulative curves.

Kruskal-Wallis test

The Kruskal–Wallis test is a non-parametric test to analyse the variance.[ 14 ] It analyses if there is any difference in the median values of three or more independent samples. The data values are ranked in an increasing order, and the rank sums calculated followed by calculation of the test statistic.

Jonckheere test

In contrast to the Kruskal–Wallis test, the Jonckheere test assumes an a priori ordering, which gives it more statistical power than the Kruskal–Wallis test.[ 14 ]

Friedman test

The Friedman test is a non-parametric test for testing the difference between several related samples. It is an alternative to repeated measures ANOVA and is used when the same parameter has been measured under different conditions on the same subjects.[ 13 ]
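All of the tests above are implemented in SciPy. This sketch runs the signed-rank, Mann-Whitney, Kruskal-Wallis and Friedman tests on invented data, with 2.0 taken as an assumed reference median for the one-sample case:

```python
# Common non-parametric tests in SciPy, on invented samples.
from scipy import stats

a = [1.1, 2.3, 1.9, 3.8, 2.6, 1.4]
b = [2.9, 3.1, 4.2, 3.7, 4.5, 3.3]
c = [1.8, 2.2, 2.7, 2.1, 2.9, 2.4]

print(stats.wilcoxon([x - 2.0 for x in a]))  # signed-rank vs. median 2.0
print(stats.mannwhitneyu(a, b))              # two independent samples
print(stats.kruskal(a, b, c))                # three or more independent samples
print(stats.friedmanchisquare(a, b, c))      # repeated measures, same subjects
```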

Tests to analyse the categorical data

Chi-square test, Fisher's exact test and McNemar's test are used to analyse categorical or nominal variables. The Chi-square test compares the frequencies and tests whether the observed data differ significantly from the expected data if there were no differences between groups (i.e., the null hypothesis). It is calculated as the sum of the squared difference between observed ( O ) and expected ( E ) data (or the deviation, d ) divided by the expected data, by the following formula:

$$\chi^2 = \sum \frac{(O - E)^2}{E}$$

A Yates correction factor is used when the sample size is small. Fisher's exact test is used to determine if there are non-random associations between two categorical variables. It does not assume random sampling, and instead of referring a calculated statistic to a sampling distribution, it calculates an exact probability. McNemar's test is used for paired nominal data. It is applied to a 2 × 2 table with paired-dependent samples. It is used to determine whether the row and column frequencies are equal (that is, whether there is ‘marginal homogeneity’). The null hypothesis is that the paired proportions are equal. The Mantel-Haenszel Chi-square test is a multivariate test, as it analyses multiple grouping variables. It stratifies according to the nominated confounding variables and identifies any that affect the primary outcome variable. If the outcome variable is dichotomous, then logistic regression is used.
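A short SciPy sketch of the Chi-square and Fisher's exact tests on an invented 2 × 2 contingency table:

```python
# Chi-square and Fisher's exact tests on an invented 2 x 2 table:
# rows = treatment/control, columns = improved/not improved.
import numpy as np
from scipy import stats

table = np.array([[18, 7],
                  [11, 14]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Fisher's exact test is preferred when expected cell counts are small.
odds_ratio, p_exact = stats.fisher_exact(table)
print(f"Fisher's exact p = {p_exact:.4f}")
```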

SOFTWARES AVAILABLE FOR STATISTICS, SAMPLE SIZE CALCULATION AND POWER ANALYSIS

Numerous statistical software systems are currently available. The commonly used ones are Statistical Package for the Social Sciences (SPSS – manufactured by IBM Corporation), Statistical Analysis System (SAS – developed by SAS Institute, North Carolina, United States of America), R (designed by Ross Ihaka and Robert Gentleman of the R core team), Minitab (developed by Minitab Inc.), Stata (developed by StataCorp) and MS Excel (developed by Microsoft).

There are a number of web resources which are related to statistical power analyses. A few are:

  • StatPages.net – provides links to a number of online power calculators
  • G-Power – provides a downloadable power analysis program that runs under DOS
  • Power analysis for ANOVA designs – an interactive site that calculates power or the sample size needed to attain a given power for one effect in a factorial ANOVA design
  • SPSS makes a program called SamplePower. It gives an output of a complete report on the computer screen, which can be cut and pasted into another document.

It is important that a researcher knows the concepts of the basic statistical methods used for conduct of a research study. This will help to conduct an appropriately well-designed study leading to valid and reliable results. Inappropriate use of statistical techniques may lead to faulty conclusions, inducing errors and undermining the significance of the article. Bad statistics may lead to bad research, and bad research may lead to unethical practice. Hence, an adequate knowledge of statistics and the appropriate use of statistical tests are important. An appropriate knowledge about the basic statistical methods will go a long way in improving the research designs and producing quality medical research which can be utilised for formulating the evidence-based guidelines.

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.


What Is Data Analysis in Research? Why It Matters & What Data Analysts Do


Data analysis in research is the process of uncovering insights from data sets. Data analysts can use their knowledge of statistical techniques, research theories and methods, and research practices to analyze data. They take data and uncover what it’s trying to tell us, whether that’s through charts, graphs, or other visual representations. To analyze data effectively you need a strong background in mathematics and statistics, excellent communication skills, and the ability to identify relevant information.

Read on for more information about data analysis roles in research and what it takes to become one.

What is data analysis in research?


Data analysis is looking at existing data and attempting to draw conclusions from it. It is the process of asking “what does this data show us?” There are many different types of data analysis and a range of methods and tools for analyzing data. You may hear some of these terms as you explore data analysis roles in research – data exploration, data visualization, and data modelling. Data exploration involves exploring and reviewing the data, asking questions like “Does the data exist?” and “Is it valid?”.

Data visualization is the process of creating charts, graphs, and other visual representations of data. The goal of visualization is to help us see and understand data more quickly and easily. Visualizations are powerful and can help us uncover insights from the data that we may have missed without the visual aid. Data modelling involves taking the data and creating a model out of it. Data modelling organises and visualises data to help us understand it better and make sense of it. This will often include creating an equation for the data or creating a statistical model.

Data analysis is important for all research areas, from quantitative surveys to qualitative projects. While researchers often conduct a data analysis at the end of the project, they should be analyzing data alongside their data collection. This allows researchers to monitor their progress and adjust their approach when needed.

The analysis is also important for verifying the quality of the data. What you discover through your analysis can also help you decide whether or not to continue with your project. If you find that your data isn’t capable of answering your research questions, you might decide to end the project early rather than keep collecting data that won’t generalize.

Data science is the intersection between computer science and statistics. It’s been defined as the “conceptual basis for systematic operations on data”. This means that data scientists use their knowledge of statistics and research methods to find insights in data. They use data to find solutions to complex problems, from medical research to business intelligence. Data science involves collecting and exploring data, creating models and algorithms from that data, and using those models to make predictions and find other insights.

Data scientists might focus on the visual representation of data, exploring the data, or creating models and algorithms from the data. Many people in data science roles also work with artificial intelligence and machine learning. They feed the algorithms with data and the algorithms find patterns and make predictions. Data scientists often work with data engineers. These engineers build the systems that the data scientists use to collect and analyze data.

Data analysis techniques can be divided into two categories:

  • Quantitative approach
  • Qualitative approach

Note that, when discussing this subject, the term “data analysis” often refers to statistical techniques.

Qualitative research uses unquantifiable data like unstructured interviews, observations, and case studies. Quantitative research usually relies on generalizable data and statistical modelling, while qualitative research is more focused on finding the “why” behind the data. This means that qualitative data analysis is useful in exploring and making sense of the unstructured data that researchers collect.

Data analysts will take their data and explore it, asking questions like “what’s going on here?” and “what patterns can we see?” They will use data visualization to help readers understand the data and identify patterns. They might create maps, timelines, or other representations of the data. They will use their understanding of the data to create conclusions that help readers understand the data better.

Quantitative research relies on data that can be measured, like survey responses or test results. Quantitative data analysis is useful in drawing conclusions from this data. To do this, data analysts will explore the data, looking at the validity of the data and making sure that it’s reliable. They will then visualize the data, making charts and graphs to make the data more accessible to readers. Finally, they will create an equation or use statistical modelling to understand the data.

A common type of research where you’ll see these three steps is market research. Market researchers will collect data from surveys, focus groups, and other methods. They will then analyze that data and make conclusions from it, like how much consumers are willing to spend on a product or what factors make one product more desirable than another.

Quantitative methods

These methods take a quantitative, numerical approach to analyzing data. They are applied in science and engineering as well as in traditional business, and they can also support qualitative research by quantifying parts of the data.

Statistical methods are used to analyze data in a statistical manner, but data analysis is not limited to statistics or probability. It can also be applied in other areas, such as engineering, business, economics and marketing, and in any field that seeks knowledge about something or someone.

If you are an entrepreneur or an investor who wants to turn a business idea or a company’s value proposition into reality, data analysis techniques are essential. They also help you understand how your company works, what you have done right so far, and what might happen next in terms of growth or profitability. Data analysis is equally useful for evaluating information from external sources, like research papers, that aren’t necessarily objective.

A brief intro to statistics

Statistics is a field of study that collects and analyzes data to describe a population and the relative positions of its members. Statistics can be applied to any group or entity for which data or information exist (even if it’s only numbers), so you can use statistics to make an educated estimate about your company, your customers, your competitors, your competitors’ customers and your peers. You can also use statistics to help you develop a business strategy.

Data analysis methods can help you understand how different groups are performing in a given area, how they might perform differently from one another in the future, and where performance is better or worse than expected.

You will also be able to see what trends are occurring within an industry or population, why some companies may be doing better than others, and what has changed over time by comparing groups with one another and analyzing those differences.

Data mining

Data mining is the use of mathematical techniques to analyze data with the goal of finding patterns and trends. A good example is analyzing the sales patterns for a certain product line: statistical techniques are used to find patterns in the data, which are then analyzed to identify relationships between variables and factors.

Note that data mining techniques are distinct from, and generally more advanced than, traditional statistics and probability.

As a data analyst, you’ll be responsible for analyzing data from different sources. You’ll work with multiple stakeholders and your job will vary depending on what projects you’re working on. You’ll likely work closely with data scientists and researchers on a daily basis, as you’re all analyzing the same data.

Communication is key, so being able to work with others is important. You’ll also likely work with researchers or principal investigators (PIs) to collect and organize data. Your data will be from various sources, from structured to unstructured data like interviews and observations. You’ll take that data and make sense of it, organizing it and visualizing it so readers can understand it better. You’ll use this data to create models and algorithms that make predictions and find other insights. This can include creating equations or mathematical models from the data or taking data and creating a statistical model.

Data analysis is an important part of all types of research. Quantitative researchers analyze the data they collect through surveys and experiments, while qualitative researchers collect unstructured data like interviews and observations. Data analysts take all of this data and turn it into something that other researchers and readers can understand and make use of.

With proper data analysis, researchers can make better decisions, understand their data better, and get a better picture of what’s going on in the world around them. Data analysis is a valuable skill, and many companies hire data analysts and data scientists to help them understand their customers and make better decisions.




What is Data Analysis? An Expert Guide With Examples

What is data analysis?

Data analysis is a comprehensive method of inspecting, cleansing, transforming, and modeling data to discover useful information, draw conclusions, and support decision-making. It is a multifaceted process involving various techniques and methodologies to interpret data from various sources in different formats, both structured and unstructured.

Data analysis is not a mere process; it's a tool that empowers organizations to make informed decisions, predict trends, and improve operational efficiency. It's the backbone of strategic planning in businesses, governments, and other organizations.

Consider the example of a leading e-commerce company. Through data analysis, they can understand their customers' buying behavior, preferences, and patterns. They can then use this information to personalize customer experiences, forecast sales, and optimize marketing strategies, ultimately driving business growth and customer satisfaction.

Learn more about how to become a data analyst in our separate article, which covers everything you need to know about launching your career in this field and the skills you’ll need to master.


The importance of data analysis in today's digital world

In the era of digital transformation, data analysis has become more critical than ever. The explosion of data generated by digital technologies has led to the advent of what we now call 'big data.' This vast amount of data, if analyzed correctly, can provide invaluable insights that can revolutionize businesses.

Data analysis is the key to unlocking the potential of big data. It helps organizations to make sense of this data, turning it into actionable insights. These insights can be used to improve products and services, enhance experiences, streamline operations, and increase profitability.

A good example is the healthcare industry. Through data analysis, healthcare providers can predict disease outbreaks, improve patient care, and make informed decisions about treatment strategies. Similarly, in the finance sector, data analysis can help in risk assessment, fraud detection, and investment decision-making.

The Data Analysis Process: A Step-by-Step Guide

The process of data analysis is a systematic approach that involves several stages, each crucial to ensuring the accuracy and usefulness of the results. Here, we'll walk you through each step, from defining objectives to data storytelling. You can learn more about how businesses analyze data in a separate guide.

[Figure: The data analysis process in a nutshell]

Step 1: Defining objectives and questions

The first step in the data analysis process is to define the objectives and formulate clear, specific questions that your analysis aims to answer. This step is crucial as it sets the direction for the entire process. It involves understanding the problem or situation at hand, identifying the data needed to address it, and defining the metrics or indicators to measure the outcomes.

Step 2: Data collection

Once the objectives and questions are defined, the next step is to collect the relevant data. This can be done through various methods such as surveys, interviews, observations, or extracting from existing databases. The data collected can be quantitative (numerical) or qualitative (non-numerical), depending on the nature of the problem and the questions being asked.

Step 3: Data cleaning

Data cleaning, also known as data cleansing, is a critical step in the data analysis process. It involves checking the data for errors and inconsistencies, and correcting or removing them. This step ensures the quality and reliability of the data, which is crucial for obtaining accurate and meaningful results from the analysis.
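As a concrete illustration, here is a small pandas sketch with an invented dataset and hypothetical column names, handling duplicates, inconsistent categories, impossible values and missing data:

```python
# A minimal data-cleaning sketch on an invented customer table.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "age": [34, None, None, 29, 410],  # a missing value and an impossible age
    "country": ["US", "us", "us", "UK", "US"],
})

clean = (
    raw.drop_duplicates(subset="customer_id")               # remove duplicate rows
       .assign(country=lambda d: d["country"].str.upper())  # normalize categories
)
clean = clean[clean["age"].between(0, 120) | clean["age"].isna()].copy()
clean["age"] = clean["age"].fillna(clean["age"].median())   # impute missing ages
print(clean)
```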

Step 4: Data analysis

Once the data is cleaned, it's time for the actual analysis. This involves applying statistical or mathematical techniques to the data to discover patterns, relationships, or trends. There are various tools and software available for this purpose, such as Python, R, Excel, and specialized software like SPSS and SAS.

Step 5: Data interpretation and visualization

After the data is analyzed, the next step is to interpret the results and visualize them in a way that is easy to understand. This could involve creating charts, graphs, or other visual representations of the data. Data visualization helps to make complex data more understandable and provides a clear picture of the findings.
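For example, a few lines of Matplotlib turn a column of numbers into a chart; the monthly figures below are invented:

```python
# A minimal visualization sketch: monthly sales as a bar chart.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 135, 128, 150]

plt.bar(months, sales)
plt.title("Monthly sales")
plt.ylabel("Units sold")
plt.show()
```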

Step 6: Data storytelling

The final step in the data analysis process is data storytelling. This involves presenting the findings of the analysis in a narrative form that is engaging and easy to understand. Data storytelling is crucial for communicating the results to non-technical audiences and for making data-driven decisions.

The Types of Data Analysis

Data analysis can be categorized into four main types, each serving a unique purpose and providing different insights. These are descriptive, diagnostic, predictive, and prescriptive analyses.

[Figure: The four types of analytics]

Descriptive analysis

Descriptive analysis, as the name suggests, describes or summarizes raw data and makes it interpretable. It involves analyzing historical data to understand what has happened in the past.

This type of analysis is used to identify patterns and trends over time.

For example, a business might use descriptive analysis to understand the average monthly sales for the past year.

Diagnostic analysis

Diagnostic analysis goes a step further than descriptive analysis by determining why something happened. It involves more detailed data exploration and comparing different data sets to understand the cause of a particular outcome.

For instance, if a company's sales dropped in a particular month, diagnostic analysis could be used to find out why.

Predictive analysis

Predictive analysis uses statistical models and forecasting techniques to understand the future. It involves using data from the past to predict what could happen in the future. This type of analysis is often used in risk assessment, marketing, and sales forecasting.

For example, a company might use predictive analysis to forecast the next quarter's sales based on historical data.

Prescriptive analysis

Prescriptive analysis is the most advanced type of data analysis. It not only predicts future outcomes but also suggests actions to benefit from these predictions. It uses sophisticated tools and technologies like machine learning and artificial intelligence to recommend decisions.

For example, a prescriptive analysis might suggest the best marketing strategies to increase future sales.

Data Analysis Techniques

There are numerous techniques used in data analysis, each with its unique purpose and application. Here, we will discuss some of the most commonly used techniques, including exploratory analysis, regression analysis, Monte Carlo simulation, factor analysis, cohort analysis, cluster analysis, time series analysis, and sentiment analysis.

Exploratory analysis

Exploratory analysis is used to understand the main characteristics of a data set. It is often used at the beginning of a data analysis process to summarize the main aspects of the data, check for missing data, and test assumptions. This technique involves visual methods such as scatter plots, histograms, and box plots.

You can learn more about exploratory data analysis with our course, covering how to explore, visualize, and extract insights from data using Python.

Regression analysis

Regression analysis is a statistical method used to understand the relationship between a dependent variable and one or more independent variables. It is commonly used for forecasting, time series modeling, and finding the causal effect relationships between variables.

We have a tutorial exploring the essentials of linear regression, which is one of the most widely used regression algorithms in areas like machine learning.

[Figure: Linear and logistic regression]

Factor analysis

Factor analysis is a technique used to reduce a large number of variables into fewer factors. The factors are constructed in such a way that they capture the maximum possible information from the original variables. This technique is often used in market research, customer segmentation, and image recognition.

Learn more about factor analysis in R with our course, which explores latent variables, such as personality, using exploratory and confirmatory factor analyses.

Monte Carlo simulation

Monte Carlo simulation is a technique that uses probability distributions and random sampling to estimate numerical results. It is often used in risk analysis and decision-making where there is significant uncertainty.

We have a tutorial that explores Monte Carlo methods in R , as well as a course on Monte Carlo simulations in Python , which can estimate a range of outcomes for uncertain events.

[Figure: Example of a Monte Carlo simulation]
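The core idea can be sketched in a few lines of NumPy. The task-duration distributions below are assumed purely for illustration:

```python
# Monte Carlo sketch: estimate the probability that a three-task project
# finishes within 30 days, given assumed duration distributions.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
task_a = rng.normal(10, 2, n)         # days, assumed normal
task_b = rng.triangular(5, 8, 15, n)  # days, assumed triangular
task_c = rng.uniform(4, 9, n)         # days, assumed uniform

total = task_a + task_b + task_c
print("P(total <= 30 days) ~", (total <= 30).mean())
```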

Cluster analysis

Cluster analysis is a technique used to group a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to those in other groups. It is often used in market segmentation, image segmentation, and recommendation systems.

You can explore a range of clustering techniques, including hierarchical clustering and k-means clustering, in our Cluster Analysis in R course.
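As a minimal Python sketch of the same idea (the linked course uses R), the following scikit-learn snippet clusters six invented customer records by annual spend and order count:

```python
# k-means clustering on invented customer data: [annual spend, orders].
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[200, 2], [220, 3], [210, 2],
              [950, 15], [1000, 18], [980, 16]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("labels:", kmeans.labels_)
print("centers:", kmeans.cluster_centers_)
```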

Cohort analysis

Cohort analysis is a subset of behavioral analytics that takes data from a given dataset and groups it into related groups for analysis. These related groups, or cohorts, usually share common characteristics within a defined time span. This technique is often used in marketing, user engagement, and customer lifecycle analysis.

Our course, Customer Segmentation in Python , explores a range of techniques for segmenting and analyzing customer data, including cohort analysis.

[Figure: Example of cohort analysis]

Time series analysis

Time series analysis is a statistical technique that deals with time series data, or trend analysis. It is used to analyze the sequence of data points to extract meaningful statistics and other characteristics of the data. This technique is often used in sales forecasting, economic forecasting, and weather forecasting.

Our Time Series with Python skill track takes you through how to manipulate and analyze time series data, working with a variety of Python libraries.

Sentiment analysis

Sentiment analysis, also known as opinion mining, uses natural language processing, text analysis, and computational linguistics to identify and extract subjective information from source materials. It is often used in social media monitoring, brand monitoring, and understanding customer feedback.

To get familiar with sentiment analysis in Python , you can take our online course, which will teach you how to perform an end-to-end sentiment analysis.

Data Analysis Tools

In the realm of data analysis, various tools are available that cater to different needs, complexities, and levels of expertise. These tools range from programming languages like Python and R to visualization software like Power BI and Tableau. Let's delve into some of these tools.

Python is a high-level, general-purpose programming language that has become a favorite among data analysts and data scientists. Its simplicity and readability, coupled with a wide range of libraries like pandas, NumPy, and Matplotlib, make it an excellent tool for data analysis and data visualization.

Resources to get you started

  • You can start learning Python today with our Python Fundamentals skill track, which covers all the foundational skills you need to understand the language.
  • You can also take our Data Analyst with Python career track to start your journey to becoming a data analyst.
  • Check out our Python for beginners cheat sheet as a handy reference guide.

R is a programming language and free software environment specifically designed for statistical computing and graphics. It is widely used among statisticians and data miners for developing statistical software and data analysis. R provides a wide variety of statistical and graphical techniques, including linear and nonlinear modeling, classical statistical tests, time-series analysis, and more.

  • Our R Programming skill track will introduce you to R and help you develop the skills you’ll need to start coding in R.
  • With the Data Analyst with R career track, you’ll gain the skills you need to start your journey to becoming a data analyst.
  • Our Getting Started with R cheat sheet helps give an overview of how to start learning R Programming.

SQL (Structured Query Language) is a standard language for managing and manipulating databases. It is used to retrieve and manipulate data stored in relational databases. SQL is essential for tasks that involve data management or manipulation within databases.

  • To get familiar with SQL, consider taking our SQL Fundamentals skill track, where you’ll learn how to interact with and query your data.
  • SQL for Business Analysts will boost your business SQL skills.
  • Our SQL Basics cheat sheet covers a list of functions for querying data, filtering data, aggregation, and more.

Power BI is a business analytics tool developed by Microsoft. It provides interactive visualizations with self-service business intelligence capabilities. Power BI is used to transform raw data into meaningful insights through easy-to-understand dashboards and reports.

  • Explore the power of Power BI with our Power BI Fundamentals skill track, where you’ll learn to get the most from the business intelligence tool.
  • With Exploratory Data Analysis in Power BI you’ll learn how to enhance your reports with EDA.
  • We have a Power BI cheat sheet which covers many of the basics you’ll need to get started.

Tableau is a powerful data visualization tool used in the Business Intelligence industry. It allows you to create interactive and shareable dashboards, which depict trends, variations, and density of the data in the form of charts and graphs.

  • The Tableau Fundamentals skill track will introduce you to the business intelligence tool and how you can use it to clean, analyze, and visualize data.
  • Analyzing Data in Tableau will give you some of the advanced skills needed to improve your analytics and visualizations.
  • Check out our Tableau cheat sheet, which runs you through the essentials of how to get started using the tool.

Microsoft Excel is one of the most widely used tools for data analysis. It offers a range of features for data manipulation, statistical analysis, and visualization. Excel's simplicity and versatility make it a great tool for both simple and complex data analysis tasks.

  • Check out our Data Analysis in Excel course to build functional skills in Excel.
  • For spreadsheet skills in general, check out Marketing Analytics in Spreadsheets.
  • The Excel Basics cheat sheet covers many of the basic formulas and operations you’ll need to make a start.

Understanding the Impact of Data Analysis

Data analysis, whether on a small or large scale, can have a profound impact on business performance. It can drive significant changes, leading to improved efficiency, increased profitability, and a deeper understanding of market trends and customer behavior.

Informed decision-making

Data analysis allows businesses to make informed decisions based on facts, figures, and trends, rather than relying on guesswork or intuition. It provides a solid foundation for strategic planning and policy-making, ensuring that resources are allocated effectively and that efforts are directed towards areas that will yield the most benefit.

Impact on small businesses

For small businesses, even simple data analysis can lead to significant improvements. For example, analyzing sales data can help identify which products are performing well and which are not. This information can then be used to adjust marketing strategies, pricing, and inventory management, leading to increased sales and profitability.

Impact on large businesses

For larger businesses, the impact of data analysis can be even more profound. Big data analysis can uncover complex patterns and trends that would be impossible to detect otherwise. This can lead to breakthrough insights, driving innovation and giving the business a competitive edge.

For example, a large retailer might use data analysis to optimize its supply chain, reducing costs and improving efficiency. Or a tech company might use data analysis to understand user behavior, leading to improved product design and better user engagement.

The critical role of data analysis

In today's data-driven world, the ability to analyze and interpret data is a critical skill. Businesses that can harness the power of data analysis are better positioned to adapt to changing market conditions, meet customer needs, and drive growth and profitability.


Top Careers in Data Analysis in 2023

In the era of Big Data, careers in data analysis are flourishing. With the increasing demand for data-driven insights, these professions offer promising prospects. Here, we will discuss some of the top careers in data analysis in 2023, referring to our full guide on the top ten analytics careers.

1. Data scientist

Data scientists are the detectives of the data world, uncovering patterns, insights, and trends from vast amounts of information. They use a combination of programming, statistical skills, and machine learning to make sense of complex data sets. Data scientists not only analyze data but also use their insights to influence strategic decisions within their organization.

We’ve got a complete guide on how to become a data scientist, which outlines everything you need to know about starting your career in the industry.

Key skills:

  • Proficiency in programming languages like Python or R
  • Strong knowledge of statistics and probability
  • Familiarity with machine learning algorithms
  • Data wrangling and data cleaning skills
  • Ability to communicate complex data insights in a clear and understandable manner

Essential tools:

  • Jupyter Notebook
  • Machine learning libraries like Scikit-learn, TensorFlow
  • Data visualization libraries like Matplotlib, Seaborn

2. Business intelligence analyst

Business intelligence analysts are responsible for providing a clear picture of a business's performance by analyzing data related to market trends, business processes, and industry competition. They use tools and software to convert complex data into digestible reports and dashboards, helping decision-makers to understand the business's position and make informed decisions.

Key skills:

  • Strong analytical skills
  • Proficiency in SQL and other database technologies
  • Understanding of data warehousing and ETL processes
  • Ability to create clear visualizations and reports
  • Business acumen

Essential tools:

  • Power BI, Tableau

3. Data engineer

Data engineers are the builders and maintainers of the data pipeline. They design, construct, install, test, and maintain highly scalable data management systems. They also ensure that data is clean, reliable, and preprocessed for data scientists to perform analysis.

Read more about what a data engineer does and how you can become a data engineer in our separate guide.

Key skills:

  • Proficiency in SQL and NoSQL databases
  • Knowledge of distributed systems and data architecture
  • Familiarity with ETL tools and processes
  • Programming skills, particularly in Python and Java
  • Understanding of machine learning algorithms

Essential tools:

  • Hadoop, Spark
  • Python, Java

4. Business analyst

Business analysts are the bridge between IT and business stakeholders. They use data to assess processes, determine requirements, and deliver data-driven recommendations and reports to executives and stakeholders. They are involved in strategic planning, business model analysis, process design, and system analysis.

Key skills:

  • Understanding of business processes and strategies
  • Proficiency in SQL
  • Ability to communicate effectively with both IT and business stakeholders
  • Project management skills

A table outlining different data analysis careers:

| Career | Key skills | Essential tools |
| --- | --- | --- |
| Data scientist | Proficiency in programming, strong statistical knowledge, familiarity with machine learning, data wrangling skills, and effective communication | Python, R, SQL, Scikit-learn, TensorFlow, Matplotlib, Seaborn |
| Business intelligence analyst | Strong analytical skills, proficiency in SQL, understanding of data warehousing and ETL, ability to create visualizations and reports, and business acumen | SQL, Power BI, Tableau, Excel, Python |
| Data engineer | Proficiency in SQL and NoSQL, knowledge of distributed systems and data architecture, familiarity with ETL, programming skills, and understanding of machine learning | SQL, NoSQL, Hadoop, Spark, Python, Java, ETL tools |
| Business analyst | Strong analytical skills, understanding of business processes, proficiency in SQL, effective communication, and project management skills | SQL, Excel, Power BI, Tableau, Python |

How to Get Started with Data Analysis

Embarking on your journey into data analysis might seem daunting at first, but with the right resources and guidance, you can develop the necessary skills and knowledge. Here are some steps to help you get started, focusing on the resources available at DataCamp.

"To thrive in data analysis, you must build a strong foundation of knowledge, sharpen practical skills, and accumulate valuable experience. Start with statistics, mathematics, and programming and tackle real-world projects. Then, gain domain expertise, and connect with professionals in the field. Combine expertise, skills, and experience for a successful data analysis career."

Richie Cotton, Data Evangelist at DataCamp

Understand the basics

Before diving into data analysis, it's important to understand the basics. This includes familiarizing yourself with statistical concepts, data types, and data structures. DataCamp's Introduction to Data Science in Python or Introduction to Data Science in R courses are great starting points.

Learn a programming language

Data analysis requires proficiency in at least one programming language. Python and R are among the most popular choices due to their versatility and the vast array of libraries they offer for data analysis. We offer comprehensive learning paths for both Python and R.

Master data manipulation and visualization

Data manipulation and visualization are key components of data analysis. They allow you to clean, transform, and visualize your data, making it easier to understand and analyze. Courses like Data Manipulation with pandas or Data Visualization with ggplot2 can help you develop these skills.

Dive into specific data analysis techniques

Once you've mastered the basics, you can delve into specific data analysis techniques like regression analysis, time series analysis, or machine learning. We offer a wide range of courses across many topics, allowing you to specialize based on your interests and career goals.

Practice, Practice, Practice

The key to mastering data analysis is practice. DataCamp's practice mode and projects provide hands-on experience with real-world data, helping you consolidate your learning and apply your skills. You can find a list of 20 data analytics projects for all levels to give you some inspiration.

Remember, learning data analysis is a journey. It's okay to start small and gradually build up your skills over time. With patience, persistence, and the right resources, you'll be well on your way to becoming a proficient data analyst.


Final thoughts

In the era of digital transformation, data analysis has emerged as a crucial skill, regardless of your field or industry. The ability to make sense of data, to extract insights, and to use those insights to make informed decisions can give you a significant advantage in today's data-driven world.

Whether you're a marketer looking to understand customer behavior, a healthcare professional aiming to improve patient outcomes, or a business leader seeking to drive growth and profitability, data analysis can provide the insights you need to succeed.

Remember, data analysis is not just about numbers and statistics. It's about asking the right questions, being curious about patterns and trends, and having the courage to make data-driven decisions. It's about telling a story with data, a story that can influence strategies, change perspectives, and drive innovation.

So, we encourage you to apply your understanding of data analysis in your respective fields. Harness the power of data to uncover insights, make informed decisions, and drive success. The world of data is at your fingertips, waiting to be explored.

FAQs

What is data analysis?

Data analysis is a comprehensive method that involves inspecting, cleansing, transforming, and modeling data to discover useful information, make conclusions, and support decision-making. It's a process that empowers organizations to make informed decisions, predict trends, and improve operational efficiency.

What are the steps in the data analysis process?

The data analysis process involves several steps, including defining objectives and questions, data collection, data cleaning, data analysis, data interpretation and visualization, and data storytelling. Each step is crucial to ensuring the accuracy and usefulness of the results.

What are the different types of data analysis?

Data analysis can be categorized into four types: descriptive, diagnostic, predictive, and prescriptive analysis. Descriptive analysis summarizes raw data, diagnostic analysis determines why something happened, predictive analysis uses past data to predict the future, and prescriptive analysis suggests actions based on predictions.

What are some commonly used data analysis techniques?

There are various data analysis techniques, including exploratory analysis, regression analysis, Monte Carlo simulation, factor analysis, cohort analysis, cluster analysis, time series analysis, and sentiment analysis. Each has its unique purpose and application in interpreting data.

What are some of the tools used in data analysis?

Data analysis typically utilizes tools such as Python, R, SQL for programming, and Power BI, Tableau, and Excel for visualization and data management.

How can I start learning data analysis?

You can start learning data analysis by understanding the basics of statistical concepts, data types, and structures. Then learn a programming language like Python or R, master data manipulation and visualization, and delve into specific data analysis techniques.

How can I become a data analyst?

Becoming a Data Analyst requires a strong understanding of statistical techniques and data analysis tools. Mastery of software such as Python, R, Excel, and specialized software like SPSS and SAS is typically necessary. Read our full guide on how to become a Data Analyst and consider our Data Analyst Certification to get noticed by recruiters.

Matt Crabtree is a writer and content editor in the edtech space, committed to exploring data trends and enthusiastic about learning data science.

Adel Nehme is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.



Analysis is a type of primary research that involves finding and interpreting patterns in data, classifying those patterns, and generalizing the results. It is useful when looking at actions, events, or occurrences in different texts, media, or publications. Analysis can usually be done without considering most of the ethical issues discussed in the overview, as you are not working with people but rather publicly accessible documents. Analysis can be done on new documents or performed on raw data that you yourself have collected.

Here are several examples of analysis:

  • Recording commercials on three major television networks and analyzing race and gender within the commercials to discover some conclusion.
  • Analyzing the historical trends in public laws by looking at the records at a local courthouse.
  • Analyzing topics of discussion in chat rooms for patterns based on gender and age.

Analysis research involves several steps (a minimal counting sketch follows the list):

  • Finding and collecting documents.
  • Specifying criteria or patterns that you are looking for.
  • Analyzing documents for patterns, noting number of occurrences or other factors.
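
As a minimal sketch of that last counting step in Python (the transcripts/ folder and the category keywords are hypothetical):

```python
from collections import Counter
from pathlib import Path

# Count occurrences of predefined keyword categories across documents
categories = {"gender": ["he ", "she "], "age": ["teen", "adult"]}
counts = Counter()

for doc in Path("transcripts").glob("*.txt"):
    text = doc.read_text(encoding="utf-8").lower()
    for category, keywords in categories.items():
        counts[category] += sum(text.count(kw) for kw in keywords)

print(counts)  # occurrences per category across all documents
```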


Comprehensive guidelines for appropriate statistical analysis methods in research

Affiliations

  • 1 Department of Anesthesiology and Pain Medicine, Daegu Catholic University School of Medicine, Daegu, Korea.
  • 2 Department of Medical Statistics, Daegu Catholic University School of Medicine, Daegu, Korea.
  • PMID: 39210669
  • DOI: 10.4097/kja.24016

Background: The selection of statistical analysis methods in research is a critical and nuanced task that requires a scientific and rational approach. Aligning the chosen method with the specifics of the research design and hypothesis is paramount, as it can significantly impact the reliability and quality of the research outcomes.

Methods: This study explores a comprehensive guideline for systematically choosing appropriate statistical analysis methods, with a particular focus on the statistical hypothesis testing stage and categorization of variables. By providing a detailed examination of these aspects, this study aims to provide researchers with a solid foundation for informed methodological decision making. Moving beyond theoretical considerations, this study delves into the practical realm by examining the null and alternative hypotheses tailored to specific statistical methods of analysis. The dynamic relationship between these hypotheses and statistical methods is thoroughly explored, and a carefully crafted flowchart for selecting the statistical analysis method is proposed.

Results: Based on the flowchart, we examined whether exemplary research papers appropriately used statistical methods that align with the variables chosen and hypotheses built for the research. This iterative process ensures the adaptability and relevance of this flowchart across diverse research contexts, contributing to both theoretical insights and tangible tools for methodological decision-making.

Conclusions: This study emphasizes the importance of a scientific and rational approach for the selection of statistical analysis methods. By providing comprehensive guidelines, insights into the null and alternative hypotheses, and a practical flowchart, this study aims to empower researchers and enhance the overall quality and reliability of scientific studies.

Keywords: Algorithms; Biostatistics; Data analysis; Guideline; Statistical data interpretation; Statistical model.


Why we need a revolution in clinical research


Masud Husain, Why we need a revolution in clinical research, Brain, Volume 147, Issue 9, September 2024, Pages 2897–2898, https://doi.org/10.1093/brain/awae265


We are at a pivotal moment for clinical research. In the UK, the system is fundamentally broken as recent reports have alluded to. 1 , 2 In other parts of the world too there are similar issues that are, at the very least, slowing down innovation and research. There are many factors that have been identified as contributing to this sad state of affairs in the UK. One important issue that has not attracted so much attention recently—though it was the subject of a report 3 in 2020—is the relationship between higher educational institutions (mostly universities) and healthcare providers (largely the National Health Service, NHS).

The vast majority of research activity in the UK occurs within the higher education sector, while most patient-related research such as clinical trials relies on NHS infrastructure. And this is where there is a massive disconnect. These systems are huge, cumbersome behemoths, each with its own lumbering local administration focused on aims that are not aligned with the mission of producing rapid results in clinical research. In the university sector, the priorities of leaders are to keep the system financially afloat and minimize potential legal risks. Many institutions in the UK are on the cusp of fiscal ruin and so require grant and other research income to subsidize their existence. In the NHS, on the other hand, the aim is to cut waiting lists which, post pandemic and doctors’ industrial action, are now very lengthy, and to provide adequate service delivery. Making healthcare research effective and efficient is the last thing on the minds of the leadership of either sector.

But who can blame them? Surely, it’s difficult enough to run either a university or an NHS hospital? Indeed, this seems sufficient explanation—an adequate excuse—for some leaders of both these types of institution for the huge delays in getting any useful research done. Many teams are now waiting over a year to get their grant-funded research off the ground. Remarkably, some trials are failing because they never start, several years after the funding has been awarded. Material or data transfer agreements between universities; slothful legal reviews of contracts and agreements with third parties; calculating overheads to be charged; multiple reviews of research protocols by R&D departments; dragging of feet over costings independently for the university and hospital; sluggish reviews by research services; signing off contracts with the NHS; obtaining honorary contracts for non-clinical personnel; and many other procedures may take months, if not years, to complete. The system is both Byzantine and exasperating to navigate. No wonder that pharmaceutical companies are balking at initiating trials in the UK, their gaze turning instead to countries where they are more serious about getting things done sooner, not later. 1

So how do we get out of this mess? Given the narrow goals that the leaderships of universities and NHS hospitals have, we cannot expect a great deal more from them on this front— unless they are compelled to make changes. In the UK, when the National Institute for Health and Care Research (NIHR) was formed in 2006, many of us were under the impression that its mission really was to ‘create a health research system in which the NHS supports outstanding individuals, working in world-class facilities, conducting leading-edge research focused on the needs of patients and the public’. 4 Clearly though this just hasn’t happened. Otherwise, why the need for recent reports? 1 , 2

One of the key reasons for this failure (we cannot refer to it as anything else) is the simple fact that universities and NHS infrastructure are not joined up. Many pretend to be, but it is obvious to anyone who works at even the best centres in the UK that this is a sham. At Oxford, one of the hospital networks calls itself the Oxford University Health NHS Foundation Trust, but there really is very little to suggest why ‘University’ should be in its title. The levels of duplication of work and contracting between the university and the hospital make a mockery of the concept of seamless integration between these institutions. It is the same elsewhere too. The result is a growing duopoly of administrations that negotiate with each other, waste time and slow the pace of progress. Even when a research proposal has been approved by a ‘joint’ R&D unit, there needs to be a costings agreement between university and NHS trust.

From a national perspective this makes little sense, either economically or for governance. We are in the bizarre situation where two sets of institution—universities and hospitals—both largely funded by taxpayers are independently setting their (growing) administrative staffs to scrutinize research protocols or haggle over costings on projects that are mostly funded by government or charities. It is even worse for multicentre studies when many different universities and NHS trusts each want a share of the pie. This has a hidden cost in numbers of people employed, researchers’ time dealing with paperwork, and an opportunity cost in terms of time taken to get studies off the ground. Furthermore, there is no incentive to do things better or faster. There is simply a parochial incentive to make money locally and mitigate risks locally . Until the day that universities and hospitals associated with them are compelled to work as one integrated unit, there is very little hope for change. We will be left in the current quagmire of structural indolence. And that is why we need a revolution. Writing more reports on the matter will not help.

It is interesting to reflect on the fact that it was also radical change that was necessary to bring medicine into the modern era—to make it based on observation, clinical examination and the scientific method—in the first place. From the confusing and sometimes bizarre practices that characterized medicine in the 18th century, there emerged a new way of doing things which came about within one generation and in perhaps one of the least advanced places in Europe for clinical science at that time: Paris. From being a backwater, the ‘Paris School of Medicine’ instigated such dramatic change that within 50 years it became the leading international centre for clinical practice, attracting physicians from around the world to learn about the ‘new medicine’. 5

The rise of scientific medicine in Paris depended on systematic correlation of physical examination findings on hundreds of patients with pathological findings at post-mortem; flexibility to revise diagnoses on the basis of these assessments; deployment of statistics, including data on mortality; and most of all on conducting this work and teaching it to medical students in hospitals. 5 What made this possible was reform. Before the French Revolution, control of medical care rested largely with the Church. With the reform of medical education that came after the Revolution, hospitals were centralized and their administration was overseen by the state. Fundamental changes in the way in which faculties of medicine were organized in France led the way for dramatic new ways of learning from patients and disseminating knowledge to clinicians. Medical education was transformed but it needed the convulsive change of a Revolution to make this happen. 6 It required top-down edicts to bring about change because there was no incentive for the old institutions to make those changes themselves.

We are now confronted with a similar problem. The old institutions—universities and hospitals—are used to doing things their way. There is no incentive for them to change unless the state or its organs of power intervene. In the UK, NIHR funds now support Biomedical Research Centres (BRCs) which supposedly cross universities and NHS hospital trusts, but in truth the fiscal support helps to prop up university research personnel with very little going to the NHS. Most importantly, the NIHR has not insisted on BRCs having joined up (i.e. single) integrated, university-NHS systems in place, or for seamless national transfer of approvals across sites without the need for new sets of contractual agreements. Nothing fundamental will change unless it or the new government compels this change. The pursuit of national interests requires national leadership to intervene; we can't rely on local, devolved institutions to make the obvious decisions that are required. This is why we need a revolution in healthcare research.

References

1. O’Shaughnessy J. Commercial clinical trials in the UK: The Lord O’Shaughnessy review - final report. Accessed 13 August 2024. https://www.gov.uk/government/publications/commercial-clinical-trials-in-the-uk-the-lord-oshaughnessy-review/commercial-clinical-trials-in-the-uk-the-lord-oshaughnessy-review-final-report
2. The Academy of Medical Sciences. Future-proofing UK health research: A people-centred, coordinated approach. https://acmedsci.ac.uk/file-download/23875189
3. The Academy of Medical Sciences. Transforming health through innovation: integrating the NHS and academia. https://acmedsci.ac.uk/file-download/23932583
4. Department of Health and Social Care. Best research for best health: a new national health research strategy. https://www.gov.uk/government/publications/best-research-for-best-health-a-new-national-health-research-strategy
5. La Berge A, Hannaway C. Paris medicine: Perspectives past and present. Clio Med. 1998;50:1–69.
6. Ackerknecht EH. Medicine at the Paris hospital, 1794–1848. Johns Hopkins University Press; 1967.



Speaker 1: I am obsessed with how you can use AI tools to make research more effective and today I'm going to share with you the epic prompts that I have found that you can use to make it easy to analyze your data, look for gaps in the literature as well as kind of synthesize your thoughts and ideas around certain topics and fields. Here we go, this is how we start. So to set up first of all what you need to do is get yourself the Edge browser. I know, I know it's not everyone's favorite thing but it's got Bing automatically kind of included in the top here so all you have to do is open up the Edge browser and then click on Bing at the top and then what you've got is this chat area and this is where you can ask the AI anything about anything that's in the window. The reason this is great is because in the past I've been copying and pasting text from papers into ChatGPT and that just doesn't work when you have got a load of text information. Sometimes you reach the limits as to how much text you can paste in. Here those limits are completely removed as far as I can tell. So we go in here and you can see here I've got different types of papers so here I've got a review article and then I've got one of my papers and then I've got my thesis as well. So here we're going to look at how you can actually use all these different prompts for finding out things about a field, a paper, a thesis that would take you hours to synthesize on your own. So here are all of the different prompts that I have found are particularly powerful in this chat area. So we've got summarizing and analysis, potential research questions and gaps and methodology and techniques. So we'll be going through each of those and I'll share those down in the chat below if you want to use them yourself. But importantly to get this up and running you have to first of all give chat access to whatever's in this window. The way you do that is you go up to these dots up here and you go on notification app security and you make sure that page context is open. If that's not open the chat bot cannot read what's actually in the window. I've tried it. Okay so here we go over there perfect we're all good. So let's go back to the different papers and so let's just start asking it questions. Let's ask about summarizing the stuff. So I'm going to take one of these identify the key findings and implications of this research paper and what it will go away and do is actually just start looking through the HTML documents and looking for the key findings and implications and I think that this is just a fantastic way to query, research, find out new things and save you hours and hours of work. So as you can see here it's coming up with different types of bullet-pointed things that you need to know about this paper and I think this is fantastic and you can query and go as deep as you want with these prompts but these are great starting prompts if you need to just find out what something is about immediately. One thing I love about this is it's not just using HTML from a web page it can also use PDF documents. Now these PDF documents are ones that I've got from my computer, I've got one of my papers and I've also got my thesis from 2010. Isn't that a long time ago? I can use exactly the same prompts on these things. So if I take another prompt from my summarizing and analysis prompts I want to know the strengths and weaknesses of this methodology. Let's see what it says about my research paper. So, analyze the strengths and weaknesses of this methodology.
Boom. Let's see what it says. Now I've got to say that is pretty good. So we've got the strengths which is you know why I got the paper published in the first place which is great but the weaknesses I think are a great place to start if you're looking at building on top of someone else's research. So the materials, the process involves multiple steps and materials so it's very complex to produce I agree and then performance of the result in planar electrode may depend on the quality and consistency of the materials used. I completely agree on that as well. So there we are now we've got two avenues that we could take this paper and so using these prompts we've just sort of like really shortcut any of the hard lifting we need to do. Now obviously you need to go back in and have a look to see if that makes sense for this paper but it really really works. Now let's look at how you can actually use prompts to look at research questions and gaps. Alright let's start with my thesis. So I'm going to ask it to give me any potential research questions based on this thesis. So let's say list potential research questions related to this thesis to extend the work and let's just see what it says and because it's got access to all 256 pages we're going to start seeing it pump out some information. Oh I'm sorry I don't have information about the thesis you're referring to. Why don't you have access? You had access earlier when I was asking you. Let's go check in here. Ah that's because I turned off page context that I told you to turn on in the beginning. So let's turn that back on. Let's get rid of this and let's ask it again. I'm an idiot. Alright it's kicking out some pretty interesting extended research questions. Efficiency improved. Well that's what we're all trying to do. Long-term stability. That's great. My thesis didn't actually look at that. How do different fabrication methods and materials affect the performance? Can nanoparticle-based organic devices be scaled up to commercial production? And how do environmental factors such as temperature and humidity affect the performance? All of these are really great extension questions that I know the research group I worked on are working towards right now. So fantastic use of Bing's integrated AI chat. I think that there's a really great way to use chat on papers and that's to extract recipes for what you need to do if you want to reproduce someone's work or extend on it. So let's head over to my paper. Let's clear this and let's just ask, let's ask it to create a recipe for the process used in this paper and let's see what happens. It's pretty crazy that it even says where to purchase like the silver nanowires from and the single walled carbon nanotubes. So it is producing a recipe for me to follow and it's doing it perfectly. So I could use this to not only then start my research process but also then to see what really doesn't work and try to find the gaps in the research based on the reproducibility of someone else's work. It really is doing a fantastic job. This is a really complicated paper with multiple steps and it's still going and kicking it out. So overall I really feel like this is the way of the future for asking questions about papers. Unfortunately it's in everyone's least favorite browser, arguably Microsoft Edge, but if this gets put into other browsers I think we'll be in a whole new world of productivity for researchers when dealing with the literature, with theses, with massive documents that it just makes it so much easier.
Incredible. So there we have it, that's how you can use awesome epic prompts on research articles, on review articles, on theses, on anything that you want to know about in the Edge browser with Bing Chat. It is really, really powerful and it beats copying and pasting all of that text into the ChatGPT box because sometimes you run out of space and it doesn't process it. This is a way of analyzing massive documents and I really feel like it's a huge game-changer. Let me know in the comments what you would add and if there's any other tricks that you know about and there's more ways you can engage with me. The first way is to sign up to my newsletter and when you sign up you'll get five emails over about two weeks, everything from the tools I use, the podcasts I've been on, how to write the perfect abstract and more. It's exclusive content only available for free so go check it out now and also go check out academiainsider.com that's my new project where I've got my ebooks, the Ultimate Academic Writing Toolkit as well as the PhD Survival Guide. I've got a brand new resource pack for applying for a PhD that has got all of the tools and tricks you need for making sure your PhD application is super strong and we've got the forum and the blog growing out there as well and it's all there to make sure that your PhD works for you. Alright then, I'll see you in the next video.



NSF award summary (field labels inferred from the standard NSF award-page layout):

  • Initial Amendment Date: August 20, 2024
  • Latest Amendment Date: August 20, 2024
  • Award Number: 2349238
  • Award Instrument: Standard Grant
  • Program Manager: Patricia Simmons, [email protected], (703) 292-5143
  • Organization: EEC, Div Of Engineering Education and Centers; ENG, Directorate For Engineering
  • Start Date: November 1, 2024
  • End Date: October 31, 2027 (Estimated)
  • Total Intended Award Amount: $464,923.00
  • Awarded Amount to Date: $464,923.00
  • Investigator: Imtiaz Andreescu
  • Awardee/Performance Address: 8 CLARKSON AVE, POTSDAM, NY, US 13676-1401, (315) 268-6475
  • Programs: SSA-Special Studies & Analysis; Partnership Funding from SRC
  • Program Element Codes: 4900, 4900
  • CFDA Number: 47.041


As Robert F. Kennedy Jr. exits, a look at who supported him in the 2024 presidential race

Then-presidential candidate Robert F. Kennedy Jr. spoke at the Libertarian National Convention in Washington, D.C., on May 24, 2024. (Kevin Dietsch/Getty Images)

Robert F. Kennedy Jr. announced he would suspend his presidential campaign on Friday – adding yet another shakeup to the 2024 contest.

Pew Research Center conducted this analysis to better understand voters who said they planned to support Robert F. Kennedy Jr. in the 2024 presidential election. For this analysis, we surveyed 9,201 adults – including 7,569 registered voters – from Aug. 5 to 11, 2024.

Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), a group of people recruited through national, random sampling of residential addresses who have agreed to take surveys regularly. This kind of recruitment gives nearly all U.S. adults a chance of selection. Surveys were conducted either online or by telephone with a live interviewer. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other factors. Read more about the ATP’s methodology.

Here are the questions used for this analysis, the topline and the survey methodology.

Charts showing that, prior to his departure from the presidential race, Robert F. Kennedy Jr.’s support had been declining.

Though the third-party candidate was capturing about 15% of registered voters in early July, he lost significant ground after that. In early August, just 7% of voters said they leaned toward or preferred Kennedy for president. This data comes from Pew Research Center surveys conducted in July and August.

As RFK Jr. exits the race, here are some findings about his supporters:

What Kennedy voters did after Biden withdrew from race

Many of Kennedy’s July supporters decided to back a different candidate after Joe Biden left the race. These voters picked Kamala Harris over Donald Trump by two-to-one.

A stacked bar chart showing that RFK Jr. voters were far less likely to strongly support their candidate.

Among voters who said they backed Kennedy in July, a majority (61%) supported a different candidate in August. Roughly four-in-ten (39%) continued to back RFK Jr. Far more of those who changed their preference decided to support Harris (39%) than Trump (20%).

Kennedy’s voters were lukewarm in their support

In August, just 18% of Kennedy’s supporters said they backed him strongly. This compared with nearly two-thirds of Trump (64%) and Harris (62%) supporters.

Which voters were more likely to support RFK Jr.  

A horizontal stacked bar chart showing that Kennedy’s supporters were relatively young, less attentive to politics, less motivated to vote.

Kennedy’s remaining supporters in August were far younger than Harris’ or Trump’s. About two-thirds of Kennedy’s supporters were under 50, compared with 46% of Harris’ and 38% of Trump’s.

While roughly half of Harris and Trump supporters follow what is going on in government and public affairs most of the time, only about a quarter (24%) of Kennedy supporters do.

Kennedy’s supporters also were far less likely to say they were highly motivated to vote in the presidential election. In August, the following shares of each candidate’s supporters said they were extremely motivated to vote:

  • Harris: 70%
  • Kennedy: 23%

Most Kennedy supporters did not identify as partisans – and a majority held unfavorable views of both Harris and Trump

Horizontal stacked bar charts showing that most of Kennedy’s supporters did not identify with a major party – and disliked both parties’ candidates.

Most of Kennedy’s remaining supporters did not call themselves partisans. Just 14% consider themselves Republicans while 12% consider themselves Democrats. The vast majority of his supporters (74%) say they are independent or something else. A larger share lean toward the Republican Party than the Democratic Party (40% vs. 26%).

In August, Kennedy supporters were sour on both Harris and Trump – 61% said they had an unfavorable view of both candidates.

Note: Here are the questions used for this analysis, the topline and the survey methodology.



Hannah Hartig is a senior researcher focusing on U.S. politics and policy research at Pew Research Center .




What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc. Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches

| Approach | What does it involve? |
| --- | --- |
| Grounded theory | Researchers collect rich data on a topic of interest and develop theories. |
| Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures. |
| Action research | Researchers and participants collaboratively link theory to practice to drive social change. |
| Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences. |
| Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences. |

Note that qualitative research is at risk of certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews: personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, to research the culture of a company, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
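
To make steps 3–5 concrete, here is a minimal Python sketch of one way a keyword-based code system could be applied to open-ended survey responses and rolled up into theme counts. The code system, responses, and theme names are all invented for illustration; real qualitative coding is iterative and far more nuanced than keyword matching.

```python
from collections import Counter

# Hypothetical code system (step 3): keyword -> code
CODE_SYSTEM = {
    "deadline": "workload",
    "overtime": "workload",
    "manager": "leadership",
    "team": "collaboration",
    "remote": "flexibility",
}

# Hypothetical open-ended survey responses (step 1)
responses = [
    "My manager supports flexible remote work.",
    "Constant overtime and tight deadlines wear me down.",
    "The team collaborates well, even remote.",
]

def assign_codes(text: str) -> set[str]:
    """Step 4: tag a response with every code whose keyword appears in it."""
    text = text.lower()
    return {code for keyword, code in CODE_SYSTEM.items() if keyword in text}

coded = {r: assign_codes(r) for r in responses}

# Step 5: see which codes recur across responses, as raw material for themes
theme_counts = Counter(code for codes in coded.values() for code in codes)
print(theme_counts.most_common())  # e.g., [('flexibility', 2), ('workload', 1), ...]
```

In practice, researchers typically use dedicated QDA software and add new codes as they emerge, but the underlying mechanics of tagging responses and tallying codes are the same.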

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis approaches:

  • Content analysis: to describe and categorize common words, phrases, and ideas in qualitative data. Example: a market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
  • Thematic analysis: to identify and interpret patterns and themes in qualitative data. Example: a psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
  • Textual analysis: to examine the content, structure, and design of texts. Example: a media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
  • Discourse analysis: to study communication and how language is used to achieve effects in specific contexts. Example: a political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
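
As a toy illustration of the content-analysis row above, the sketch below counts the most frequent words in a small, invented set of therapeutic-app descriptions. A real content analysis would rely on a validated coding scheme and human judgment rather than raw word frequencies.

```python
import re
from collections import Counter

# Invented app-store descriptions standing in for a real corpus
descriptions = [
    "Calm your mind with guided meditation and mindful breathing.",
    "Track your mood daily and build mindful habits that last.",
    "Guided sleep stories to calm anxiety and improve rest.",
]

STOPWORDS = {"your", "and", "with", "that", "to"}

def word_frequencies(texts: list[str]) -> Counter:
    """Tokenize, lowercase, drop stopwords, and count the remaining words."""
    words = re.findall(r"[a-z']+", " ".join(texts).lower())
    return Counter(w for w in words if w not in STOPWORDS)

print(word_frequencies(descriptions).most_common(5))
# e.g., [('calm', 2), ('guided', 2), ('mindful', 2), ...]
```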

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis processes can be adapted as new ideas or patterns emerge; they are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.


Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .


August 31, 2024 | Flash Brief

Egypt Deploys Troops, Weapons in Somalia, Raising Tensions in the Horn of Africa

Latest Developments

Egypt is reportedly sending 10,000 troops to Somalia, in a move that signals a potential major escalation in the Horn of Africa. On August 29, two Egyptian military planes carrying weapons and ammunition landed at Mogadishu airport as a part of Egypt’s planned deployment to a new peacekeeping mission — itself the consequence of a security pact signed in August 2024 between the two countries.

Tensions between Egypt and Ethiopia are at an unprecedented high over the Grand Ethiopian Renaissance Dam (GERD) on the Nile River. Since construction began in 2011, Egypt has viewed the dam as a direct threat to its water supply. Despite multiple rounds of negotiations, including a 2019 U.S.-brokered effort, the parties have failed to reach a lasting agreement.

Earlier this year, Ethiopia signed a preliminary agreement with the breakaway republic of Somaliland to lease land in exchange for potential recognition of Somaliland’s independence from Somalia. This move has strained relations between Ethiopia and the Somali government in Mogadishu, which called the deal a violation of its sovereignty. As a result, Egypt and Somalia have drawn closer, positioning Egypt as a strategic ally in Somalia’s confrontation with Ethiopia.

Expert Analysis

“Egypt’s military presence in Somalia aims to achieve four objectives: First, to assist the Somali army and raise its combat efficiency to deal with the terrorist operations of the Islamist Al-Shabaab terrorist organization. Second, to support the unity of the Somali territories by raising the capability of its armed forces. Third, to enhance Mogadishu’s participation in securing the Bab al-Mandab Strait. And fourth, to taunt Ethiopia and make its leaders uncomfortable.” — Haisam Hassanein, FDD Adjunct Fellow

“There is more to Egypt’s meddling in Somalia than meets the eye. This is a calculated move in the broader geopolitical game, reflecting Cairo’s frustration with the diplomatic stalemate over Ethiopia’s dam project. By signaling its willingness to escalate, Egypt hopes to rein in the political and security threats posed by Ethiopia.” — Mariam Wahba, FDD Research Analyst

Ethiopian Dam Threatens Egypt’s Water Security

Since the start of GERD construction in 2011, Cairo has asserted that the project is a direct threat to its water security. Addis Ababa maintains that the dam is a development project and does not threaten Egypt.

Egypt’s population of over 110 million relies heavily on the Nile’s fresh water. The river is also the backbone of Egyptian agriculture — a major component of the Egyptian economy, representing 11.3 percent of the country’s gross domestic product.

For Ethiopia, the $4.5 billion project is a symbol of its regional ambitions. The mile-long dam is expected to double electricity production in the country of more than 120 million people.

Years of stop-start talks have yielded no results, including the most recent attempt undertaken in December 2023.

Related Analysis

“Shabaab Mounts Large Scale Offensive, Somali Armed Forces Claim Victory,” by Caleb Weiss

“Egyptian Firm Linked to Government Profits from Palestinians Leaving Gaza,” FDD Flash Brief

“Talk Like an Egyptian,” FDD Foreign Podicy Podcast


ESPN research shows Rockets lead NBA in continuity entering 2024-25 season

With nearly all of Houston’s rotation returning, analysis by ESPN’s Neil Paine shows that the Rockets lead the NBA in continuity entering 2024-25.

In a new ESPN story ranking team continuity among the NBA’s 30 franchises, the Houston Rockets currently rank No. 1 headed into the 2024-25 season.

“We know that constant churn among players, coaches, front-office staff, and other key figures is why losing teams often stay down,” Neil Paine writes. “My previous research has shown that, even after controlling for how good or bad a team was, franchises with more turmoil around important roles tend to do worse moving forward.”

With that knowledge, Paine ranked where every NBA team currently sits on the spectrum of continuity, ranging from those that have most of their players returning for 2024-25 to those that tore it all down. Here are the criteria:

To measure this, we'll look at a team's combined ranking across two different dimensions: the share of minutes played and the share of estimated RAPTOR wins above replacement (WAR) for the franchise from the previous three seasons (weighted by recency, such that last season gets a weight of 6, the year before that a 4, and the year before that a 1) from players on the team's current roster. NBA teams with a high ranking in each share — meaning they brought back more of the players who logged minutes and generated value for the team in recent seasons, especially in 2023-24 — have the greatest continuity.
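
Read literally, the quoted methodology is a weighted-share calculation followed by a combined ranking. The Python sketch below implements that logic on invented numbers: the 6/4/1 recency weights come directly from the quote, while the team figures and the use of a simple rank sum to combine the two dimensions are assumptions, since the quote does not spell out the combination step.

```python
# Recency weights from the quoted methodology: last season 6, then 4, then 1
WEIGHTS = (6, 4, 1)

def weighted_share(returning: tuple[float, ...], total: tuple[float, ...]) -> float:
    """Recency-weighted share of minutes (or WAR) still on the current roster.

    returning[i] and total[i] are the figures from i seasons ago."""
    num = sum(w * r for w, r in zip(WEIGHTS, returning))
    den = sum(w * t for w, t in zip(WEIGHTS, total))
    return num / den

# Invented three-season figures (most recent season first) for two teams
teams = {
    "HOU": {"min_ret": (18500, 16000, 9000), "min_tot": (19780, 19780, 19780),
            "war_ret": (38.0, 24.0, 10.0), "war_tot": (40.0, 30.0, 25.0)},
    "BOS": {"min_ret": (17000, 15000, 12000), "min_tot": (19780, 19780, 19780),
            "war_ret": (45.0, 40.0, 30.0), "war_tot": (50.0, 48.0, 40.0)},
}

shares = {
    name: (weighted_share(t["min_ret"], t["min_tot"]),
           weighted_share(t["war_ret"], t["war_tot"]))
    for name, t in teams.items()
}

def ranks(values: dict[str, float]) -> dict[str, int]:
    """Rank teams within one dimension (1 = highest share)."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {team: i + 1 for i, team in enumerate(ordered)}

# Combine the two dimensions; the lowest rank sum has the greatest continuity
min_ranks = ranks({t: s[0] for t, s in shares.items()})
war_ranks = ranks({t: s[1] for t, s in shares.items()})
continuity = {t: min_ranks[t] + war_ranks[t] for t in teams}
print(sorted(continuity.items(), key=lambda kv: kv[1]))
```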

The Rockets ranked third in the share of three-year minutes returning (75.7%) and second in the share of three-year WAR returning (94.4%). The combination of the two placed Houston at No. 1 in the league on the continuity index, just ahead of Orlando at No. 2 and defending NBA champion Boston at No. 3.

Paine notes that Houston is bringing back each of its top 12 WAR earners.

He writes:

Though the Rockets failed to make the playoffs last season, it was a season of growth for Houston, which improved its net rating by 9.0 points per 100 possessions from 2022-23. After years of losing with one of the youngest teams in the NBA (mean age 25.9), the Rockets provided a blueprint for how teams should emerge from a tanking era, with a mix of savvy veterans (Fred VanVleet) and young players coming into their own (Jalen Green, Alperen Sengun, Amen Thompson).

With a 41-41 record last season, Houston’s 19-win improvement from the previous season was the biggest annual jump of any NBA team. Beyond bringing back nearly all of last year’s playing rotation, the Rockets will also add established veteran center Steven Adams and highly touted rookie guard Reed Sheppard (the No. 3 overall pick in the 2024 NBA draft) to the 2024-25 mix.

Paine’s complete ESPN story on continuity in the NBA can be read here.

More: The Athletic: Rockets are NBA’s most compelling franchise for 2025 offseason
