Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The required sample size depends on the size of the population, as well as the margin of error and confidence level you are aiming for. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
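The arithmetic behind such an online sample size calculator can be sketched in a few lines. The version below assumes the widely used Cochran formula with a finite-population correction; the function name and default settings are illustrative, not taken from any specific tool:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's formula with finite-population correction.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    conservative default (maximum variability in the population).
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Correct downward when the population itself is small.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 10,000 university students, 95% confidence, 5% margin of error:
print(sample_size(10_000))  # 370
```

Note how weakly the result depends on population size: with the same settings, a population of one million needs 385 respondents, barely more than the 370 needed for ten thousand.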

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
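As a minimal sketch of that cleaning step, the snippet below drops responses with unanswered questions from a CSV export. The column names are hypothetical, not from any particular survey tool:

```python
import csv

REQUIRED = ["age", "satisfaction", "would_recommend"]  # hypothetical columns

def clean_responses(path):
    """Keep only rows in which every required question was answered."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    complete = [row for row in rows
                if all(row.get(col, "").strip() for col in REQUIRED)]
    print(f"kept {len(complete)} of {len(rows)} responses")
    return complete
```

Logging how many responses were dropped, as above, is worth keeping: a high rate of incomplete responses can itself signal a problem with the questionnaire.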

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
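A first, mechanical pass at such coding can be sketched with a keyword codebook. The labels and keywords below are purely illustrative; in practice, the codebook is refined iteratively by reading the responses themselves:

```python
# Hypothetical codebook: theme label -> keywords that signal the theme.
CODEBOOK = {
    "price":   ["expensive", "cheap", "cost", "price"],
    "service": ["staff", "helpful", "rude", "support"],
    "quality": ["durable", "flimsy", "quality", "broke"],
}

def code_response(text):
    """Assign every matching theme label to a free-text answer."""
    lowered = text.lower()
    labels = [label for label, keywords in CODEBOOK.items()
              if any(kw in lowered for kw in keywords)]
    return labels or ["other"]

print(code_response("The staff were helpful but it felt expensive"))
# ['price', 'service']
```

Responses that match no theme fall into an "other" bucket, which should be reviewed by hand for themes the codebook missed.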

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
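To make the distinction concrete, here is a small sketch of scoring a four-item Likert scale. The five-point responses are invented for illustration:

```python
import statistics

# Hypothetical answers to four items (strongly disagree = 1 ... strongly
# agree = 5), all measuring one attitude; one dict per respondent.
responses = [
    {"q1": 4, "q2": 5, "q3": 4, "q4": 3},
    {"q1": 2, "q2": 1, "q3": 2, "q4": 2},
]

# The overall scale score (sum of the items) is sometimes treated as
# interval data, so a mean is commonly reported.
scale_scores = [sum(r.values()) for r in responses]
print(scale_scores, sum(scale_scores) / len(scale_scores))  # [16, 7] 11.5

# A single item, by contrast, is ordinal: report a median, not a mean.
print(statistics.median(r["q1"] for r in responses))
```

Whether summing item scores justifies interval-level statistics is itself debated; the sketch only shows the mechanics of each choice.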

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 3 September 2024, from https://www.scribbr.co.uk/research-methods/surveys/


Survey Research: Definition, Examples and Methods


Survey research is a quantitative research method used for collecting data from a set of respondents. It has long been one of the most widely used methodologies in the industry because of the benefits it offers for collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions.

In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in many ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate participants to respond. Credible survey research can give businesses access to a vast bank of information. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals and the collection and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, survey research is the primary step towards obtaining quick information about mainstream topics; it can be followed by more rigorous and detailed quantitative methods, such as polls, or qualitative methods, such as focus groups and on-call interviews. In many situations, researchers conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the medium used to conduct the research and the time involved. Based on the medium, there are three main survey research methods:

  • Online/email:  Online survey research is one of the most popular survey research methods today. The cost involved is minimal, and responses can be gathered quickly from a large number of people.
  • Phone:  Survey research conducted over the telephone (CATI survey) can be useful for collecting data from a more extensive section of the target population, but phone surveys tend to cost more and take longer than other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews when there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting surveys over an extended period, sometimes spanning years or decades. The data collected from one time period to another may be qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze the reasons for changes. For example, if a researcher intends to learn about the eating habits of teenagers, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. A longitudinal study often builds on an initial cross-sectional study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular point in time. This survey research method is implemented in various sectors, such as retail, education, healthcare, and SME businesses. Cross-sectional studies can be either descriptive or analytical. They are quick and help researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where a descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling method used to form the sample: probability and non-probability sampling. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory, so that every individual in the population has a known chance of being part of the sample. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, and stratified random sampling. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form the sample.


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
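As an illustration of the probability side, here is a sketch of simple random and systematic sampling from a sampling frame. The frame, sample size, and seed are invented for the example:

```python
import random

frame = [f"person_{i:04d}" for i in range(1000)]  # hypothetical sampling frame

random.seed(42)  # fixed seed so the illustration is reproducible

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(frame, k=50)

# Systematic sampling: a random start, then every k-th member.
step = len(frame) // 50
start = random.randrange(step)
systematic = frame[start::step]

print(len(simple), len(systematic))  # 50 50
```

Both techniques need a complete list of the population (the frame); when no such list exists, researchers fall back on cluster sampling or the non-probability techniques above.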

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. In many surveys, the details of responses matter less than learning what customers prefer from the provided options; in such situations, a researcher can use multiple-choice or closed-ended questions. If researchers need details about specific issues, they can include open-ended questions in the questionnaire. Ideally, a survey should strike a smart balance of open-ended and closed-ended questions. Use question types like the Likert scale, semantic scale, and Net Promoter Score question to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Drawing the sample from a clearly defined target population ensures that the results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments- this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real time and identify patterns in the responses, which might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey analysis methods can be used to spot and shed light on respondent behavior. Use good survey analysis software. Researchers can use the results to implement corrective measures that improve customer/employee satisfaction.
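Cross tabulation, for instance, is simple enough to sketch without any survey software. The coded responses below are invented for the example:

```python
from collections import Counter

# Hypothetical coded responses: (age group, satisfaction) per respondent.
responses = [
    ("18-24", "satisfied"), ("18-24", "unsatisfied"),
    ("25-34", "satisfied"), ("25-34", "satisfied"),
    ("18-24", "satisfied"),
]

# Count each combination of the two variables.
crosstab = Counter(responses)
for (age, answer), count in sorted(crosstab.items()):
    print(f"{age}  {answer:<12} {count}")
# 18-24  satisfied    2
# 18-24  unsatisfied  1
# 25-34  satisfied    2
```

Reading down a column of such a table shows how one variable (satisfaction) is distributed within each level of the other (age group), which is exactly the pattern-spotting the bullet above describes.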

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats, depending on the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying it out, so that the study can be structured, planned, and executed properly.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, be very vocal about how secure their responses will be and how you will utilize the answers. This pushes them to be completely honest in their feedback, opinions, and comments. Online and mobile surveys have proven their privacy protections, so more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics, like product quality or quality of customer service, can be put on the table for discussion. One way to do this is by including open-ended questions where the respondents can write their thoughts. This makes it easy to correlate your survey with what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. Through this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
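The practical consequence of these four scales is which summary statistics are meaningful for the data they produce. A quick sketch with invented data:

```python
import statistics

nominal  = ["red", "blue", "red", "green"]  # labels only -> report the mode
ordinal  = [1, 2, 2, 4, 5]                  # ranked -> report the median
interval = [20.5, 22.0, 19.5]               # even spacing, no true zero -> mean
ratio    = [12.5, 25.0, 50.0]               # true zero exists -> ratios work too

print(statistics.mode(nominal))             # red
print(statistics.median(ordinal))           # 2
print(round(statistics.mean(interval), 2))  # 20.67
print(ratio[1] / ratio[0])                  # 2.0 -- meaningful only on a ratio scale
```

Saying "25.0 is twice 12.5" is valid only for the ratio scale; for interval data (like temperature in Celsius) the same claim would be meaningless, because the zero point is arbitrary.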

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers gain useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys have minimal finance invested per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums, like online and mobile surveys. You can further classify them into qualitative mediums, like focus groups and interviews, and quantitative mediums, like customer-centric surveys. With an offline response collection option, researchers can also conduct surveys in remote areas with limited internet connectivity, which makes data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure, as respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research should state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design when costs must be kept low and detailed information needs to be gathered easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted one.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from the target population:  “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results depends on who the members of the sample are and how relevant their opinions are; the quality of respondents in a sample matters more than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher must answer this question to design the survey effectively. What will the cover letter say? Which survey questions will the questionnaire include? Understand the target market thoroughly to create a questionnaire that gains insights about the survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides which questions to include in a study, they can send it across to the selected sample. Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant 15–20 questions. Frame each question as the type best suited to the answer you would like to gather, using a mix of question types such as multiple-choice, rating scale, and open-ended.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts and share them via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, present all the responses gathered from the survey in a fixed format, and make sure the reader gets clarity about the goal of the study. Address questions such as: Has the product or service been used or preferred? Do respondents prefer one product over another? Are there any recommendations?

Having a tool that helps you carry out all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world collect data in a simple and effective way, and we offer a wide range of solutions for taking advantage of this data in the best possible way.

From dashboards and advanced analysis tools to automation and dedicated functions, in QuestionPro you will find everything you need to execute your research projects effectively. Uncover the insights that matter most!

Other categories

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Tuesday CX Thoughts (TCXT)
  • Uncategorized
  • What’s Coming Up
  • Workforce Intelligence
  • Chapter 5. Planning and conducting a survey

Early planning

Choice of examination methods, staff and training.

n = 500n = 1000
21.0 – 3.71.2 – 3.1
107.5 – 13.08.2 – 12.0
2016.6 – 23.817.6 – 22.6

Sampling methods

Recruiting subjects, response rates.

  • Chapter 1. What is epidemiology?
  • Chapter 2. Quantifying disease in populations
  • Chapter 3. Comparing disease rates
  • Chapter 4. Measurement error and bias
  • Chapter 6. Ecological studies
  • Chapter 7. Longitudinal studies
  • Chapter 8. Case-control and cross sectional studies
  • Chapter 9. Experimental studies
  • Chapter 10. Screening
  • Chapter 11. Outbreaks of disease
  • Chapter 12. Reading epidemiological reports
  • Chapter 13. Further reading

Follow us on

Content links.

  • Collections
  • Health in South Asia
  • Women’s, children’s & adolescents’ health
  • News and views
  • BMJ Opinion
  • Rapid responses
  • Editorial staff
  • BMJ in the USA
  • BMJ in Latin America
  • BMJ in South Asia
  • Submit your paper
  • BMA members
  • Subscribers
  • Advertisers and sponsors

Explore BMJ

  • Our company
  • BMJ Careers
  • BMJ Learning
  • BMJ Masterclasses
  • BMJ Journals
  • BMJ Student
  • Academic edition of The BMJ
  • BMJ Best Practice
  • The BMJ Awards
  • Email alerts
  • Activate subscription

Information

We use cookies

This website uses cookies to provide better user experience and user's session management. By continuing visiting this website you consent the use of these cookies.


Survey Planning: Definition, Importance & Insights

Surveys are a powerful tool for gathering valuable insights, but successful survey planning is critical to ensure the accuracy and relevance of your data.

However, successful survey planning is no walk in the park. It requires meticulous attention to detail, a knack for asking appropriate questions, and a healthy dose of creativity.

With meticulous planning, you can craft surveys that not only yield high response rates but also provide actionable insights that drive informed decisions.


Worry not – surveys can be as exciting as a rollercoaster ride with proper planning.

We have created a roadmap that will guide you through the entire process, from defining your objectives to analyzing the results. We’ll share some witty tips and tricks that will make your survey stand out from the crowd.

Think of us as your trusty GPS, guiding you toward the most efficient data collection and analysis routes. And don’t worry. We won’t leave you stranded on the roadside with a flat tire of inadequate response rates.

Ultimately, you’ll be able to create real-impact surveys and gather valuable insights. You’ll also leave your respondents feeling like they’ve truly been heard.

You will also have fun doing it.

Table of Contents:

  • What is Survey Planning?
  • Why is Survey Planning Important?
  • Steps for Survey Planning
  • How to Examine Responses after Survey Planning?
  • Common Challenges in Survey Planning and How to Overcome Them?

Let’s get started.

Definition: Survey planning is like a treasure hunt, but you dig up information instead of finding treasure. It involves outlining objectives and determining the target population to be surveyed. It is a critical step in ensuring accurate and valuable information is obtained from surveys.

Survey planning is crucial for several reasons:

  • Objective and Focus: Planning helps clarify the purpose and objectives of the survey. It ensures the survey questions align with the specific information needed and the survey’s goals. This helps in collecting relevant and meaningful data.
  • Target audience: Planning enables you to accurately identify and define your target audience. Understanding the target audience’s characteristics and demographics helps you design appropriate survey questions. It also facilitates selecting the suitable sampling method to ensure representative results.
  • Question design: Proper planning allows for thoughtful question design. It ensures that the survey questions are clear, unbiased, and easy to understand for respondents. Well-designed questions help in obtaining accurate and reliable responses.
  • Resource optimization: Planning helps with effective resource allocation. It allows you to determine the appropriate type of survey  and estimate the required budget, time, and workforce.
  • Ethical Considerations: Planning helps address ethical considerations related to the survey. It ensures that informed consent is obtained from participants and their privacy is protected. It also ensures the elimination of potential biases or conflicts of interest.

Steps for Survey Planning

Pay attention to these steps and considerations when writing a survey planning document:

  • Define the objectives: Clearly state the purpose and goals of the survey. What specific information do you want to gather? What decisions or improvements will the survey data support?
  • Identify the target audience: Describe the characteristics and demographics of the target audience. This helps you select appropriate sampling methods and ensures the survey is relevant to the intended recipients.
  • Determine survey methodology:  Decide the survey method to use, such as online surveys , phone interviews, or in-person questionnaires. Consider factors like cost, time, feasibility, and reach.
  • Develop survey questions:  Create survey questions that are unambiguous, brief, and impartial. Make sure they are in line with the survey objectives and capable of generating actionable insights.
  • Plan data analysis:  Determine the techniques and tools you will use to examine the survey data. This includes considering statistical methods, data visualization approaches, and any necessary software or platforms for analysis.
  • Address ethical considerations:  Outline steps to ensure ethical practices throughout the survey process. This includes obtaining informed consent from participants, protecting their privacy and confidentiality, and following ethical guidelines.
  • Develop a timeline:  Create a timeline that outlines key milestones, including survey design , pre-testing, data collection, and analysis. This helps in managing the survey process and ensuring timely completion.
  • Consider resources and budget:  Estimate the resources needed for the survey, such as personnel, technology, and budget. This allows for effective allocation and management of resources.
  • Pre-test the survey:  Conduct a pilot survey to test the survey instrument and identify any issues or ambiguities. Then make the necessary revisions before launching the survey.
  • Plan for data collection: Determine the data collection process , including sample selection, distribution methods, and data entry procedures. Ensure data quality control measures are in place.
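The steps above lend themselves to being kept as structured data, so that no element of the plan gets skipped. A minimal sketch; every field name and value here is invented for illustration, not a prescribed format.

```python
from dataclasses import dataclass

# A survey planning document as structured data. Fields mirror the
# steps above; names and values are illustrative only.
@dataclass
class SurveyPlan:
    objectives: list
    target_audience: str
    methodology: str          # e.g. "online", "phone", "in-person"
    questions: list
    timeline: dict            # milestone -> due date
    budget_usd: float
    pretested: bool = False   # flipped to True after the pilot study

plan = SurveyPlan(
    objectives=["Measure satisfaction with the new product"],
    target_audience="Customers who purchased in the last 90 days",
    methodology="online",
    questions=["Does the product meet your quality expectations?"],
    timeline={"design": "week 1", "pilot": "week 2", "collect": "weeks 3-4"},
    budget_usd=2500.0,
)

print(plan.methodology)  # online
```

Keeping the plan in one structured record makes it easy to review before launch, for example confirming that `pretested` is set before distribution begins.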

Failing to plan is planning to fail, they say. And when analyzing survey data, a well-thought-out plan is essential to turn those numbers into actionable insights. Lucky for you, with ChartExpo for Excel, you can create appealing, insightful visualizations in a few clicks. Even the most complex data sets will look like a work of art.

Let’s learn how to install ChartExpo in Excel.

  • Open your Excel application.
  • Open the worksheet and click the “ Insert ” menu.
  • You’ll see the “ My Apps ” option.
  • In the Office Add-ins window, click “ Store ” and search for ChartExpo.
  • Click the “ Add ” button to install ChartExpo in your Excel.

ChartExpo charts are available both in Google Sheets and Microsoft Excel. Please use the following CTA’s to install the tool of your choice and create beautiful visualizations in a few clicks in your favorite tool.


Let’s say you want customer feedback regarding a recently launched product. Therefore, you ask customers to complete a customer feedback survey with the following questions.

  • Do you agree that the product meets your expectations in terms of quality?
  • Do you agree that the product provides value for its price?
  • Do you agree that the product is easy to use?

Respondents provide an answer to each question using the Likert scale below:

  • Strongly Disagree=1
  • Disagree=2
  • Neither agree nor disagree=3
  • Agree=4
  • Strongly Agree=5

Assume your survey yields the results table below. Each row is one respondent; the three columns hold the answers to the quality, price, and ease-of-use questions, in that order.

Strongly Agree Neither agree nor disagree Strongly Disagree
Neither agree nor disagree Strongly Agree Agree
Strongly Agree Neither agree nor disagree Strongly Agree
Neither agree nor disagree Disagree Agree
Disagree Strongly Disagree Strongly Disagree
Strongly Agree Agree Strongly Agree
Agree Strongly Disagree Agree
Neither agree nor disagree Agree Disagree
Agree Strongly Agree Strongly Agree
Strongly Agree Strongly Agree Strongly Agree
Strongly Disagree Neither agree nor disagree Disagree
Strongly Agree Strongly Agree Strongly Disagree
Neither agree nor disagree Strongly Disagree Disagree
Strongly Agree Strongly Agree Strongly Agree
Agree Agree Agree
Strongly Disagree Neither agree nor disagree Disagree
Strongly Agree Strongly Agree Strongly Agree
Strongly Agree Disagree Strongly Agree
Disagree Strongly Disagree Strongly Agree
Strongly Disagree Disagree  Disagree

This table contains example data. Expect many responses and questions in real life.
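The tallying that a Likert chart automates can also be reproduced by hand. Below is a small Python sketch that counts the first column of the example table (the quality question) and buckets the five labels into agree / neutral / disagree.

```python
from collections import Counter

# Column 1 (quality question) of the example table above.
q1 = ["Strongly Agree", "Neither agree nor disagree", "Strongly Agree",
      "Neither agree nor disagree", "Disagree", "Strongly Agree", "Agree",
      "Neither agree nor disagree", "Agree", "Strongly Agree",
      "Strongly Disagree", "Strongly Agree", "Neither agree nor disagree",
      "Strongly Agree", "Agree", "Strongly Disagree", "Strongly Agree",
      "Strongly Agree", "Disagree", "Strongly Disagree"]

counts = Counter(q1)
agree = counts["Agree"] + counts["Strongly Agree"]
neutral = counts["Neither agree nor disagree"]
disagree = counts["Disagree"] + counts["Strongly Disagree"]
total = len(q1)

print(f"agree {agree/total:.0%}, neutral {neutral/total:.0%}, "
      f"disagree {disagree/total:.0%}")
# agree 55%, neutral 20%, disagree 25%
```

This hand count is a useful sanity check on whatever percentages your charting tool reports.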

  • To get started with ChartExpo, install  ChartExpo in Excel .
  • Now Click on My Apps from the INSERT menu.


  • Choose ChartExpo from My Apps , then click Insert.


  • Once it loads, choose the “ Likert Scale Chart ” from the charts list.


  • Click the “ Create Chart From Selection ” button after selecting the data from the sheet, as shown.


  • When you click the “ Create Chart From Selection ” button, you have to map responses to numbers manually. For this survey’s scale, the arrangement is:
  • Strongly Disagree = 1
  • Disagree = 2
  • Neither agree nor disagree = 3
  • Agree = 4
  • Strongly Agree = 5
  • Once all is set, click the “ Create Chart ” button.


  • ChartExpo will generate the visualization below for you.


  • If you want to have the chart’s title, click Edit Chart , as shown in the above image.
  • Click the pencil icon next to Chart Header to change the title.
  • It will open the properties dialog. Under the Text section, you can add a heading in Line 1 and enable Show .
  • Give the appropriate title of your chart and click the Apply button.


  • Let’s say you want to add text responses instead of numbers against every emoji.
  • Click the pencil icon next to the respective emoji. Expand the “ Label ” properties and write the required text. Then click the “ Apply All ” button.
  • Click the “ Save Changes ” button to persist the changes.


  • Your final chart will appear as below.


  • 63% of customers agree with the product usage, while 37% do not agree.
  • Regarding the price, 48% agree with its value, 32% do not agree, and 21% remain neutral.
  • Regarding product quality, 58% agree with it, but 22% do not agree, and 21% gave a neutral response.
  • 57% of customers gave positive feedback.
  • 30% gave negative feedback.
  • 14% remained neutral.

What are the Common Challenges in Survey Planning and How to Overcome Them?

Survey planning involves carefully considering various factors to ensure success and reliability. Here are some common challenges in survey planning and strategies to overcome them:

Targeting the Right Audience

Challenge:  Reaching the intended participants can be challenging, leading to a biased or unrepresentative sample.

Solution:  Develop a sampling strategy that targets the appropriate population. Use random sampling techniques or stratified sampling to ensure representation across different groups.

Designing Unbiased Survey Questions

Challenge:  Biased or leading questions can influence respondents’ answers and compromise the validity of the survey.

Solution: Craft neutral and clear questions using unbiased language. Also, pre-test the survey with a small group to identify and eliminate potential biases or misunderstandings.

Maximizing Response Rate

Challenge:  Low response rates can affect the sample’s representativeness and introduce non-response bias.

Solution:  Keep the survey concise, clearly communicate the purpose, and offer incentives if appropriate. Also, use multiple modes of distribution (e.g., email, online platforms), and send reminders to increase response rates.

Ensuring Survey Reliability and Validity

Challenge:  Surveys need to measure what they intend to measure consistently and accurately.

Solution:  Use established measurement scales and validated questions whenever possible. Conduct pilot testing to identify potential issues and refine the survey instrument for reliability and validity.

Minimizing Survey Fatigue

Challenge:  Long or repetitive surveys can lead to respondent fatigue, resulting in incomplete or rushed responses.

Solution:  Keep the survey concise, prioritize essential questions, and use skip logic to tailor the survey to individual respondents. Consider dividing the survey into multiple shorter sections if necessary.

Handling Missing Data

Challenge: Incomplete responses or missing data can affect the overall analysis and interpretation of survey results .

Solution:  Implement mechanisms to minimize missing data, such as mandatory response fields or reminder prompts. Use appropriate techniques for handling missing data during data analysis , such as imputation or sensitivity analyses.
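As a sketch of the simplest of those techniques, mean imputation, assume a 1-5 Likert coding with `None` marking a missing response. Real projects should also consider multiple imputation or sensitivity analyses, as noted above.

```python
# Numeric codes for one survey item; None marks a missing response.
scores = [4, 5, None, 3, None, 4]

# Impute each missing value with the mean of the observed values.
observed = [s for s in scores if s is not None]
mean = sum(observed) / len(observed)      # (4 + 5 + 3 + 4) / 4 = 4.0

imputed = [mean if s is None else s for s in scores]
print(imputed)  # [4, 5, 4.0, 3, 4.0, 4]
```

Mean imputation keeps the sample size intact but shrinks variance, which is one reason a sensitivity analysis alongside it is good practice.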

How do you conduct a survey plan?

To conduct a survey plan:

  • Start by defining clear objectives and target audience.
  • Develop a sampling strategy, design unbiased questions, and plan for data collection methods.
  • Consider factors like response rates, privacy protection, data analysis, and reporting.

What is the first step in planning a survey?

The first step in planning a survey is clearly defining the objectives and research questions. This provides a clear focus and direction for the survey design. It also helps determine the necessary sample size and data collection methods.

What makes a survey successful?

A survey is successful when it effectively addresses its objectives and collects reliable and valid data. It should also achieve a high response rate, represent the target population well, and provide actionable insights.

Successful survey planning requires careful consideration of several key factors. By following this roadmap for survey planning, you can increase the likelihood of obtaining valuable insights and, ultimately, of achieving your objectives.

The first step is to define the objectives and research questions clearly. This sets the foundation for the entire survey process. It also ensures that the survey remains focused and purposeful.

Identifying the target audience and implementing appropriate sampling techniques is crucial for obtaining representative and reliable data.

Next, design unbiased survey questions and ensure the reliability and validity of the survey instrument. This helps maintain the quality of the data collected.

Additionally, minimize survey fatigue, handle missing data, and protect respondent privacy. These steps contribute to the ethical and trustworthy nature of the survey.

Once data collection is complete, careful data analysis and interpretation are essential for drawing meaningful insights. That’s where visualizations come in.

ChartExpo empowers you to create compelling and insightful visualizations of your data. It revolutionizes data presentation and makes your data more digestible than ever before.


Create A Survey Sampling Plan In Seven Simple Steps

What is a Sampling Plan in Market Research?

Why is a Sampling Plan Crucial for a Market Research Project?

Seven Steps to Create a Sampling Plan for Your Research Survey

Step 1: Define Your Survey Goals and Objectives

Step 2: Determine Your Sampling Frame

Step 3: Choose Your Sampling Method

Step 4: Determine the Sample Size

  • Qualitative Studies: Typically involve smaller samples, continuing until theoretical saturation is reached, the point at which new data no longer provides additional insights. For example, if you’re interviewing participants for a study on a specific behavioral pattern and each new interview simply repeats what has already been said, you’ve likely reached theoretical saturation.
  • Quantitative Studies: Use statistical methods to calculate sample size based on desired confidence levels and margins of error. For example, a 95% confidence level with a 5% margin of error requires precise calculations to ensure validity. Avoid generic rules like "100 subjects is enough"; instead, perform a tailored power analysis or consult detailed research for precise calculations.
Learn more about choosing between qualitative or quantitative research .
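The tailored calculation mentioned above is straightforward for a proportion. A sketch assuming simple random sampling and a large population; p = 0.5 is the most conservative assumption when the true proportion is unknown.

```python
import math

# n = z^2 * p * (1 - p) / e^2 : sample size needed to estimate a
# proportion p with margin of error e at the confidence level whose
# z-score is given (1.96 for 95% confidence).
def sample_size(z=1.96, p=0.5, e=0.05):
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(sample_size())        # 385 (95% confidence, 5% margin)
print(sample_size(e=0.03))  # 1068 (tighter 3% margin)
```

Note how sharply the required sample grows as the margin of error tightens; this is why the margin should be chosen deliberately rather than defaulted.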

Step 5: Select Your Data Collection Method

Step 6: Test Your Survey

Step 7: Implement Your Survey

Discover advanced tools to combat sample and survey fraud and maintain data integrity.

Types of Sampling Methods Used within a Sampling Plan

  • Probability Sampling: This method focuses on ensuring that every individual in the population has a fair chance of being selected. Techniques like Simple Random Sampling and Stratified Sampling emphasize randomness and representation. Cluster Sampling and Multistage Sampling add layers of structure, making them particularly useful for large or geographically dispersed populations.
  • Non-Probability Sampling: In contrast, this method relies on more subjective criteria, selecting participants based on ease of access ( Convenience Sampling ) or specific characteristics (Purposive Sampling). Quota Sampling and Snowball Sampling build on these principles, with Quota ensuring representation across segments and Snowball leveraging participant networks.
  • Mixed Sampling: This approach blends elements from both probability and non-probability methods. For example, Stratified Cluster Sampling combines the stratification of probability sampling with the practicality of cluster sampling, while Sequential Sampling might start with a random selection and then narrow down the sample based on specific criteria, merging both randomization and purposiveness.
For a detailed guide on Survey Sampling Methods, visit https://tgmresearch.com/survey-sampling-methods.html !
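Proportional stratified selection, one of the probability techniques above, is easy to sketch with the standard library. The frame and strata below are invented for illustration.

```python
import random

random.seed(42)  # reproducible draw for this sketch

# A hypothetical sampling frame split into two strata.
frame = {
    "north": [f"n{i}" for i in range(600)],
    "south": [f"s{i}" for i in range(400)],
}
total = sum(len(members) for members in frame.values())  # 1000
n = 100  # overall sample size

# Proportional allocation: each stratum contributes in proportion
# to its share of the frame (60 from north, 40 from south).
sample = []
for members in frame.values():
    k = round(n * len(members) / total)
    sample.extend(random.sample(members, k))

print(len(sample))  # 100
```

Stratifying this way guarantees both regions appear in the sample in the same proportions as in the frame, which a simple random draw only achieves on average.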

What are the Most Important Things to Consider When Creating a Survey Sampling Plan?

8 best practices when designing a survey sampling plan

1. Understanding the target audience

2. Choosing the right sampling method

3. Determining an adequate sample size

4. Addressing non-response bias

5. Constructing clear questions and statements

6. Choosing appropriate response types

7. Using reliable scales and measures

  • Invalid scales often lack relevance, misalign with research goals, or are inappropriate for the population. For example, a scale that measures employee satisfaction but only includes questions about office decor, rather than job duties or management quality, is likely invalid.
  • Unreliable scales produce inconsistent results, show significant variations over time, or have high error rates. An example is a personality test that gives different results for the same person on different days without changes in their traits.
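One widely used statistic for the consistency described in the second bullet is Cronbach's alpha. The text above does not name it, so treat this as a supplementary sketch: rows are respondents, columns are items on the same 1-5 scale.

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    cols = list(zip(*rows))               # per-item response columns
    item_var = sum(variance(c) for c in cols)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Five hypothetical respondents answering three related items.
data = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(data), 2))  # 0.93 (high internal consistency)
```

A personality test that "gives different results for the same person on different days", as in the example above, would show up here as a low alpha across repeated administrations.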

8. Considering analysis techniques

Managing Resources for Effective Sampling and Tracking Progress

  • Resource Allocation: Assign roles based on expertise, have project managers oversee timelines and progress, and create a detailed budget for participant incentives, tools, and software, monitoring it regularly to avoid overspending.
  • Training: Train your team in sampling methods and tools to minimize errors and enhance data quality. Ongoing professional development helps keep your team updated on best practices.
  • Time Management: Develop a project timeline with clear deadlines for each phase. Use project management tools to track progress and adjust as needed.
  • Monitoring Progress: Utilize software to track sampling progress in real-time. Regular team check-ins can identify and address challenges early.
  • Quality Control: Implement quality checks at each stage to ensure accuracy in your sampling process and take corrective actions as needed.
  • Contingency Planning: Prepare for unforeseen challenges with a contingency plan, including budget reserves and backup team members to maintain project continuity.

Key Software and Tools for Effective Sampling Plan Development

1. Sampling Plan Templates and Guides

  • SurveyMonkey: Offers a range of sampling plan templates and guides to help you design your sampling strategy.
  • Qualtrics: Provides sampling plan resources including templates and best practices.

2. Sample Size Calculators

  • Raosoft Sample Size Calculator: A user-friendly tool for calculating sample size based on margin of error, confidence level, and population size.
  • Epi Info: Offers a sample size calculator for epidemiological studies and surveys.

3. Sampling Methods and Techniques

  • TGM Research Blog: Provides detailed explanations of various sampling methods, including probability and non-probability sampling techniques.
  • StatTrek: Offers comprehensive guides and examples on sampling methods and their applications.

4. Data Collection Tools

  • SurveyMonkey: A popular online survey tool that supports various data collection methods including online surveys.
  • Google Forms: A free tool for creating and distributing online surveys with real-time data collection and analysis.

5. Statistical Analysis Software

  • SPSS: Provides robust statistical analysis tools for analyzing survey data and determining sample representativeness. Learn more about SPSS.
  • R: An open-source programming language and software environment for statistical computing and graphics, useful for analyzing complex sampling data.

A strong sampling plan should include a representative sample, an adequate sample size, a well-defined sampling frame, and appropriate sampling techniques. These elements are crucial for producing high-quality, reliable research results.

A good sample size depends on the population size, the desired confidence level, and the margin of error. Typically, larger sample sizes provide more accurate results, but for most surveys, a sample size of around 350-400 participants is often sufficient to achieve a 95% confidence level with a 5% margin of error for a population of several thousand.
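That 350-400 range can be checked with the finite population correction. A sketch under the usual 95% confidence / 5% margin assumptions:

```python
import math

# Sample size for a proportion with finite population correction:
# n0 = z^2 * p * (1 - p) / e^2, then n = n0 / (1 + (n0 - 1) / N).
def sample_size_fpc(N, z=1.96, p=0.5, e=0.05):
    n0 = z**2 * p * (1 - p) / e**2        # 384.16 before correction
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(sample_size_fpc(5000))   # 357 for a population of 5,000
print(sample_size_fpc(10**7))  # 385 once the population is effectively infinite
```

For a population of several thousand, the corrected figure lands inside the 350-400 range quoted above; only for very small populations does the correction shrink the requirement substantially.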

To choose a sampling method, consider the research goals, population characteristics, and resources available. Common methods include random sampling for generalizability, stratified sampling to ensure representation across key subgroups, and convenience sampling when ease and speed are priorities. The choice should align with the need for accuracy, representativeness, and feasibility.

Developing a sampling plan involves challenges such as defining a comprehensive sampling frame, choosing the right method, and determining an adequate sample size. Ensuring representativeness, minimizing bias, managing non-response rates, and addressing logistical constraints are also difficult but essential for effective planning. Overcoming these challenges requires careful planning and a deep understanding of sampling techniques and research goals.

Theoretical saturation is the stage in grounded theory analysis where additional data no longer adds new insights into the topic. In grounded theory, data collection and analysis continue iteratively until no further significant information is gained. Once this point is reached, additional data collection is unnecessary.

The Confidence Level (CL) is a statistical measure of the likelihood that test results fall within a specified range. For example, with a 95% confidence level, if the survey were repeated many times, the results would be expected to fall within that range 95% of the time.

The margin of error is a statistic measuring the degree of random sampling error in survey results. A larger margin of error implies less certainty that the survey findings accurately represent the entire population.

Articles to read

7 Simple Steps to Create A Survey Sampling Plan

Transform your approach. Let's talk research!

Ann R Coll Surg Engl. 2013 Jan; 95(1)

A quick guide to survey research

1 University of Cambridge, UK

2 Cambridge University Hospitals NHS Foundation Trust, UK

Questionnaires are a very useful survey tool that allow large populations to be assessed with relative ease. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

Medical research questionnaires or surveys are vital tools used to gather information on individual perspectives in a large cohort. Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1


Clear research goal

The first and most important step in designing a survey is to have a clear idea of what you are looking for. It will always be tempting to take a blanket approach and ask as many questions as possible in the hope of getting as much information as possible. This type of approach does not work as asking too many irrelevant or incoherent questions reduces the response rate 2 and therefore reduces the power of the study. This is especially important when surveying physicians as they often have a lower response rate than the rest of the population. 3 Instead, you must carefully consider the important data you will be using and work on a ‘need to know’ rather than a ‘would be nice to know’ model. 4

After considering the question you are trying to answer, deciding whom you are going to ask is the next step. With small populations, attempting to survey them all is manageable but as your population gets bigger, a sample must be taken. The size of this sample is more important than you might expect. After lost questionnaires, non-responders and improper answers are taken into account, this sample must still be big enough to be representative of the entire population. If it is not big enough, the power of your statistics will drop and you may not get any meaningful answers at all. It is for this reason that getting a statistician involved in your study early on is absolutely crucial. Data should not be collected until you know what you are going to do with them.

Directed questions

After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic ( Table 1 ).

Advantages and disadvantages of survey methods

  • Personal: Advantages: complex questions can be asked; visual aids can be used; higher response rates. Disadvantages: expensive; time inefficient; interviewer training needed to avoid bias.
  • Telephone: Advantages: allows clarification; larger radius than personal; less expensive and time consuming; higher response rates. Disadvantages: no visual aids; difficult to develop rapport.
  • Postal: Advantages: larger target; visual aids (although limited). Disadvantages: non-response; time for data compilation; lower response rates.
  • Electronic: Advantages: larger target; visual aids; quick response; quick data compilation. Disadvantages: non-response; not all subjects accessible; lower response rates.

Collected data are only useful if they convey information accurately and consistently about the topic in which you are interested. This is where a validated survey instrument comes in to the questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. They can therefore be assumed to be accurate. 1 It may be possible to modify a previously validated instrument but you should seek specialist advice as this is likely to reduce its power. Examples of validated models are the Beck Hopelessness Scale 5 or the Addenbrooke’s Cognitive Examination. 6

The next step is choosing the type of question you are going to ask. The questionnaire should be designed to answer the question you want answered. Each question should be clear, concise and free of bias. Normalising statements should be included, and the language level should be targeted at those at the lowest educational level in your cohort. 1 You should avoid double-barrelled questions and questions that include negative items or assign causality. 1 The questions you use may elicit either an open (free-text) or closed response. Open responses are more flexible but require more time and effort to analyse, whereas closed responses require more initial input in order to exhaust all possible options but are easier to analyse and present.
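The analysis trade-off between response types can be seen in miniature: closed responses tally directly into presentable counts, whereas free-text answers must first be coded into categories by hand. A small sketch, with responses invented for illustration:

```python
from collections import Counter

# Closed responses: fixed options tally straight into presentable counts.
closed_responses = ["agree", "agree", "neutral", "disagree", "agree"]
tally = Counter(closed_responses)
# Counter({'agree': 3, 'neutral': 1, 'disagree': 1})

# Open responses: free text must first be coded into themes by a
# researcher before any counting is possible.
open_responses = [
    "The clinic was hard to reach by bus",
    "Friendly staff but long waits",
]
coded_themes = {"access": 1, "staff attitude": 1, "waiting time": 1}  # manual step
```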

Questionnaire

Two more aspects come into questionnaire design: aesthetics and question order. While this is not relevant to telephone or personal questionnaires, in self-administered surveys the aesthetics of the questionnaire are crucial. Having spent a large amount of time fine-tuning your questions, presenting them in such a way as to maximise response rates is pivotal to obtaining good results. Visual elements to think of include smooth, simple and symmetrical shapes, soft colours and repetition of visual elements. 7

Once you have attracted your subject’s attention and willingness with a well-designed and attractive survey, the order in which you put your questions is critical. Focus on what you need to know: place easier, important questions at the beginning, group common themes in the middle, and keep demographic questions near the end. The questions should be arranged in a logical order, with questions on the same topic close together and with sensible sections if the survey is long enough to warrant them. Introductory and summary questions to mark the start and end of the survey are also helpful.

Pilot study

Once a completed survey has been compiled, it needs to be tested. The ideal next step should highlight spelling errors, ambiguous questions and anything else that impairs completion of the questionnaire. 8 A pilot study, in which you apply your work to a small sample of your target population in a controlled setting, may highlight areas where work still needs to be done. Where possible, being present while the pilot is going on will allow a focus group-type atmosphere in which you can discuss aspects of the survey with those who will be filling it in. This step may seem non-essential, but detecting previously unconsidered difficulties needs to happen as early as possible, and it is important to use your participants’ time wisely, as they are unlikely to give it again.

Distribution and collection

While it should be considered quite early on, we will now discuss routes of survey administration and ways to maximise results. Questionnaires can be self-administered electronically or by post, or administered by a researcher by telephone or in person. The advantages and disadvantages of each method are summarised in Table 1. Telephone and personal surveys are very time and resource consuming, whereas postal and electronic surveys suffer from low response rates and response bias. Your route should therefore be chosen with care.

Methods for maximising response rates for self-administered surveys are listed in Table 2 , taken from a Cochrane review. 2 The differences between methods of maximising responses to postal or e-surveys are considerable, but common elements include keeping the questionnaire short and logical, as well as including incentives.

Table 2: Methods for improving response rates in postal and electronic questionnaires 2

Postal
  • Monetary or non-monetary incentives
  • Teaser on the envelope
  • Pre-notification
  • Follow-up with another copy included
  • Handwritten addresses
  • University sponsorship
  • Use recorded delivery
  • Include return envelope
  • Avoid sensitive questions

Electronic
  • Non-monetary incentives
  • Personalised questionnaires
  • Include pictures
  • Not including ‘survey’ in the subject line
  • Male signature
  • White background
  • Short questionnaire
  • Offer of results
  • Statement that others have responded
  • Involve a statistician early on.
  • Run a pilot study to uncover problems.
  • Consider using a validated instrument.
  • Only ask what you ‘need to know’.
  • Consider guidelines on improving response rates.

The collected data will come in a number of forms depending on the method of collection. Data from telephone or personal interviews can be directly entered into a computer database whereas postal data can be entered at a later stage. Electronic questionnaires can allow responses to go directly into a computer database. Problems arise from errors in data entry and when questionnaires are returned with missing data fields. As mentioned earlier, it is essential to have a statistician involved from the beginning for help with data analysis. He or she will have helped to determine the sample size required to ensure your study has enough power. The statistician can also suggest tests of significance appropriate to your survey, such as Student’s t-test or the chi-square test.
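To make the analysis step concrete, here is a minimal sketch (not from the original article) of the chi-square test statistic for a contingency table of survey responses, such as answers cross-tabulated by respondent group:

```python
def chi_square_statistic(table):
    """Chi-square statistic for a contingency table (list of rows).

    Compares observed counts against the counts expected if the row
    and column variables were independent.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# e.g. 'satisfied' vs 'unsatisfied' responses for two hypothetical clinics:
# chi_square_statistic([[30, 10], [20, 40]]) is about 16.7, well above the
# 3.84 critical value for 1 degree of freedom at p = 0.05.
```

In practice you would use a statistical package (for example scipy.stats.chi2_contingency) rather than hand-rolling this; the sketch only shows what the test compares.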

Conclusions

Survey research is a unique way of gathering information from a large cohort. Advantages of surveys include reaching a large population (and therefore greater statistical power), the ability to gather large amounts of information, and the availability of validated instruments. However, surveys can be costly, recall accuracy may vary, and the validity of a survey depends on its response rate. Proper design is vital to enable analysis of results, and pilot studies are critical to this process.

How to write a research plan: Step-by-step guide

Last updated

30 January 2024


Today’s businesses and institutions rely on data and analytics to inform their product and service decisions. These metrics influence how organizations stay competitive and inspire innovation. However, gathering data and insights requires carefully constructed research, and every research project needs a roadmap. This is where a research plan comes into play.

Read this step-by-step guide for writing a detailed research plan that can apply to any project, whether it’s scientific, educational, or business-related.

  • What is a research plan?

A research plan is a documented overview of a project in its entirety, from end to end. It details the research efforts, participants, and methods needed, along with any anticipated results. It also outlines the project’s goals and mission, creating layers of steps to achieve those goals within a specified timeline.

Without a research plan, you and your team are flying blind, potentially wasting time and resources to pursue research without structured guidance.

The principal investigator (PI) is responsible for overseeing the research. They will create the research plan and inform team members and stakeholders of every detail relating to the project. The PI will also use the research plan to inform decision-making throughout the project.

  • Why do you need a research plan?

Create a research plan before starting any official research to maximize every effort in pursuing and collecting the research data. Crucially, the plan will model the activities needed at each phase of the research project.

Like any roadmap, a research plan serves as a valuable tool providing direction for those involved in the project—both internally and externally. It will keep you and your immediate team organized and task-focused while also providing necessary definitions and timelines so you can execute your project initiatives with full understanding and transparency.

External stakeholders appreciate a working research plan because it’s a great communication tool, documenting progress and changing dynamics as they arise. Any participants of your planned research sessions will be informed about the purpose of your study, while the exercises will be based on the key messaging outlined in the official plan.

Here are some of the benefits of creating a research plan document for every project:

Project organization and structure

Well-informed participants

All stakeholders and teams align in support of the project

Clearly defined project definitions and purposes

Distractions are eliminated, prioritizing task focus

Timely management of individual task schedules and roles

Costly reworks are avoided

  • What should a research plan include?

The different aspects of your research plan will depend on the nature of the project. However, most official research plan documents will include the core elements below. Each aims to define the problem statement and devise an official plan for seeking a solution.

Specific project goals and individual objectives

Ideal strategies or methods for reaching those goals

Required resources

Descriptions of the target audience, sample sizes, demographics, and scopes

Key performance indicators (KPIs)

Project background

Research and testing support

Preliminary studies and progress reporting mechanisms

Cost estimates and change order processes

Depending on the research project’s size and scope, your research plan could be brief—perhaps only a few pages of documented plans. Alternatively, it could be a fully comprehensive report. Either way, it’s an essential first step in dictating your project’s facilitation in the most efficient and effective way.

  • How to write a research plan for your project

When you start writing your research plan, aim to be detailed about each step, requirement, and idea. The more time you spend curating your research plan, the more precise your research execution efforts will be.

Account for every potential scenario, and be sure to address each and every aspect of the research.

Consider following this flow to develop a great research plan for your project:

Define your project’s purpose

Start by defining your project’s purpose. Identify what your project aims to accomplish and what you are researching. Remember to use clear language.

Thinking about the project’s purpose will help you set realistic goals and inform how you divide tasks and assign responsibilities. These individual tasks will be your stepping stones to reach your overarching goal.

Additionally, you’ll want to identify the specific problem, the usability metrics needed, and the intended solutions.

Know the following three things about your project’s purpose before you outline anything else:

What you’re doing

Why you’re doing it

What you expect from it

Identify individual objectives

With your overarching project objectives in place, you can identify any individual goals or steps needed to reach those objectives. Break them down into phases or steps. You can work backward from the project goal and identify every process required to facilitate it.

Be mindful to identify each unique task so that you can assign responsibilities to various team members. At this point in your research plan development, you’ll also want to assign priority to those smaller, more manageable steps and phases that require more immediate or dedicated attention.

Select research methods

Once you have outlined your goals, objectives, steps, and tasks, it’s time to drill down on selecting research methods. You’ll want to leverage specific research strategies and processes. When you know what methods will help you reach your goals, you and your teams will have direction to perform and execute your assigned tasks.

Research methods might include any of the following:

User interviews: this is a qualitative research method where researchers engage with participants in one-on-one or group conversations. The aim is to gather insights into their experiences, preferences, and opinions to uncover patterns, trends, and data.

Field studies: this approach allows for a contextual understanding of behaviors, interactions, and processes in real-world settings. It involves the researcher immersing themselves in the field, conducting observations, interviews, or experiments to gather in-depth insights.

Card sorting: participants categorize information by sorting content cards into groups based on their perceived similarities. You might use this process to gain insights into participants’ mental models and preferences when navigating or organizing information on websites, apps, or other systems.

Focus groups: use organized discussions among select groups of participants to provide relevant views and experiences about a particular topic.

Diary studies: ask participants to record their experiences, thoughts, and activities in a diary over a specified period. This method provides a deeper understanding of user experiences, uncovers patterns, and identifies areas for improvement.

Five-second testing: participants are shown a design, such as a web page or interface, for just five seconds. They then answer questions about their initial impressions and recall, allowing you to evaluate the design’s effectiveness.

Surveys: get feedback from participant groups with structured surveys. You can use online forms, telephone interviews, or paper questionnaires to reveal trends, patterns, and correlations.

Tree testing: tree testing involves researching web assets through the lens of findability and navigability. Participants are given a textual representation of the site’s hierarchy (the “tree”) and asked to locate specific information or complete tasks by selecting paths.

Usability testing: ask participants to interact with a product, website, or application to evaluate its ease of use. This method enables you to uncover areas for improvement in digital key feature functionality by observing participants using the product.

Live website testing: research and collect analytics that outline the design, usability, and performance efficiencies of a website in real time.

There are no limits to the number of research methods you could use within your project. Just make sure your research methods help you determine the following:

What do you plan to do with the research findings?

What decisions will this research inform? How can your stakeholders leverage the research data and results?

Recruit participants and allocate tasks

Next, identify the participants needed to complete the research and the resources required to complete the tasks. Different people will be proficient at different tasks, and having a task allocation plan will allow everything to run smoothly.

Prepare a thorough project summary

Every well-designed research plan will feature a project summary. This official summary will guide your research alongside its communications or messaging. You’ll use the summary while recruiting participants and during stakeholder meetings. It can also be useful when conducting field studies.

Ensure this summary includes all the elements of your research project. Separate the steps into an easily explainable piece of text that includes the following:

An introduction: the message you’ll deliver to participants about the interview, pre-planned questioning, and testing tasks.

Interview questions: prepare questions you intend to ask participants as part of your research study, guiding the sessions from start to finish.

An exit message: draft messaging your teams will use to conclude testing or survey sessions. These should include the next steps and express gratitude for the participant’s time.

Create a realistic timeline

While your project might already have a deadline or a results timeline in place, you’ll need to consider the time needed to execute it effectively.

Realistically outline the time needed to properly execute each supporting phase of research and implementation. And, as you evaluate the necessary schedules, be sure to include additional time for achieving each milestone in case any changes or unexpected delays arise.

For this part of your research plan, you might find it helpful to create visuals to ensure your research team and stakeholders fully understand the information.
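The buffered scheduling described above can be sketched in a few lines; the phase names and durations here are invented for illustration:

```python
from datetime import date, timedelta

def build_timeline(start, phases, buffer_days=3):
    """Lay phases out back to back, padding each with buffer days so
    that small delays don't cascade through the whole plan.

    phases: list of (phase_name, estimated_days) tuples.
    Returns a list of (phase_name, start_date, end_date).
    """
    schedule = []
    current = start
    for name, days in phases:
        end = current + timedelta(days=days + buffer_days)
        schedule.append((name, current, end))
        current = end
    return schedule

plan = build_timeline(date(2024, 1, 1),
                      [("recruit participants", 5),
                       ("fieldwork", 10),
                       ("analysis", 7)])
```

A Gantt chart or similar visual built from this structure makes the milestones easy to share with your research team and stakeholders.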

Determine how to present your results

A research plan must also describe how you intend to present your results. Depending on the nature of your project and its goals, you might dedicate one team member (the PI) or assume responsibility for communicating the findings yourself.

In this part of the research plan, you’ll articulate how you’ll share the results. Detail any materials you’ll use, such as:

Presentations and slides

A project report booklet

A project findings pamphlet

Documents with key takeaways and statistics

Graphic visuals to support your findings

  • Format your research plan

As you create your research plan, you can enjoy a little creative freedom. A plan can assume many forms, so format it how you see fit. Determine the best layout based on your specific project, intended communications, and the preferences of your teams and stakeholders.

Find format inspiration among the following layouts:

Written outlines

Narrative storytelling

Visual mapping

Graphic timelines

Remember, the research plan format you choose will be subject to change and adaptation as your research and findings unfold. However, your final format should ideally outline questions, problems, opportunities, and expectations.

  • Research plan example

Imagine you’ve been tasked with finding out how to get more customers to order takeout from an online food delivery platform. The goal is to improve satisfaction and retain existing customers. You set out to discover why more people aren’t ordering and what it is they do want to order or experience. 

You identify the need for a research project that helps you understand what drives customer loyalty. But before you jump in and start calling past customers, you need to develop a research plan: the roadmap that provides focus, clarity, and realistic details to the project.

Here’s an example outline of a research plan you might put together:

Project title

Project members involved in the research plan

Purpose of the project (provide a summary of the research plan’s intent)

Objective 1 (provide a short description for each objective)

Objective 2

Objective 3

Proposed timeline

Audience (detail the group you want to research, such as customers or non-customers)

Budget (how much you think it might cost to do the research)

Risk factors/contingencies (any potential risk factors that may impact the project’s success)

Remember, your research plan doesn’t have to reinvent the wheel—it just needs to fit your project’s unique needs and aims.
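One lightweight way to keep such an outline consistent across projects is to encode the required sections and check drafts against them. This sketch uses illustrative field names (they are not prescribed by any standard) to flag sections a draft still lacks:

```python
REQUIRED_SECTIONS = [
    "title", "members", "purpose", "objectives",
    "timeline", "audience", "budget", "risks",
]

def missing_sections(plan):
    """Return the required sections a draft research plan still lacks."""
    return [s for s in REQUIRED_SECTIONS if not plan.get(s)]

draft = {
    "title": "Takeout ordering study",
    "purpose": "Understand what drives customer loyalty",
    "objectives": ["Identify ordering barriers"],
}
# missing_sections(draft) -> ["members", "timeline", "audience", "budget", "risks"]
```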

Customizing a research plan template

Some companies offer research plan templates to help get you started. However, it may make more sense to develop your own customized plan template. Be sure to include the core elements of a great research plan with your template layout, including the following:

Introductions to participants and stakeholders

Background problems and needs statement

Significance, ethics, and purpose

Research methods, questions, and designs

Preliminary beliefs and expectations

Implications and intended outcomes

Realistic timelines for each phase

Conclusion and presentations

How many pages should a research plan be?

Generally, a research plan can vary in length between 500 to 1,500 words. This is roughly three pages of content. More substantial projects will be 2,000 to 3,500 words, taking up four to seven pages of planning documents.

What is the difference between a research plan and a research proposal?

A research plan is a roadmap to success for research teams. A research proposal, on the other hand, is a persuasive document aimed at convincing others or earning their support. Both serve as guides to follow in completing a project goal.

What are the seven steps to developing a research plan?

While each research project is different, it’s best to follow these seven general steps to create your research plan:

Defining the problem

Identifying goals

Choosing research methods

Recruiting participants

Preparing the brief or summary

Establishing task timelines

Defining how you will present the findings




Planning a Survey

The success of a survey starts with intense, detailed, and comprehensive planning. Before you conduct a survey, begin by brainstorming the purpose of the survey, its goals and objectives, the creation of questions, and the other important details involved in using the survey method.


Defining Goals

Survey goals encompass the very purpose of conducting a survey. With these goals in place, you will be able to create the right questions for the right participants. Survey goals will direct you to the type of survey you should use and how it should be administered. They also provide hints on the appropriate sample size, as well as the inclusion and exclusion criteria, answering the question: “To whom should I administer the survey?”


Participant Selection

Based on the survey goals, or the purpose of conducting the survey, choose participants who can effectively represent the general population. In this step of the planning phase, you should determine the inclusion and exclusion criteria so that only the right people are included in the target group. For instance, if you want to survey teen mothers, you should exclude women who first bore children at the age of 20 or above.
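The inclusion/exclusion step can be expressed as a simple filter. In this sketch the candidate records and criteria are invented for illustration; only the teen mothers pass:

```python
def select_participants(candidates, inclusion, exclusion):
    """Keep candidates who satisfy the inclusion criteria and
    do not trigger the exclusion criteria."""
    return [c for c in candidates
            if inclusion(c) and not exclusion(c)]

candidates = [
    {"name": "A", "age_at_first_birth": 17},
    {"name": "B", "age_at_first_birth": 24},
    {"name": "C", "age_at_first_birth": 19},
]
teen_mothers = select_participants(
    candidates,
    inclusion=lambda c: c["age_at_first_birth"] is not None,
    exclusion=lambda c: c["age_at_first_birth"] >= 20,
)
# teen_mothers keeps A and C but drops B
```

Writing the criteria down as explicit predicates like this also documents them for anyone reviewing the study design.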

Schedule Setting

Conduct the survey in a time-bounded fashion by planning out a schedule. First, set a date for the creation of questions. Then, set a time frame for the standardization and/or revision of the survey. After this, mark your calendar for the period of administering the surveys to the participants. Finally, schedule the date for tallying, summarizing, and analyzing the results of the survey.

Budget Planning

When planning a survey successfully, budget allocation should be settled. When preparing for the budget, consider first the number of people that will participate in the survey. This will give you a good estimate of how much money is needed for the reproduction of the survey.

In terms of the questions, using a standardized survey for the study may or may not require money. This depends on whether the creator of the survey allows the free use of the questionnaire or obliges payment for it. On the other hand, creating your own survey and having it standardized or verified may require payments.

Another thing that needs to be considered in planning the budget is the time period of conducting the survey. If you are surveying a very large target group, you must set aside a budget to compensate the people who will help you administer the survey.

Sarah Mae Sincero (May 26, 2012). Planning a Survey. Retrieved Sep 03, 2024 from Explorable.com: https://explorable.com/planning-a-survey




What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from: face-to-face interviews, telephone surveys, focus groups (though these are closer to interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard; the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides are that phone surveys can take a long time to complete, depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest; instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys, even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research , but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they're cost-effective and enable researchers to survey a large population quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device, they could exclude elderly members of the population who don't have access to the technology, or don't know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be the workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population, giving you balance across criteria such as age, gender, and background.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g., discounts, coupons, or money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer, and analyze. As long as the planning and survey design are thorough and you target the right audience, data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.
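As an illustration, one widely used reliability statistic for multi-item scales is Cronbach's alpha. The sketch below is a generic textbook implementation, not any particular platform's, and the response data is invented:

```python
# Cronbach's alpha: a standard reliability statistic for multi-item scales.
# Rows are respondents; columns are items intended to measure the same concept.

def cronbachs_alpha(responses):
    k = len(responses[0])  # number of items

    def variance(values):  # population variance
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    totals = [sum(row) for row in responses]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Five respondents answering three 1-5 Likert items:
data = [[4, 4, 5], [3, 3, 3], [5, 4, 5], [2, 2, 1], [4, 5, 4]]
print(round(cronbachs_alpha(data), 2))  # → 0.93
```

Values above roughly 0.7 are conventionally treated as acceptable internal consistency, so this (tiny, made-up) scale would pass.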

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results in one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale , you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.
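In practice, analyzing nominal data amounts to counting selections. A minimal sketch (the brand names are made up):

```python
from collections import Counter

# Nominal data: categories with no inherent order, so the only meaningful
# summaries are frequencies and the most common choice.
responses = ["Brand A", "Brand C", "Brand A", "Brand B", "Brand A", "Brand C"]

counts = Counter(responses)
top_brand, n_picks = counts.most_common(1)[0]
print(counts)              # frequency of each brand
print(top_brand, n_picks)  # → Brand A 3
```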

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value, because you're asking respondents to choose one option over another.

Ratio scale

Ratio scales can be used to judge both the order of and the difference between responses, and they have a true zero point. An example would be asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful difference between them, but there is no true zero point. Examples include temperature in degrees Celsius or a credit score measured between one value and another.
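To see why the distinction matters, here is a sketch of which summary statistics are appropriate at each level of measurement (all of the numbers are invented):

```python
from statistics import median, mode

nominal  = ["red", "blue", "red", "green"]  # categories: mode only
ordinal  = [1, 2, 2, 3, 5]                  # preference ranks: median is safe
interval = [640, 700, 720, 780]             # credit scores: differences are meaningful
ratio    = [35.50, 42.00, 18.75, 60.25]     # weekly spend: true zero, ratios are meaningful

print(mode(nominal))                      # → red
print(median(ordinal))                    # → 2
print(max(interval) - min(interval))      # → 140 (a meaningful difference)
print(round(max(ratio) / min(ratio), 1))  # → 3.2 ("spends 3.2x as much" makes sense)
```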

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you've got your hypotheses or assumptions, write out the questions you need answered to test your theories or beliefs. Be wary of framing questions that could lead respondents or inadvertently create biased responses .

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data along with some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple choice, rank order, etc.) not only increases the reliability of your data but also reduces survey fatigue and keeps respondents from answering questions quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business, and data gathered from surveys can prove invaluable for understanding your products and your market position. With survey software from Qualtrics, it couldn't be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform, and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Ready to find out more about Qualtrics CoreXM?

Get started with our free survey maker tool today


Planning a Survey: 6 Step Guide + Best Practices


Planning a survey involves six steps: Set objectives, define the target audience, select the distribution method, organize external data, draft the survey, and then test. Following these steps will ensure you collect actionable feedback from your survey.

This guide will go over each of the six steps in detail. However, before covering those steps, it’s essential to consider some best practices when planning a survey.

Survey Best Practices

Keep Your Survey as Short as Possible

A BYU dissertation found that shorter surveys had almost a 2:1 odds ratio of being completed compared to longer surveys. The paper also notes that surveys under 1,000 words have a higher completion rate.

Only ask the questions you truly need. Don’t ask questions like “what store did you purchase from” or “what was the date of your order”. Instead, include that data automatically in the survey with a query string or custom data.
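For example, order details you already know can be appended to the survey link as query string parameters instead of being asked as questions. A sketch, with a hypothetical survey URL and parameter names:

```python
from urllib.parse import urlencode

# Pass known order data in the link rather than asking for it in the survey.
base_url = "https://example.com/s/customer-survey"  # hypothetical survey link
params = {"store": "downtown", "order": "10482", "order_date": "2024-03-01"}

survey_link = base_url + "?" + urlencode(params)
print(survey_link)
# → https://example.com/s/customer-survey?store=downtown&order=10482&order_date=2024-03-01
```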

Using question types like a Likert scale will also help reduce the length of each page. For example, you can combine a list of questions that ask "how satisfied are you" into one question with multiple rows.

Group Similar Questions Together

If your survey deals with multiple topics, group the questions for each topic on the same page. This will improve the readability of your survey and keep your respondents focused.

Along the same lines, try not to put each question on a separate page. You want to limit the number of pages your survey has to avoid fatigue. Putting each question on its own page also increases the number of clicks needed to complete the survey, which could adversely impact response rates.

Use Skip Logic

Skip logic will ensure that only relevant questions are shown to the user. For example, if you had one set of questions that related to males and another set of questions for females, then you can use skip logic to only show the relevant questions to each gender. Page one would ask for the gender and be used to build the skip logic rules.
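Conceptually, a skip-logic rule is just a mapping from an answer to the next page shown. A toy sketch (the page numbers and gender example mirror the paragraph above, not any platform's actual API):

```python
# Skip logic: the page-1 answer determines which page a respondent sees next.
def next_page(gender):
    rules = {"male": 2, "female": 3}  # page 2 for males, page 3 for females
    return rules.get(gender.strip().lower())

print(next_page("Male"))    # → 2
print(next_page("female"))  # → 3
```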

Use a Net Promoter Score Question

Always include a Net Promoter Score question if you're running any customer or employee satisfaction survey. A Net Promoter Score question asks survey respondents to rate the likelihood that they would recommend a company, product, or service to a friend or colleague. When the results are tallied, a single number ranging from -100 to 100 is produced. This number makes it easy to set goals for improving your score.
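The tallying works as follows: respondents scoring 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with invented ratings:

```python
# Net Promoter Score from 0-10 ratings:
# % promoters (9-10) minus % detractors (0-6); 7-8 are passives and don't count.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 6, 3, 10]))  # → 14 (3 promoters, 2 detractors, 7 responses)
```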

The follow-up to a Net Promoter Score question is usually an open-ended textbox question. This can be used to help determine why scores are high or low. On SurveyKing and other platforms, open-ended text answers can be automatically tagged using natural language processing . For example, many answers in an employee survey might mention "paid time off." The reporting section will then list a count of all "paid time off" related responses, and you'd be able to quantify how big of an issue this is.

The Net Promoter Score is an important concept when planning a survey. When used appropriately, it keeps your survey short and provides excellent data points.

Understand and Use Research Questions

One of the most significant inefficiencies in a poorly planned survey is the use of inefficient question types. For example, people will frequently use ranking or rating questions to try to distinguish what is most important for a product or service. Similarly, when researching pricing, people will use an input box simply asking for a preferred price. Unfortunately, these question types are ineffective and can add unnecessary clutter to the design.

When you want to understand what is MOST and LEAST important to your audience, use a MaxDiff question. For pricing research, use a Gabor Granger or Van Westendorp question.

6 Steps to Planning a Survey

While every survey project is different, following these six steps when planning a survey will ensure that you efficiently collect actionable data.

1. Set the Objectives

Objectives should be clear and concise. These objectives will drive the survey's design and what questions are asked. Specific goals also make the survey design easier, since any question not directly related to the objectives should be removed.

For example, if doing a product research survey, don’t ask about customer satisfaction. Instead, stick to only asking questions about the product.

2. Define the Target Audience

Depending on your objectives, you might only want opinions from a subset of your customers. You can use a sample size calculator to see how many responses you need to make your results statistically significant.
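Such calculators typically use the standard formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2, adjusted for the population size. A sketch:

```python
import math

# Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2,
# then adjusted downward with the finite population correction.
def sample_size(population, z=1.96, margin=0.05, p=0.5):
    n = (z ** 2) * p * (1 - p) / margin ** 2          # infinite-population size
    return math.ceil(n / (1 + (n - 1) / population))  # finite population correction

print(sample_size(10_000))  # → 370 responses for 95% confidence, ±5% margin
```

The defaults (z = 1.96 for 95% confidence, p = 0.5 as the most conservative guess) match what most online calculators assume.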

If it looks like your target audience will be too small to provide meaningful data, you can use a survey panel to buy targeted responses. A good survey panel will give you various demographic options to better target the needed audience.

3. Select the Distribution Method

Depending on your audience and your specific objectives, you might need different distribution methods. Make sure you plan exactly how you'll use each method.

Here are the distribution methods available on the SurveyKing platform; most survey platforms offer a similar set:

  • Web Link – The link can be placed in emails, web pages, or social media. Custom data can be added to the link with a query string. This data can include customer numbers, email addresses, or order numbers to track results internally.
  • Anonymous Link – Similar to a web link, but only one anonymous link is available per survey. This includes a seal at the top where respondents can learn how identities are protected. Anonymous links are commonly used in employee surveys.
  • Email Campaigns – Emails sent directly from the SurveyKing platform. The user uploads a list of emails, and then the platform sends them. The benefits here are being able to track click and open rates. In addition, each respondent gets a unique link, which helps prevent duplicate responses.
  • QR Code Surveys – Respondents can scan the QR code to take the survey. This is perfect for including product packaging or receipts to capture customer feedback quickly. Union surveys also utilize QR codes, which can be placed on flyers in break rooms.
  • Targeted Responses – If your sample size is too small, you can buy responses from a panel. Generally, responses start at $2 USD per completed response, with the cost increasing the more specific your audience needs to be. When planning your survey, include a budget for any panel costs.

For collecting customer feedback, email surveys are usually a great option. Emails can be triggered after a customer makes a purchase. Your survey plan should specify when to send the survey; it's a good idea to wait until the customer has used the product, so sending around 24 hours after delivery usually works well.

4. Organize External Data

This step is critical to making your survey design as effective as possible. For example, if running a customer survey, you might only want to ask customers who have purchased in the last six months. Or, if you're conducting product research, you might need to pull a list of all related products sold to create a comprehensive list of features to use in a MaxDiff study.

If your organization uses tools like SAP , you can create reports to extract a customer list. Or you may need to write a SQL query to get the list. Regardless of how you collect the data, having all external data in front of you before drafting the survey is essential in making efficient survey questions.
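As a hedged sketch of that kind of pull, here is the six-month customer query run against a throwaway in-memory SQLite table. The table, columns, and emails are hypothetical; in practice you'd run the equivalent query or report against your own system:

```python
import sqlite3
from datetime import date, timedelta

# Build a toy orders table, then select customers who bought in the last ~6 months.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (email TEXT, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("ann@example.com", "2024-05-01"),  # recent purchase
    ("bob@example.com", "2023-01-15"),  # too old to survey
])

cutoff = (date(2024, 6, 1) - timedelta(days=182)).isoformat()  # "today" minus ~6 months
recent = conn.execute(
    "SELECT DISTINCT email FROM orders WHERE order_date >= ?", (cutoff,)
).fetchall()
print(recent)  # → [('ann@example.com',)]
```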

5. Draft the Survey

The first step in drafting a survey is collaborating with other team members on the base set of questions. A shared Google Doc is a good option: changes can easily be made without constantly editing inside a survey platform.

Using a Google Doc also allows you to copy and paste those questions into SurveyKing's import module. Once imported, you can make small adjustments to the questions and add things like skip logic.

6. Test the Survey

Testing is crucial to ensuring your survey project is a success. Testing should involve multiple people on your team. Here is a quick list of things you should test for:

  • Grammar Errors – Read over all the questions and instructions carefully. Grammar errors can  negatively impact brand reputation .
  • Question Order – Do questions follow a logical order? If not, edit the order.
  • Skip Logic – Test all scenarios if you have any skip logic rules in place. If using rules that flow to multiple pages, be sure that the survey ends early for the different branches. For example, if questions for males are on page 2 and questions for females are on page 3, then the survey should end after page 2 for males.
  • If using query strings, submit a few test responses to ensure the query strings are pulling in the correct variables and data. On the SurveyKing platform, query strings should always start with the & sign, and the variable “id” is restricted.
  • Submit a few test responses to ensure the results are what you want. Test responses can also help new team members understand research questions like MaxDiff.

ABOUT THE AUTHOR

Allen is the founder of SurveyKing. A former CPA and government auditor, he understands how important quality data is in decision making. He continues to help SurveyKing accomplish their main goal: providing organizations around the world with low-cost high-quality feedback tools.

Ready To Start?

Create your own survey now. Get started for free and collect actionable data.



What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes . Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your   research question  using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Introduction
  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Other interesting articles
  • Frequently asked questions about research design

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

  • Qualitative approach: collect and analyze words and meanings to understand concepts and experiences
  • Quantitative approach: collect numerical data to measure variables and describe frequencies, averages, and correlations about relationships between variables

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
  • Experimental: test cause-and-effect relationships by manipulating an independent variable and measuring its effect on a dependent variable
  • Quasi-experimental: test cause-and-effect relationships, but without full control over the random assignment of subjects to groups
  • Correlational: measure the relationship between variables without manipulating them
  • Descriptive: describe the characteristics of a population or phenomenon

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

  • Grounded theory: develop a theory inductively by collecting and analyzing qualitative data
  • Phenomenology: understand an experience or phenomenon from the perspective of the people who lived it

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

  • Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

  • Probability sampling: Every member of the population has a chance of being selected, using random selection methods. Results can be statistically generalized to the population.
  • Non-probability sampling: Individuals are selected based on non-random criteria, such as convenience or voluntary self-selection, so generalization to the population is more limited.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
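A simple random sample, the most basic probability method, can be sketched in a few lines of Python (the sampling frame and sample size here are hypothetical):

```python
import random

# Hypothetical sampling frame: identifiers for everyone in the defined population.
population = [f"student_{i:04d}" for i in range(2000)]

random.seed(42)  # fixed seed so the example is reproducible

# Simple random sampling: every individual has an equal chance of selection,
# drawn without replacement.
sample = random.sample(population, k=100)

print(len(sample))  # 100 respondents drawn from the frame
```

With a complete sampling frame like this, every individual's chance of selection is known, which is what makes statistical generalization possible.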

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

  • Questionnaires: Respondents answer a written list of questions themselves, on paper or online; efficient for collecting data from large samples.
  • Interviews: A researcher asks questions orally, in person or remotely; allows follow-up questions and more in-depth answers.

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

  • Qualitative observation: Recording detailed, descriptive field notes about behaviors, interactions, and contexts.
  • Quantitative observation: Counting or measuring the frequency or duration of predefined behaviors or events.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.


As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.
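As a minimal illustration (the scale labels and items are hypothetical), operationalizing an abstract concept like satisfaction might mean mapping Likert-scale answers to numbers and combining them into one indicator:

```python
# Hypothetical operationalization: "satisfaction" measured as the mean of
# several 5-point Likert items (1 = strongly disagree ... 5 = strongly agree).
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def satisfaction_score(responses):
    """Turn one respondent's item answers into a single 1-5 indicator."""
    scores = [LIKERT[answer] for answer in responses]
    return sum(scores) / len(scores)

print(satisfaction_score(["agree", "strongly agree", "neutral"]))  # 4.0
```

The choice of items and of the aggregation rule (mean, sum, weighted score) is exactly what operationalization pins down before data collection starts.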

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?
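For the sample-size question, a common rule-of-thumb calculation estimates the number of respondents needed to measure a population proportion at a given confidence level and margin of error (the defaults below, 95% confidence and a 5% margin, are illustrative assumptions):

```python
import math

def sample_size_for_proportion(confidence_z=1.96, margin=0.05, p=0.5):
    """Minimum n to estimate a population proportion within +/- margin.

    p = 0.5 is the most conservative assumption (it maximizes n);
    confidence_z = 1.96 corresponds to 95% confidence.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

print(sample_size_for_proportion())             # 385
print(sample_size_for_proportion(margin=0.03))  # 1068
```

Note how quickly the required sample grows as the margin of error narrows; this is often the binding constraint in recruitment planning.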

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).
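One way to anonymize direct identifiers while keeping records linkable across files is pseudonymization with a salted hash. A minimal sketch (the salt value and identifier are hypothetical, and the salt would be stored separately from the data):

```python
import hashlib

SALT = "project-secret-salt"  # hypothetical; keep this separate from the dataset

def pseudonymize(participant_id):
    """Replace a direct identifier with a stable, non-reversible code."""
    digest = hashlib.sha256((SALT + participant_id).encode()).hexdigest()
    return digest[:12]

# The same input always maps to the same code, so records stay linkable
# across files without exposing the real identifier.
print(pseudonymize("jane.doe@example.com"))
```

Because the mapping is deterministic, interview transcripts, survey responses, and consent logs can all reference the same code.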

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
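The three kinds of descriptive summary above can be computed with Python's standard library; a minimal sketch using hypothetical test scores:

```python
from collections import Counter
from statistics import mean, stdev

scores = [72, 85, 85, 90, 61, 78, 85, 94, 70, 80]  # hypothetical test scores

# Distribution: frequency of each score
print(Counter(scores))

# Central tendency: the mean (average) score
print(mean(scores))

# Variability: the sample standard deviation
print(round(stdev(scores), 1))
```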

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
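As a hedged sketch of a comparison test, the snippet below computes Welch's t statistic (the t-test variant that does not assume equal variances) for two hypothetical groups; a statistics package would normally also convert this statistic into a p-value:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical outcome scores for two groups in an experiment
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
group_b = [4.2, 4.6, 4.1, 4.4, 4.5, 4.0]

def welch_t(a, b):
    """Welch's t statistic: difference in means scaled by its standard error."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# A large |t| suggests the group means genuinely differ.
print(round(welch_t(group_a, group_b), 2))
```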

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/research-design/


Understanding survey validity and reliability: Key concepts and applications


In the realm of research and decision-making, surveys are indispensable tools. Whether gauging customer satisfaction, validating a new product, or conducting market research, the accuracy and consistency of survey results are paramount. This is where the concepts of survey validity and reliability come into play. Ensuring a survey is both valid and reliable is crucial for obtaining meaningful insights that can guide actions. In this blog post, we’ll explore what survey validity and reliability mean, why they are essential, and how to ensure your surveys meet these criteria.

What is survey validity?

Survey validity refers to the degree to which a survey measures what it is intended to measure. It’s about the accuracy and truthfulness of the results. A valid survey accurately reflects the reality it aims to capture, providing trustworthy data that can successfully inform decisions. Without validity, a study’s results can be misleading, which results in incorrect conclusions and potentially costly mistakes.

Survey validity is crucial because it ensures the accuracy of the data collected. When valid, survey results can be trusted to reflect the actual opinions or behaviors of respondents. Here are the key types of validity:

  • Content validity: Whether a survey covers the full range of the concept it aims to measure. For example, a market validation survey designed to gauge customer interest in a new product must include questions that cover all aspects of the product’s features, benefits, and potential drawbacks.
  • Concurrent validity: The extent to which results correlate with other measures taken at the same time. For instance, a customer satisfaction survey should yield results aligning with sales data or customer retention rates.
  • Predictive validity: The extent to which results predict future outcomes. For example, a product validation survey should be able to predict future sales performance based on current customer feedback.
  • Construct validity: Whether a survey truly measures the theoretical construct it is intended to capture. For example, if surveying customer loyalty, questions should accurately reflect the components of loyalty, such as repeat purchases, brand advocacy, and emotional attachment.

To illustrate how these types of validity apply in practice, let’s look at market and customer validation surveys. A market validation survey might include questions about potential customers’ interest in a new product, willingness to pay for it, and preferences compared to existing products. On the other hand, a customer validation survey might focus on existing customers’ experiences with a product or service to identify strengths and areas for improvement.

What is survey reliability?

Survey reliability refers to the consistency of results over time. A reliable survey will yield the same results under consistent conditions, indicating that the data is dependable. Without reliability, even a valid survey can produce erratic results that undermine confidence in the findings.

Reliability is fundamental because it ensures that results are repeatable and consistent. Reliable data allows researchers to be confident that findings are stable and not influenced by external factors. Here are the primary types of reliability:

  • Test-retest reliability : This measures the consistency of survey results over time. Researchers can assess whether results are stable and consistent by administering the same survey to the same group of people at different times.
  • Internal consistency reliability : This assesses whether the items in a survey meant to measure the same concept produce similar results. Cronbach’s alpha is often used to evaluate internal consistency by analyzing the correlation between different survey items.
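Test-retest reliability is typically quantified by correlating the two administrations. The sketch below implements Pearson's r directly from its definition, using hypothetical scores from the same respondents at two points in time:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

time_1 = [12, 15, 11, 18, 14, 16]  # hypothetical scores, first administration
time_2 = [13, 14, 11, 17, 15, 16]  # same respondents, a few weeks later

# A value close to 1 indicates good test-retest reliability.
print(round(pearson_r(time_1, time_2), 2))
```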

Reliability is critical in online surveys, where respondents’ interpretation of questions may vary widely due to different contexts or distractions. Ensuring high reliability in online surveys helps obtain consistent and credible data. For product validation surveys, reliability ensures consistent feedback on product features and usability, allowing for better decision-making.

Assessing survey validity and reliability

To ensure that a questionnaire is valid and reliable, it’s essential to use appropriate assessment methods. This involves evaluating the survey’s design, its questions, and the data collected to ensure it meets necessary standards.

Find out the best practices and proven strategies for survey design with this blog.

Assessing validity involves several techniques to ensure that the survey accurately measures the intended concept:

  • Face validity : This is a preliminary check to see if the survey appears to measure what it is supposed to measure. Although subjective, it’s an essential first step in validating a survey.
  • Concurrent validity : As mentioned earlier, this involves comparing the survey results with other relevant measures taken simultaneously to ensure they align.
  • Predictive validity : This consists of evaluating whether the survey can accurately predict future outcomes based on current responses.

Evaluating reliability requires methods that ensure the survey results are consistent:

  • Split-half method : This involves dividing the survey into two halves and comparing the results. If the results are similar, the survey has high internal consistency.
  • Cronbach’s alpha : This statistical measure evaluates the correlation between different items on the survey. A higher alpha indicates greater internal consistency and, therefore, higher reliability.
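Cronbach’s alpha can be computed directly from its definition: alpha = k/(k-1) * (1 - (sum of item variances) / (variance of respondents’ total scores)), where k is the number of items. A minimal sketch with hypothetical item scores:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per survey item (same respondents, same order)."""
    k = len(items)
    per_respondent_totals = [sum(scores) for scores in zip(*items)]
    sum_item_variances = sum(variance(col) for col in items)
    total_variance = variance(per_respondent_totals)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Three hypothetical 5-point items intended to measure the same concept:
item_scores = [
    [4, 5, 2, 3, 4, 5],  # item 1, one score per respondent
    [4, 4, 1, 3, 5, 5],  # item 2
    [3, 5, 2, 2, 4, 4],  # item 3
]
# Values closer to 1 indicate higher internal consistency.
print(round(cronbach_alpha(item_scores), 2))
```

Identical items would yield an alpha of exactly 1; items that covary strongly, as in this toy data, yield a high but imperfect alpha.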

Imagine you’re conducting a market validation survey for a new tech gadget.

  • To ensure content validity, questions about various features, potential use cases, and price points are included.
  • To assess criterion-related validity, responses are compared with existing market data on similar products.
  • For construct validity, measures are taken to ensure questions accurately reflect customer interest and purchase intentions.
  • To evaluate reliability, administer the survey to a sample group, then re-administer it after a few weeks to check for test-retest reliability.
  • Finally, Cronbach’s alpha is used to assess internal consistency, ensuring that questions about different features produce consistent responses.

Online surveys present unique challenges, such as varying respondent interpretations, distractions, and technical issues. To mitigate these challenges and ensure validity and reliability, consider the following strategies:

  • Clear and concise questions : Ensure that survey questions are straightforward and easy to understand. Avoid ambiguous language that could be interpreted differently. Learn how to write a good survey question with this blog .
  • Pilot testing : Conduct a pilot test with a small, representative sample to identify issues with question clarity or survey structure.
  • Consistent survey environment : Ensure that respondents complete the survey under similar conditions. This could involve specifying a time limit or providing instructions to minimize distractions.
  • Randomization : Question order should be randomized to reduce the impact of question order bias, which is when the sequence of questions influences responses. Learn how to reduce the impact of question order bias by reading our blog post about biased surveys .
  • Follow-up surveys : Use follow-up surveys to assess test-retest reliability, ensuring consistent results over time.
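Randomizing question order is straightforward to implement. A minimal sketch (the questions are hypothetical), giving each respondent an independently shuffled order:

```python
import random

questions = [
    "How often do you use the product?",
    "How satisfied are you with the price?",
    "Would you recommend it to a friend?",
]

def question_order(respondent_seed):
    """Return an independently shuffled copy of the question list.

    Seeding per respondent keeps each respondent's order stable across
    sessions while varying the order across the sample.
    """
    order = questions.copy()
    random.Random(respondent_seed).shuffle(order)
    return order

print(question_order(1))
print(question_order(2))
```

Because order effects are spread randomly across respondents, any single question's position no longer systematically biases the aggregate results.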

Suppose a company launches a new software product. They conduct an online product validation survey.

  • To ensure content validity, comprehensive questions about functionality, user experience, and pricing are included.
  • To ensure reliability, they randomize question order and conduct a pilot test.
  • Cronbach’s alpha is used to assess internal consistency to ensure consistent responses about different features.

Survey validity and reliability are foundational to conducting effective research and making informed decisions. Validity ensures that a survey measures what it is intended to measure, while reliability ensures that results are consistent and dependable. Understanding and applying these concepts means creating surveys that provide accurate and trustworthy data that will guide the correct actions and decisions.

Now that you have a solid understanding of survey validity and reliability, it’s time to put these principles into practice. We invite you to try our survey tool to design surveys that deliver accurate and dependable insights. Our platform is designed to help you create highly valid and reliable surveys, offering features like customizable question formats, survey result filtering, a survey length estimator , and more.

Don’t leave the success of research to chance—experience the difference a well-designed survey can make. Sign up today for a free trial and see how our tools can help you achieve more reliable results and confidently make informed decisions.

Photo by Mario Heller on Unsplash


Not just a simple survey: A case study of pitfalls in interdisciplinary, multiorganizational, multinational research for development


  • From The Alliance of Bioversity International and the International Center for Tropical Agriculture (CIAT)
  • Published on 31.08.24
  • Challenges Poverty reduction, livelihoods & jobs


Surveys are common research methods in agricultural research for development. Despite the multitude of well-documented warnings for pitfalls in survey research, it appears challenging to avoid these in practice. We use a case study in agricultural research for development to illustrate what complicates survey research within an interdisciplinary, multiorganizational and multinational team. The survey research process, rather than the survey outcome, is our object of study. Using the Methodology of Interdisciplinary Research (MIR) framework we identify different steps within survey research. We overlay a technographic lens to understand “the making of a survey”. We thereby focus on the transformations made from beginning to end in survey research, the different task-groups involved, and on the norms and rules that guide this process. This illustrates the practice of survey research in diverse and multiform research teams, and shows which vital methodological steps are often ignored, skipped, or overruled. Our findings reveal that the intrinsic complexity of effective survey research is disproportionally exacerbated by the complexity of a diverse and multiform research team. We recommend allocating more resources and attention to capacity strengthening, harmonization, operationalization, capitalizing on different strengths, integration and communication.

Kilwinger, F.B.M.; Caron, C.M.; Rietveld, A.M.; van Dam, Y.K. 


Developing Surveys on Questionable Research Practices: Four Challenging Design Problems

  • Open access
  • Published: 02 September 2024



  • Christian Berggren   ORCID: orcid.org/0000-0002-4233-5138 1 ,
  • Bengt Gerdin   ORCID: orcid.org/0000-0001-8360-5387 2 &
  • Solmaz Filiz Karabag   ORCID: orcid.org/0000-0002-3863-1073 1 , 3  


The exposure of scientific scandals and the increase of dubious research practices have generated a stream of studies on Questionable Research Practices (QRPs), such as failure to acknowledge co-authors, selective presentation of findings, or removal of data not supporting desired outcomes. In contrast to high-profile fraud cases, QRPs can be investigated using quantitative, survey-based methods. However, several design issues remain to be solved. This paper starts with a review of four problems in the QRP research: the problem of precision and prevalence, the problem of social desirability bias, the problem of incomplete coverage, and the problem of controversiality, sensitivity and missing responses. Various ways to handle these problems are discussed based on a case study of the design of a large, cross-field QRP survey in the social and medical sciences in Sweden. The paper describes the key steps in the design process, including technical and cognitive testing and repeated test versions to arrive at reliable survey items on the prevalence of QRPs and hypothesized associated factors in the organizational and normative environments. Partial solutions to the four problems are assessed, unresolved issues are discussed, and tradeoffs that resist simple solutions are articulated. The paper ends with a call for systematic comparisons of survey designs and item quality to build a much-needed cumulative knowledge trajectory in the field of integrity studies.


Introduction

The public revelations of research fraud and non-replicable findings (Berggren & Karabag, 2019 ; Levelt et al., 2012 ; Nosek et al., 2022 ) have created a lively interest in studying research integrity. Most studies in this field tend to focus on questionable research practices, QRPs, rather than blatant fraud, which is less common and hard to study with rigorous methods (Butler et al., 2017 ). Despite the significant contributions of this research about the incidence of QRPs in various countries and contexts, several issues still need to be addressed regarding the challenges of designing precise and valid survey instruments and achieving satisfactory response rates in this sensitive area. While studies in management (Hinkin, 1998 ; Lietz, 2010 ), behavioral sciences, psychology (Breakwell et al., 2020 ), sociology (Brenner, 2020 ), and education (Hill et al., 2022 ) have provided guidelines to design surveys, they rarely discuss how to develop, test, and use surveys targeting sensitive and controversial issues such as organizational or individual corruption (Lin & Yu, 2020 ), fraud (Lawlor et al., 2021 ), and misconduct. The aim of this study is to contribute to a systematic discussion of challenges facing survey designers in these areas and, by way of a detailed case study, highlight alternative ways to increase participation and reliability of surveys focusing on questionable research practices, scientific norms, and organizational climate.

The following section starts with a literature-based review of four important problems:

  • the lack of conceptual consensus and precise measurements,
  • the problem of social desirability bias,
  • the difficulty of covering both quantitative and qualitative research fields, and
  • the problem of controversiality and sensitivity.

Section 3 presents an in-depth case study of developing and implementing a survey on QRPs in the social and medical sciences in Sweden 2018–2021, designed to target these problems. Its first results were presented in this journal (Karabag et al., 2024 ). The section also describes the development process and the survey content and highlights the general design challenges. Section 4 returns to the four problems by discussing partial solutions, difficult tradeoffs, and remaining issues.

Four Design Problems in the Study of Questionable Research Practices

Extant QRP studies have generated an impressive body of knowledge regarding the occurrence and complexities of questionable practices, their increasing trend in several academic fields, and the difficulty of mitigating them with conventional interventions such as ethics courses and espousal of integrity policies (Gopalakrishna et al., 2022 ; Karabag et al., 2024 ; Necker, 2014 ). However, investigations on the prevalence of QRPs have so far lacked systematic problem analysis. Below, four main problems are discussed.

The Problem of Conceptual Clarity and Measurement Precision

Studies of QRP prevalence in the literature exhibit high levels of questionable behaviors but also considerable variation in their estimates. This is illustrated in the examples below:

“42% had collected more data after inspecting whether results were statistically significant… and 51% had reported an unexpected finding as though it had been hypothesized from the start (HARKing)” (Fraser et al., 2018, p. 1); “51.3% of respondents engaging frequently in at least one QRP” (Gopalakrishna et al., 2022, p. 1); “…one third of the researchers stated that for the express purpose of supporting hypotheses with statistical significance they engaged in post hoc exclusion of data” (Banks et al., 2016, p. 10).

On a general level, QRPs constitute deviations from the responsible conduct of research, that are not severe enough to be defined as fraud and fabrication (Steneck, 2006 ). Within these borders, there is no conceptual consensus regarding specific forms of QRPs (Bruton et al., 2020 ; Xie et al., 2021 ). This has resulted in a considerable variation in prevalence estimates (Agnoli et al., 2017 ; Artino et al. Jr, 2019 ; Fiedler & Schwarz, 2016 ). Many studies emphasize the role of intentionality, implying a purpose to support a specific assertion with biased evidence (Banks et al., 2016 ). This tends to be backed by reports of malpractices in quantitative research, such as p-hacking or HARKing, where unexpected findings or results from an exploratory analysis are reported as having been predicted from the start (Andrade, 2021 ). Other QRP studies, however, build on another, often implicit conceptual definition and include practices that could instead be defined as sloppy or under-resourced research, e.g. insufficient attention to equipment, deficient supervision of junior co-workers, inadequate note-keeping of the research process, or use of inappropriate research designs (Gopalakrishna et al., 2022 ). Alternatively, those studies include behaviors such as “Fashion-determined choice of research topic”, “Instrumental and marketable approach”, and “Overselling methods, data or results” (Ravn & Sørensen, 2021 , p. 30; Vermeulen & Hartmann, 2015 ) which may be opportunistic or survivalist but not necessarily involve intentions to mislead.

To shed light on the prevalence of QRPs in different environments, the first step is to conceptualize and delimit the practices to be considered. The next step is to operationalize the conceptual approach into useful indicators and, if needed, to reformulate and reword the indicators into unambiguous, easily understood items (Hinkin, 1995, 1998). The importance of careful item design has been demonstrated by Fiedler and Schwarz (2016), who show how the perceived QRP prevalence changes when specifications are added to well-known QRP items. Such specifications include: “failing to report all dependent measures that are relevant for a finding”, “selectively reporting studies related to a specific finding that ‘worked’” (Fiedler & Schwarz, 2016, p. 46, italics in original), or “collecting more data after seeing whether results were significant in order to render non-significant results significant” (Fiedler & Schwarz, 2016, p. 49, italics in original). These specifications demonstrate the importance of precision in item design, the need for item tests before application in a large-scale survey, and, as the case study in Sect. 3 indicates, the value of statistically analyzing the selected items post-implementation.

The Problem of Social Desirability

Case studies of publicly exposed scientific misconduct have the advantage of explicitness and possible triangulation of sources (Berggren & Karabag, 2019 ; Huistra & Paul, 2022 ). Opinions may be contradictory, but researchers/investigators may often approach a variety of stakeholders and compare oral statements with documents and other sources (Berggren & Karabag, 2019 ). By contrast, quantitative studies of QRPs need to rely on non-public sources in the form of statements and appraisals of survey respondents for the dependent variables and for potentially associated factors such as publication pressure, job insecurity, or competitive climate.

Many QRP surveys use items that target the respondents’ personal attitudes and practices, both for the dependent variables indicating QRP prevalence and for the explanatory variables. This has the advantage that the respondents presumably know their own preferences and practices. A significant disadvantage, however, concerns social desirability, which in this context means the tendency of respondents to portray themselves, sometimes inadvertently, in more positive ways than their behavior justifies. The extent of this problem was indicated in a meta-study by Fanelli (2009), which demonstrated major differences between answers to sensitive survey questions targeting the respondents’ own behavior and questions focusing on the behavior of their colleagues. In the case study below, the pros and cons of such indirect, colleague-focused approaches are analyzed.

The Problem of Covering Both Quantitative and Qualitative Research

Studies of QRP prevalence are dominated by quantitative research approaches, where there is a common understanding of the meaning of facts, proper procedures, and scientific evidence. Several research fields, including in the social and medical sciences, also encompass qualitative approaches — case studies, interpretive inquiries, or discourse analysis — where assessments of ‘truth’ and ‘evidence’ may be different or more complex to evaluate.

This does not mean that all qualitative endeavors are equal or that deceit—such as presenting fabricated interview quotes or referring to non-existent protocols—is accepted. However, while there are defined criteria for reporting qualitative research, such as the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007) or the Standards for Reporting Qualitative Research (SRQR checklist) (O’Brien et al., 2014), the field of qualitative research encompasses a wide range of approaches. These include comparative case studies that offer detailed evidence to support their claims—such as the differences between British and Japanese factories (Dore, 1973/2011)—as well as discourse analyses and interpretive studies, where the concept of ‘evidence’ is more fluid and harder to apply, and where the generative richness of the analysis is a key component of quality (Flick, 2013). This intra-field variation makes it hard to pin down and agree upon general QRP items that capture such behaviors in qualitative research. Some researchers have tried to interpret and report qualitative research by means of quantified methods (Ravn & Sørensen, 2021), but so far, these attempts constitute a marginal phenomenon. Consequently, the challenges of measuring the prevalence of QRPs (or similar issues) in the variegated field of qualitative research remain largely unexplored.

The Problem of Institutional Controversiality and Personal Sensitivity

Science and academia depend on public trust for funding and executing research. This makes investigations of questionable behaviors a controversial issue for universities and may lead to institutional refusal or non-response. Such resistance was experienced by the designers of a large-scale survey of norms and practices in Dutch academia, when several universities decided not to take part, citing the potential danger of negative publicity (de Vrieze, 2021). A Flemish survey on academic careers encountered similar participation problems (Aubert Bonn & Pinxten, 2019). Another study, on universities’ willingness to solicit whistleblowers for participation, revealed that university officers, managers, and lawyers tend to feel obligated to protect their institution’s reputation (Byrn et al., 2016). Such institutional actors may resist participation to avoid the exposure of potentially negative information about their institutions and management practices, which might damage the university’s brand (Byrn et al., 2016; Downes, 2017).

QRP surveys also involve questions that are sensitive and potentially intrusive from the respondent’s personal perspective, which can lead to reluctance to participate and non-response behavior (Roberts & John, 2014; Tourangeau & Yan, 2007). Studies show that willingness to participate declines for surveys covering sensitive issues such as misconduct, crime, and corruption, compared to less sensitive ones like leisure activities (cf. Tourangeau et al., 2010). The method of survey administration—whether face-to-face, over the phone, via the web, or paper-based—can influence the perceived sensitivity and the response rate (Siewert & Udani, 2016; Szolnoki & Hoffmann, 2013). In the case study below, the survey did not require any institutional support. Instead, the designers focused on minimizing the individual sensitivity problem by avoiding questions about the respondents’ personal practices and concentrating on their colleagues’ behaviors (see Sect. 4.2). Even if respondents agree to participate, they may not answer the QRP items due to insufficient knowledge about their colleagues’ practices or a lack of motivation to answer critical questions about them (Beatty & Herrmann, 2002; Yan & Curtin, 2010). Additionally, a significant time gap between observing specific QRPs in the respondent’s research environment and receiving the survey may make it difficult to recall and accurately respond to the questions. Such issues may also result in non-response problems.

Addressing the Problems: Case Study of a Cross-Field QRP Survey – Design Process, Survey Content, Design Challenges

This section presents a case study of the way these four problems were addressed in a cross-field survey intended to capture QRP prevalence and associated factors across the social and medical sciences in Sweden. The account is based on the authors’ intensive involvement in the design and analysis of the survey, including the technical and cognitive testing and the post-implementation analysis of item quality, missing responses, and open respondent comments. The theoretical background and the substantive results of the study are presented in a separate paper (Karabag et al., 2024). Method and language experts at Statistics Sweden, a government agency responsible for public statistics in Sweden, supported the testing procedures and the stratified respondent sampling, and administered the survey roll-out.

The Survey Design Process – Repeated Testing and Prototyping

The design process included four steps of testing, revising, and prototyping, which allowed the researchers to iteratively improve the survey and plan the roll-out.

Step 1: Development of the Baseline Survey

This step involved searching the literature and creating a list of alternative constructs concerning the key concepts in the planned survey. Based on the study’s aim, the first and third authors compared these constructs and examined how they had been itemized in the literature. After two rounds of discussions, they agreed on construct formulations and relevant ways to measure them, rephrased items if deemed necessary, and designed new items in areas where the extant literature did not provide any guidance. In this way, Survey Version 1 was compiled.

Step 2: Pre-Testing by Means of a Large Convenience Sample

In the second step, this survey version was reviewed by two experts in organizational behavior at Linköping University. This review led to minor adjustments and the creation of Survey Version 2 , which was used for a major pretest. The aim was both to check the quality of individual items and to garner enough responses for a factor analysis that could be used to build a preliminary theoretical model. This dual aim required a larger sample than suggested in the literature on pretesting (Perneger et al., 2015 ). At the same time, it was essential to minimize the contamination of the planned target population in Sweden. To accomplish this, the authors used their access to a community of organization scholars to administer Survey Version 2 to 200 European management researchers.

This mass pre-testing yielded 163 responses. The data were used to form preliminary factor structures and test a structural equation model. Feedback from a few of the respondents highlighted conceptual issues and duplicated questions. Survey Version 3 was developed and prepared for detailed pretesting based on this feedback.

Step 3: Focused Pre-Testing and Technical Assessment

The participants in this step’s pretesting were ten researchers (six in the social sciences and four in the medical sciences) at five Swedish universities: Linköping, Uppsala, Gothenburg, Gävle, and Stockholm School of Economics. Five of those researchers mainly used qualitative research methods, two used both qualitative and quantitative methods, and three used quantitative methods. In addition, Statistics Sweden conducted a technical assessment of the survey items, focusing on wording, sequence, and response options. Based on feedback from the ten pretest participants and the Statistics Sweden assessment, Survey Version 4 was developed, translated into Swedish, and reviewed by two researchers with expertise in research ethics and scientific misconduct.

It should be highlighted that Swedish academia is predominantly bilingual. While most researchers have Swedish as their mother tongue, many are more proficient in English, and a minority have limited or no knowledge of Swedish. During the design process, the two language versions were compared item by item and slightly adjusted by skilled bilingual researchers. This task was relatively straightforward since most items and concepts were derived from previously published literature in English. Notably, the Swedish versions of key terms and concepts have long been used within Swedish academia (see for example Berggren, 2016; Hasselberg, 2012). To secure translation quality, the language was checked by a language expert at Statistics Sweden.

Step 4: Cognitive Interviews by Survey and Measurement Experts

Next, cognitive interviews (Willis, 2004 ) were organized with eight researchers from the social and medical sciences and conducted by an expert from Statistics Sweden (Wallenborg Likidis, 2019 ). The participants included four women and four men, ranging in age from 30 to 60. They were two doctoral students, two lecturers, and four professors, representing five different universities and colleges. Additionally, two participants had a non-Nordic background. To ensure confidentiality, no connections are provided between these characteristics and the individual participants.

An effort was made to achieve a balanced distribution across gender, age, subject, employment, and institution. Four social science researchers primarily used qualitative research methods, while the remaining four employed both qualitative and quantitative methods. Additionally, four respondents completed the Swedish version of the survey, and four completed the English version.

The respondents completed the survey in the presence of a methods expert from Statistics Sweden, who observed their entire response process. The expert noted spontaneous reactions and recorded instances where respondents hesitated or struggled to understand an item. After the survey, the expert conducted a structured interview with all eight participants, addressing details in each section of the survey, including the missive for recruiting respondents. Some respondents provided oral feedback while reading the cover letter and answering the questions, while others offered feedback during the subsequent interview.

During the cognitive interview process, the methods expert continuously communicated suggestions for improvements to the design team. A detailed test protocol confirmed that most items were sufficiently strong, although a few required minor modifications. The research team then finalized Survey Version 5 , which included both English and Swedish versions (for the complete survey, see Supplementary Material S1).

Although the test successfully captured a diverse range of participants, it would have been desirable to conduct additional tests of the English survey with more non-Nordic participants; as it stands, only one such test was conducted. Despite the participants’ different approaches to completing the survey, the estimated time to complete it was approximately 15–20 min. No significant time difference was observed between completing the survey in Swedish and English.

Design Challenges – the Dearth of an Item-Specific Public Quality Discussion

The design decision to employ survey items from the relevant literature as much as possible was motivated by a desire to increase comparability with previous studies of questionable research practices. However, this approach came with several challenges. Survey-based studies of QRPs rely on the respondents’ subjective assessments, with no possibility of comparing the answers with other sources; an open discussion of survey problems would therefore be highly valuable. Yet although published studies usually present the items used in the surveys, there is seldom any analysis of the problems and tradeoffs involved in using a particular type of item or response format, and meager information about item validity. Few studies, for example, contain any analysis that clarifies which items measured the targeted variables with sufficient precision and which items failed to do so.

Another challenge when using existing survey studies is the lack of information regarding the respondents’ free-text comments about the survey’s content and quality. This could be because the survey did not contain any open questions or because the authors of the report did not systematically analyze the answers. As seen below, however, open respondent comments on a questionnaire involving sensitive or controversial aspects may reveal problems that did not surface during the pretest process, which by necessity targets much smaller samples.

Survey Content

The survey started with questions about the respondent’s current employment and research environment. It ended with background questions on the respondents’ positions and the extent of their research activity, plus space for open comments about the survey. The core content of the survey consisted of sections on the organizational climate (15 items), scientific norms (13 items), good and questionable research practices (16 items), perceptions of fairness in the academic system (4 items), motivation for conducting research (8 items), ethics training and policies (5 items), and questions on the quality of the research environment and the respondent’s perceived job security.

Sample and Response Rate

All researchers, teachers, and Ph.D. students employed at Swedish universities are registered by Statistics Sweden. To ensure balanced representation and perspectives from both large universities and smaller university colleges, the institutions were divided into three strata based on the number of researchers, teachers, and Ph.D. students: more than 1,000 individuals (7 universities and university colleges), 500–999 individuals (3 institutions), and fewer than 500 individuals (29 institutions). From these strata, Statistics Sweden randomly sampled 35%, 45%, and 50% of the relevant employees, respectively, resulting in a sample of 10,047 individuals. After coverage analysis and exclusion of wrongly included individuals, 9,626 remained.
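The stratified sampling arithmetic can be sketched as follows. Only the three strata and the sampling fractions (35%, 45%, 50%) come from the text; the per-stratum population counts below are hypothetical illustrations, so the resulting total differs from the actual 10,047.

```python
# Sketch of the stratified sampling step described above.
# The population counts per stratum are hypothetical; only the three
# strata and the sampling fractions are taken from the text.
strata = {
    "large":  {"population": 20_000, "fraction": 0.35},  # >1,000 staff, 7 institutions
    "medium": {"population": 2_200,  "fraction": 0.45},  # 500-999 staff, 3 institutions
    "small":  {"population": 4_000,  "fraction": 0.50},  # <500 staff, 29 institutions
}

# Round to a whole number of sampled individuals per stratum.
sample_sizes = {name: round(s["population"] * s["fraction"])
                for name, s in strata.items()}
total_sample = sum(sample_sizes.values())
```

With the real (undisclosed) population counts, `total_sample` would reproduce the reported sample of 10,047 individuals.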

The selected individuals received a personal postal letter with a missive in both English and Swedish, informing them about the project and the survey and notifying them that they could respond on paper or online. The online version provided the option to answer in either English or Swedish; the paper version was available only in English to reduce the cost of production and posting. The missive provided the recipients with comprehensive information about the study and what their involvement would entail. It emphasized the voluntary character of participation and their right to withdraw from the survey at any time, adding: “If you do not want to answer the questions, we kindly ask you to contact us. Then you will not receive any reminders.” Sixty-three individuals used this decline option. In line with standard Statistics Sweden procedures, survey completion implied agreement to participate and to the publication of anonymized results, and indicated the participants’ understanding of the terms provided (Duncan & Cheng, 2021). An email address was provided for respondents to request study outputs or for any other reason. The survey was open for data collection for two months, during which two reminders were sent to non-responders who had not opted out.

Once Statistics Sweden had collected the answers, they were anonymized and used to generate data files delivered to the authors. Statistics Sweden also provided anonymized information about the age, gender, and type of employment of each respondent in the dataset. Of the targeted individuals, 3,295 responded, an overall response rate of 34.2%. An analysis of missing value patterns revealed that 290 of the respondents either lacked data for an entire factor or had too many missing values dispersed over several survey sections. After removing these 290 responses, we used SPSS algorithms (IBM-SPSS Statistics 27) to analyze the remaining missing values, which were randomly distributed and constituted less than 5% of the data. These values were replaced using the program’s imputation routine (Madley-Dowd et al., 2019). The final dataset consisted of 3,005 individuals, evenly distributed between female and male respondents (53.5% vs. 46.5%) and medical and social scientists (51.3% vs. 48.5%). An overview of the sample and the response rate is provided in Table 1, which can also be found in Karabag et al. (2024). As shown in Table 1, the proportions of male and female respondents, the proportions of respondents from the medical and social sciences, and the age distribution of the respondents compared well with the original selection frame from Statistics Sweden.
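The two-stage cleaning described above — dropping respondents with whole factors missing or excessive missingness, then imputing the sparse remainder — might look roughly as follows in pandas. This is a sketch, not the authors' SPSS procedure; the missing-share threshold, the mean imputation, and the demo data are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def screen_and_impute(df, factor_cols, max_missing_share=0.2):
    """Drop respondents whose data are too incomplete, then impute the rest.

    A pandas analogue of the screening described above; the threshold and
    the column-mean imputation are illustrative assumptions.
    """
    # Drop rows where an entire factor block is missing ...
    whole_factor_missing = df[factor_cols].isna().all(axis=1)
    # ... or where too large a share of all answers is missing.
    too_sparse = df.isna().mean(axis=1) > max_missing_share
    kept = df[~(whole_factor_missing | too_sparse)].copy()
    # Impute the remaining (assumed randomly distributed) gaps with column means.
    return kept.fillna(kept.mean(numeric_only=True))

# Tiny illustration with hypothetical 1-5 item responses:
demo = pd.DataFrame({
    "q1": [1.0, np.nan, 3.0, np.nan],
    "q2": [2.0, np.nan, np.nan, 4.0],
    "q3": [5.0, np.nan, 4.0, 4.0],
})
cleaned = screen_and_impute(demo, factor_cols=["q1", "q2"], max_missing_share=0.5)
```

Here the second respondent, who skipped the whole `q1`/`q2` factor, is dropped, while the scattered gaps of the third and fourth respondents are filled in.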

Revisiting the Four Problems: Partial Solutions and Remaining Issues

Managing the Precision Problem - the Value of Factor Analyses

As noted above, the lack of conceptual consensus and standard ways to measure QRPs has resulted in a huge variation in estimated prevalence. In the case studied here, the purpose was to investigate deviations from research integrity and not low-quality research in general. This conceptual focus implied that selected survey items regarding QRP should build on the core aspect of intention, as suggested by Banks et al. ( 2016 , p. 323): “design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion”. After scrutinizing the literature, five items were selected as general indicators of QRP, irrespective of the research approach (see Table  2 ).

An analysis of the survey responses indicated that the general QRP indicators worked well in terms of understandability and precision. Considering the sensitive nature of the items—a feature that typically yields very high rates of missing data (Fanelli, 2009; Tourangeau & Yan, 2007)—our missing rates of 11–21% must be considered modest. In addition, there were only a few critical comments on the item formulation in the open response section at the end of the survey (see below).

Regarding the explanatory (independent) variables, the survey was inspired by studies showing the importance of the organizational climate and the normative environment within academia (Anderson et al., 2010 ). Organizational climate can be measured in several ways; the studied survey focused on items related to a collegial versus a competitive climate. The analysis of the normative environment was inspired by the classical norms of science articulated by Robert Merton in his CUDOS framework: communism (communalism), universalism, disinterestedness, and organized skepticism (Merton, 1942 /1973). This framework has been extensively discussed and challenged but remains a key reference (Anderson et al., 2010 ; Chalmers & Glasziou, 2009 ; Kim & Kim, 2018 ; Macfarlane & Cheng, 2008 ). Moreover, we were inspired by the late work of Merton on the ambivalence and ambiguities of scientists (Merton, 1942 /1973), and the counter norms suggested by Mitroff ( 1974 ). Thus, the survey involved a composite set of items to capture the contradictory normative environment in academia: classical norms as well as their counter norms.

To reduce the problems of social desirability bias and personal sensitivity, the survey design avoided items about the respondent’s personal adherence to explicit ideals, which are common in many surveys (Gopalakrishna et al., 2022). Instead, the studied survey focused on the normative preferences and attitudes within the respondent’s environment. This necessitated the identification, selection, and refinement of 3–4 items for each potentially relevant norm/counter-norm. The selection process drew on previous studies of norm subscription in various research communities (Anderson et al., 2007; Braxton, 1993; Bray & von Storch, 2017). For the norm “skepticism”, we consulted studies in the accounting literature on the three key elements of professional skepticism: a questioning mind, suspension of judgment, and search for knowledge (Hurtt, 2010).

The first analytical step after receiving the completed survey set from Statistics Sweden was to conduct a set of factor analyses to assess the quality and validity of the survey items related to the normative environment and the organizational climate. These analyses suggested three clearly identifiable factors related to the normative environment: (1) a counter norm factor combining Mitroff’s particularism and dogmatism (‘Biasedness’ in the further analysis), and two Mertonian factors: (2) Skepticism and (3) Openness, a variant of Merton’s Communalism (see Table  3 ). A fourth Merton factor, Disinterestedness, could not be identified in our analysis.

The analytical process for organizational climate involved reducing the number of items from 15 to 11. Here, the factor analysis suggested two clearly identifiable factors, one related to collegiality and the other to competition (see Table 4). Overall, the factor analyses suggested that the design efforts had paid off in terms of high item quality, robust factor loadings, and a very limited need to remove items.

In a parallel step, the open comments were assessed as an indication of how the study was perceived by the respondents (see Table 5). Of the 3,005 respondents, 622 provided comprehensible comments, many of them extensive. Of these, 187 related to the respondents’ own employment/role, 120 to their working conditions and research environment, and 98 to the academic environment and atmosphere. Problems in knowing the details of collegial practices were mentioned in 82 comments.

Reducing Desirability Bias - the Challenge of Nonresponse

It is well established that studies on topics where the respondent has anything embarrassing or sensitive to report suffer from more missing responses than studies on neutral subjects, and that respondents may edit the information they provide on sensitive topics (Tourangeau & Yan, 2007). Such a social desirability bias applies to QRP studies that explicitly target the respondents’ personal attitudes and behaviors. To reduce this problem, the studied survey applied a non-self format focusing on the behaviors and preferences of the respondents’ colleagues. Relevant survey items from published studies were rephrased from self-format designs to non-self questions about practices in the respondent’s environment, using the format “In my research environment, colleagues…” followed by a five-step incremental response scale from “(1) never” to “(5) always”. In a similar way, the survey avoided “should” statements about ideal normative values (“Scientists and scholars should critically examine…”). Instead, it used items intended to indicate the revealed preferences in the respondent’s normative environment regarding universalism versus particularism or openness versus secrecy.

As indicated by Fanelli (2009), these redesign efforts probably reduced the social desirability bias significantly. At the same time, however, the redesign seemed to aggravate a problem not discussed by Fanelli (2009): the respondents’ difficulties in knowing their colleagues’ practices in questionable areas. This issue was indicated by the open comments at the end of the studied survey, where 13% of the 622 commenting respondents pointed out that they lacked sufficient knowledge about the behavior of their colleagues to answer the QRP questions (see Table 5). One respondent wrote:

“It’s difficult to answer questions about ‘colleagues in my research area’ because I don’t have an insight into their research practices; I can only make informed guesses and generalizations. Therefore, I am forced to answer ‘don’t know’ to a lot of questions”.

Regarding the questions on general QRPs, the rate of missing responses varied between 11% and 21%. For the questions targeting specific QRP practices in quantitative and qualitative research, the rate of missing responses ranged from 38% to 49%. Unfortunately, the non-response alternative for these questions (“Don’t know/not relevant”) conflated the two issues: lack of knowledge and lack of relevance. Thus, we do not know what part of the missing responses reflected the absence of the specific research approach in the respondent’s environment and what part signaled a lack of knowledge about collegial practices in that environment.

Measuring QRPs in Qualitative Research - the Limited Role of Pretests

Studies of QRP prevalence focus on quantitative research approaches, where there exists a common understanding of the interpretation of scientific evidence, clearly recommended procedures, and established QRP items related to compliance with these procedures. In the heterogeneous field of qualitative research, there are several established standards for reporting the research (O’Brien et al., 2014; Tong et al., 2007) but, as noted above, hardly any commonly accepted survey items that capture behaviors fulfilling the criteria for QRPs. As a result, the studied survey project had to design such items from scratch during the survey development process. After technical and cognitive tests, four items were selected (see Table 6).

Despite the series of pretests, however, the first two of these items drew severe criticism from a few respondents in the survey’s open commentary section. Qualitative researchers argued that the items were unduly influenced by the truth claims of quantitative studies, whereas their research dealt with interpretation and discourse analysis. Thus, they rejected the items regarding selective usage of respondents and of interview quotes as indicators of questionable practices:

“The alternative regarding using quotes is a bit misleading. Supporting your results by quotes is a way to strengthen credibility in a qualitative method….” “The question about dubious practices is off target for us, who work with interpretation rather than solid truths. You can present new interpretations, but normally that does not imply that previous ‘findings’ should be considered incorrect.” “The questions regarding qualitative research were somewhat irrelevant. Often this research is not guided by a given hypothesis, and researchers may use a convenient sample without this resulting in lower quality.”

One comment focused on other problems related to qualitative research:

“Several questions do not quite capture the ethical dilemmas we wrestle with. For example, is the issue of dishonesty and ‘inaccuracies’ a little misplaced for us who work with interpretation? …At the same time, we have a lot of ethical discussions, which, for example, deal with power relations between researchers and ‘researched’, participant observation/informal contacts and informed consent (rather than patients participating in a study)”.

Unfortunately, the survey received these comments and criticism only after the full-scale rollout and not during the pretest rounds. Thus, we had no chance to replace the contested items with other formulations or contemplate a differentiation of the subsection to target specific types of qualitative research with appropriate questions. Instead, we had to limit the post-roll-out survey analysis to the last two items in Table  6 , although they captured devious behaviors rather than gray zone practices.

Why then was this criticism of QRP items related to qualitative research not exposed in the pretest phase? This is a relevant question, also for future survey designers. An intuitive answer could be that the research team only involved quantitative researchers. However, as highlighted above, the pretest participants varied in their research methods: some exclusively used qualitative methods, others employed mixed methods, and some used quantitative methods. This diversity suggests that the selection of test participants was appropriate. Moreover, all three members of the research team had experience of both quantitative and qualitative studies. However, as discussed above, the field of qualitative research involves several different types of research with different goals and methods – from detailed case studies grounded in original empirical fieldwork, to participant observations of complex organizational phenomena, to discursive re-interpretations of previous studies. Of the 3,005 respondents who answered the survey in a satisfactory way, only 16, or 0.5%, made critical comments about the QRP items related to qualitative research. A failure to capture the objections of such a small proportion in a pretest phase is hardly surprising. The general problem can be compared with the challenge of detecting negative side effects in drug development. Although pharmaceutical firms conduct large-scale tests of candidate drugs before government approval, doctors nevertheless detect new side effects when the medicine is rolled out to significantly more people than the test populations – and report these less frequent problems in the additional drug information (Galeano et al., 2020; McNeil et al., 2010).

In the social sciences, the purpose of pre-testing is to identify problems related to ambiguities and bias in item formulation and survey format and initiate a search for relevant solutions. A pre-test on a small, selected subsample cannot guarantee that all respondent problems during the full-scale data collection will be detected. The pretest aims to reduce errors to acceptable levels and ensure that the respondents will understand the language and terminology chosen. Pretesting in survey development is also essential to help the researchers to assess the overall flow and structure of the survey, and to make necessary adjustments to enhance respondent engagement and data quality (Ikart, 2019 ; Presser & Blair, 1994 ).

In our view, more pretests would hardly solve the epistemological challenge of formulating generally acceptable QRP items for qualitative research. The open comments studied here suggest that there is no one-size-fits-all solution. If this is right, the problem should rather be reformulated to a question of identifying different strands of qualitative research with diverse views of integrity and evidence which need to be measured with different measures. To address this challenge in a comprehensive way, however, goes far beyond the current study.

Controversiality and Collegial Sensitivity - The Challenge of Predicting Nonresponse

Studies of research integrity, questionable research practices, and misconduct in science tend to be organizationally controversial and personally sensitive. If university leaders are asked to support such studies, there is a considerable risk that the answer will be negative. In the case studied here, the survey roll-out did not depend on any active organizational participation, since Statistics Sweden possessed all relevant respondent information in-house. This, we assumed, would take the controversiality problem off the agenda. Our belief was supported by the absence of complaints about a potential negativity bias from the pretest participants. Instead, the problem surfaced when the survey was rolled out and the full sample of respondents encountered it. The open comment section at the end of the survey provided insights into this reception.

Many respondents provided positive feedback, reflected in 30 different comments such as:

“Thank you for doing this survey. I really hope it will lead to changes because it is needed”. “This is an important survey. However, there are conflicting norms, such as those you cite in the survey, /concerning/ for example, data protection. How are researchers supposed to be open when we cannot share data for re-analysis?” “I am glad that the problems with egoism and non-collegiality are addressed in this manner”.

Several of them asked for more critical questions regarding power, self-interest, and leadership:

“What I lack in the survey were items regarding academic leadership. Otherwise, I am happy that someone is doing research on these issues”. “A good survey but needs to be complemented with questions regarding researchers who put their commercial interests above research and exploit academic grants for commercial purposes”.

A small minority criticized the survey for being overly negative towards academia:

“A major part of the survey feels very negative and /conveys/ the impression that you have a strong pre-understanding of academia as a horrible environment”. “Some of the questions are uncomfortable and downright suggestive. Why such a negative attitude towards research?” “The questions have a tendency to make us /the respondents/ informers. An unpleasant feeling when you are supposed to lay information against your university”. “Many questions are hard to answer, and I feel that they measure my degree of suspicion against my closest colleagues and their motivation … Several questions I did not want to answer since they contain a negative interpretation of behaviors which I don’t consider as automatically negative”.

A few of these respondents stated that they abstained from answering some of the ‘negative questions’, since they did not want to report on or slander their colleagues. The general impact is hard to assess. Only 20% of the respondents offered open survey comments, and only seven argued that the questions were “negative”. This small number explains why the issue of negativity did not show up during the testing process. However, a perceived sense of negativity may have affected the willingness to answer among more respondents than those who provided free-text comments.

Conclusion - The Need for a Cumulative Knowledge Trajectory in Integrity Studies

In the broad field of research integrity studies, investigations of QRPs in different contexts and countries play an important role. The comparability of the results, however, depends on the conceptual focus of the survey design and the quality of the survey items. This paper starts with a discussion of four common problems in QRP research: the problems of precision, social desirability, incomplete coverage, and organizational controversiality and sensitivity. This is followed by a case study of how these problems were addressed in a detailed survey design process. An assessment of the solutions employed in the studied survey design reveals progress as well as unresolved issues.

Overall, the paper shows that the problem of precision can be effectively managed through explicit conceptual definitions and careful item design.

The problem of social desirability bias was probably reduced by means of a non-self-response format referring to preferences and behaviors among colleagues instead of personal behaviors. However, an investigation of open respondent comments indicated that the reduced risk of social desirability bias came at the expense of higher uncertainty, due to the respondents’ lack of insight into the concrete practices of their colleagues.

The authors initially linked the problem of incomplete coverage of QRPs in qualitative research to “the lack of standard items” for capturing QRPs in qualitative studies. Open comments at the end of the survey, however, suggested that the lack of such standards will not be easily remedied by the design of new items. Rather, it seems to be an epistemological challenge related to the multifarious nature of the qualitative research field, where the understanding of ‘evidence’ is unproblematic in some qualitative sub-fields but contested in others. This conjecture and other possible explanations will hopefully be addressed in forthcoming epistemological and empirical studies.

Regarding the problem of controversiality and sensitivity, previous studies show that QRP research is a controversial and sensitive area for academic executives and university brand managers. The case study discussed here indicates that this is a sensitive subject also for rank-and-file researchers who may hesitate to answer, even when the questions do not target the respondents’ own practices but the practices and preferences of their colleagues. Future survey designers may need to engage in framing, presenting, and balancing sensitive items to reduce respondent suspicions and minimize the rate of missing responses. Reflections on the case indicate that this is doable but requires thoughtful design, as well as repeated tests, including feedback from a broad selection of prospective participants.

In conclusion, the paper suggests that more resources should be spent on the systematic evaluation of different survey designs and item formulations. In the long term, such investments in method development will yield a higher proportion of robust and comparable studies. This would mitigate the problems discussed here and contribute to the creation of a much-needed cumulative knowledge trajectory in research integrity studies.

An issue not covered here is that surveys, however finely developed, only give quantitative information about patterns, behaviors, and structures. An understanding of underlying thoughts and perspectives requires other procedures. Thus, methods that integrate and triangulate qualitative and quantitative data – known as mixed methods (Karabag & Berggren, 2016; Ordu & Yılmaz, 2024; Smajic et al., 2022) – may give a deeper and more complete picture of the phenomenon of QRPs.

Data Availability

The data supporting the findings of this study are available from the corresponding author, upon reasonable request.


Agnoli, F., Wicherts, J. M., Veldkamp, C. L., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLoS One , 12(3), e0172792.

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics , 13 , 437–461.


Anderson, M. S., Ronning, E. A., Devries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of Research. The Journal of Higher Education , 81 (3), 366–393. https://doi.org/10.1353/jhe.0.0095

Andrade, C. (2021). HARKing, cherry-picking, p-hacking, fishing expeditions, and data dredging and mining as questionable research practices. The Journal of Clinical Psychiatry , 82 (1), 25941.

Artino Jr, A. R., Driessen, E. W., & Maggio, L. A. (2019). Ethical shades of gray: International frequency of scientific misconduct and questionable research practices in health professions education. Academic Medicine , 94 (1), 76–84.

Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics , 14 (4), 338–352.

Banks, G. C., O’Boyle Jr, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management , 42 (1), 5–20.

Beatty, P., & Herrmann, D. (2002). To answer or not to answer: Decision processes related to survey item nonresponse. In Survey Nonresponse (pp. 71–86).


Berggren, C. (2016). Scientific Publishing: History, practice, and ethics (in Swedish: Vetenskaplig Publicering: Historik, Praktik Och Etik) . Studentlitteratur AB.

Berggren, C., & Karabag, S. F. (2019). Scientific misconduct at an elite medical institute: The role of competing institutional logics and fragmented control. Research Policy , 48 (2), 428–443. https://doi.org/10.1016/j.respol.2018.03.020

Braxton, J. M. (1993). Deviancy from the norms of science: The effects of anomie and alienation in the academic profession. Research in Higher Education , 54 (2), 213–228. https://www.jstor.org/stable/40196105

Bray, D., & von Storch, H. (2017). The normative orientations of climate scientists. Science and Engineering Ethics , 23 (5), 1351–1367.

Breakwell, G. M., Wright, D. B., & Barnett, J. (2020). Research questions, design, strategy and choice of methods. Research Methods in Psychology , 1–30.

Brenner, P. S. (2020). Why survey methodology needs sociology and why sociology needs survey methodology: Introduction to understanding survey methodology: Sociological theory and applications. In Understanding survey methodology: Sociological theory and applications (pp. 1–11). https://doi.org/10.1007/978-3-030-47256-6_1

Bruton, S. V., Medlin, M., Brown, M., & Sacco, D. F. (2020). Personal motivations and systemic incentives: Scientists on questionable research practices. Science and Engineering Ethics , 26 (3), 1531–1547.

Butler, N., Delaney, H., & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education , 16 (1), 94–109.

Byrn, M. J., Redman, B. K., & Merz, J. F. (2016). A pilot study of universities’ willingness to solicit whistleblowers for participation in a study. AJOB Empirical Bioethics , 7 (4), 260–264.

Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. The Lancet , 374 (9683), 86–89.

de Vrieze, J. (2021). Large survey finds questionable research practices are common. Science . https://doi.org/10.1126/science.373.6552.265

Dore, R. P. (1973/2011). British Factory Japanese Factory: The origins of National Diversity in Industrial Relations, with a New Afterword . University of California Press/Routledge.

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity , 13 , 1–20.

Duncan, L. J., & Cheng, K. F. (2021). Public perception of NHS general practice during the first six months of the COVID-19 pandemic in England. F1000Research , 10 .

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One , 4(5), e5738.

Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science , 7 (1), 45–52.

Flick, U. (2013). The SAGE Handbook of Qualitative Data Analysis . Sage.

Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS One , 13(7), e0200303.

Galeano, D., Li, S., Gerstein, M., & Paccanaro, A. (2020). Predicting the frequencies of drug side effects. Nature Communications , 11 (1), 4575.

Gopalakrishna, G., Ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in the Netherlands. PLoS One , 17 (2), e0263023.

Hasselberg, Y. (2012). Science as Work: Norms and Work Organization in Commodified Science (in Swedish: Vetenskap Som arbete: Normer och arbetsorganisation i den kommodifierade vetenskapen) . Gidlunds förlag.

Hill, J., Ogle, K., Gottlieb, M., Santen, S. A., & Artino Jr, A. R. (2022). Educator’s blueprint: A how-to guide for collecting validity evidence in survey-based research. AEM Education and Training , 6(6), e10835.

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management , 21 (5), 967–988.

Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods , 1 (1), 104–121.

Huistra, P., & Paul, H. (2022). Systemic explanations of scientific misconduct: Provoked by spectacular cases of norm violation? Journal of Academic Ethics , 20 (1), 51–65.

Hurtt, R. K. (2010). Development of a scale to measure professional skepticism. Auditing: A Journal of Practice & Theory , 29 (1), 149–171.

Ikart, E. M. (2019). Survey questionnaire survey pretesting method: An evaluation of survey questionnaire via expert reviews technique. Asian Journal of Social Science Studies , 4 (2), 1.

Karabag, S. F., & Berggren, C. (2016). Misconduct, marginality and editorial practices in management, business and economics journals. PLoS One , 11 (7), e0159492. https://doi.org/10.1371/journal.pone.0159492

Karabag, S. F., Berggren, C., Pielaszkiewicz, J., & Gerdin, B. (2024). Minimizing questionable research practices–the role of norms, counter norms, and micro-organizational ethics discussion. Journal of Academic Ethics , 1–27. https://doi.org/10.1007/s10805-024-09520-z

Kim, S. Y., & Kim, Y. (2018). The ethos of Science and its correlates: An empirical analysis of scientists’ endorsement of Mertonian norms. Science Technology and Society , 23 (1), 1–24. https://doi.org/10.1177/0971721817744438

Lawlor, J., Thomas, C., Guhin, A. T., Kenyon, K., Lerner, M. D., Consortium, U., & Drahota, A. (2021). Suspicious and fraudulent online survey participation: Introducing the REAL framework. Methodological Innovations , 14 (3), 20597991211050467.

Levelt, W. J., Drenth, P., & Noort, E. (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel (in Dutch: Falende wetenschap: De frauduleuze onderzoekspraktijken van social-psycholoog Diederik Stapel) . Commissioned by Tilburg University, the University of Amsterdam, and the University of Groningen. http://hdl.handle.net/11858/00-001M-0000-0010-258A-9

Lietz, P. (2010). Research into questionnaire design: A summary of the literature. International Journal of Market Research , 52 (2), 249–272.

Lin, M. W., & Yu, C. (2020). Can corruption be measured? Comparing global versus local perceptions of corruption in East and Southeast Asia. In Regional comparisons in comparative policy analysis studies (pp. 90–107). Routledge.

Macfarlane, B., & Cheng, M. (2008). Communism, universalism and disinterestedness: Re-examining contemporary support among academics for Merton’s scientific norms. Journal of Academic Ethics , 6 , 67–78.

Madley-Dowd, P., Hughes, R., Tilling, K., & Heron, J. (2019). The proportion of missing data should not be used to guide decisions on multiple imputation. Journal of Clinical Epidemiology , 110 , 63–73.

McNeil, J. J., Piccenna, L., Ronaldson, K., & Ioannides-Demos, L. L. (2010). The value of patient-centred registries in phase IV drug surveillance. Pharmaceutical Medicine , 24 , 281–288.

Merton, R. K. (1942/1973). The normative structure of science. In The sociology of science: Theoretical and empirical investigations . The University of Chicago Press.

Mitroff, I. I. (1974). Norms and counter-norms in a select group of the Apollo Moon scientists: A case study of the ambivalence of scientists. American Sociological Review , 39 (4), 579–595. https://doi.org/10.2307/2094423

Necker, S. (2014). Scientific misbehavior in economics. Research Policy , 43 (10), 1747–1759. https://doi.org/10.1016/j.respol.2014.05.002

Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., & Nuijten, M. B. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology , 73 (1), 719–748.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine , 89 (9). https://journals.lww.com/academicmedicine/fulltext/2014/09000/standards_for_reporting_qualitative_research__a.21.aspx

Ordu, Y., & Yılmaz, S. (2024). Examining the impact of dramatization simulation on nursing students’ ethical attitudes: A mixed-method study. Journal of Academic Ethics , 1–13.

Perneger, T. V., Courvoisier, D. S., Hudelson, P. M., & Gayet-Ageron, A. (2015). Sample size for pre-tests of questionnaires. Quality of Life Research , 24 , 147–151.

Presser, S., & Blair, J. (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology , 73–104.

Ravn, T., & Sørensen, M. P. (2021). Exploring the gray area: Similarities and differences in questionable research practices (QRPs) across main areas of research. Science and Engineering Ethics , 27 (4), 40.

Roberts, D. L., & John, F. A. S. (2014). Estimating the prevalence of researcher misconduct: a study of UK academics within biological sciences. PeerJ , 2 , e562.

Siewert, W., & Udani, A. (2016). Missouri municipal ethics survey: Do ethics measures work at the municipal level? Public Integrity , 18 (3), 269–289.

Smajic, E., Avdic, D., Pasic, A., Prcic, A., & Stancic, M. (2022). Mixed methodology of scientific research in healthcare. Acta Informatica Medica , 30 (1), 57–60. https://doi.org/10.5455/aim.2022.30.57-60

Steneck, N. H. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics , 12 , 53–74.

Szolnoki, G., & Hoffmann, D. (2013). Online, face-to-face and telephone surveys—comparing different sampling methods in wine consumer research. Wine Economics and Policy , 2 (2), 57–66.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care , 19 (6), 349–357. https://doi.org/10.1093/intqhc/mzm042

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin , 133 (5), 859.

Tourangeau, R., Groves, R. M., & Redline, C. D. (2010). Sensitive topics and reluctant respondents: Demonstrating a link between nonresponse bias and measurement error. Public Opinion Quarterly , 74 (3), 413–432.

Vermeulen, I., & Hartmann, T. (2015). Questionable research and publication practices in communication science. Communication Methods and Measures , 9 (4), 189–192.

Wallenborg Likidis, J. (2019). Academic norms and scientific attitudes: Metrology review of a survey for doctoral students, researchers and academic teachers (In Swedish: Akademiska normer och vetenskapliga förhallningssätt. Mätteknisk granskning av en enkät till doktorander, forskare och akademiska lärare) . Prod.nr. 8942146, Statistics Sweden, Örebro.

Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design . Sage Publications.

Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics , 27 (4), 41.

Yan, T., & Curtin, R. (2010). The relation between unit nonresponse and item nonresponse: A response continuum perspective. International Journal of Public Opinion Research , 22 (4), 535–551.


Acknowledgements

We thank Jennica Wallenborg Likidis, Statistics Sweden, for providing expert support in the survey design. We are grateful to colleagues Ingrid Johansson Mignon, Cecilia Enberg, Anna Dreber Almenberg, Andrea Fried, Sara Liin, Mariano Salazar, Lars Bengtsson, Harriet Wallberg, Karl Wennberg, and Thomas Magnusson, who joined the pretest or cognitive tests. We also thank Ksenia Onufrey, Peter Hedström, Jan-Ingvar Jönsson, Richard Öhrvall, Kerstin Sahlin, and David Ludvigsson for constructive comments or suggestions.

Open access funding provided by Linköping University. The study was funded by Forte, the Swedish Research Council for Health, Working Life and Welfare ( https://www.vr.se/swecris?#/project/2018-00321_Forte ), Grant No. 2018-00321.

Author information

Authors and Affiliations

Department of Management and Engineering [IEI], Linköping University, Linköping, SE-581 83, Sweden

Christian Berggren & Solmaz Filiz Karabag

Department of Surgical Sciences, Uppsala University, Uppsala University Hospital, entrance 70, Uppsala, SE-751 85, Sweden

Bengt Gerdin

Department of Civil and Industrial Engineering, Uppsala University, Box 169, Uppsala, SE-751 04, Sweden

Solmaz Filiz Karabag


Contributions

Conceptualization: CB. Survey Design: SFK, CB, Methodology: SFK, BG, CB. Visualization: SFK, BG. Funding acquisition: SFK. Project administration and management: SFK. Writing – original draft: CB. Writing – review & editing: CB, BG, SFK. Approval of the final manuscript: SFK, BG, CB.

Corresponding author

Correspondence to Solmaz Filiz Karabag .

Ethics declarations

Ethics Approval and Consent to Participate

The Swedish Act concerning the Ethical Review of Research Involving Humans (2003:460) defines the types of studies that require ethics approval. In line with the General Data Protection Regulation (EU 2016/679), the act applies to studies that collect personal data revealing racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, or health and sexual orientation. The present study does not involve any of the above, which is why no formal ethical permit was required. The ethical aspects of the project and its compliance with the guidelines of the Swedish Research Council (2017) were also part of the review process at the project’s public funding agency, Forte.

Competing Interests

The authors declare that they have no competing interests.

Supporting Information

The complete case study survey of social and medical science researchers in Sweden 2020.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Berggren, C., Gerdin, B. & Karabag, S.F. Developing Surveys on Questionable Research Practices: Four Challenging Design Problems. J Acad Ethics (2024). https://doi.org/10.1007/s10805-024-09565-0


Accepted : 23 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1007/s10805-024-09565-0


  • Questionable Research Practices
  • Normative Environment
  • Organizational Climate
  • Survey Development
  • Design Problems
  • Problem of Incomplete Coverage
  • Survey Design Process
  • Baseline Survey
  • Pre-testing
  • Technical Assessment
  • Cognitive Interviews
  • Social Desirability
  • Sensitivity
  • Organizational Controversiality
  • Challenge of Nonresponse
  • Qualitative Research
  • Quantitative Research


Defined Contribution (DC) Plan participants want more help managing their financial lives. Specifically, they want greater access to professional guidance, as well as employer-supported financial wellness initiatives.

These are among the key findings of J.P. Morgan’s 2024 DC Plan Participant Survey, our annual study of how plan sponsors and participants are navigating the retirement landscape. Alex Nobile, Retirement Strategist, reviews highlights of the survey in this issue. 1

Current state: Plan participants are on their own 

Today, a sizeable number of plan participants rely on their own research to make financial decisions on everything from budgeting to investing to planning for retirement and long-term goals. Moreover, the number of participants flying solo has actually grown in recent years.

The number of participants relying solely on their own judgement is even higher when it comes to day-to-day financial decision making: 

  • 80% make budgeting decisions based on their own research
  • 73% make short-term financial decisions unaided

Also worth noting: Four in 10 survey respondents told us they lack basic emergency savings—which our related research has found can lead to higher credit card debt and lower retirement savings balances.  

Desired state: More support from plan sponsors

The 2024 survey revealed participants’ growing uncertainty about achieving security in retirement: Only 43% were confident their savings would last their lifetime—down from 57% in 2021.  

This, along with global market volatility, may help explain why 75% of survey participants are eager for employers to provide more guidance from financial professionals (even though many participants are defaulted into investments in their DC plans). 

Financial wellness programs, too, are much in demand, with nearly nine in 10 participants finding them valuable. This interest is abetted by passage of the SECURE 2.0 Act of 2022, which showed clear support for these types of programs, including provisions for student loan matching contributions and in-plan emergency savings programs. 

Indeed, 85% of the plan sponsors we surveyed acknowledged feeling a very/somewhat high sense of responsibility regarding their employees’ financial wellness.

However, some plan sponsors that have opted to provide these programs fear the employees that most need this type of support are not taking advantage of it. Similar concerns may dissuade other plan sponsors from stepping into the financial wellness arena. 

Retirement income support is also of particular interest to participants:

  • Nearly eight in 10 respondents expressed concern about creating a steady stream of income in retirement that would last their lifetime, with 56% noting that they haven’t calculated how much they need to save.
  • Notably, nine in 10 respondents expressed interest in an in-plan solution that would provide guaranteed income in retirement.

Clearly, participants want more support with all stages and components of their financial lives and employers are well-positioned to help.

Four ways to move forward 

There are many ways to engage participants in financial wellness initiatives. Here are four approaches to consider:

1.  Survey participants to gain insights into where they would like support. Participating in the development process can help employees feel more heard and supported—and increase the likelihood they will take advantage of any programs offered.  

2. Focus on developing programs that can have an immediate impact on employees’ finances, such as:

  • Emergency savings accounts (either within or outside the plan)
  • Coaching/priority setting for savings goals, including retirement
  • Debt consolidation education and coaching
  • Student loan debt-matching; i.e., saving for retirement/other financial goals, such as buying a home, while simultaneously paying off education debts 

3. Default plan participants into financial wellness programs to increase engagement and consider offering in-plan solutions that provide lifetime income in retirement.

4. Develop a robust communications program to support a program rollout and follow up with targeted communications to specific employees according to their characteristics/behaviors: for example, promote emergency savings coaching to employees who borrow from their 401(k).

Employers can amend plan documents, if needed, before rolling out initiatives. Also, be sure to set usage goals and track progress over time to measure success. Revisit communication approaches if initial efforts don't produce the desired results.

In summary: A win-win outcome is possible

Helping employees improve their financial wellness is a proven way to strengthen retirement outcomes and, as our survey confirms, it's something employees very much want. Benefits like these can also help employers attract and retain talent, making such programs a win for employees and employers alike.

1. Our survey, conducted in January 2024, polled 1,503 full-time employees at for-profit organizations. All participants were active contributors to their respective company’s 401(k) plan in the 12 months preceding the survey.


Queensland research finds young people 'burnt out and in need of help'

By Claudia Williams

Topic: Mental Health

New research shows almost nine out of 10 young Queenslanders have seen a negative change in their health and wellbeing in the past year. (ABC News: Stephanie Anderson)

It is impossible to ignore the negative impacts of smartphones and social media on the mental health and wellbeing of young people, Queensland’s chief health officer says. 

The comments come as new research shows almost nine out of 10 young Queenslanders have seen a negative change in their health and wellbeing in the past year. 

The survey of 1,424 young people conducted by the state's prevention agency, Health and Wellbeing Queensland, found more than half of respondents reported feeling stressed or anxious.
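As a sanity check on figures like these, the sampling margin of error for a reported proportion can be estimated from the sample size alone. The sketch below is illustrative only: it assumes simple random sampling (the article does not state the sampling method) and takes p = 0.5, which both matches "more than half" and maximises the margin.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative values: p = 0.5 (worst case), n = 1,424 respondents.
moe = margin_of_error(0.5, 1424)
print(f"±{moe * 100:.1f} percentage points")  # prints ±2.6 percentage points
```

So even under the most favourable assumptions, a headline proportion from a sample this size carries an uncertainty of roughly ±2.6 percentage points.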

Chief Health Officer Dr John Gerrard said while fewer people were dying from heart disease and strokes, the mental health of young people was "getting worse very rapidly".

"It appears to be a real phenomenon and not the result of better reporting," he said. "I believe this is a very significant concern.

"One of the most dramatic indicators is the instances of hospitalisation due to self-harm in young children aged 10 to 14 has almost [tripled] over the last decade."

John Gerrard says the mental ill-health of young people is a very real phenomenon being seen across the world. (ABC News: Claudia Williams)

Dr Gerrard said the mental health decline in young people had been seen on a global scale since 2010, in the years following the release of the first smartphone.

He said there were no simple solutions, adding that the community at large had not "spoken about this enough".

"It is not clear at this stage what to do about this specific problem, but I have been meeting with Commonwealth agencies to discuss these issues."

'Burnt out and in need of help'

The research, commissioned by the Queensland government, found more than half of those aged 15 to 24 reported feeling tired for no reason or that everything was an effort in the four weeks prior to being surveyed.

Health and Wellbeing Queensland deputy chief executive Gemma Hodgetts said these were the warning signs of a generation "burnt out and in need of help". 

"Young Queenslanders who should be our most vibrant, energetic and hopeful generation are struggling," she said. 

Gemma Hodgetts says the research shows young people are struggling. (ABC News: Claudia Williams)

"Almost one in two Queenslanders will experience mental ill-health in their lifetime ... about 75 per cent of mental disorders emerge before the age of 24 years, so we need to act now."

The research found those experiencing mental health challenges were more likely to rate their health significantly lower.

The report said the findings suggest increased stress, along with poorer diets, may be negatively impacting the mental health of young Queenslanders, particularly young adults.

According to the research, women, girls and mothers are also more likely to experience negative impacts, which may in part be due to their lower activity levels.

Ms Hodgetts said the report laid the foundation for an Australian-first strategy which would take a deliberate wellbeing approach to mental health.

Energy.gov Home

Energy Innovation Hub teams will emphasize multi-disciplinary fundamental research to address long-standing and emerging challenges for rechargeable batteries

WASHINGTON, D.C . - Today, the U.S. Department of Energy (DOE) announced $125 million in funding for two Energy Innovation Hub teams to provide the scientific foundation needed to seed and accelerate next generation technologies beyond today’s generation of lithium (Li)-ion batteries. These multi-institution research teams, led by Argonne National Laboratory and Stanford University, will develop scientific concepts and understanding to impact decarbonization of transportation and incorporation of clean energy into the electricity grid.

Rechargeable batteries, such as Li-ion and lead-acid batteries, have had a tremendous impact on the nation’s economy. Emerging applications will require even greater energy storage capabilities, safer operation, lower costs, and diversity of materials to manufacture batteries. Meeting these challenges requires a better understanding of foundational battery and materials sciences to enable scalable battery designs with versatile and reversible energy storage capabilities beyond what is currently possible. Additional benefits may include mitigation of supply chain risks associated with the current generation of batteries.

"Providing the scientific foundation to accelerate this important research is key to our economy and making sure the U.S. plays a lead role in transforming the way we store and use electricity,” said Harriet Kung, DOE’s Acting Director for the Office of Science. “Today's awards provide our Energy Innovation Hub teams with the tools and resources to solve some of the most challenging science problems that are limiting our ability to decarbonize transportation and incorporate clean energy into the electricity grid."

The two Energy Innovation Hub teams are the Energy Storage Research Alliance (ESRA) led by Argonne National Laboratory and the Aqueous Battery Consortium (ABC) led by Stanford University. ESRA will provide the scientific underpinning to develop new compact batteries for heavy-duty transportation and energy storage solutions for the grid with a focus on achieving unprecedented molecular-level control of chemical reactivity, ion selectivity, and directional transport in complex electrochemical cells. ABC will focus on establishing the scientific foundation for large-scale development and deployment of aqueous batteries for long-duration grid storage technologies.  Both of these teams will prioritize study and use of Earth-abundant materials to mitigate supply chain risks.

Both Energy Innovation Hub teams comprise multiple institutions, including Historically Black Colleges and Universities (HBCUs) and other Minority Serving Institutions (MSIs). The projects provide an outstanding opportunity for workforce development in energy storage research, drawing researchers from a diverse range of individuals and institutions.

The teams were selected by competitive peer review under the DOE Funding Opportunity Announcement for the Energy Innovation Hub Program: Research to Enable Next-Generation Batteries and Energy Storage. While focused on basic science, the Funding Opportunity Announcement was developed in coordination through the DOE Joint Strategy Team for Batteries.

Total funding is $125 million for awards lasting up to five years. More information can be found on the Basic Energy Sciences program homepage and Energy Innovation Hubs page.

Selection for award negotiations is not a commitment by DOE to issue an award or provide funding. Before funding is issued, DOE and the applicants will undergo a negotiation process, and DOE may cancel negotiations and rescind the selection for any reason during that time. 
