Research methodology vs. research methods
The research methodology, or design, is the overall strategy and rationale you used to carry out the research, whereas research methods are the specific tools and processes you use to gather and analyze the data you need to test your hypothesis.
To further understand research methodology, let’s explore some examples:
a. Qualitative research methodology example: A study exploring the impact of author branding on author popularity might utilize in-depth interviews to gather personal experiences and perspectives.
b. Quantitative research methodology example: A research project investigating the effects of a book promotion technique on book sales could employ a statistical analysis of profit margins and sales before and after the implementation of the method.
c. Mixed-Methods research methodology example: A study examining the relationship between social media use and academic performance might combine both qualitative and quantitative approaches. It could include surveys to quantitatively assess the frequency of social media usage and its correlation with grades, alongside focus groups or interviews to qualitatively explore students’ perceptions and experiences regarding how social media affects their study habits and academic engagement.
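For instance, the quantitative example above (b) boils down to comparing paired before/after figures. Here is a minimal sketch of such a comparison using a paired t-statistic; the sales numbers are hypothetical, and a real study would load actual records and report a p-value alongside the statistic:

```python
# Hypothetical weekly book sales before and after a promotion technique.
# Illustrative numbers only -- a real study would use actual sales records.
from statistics import mean, stdev
from math import sqrt

before = [120, 135, 128, 110, 142, 131]
after = [140, 152, 149, 133, 160, 151]

# Paired t-statistic: analyze the per-week differences.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))

print(f"mean increase: {mean(diffs):.1f} copies/week, t = {t:.2f}")
```

A large t-value relative to the critical value for n − 1 degrees of freedom would suggest the promotion had a real effect rather than random week-to-week variation.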
These examples highlight the meaning of methodology in research and how it guides the research process, from data collection to analysis, ensuring the study’s objectives are met efficiently.
When it comes to writing your study, the methodology in research papers or a dissertation plays a pivotal role. A well-crafted methodology section of a research paper or thesis not only enhances the credibility of your research but also provides a roadmap for others to replicate or build upon your work.
Wondering how to write the research methodology section? Follow these steps to create a strong methods chapter:
At the start of a research paper, you would have provided the background of your research and stated your hypothesis or research problem. In this section, you will elaborate on your research strategy.
Begin by restating your research question and proceed to explain what type of research you opted for to test it. Depending on your research, here are some questions you can consider:
a. Did you use qualitative or quantitative data to test the hypothesis?
b. Did you perform an experiment where you collected data or are you writing a dissertation that is descriptive/theoretical without data collection?
c. Did you use primary data that you collected yourself, or did you analyze secondary or existing data as part of your study?
These questions will help you establish the rationale for your study on a broader level, which you will follow by elaborating on the specific methods you used to collect and understand your data.
Now that you have told your reader what type of research you’ve undertaken for the dissertation, it’s time to dig into specifics. State what specific methods you used and explain the conditions and variables involved. Explain what the theoretical framework behind the method was, what samples you used for testing it, and what tools and materials you used to collect the data.
Once you have explained the data collection process, explain how you analyzed and studied the data. Here, your focus is simply to explain the methods of analysis rather than the results of the study.
Here are some questions you can answer at this stage:
a. What tools or software did you use to analyze your results?
b. What parameters or variables did you consider while understanding and studying the data you’ve collected?
c. Was your analysis based on a theoretical framework?
Your mode of analysis will change depending on whether you used a quantitative or qualitative research methodology in your study. If you’re working within the hard sciences or physical sciences, you are likely to use a quantitative research methodology (relying on numbers and hard data). If you’re doing a qualitative study in the social sciences or humanities, your analysis may rely on understanding language and the socio-political contexts around your topic. This is why it’s important to establish what kind of study you’re undertaking at the outset.
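As a concrete illustration of quantitative analysis, the social media study described earlier might begin by computing the correlation between usage and grades. The sketch below uses hypothetical data and computes Pearson’s r from first principles; in practice, you would likely use statistical software such as SPSS, R, or Python libraries like SciPy:

```python
# Hypothetical survey data: daily social media hours vs. final grade (%).
hours = [1.0, 2.5, 4.0, 0.5, 3.0, 5.5, 2.0, 4.5]
grades = [88, 79, 70, 92, 75, 62, 83, 68]

# Pearson's r computed by hand.
n = len(hours)
mx, my = sum(hours) / n, sum(grades) / n
cov = sum((x - mx) * (y - my) for x, y in zip(hours, grades))
sx = sum((x - mx) ** 2 for x in hours) ** 0.5
sy = sum((y - my) ** 2 for y in grades) ** 0.5
r = cov / (sx * sy)

print(f"Pearson r = {r:.2f}")  # negative r: more hours, lower grades
```

The correlation alone would then be interpreted alongside the qualitative findings from focus groups or interviews.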
Now that you have gone through your research process in detail, you’ll also have to make a case for it. Justify your choice of methodology and methods, explaining why it is the best choice for your research question. This is especially important if you have chosen an unconventional approach or have chosen to study an existing research problem from a different perspective. Compare it with other methodologies, especially those attempted by previous researchers, and discuss the contributions your methodology makes.
No matter how thorough a methodology is, it doesn’t come without its hurdles. This is a natural part of scientific research that is important to document so that your peers and future researchers are aware of it. Writing in a research paper about this aspect of your research process also tells your evaluator that you have actively worked to overcome the pitfalls that came your way and you have refined the research process.
1. Remember who you are writing for. Keeping sight of the reader/evaluator will help you know what to elaborate on and what information they are already likely to have. You’re condensing months’ worth of research into just a few pages, so you should omit basic definitions and information about general phenomena people already know.
2. Do not give an overly elaborate explanation of every single condition in your study.
3. Skip details and findings irrelevant to the results.
4. Cite references that back your claim and choice of methodology.
5. Consistently emphasize the relationship between your research question and the methodology you adopted to study it.
To sum it up, what is methodology in research? It’s the blueprint of your research, essential for ensuring that your study is systematic, rigorous, and credible. Whether your focus is on qualitative research methodology, quantitative research methodology, or a combination of both, understanding and clearly defining your methodology is key to the success of your research.
Once you write the research methodology and complete the entire research paper, the next step is to edit your paper. As experts in research paper editing and proofreading services, we’d love to help you perfect your paper!
Published by Nicolas on March 21st, 2024; revised on March 12, 2024
Research methodology is a crucial aspect of any investigative process, serving as the blueprint for the entire research journey. If you are stuck on the methodology section of your research paper, this blog will guide you through what a research methodology is, its types, and how to conduct one successfully.
Research methodology can be defined as the systematic framework that guides researchers in designing, conducting, and analyzing their investigations. It encompasses a structured set of processes, techniques, and tools employed to gather and interpret data, ensuring the reliability and validity of the research findings.
Research methodology is not confined to a singular approach; rather, it encapsulates a diverse range of methods tailored to the specific requirements of the research objectives.
Here is why research methodology is important in academic and professional settings.
Research methodology forms the backbone of rigorous inquiry. It provides a structured approach that aids researchers in formulating precise thesis statements, selecting appropriate methodologies, and executing systematic investigations. This, in turn, enhances the quality and credibility of the research outcomes.
In both academic and professional contexts, the ability to reproduce research outcomes is paramount. A well-defined research methodology establishes clear procedures, making it possible for others to replicate the study. This not only validates the findings but also contributes to the cumulative nature of knowledge.
In professional settings, decisions often hinge on reliable data and insights. Research methodology equips professionals with the tools to gather pertinent information, analyze it rigorously, and derive meaningful conclusions.
This informed decision-making is instrumental in achieving organizational goals and staying ahead in competitive environments.
For academic researchers, adherence to robust research methodology is a hallmark of excellence. Institutions value research that adheres to high standards of methodology, fostering a culture of academic rigour and intellectual integrity. Furthermore, it prepares students with critical skills applicable beyond academia.
Research methodology instills a problem-solving mindset by encouraging researchers to approach challenges systematically. It equips individuals with the skills to dissect complex issues, formulate hypotheses, and devise effective strategies for investigation.
In the pursuit of knowledge and discovery, understanding the fundamentals of research methodology is paramount.
Research, in its essence, is a systematic and organized process of inquiry aimed at expanding our understanding of a particular subject or phenomenon. It involves the exploration of existing knowledge, the formulation of hypotheses, and the collection and analysis of data to draw meaningful conclusions.
Research is a dynamic and iterative process that contributes to the continuous evolution of knowledge in various disciplines.
Research takes on various forms, each tailored to the nature of the inquiry. Broadly classified, research can be categorized into two main types:
To conduct effective research, one must understand the different components of research methodology. These components form the scaffolding that supports the entire research process, ensuring its coherence and validity.
Research design serves as the blueprint for the entire research project. It outlines the overall structure and strategy for conducting the study. The three primary types of research design are:
Choosing the right data collection methods is crucial for obtaining reliable and relevant information. Common methods include:
Once data is collected, analysis becomes imperative to derive meaningful conclusions. Different methodologies exist for quantitative and qualitative data:
Selecting an appropriate research method is a critical decision in the research process. It determines the approach, tools, and techniques that will be used to answer the research questions.
Quantitative research involves the collection and analysis of numerical data, providing a structured and objective approach to understanding and explaining phenomena.
Experimental research involves manipulating variables to observe the effect on another variable under controlled conditions. It aims to establish cause-and-effect relationships.
Key Characteristics:
Applications: Commonly used in scientific studies and psychology to test hypotheses and identify causal relationships.
Survey research gathers information from a sample of individuals through standardized questionnaires or interviews. It aims to collect data on opinions, attitudes, and behaviours.
Applications: Widely employed in social sciences, marketing, and public opinion research to understand trends and preferences.
Descriptive research seeks to portray an accurate profile of a situation or phenomenon. It focuses on answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.
Applications: Useful in situations where researchers want to understand and describe a phenomenon without altering it, common in social sciences and education.
Qualitative research emphasizes exploring and understanding the depth and complexity of phenomena through non-numerical data.
A case study is an in-depth exploration of a particular person, group, event, or situation. It involves detailed, context-rich analysis.
Applications: Common in social sciences, psychology, and business to investigate complex and specific instances.
Ethnography involves immersing the researcher in the culture or community being studied to gain a deep understanding of their behaviours, beliefs, and practices.
Applications: Widely used in anthropology, sociology, and cultural studies to explore and document cultural practices.
Grounded theory aims to develop theories grounded in the data itself. It involves systematic data collection and analysis to construct theories from the ground up.
Applications: Commonly applied in sociology, nursing, and management studies to generate theories from empirical data.
Research design is the structural framework that outlines the systematic process and plan for conducting a study. It serves as the blueprint, guiding researchers on how to collect, analyze, and interpret data.
Exploratory Design
Exploratory research design is employed when a researcher aims to explore a relatively unknown subject or gain insights into a complex phenomenon.
Applications: Valuable in the early stages of investigation, especially when the researcher seeks a deeper understanding of a subject before formalizing research questions.
Descriptive research design focuses on portraying an accurate profile of a situation, group, or phenomenon.
Applications: Widely used in social sciences, marketing, and educational research to provide detailed and objective descriptions.
Explanatory research design aims to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how’ behind observed relationships.
Applications: Commonly employed in scientific studies and social sciences to delve into the underlying reasons behind observed patterns.
Cross-Sectional Design
Cross-sectional designs collect data from participants at a single point in time.
Applications: Suitable for studying characteristics or behaviours that are stable or not expected to change rapidly.
Longitudinal designs involve the collection of data from the same participants over an extended period.
Applications: Ideal for studying developmental processes, trends, or the impact of interventions over time.
Experimental Design
Experimental designs involve manipulating variables under controlled conditions to observe the effect on another variable.
Applications: Commonly used in scientific studies, psychology, and medical research to establish causal relationships.
Non-experimental designs observe and describe phenomena without manipulating variables.
Applications: Suitable for studying complex phenomena in real-world settings where manipulation may not be ethical or feasible.
Effective data collection is fundamental to the success of any research endeavour.
Objective Design:
Structured Format:
Pilot Testing:
Sampling Strategy:
Establishing Rapport:
Open-Ended Questions:
Active Listening:
Ethical Considerations:
1. Participant Observation
Immersive Participation:
Field Notes:
Ethical Awareness:
Objective Observation:
Data Reliability:
Contextual Understanding:
1. Using Existing Data
Identifying Relevant Archives:
Data Verification:
Ethical Use:
Incomplete or Inaccurate Archives:
Temporal Bias:
Access Limitations:
Conducting research is a complex and dynamic process, often accompanied by a myriad of challenges. Addressing these challenges is crucial to ensure the reliability and validity of research findings.
Sampling Bias:
Measurement Error:
Timeline Pressures:
Selection Bias:
Conducting successful research relies not only on the application of sound methodologies but also on strategic planning and effective collaboration. Here are some tips to enhance the success of your research methodology:
Well-defined research objectives guide the entire research process. Clearly articulate the purpose of your study, outlining specific research questions or hypotheses.
A thorough literature review provides a foundation for understanding existing knowledge and identifying gaps. Invest time in reviewing relevant literature to inform your research design and methodology.
A detailed plan serves as a roadmap, ensuring all aspects of the research are systematically addressed. Develop a detailed research plan outlining timelines, milestones, and tasks.
Ethical practices are fundamental to maintaining the integrity of research. Address ethical considerations early, obtain necessary approvals, and ensure participant rights are safeguarded.
Research methodologies evolve, and staying updated is essential for employing the most effective techniques. Engage in continuous learning by attending workshops, conferences, and reading recent publications.
Unforeseen challenges may arise during research, necessitating adaptability in methods. Be flexible and willing to modify your approach when needed, ensuring the integrity of the study.
Research is often an iterative process, and refining methods based on ongoing findings enhances the study’s robustness. Regularly review and refine your research design and methods as the study progresses.
What is research methodology?
Research methodology is the systematic process of planning, executing, and evaluating scientific investigation. It encompasses the techniques, tools, and procedures used to collect, analyze, and interpret data, ensuring the reliability and validity of research findings.
Research methodologies include qualitative and quantitative approaches. Qualitative methods involve in-depth exploration of non-numerical data, while quantitative methods use statistical analysis to examine numerical data. Mixed methods combine both approaches for a comprehensive understanding of research questions.
To write a research methodology, clearly outline the study’s design, data collection, and analysis procedures. Specify research tools, participants, and sampling methods. Justify choices and discuss limitations. Ensure clarity, coherence, and alignment with research objectives for a robust methodology section.
In the methodology section of a research paper, describe the study’s design, data collection, and analysis methods. Detail procedures, tools, participants, and sampling. Justify choices, address ethical considerations, and explain how the methodology aligns with research objectives, ensuring clarity and rigour.
Mixed research methodology combines both qualitative and quantitative research approaches within a single study. This approach aims to enhance the details and depth of research findings by providing a more comprehensive understanding of the research problem or question.
Research methodology involves a systematic and well-structured approach to conducting scholarly or scientific inquiries. Knowing the significance of research methodology and its different components is crucial as it serves as the basis for any study.
Typically, your research topic will start as a broad idea you want to investigate more thoroughly. Once you’ve identified a research problem and created research questions, you must choose the appropriate methodology and frameworks to address those questions effectively.
Research methodology is the process, or the way, you intend to execute your study. The methodology section of a research paper outlines how you plan to conduct your study. It covers various steps such as data collection, statistical analysis, participant observation, and other procedures involved in the research process.
The methods section should describe the process that will convert your idea into a study. Additionally, the outcomes of your process must provide valid and reliable results consistent with the aims and objectives of your research. This rule of thumb holds true whether your paper leans qualitative or quantitative.
Studying the research methods used in related studies can provide helpful insights and direction for your own research. You can easily discover papers related to your topic on SciSpace and utilize our AI research assistant, Copilot, to quickly review the methodologies applied in different papers.
While deciding on your approach to your research, the reasons or factors you weighed in choosing a particular problem and formulating a research topic need to be validated and explained. A research methodology helps you do exactly that. Moreover, a good research methodology lets you build the argument that validates the research work you performed through various data collection methods, analytical methods, and other essential points.
Just imagine it as a strategy documented to provide an overview of what you intend to do.
While writing up or performing the research itself, you may drift toward details of little importance. In such a case, a research methodology helps you get back to your outlined plan of work.
A research methodology helps in keeping you accountable for your work. Additionally, it can help you evaluate whether your work is in sync with your original aims and objectives or not. Besides, a good research methodology enables you to navigate your research process smoothly and swiftly while providing effective planning to achieve your desired results.
Usually, you should include the following aspects when deciding on the basic structure of your research methodology:
Explain what research methods you’re going to use. Whether you intend to proceed with a quantitative or qualitative approach, or a composite of both, you need to state that explicitly. The choice among the three depends on your research’s aim, objectives, and scope.
Based on logic and reason, let your readers know why you have chosen said research methodologies. Additionally, you have to build strong arguments supporting why your chosen research method is the best way to achieve the desired outcome.
The mechanism encompasses the research methods or instruments you will use to develop your research methodology. It usually refers to your data collection methods. You can use interviews, surveys, physical questionnaires, etc., from among the many available mechanisms, as research methodology instruments. The data collection method is determined by the type of research and whether the data is quantitative (numerical) or qualitative (perceptions, morale, etc.). Moreover, you need to put logical reasoning behind choosing a particular instrument.
The results will be available once you have finished experimenting. However, you should also explain how you plan to use the data to interpret the findings. This section also aids in understanding the problem from within, breaking it down into pieces, and viewing the research problem from various perspectives.
Anything that you feel must be explained to spread more awareness among readers and focus groups must be included and described in detail. You should not just specify your research methodology on the assumption that a reader is aware of the topic.
All the relevant information that explains and simplifies your research paper must be included in the methodology section. If you are conducting your research in a non-traditional manner, give a logical justification and list its benefits.
Include information about the sample and sample space in the methodology section. The term "sample" refers to a smaller set of data that a researcher selects from a larger group of people or focus groups using a predetermined selection method. Let your readers know how you are going to distinguish between relevant and non-relevant samples. How you arrived at the exact numbers backing your research methodology, i.e. the sample size, must be discussed thoroughly.
For example, if you are going to conduct a survey or interview, then by what procedure will you select the interviewees (or the sample size, in the case of surveys), and how exactly will the interview or survey be conducted?
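To make the sampling discussion concrete, here is a hedged sketch of how a survey sample might be sized and drawn. It uses Cochran’s well-known sample-size formula with conventional 95%-confidence settings; the respondent list is a made-up placeholder:

```python
# Sketch: sizing a survey sample with Cochran's formula, then drawing
# a simple random sample. Population and settings are hypothetical.
import math
import random

z = 1.96   # z-score for 95% confidence
p = 0.5    # assumed population proportion (the most conservative choice)
e = 0.05   # desired margin of error

n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # Cochran's formula
size = math.ceil(n0)                    # 385 for these settings
print(f"required sample size: {size}")

population = [f"respondent_{i}" for i in range(1, 1001)]
random.seed(42)  # fix the seed so the draw is reproducible in the write-up
sample = random.sample(population, size)
print(f"drew {len(sample)} unique respondents")
```

For small populations, a finite-population correction would shrink the required number further.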
This part, which is frequently assumed to be unnecessary, is actually very important. The challenges and limitations inherent in your chosen strategy must be specified, whatever type of research you are conducting.
You must have observed that all research papers, dissertations, and theses carry a chapter entirely dedicated to research methodology. This section helps maintain your credibility as a careful interpreter of results rather than a manipulator of them.
A good research methodology always explains the procedure, the data collection methods and techniques, and the aim and scope of the research. It leads to a well-organized, rational approach, while a paper lacking it often comes across as messy or disorganized.
You should pay special attention to justifying your chosen approach to the research methodology. This becomes extremely important if you select an unconventional or distinctive method of execution.
Curating and developing a strong, effective research methodology can assist you in addressing a variety of situations.
As a researcher, you must choose the tools or data collection methods that best fit the relevance of your research. This decision has to be a wise one.
Many research tools exist that you can use to carry out your research process. These are classified as:
An interview aimed at getting your desired research outcomes can be conducted in many different ways. For example, you can design your interview as structured, semi-structured, or unstructured; what sets them apart is the degree of formality in the questions. In a group interview, on the other hand, your aim should be to collect opinions and group perceptions from the focus groups on a certain topic rather than looking for formal answers.
In surveys, you are in better control if you specifically draft the questions you seek responses to. For example, you may choose to include free-style questions that can be answered descriptively, or you may provide multiple-choice responses. You can also opt for both, deciding what suits your research process and purpose better.
Similar to group interviews, here you select a group of individuals and assign them a topic to discuss or express their opinions on freely. You can note down the answers as you go and later draft them appropriately, deciding on the relevance of every response.
If your research domain is the humanities or sociology, observation is a well-proven method for building your research methodology. You can study the spontaneous responses of participants to a situation, or conduct the same in a more structured manner: a structured observation means putting the participants in a situation at a previously decided time and then studying their responses.
Of all the tools described above, it is you who must wisely choose the instruments and decide what best fits your research. Do not restrict yourself from using multiple methods or a combination of instruments if that is appropriate for drafting a good research methodology.
A research methodology exists in various forms. Depending upon their approach, whether centered around words, numbers, or both, methodologies are distinguished as qualitative, quantitative, or an amalgamation of both.
When a research methodology primarily focuses on words and textual data, then it is generally referred to as qualitative research methodology. This type is usually preferred among researchers when the aim and scope of the research are mainly theoretical and explanatory.
The instruments used are observations, interviews, and sample groups. You can use this methodology if you are trying to study human behavior or response in some situations. Generally, qualitative research methodology is widely used in sociology, psychology, and other related domains.
If your research is majorly centered on data, figures, and stats, then analyzing these numerical data is often referred to as quantitative research methodology. You can use quantitative research methodology if your research requires you to validate or justify the obtained results.
In quantitative methods, surveys, tests, experiments, and evaluations of existing databases can be used to advantage as instruments. If your research involves testing a hypothesis, use this methodology.
As the name suggests, the amalgam methodology uses both quantitative and qualitative approaches. This methodology is used when a part of the research requires you to verify the facts and figures, whereas the other part demands you to discover the theoretical and explanatory nature of the research question.
The instruments for the amalgam methodology require you to conduct interviews and surveys, including tests and experiments. The outcome of this methodology can be insightful and valuable as it provides precise test results in line with theoretical explanations and reasoning.
The amalgam method makes your work both factual and rational at the same time.
If you have stayed attentive to the aims and scope of your research, you should by now have an idea of which research methodology suits your work best.
Before deciding which research methodology answers your research question, you must invest significant time in reading and doing your homework. Consulting references that yield relevant results should be your first step in establishing a research methodology.
Moreover, you should never refrain from exploring other options. Before setting your work in stone, try all the available options: doing so explains why the research methodology you finally choose is more appropriate than the alternatives.
Opt for a quantitative research methodology if your research requires gathering large amounts of data, figures, and statistics, or if it involves the validation of a hypothesis. If, instead, you are looking for explanations, reasons, opinions, and public perceptions around a theory, use a qualitative research methodology. The choice of an appropriate research methodology ultimately depends on what you want to achieve through your research.
1. How to write a research methodology?
You can provide a separate section for research methodology in which you specify the methods and instruments used during the research, discuss the analysis of results, provide insights into the background information, and convey the research limitations.
There generally exist four types of research methodology, i.e.
The set of techniques or procedures followed to discover and analyze the information gathered to validate or justify a research outcome is generally called research methodology.
Your research methodology directly reflects the validity of your research outcomes and how well-informed your research work is. Moreover, it can help future researchers cite or refer to your research if they plan to use a similar research methodology.
Part of the book series: Progress in IS (PROIS)
The digital age has brought dramatic changes to this study and research discipline, as to other fields of human activity. Scientific research has a long history, yet compared with other research fields, business and management research is relatively young. The information technologies and research methodologies that have recently emerged are dramatically changing the nature of research. Researchers should therefore be ready to absorb new possibilities while following the basic rules established in earlier stages of the discipline. The intention of this chapter is to give beginner researchers a brief introduction to these aspects of pertinent research. The chapter presents the nature of scientific research so that it may be clearly understood, using the fundamental principles of problem solving as its basic approach. The scope of the research provides an overview of assumptions about reality, knowledge, and human nature, and presents key terms of theory and research. The main concepts of research are discussed, with an orientation toward business, management, and economic science.
Authors and Affiliations
Department of Business Technologies and Entrepreneurship, Vilnius Gediminas Technical University, Saulėtekio al. 11, 10223 Vilnius, Lithuania
Vida Davidavičienė
Correspondence to Vida Davidavičienė .
Editors and Affiliations
Department of Computing Science, University of Oldenburg, Oldenburg, Germany
Jorge Marx Gómez
Department of Banking and Finance, Arab International University, Damascus, Syrian Arab Republic
Sulaiman Mouselli
© 2018 Springer International Publishing AG, part of Springer Nature
Davidavičienė, V. (2018). Research Methodology: An Introduction. In: Marx Gómez, J., Mouselli, S. (eds) Modernizing the Academic Teaching and Research Environment. Progress in IS. Springer, Cham. https://doi.org/10.1007/978-3-319-74173-4_1
Published: 31 March 2018
Print ISBN: 978-3-319-74172-7
Online ISBN: 978-3-319-74173-4
Lawrence Mbuagbaw
1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada
2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada
3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon
Livia Puljak
4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia
5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA
6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada
7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada
8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.
We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?
Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).
In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1 .
Fig. 1. Trends in the number of studies that mention "methodological review" or "meta-epidemiological study" in PubMed.
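The keyword tally described above can be reproduced in outline against NCBI's public E-utilities `esearch` endpoint. A minimal Python sketch, using only the standard library; the query string mirrors the title/abstract search described in the text but is an assumption, not the authors' exact strategy, and `count_records` performs a live network call:

```python
# Sketch: counting PubMed records per publication year for the
# methodology-related terms mentioned above, via NCBI E-utilities.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_term(year: int) -> str:
    """Title/abstract query for one publication year, in PubMed syntax."""
    return (
        '("methodological review"[Title/Abstract] OR '
        '"meta-epidemiological study"[Title/Abstract]) '
        f"AND {year}[pdat]"
    )

def count_records(year: int) -> int:
    """Number of PubMed records matching the query in a given year."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": build_term(year), "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])
```

Plotting `count_records(year)` over a range of years would reproduce a trend line of the kind shown in Fig. 1.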
The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.
The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.
Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.
Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.
Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines were preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.
These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].
There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.
Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese Journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].
Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.
In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.
Q: How should I select research reports for my methodological study?
A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].
The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
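The sampling strategies above can be sketched in a few lines. A minimal illustration, with an invented record structure and group labels (e.g. Cochrane vs non-Cochrane), contrasting a simple random sample with equal-sized strata:

```python
# Sketch: simple random sampling vs stratified sampling of research
# reports. Record fields and group labels are illustrative assumptions.
import random
from collections import defaultdict

def simple_random_sample(reports, n, seed=0):
    """Each report has the same probability of selection."""
    rng = random.Random(seed)
    return rng.sample(reports, n)

def stratified_sample(reports, key, n_per_group, seed=0):
    """Equal-sized samples per stratum, to avoid underrepresenting
    the smaller group when comparing groups."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in reports:
        strata[key(r)].append(r)
    sample = []
    for group in sorted(strata):
        sample.extend(rng.sample(strata[group], n_per_group))
    return sample

# Usage: 100 reports, 80 Cochrane and 20 non-Cochrane; a simple random
# sample would underrepresent the smaller group, stratification will not.
reports = [{"id": i, "source": "cochrane" if i < 80 else "other"}
           for i in range(100)]
balanced = stratified_sample(reports, key=lambda r: r["source"], n_per_group=10)
```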
Q: How many databases should I search?
A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.
Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
Q: Should I publish a protocol for my methodological study?
A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.
Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).
Q: How to appraise the quality of a methodological study?
A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.
Q: Should I justify a sample size?
A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:
For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
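A confidence-interval-based sample size for a proportion (e.g. the proportion of trials reporting an item of interest) follows the standard formula n = z²·p(1−p)/d². A sketch with illustrative values for the expected proportion p and margin d; this is the textbook formula, not necessarily the exact calculation El Dib et al. used:

```python
# Sketch: articles required to estimate a proportion p within a margin
# of error d. The 1.96 critical value assumes a 95% confidence level.
import math

def sample_size_proportion(p: float, d: float, z: float = 1.96) -> int:
    """Minimum number of articles for estimating p to within +/- d."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n)

# e.g. expecting roughly 50% adherence, estimated to within
# +/- 5 percentage points:
n_required = sample_size_proportion(p=0.5, d=0.05)  # -> 385
```

Taking p = 0.5 is the conservative choice, since p(1−p) is maximized there.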
Q: What should I call my study?
A: Other terms which have been used to describe/label methodological studies include "methodological review", "methodological survey", "meta-epidemiological study", "systematic review", "systematic survey", "meta-research", "research-on-research" and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes "systematic review" – as this will likely be confused with a systematic review of a clinical question. "Systematic survey" may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using "systematic" sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word "systematic" may be true for methodological studies and could be potentially misleading. "Meta-epidemiological study" is ideal for indexing, but not very informative as it describes an entire field. The term "review" may point towards an appraisal or "review" of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term "survey" is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term "methodological study" is broad enough to capture most of the scenarios of such studies.
Q: Should I account for clustering in my methodological study?
A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: "What variables are relevant to methodological studies?"
A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
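Short of fitting a full GEE or mixed-effects model, the classic design-effect formula (Kish's DEFF) illustrates how much clustering widens uncertainty: ignoring it leaves an i.i.d. standard error too small by a factor of √DEFF. A sketch with illustrative numbers for cluster size and intraclass correlation:

```python
# Sketch: the design effect for equal clusters of size m (articles per
# journal) with intraclass correlation rho. Ignoring it yields unduly
# narrow confidence intervals. Numbers below are illustrative.
import math

def design_effect(m: int, rho: float) -> float:
    """Variance inflation factor: DEFF = 1 + (m - 1) * rho."""
    return 1 + (m - 1) * rho

def corrected_se(naive_se: float, m: int, rho: float) -> float:
    """Cluster-adjusted standard error from a naive (i.i.d.) one."""
    return naive_se * math.sqrt(design_effect(m, rho))

# 10 articles per journal and a modest within-journal correlation of 0.1
# nearly double the variance of a naive estimate:
deff = design_effect(m=10, rho=0.1)              # -> 1.9
se = corrected_se(naive_se=0.05, m=10, rho=0.1)  # ~0.069, not 0.05
```

This back-of-the-envelope check motivates the regression-based remedies (GEE, mixed models) described above, which handle unequal clusters and covariates properly.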
Q: Should I extract data in duplicate?
A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
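The mechanics of reconciling duplicate extraction can be as simple as diffing the two extractors' records and flagging disagreements for discussion. A minimal sketch; the field names and values are invented:

```python
# Sketch: flagging disagreements between two independent data extractors
# so they can be resolved before analysis.
def discrepancies(extractor_a: dict, extractor_b: dict) -> dict:
    """Map each disagreeing field to the pair of extracted values."""
    fields = extractor_a.keys() | extractor_b.keys()
    return {
        f: (extractor_a.get(f), extractor_b.get(f))
        for f in fields
        if extractor_a.get(f) != extractor_b.get(f)
    }

# Usage: two extractions of the same trial report disagree on blinding.
a = {"n_randomized": 120, "blinding": "double", "funding": "industry"}
b = {"n_randomized": 120, "blinding": "single", "funding": "industry"}
conflicts = discrepancies(a, b)  # -> {"blinding": ("double", "single")}
```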
Q: Should I assess the risk of bias of research reports included in my methodological study?
A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].
Q: What variables are relevant to methodological studies?
A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:
Q: Should I focus only on high impact journals?
A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.
Q: Can I conduct a methodological study of qualitative research?
A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.
Q: What reporting guidelines should I use for my methodological study?
A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.
Q: What are the potential threats to validity and how can I avoid them?
A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.
Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
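As a rough illustration of adjustment, the sketch below applies a simple stratified standardization to hypothetical article-level data. This is not the regression approach used by Zhang et al., just a minimal example of accounting for one confounder (guideline endorsement) when comparing completeness of reporting by funding source; all records are invented:

```python
# Each record: (funded_by_industry, journal_endorses_guideline, reported_completely)
articles = [
    (True, True, True), (True, True, True), (True, False, False),
    (False, True, True), (False, False, False), (False, False, True),
    (True, False, True), (False, True, True), (True, True, False),
    (False, False, False),
]

def completeness_rate(records):
    """Proportion of records reported completely."""
    return sum(r[2] for r in records) / len(records) if records else 0.0

# Crude (unadjusted) comparison of funded vs unfunded articles.
funded = [r for r in articles if r[0]]
unfunded = [r for r in articles if not r[0]]
print("crude:", completeness_rate(funded), completeness_rate(unfunded))

def adjusted_rate(records, exposed):
    """Rate for the exposed/unexposed group, standardized to the overall
    distribution of the confounder (guideline endorsement)."""
    total = len(records)
    rate = 0.0
    for stratum in (True, False):
        in_stratum = [r for r in records if r[1] == stratum]
        weight = len(in_stratum) / total
        group = [r for r in in_stratum if r[0] == exposed]
        rate += weight * completeness_rate(group)
    return rate

print("adjusted:", adjusted_rate(articles, True), adjusted_rate(articles, False))
```

With real data, the same logic is usually delegated to a regression model; the stratified version simply makes the weighting explicit.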
With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.
Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.
Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.
To inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:
A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].
Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].
In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].
Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].
Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].
Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].
In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.
Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
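A test of that kind of null hypothesis can be run with a standard two-proportion z-test (normal approximation). The sketch below uses invented counts, not the figures from Tricco et al.:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) for standard normal
    return z, p_value

# Hypothetical counts: 45/100 non-Cochrane vs 30/100 Cochrane reviews
# reporting positive findings.
z, p = two_proportion_z_test(45, 100, 30, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

For small samples an exact test (e.g. Fisher's) would be preferable; the normal approximation is shown here only because it is self-contained.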
Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n = 103) [ 30 ].
Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
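The random and systematic sampling strategies above can be sketched in a few lines. The sampling frame of 5,000 report identifiers and the sample size of 200 are hypothetical:

```python
import random

# Hypothetical sampling frame: identifiers for 5000 eligible research reports.
frame = [f"PMID{100000 + i}" for i in range(5000)]
sample_size = 200

# Simple random sampling: every report has the same chance of selection.
rng = random.Random(42)  # fixed seed so the sample is reproducible
random_sample = rng.sample(frame, sample_size)

# Systematic sampling: pick a random start, then take every k-th report.
k = len(frame) // sample_size
start = rng.randrange(k)
systematic_sample = frame[start::k][:sample_size]

print(len(random_sample), len(systematic_sample))
```

Recording the seed (or the random start) in the methods section makes the sample reproducible by other investigators.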
Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.
Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
This framework is outlined in Fig. 2 .
A proposed framework for methodological studies
Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.
In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.
Abbreviations

| Abbreviation | Full term |
|---|---|
| CONSORT | Consolidated Standards of Reporting Trials |
| EPICOT | Evidence, Participants, Intervention, Comparison, Outcome, Timeframe |
| GRADE | Grading of Recommendations, Assessment, Development and Evaluations |
| PICOT | Participants, Intervention, Comparison, Outcome, Timeframe |
| PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| SWAR | Studies Within a Review |
| SWAT | Studies Within a Trial |
LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.
This work did not receive any dedicated funding.
Ethics approval and consent to participate.
Not applicable.
Competing interests.
DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Research methods are the strategies, processes or techniques utilized in the collection of data or evidence for analysis in order to uncover new information or create better understanding of a topic.
There are different types of research methods which use different tools for data collection.
Qualitative Research gathers data about lived experiences, emotions or behaviours, and the meanings individuals attach to them. It assists in enabling researchers to gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful in the exploration of how or why things have occurred, interpreting events and describing actions.
Quantitative Research gathers numerical data which can be ranked, measured or categorised through statistical analysis. It assists with uncovering patterns or relationships, and for making generalisations. This type of research is useful for finding out how many, how much, how often, or to what extent.
Mixed Methods Research integrates both Qualitative and Quantitative Research. It provides a holistic approach, combining and analysing the statistical data with deeper contextualised insights. Using Mixed Methods also enables Triangulation, or verification, of the data from two or more sources.
Finding Mixed Methods research in the Databases
“mixed model*” OR “mixed design*” OR “multiple method*” OR multimethod* OR triangulat*
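Outside a database, the same truncation logic can be approximated locally, for example by translating each search term into a regular expression and screening titles. This is a rough sketch; real databases apply subject headings and more sophisticated matching:

```python
import re

# Each quoted term uses `*` truncation, as in the search string above.
terms = ["mixed model*", "mixed design*", "multiple method*",
         "multimethod*", "triangulat*"]

# Translate "term*" into a word-boundary-anchored, case-insensitive regex.
patterns = [re.compile(r"\b" + re.escape(t.rstrip("*")) + r"\w*", re.IGNORECASE)
            for t in terms]

def matches_search(title):
    """True if any search term (with truncation) appears in the title."""
    return any(p.search(title) for p in patterns)

print(matches_search("A multimethod study of reading habits"))    # True
print(matches_search("Triangulating survey and interview data"))  # True
print(matches_search("A purely qualitative case study"))          # False
```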
| Qualitative Techniques or Tools | Quantitative Techniques or Tools |
|---|---|
| Interviews: these can be structured, semi-structured or unstructured in-depth sessions with the researcher and a participant. | Surveys or questionnaires: which ask the same questions to large numbers of participants or use Likert scales which measure opinions as numerical data. |
| Focus groups: with several participants discussing a particular topic or a set of questions. Researchers can be facilitators or observers. | Observation: which can either involve counting the number of times a specific phenomenon occurs, or the coding of observational data in order to translate it into numbers. |
| Observation: on-site, in-context or role-play options. | Document screening: sourcing numerical data from financial reports or counting word occurrences. |
| Document analysis: interrogation of correspondence (letters, diaries, emails etc.) or reports. | Experiments: testing hypotheses in laboratories, testing cause and effect relationships, through field experiments, or via quasi- or natural experiments. |
| Oral histories: remembrances or memories of experiences told to the researcher. | |
Research methodology can be understood as a way to systematically solve or answer the research problem. Essentially, it is the process of studying how research is done in a scientific manner. Through the methodology, we study the various steps that are generally adopted by a researcher in studying his/her research problem and the underlying logic behind them. The selection of the research method is crucial for what conclusions you can make about a phenomenon: it affects what you can say about the cause of, and the factors influencing, the phenomenon.
Research methods refer to the tools that one uses to do research. These can be qualitative, quantitative or mixed. Quantitative methods examine numerical data and often require the use of statistical tools to analyse the data collected. This allows for the measurement of variables, and relationships between them can then be established. This type of data can be represented using graphs and tables. Qualitative data is non-numerical and focuses on establishing patterns. Mixed methods combine both qualitative and quantitative research methods, and allow for the explanation of unexpected results.
Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project. The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data. The boxes below contain a list of quantitative, qualitative, and mixed research methods.
Constructing Questionnaires
When constructing your questions for a survey or questionnaire, there are things you can do to ensure that your questions are accurate and easy to understand (Dawson, 2019):
Quantitative Research Measures
When you are considering a quantitative approach to your research, you need to identify what types of measures you will use in your study. This will determine what type of numbers you will be using to collect your data. There are four levels of measurement: nominal, ordinal, interval, and ratio.
Focus Groups
This is when a select group of people gather to talk about a particular topic. They can also be called discussion groups or group interviews (Dawson, 2019). They are usually led by a moderator who helps guide the discussion and asks certain questions. It is critical that a moderator allows everyone in the group a chance to speak so that no one dominates the discussion. The data gathered from focus groups tend to be thoughts, opinions, and perspectives about an issue.
Advantages of Focus Groups
Disadvantages of Focus Groups
Observation
There are two ways to conduct research observations:
Open-Ended Questionnaires
These types of questionnaires are the opposite of "multiple choice" questionnaires because the answer boxes are left open for the participant to complete. This means that participants can write short or extended answers to the questions. Upon gathering the responses, researchers will often "quantify" the data by organizing the responses into different categories. This can be time consuming because the researcher needs to read all responses carefully.
Semi-structured Interviews
This is the most common type of interview where researchers aim to get specific information so they can compare it to other interview data. This requires asking the same questions for each interview, but keeping their responses flexible. This means including follow-up questions if a subject answers a certain way. Interview schedules are commonly used to aid the interviewers, which list topics or questions that will be discussed at each interview (Dawson, 2019).
Theoretical Analysis
Often used for nonhuman research, theoretical analysis is a qualitative approach where the researcher applies a theoretical framework to analyze something about their topic. A theoretical framework gives the researcher a specific "lens" to view the topic and think about it critically. It also serves as context to guide the entire study. This is a popular research method for analyzing works of literature, films, and other forms of media. You can implement more than one theoretical framework with this method, as many theories complement one another.
Common theoretical frameworks for qualitative research are (Grant and Osanloo, 2014):
Unstructured Interviews
These are in-depth interviews where the researcher tries to understand an interviewee's perspective on a situation or issue. They are sometimes called life history interviews. It is important not to bombard the interviewee with too many questions so they can freely disclose their thoughts (Dawson, 2019).
Other mixed method approaches that incorporate quantitative and qualitative research methods depend heavily on the research topic. It is strongly recommended that you collaborate with your academic advisor before finalizing a mixed method approach.
How do you determine which research method would be best for your proposal? This heavily depends on your research objective. According to Dawson (2019), there are several questions to ask yourself when determining the best research method for your project:
methodology
Did You Know?
Methodology and Science
The methodology employed in an experiment is essential to its success, and bad methodology has spoiled thousands of research projects. So whenever a piece of research is published in a scientific or medical journal, the researchers always carefully describe their methodology; otherwise, other scientists couldn't possibly judge the quality of what they've done.
Etymology: New Latin methodologia, from Latin methodus + -logia (-logy). First known use: 1800, in the meaning defined at sense 1.
“Methodology.” Merriam-Webster.com Dictionary , Merriam-Webster, https://www.merriam-webster.com/dictionary/methodology. Accessed 18 Aug. 2024.
A title page is required for all APA Style papers. There are both student and professional versions of the title page. Students should use the student version of the title page unless their instructor or institution has requested they use the professional version. APA provides a student title page guide (PDF, 199KB) to assist students in creating their title pages.
The student title page includes the paper title, author names (the byline), author affiliation, course number and name for which the paper is being submitted, instructor name, assignment due date, and page number, as shown in this example.
Title page setup is covered in the seventh edition APA Style manuals in the Publication Manual Section 2.3 and the Concise Guide Section 1.6
Student papers do not include a running head unless requested by the instructor or institution.
Follow the guidelines described next to format each element of the student title page.
| Element | Format | Example |
|---|---|---|
| Paper title | Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms. | |
| Author names | Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name. | Cecily J. Sinclair and Adam Gonzaga |
| Author affiliation | For a student paper, the affiliation is the institution where the student attends school. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author name(s). | Department of Psychology, University of Georgia |
| Course number and name | Provide the course number as shown on instructional materials, followed by a colon and the course name. Center the course number and name on the next double-spaced line after the author affiliation. | PSY 201: Introduction to Psychology |
| Instructor name | Provide the name of the instructor for the course using the format shown on instructional materials. Center the instructor name on the next double-spaced line after the course number and name. | Dr. Rowan J. Estes |
| Assignment due date | Provide the due date for the assignment. Center the due date on the next double-spaced line after the instructor name. Use the date format commonly used in your country. | October 18, 2020 |
| Page number | Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header. | 1 |
The professional title page includes the paper title, author names (the byline), author affiliation(s), author note, running head, and page number, as shown in the following example.
Follow the guidelines described next to format each element of the professional title page.
| Element | Format | Example |
|---|---|---|
| Paper title | Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms. | |
| Author names | Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name. | Francesca Humboldt |
| | When different authors have different affiliations, use superscript numerals after author names to connect the names to the appropriate affiliation(s). If all authors have the same affiliation, superscript numerals are not used (see Section 2.3 of the Publication Manual for more on how to set up bylines and affiliations). | Tracy Reuter, Arielle Borovsky, and Casey Lew-Williams |
| Author affiliation | For a professional paper, the affiliation is the institution at which the research was conducted. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author names; when there are multiple affiliations, center each affiliation on its own line. | Department of Nursing, Morrigan University |
| | When different authors have different affiliations, use superscript numerals before affiliations to connect the affiliations to the appropriate author(s). Do not use superscript numerals if all authors share the same affiliations (see Section 2.3 of the Publication Manual for more). | Department of Psychology, Princeton University |
| Author note | Place the author note in the bottom half of the title page. Center and bold the label “Author Note.” Align the paragraphs of the author note to the left. For further information on the contents of the author note, see Section 2.7 of the Publication Manual. | n/a |
| Running head | The running head appears in all-capital letters in the page header of all pages, including the title page. Align the running head to the left margin. Do not use the label “Running head:” before the running head. | PREDICTION ERRORS SUPPORT CHILDREN’S WORD LEARNING |
| Page number | Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header. | 1 |
Definition:
Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.
Types of Research Design are as follows:
Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.
Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.
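The strength and direction of a linear relationship between two variables is commonly summarized with Pearson's correlation coefficient. A minimal sketch with invented data (daily hours of social media use vs. exam score):

```python
import math
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: more hours of use pairs with lower scores.
hours = [1, 2, 2, 3, 4, 5, 6]
scores = [88, 85, 90, 80, 78, 72, 65]
r = pearson_r(hours, scores)
print(f"r = {r:.2f}")  # close to -1: a strong negative linear relationship
```

Note that a strong r only describes association; as the section on experimental design explains, establishing causality requires manipulation of a variable.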
Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.
Quasi-Experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.
Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.
Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.
The format of a research design typically includes the following sections:
An Example of Research Design could be:
Research question: Does the use of social media affect the academic performance of high school students?
Research design:
Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:
Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.
Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.
The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.
Some of the key purposes of research design include:
There are numerous applications of research design in various fields, some of which are:
Here are some advantages of research design:
| Research Design | Research Methodology |
|---|---|
| The plan and structure for conducting research that outlines the procedures to be followed to collect and analyze data. | The set of principles, techniques, and tools used to carry out the research plan and achieve research objectives. |
| Describes the overall approach and strategy used to conduct research, including the type of data to be collected, the sources of data, and the methods for collecting and analyzing data. | Refers to the techniques and methods used to gather, analyze, and interpret data, including sampling techniques, data collection methods, and data analysis techniques. |
| Helps to ensure that the research is conducted in a systematic, rigorous, and valid way, so that the results are reliable and can be used to make sound conclusions. | Includes a set of procedures and tools that enable researchers to collect and analyze data in a consistent and valid manner, regardless of the research design used. |
| Common research designs include experimental, quasi-experimental, correlational, and descriptive studies. | Common research methodologies include qualitative, quantitative, and mixed-methods approaches. |
| Determines the overall structure of the research project and sets the stage for the selection of appropriate research methodologies. | Guides the researcher in selecting the most appropriate research methods based on the research question, research design, and other contextual factors. |
| Helps to ensure that the research project is feasible, relevant, and ethical. | Helps to ensure that the data collected is accurate, valid, and reliable, and that the research findings can be interpreted and generalized to the population of interest. |
Genome Biology volume 25, Article number: 213 (2024)
In biomedical research, validating a scientific discovery hinges on the reproducibility of its experimental results. However, in genomics, the definition and implementation of reproducibility remain imprecise. We argue that genomic reproducibility, defined as the ability of bioinformatics tools to maintain consistent results across technical replicates, is essential for advancing scientific knowledge and medical applications. Initially, we examine different interpretations of reproducibility in genomics to clarify terms. Subsequently, we discuss the impact of bioinformatics tools on genomic reproducibility and explore methods for evaluating these tools regarding their effectiveness in ensuring genomic reproducibility. Finally, we recommend best practices to improve genomic reproducibility.
Reproducibility is a cornerstone principle across various scientific disciplines, each adapting the concept to suit its specific nuances [ 1 , 2 , 3 , 4 ]. The topic of reproducibility has garnered significant attention as experts across fields highlight the need to establish standards for validating scientific findings. Definitions of reproducibility and related concepts, such as replicability and robustness, often vary by discipline. In computational research, these concepts are often defined based on whether the code and data utilized are identical. For instance, Whitaker’s matrix [ 5 ] organizes the concepts of reproducibility into a framework in which the interplay between code and data determines whether the findings are reproducible, replicable, robust, or generalizable, categorizing outcomes based on the consistency of the code and data used in research. On the other hand, Essawy et al. [ 2 ] present a hierarchical pyramid model of the reproducibility taxonomy for complex computational studies, outlining the progression from repeatability through runnability and reproducibility to replicability, each requiring increasing levels of effort and time (Additional file 1 : Table S1).
In genomics, reproducibility hinges on both experimental procedures and computational methods, facilitating recent strides toward precision medicine [ 6 ]. The analysis of genomic data fuels tailored treatments and improved patient outcomes. Yet, ensuring the credibility and progress of genomic medicine demands reproducible results across laboratories.
The multifaceted nature of reproducibility in genomics research is reflected in its dependence on both experimental procedures and computational methods. This complexity is underscored by the diverse steps involved in data production and analysis, spanning experimental procedures such as sample preparation and sequencing, as well as computational tasks like read alignment, variant calling, and gene expression analysis. Furthermore, the experimental variability occurring during the production of genomic data poses a considerable challenge for bioinformatics tools, as they are supposed to generate consistent genomic results under such variation.
This aspect is commonly referred to as methods reproducibility in experimental studies [ 4 ]. Methods reproducibility, as defined by Goodman et al. [ 4 ], pertains to the ability to execute the experimental and computational procedures as precisely as possible, using the same data and tools, in order to yield identical results. In the context of genomics, methods reproducibility refers to obtaining the same results across multiple runs of the bioinformatics tools using the same parameters and genomic data (Fig. 1 ). Ideally, bioinformatics tools should also provide consistent results when analyzing genomic data obtained from different sequencing runs, including in different laboratories, but using the same protocols. A single, universally recognized term that describes the impact of bioinformatics tools on genomic results across such technical replicates is currently lacking. Pan et al. discuss reproducibility in the context of specific bioinformatics tasks. For instance, the reproducibility impact of read alignment tools is referred to as “aligner reproducibility,” while the reproducibility of structural variant callers is termed “caller reproducibility” [ 7 ]. The authors assess the consistency of these bioinformatics tasks across multiple tools and datasets. The closest definitions for this assessment were introduced by Goodman et al. [ 4 ] as results reproducibility and by Gundersen [ 8 ] as outcome reproducibility. Results reproducibility is the ability to obtain the same results when independent studies on different datasets are conducted with procedures closely resembling the original study [ 8 ]. However, the concept of results reproducibility was defined to target the reproduction of an experiment involving a handful of statistical tests, rather than the analysis of high-dimensional and heterogeneous multi-omics data produced regularly by large collaborative genomics initiatives today.
Therefore, we propose the term genomic reproducibility, which measures the ability of bioinformatics tools to produce consistent outcomes from genomic data obtained from different library preparations and sequencing runs under fixed experimental protocols (Fig. 1 ).
Schematic representation of three key concepts: technical replicates, methods reproducibility, and genomic reproducibility. The same sample is processed (library preparation) and sequenced multiple times, possibly in different laboratories, but using the same experimental protocols and sequencing platform. The output of these sequencing runs are technical replicates represented as FASTQ files. Data analysis is performed for each technical replicate multiple times to assess consistency of genomic results, which refers to methods reproducibility. Genomic reproducibility, on the other hand, evaluates the consistency of genomic results across technical replicates
We explore various interpretations of reproducibility before focusing on its specific application within genomics, with the goal of refining key terminology in this context. Our focus extends to the pivotal role of bioinformatics tools and their impact on genomic reproducibility, followed by an evaluation of methodologies for assessing these tools. Additionally, we examine relevant studies and technical replicate datasets as valuable resources for assessing genomic reproducibility. In conclusion, we propose actionable best practices to enhance genomic reproducibility.
Genomic reproducibility faces challenges at two pivotal junctures. The initial stage involves pre-sequencing and sequencing, where technical variability might emerge. Subsequently, during computational analysis and interpretation of genomic data, stochastic algorithms can introduce uncertainties, further impacting reproducibility. In the context of DNA sequencing, technical variability can arise from the use of diverse sequencing platforms [ 9 ] and from differences between individual flow cells [ 7 , 10 , 11 ]. Even if the sequencing protocol is kept identical across multiple runs, experimental variation is still expected as a result of the random sampling variance of the sequencing process and variations in library preparation [ 12 , 13 , 14 ]. In light of this, the objective of bioinformatics tools should be to accommodate and tolerate such experimental variation, aiming to generate consistent results across different sequencing runs and library preparations, which means achieving genomic reproducibility.
In genomics, replicates are classified into two types, biological replicates and technical replicates [ 15 ]. Biological replicates utilize multiple biological samples sharing identical conditions to quantify the inherent biological variation among them. On the other hand, technical replicates are obtained from the same biological sample sequenced multiple times, using the same experimental and computational procedures. They are used to assess and account for variability arising from the experimental process itself, such as inconsistencies in sample handling, instrument performance, or measurement techniques. Below, we focus on the best practices of using and simulating technical replicates to assess genomic reproducibility. Importantly, when assessing genomic reproducibility, we do not rely on gold standards, as the focus is not on the performance of the tools but on their capacity to maintain consistent results across technical replicates.
In practice, controlling conditions of sequencing experiments is challenging and high levels of experimental variations may compromise the ability of bioinformatics tools to maintain consistent results across technical replicates. In order to evaluate the performance of bioinformatics tools in terms of genomic reproducibility, one can consider technical replicates that specifically capture the variations among sequencing runs and library preparation techniques. This approach intentionally disregards other potential factors that could confound the results, such as sequencing protocols and platforms, allowing technical replicates acquired under the same sequencing protocols to be utilized to evaluate bioinformatics tools' impact. However, generating technical replicates can escalate both the financial burden and logistical complexity of genomic experiments. In certain cases, obtaining them may be impractical or ethically prohibitive, particularly in clinical settings.
Bioinformatics tools play a crucial role in analyzing and eliminating undesired variation in genomic data. Variations in genomic data can arise due to multiple sources, such as experimental noise, sequencing errors, or biological artifacts. For example, homopolymer compression is employed to mitigate errors in regions with repeating nucleotide sequences by simplifying these sequences to enhance alignment accuracy [ 16 ]. Furthermore, normalization processes are used to remove batch effects or technical biases, ensuring that systematic errors do not confound the results [ 17 ]. Despite their critical roles, these tools are imperfect and can introduce various kinds of variation, both deterministic and stochastic [ 18 ].
Deterministic variations include algorithmic biases that lead alignment algorithms to favor certain sequences over others. For example, BWA [ 19 ] and Stampy [ 20 ] demonstrate a reference bias, favoring sequences containing reference alleles of a known heterozygous indel [ 20 ]. Additionally, data processing decisions such as setting a low threshold for quality filtering can admit low-quality reads prone to sequencing errors, thus introducing further unwanted variation [ 21 ].
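To make the quality-threshold decision concrete, here is a minimal sketch of read filtering by mean Phred quality. The function names and the tuple representation of reads are illustrative assumptions, not the interface of any particular tool; the point is that the `threshold` value directly determines which error-prone reads enter downstream analysis.

```python
def mean_phred(qual_string, offset=33):
    """Mean Phred score of a read, decoded from its ASCII quality string."""
    return sum(ord(c) - offset for c in qual_string) / len(qual_string)

def filter_reads(reads, threshold=20):
    """Keep only reads whose mean Phred quality meets the threshold.

    `reads` is a list of (sequence, quality_string) tuples; lowering
    `threshold` admits more error-prone reads into the analysis.
    """
    return [(seq, qual) for seq, qual in reads if mean_phred(qual) >= threshold]

reads = [
    ("ACGT", "IIII"),  # 'I' encodes Phred 40 at every base -> kept
    ("ACGT", "!!!!"),  # '!' encodes Phred 0 at every base -> dropped
]
kept = filter_reads(reads, threshold=20)
```

Documenting the chosen threshold alongside the results is one simple way to make this deterministic source of variation transparent.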
Stochastic variations in bioinformatics tools, on the other hand, stem from the intrinsic randomness of certain computational processes, such as Markov Chain Monte Carlo and genetic algorithms. Consequently, these variations may produce divergent outcomes even when identical datasets are analyzed under identical conditions.
While bioinformatics tools aim to increase the accuracy of genomic data analysis and reduce sequencing errors, they can also introduce additional variation due to their built-in biases. For example, one of the challenges of read alignment tools is capturing and reporting reads mapped to repetitive regions of the reference genome, known as multi-mapped reads [ 22 ]. There exist different strategies to deal with the uncertainty of multi-mapped reads: some tools ignore these reads entirely (e.g., SNAP [ 23 ]), and others employ a deterministic approach to identify the best possible position among all the matching positions (e.g., RazerS [ 24 ] and mrFAST [ 25 ]), and finally, BWA-MEM [ 19 ] reports these multi-mapped reads with a mapping quality of zero. In the case of multi-mapping, allowing users to set a seed for a pseudo-random generator can restore the reproducibility of stochastic alignment strategies.
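The seed-based remedy mentioned above can be sketched in a few lines. This is a hypothetical stand-in for an aligner's internal tie-breaking step, not the API of BWA-MEM or any real tool: given several equally good mapping positions for a multi-mapped read, a user-supplied seed makes the random choice repeatable.

```python
import random

def resolve_multimap(candidate_positions, seed=None):
    """Pick one mapping position among equally good candidates.

    With a fixed seed, repeated runs make the same choice, restoring
    reproducibility of an otherwise stochastic alignment strategy;
    with seed=None the choice may differ between runs.
    """
    rng = random.Random(seed)
    return rng.choice(candidate_positions)

# Two runs with the same seed agree on the reported position.
positions = [1042, 55873, 901224]
first = resolve_multimap(positions, seed=42)
second = resolve_multimap(positions, seed=42)
```

Using a local `random.Random(seed)` instance rather than the global generator keeps the behavior independent of any other randomness in the program.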
According to one study, random shuffling of reads affects Bowtie2 [ 26 ] and BWA-MEM [ 19 ] differently [ 27 ]. Bowtie2 produces consistent alignment results irrespective of the order of the reads, while BWA-MEM [ 19 ] can show variability in results when the order of the reads is altered. Specifically, BWA-MEM displayed variability under specific test conditions where reads were segmented and processed independently. This deviation from BWA-MEM’s typical integrated parallel processing can alter the calculated size distribution of the read inserts, as the analysis relies on smaller groups of shuffled data. This approach, although not commonly used, highlights the potential for irreproducible mapping results with BWA-MEM with respect to read order. Such variations could also influence the consistency of structural variant detection. Alkan et al. also found that structural variant calling tools produced variant call sets that differed by 3.5 to 25.0% on randomly shuffled data compared to the original data [ 27 ]. Furthermore, another study highlights that the detection of structural variants varies significantly across different SV (structural variant) callers and even among the same callers when different read alignment tools are used [ 7 ]. It was previously shown that these variations were mainly attributed to duplications in repeat regions [ 27 ]. These studies demonstrate the potential impact of bioinformatics algorithms on the reproducibility of genomic results and emphasize the significance of assessing it with replicates.
Ongoing efforts in genomics include ensuring whole-genome sequencing (WGS) reproducibility, with notable initiatives including the Genome in a Bottle (GIAB) consortium, hosted by the National Institute of Standards and Technology (NIST), and the HapMap project. Complementary efforts were performed within consecutive phases of the US FDA-led MicroArray/Sequencing Quality Control Project (MAQC/SEQC), which helps improve microarray and next-generation sequencing technologies and foster their proper applications in discovery, development, and review of FDA-regulated products. In the MAQC-IV/SEQC phase, the aim was to assess the technical performance of next-generation sequencing platforms by generating benchmark datasets with reference samples and evaluating the advantages and limitations of various bioinformatics strategies in RNA and DNA analyses. The impact of various bioinformatics approaches on the downstream biological interpretations of RNA-seq results was comprehensively examined, and the utility of RNA-seq in clinical application and safety evaluation was assessed. In SEQC2, the next phase of SEQC, the focus has been placed on targeted DNA- and RNA-seq to develop standard analysis protocols and quality control metrics for fit-for-purpose use of NGS data to enhance regulatory science research and precision medicine. On the other hand, consortia such as GIAB and the HapMap project provide reference materials that are used to evaluate genomic reproducibility in various studies. In Table 1 , DNA and RNA-seq technical replicate datasets from major consortia and studies are compiled, which can be used to assess genomic reproducibility.
Technical replicates of the Ashkenazi Trio dataset were generated to assess the performance of DNA sequencing platforms [ 9 ]. This involved generating triplicates of inter-laboratory and intra-laboratory paired-end and single-end DNA-seq samples using five Illumina and three ThermoFisher Ion Torrent platforms. This dataset can serve as a valuable resource for assessing genomic reproducibility by examining the performance of DNA-seq alignment tools and structural variant callers using both paired-end and single-end triplicate samples. The Chinese Quartet dataset, the HapMap Trio, and a pilot genome NA12878 are datasets with technical replicates that have been generated for structural variant detection studies [ 7 , 31 ]. Pan et al. used technical replicates from the Chinese Quartet to assess reproducibility across three different labs using different alignment and structural variant callers [ 7 , 31 ]. These technical replicates were sequenced at three different labs as triplicates representing different sequencing runs. The same dataset was used to evaluate how sequencing centers, replicates, alignment tools, and platforms affect SV calling in NGS [ 31 ]. Additionally, the HapMap Trio and the NA12878 datasets were employed in a separate SV calling study to examine reproducibility across various factors, including sequencing platforms, labs, library preparations, alignment tools, and SV calling tools [ 7 ]. These technical replicates consist of triplicates of short reads, which can again be used to assess genomic reproducibility, and the findings can be compared to those available in SV calling studies [ 7 , 31 ]. Lastly, we mention an RNA-seq dataset provided by the SEQC consortium [ 32 ], which has been employed to assess the reproducibility of RNA-seq experiments [ 17 , 33 ] and also the impact of RNA-seq data analysis tools on gene expression analysis [ 18 ]. Four samples were sequenced in four technical replicates each. The whole experiment was replicated at six different sites worldwide, and a fifth replicate was prepared by a vendor and sent to the labs for sequencing. All RNA-seq technical replicates used in these studies are made publicly available, serving as a valuable resource for assessing genomic reproducibility.
In certain conditions, such as when the number of technical replicates is limited for a specific type of genomic data or when reproducibility assessment requires a more controlled environment, synthetic replicates may be employed instead of technical replicates. This approach allows for a more controlled examination of the impact of specific alterations in the data. Synthetic replicates are generated in silico to mimic the variations of sequencing output expected from technical replicates. In practice, it is impossible to computationally reproduce all variations among technical replicates, but different techniques exist to generate synthetic replicates that reflect some of the variations.
One approach to create synthetic replicates is randomly shuffling the order of the reads reported from a sequencer (Fig. 2 a), which reflects the randomness of events in a sequencing experiment, such as DNA hybridization on the flow cell [ 27 ]. Another technique is to take the reverse complement of each read (Fig. 2 b) to assess strand bias [ 34 ] when the reference genome is double-stranded. The bias arises due to a pronounced overabundance in one direction of NGS sequencing reads either forward or reverse, compared to the opposite direction [ 35 ]. This problem may lead to unwanted variation, which can impact genomic reproducibility. Yet another technique is bootstrapping (Fig. 2 c) reflecting random sampling variance, which is a widely used type of synthetic replicate employed in many genomics, transcriptomics [ 36 ], and metagenomics [ 37 ] studies. Subsampling (Fig. 2 d) is another type of synthetic replicate, which involves randomly selecting a subset of reads from the original dataset. This method simulates different levels of sequencing depth and coverage from the stochastic nature of sequencing.
Schematic representation of generating synthetic replicates. Based on a given dataset consisting of five reads R1, …, R5 (left) four different types of synthetic replicates (right) are created by either randomly shuffling the order of the five reads ( a ), or by taking the reverse complement of each read ( b ), or by bootstrapping, i.e., resampling of the five reads with replacement ( c ), or by subsampling, i.e., selecting a subset consisting of three reads from the original five reads ( d )
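The four synthetic-replicate techniques of Fig. 2 can be sketched directly. This is a minimal illustration over a toy list of reads, assuming reads are plain sequence strings; real implementations would operate on FASTQ records and preserve quality strings. Fixed seeds are used so each replicate is itself reproducible.

```python
import random

def shuffle_reads(reads, seed=0):
    """(a) Randomly permute the order of the reads."""
    rng = random.Random(seed)
    shuffled = list(reads)
    rng.shuffle(shuffled)
    return shuffled

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(reads):
    """(b) Reverse-complement every read."""
    return [r.translate(COMPLEMENT)[::-1] for r in reads]

def bootstrap(reads, seed=0):
    """(c) Resample reads with replacement, keeping the original size."""
    rng = random.Random(seed)
    return [rng.choice(reads) for _ in reads]

def subsample(reads, k, seed=0):
    """(d) Select a random subset of k reads without replacement."""
    rng = random.Random(seed)
    return rng.sample(reads, k)

reads = ["ACGT", "GGGA", "TTCA", "CATG", "AAAC"]
```

Note the difference visible even in this sketch: shuffling and reverse complementing preserve the read set exactly, whereas bootstrapping and subsampling change its composition, which is why the latter two complicate direct read-level comparisons.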
Both technical replicates and synthetic replicates have their own advantages and limitations. Technical replicates contribute to a more realistic and reliable assessment by accounting for inherent variability in experimental procedures, such as different sequencing runs, and enabling rigorous statistical analysis. On the other hand, synthetic replicates offer a controlled evaluation of tools since the modifications applied to the data are known, allowing for a precise assessment against a ground truth. Hence, utilizing both types of replicates can be useful in assessing genomic reproducibility.
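Whichever replicate type is used, assessing genomic reproducibility ultimately comes down to measuring the concordance of results across replicates without a gold standard. As one hedged illustration (the tuple encoding of a variant call is an assumption for this sketch, not a standard), the Jaccard index between two call sets gives a simple concordance score:

```python
def jaccard(calls_a, calls_b):
    """Jaccard index between two variant call sets.

    Each call is represented here as a (chromosome, position, alt) tuple;
    1.0 means the two replicates yielded identical call sets.
    """
    a, b = set(calls_a), set(calls_b)
    if not a and not b:
        return 1.0  # two empty call sets are trivially concordant
    return len(a & b) / len(a | b)

rep1 = {("chr1", 101, "A"), ("chr1", 250, "T"), ("chr2", 77, "G")}
rep2 = {("chr1", 101, "A"), ("chr1", 250, "T"), ("chr2", 80, "G")}
concordance = jaccard(rep1, rep2)  # 2 shared calls out of 4 distinct -> 0.5
```

In practice, the exact-match comparison shown here would need relaxing (e.g., position tolerance for structural variants), which is precisely the encoding ambiguity discussed below for SV callers.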
A fundamental hurdle in achieving reproducibility with structural variant callers lies in the inherent ambiguity of encoding genomic variants, stemming from biological complexities rather than technical limitations. While indels (insertions and deletions) can be left-aligned or normalized to a standard representation to facilitate comparability, complex alterations such as large deletions, insertions, duplications, inversions, and translocations present unique characteristics that complicate their consistent encoding and comparison across different replicates and bioinformatics tools. For instance, what one tool interprets as a single large deletion might be seen as multiple smaller deletions by another, due to differences in read alignments. Translocations further exemplify these difficulties, especially when they involve subtle additional changes, such as small insertions at the junction points, which might be detected by some tools but overlooked by others. These complexities significantly challenge assessing the genomic reproducibility of structural variant callers.
Moreover, the detection and characterization of SVs are intricately linked to the performance of read alignment processes. Inaccuracies or variability in aligning sequencing reads to the reference genome can have profound downstream effects on the identification and interpretation of SVs.
We have compiled a set of recommended standards and guidelines aimed at promoting genomic reproducibility (Table 2 ). These recommendations are based on the expectation that bioinformatics tools already adhere to existing dependency and workflow management standards, enabling their identical execution in different settings [ 38 ]. Dependency management systems like conda, along with shared computing environments and containers such as Docker and Apptainer (formerly Singularity) [ 39 , 40 , 41 ] play a crucial role in ensuring consistent software environments.
We suggest the following best practices for the development and application of bioinformatics tools to ensure genomic reproducibility. First, tools should be documented sufficiently, including detailed explanations of all parameters, their default settings, usage examples, and guidelines. This documentation assists users in selecting appropriate parameter values, which is essential for reproducibility. Furthermore, tool developers should clarify the relationship between parameter selection and reproducibility in the documentation to facilitate accurate and consistent results.
The second essential requirement involves incorporating functionality that allows users to specify random seeds. By implementing this feature, developers give users control over the random outputs generated by non-deterministic algorithms. This control is vital for ensuring that the same set of input data consistently produces the same output, which is the cornerstone of any systematic assessment of genomic reproducibility. By setting seeds, researchers can replicate runs of bioinformatics tools under the same conditions, thereby validating the reliability of the results and facilitating a transparent evaluation of genomic reproducibility.
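Exposing such a seed at the command line is a small amount of code for a tool developer. The following is a generic sketch, not modeled on any specific tool's interface, showing a `--seed` option that feeds a dedicated random generator:

```python
import argparse
import random

def build_parser():
    """Minimal CLI sketch: a user-settable seed makes stochastic steps repeatable."""
    parser = argparse.ArgumentParser(description="toy bioinformatics tool")
    parser.add_argument(
        "--seed",
        type=int,
        default=None,
        help="seed for the pseudo-random generator; set this to make "
             "repeated runs with identical inputs produce identical outputs",
    )
    return parser

args = build_parser().parse_args(["--seed", "7"])
rng = random.Random(args.seed)  # pass this rng to every stochastic step
```

Documenting the default (here, unseeded and therefore non-deterministic) in the help text, as recommended above, lets users make an informed choice.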
Another recommendation pertains to the performance assessment of the bioinformatics tool. It is essential to conduct controlled experiments using synthetic replicates, technical replicates, or a combination of both. The result obtained from these experiments, along with any observed discrepancies or variations, should be thoroughly reported. This comprehensive reporting enables researchers to evaluate the performance and reliability of the tool accurately.
Bioinformatics tool developers can enhance reproducibility by providing result visualization from replicates. However, effectively handling visualization and communicating results poses challenges due to the extensive scale and complexity of the genomic data involved. These challenges can be overcome by employing suitable visualization techniques and dimensionality reduction methods. Through careful analysis of patterns of discrepancies from the visualizations, researchers can gain valuable insights into the reliability and consistency of the results produced by the tool.
Given the vast array of bioinformatics tools and methods available, comprehensive benchmarking becomes increasingly important [ 40 , 42 ]. Benchmarking can be used not only to assess performance against a ground truth, but also to assess reproducibility even in the absence of one. Reproducibility benchmarking studies are designed to evaluate the consistency of tools when used across synthetic and technical replicates. This dual approach thoroughly illuminates the reliability of tools by rigorously evaluating their performance across diverse scenarios, including variations in parameter settings and random outputs generated by different seed values. Such detailed evaluations are pivotal for pinpointing and mitigating the uncertainties in parameter selection and the inherent randomness of algorithms, thereby ensuring that tools can reliably reproduce results under similar conditions.
While establishing the reliability of tools through rigorous benchmarking is vital, it is equally essential to acknowledge the potential limitations that may arise, particularly regarding the selection of cell lines for experimentation. One significant challenge is the potential presence of somatic mutations within these cell lines, which can introduce biases in evaluating tool performance. These mutations, occurring during the lifetime of the cell, can inadvertently influence experimental outcomes, leading to skewed results.
To mitigate these challenges and ensure the benchmarking studies themselves are reproducible, it is imperative that they adhere to clear guidelines. These guidelines should cover the documentation of methodologies, parameters, and experimental conditions in detail, facilitating the replication of studies by other researchers [ 40 , 42 ]. Incorporating workflow management systems can further bolster the reproducibility of benchmarking studies by automating and documenting the analytical processes, thereby enhancing the consistency and transparency of genomic research.
In addition to structured benchmarking, community-driven and continuous benchmarking efforts can play important roles in advancing bioinformatics tools. For example, continuous benchmarking, as supported by Omnibenchmark [ 43 ], enables researchers to monitor tool efficacy amidst evolving datasets and computational landscapes, adapting to emerging challenges and driving progress in genomic research. This ongoing process reinforces the foundation of genomic reproducibility, promoting transparency, accountability, and adaptability within the scientific community. Embracing this structured and iterative approach to benchmarking enhances the reliability of bioinformatics tools and fortifies the foundation of genomic reproducibility.
Reproducibility is critical in all fields of science, engineering, and medicine to ensure the reliability and integrity of findings and the safeness of their applications. However, there are various challenges and limitations to achieving reproducibility in practice. The field of genomics faces several hurdles to reproducibility due to rapid advancements in sequencing technologies and data generation. Each new technology introduces unique biases and sources of variation, which need to be carefully considered and addressed during data analysis. Additionally, genomic studies often involve complex bioinformatics pipelines, which are susceptible to errors and require rigorous validation.
Bioinformatics tools have made significant contributions to mitigating some of these challenges and enhancing genomic reproducibility. These tools facilitate the standardization and automation of data processing, analysis and visualizations, minimizing human error, and increasing the reliability of results. However, bioinformatics tools are not without limitations and can even introduce unwanted variations that compromise genomic reproducibility. The use of technical and synthetic replicates presents valuable approaches for evaluating essential aspects of bioinformatics algorithms and their influence on genomic reproducibility.
The use of technical replicates offers advantages, as it captures the diversity across different sequencing runs. In order to correctly assess bioinformatics tools in terms of genomic reproducibility, it is important to acknowledge that despite efforts to control experimental conditions, variations can arise due to factors such as human errors in sample preparations or unknown batch effects. These confounding factors and other experimental parameters such as variations in sequencing platforms can influence genomic results. We recommend the use of technical replicates to capture variations arising from different runs of sequencing and different library preparations.
Additionally, it is vital to understand the extent to which non-deterministic algorithms influence genomic results and to tailor the assessment of genomic reproducibility accordingly. It is important to note that while setting a seed ensures consistent results under identical conditions and thus facilitates reproducibility, it may also mask the underlying variability across different seeds. This variability, if substantial, raises critical questions about genomic reproducibility.
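To make the seed issue concrete, the toy example below uses a mock analysis function of our own invention (not a real bioinformatics tool): fixing one seed makes every run identical, yet re-running across many seeds exposes the variability that a single fixed seed would hide.

```python
import random

# Illustrative mock of a non-deterministic analysis, e.g. a tool that
# breaks ties between equally good alignments at random. Not a real tool.
def mock_analysis(reads, seed):
    rng = random.Random(seed)
    return sorted(rng.sample(reads, k=len(reads) // 2))

reads = list(range(100))

# Fixing one seed always reproduces the same result ...
assert mock_analysis(reads, seed=42) == mock_analysis(reads, seed=42)

# ... but sweeping over seeds reveals the variability that a single
# fixed seed masks — exactly what a reproducibility assessment should surface.
outcomes = {tuple(mock_analysis(reads, seed=s)) for s in range(20)}
print(f"distinct outcomes across 20 seeds: {len(outcomes)}")
```

If the spread of outcomes across seeds materially changes downstream results (e.g. which variants are called), the tool's single-seed "reproducibility" is an artifact of the fixed seed rather than a property of the method.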
Synthetic replicates are a fast and cost-efficient way of generating replicates in genomics. They cannot fully represent real data variation, as they capture only some of the differences produced between different sequencing runs. However, they provide a useful and easily accessible way of assessing necessary features of bioinformatics algorithms and how they impact genomic reproducibility. When evaluating genomic reproducibility through synthetic replicates, employing shuffling and reverse complementing facilitates meaningful comparisons in read alignment. This approach enables a direct assessment of the read alignments present across synthetic replicates, enhancing the effectiveness of the analysis, because the set of reads is consistent across replicates. In contrast, subsampling and bootstrapping preclude such direct comparisons: subsampling selects a portion of the original reads, and bootstrapping changes the read composition by resampling with replacement. Despite this, subsampling offers valuable insights by allowing the evaluation of bioinformatics tools across different subsets of reads, serving as an indirect measure of reproducibility. Bootstrapping provides a way to simulate various sampling scenarios, creating numerous pseudo-replicates. This method enables the exploration of the inherent variability and stability of read alignment and variant detection under different sampling conditions. By repeatedly analyzing these varied samples, researchers can better understand how changes in read frequency and composition affect the reproducibility and accuracy of genomic analyses.
While we recommend testing tools across synthetic and technical replicates, significant concerns arise from the inherent uncertainty when using different bioinformatics tools or adjusting their settings, which leads to substantial variability in results [44]. This variability, and how method choice contributes to it, can be selectively exploited to achieve desired outcomes, which harms reproducibility [45]. These considerations extend beyond the scope of our study but remain highly relevant in the broader context of genomic analysis.
Precision medicine relies heavily on accurate and reliable genomic information. However, the reliability of genomic results can only be ensured if they are reproducible by bioinformatics tools. As such, it is essential to consider reproducibility as a key evaluation criterion when assessing the quality of these tools. We recommend that both developers and users of bioinformatics tools follow the guidelines in Table 2 to ensure genomic reproducibility. By implementing these guidelines, we can improve the reliability of genomic data analysis and facilitate the successful translation of precision medicine into clinical practice.
No datasets or code were used for data analysis.
Leipzig J, Nüst D, Hoyt CT, Ram K, Greenberg J. The role of metadata in reproducible computational research. Patterns (N Y). 2021;2:100322.
Essawy BT, Goodall JL, Voce D, Morsy MM, Sadler JM, Choi YD, Tarboton DG, Malik T. A taxonomy for reproducible and replicable research in environmental modelling. Environ Model Softw. 2020;134:104753.
Arnold B, et al. The Turing Way: a handbook for reproducible data science. Zenodo. https://doi.org/10.5281/zenodo.3233986.
Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Sci Transl Med. 2016;8:341ps12.
Whitaker K. Showing your working: a guide to reproducible neuroimaging analyses. figshare. 2016. https://doi.org/10.6084/M9.FIGSHARE.4244996.V1.
Hussen BM, et al. The emerging roles of NGS in clinical oncology and personalized medicine. Pathol Res Pract. 2022;230:153760.
Pan B, et al. Assessing reproducibility of inherited variants detected with short-read whole genome sequencing. Genome Biol. 2022;23:2.
Erik Gundersen O. The fundamental principles of reproducibility. Philos Trans A Math Phys Eng Sci. 2021;379:20200210.
Foox J, et al. Performance assessment of DNA sequencing platforms in the ABRF Next-Generation Sequencing Study. Nat Biotechnol. 2021;39:1129–40.
SEQC/MAQC-III Consortium. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium. Nat Biotechnol. 2014;32:903–14. https://doi.org/10.1038/nbt.2957.
Blainey P, Krzywinski M, Altman N. Replication. Nat Methods. 2014;11:879–80. https://doi.org/10.1038/nmeth.3091.
Marioni JC, Mason CE, Mane SM, Stephens M, Gilad Y. RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Res. 2008;18:1509–17.
Łabaj PP, et al. Characterization and improvement of RNA-Seq precision in quantitative transcript expression profiling. Bioinformatics. 2011;27:i383–91.
Fu GK, et al. Molecular indexing enables quantitative targeted RNA sequencing and reveals poor efficiencies in standard library preparations. Proc Natl Acad Sci U S A. 2014;111:1891–6.
Bell G. Replicates and repeats. BMC Biol. 2016;14:28.
Mapping-friendly sequence reductions: going beyond homopolymer compression. iScience. 2022;25:105305.
Li S, et al. Detecting and correcting systematic variation in large-scale RNA sequencing data. Nat Biotechnol. 2014;32:888–95.
Tong L, et al. Impact of RNA-seq data analysis algorithms on gene expression estimation and downstream prediction. Sci Rep. 2020;10:17925.
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv [q-bio.GN]. 2013.
Lunter G, Goodson M. Stampy: a statistical algorithm for sensitive and fast mapping of Illumina sequence reads. Genome Res. 2011;21:936–9.
Ros-Freixedes R, et al. Impact of index hopping and bias towards the reference allele on accuracy of genotype calls from low-coverage sequencing. Genet Sel Evol. 2018;50:64.
Alser M, et al. Technology dictates algorithms: recent developments in read alignment. Genome Biol. 2021;22:249.
Zaharia M, et al. Faster and More Accurate Sequence Alignment with SNAP. arXiv [cs.DS]. 2011.
Weese D, Holtgrewe M, Reinert K. RazerS 3: faster, fully sensitive read mapping. Bioinformatics. 2012;28:2592–9.
Alkan C, et al. Personalized copy number and segmental duplication maps using next-generation sequencing. Nat Genet. 2009;41:1061–7.
Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009;10:R25. https://doi.org/10.1186/gb-2009-10-3-r25.
Firtina C, Alkan C. On genomic repeats and reproducibility. Bioinformatics. 2016;32:2243–7. https://doi.org/10.1093/bioinformatics/btw139.
Ball MP, et al. A public resource facilitating clinical use of genomes. Proc Natl Acad Sci U S A. 2012;109:11920–7.
The International HapMap Consortium. The International HapMap Project. Nature. 2003;426:789–96. https://doi.org/10.1038/nature02168.
Zook JM, et al. An open resource for accurately benchmarking small variant and reference calls. Nat Biotechnol. 2019;37:561–6.
Khayat MM, et al. Hidden biases in germline structural variant detection. Genome Biol. 2021;22:347.
SEQC/MAQC-III Consortium. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium. Nat Biotechnol. 2014;32(9):903–14.
Munro SA, et al. Assessing technical performance in differential gene expression experiments with external spike-in RNA control ratio mixtures. Nat Commun. 2014;5:5125.
Guo Y, et al. The effect of strand bias in Illumina short-read sequencing data. BMC Genomics. 2012;13:666.
Validation of a customized bioinformatics pipeline for a clinical next-generation sequencing test targeting solid tumor-associated variants. J Mol Diagn. 2018;20:355–65.
Al Seesi S, et al. Bootstrap-based differential gene expression analysis for RNA-Seq data with and without replicates. BMC Genomics. 2014;15:S2.
Saremi B, Kohls M, Liebig P, Siebert U, Jung K. Measuring reproducibility of virus metagenomics analyses using bootstrap samples from FASTQ-files. Bioinformatics. 2021;37:1068–75.
Alser M, et al. Packaging and containerization of computational methods. Nat Protoc. 2024. https://doi.org/10.1038/s41596-024-00986-0 .
Brito JJ, et al. Recommendations to enhance rigor and reproducibility in biomedical research. Gigascience. 2020;9.
Weber LM, et al. Essential guidelines for computational method benchmarking. Genome Biol. 2019;20:1–12.
Mangul S, et al. Challenges and recommendations to improve the installability and archival stability of omics computational tools. PLoS Biol. 2019;17:e3000333.
Mangul S, et al. Systematic benchmarking of omics computational tools. Nat Commun. 2019;10(1):1393.
Home - OMNIBENCHMARK. https://omnibenchmark.org .
Wünsch M, et al. From RNA sequencing measurements to the final results: a practical guide to navigating the choices and uncertainties of gene set analysis. Wiley Interdiscip Rev Comput Stat. 2024;16(1):e1643.
Wünsch M, Sauer C, Herrmann M, Hinske LC, Boulesteix A-L. To tweak or not to tweak: how exploiting flexibilities in gene set analysis leads to over-optimism. 2024.
Open access funding provided by Swiss Federal Institute of Technology Zurich. SM is supported by the National Science Foundation (NSF) grants 2041984 and 2316223, National Institutes of Health (NIH) grant R01AI173172, and the USC Office of Research and Innovation Zumberge Preliminary Studies in Research Award.
Serghei Mangul and Niko Beerenwinkel contributed equally to this work.
Department of Biosystems Science and Engineering, ETH Zurich, 4058, Basel, Switzerland
Pelin Icer Baykal & Niko Beerenwinkel
SIB Swiss Institute of Bioinformatics, 4058, Basel, Switzerland
Pelin Icer Baykal, Daniel J. Stekhoven & Niko Beerenwinkel
Małopolska Centre of Biotechnology, Jagiellonian University, 30-387, Gronostajowa 7A, Krakow, Poland
Paweł Piotr Łabaj
Department of Biotechnology, Boku University Vienna, Muthgasse 18, 1190, Vienna, Austria
Cancer Research UK Cambridge Research Institute, Cambridge, CB2 0RE, UK
Florian Markowetz
Department of Oncology, University of Cambridge, Cambridge, CB2 2XZ, UK
Institute for Genome Sciences, University of Maryland School of Medicine, HSFIII, 670 W. Baltimore St, Baltimore, MD, 21201, USA
Lynn M. Schriml
NEXUS Personalized Health Technologies, ETH Zurich, 8952, Zurich, Switzerland
Daniel J. Stekhoven
Titus Family Department of Clinical Pharmacy, USC Alfred E. Mann School of Pharmacy and Pharmaceutical Sciences, University of Southern California, 1540 Alcazar Street, Los Angeles, CA, 90033, USA
Serghei Mangul
Department of Quantitative and Computational Biology, University of Southern California Dornsife College of Letters, Arts, and Sciences, Los Angeles, CA, 90089, USA
PIB was responsible for drafting the entire manuscript. NB and SM contributed significantly to the work presented, both having equal contributions as main contributors. DS provided substantial feedback and insights. PPL identified and pointed out critical data that were included in Table 1 of the manuscript. FM and LS made valuable contributions to the manuscript.
X handles: @cbg_ethz (Niko Beerenwinkel).
Kevin Pang was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
The review history is available as Additional file 2.
Correspondence to Serghei Mangul or Niko Beerenwinkel .
Ethics approval and consent to participate.
This is not applicable for this manuscript.
The authors declare that they have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1: Table S1: Glossary of definitions. Table containing the definitions of all terms used throughout the manuscript.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Baykal PI, Łabaj PP, Markowetz F, et al. Genomic reproducibility in the bioinformatics era. Genome Biol. 2024;25:213. https://doi.org/10.1186/s13059-024-03343-2.
Received : 14 September 2023
Accepted : 23 July 2024
Published : 09 August 2024
DOI : https://doi.org/10.1186/s13059-024-03343-2
ISSN: 1474-760X
Though manifestation and the law of attraction are age-old concepts with origins as far back as Hindu scriptures and Buddhism, they have enjoyed a modern-day refresh since the 2006 documentary “The Secret.” Now, manifestation fills social media feeds and coaching philosophies alike.
The hashtag #Manifestation has over 51 billion views on TikTok. Scroll for a bit and you’ll find not every manifestation coach has the same definition. We spoke to three practicing manifesters about what it means to them.
As a verb, manifesting is “to make evident or certain by showing or displaying.” In other words, manifesting is the act or practice of bringing something into your life through belief.
Candice Nikeia’s lessons in manifestation arrive at a place of gratitude. For her, manifesting is more than just wishing for physical items — a new car, a house, clothes — it’s acknowledging what you already have.
“It's really about learning how to love yourself, thinking of yourself as something that you desire to be, believing in yourself, striving for more than you can ever imagine,” Nikeia says. “Trusting that what's out there for you is out there for you; what's meant for you is meant for you.”
Kathleen Cameron is a wealth manifester. For her, manifestation is all about mindset — the art of “becoming a new version of yourself that you have yet to become in order to create the life that you have yet to experience.”
Ryan Lu has a bit more of a spiritual approach. Her manifestation philosophy is about working with the universe to “meet in the middle.”
“Manifestation is everyone's individual relationship with their higher power, whether that be God, the universe, whatever their religion is, whatever they personally believe,” Lu says.
Manifestation is, at its core, believing you are the person who can achieve the goals you set. Here's a five-step recipe to help get started, according to the coaches:
♦ Be grateful for what you already have
This is the base of Nikeia’s practice. You can start by thinking something positive as you wake up and before you go to bed.
She uses the example of manifesting a new job: instead of dwelling on the feeling of not having that job, focus on what’s currently in your life.
“Be grateful for the moments you have today because when you do get that job, there's something that's going to make you want to complain,” Nikeia says.
♦ Shift your mindset
"It's not necessarily some magical thing you do, abracadabra, it's not like that," Nikeia says. "It's really a mindset and the words you speak — whether you're conscious of it or not, your words are manifesting into your reality."
When it comes to a mindset change, Cameron walks into situations thinking, “What can I do right now that would be something that a confident version of myself would do?”
In daily practice, that’s self care for Cameron — being in tune with becoming her best self.
“Whatever it is for you, that self care for you puts you in a good energy,” Cameron says. “When you look after and care for yourself, you're so much more powerful.”
♦ Put it all out there: Envision what you desire
Once you’re in the mindset to work toward what you want, the rest of the manifesting begins. The most common quick method is with a pen and paper — putting what you desire into words.
Others practice by using repetition, candle magic or waiting for certain phases of the moon.
Lu’s message to beginners? Don’t complicate it. Do what works for you.
“I am a lazy manifester, I simply do not have the time to be doing all these elaborate rituals,” Lu says. “But hey, if they work for someone, please, by all means, do it. That's how I started, I wrote things down.”
One of her favorite methods is making a to-do list on her phone; an easy way to reference those manifestations throughout the year.
Nikeia recommends beginning with affirmations. This could be as simple as speaking aloud your wishes as if they’ve already come true. You could even say it as if you’re talking to a friend or a family member if talking to yourself feels unnatural.
♦ Take inspired action
Lu shared one of her recent manifestations: hitting a million followers on TikTok. While she had been manifesting it since 2020, she only recently hit that milestone.
Her account is an example of manifestation in real time, she says — it’s not magic, you have to take action alongside your desires. There’s no way to reach a million followers if you never post a video.
“I can control making videos, I can control the quality of the videos, I can control the content,” Lu says. “I cannot control if people are going to like that, I can't control if people are going to follow. So I'm doing what I can do, and I'm trusting the universe with the rest.”
♦ Be patient
But what about when things happen that are out of your control? Nikeia calls those “energetic tests.” Part of manifestation is about bringing a positive mindset to negative situations.
“Is this the moment where I say I'm done, I give up? I'm not going to do these manifestations anymore?” she says. “Or is this a test that, okay, I know, this happened to me, but this is happening for me, and not to me, this is happening for my growth.”
There’s also patience — manifesting something once doesn’t mean you’ll wake up with it the next day.
“Life can turn in dramatic ways, but when we find the power in the now and find the realms of just allowing life to unfold at the right time, that is what manifestation is,” Nikeia says.
Whether you’re looking for signs in repeated numbers (known as angel numbers ), collecting crystals or getting tarot readings on the regular, there are many tools that accompany the world of manifesting.
Lu says while these aren’t necessary, they can help us be “in a state of empowerment.”
“Sometimes we already have the answers, we know what's up,” Lu says. “But it's nice to be reassured, to hear something and be like, this is someone I trust, this method I trust, I'm hearing that things are going to work out, and I can sleep tonight.”
But get far enough down the manifesting rabbit hole on TikTok and you may wind up scrolling through dozens of prediction videos or “signs” from the universe.
“If you believe that you're supposed to hear this message, then you're watching every single video getting 1,000 messages, you’re going to be so confused,” Cameron says. “So this is why you have to trust you, you have to trust what you see, what you believe, your intuition, what you feel.”
There’s no scientific evidence that manifestation automatically makes dreams come true, but some research suggests positive mindsets have equally positive outcomes.
One study found that imagining a hypothetical event helps you form a better action plan for making the event actually happen. The concept of growth mindset holds that individuals who believe their talents can grow tend to achieve more than those who don’t.
Certainly, the thought-to-action aspects of manifestation hold some ground. Some scientists have referenced manifestation’s close alignment with cognitive behavioral therapy, which involves modifying unhelpful ways of thinking and behaving to cope with and prevent further psychological problems.
And yet, the “magic” of manifesting that has come into popular culture is not necessarily what many manifestation coaches actually believe.
“It's not about bringing like that one thing or that one amount of money into your life. It's about being a different version of yourself,” Cameron says. “It doesn't come from thin air — it comes from your actions that you take based on the energy that you're in.”
Manifestation, all three coaches said, is something we’re doing all the time. We wish on birthday candles, when we see the time is 11:11 or when we set a particular goal.
“You're always running based on the beliefs you hold,” Cameron says. “Every day you're creating the next day of your life over and over and over again, so people are manifesting but they don't know what or how they're doing it.”
There can be too much of a good thing.
Some research suggests the practice could be harmful for those with anxiety or obsessive-compulsive disorders or who may be actively working toward unlearning harmful “thought equals reality” triggers — what manifestation holds at its core.
“You have to know yourself well enough and your mental health to understand where you are going to take your practice of manifesting,” Lu says. “If it is something that you end up being obsessive over and ends up really impeding your life, please take a step back and find a way that really works for you.”
Definition, Types, and Examples. Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of ...
As we mentioned, research methodology refers to the collection of practical decisions regarding what data you'll collect, from who, how you'll collect it and how you'll analyse it. Research design, on the other hand, is more about the overall strategy you'll adopt in your study. For example, whether you'll use an experimental design ...
Mixed methods. Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you. Note: keep in mind that mixed methods research doesn't just mean collecting both types of data. Rather, it ...
Qualitative Research Methodology. This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.
Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make. First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:
Research methodology is a crucial framework that guides the entire research process. It involves choosing between various qualitative and quantitative approaches, each tailored to specific research questions and objectives. Your chosen methodology shapes how data is gathered, analysed, and interpreted, ultimately influencing the reliability and ...
A research methodology gives research legitimacy and provides scientifically sound findings. It also provides a detailed plan that helps to keep researchers on track, making the process smooth, effective and manageable. A researcher's methodology allows the reader to understand the approach and methods used to reach conclusions.
A research methodology encompasses the way in which you intend to carry out your research. This includes how you plan to tackle things like collection methods, statistical analysis, participant observations, and more. You can think of your research methodology as being a formula. One part will be how you plan on putting your research into ...
1. Qualitative research methodology. Qualitative research methodology is aimed at understanding concepts, thoughts, or experiences. This approach is descriptive and is often utilized to gather in-depth insights into people's attitudes, behaviors, or cultures. Qualitative research methodology involves methods like interviews, focus groups, and ...
Research methodology can be defined as the systematic framework that guides researchers in designing, conducting, and analyzing their investigations. It encompasses a structured set of processes, techniques, and tools employed to gather and interpret data, ensuring the reliability and validity of the research findings.
Provide the rationality behind your chosen approach. Based on logic and reason, let your readers know why you have chosen said research methodologies. Additionally, you have to build strong arguments supporting why your chosen research method is the best way to achieve the desired outcome. 3. Explain your mechanism.
A research methodology is different from a research method because research methods are the tools you use to gather your data (Dawson, 2019). You must consider several issues when it comes to selecting the most appropriate methodology for your topic. Issues might include research limitations and ethical dilemmas that might impact the quality of ...
2.1 Research Methodology. Method can be described as a set of tools and techniques for finding something out, or for reducing levels of uncertainty. According to Saunders (2012) method is the technique and procedures used to obtain and analyse research data, including for example questionnaires, observation, interviews, and statistical and non-statistical techniques [].
Definition: Research refers to the process of investigating a particular topic or question in order to discover new information, ... Research methodology refers to the overall approach and strategy used to conduct a research study. It involves the systematic planning, design, and execution of research to answer specific research questions or ...
In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample ...
In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts). In the past 10 years, there has been an increase in the use of terms related to ...
What are research methods. Research methods are the strategies, processes or techniques utilized in the collection of data or evidence for analysis in order to uncover new information or create better understanding of a topic. There are different types of research methods which use different tools for data collection.
Research Methods. Definition: Research Methods refer to the techniques, procedures, and processes used by researchers to collect, analyze, and interpret data in order to answer research questions or test hypotheses.The methods used in research can vary depending on the research questions, the type of data that is being collected, and the research design.
Research methodology can be understood as a way to systemically solve or answer the research problem. Thus essentially, it can be understood as the process of studying how research is done in a scientific manner. Through the methodology, we study the various steps that are generally adopted by a researcher in studying his/her research problem ...
Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project. The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data. The following boxes below contain a ...
research methods. Since the object of research, particularly applied research, is to arrive at a solution for a given problem, the available data and the unknown aspects of the problem have to be related to each other to make a solution possible. Keeping this in view, research methods can be put into the following three groups: 1.
methodology: [noun] a body of methods, rules, and postulates employed by a discipline : a particular procedure or set of procedures.
For a professional paper, the affiliation is the institution at which the research was conducted. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author names; when there are multiple affiliations, center ...
Definition: Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. ... It is an important planning phase that outlines the research methodology, data collection methods, and ...