A Political Science Guide
For students, researchers, and others interested in doing the work of political science.
Formulating/Extracting Hypotheses
Formulating hypotheses, which are defined as propositions set forth to explain a group of facts or phenomena, is a fundamental component of any research scholarship. Hypotheses lay out the central arguments that will be tested and either verified or rejected in the body of a paper. Papers may address multiple competing or supporting hypotheses in order to cover the full spectrum of explanations for the phenomenon being studied. As such, hypotheses often include statements about the presumed impact of an independent variable on a dependent variable.
Hypotheses should not emanate from preconceived notions about a given relationship between variables, but rather should come about as a product of research. Thus, hypotheses should be formed after developing an understanding of the literature relevant to a given topic rather than before conducting research. Beginning research with a specific argument in mind can lead to discounting other evidence that could either run counter to this preconceived argument or point to other potential explanations.
There are a number of different types of hypotheses utilized in political science research:
- Null hypothesis: states that there is no relationship between two concepts
- Correlative hypothesis: states that there is a relationship between two or more concepts or variables, but doesn’t specify the nature of the relationship
- Directional hypothesis: states the nature of the relationship between concepts or variables. These types of relationships can include positive, negative (inverse), high or low levels of influence, etc.
- Causal hypothesis: states that one variable causes the other
A good hypothesis should be both correlative and directional and most hypotheses in political science research will also be causal, asserting the impact of an independent variable on a dependent variable.
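To make the distinction concrete, here is a minimal sketch, in Python, of how a correlative versus directional relationship might be checked numerically. The variables (gdp_per_capita, turnout) and the simulated data are purely illustrative assumptions, not drawn from any actual study.

```python
# Minimal sketch with invented data: checking whether a relationship exists
# (correlative) and in which direction it runs (directional).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
gdp_per_capita = rng.normal(30_000, 8_000, size=50)                        # independent variable
turnout = 0.4 + 0.000005 * gdp_per_capita + rng.normal(0, 0.05, size=50)   # dependent variable

r, p_value = stats.pearsonr(gdp_per_capita, turnout)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
print("Direction:", "positive" if r > 0 else "negative")
```

A correlative hypothesis only claims that r differs from zero; a directional hypothesis additionally predicts its sign; a causal hypothesis would further require a research design capable of ruling out alternative explanations, which a correlation alone cannot do.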
There are a number of additional considerations that must be taken into account in order to make a hypothesis as strong as possible:
- Hypotheses must be falsifiable, that is, able to be empirically tested. They cannot attribute causation to something like a supernatural entity whose existence can neither be proven nor disproven.
- Hypotheses must be internally consistent, that is, they must prove what they claim to prove and must not contain any logical or analytical contradictions.
- Hypotheses must have clearly defined outcomes (dependent variables) that are both dependent on and vary with the independent variable.
- Hypotheses must be general and should aim to explain as much as possible with as little as possible. As such, hypotheses should have as few exceptions as possible and should not rely on amorphous concepts like ‘national interest.’
- Hypotheses must be empirical statements, that is, propositions about relationships that exist in the real world.
- Hypotheses must be plausible (there must be a logical reason why they might be true) and should be specific (the relationship between variables must be expressed as explicitly as possible) and directional.
- Fearon, James D. 1991. Counterfactuals and Hypothesis Testing in Political Science. World Politics 43 (2): 169–195.
Abstract: “Scholars in comparative politics and international relations routinely evaluate causal hypotheses by referring to counterfactual cases where a hypothesized causal factor is supposed to have been absent. The methodological status and the viability of this very common procedure are unclear and are worth examining. How does the strategy of counterfactual argument relate, if at all, to methods of hypothesis testing based on the comparison of actual cases, such as regression analysis or Mill’s Method of Difference? Are counterfactual thought experiments a viable means of assessing hypotheses about national and international outcomes, or are they methodologically invalid in principle? The paper addresses the first question in some detail and begins discussion of the second. Examples from work on the causes of World War I, the nonoccurrence of World War III, social revolutions, the breakdown of democratic regimes in Latin America, and the origins of fascism and corporatism in Europe illustrate the use, problems and potential of counterfactual argument in small-N-oriented political science research.” – Jstor.org
- King, Gary, Robert Owen Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
- Palazzolo, David and Dave Roberts. 2010. What is a Good Hypothesis? University of Richmond Writing Center.
Contributor: Harrison Polans
Updated July 12, 2017 – MN
University Libraries
PSCI 3300: Introduction to Political Research
Need Help with Basic and Advanced Research?
Visit the Basic and Advanced Library Research Guide to learn more about the library.
Hypothesis in Political Science
"A generalization predicting that a relationship exists between variables. Many generalizations about politics are a sort of folklore. Others proceed from earlier work carried out by social scientists. Within the social sciences most statements about behaviour relate to large groups of people. Hence, testing any hypothesis in the field of political science will involve statistical method. It will be dealing with probabilities.
To test a hypothesis one must pose a null hypothesis. If we wanted to test the validity of the common generalization, 'manual workers tend to vote for the Labour Party' we would begin by assuming the statement was untrue. The investigation would require a sample survey in which manual workers were identified and questions put to them. It would need to be done in several constituencies in different parts of the country. Having collated the data we would use the evidence to test the null hypothesis, employing statistical techniques to assess the probability of acquiring such data if the null hypothesis were correct. These techniques are known as 'significance tests'. They estimate the probability that the rejection of a null hypothesis is a mistake. If the statistical tests indicate that the odds against it being a mistake are 1000 to one, then this is stated as a '.001 level of significance'.
The fact that the research showed that it was highly likely that manual workers 'tend' to vote for the Labour Party would not satisfy most political scientists. They also want to understand those who did not. Consequently much more work would need to be done to refine the hypothesis and define the tendency with more accuracy. Whatever the case, a hypothesis in the social sciences about a group or socio-demographic category can never tell us about the behaviour of an individual in that group or category."
Hypothesis. (1999). In F. Bealey, The Blackwell Dictionary of Political Science. Oxford, United Kingdom: Blackwell Publishers.
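As a rough illustration of the procedure Bealey describes, the sketch below runs a chi-square test of independence on a hypothetical survey table. The counts and the .001 cut-off are invented for illustration, not taken from any real survey.

```python
# Hypothetical survey counts; the null hypothesis is that occupation and
# vote choice are independent.
import numpy as np
from scipy import stats

#                   Labour  Other
survey = np.array([[310,    190],    # manual workers
                   [220,    280]])   # non-manual workers

chi2, p_value, dof, expected = stats.chi2_contingency(survey)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# Reject the null only if the observed data would be very unlikely under it.
if p_value < 0.001:
    print("Reject the null hypothesis at the .001 level.")
else:
    print("Fail to reject the null hypothesis.")
```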
What Is a Quantitative Research Design?
Quantitative research studies produce results that can be used to describe or note numerical changes in measurable characteristics of a population of interest; generalize to other, similar situations; provide explanations of predictions; and explain causal relationships. The fundamental philosophy underlying quantitative research is known as positivism, which is based on the scientific method of research. Measurement is necessary if the scientific method is to be used. The scientific method involves an empirical or theoretical basis for the investigation of populations and samples. Hypotheses must be formulated, and observable and measurable data must be gathered. Appropriate mathematical procedures must be used for the statistical analyses required for hypothesis testing.
Quantitative methods depend on the design of the study (experimental, quasi-experimental, nonexperimental). Study design takes into account all those elements that surround the plan for the investigation, such as research question or problem statement, research objectives, operational definitions, scope of inferences to be made, assumptions and limitations of the study, independent and dependent variables, treatment and controls, instrumentation, systematic data collection actions, statistical analysis, time lines, and reporting procedures. The elements of a research study and experimental, quasi-experimental, and nonexperimental designs are discussed here.
Elements of Quantitative Design
Problem Statement
First, an empirical or theoretical basis for the research problem should be established. This basis may emanate from personal experiences or established theory relevant to the study. From this basis, the researcher may formulate a research question or problem statement.
Operational Definitions
Operational definitions describe the meaning of specific terms used in a study. They specify the procedures or operations to be followed in producing or measuring complex constructs that hold different meanings for different people. For example, intelligence may be defined for research purposes by scores on the Stanford-Binet Intelligence Scale.
Population and Sample
Quantitative methods include the target group (population) to which the researcher wishes to generalize and the group from which data are collected (sample). Early in the planning phase, the researcher should determine the scope of inference for results of the study. The scope of inference pertains to populations of interest, procedures used to select the sample(s), method for assigning subjects to groups, and the type of statistical analysis to be conducted.
Formulation of Hypotheses
Complex questions to compare responses of two or more groups or show relationships between two or more variables are best answered by hypothesis testing. A hypothesis is a statement of the researcher's expectations about a relationship between variables.
Hypothesis Testing
Statements of hypotheses may be written in the alternative or null form. A directional alternative hypothesis states the researcher's predicted direction of change, difference between two or more sample means, or relationship among variables. An example of a directional alternative hypothesis is as follows:
Third-grade students who use reading comprehension strategies will score higher on the State Achievement Test than their counterparts who do not use reading comprehension strategies.
A nondirectional alternative hypothesis states the researcher's predictions without giving the direction of the difference. For example:
There will be a difference in the scores on the State Achievement Test between third-grade students who use reading comprehension strategies and those who do not.
Stated in the null form, hypotheses can be tested for statistically significant differences between groups on the dependent variable(s) or statistically significant relationships between and among variables. The null hypothesis uses the form of “no difference” or “no relationship.” Following is an example of a null hypothesis:
There will be no difference in the scores on the State Achievement Test between third-grade students who use reading comprehension strategies and those who do not.
It is important that hypotheses to be tested are stated in the null form because the interpretation of the results of inferential statistics is based on probability. Testing the null hypothesis allows researchers to test whether differences in observed scores are real, or due to chance or error; thus, the null hypothesis can be rejected or retained.
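A minimal sketch of testing these hypotheses with an independent-samples t test is shown below. The test scores are simulated for illustration, and the one-sided test assumes a SciPy version recent enough (1.6+) to support the alternative keyword.

```python
# Simulated third-grade test scores; both alternatives test the same null
# hypothesis of "no difference".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strategy_group = rng.normal(78, 10, size=40)   # used reading comprehension strategies
control_group = rng.normal(72, 10, size=40)    # did not

# Nondirectional alternative: the group means differ (two-sided test).
t_two, p_two = stats.ttest_ind(strategy_group, control_group)

# Directional alternative: the strategy group scores higher (one-sided test).
t_one, p_one = stats.ttest_ind(strategy_group, control_group, alternative="greater")

print(f"two-sided: t = {t_two:.2f}, p = {p_two:.3f}")
print(f"one-sided: t = {t_one:.2f}, p = {p_one:.3f}")
```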
Organization and Preparation of Data for Analysis
Survey forms, inventories, tests, and other data collection instruments returned by participants should be screened prior to the analysis. John Tukey suggested that exploratory data analysis be conducted using graphical techniques such as plots and data summaries in order to take a preliminary look at the data. Exploratory analysis provides insight into the underlying structure of the data. The existence of missing cases, outliers, data entry errors, unexpected or interesting patterns in the data, and whether or not assumptions of the planned analysis are met can be checked with exploratory procedures.
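The sketch below shows a few of these exploratory checks using pandas and matplotlib; the file name survey.csv is a placeholder, not a dataset referenced in this guide.

```python
# Quick exploratory look at a data file before any inferential analysis.
# "survey.csv" is a placeholder file name.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")

print(df.describe())     # summary statistics for numeric columns
print(df.isna().sum())   # missing cases per column

# Box plots give a quick view of spread, skew, and possible outliers.
df.select_dtypes("number").boxplot()
plt.show()
```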
Inferential Statistical Tests
Important considerations for the choice of a statistical test for a particular study are (a) type of research questions to be answered or hypotheses to be tested; (b) number of independent and dependent variables; (c) number of covariates; (d) scale of the measurement instrument(s) (nominal, ordinal, interval, ratio); and (e) type of distribution (normal or non-normal). Examples of statistical procedures commonly used in educational research are t test for independent samples, analysis of variance, analysis of covariance, multivariate procedures, Pearson product-moment correlation, Mann–Whitney U test, Kruskal–Wallis test, and Friedman's chi-square test.
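One way these considerations can play out in practice is sketched below: if both groups pass a normality check, an independent-samples t test is used; otherwise the nonparametric Mann-Whitney U test is used instead. The data are simulated, and pre-testing normality this way is just one possible decision rule, not a universal recommendation.

```python
# Simulated data; choose between a parametric and a nonparametric test
# based on a normality check of each group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(50, 8, size=35)
group_b = rng.exponential(scale=45, size=35)   # clearly non-normal

def compare_groups(a, b, alpha=0.05):
    _, p_a = stats.shapiro(a)   # Shapiro-Wilk normality check
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        _, p_value = stats.ttest_ind(a, b)
        return "t test", p_value
    _, p_value = stats.mannwhitneyu(a, b)
    return "Mann-Whitney U", p_value

test_name, p_value = compare_groups(group_a, group_b)
print(f"{test_name}: p = {p_value:.4f}")
```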
Results and Conclusions
The level of statistical significance that the researcher sets for a study is closely related to hypothesis testing. This is called the alpha level. It is the level of probability that indicates the maximum risk a researcher is willing to take that observed differences are due to chance. The alpha level may be set at .01, meaning that 1 out of 100 times the results will be due to chance; more commonly, the alpha level is set at .05, meaning that 5 out of 100 times observed results will be due to chance. Alpha levels are often depicted on the normal curve as the critical region, and the researcher must reject the null hypothesis if the data fall into the predetermined critical region. When this occurs, the researcher must conclude that the findings are statistically significant. If the researcher rejects a true null hypothesis (there is, in fact, no difference between the means), a Type I error has occurred. Essentially, the researcher is saying there is a difference when there is none. On the other hand, if a researcher fails to reject a false null (there is, in fact, a difference), a Type II error has occurred. In this case, the researcher is saying there is no difference when a difference exists. The power in hypothesis testing is the probability of correctly rejecting a false null hypothesis. The cost of committing a Type I or Type II error rests with the consequences of the decisions made as a result of the test. Tests of statistical significance provide information on whether to reject or fail to reject the null hypothesis; however, an effect size ( R 2 , eta 2 , phi, or Cohen's d ) should be calculated to identify the strength of the conclusions about differences in means or relationships among variables.
Salkind, Neil J. 2010. Encyclopedia of Research Design . Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781412961288 .
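To illustrate the final point about pairing a significance decision with an effect size, here is a minimal sketch that reports both a t test result and Cohen's d for two simulated groups; the scores and the .05 alpha level are assumptions made for the example.

```python
# Simulated scores; report the hypothesis-test decision and Cohen's d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(75, 10, size=30)
control = rng.normal(70, 10, size=30)

t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d: difference in means divided by the pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
cohens_d = (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

alpha = 0.05
decision = "reject" if p_value < alpha else "fail to reject"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {decision} the null hypothesis at alpha = {alpha}")
print(f"Cohen's d = {cohens_d:.2f}")
```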
Some Terms in Statistics that You Should Know
Bivariate Regression
Central Tendency, Measures of
Chi-Square Test
Cohen's d Statistic
Cohen's f Statistic
Correspondence Analysis
Cross-Sectional Design
Descriptive Statistics
Effect Size, Measure of
Eta-Squared
Factor Loadings
False Positive
Frequency Tables
Alternative Hypotheses
Null Hypothesis
Krippendorff's Alpha
Multiple Regression
Multivariate Analysis of Variance (MANOVA)
Multivariate Normal Distribution
Partial Eta-Squared
Percentile Rank
Random Error
Reliability
Regression Discontinuity
Regression to the Mean
Standard Deviation
Significance, Statistical
Trimmed Mean
Variability, Measure of
Is the term you are looking for not here? Review the Encyclopedia of Research Design below.
SAGE Research Methods is a research methods tool created to help researchers, faculty and students with their research projects. SAGE Research Methods links over 175,000 pages of SAGE’s renowned book, journal and reference content. Researchers can explore methods concepts to help them design research projects, understand particular methods or identify a new method, conduct their research, and write up their findings. Since SAGE Research Methods focuses on methodology rather than disciplines, it can be used across the social sciences, health sciences, and more. Subject coverage includes sociology, health, criminology, education, anthropology, psychology, business, political science, history, economics, among others.
Sage Research Methods has a feature called a Methods Map that can help you explore different types of Research Designs .
You can also explore Cases to see real research using your selected research method to learn how other authors are writing up their findings.