One-Tailed Hypothesis Tests: 3 Example Problems

In statistics, we use hypothesis tests to determine whether some claim about a population parameter is true or not.

Whenever we perform a hypothesis test, we always write a null hypothesis and an alternative hypothesis, which take the following forms:

H 0 (Null Hypothesis): Population parameter =, ≤, or ≥ some value

H A (Alternative Hypothesis): Population parameter <, >, or ≠ some value

There are two types of hypothesis tests:

  • Two-tailed test : Alternative hypothesis contains the ≠ sign
  • One-tailed test : Alternative hypothesis contains either the < or > sign

In a one-tailed test , the alternative hypothesis contains either the less than (“<”) or greater than (“>”) sign. This indicates that we’re testing whether there is a negative or positive effect, respectively.

Check out the following example problems to gain a better understanding of one-tailed tests.

Example 1: Factory Widgets

Suppose it’s assumed that the average weight of a certain widget produced at a factory is 20 grams. However, one engineer believes that a new method produces widgets that weigh less than 20 grams.

To test this, he can perform a one-tailed hypothesis test with the following null and alternative hypotheses:

  • H 0 (Null Hypothesis): μ ≥ 20 grams
  • H A (Alternative Hypothesis): μ < 20 grams

Note : We can tell this is a one-tailed test because the alternative hypothesis contains the less than ( < ) sign. Specifically, we would call this a left-tailed test because we’re testing if some population parameter is less than a specific value.

To test this, he uses the new method to produce 20 widgets and obtains the following information:

  • n = 20 widgets
  • x̄ (sample mean) = 19.8 grams
  • s = 3.1 grams

Plugging these values into the One Sample t-test Calculator , we obtain the following results:

  • t-test statistic: -0.288525
  • one-tailed p-value: 0.388

Since the p-value is not less than .05, the engineer fails to reject the null hypothesis.

He does not have sufficient evidence to say that the true mean weight of widgets produced by the new method is less than 20 grams.
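To make the calculation concrete, here is a minimal sketch in Python (assuming SciPy is available; the One Sample t-test Calculator mentioned above is not required) that reproduces the test statistic and left-tailed p-value from the summary statistics in this example.

```python
from math import sqrt
from scipy import stats

# Summary statistics from the widget example above
n, xbar, s = 20, 19.8, 3.1
mu0 = 20  # hypothesized mean under the null hypothesis

# One-sample t statistic: t = (x̄ - μ0) / (s / √n)
t_stat = (xbar - mu0) / (s / sqrt(n))

# Left-tailed p-value: P(T ≤ t) with n - 1 degrees of freedom
p_value = stats.t.cdf(t_stat, df=n - 1)

print(f"t = {t_stat:.4f}, one-tailed p = {p_value:.4f}")
# Approximately: t = -0.2885, one-tailed p ≈ 0.388
```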

Example 2: Plant Growth

Suppose a standard fertilizer has been shown to cause a species of plants to grow by an average of 10 inches. However, one botanist believes a new fertilizer can cause this species of plants to grow by an average of greater than 10 inches.

To test this, she can perform a one-tailed hypothesis test with the following null and alternative hypotheses:

  • H 0 (Null Hypothesis): μ ≤ 10 inches
  • H A (Alternative Hypothesis): μ > 10 inches

Note : We can tell this is a one-tailed test because the alternative hypothesis contains the greater than ( > ) sign. Specifically, we would call this a right-tailed test because we’re testing if some population parameter is greater than a specific value.

To test this claim, she applies the new fertilizer to a simple random sample of 15 plants and obtains the following information:

  • n = 15 plants
  • x̄ (sample mean) = 11.4 inches
  • s = 2.5 inches
  • t-test statistic: 2.1689
  • one-tailed p-value: 0.0239

Since the p-value is less than .05, the botanist rejects the null hypothesis.

She has sufficient evidence to conclude that the new fertilizer causes an average increase of greater than 10 inches.
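The same approach works for this right-tailed example; a brief sketch (again assuming SciPy) only changes which tail of the t distribution supplies the p-value.

```python
from math import sqrt
from scipy import stats

n, xbar, s, mu0 = 15, 11.4, 2.5, 10  # plant-growth summary statistics from above

t_stat = (xbar - mu0) / (s / sqrt(n))   # ≈ 2.1689
p_value = stats.t.sf(t_stat, df=n - 1)  # right tail: P(T ≥ t) ≈ 0.0239

print(f"t = {t_stat:.4f}, one-tailed p = {p_value:.4f}")
```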

Example 3: Studying Method

A professor currently teaches students to use a studying method that results in an average exam score of 82. However, he believes a new studying method can produce exam scores with an average value greater than 82.

To test this, he can perform a one-tailed hypothesis test with the following null and alternative hypotheses:

  • H 0 (Null Hypothesis): μ ≤ 82
  • H A (Alternative Hypothesis): μ > 82

To test this claim, the professor has 25 students use the new studying method and then take the exam. He collects the following data on the exam scores for this sample of students:

  • t-test statistic: 3.6586
  • one-tailed p-value: 0.0006

Since the p-value is less than .05, the professor rejects the null hypothesis.

He has sufficient evidence to conclude that the new studying method produces exam scores with an average score greater than 82.

Additional Resources

The following tutorials provide additional information about hypothesis testing:

  • Introduction to Hypothesis Testing
  • What is a Directional Hypothesis?
  • When Do You Reject the Null Hypothesis?
  • Statistics vs. Probability: What’s the Difference?



One-Tailed Test Explained: Definition and Example

What Is a One-Tailed Test?

A one-tailed test is a statistical test in which the critical area of a distribution is one-sided so that it is either greater than or less than a certain value, but not both. If the sample being tested falls into the one-sided critical area, the alternative hypothesis will be accepted instead of the null hypothesis.

Financial analysts use the one-tailed test to test an investment or portfolio hypothesis.

Key Takeaways

  • A one-tailed test is a statistical hypothesis test set up to show that the sample mean would be higher or lower than the population mean, but not both.
  • When using a one-tailed test, the analyst is testing for the possibility of the relationship in one direction of interest and completely disregarding the possibility of a relationship in another direction.
  • Before running a one-tailed test, the analyst must set up a null and alternative hypothesis and establish a probability value (p-value).

A basic concept in inferential statistics is hypothesis testing . Hypothesis testing is run to determine whether a claim is true or not, given a population parameter. A test that is conducted to show whether the mean of the sample is significantly greater than or significantly less than the mean of a population is considered a two-tailed test . When the testing is set up to show that the sample mean would be higher or lower than the population mean, it is referred to as a one-tailed test. The one-tailed test gets its name from testing the area under one of the tails (sides) of a normal distribution , although the test can be used in other non-normal distributions.

Before the one-tailed test can be performed, null and alternative hypotheses must be established. A null hypothesis is a claim that the researcher hopes to reject. An alternative hypothesis is the claim supported by rejecting the null hypothesis.

A one-tailed test is also known as a directional hypothesis or directional test.

Example of the One-Tailed Test

Let's say an analyst wants to prove that a portfolio manager outperformed the S&P 500 index in a given year by 16.91%. They may set up the null (H 0 ) and alternative (H a ) hypotheses as:

H 0 : μ ≤ 16.91

H a : μ > 16.91

The null hypothesis is the measurement that the analyst hopes to reject. The alternative hypothesis is the claim made by the analyst that the portfolio manager performed better than the S&P 500. If the outcome of the one-tailed test results in rejecting the null, the alternative hypothesis will be supported. On the other hand, if the outcome of the test fails to reject the null, the analyst may carry out further analysis and investigation into the portfolio manager’s performance.

The region of rejection is on only one side of the sampling distribution in a one-tailed test. To determine how the portfolio’s return on investment compares to the market index, the analyst must run an upper-tailed significance test in which extreme values fall in the upper tail (right side) of the normal distribution curve. The one-tailed test conducted in the upper or right tail area of the curve will show the analyst how much higher the portfolio return is than the index return and whether the difference is significant.

1%, 5% or 10%

The most common significance levels used in a one-tailed test.

Determining Significance in a One-Tailed Test

To determine how significant the difference in returns is, a significance level must be specified. The significance level, conventionally denoted α (alpha), is the probability of incorrectly concluding that the null hypothesis is false. The significance level used in a one-tailed test is typically 1%, 5%, or 10%, although any other probability can be used at the discretion of the analyst or statistician. The p-value, by contrast, is calculated from the data under the assumption that the null hypothesis is true. The lower the p-value , the stronger the evidence that the null hypothesis is false.

If the resulting p-value is less than 5%, the difference between both observations is statistically significant, and the null hypothesis is rejected. Following our example above, if the p-value = 0.03, or 3%, a difference at least this large would be seen only about 3% of the time if the portfolio had merely matched or trailed the market for the year. The analyst will, therefore, reject H 0 and support the claim that the portfolio manager outperformed the index. For a symmetric test statistic, the p-value calculated in only one tail of the distribution is half the two-tailed p-value for the same data.
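As an illustration of that last point, the short sketch below (Python with SciPy; the z value of 1.88 is a hypothetical test statistic, not a figure from this example) shows that for a symmetric statistic the one-tailed p-value is exactly half the two-tailed p-value.

```python
from scipy import stats

z = 1.88  # hypothetical test statistic chosen for illustration (≈ 3% upper-tail area)

p_one_tailed = stats.norm.sf(z)            # area in the upper tail only
p_two_tailed = 2 * stats.norm.sf(abs(z))   # area in both tails

print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
# one-tailed p ≈ 0.0301, two-tailed p ≈ 0.0602
```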

When using a one-tailed test, the analyst is testing for the possibility of the relationship in one direction of interest and completely disregarding the possibility of a relationship in another direction. Using our example above, the analyst is interested in whether a portfolio’s return is greater than the market’s. In this case, they do not need to statistically account for a situation in which the portfolio manager underperformed the S&P 500 index. For this reason, a one-tailed test is only appropriate when it is not important to test the outcome at the other end of a distribution.

How Do You Determine If It Is a One-Tailed or Two-Tailed Test?

A one-tailed test looks for an increase or decrease in a parameter. A two-tailed test looks for change, which could be a decrease or an increase.

What Is a One-Tailed T Test Used for?

A one-tailed t-test checks for the possibility of a relationship in one direction and does not consider a relationship in the other direction.

When Should a Two-Tailed Test Be Used?

You would use a two-tailed test when you want to test your hypothesis in both directions.



When Can I Use One-Tailed Hypothesis Tests?

By Jim Frost

One-tailed hypothesis tests offer the promise of more statistical power compared to an equivalent two-tailed design. While there is some debate about when you can use a one-tailed test, the general consensus among statisticians is that you should use two-tailed tests unless you have concrete reasons for using a one-tailed test.

In this post, I discuss when you should and should not use one-tailed tests. I’ll cover the different schools of thought and offer my own opinion.

If you need to learn the basics about these two types of test, please read my previous post: One-Tailed and Two-Tailed Hypothesis Tests Explained .

Two-Tailed Tests are the Default Choice

The vast majority of hypothesis tests that analysts perform are two-tailed because they can detect effects in both directions. This fact is generally the clincher. In most studies, you are interested in determining whether there is a positive effect or a negative effect. In other words, results in either direction provide essential information. If this statement describes your study, you must use a two-tailed test. There’s no need to read any further. Typically, you need a strong reason to move away from using two-tailed tests.

On the other hand, there are some cases where one-tailed tests are not only a valid option, but truly are a requirement.

Consequently, there is a spectrum that ranges from cases where one-tailed tests are definitely not appropriate to cases where they are required. In the middle of this spectrum, there are cases where analysts might disagree. The breadth of opinions extends from those who believe you should use one-tailed tests for only a few specific situations when they are required to those who are more lenient about their usage.

A Concrete Rule about Choosing Between One- and Two-Tailed Tests

Despite this disagreement, there is a hard and fast rule about the decision process itself upon which all statisticians agree. You must decide whether you will use a one-tailed or two-tailed test at the beginning of your study before you look at your data. You must not perform a two-tailed analysis, obtain non-significant results, and then try a one-tailed test to see if that is statistically significant. If you plan to use a one-tailed test, make this decision at the beginning of the study and explain why it is the proper choice.

The approach I take is to assume you’ll use a two-tailed test and then move away from that only after carefully determining that a one-tailed test is appropriate for your study. The following are potential reasons for why you might use a one-tailed hypothesis test.

Related post : 5 Steps for Conducting Scientific Studies with Statistical Analyses

One-Tailed Tests Can Be the Only Option

For some hypothesis tests, the mechanics of how a test functions dictate using a one-tailed methodology. Chi-squared tests and F-tests are often one-tailed for this reason.

Chi-squared tests

Analysts often use chi-squared tests to determine whether data fit a theoretical distribution and whether categorical variables are independent . For these tests, when the chi-squared value exceeds the critical threshold, you have sufficient evidence to conclude that the data do not follow the distribution or that the categorical variables are dependent. The chi-squared value either reaches this threshold or it does not. For all values below the threshold, you fail to reject the null hypothesis. There is no other interpretation for very low chi-squared values. Hence, these tests are one-tailed by their nature.
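A quick way to see this mechanically is that the critical value always comes from the upper tail of the chi-squared distribution. The sketch below (Python with SciPy; the degrees of freedom are a hypothetical choice for illustration) shows the one-sided decision rule.

```python
from scipy import stats

alpha = 0.05
df = 4  # hypothetical, e.g., a goodness-of-fit test over 5 categories

# The entire rejection region sits in the upper tail of the chi-squared distribution
critical_value = stats.chi2.ppf(1 - alpha, df)
print(f"Reject H0 if the chi-squared statistic exceeds {critical_value:.3f}")

# A very small chi-squared statistic (data fit the distribution closely) never rejects H0
print(f"p-value for chi-squared = 0.2: {stats.chi2.sf(0.2, df):.3f}")  # large p-value
```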

Graph of a chi-square probability distribution that has a region shaded for a one-tailed test.

F-tests

F-tests are highly flexible tests that analysts use in a wide variety of scenarios. Some of these scenarios exclude the possibility of a two-tailed test. For instance, F-tests in ANOVA and the overall test of significance for linear models are similar to the chi-squared example. The F-value either reaches the significance threshold or it does not. In one-way ANOVA, if the F-value surpasses the threshold, you can conclude that not all group means are equal. On the other hand, all F-values below the threshold yield the same interpretation—the sample provides insufficient evidence to conclude that the group means are unequal. No other effect or interpretation exists for very low F-values.

Example of one-tailed F-distribution.

When a one-tailed version of the test is the only meaningful possibility, statistical software won’t ask you to make a choice. That’s why you’ll never need to choose between a one- or two-tailed ANOVA F-test or chi-squared test.

In some cases, the nature of the test itself requires using a one-sided methodology, and it does not depend on the study area.

Effects can Occur in Only One Direction

On the other hand, other hypothesis tests can legitimately have one and two-tailed versions, and you need to choose between them based on the study area. Tests that fall in this category include t-tests , proportion tests, Poisson rate tests, variance tests, and some nonparametric tests for the median. In these cases, base the decision on subject-area knowledge about the possible effects.

For some study areas, the effect can exist in only one direction. It simply can’t exist in the other direction. To make this determination, you need to use your subject-area knowledge and understanding of physical limitations. In this case, if there were a difference in the untested direction, you would attribute it to random error regardless of how large it is. In other words, only chance can produce an observed effect in the other direction. If you have even the smallest notion that an observed effect in the other direction could be a real effect rather than random error, use a two-tailed test.

For example, imagine we are comparing an herbicide’s ability to kill weeds to no treatment. We randomly apply the herbicide to some patches of grass and no herbicide to other patches. It is inconceivable that the herbicide can promote weed growth. In the worst-case scenario, it is entirely ineffective, and the herbicide patches should be equivalent to the control group. If the herbicide patches ultimately have more weeds than the control group, we’ll chalk that up to random error regardless of the difference—even if it’s substantial. In this case, we are wholly justified using a one-tailed test to determine whether the herbicide is better than no treatment.

No Controversy So Far!

So far, the preceding two reasons fall entirely on safe ground. Using one-tailed tests because of its mechanics or because an effect can occur in only one direction should be acceptable to all statisticians. In fact, some statisticians believe that these are the only valid reasons for using one-tailed hypothesis tests. I happen to fall within this school of thought myself.

In the next section, I’ll discuss a scenario where some analysts believe you can choose between one and two-tailed tests, but others disagree with that notion.

You Only Need to Know About Effects in One Direction

In this scenario, effects can exist in both directions, but you only care about detecting an effect in one direction. Analysts use the one-tailed approach in this situation to boost the statistical power of the hypothesis test .

To even consider using a one-tailed test for this reason, you must be entirely sure there is no need to detect an effect in the other direction. While you gain more statistical power in one direction, the test has absolutely no power in the other direction.

Suppose you are testing a new vaccine and want to determine whether it’s better than the current vaccine. You use a one-tailed test to improve the test’s ability to learn whether the new vaccine is better. However, that’s unethical because the test cannot determine whether it is less effective. You risk missing valuable information by testing in only one direction.

However, there might be occasions where you, or science, genuinely don’t need to detect an effect in the untested direction. For example, suppose you are considering a new part that is cheaper than the current part. Your primary motivation for switching is the price reduction. The new part doesn’t have to be better than the current part, but it cannot be worse. In this case, it might be appropriate to perform a one-tailed test that determines whether the new part is worse than the old part. You won’t know if it is better, but you don’t need to know that.

As I mentioned, many statisticians don’t think you should use a one-tailed test for this type of scenario. My position is that you should set up a two-tailed test that produces the same power benefits as a one-tailed test because that approach will accurately capture the underlying fact that effects can occur in both directions.

However, before explaining this alternate approach, I need to describe an additional problem with the above scenario.

Beware of the Power that One-Tailed Tests Provide

The promise of extra statistical power in the direction of interest is tempting. After all, if you don’t care about effects in the opposite direction, what’s the problem? It turns out there is an additional penalty that comes with the extra power.

First, let’s see why one-tailed tests are more powerful than two-tailed tests with the same significance level . The graphs below display the t-distributions for two t-tests with the same sample size. I show the critical t-values for both tests. As you can see, the one-tailed test requires a less extreme t-value (1.725) to produce a statistically significant result in the right tail than the two-tailed test (2.086). In other words, a smaller effect is statistically significant in the one-tailed test.
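If you want to verify those critical values, the following sketch (Python with SciPy) reproduces them; the degrees of freedom are taken to be 20, which is what the quoted values of 1.725 and 2.086 correspond to.

```python
from scipy import stats

alpha, df = 0.05, 20  # df = 20 matches the critical t-values quoted above

t_one_tailed = stats.t.ppf(1 - alpha, df)      # upper-tail critical value
t_two_tailed = stats.t.ppf(1 - alpha / 2, df)  # upper critical value for a two-tailed test

print(f"one-tailed: reject H0 if t > {t_one_tailed:.3f}")    # ≈ 1.725
print(f"two-tailed: reject H0 if |t| > {t_two_tailed:.3f}")  # ≈ 2.086
```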

Plot that displays a single critical region for a one-tailed test.

Both tests have the same Type I error rate because we defined the significance level as 0.05. This type of error occurs when the test rejects a true null hypothesis—a false positive. This error rate corresponds to the total percentage of the shaded areas under the curve. While both tests have the same overall Type I error rate, the distribution of these errors is different.

To understand why, keep in mind that the critical regions also represent where the Type I errors occur. For a two-tailed test, these errors are split equally between the left and right tails. However, for a one-tailed test, all of these errors arise specifically in the one direction that you are interested in. Unfortunately, the error rate doubles in that direction compared to a two-tailed test. In the graphs above, the right tail has an error rate of 5% in the one-tailed test compared to 2.5% in the two-tailed test.

Related Post : Types of Errors in Hypothesis Tests

You Haven’t Changed Anything of Substance

By switching to a one-tailed test, you haven’t changed anything of substance to gain this extra power. All you’ve done is to redraw the critical region so that a smaller effect in the direction of interest is statistically significant. In this light, it’s not surprising that merely labeling smaller effects as being statistically significant also produces more false positives in that direction! And, the graphs reflect that fact.

If you want to increase the test’s power without increasing the Type I error rate, you’ll need to make a more fundamental change to your study’s design, such as increasing your sample size or more effectively controlling the variability.

Is the Higher False Positive Rate Worthwhile?

To use a one-tailed test to gain more power, you can’t care about detecting an effect in the other direction, and you have to be willing to accept twice the false positives in the direction you are interested in. Remember, a false positive means that you will not obtain the benefits you expect.

Should you accept double the false positives in the direction of interest? Answering that question depends on the actions that a significant result will prompt. If you’re considering changing to a new production line, that’s a very costly decision. Doubling the false positives is problematic. Your company will spend a lot of money for a new manufacturing line, but it might not produce better products. However, if you’re changing suppliers for a part based on the test result, and their parts don’t cost more, a false positive isn’t an expensive problem.

Think carefully about whether the additional power is worth the extra false positives in your direction of interest! If you decide that the added power is worth the risk, consider my alternative approach below. It produces an equivalent amount of statistical power as the one-tailed approach. However, it uses a methodology that more accurately reflects the underlying reality of the study area and the goals of the analyst.

Alternative: Use a Two-Tailed Test with a Higher Significance Level

In my view, determining the possible directions of an effect and the statistical power of the analysis are two independent issues. Using a one-tailed test to boost power can obscure these matters and their ramifications. My recommendation is to use the following process:

  • Identify the directions that an effect can occur, and then choose a one-tailed or two-tailed test accordingly.
  • Choose the significance level to correctly set the sensitivity and false-positive rate based on your specific requirements.

This process breaks down the questions you need to answer into two separate issues, which allows you to consider each more carefully.

Now, let’s apply this process to the scenario where you’re studying an effect that can occur in both directions, but the following are both true:

  • You care about effects in only one direction.
  • Increasing the power of the test is worth a higher risk of false positives in that direction.

In this situation, using a one-tailed test to gain extra power seems like an acceptable solution. However, that approach attempts to solve the right problem by using the wrong methodology. Here’s my alternative method.

Instead of using a one-tailed test, consider using a two-tailed test and doubling the significance level, such as from 0.05 to 0.10. This approach increases your power while allowing the test methodology to match the reality of the situation better. It also increases the transparency of your goals as the analyst.

Related Post : Significance Levels and P-values

How the Two-Tailed Approach with a Higher Significance Level Works

To understand this approach, compare the graphs below. The top graph is one-sided and uses a significance level of 0.05. The bottom graph is two-sided and uses a significance level of 0.10.

Plot that display critical regions in the two tails of the distribution for a significance level of 0.10.

As you can see in the graphs, the critical region on the right side of both distributions starts at the same critical t-value (1.725). Consequently, both the one- and two-tailed tests provide the same power in that direction. Additionally, there is a critical region in the other tail, which means that the test can detect effects in the opposite direction as well.

The end result is that the two-tailed test has the same power and an equal probability of a Type I error in the direction of interest. Great! And, you can detect effects in the other direction even though you might not need to know about them. Okay, that’s not a bad thing.
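Numerically, the equivalence in the direction of interest is easy to confirm. In this sketch (Python with SciPy, again assuming 20 degrees of freedom to match the graphs), the right-tail critical value of a two-tailed test at α = 0.10 equals the critical value of a one-tailed test at α = 0.05.

```python
from scipy import stats

df = 20  # assumed to match the earlier graphs

one_tailed_05 = stats.t.ppf(1 - 0.05, df)      # one-tailed test, alpha = 0.05
two_tailed_10 = stats.t.ppf(1 - 0.10 / 2, df)  # two-tailed test, alpha = 0.10

print(round(one_tailed_05, 3), round(two_tailed_10, 3))  # both ≈ 1.725
```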

This Approach Is More Transparent

What’s so great about this approach? It makes your methodology choices more explicit while accurately reflecting a study area where effects can occur in both directions. Here’s how.

The significance level is an evidentiary standard for the amount of sample evidence required to reject the null hypothesis. By increasing the significance level from 0.05 to 0.10, you’re explicitly stating that you are lowering the amount of evidence necessary to reject the null, which logically increases the power of the test. Additionally, as you raise the significance level, the Type I error rate also increases by definition. This approach produces the same power gains as a one-tailed test. However, it more clearly indicates how the analyst set up a more sensitive test in exchange for a higher risk of false positives.

The problem with gaining the additional power by switching to a one-tailed test is that it obscures the fact that you’re weakening the evidentiary standard. After all, you’re not explicitly changing the significance level. That’s why the increase in the Type I error rate in the direction of interest can be surprising!

Decision Guidelines

We covered a lot in this post. Here’s a brief recap of when to use each type of test. For some tests, you don’t have to worry about this choice. However, if you do need to decide between using a one-tailed and a two-tailed test, follow these guidelines. If the effect can occur in:

  • One direction: Use a one-tailed test and choose the correct alternative hypothesis .
  • Both directions: Use a two-tailed test.
  • Both directions, but you care about only one direction and you need the higher statistical power: Use a two-tailed test and double the significance level. Be aware that you are doubling the probability of a false positive.


Reader Interactions


April 13, 2021 at 10:02 am

Thanks Jim!

April 12, 2021 at 1:57 pm

Another great post.

If my hypothesis was say, that intelligence overall will be greater for first group that took the study in 2010 than the second group that took the same test in 2020. Would this be one tailed because I have made a specific prediction about the direction of intelligence over time?

Thanks again, Grace


April 13, 2021 at 12:22 am

I think you’d have a stronger case for a one-tailed test if the studies were closer together in time. When they’re so far apart, it’s possible that intelligence could decline over the years. (I’ve seen it happen!) But, if the studies were say a month apart, you’d have a stronger case for saying that intelligence wouldn’t decline over such a short span of time and, therefore, a one-tailed test might be called for. Whenever you can say that an effect is only possible in one direction, that’s the strongest case for a one-tailed test where you won’t get any debate.

It sounds like you’re asking about a one-tailed test based on a prediction about the hypothesis. That’s not usually a good enough reason to use a one-tailed test by itself. Of course, as I mention, there is some debate about when it’s ok. At the very least, it could be based on your prediction and the fact that you don’t care about results in the other direction. If you wanted to get published in a journal, that wouldn’t fly. Outside the academic context, you’d probably get some analysts to agree with that case and others wouldn’t.

Just be aware of the drawbacks that I mention. By going to a one-tailed tests, you’re doubling the false positives in the hypothesis direction in which you’re interested. I only recommend one-tailed tests for cases where the effect can only possibly exist in one direction.


November 24, 2020 at 12:33 pm

Brilliant post, Jim! I use hypothesis tests all the time (always two-tailed), but with the explanation you provided here, I can raise the significance if more false positives (i.e., Type I errors) are not a problem. With that said, this approach would still have to get past reviewers in a manuscript submission, which is no sure thing. I’ll play with the numbers if this comes up again in my work — and I will read this post at least once more, too. Thanks for the insight.

November 24, 2020 at 10:51 pm

Great to hear from you again! I’m glad this post was helpful. I think typically you wouldn’t want to raise the significance level higher that 0.05. However, for those who change to a one-sided test and leave the significance level at 0.05, they’re doing that in effect.

Best wishes and Happy Thanksgiving!


November 8, 2020 at 12:49 am

By “not all the data falls within a particular region”, do you mean that some of the data collected fall in the region and others don’t , BUT the mean of all data in this particular sample do, which is the whole point of hypothesis testing? As to the curve, I think that is the hypothesized sampling distribution of the sample mean, with the sample collected being a member of the overall set. please advise whether this is right, if not , then, there really is something wrong with the understanding and I will go back to the text book : ), otherwise, my question regarding one tailed test remains, thank you so much Jim.

November 8, 2020 at 3:35 pm

I do highly recommend that you read the post I link for you. It’ll help!

There seems a crucial piece that you’re missing. Again, it’s totally understandable because it’s not obvious.

These tests don’t assess where individual data points fall in a distribution.

Instead, these tests assess one estimate of a population parameter and compare it to the null hypothesis value.

Let’s look at that in the context of a 1-sample t-test. In this case, you’re comparing the sample mean (which estimates the population mean) and comparing it to the null hypothesis value. So, it’s just one value (the sample mean), not all the data points. And, you’re looking to see where that value falls in relation to the null hypothesis value. In the graphs, the null hypothesis value is the peak. And, you’re looking to see how far out the mean is. And because the mean is only one value, it’ll fall only at one point on the graphs.

Again, read the other post. It’ll answer your questions. It doesn’t make sense for me retype what I wrote in that post here in the comments. If after reading that post you have more questions, I’ll be happy to answer! 🙂

November 7, 2020 at 12:42 am

Use your illustration “Null: The effect is less than or equal to zero. Alternative: The effect is greater than zero.” Say if the significance level is 0.05, all on the left side, we are saying that 5% of the data are in the region, and if we observe this unlikely event, then it’s unlikely that the hypothesized mean is the true mean, depending how rare you want your criteria to be. If say the alpha level is 1%, where the critical value is even further from the mean, and if the p value is still in there, then we can be even more confident.I hope I am right about the above, but even if I am, I am still not as comfortable with “less than” as with “ not equal to”, even though I can work through the mechanics and get most my practice questions right. Is it ok to say that, at most 1% of data is in there, given the distribution, because any means greater will have a lower percentage, so 0.9%, 0.8%, 0.7% etc as you shift your means to the right with no boundaries, so you can shift infinitely, therefore we can be 99% confident that it is less than, please? Or if not, what’s the logic in words please? Thank you.

November 8, 2020 at 12:15 am

That’s not what the significance level indicates. The significance level doesn’t indicate where the data fall. If you’re performing a one-tailed test and get significant results, it doesn’t mean all the data falls in a particular region of the curve. It means that the sample statistic, such as the mean effect, falls far enough away from zero in a particular direction such that the test statistic falls in the corresponding critical region. The curves you’re seeing in the graphs are not data distributions. They’re sampling distributions for the test statistic, which is an entirely different thing.

I think before trying to understand one-tailed tests, you should read more about how hypothesis tests work in general. Click that link to learn more about how they work, sampling distributions, and what significance levels and p-values actually mean. I can tell you have a few misconceptions about them. That’s ok because they’re tricky concepts. But, it’ll be difficult to understand one-tailed tests without fully understanding how hypothesis tests work.

November 5, 2020 at 9:19 am

I can tell that in a two tailed test, the rejection regions are such that only a certain percentage of data points falls within that range and if you happen to observe a data point within that range, then it’s ok to conclude that the hypothesized mean is unlikely the true mean. However, if I shift all the rejection region to one side, knowing how unlikely I will find something in there, and then somehow observe a data within the range, how does it lead to a conclusion that the true mean is greater or smaller than the hypothesized value please? How can I draw any conclusion from this observation ? If the true mean is to either the left or right of the hypothesized value, it will have its own distribution , rendering the existing distribution irrelevant for drawing conclusion about a different mean?

November 6, 2020 at 9:23 pm

To be technically correct, you’re not looking for data points to fall in the critical regions. Instead, you’re looking for sample statistics that fall in those regions. You don’t need to worry about the distribution changing based on whether the mean is greater than or less than. It all works on the same distribution, which is a sampling distribution for the test statistic.

Read my post about one-tailed and two-tailed tests . It’ll show you how they work and I believe will answer your questions. I show the distributions for both types. If you do have more questions after reading that, don’t hesitate to ask!


May 10, 2020 at 5:32 am

Nice article, thank you Mr. Frost. I am a statistician and I run in this problem regularly and I am still not clear with it. With “Both directions but you care about only one direction” I use the approach that I do 2-tail test on 5 % sig. level and if this is significant and my client is interested only in one direction, then I interpret that the one-sided effect is significant at 5 % level. Which may look weird, but it is a correct statement. Basically, I avoid stating that one sided effect is significant at 5 % level in the situation where the 2-sided p-value is e.g. 0.07 and 1-sided is 0.035. This I don’t interpret as significant on 5 % even if my client is interested only in one direction.


December 26, 2018 at 1:20 am

Ye ! This Is A Good Blog!


November 12, 2018 at 10:00 am

Its a good article Mr. Jim. It gives clarication to the one tailed and two tailed tests that we commonly use in research.

November 12, 2018 at 11:00 am

Thanks, Sreekumar! I’m glad it was helpful!


What Is a One-Tail Test?

A one-tail test, also known as a directional test, is a statistical hypothesis test that evaluates the probability of a sample statistic falling in one specific tail of the distribution. This type of test is particularly useful when researchers have a specific hypothesis about the direction of the effect or difference they are investigating. For instance, if a researcher believes that a new drug will increase recovery rates compared to a placebo, they would use a one-tail test to assess this hypothesis.


Understanding the Null and Alternative Hypotheses

In the context of a one-tail test, the null hypothesis (H0) typically posits that there is no effect or difference, while the alternative hypothesis (H1) suggests that there is a significant effect in a specified direction. For example, if we are testing whether a new teaching method improves student performance, the null hypothesis might state that the mean test scores of students using the new method are equal to those using the traditional method, while the alternative hypothesis would assert that the mean scores are greater for the new method.

When to Use a One-Tail Test

One-tail tests are appropriate when the research question is focused on detecting an effect in one direction only. This is common in fields such as medicine, psychology, and social sciences, where researchers often have a clear expectation of the outcome. However, it is crucial to determine the direction of the test before collecting data, as using a one-tail test after observing the data can lead to biased results.

Advantages of One-Tail Tests

One-tail tests offer several advantages, including increased statistical power when the hypothesis is directional. This means that if the effect exists in the specified direction, a one-tail test is more likely to detect it compared to a two-tail test, which assesses both directions. Additionally, one-tail tests can lead to smaller p-values, making it easier to achieve statistical significance.

Limitations of One-Tail Tests

Despite their advantages, one-tail tests have limitations. The most significant drawback is the risk of overlooking an effect in the opposite direction. If a researcher conducts a one-tail test and finds no significant results, they may incorrectly conclude that there is no effect, even if one exists in the opposite direction. This limitation underscores the importance of carefully considering the research question and hypotheses before selecting the type of test.

Calculating the One-Tail Test Statistic

The calculation of a one-tail test statistic involves determining the z-score or t-score, depending on the sample size and whether the population standard deviation is known. The formula for the z-score is given by (X̄ – μ) / (σ/√n), where X̄ is the sample mean, μ is the population mean under the null hypothesis, σ is the population standard deviation, and n is the sample size. The resulting score is then compared to the critical value from the z or t distribution tables to determine significance.
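The sketch below (Python with SciPy) applies this formula to hypothetical numbers, all of which are made up for illustration, and carries the result through to an upper-tailed decision.

```python
from math import sqrt
from scipy import stats

# Hypothetical inputs: sample mean, null-hypothesis mean, population SD, sample size
xbar, mu0, sigma, n = 52.0, 50.0, 8.0, 64

z = (xbar - mu0) / (sigma / sqrt(n))   # (X̄ - μ) / (σ/√n) = 2.0

# Upper-tailed test at the 5% significance level
critical = stats.norm.ppf(0.95)        # ≈ 1.645
p_value = stats.norm.sf(z)             # ≈ 0.0228

print(f"z = {z:.2f}, critical value = {critical:.3f}, one-tailed p = {p_value:.4f}")
# z exceeds the critical value, so the null hypothesis is rejected at the 5% level
```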

Interpreting Results from a One-Tail Test

When interpreting the results of a one-tail test, researchers focus on the p-value obtained from the test statistic. A p-value less than the predetermined significance level (commonly set at 0.05) indicates that the null hypothesis can be rejected in favor of the alternative hypothesis. It is essential to report the p-value alongside the test statistic to provide a complete picture of the findings.

Common Applications of One-Tail Tests

One-tail tests are commonly used in various fields, including clinical trials, quality control, and behavioral studies. For instance, in clinical research, a one-tail test may be employed to determine if a new medication significantly lowers blood pressure compared to a placebo. In quality control, manufacturers may use one-tail tests to assess whether a production process yields products that exceed a specified quality threshold.

Conclusion on One-Tail Tests

In summary, one-tail tests are a powerful statistical tool for hypothesis testing when researchers have a specific directional hypothesis. While they offer advantages in terms of statistical power and p-value significance, careful consideration must be given to the research question and the potential for overlooking effects in the opposite direction. Understanding the appropriate use and interpretation of one-tail tests is crucial for accurate data analysis in various fields.


Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans . Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics . It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H 0 ) and an alternate hypothesis (H a or H 1 ).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test .
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H o ) and alternate (H a ) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H 0 : Men are, on average, not taller than women. H a : Men are, on average, taller than women.


For a statistical test to be valid , it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p -value . This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p -value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data .

Your statistical test will give you (as illustrated in the sketch after this list):

  • an estimate of the difference in average height between the two groups.
  • a p -value showing how likely you are to see this difference if the null hypothesis of no difference is true.
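As a concrete illustration of such a test, here is a minimal sketch in Python (SciPy 1.6 or later for the alternative argument); the height data are simulated rather than real survey measurements, so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated heights in cm (illustrative only, not real data)
men = rng.normal(loc=175, scale=7, size=100)
women = rng.normal(loc=170, scale=7, size=100)

# One-tailed two-sample t-test: Ha is that men's mean height is greater than women's
result = stats.ttest_ind(men, women, alternative="greater")

print(f"difference in means = {men.mean() - women.mean():.2f} cm")
print(f"t = {result.statistic:.2f}, one-tailed p = {result.pvalue:.4f}")
```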

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p -value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper , dissertation or thesis .

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p -value). In the discussion , you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis . This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis . But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis .

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.


One-Tailed Test


One-Tailed Test Definition

The one-tailed test is a statistical hypothesis testing method. To reject the null hypothesis, the sample mean must fall sufficiently above or sufficiently below the population mean, depending on the direction being tested. This test is also referred to as a directional test or directional hypothesis. The test is run to prove a claim either true or false.

The direction of the test cannot be ambiguous: the alternative hypothesis can state that the parameter is less than or greater than the population value, but not both. Hypothesis testing determines the probability of a hypothesis being correct. The test validates the accuracy of the alternate hypothesis by eliminating randomness.

Table of contents

One tailed test explained, one-tailed test vs. two-tailed test, frequently asked questions (faqs), recommended articles.

  • For a one tailed test hypothesis, the sample mean value can be either more or less than the population mean value but cannot be both.
  • The null hypothesis and alternative hypothesis precede one-tailed tests—along with a p-value (probability value).
  • The test is directional; hence it does not consider the other direction while establishing a relationship.


The one tailed test is a statistical method of hypothesis testing. Based on statistical data, hypothesis testing determines whether a theory is true or not. If the test allows the sample mean to fall significantly either above or below the population mean, it is a two-tailed test. But when the test considers only whether the sample mean is larger than the population mean, or only whether it is smaller, it is a one tailed test. So, during testing, if the sample statistic falls far enough into the specified tail, the null hypothesis is rejected and the alternate hypothesis is accepted.

One Tailed Test Explained

One-tailed tests are preceded by the null hypothesis and alternative hypothesis. Researchers are required to prove the null hypothesis wrong; only then can they claim the alternative hypothesis. Ideally, in order to prove a theory, researchers need to eliminate randomness. When they prove an observation caused by a specific cause, the observations should not be caused by random factors. Randomness levels are determined by statistical significance .

A significance level sets how much randomness is tolerated. Usually, significance levels are either 1%, 5%, or 10%. However, researchers have the discretion to use any other probability. The p-value is calculated assuming that the null hypothesis is true. The lower the p-value , the stronger the evidence against the null hypothesis. If the resulting p-value is below the chosen significance level (say 5%), the difference between the observations is statistically significant, and the null hypothesis is rejected.

Let us understand the application of one-tailed tests with an example.

Let us assume a school principal wants to prove that a new math professor increased classroom performance by 9.29%. The principal set up the null (H0) and alternative (Ha) hypotheses:

H0: μ ≤ 9.29

Ha: μ > 9.29

The principal hopes to reject the null hypothesis and validate his claim as the alternative hypothesis. If the test rejects the null hypothesis, the alternative hypothesis is supported. On the contrary, if the test outcome fails to reject the null hypothesis, the principal will have to research further to discover other explanations for the classroom performance.

The rejection region lies on one side of the sampling distribution . Therefore, to determine how the classroom performed compared to a different mathematics professor, the principal must run a right-tailed significance test—extreme values must fall on the right side of the normal distribution curve. A normal distribution or Gaussian distribution refers to a probability distribution where the values of a random variable are distributed symmetrically. These values are equally distributed on the left and the right side of the central tendency. Thus, a bell-shaped curve is formed.

Right Tailed Distribution

If the test statistic falls in the right-tail area of the curve, the results tie the increase in classroom performance to the period taught by the new professor. Further, the test will show whether the results differ significantly from those under the previous professor.
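A hedged sketch of how that right-tailed decision could be carried out in Python (SciPy assumed) follows; the classroom statistics below are hypothetical, since the example does not provide sample data.

```python
from math import sqrt
from scipy import stats

# Hypothetical classroom statistics (not given in the example above)
xbar, mu0, s, n = 11.0, 9.29, 4.5, 36

z = (xbar - mu0) / (s / sqrt(n))     # test statistic
critical = stats.norm.ppf(1 - 0.05)  # right-tailed critical value ≈ 1.645

if z > critical:
    print(f"z = {z:.2f} > {critical:.2f}: reject H0; the performance gain exceeds 9.29%")
else:
    print(f"z = {z:.2f} <= {critical:.2f}: fail to reject H0")
```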

One Tailed Test vs. Two Tailed Test

  • In a one tailed test, the alternative hypothesis has only one end; in a two-tailed test, the alternative hypothesis has two ends.
  • A one tailed test uses a directional hypothesis; a two-tailed test uses a non-directional hypothesis.
  • In a one tailed test, the region of rejection is either left or right; in a two-tailed test, it can be both left and right.
  • A one tailed test looks for a relationship between variables in a single direction; a two-tailed test looks for a relationship in either direction.
  • In a one tailed test, the result is always greater or less than a specific value; in a two-tailed test, the result falls outside a certain range of values.
  • A one tailed test is denoted by > or <; a two-tailed test is denoted by ≠.

One tailed tests are used in situations where a claim is to be shown either true or false. Assume that a new drug is developed. The developers want to check if it is more effective than the current drug. In such scenarios, a one tailed test can be used to test its effectiveness.

It is based on two hypotheses—the null hypothesis and the alternative hypothesis. The test will prove only one of them true. Researchers want to prove the null hypothesis false to establish their findings as the alternative explanation for the sampled data.

One tailed tests have a very practical advantage: they demand fewer subjects to obtain significance. A two-tailed test, on the other hand, splits the significance level and applies it in both directions, so each direction is tested at half the level of a one tailed test.



Hypothesis Testing for Means & Proportions



Hypothesis Testing: Upper-, Lower, and Two Tailed Tests


The procedure for hypothesis testing is based on the ideas described above. Specifically, we set up competing hypotheses, select a random sample from the population of interest and compute summary statistics. We then determine whether the sample data supports the null or alternative hypotheses. The procedure can be broken down into the following five steps.  

  • Step 1. Set up hypotheses and select the level of significance α.

H 0 : Null hypothesis (no change, no difference);  

H 1 : Research hypothesis (investigator's belief); α =0.05

 

Upper-tailed, Lower-tailed, Two-tailed Tests

The research or alternative hypothesis can take one of three forms. An investigator might believe that the parameter has increased, decreased or changed. For example, an investigator might hypothesize:  

H 1 : μ > μ 0 , where μ 0 is the comparator or null value (e.g., μ 0 = 191 in our example about weight in men in 2006) and an increase is hypothesized; this type of test is called an upper-tailed test;
H 1 : μ < μ 0 , where a decrease is hypothesized; this is called a lower-tailed test; or
H 1 : μ ≠ μ 0 , where a difference is hypothesized; this is called a two-tailed test.

The exact form of the research hypothesis depends on the investigator's belief about the parameter of interest and whether it has possibly increased, decreased or is different from the null value. The research hypothesis is set up by the investigator before any data are collected.

 

  • Step 2. Select the appropriate test statistic.  

The test statistic is a single number that summarizes the sample information. An example of a test statistic is the Z statistic for a one-sample test of a mean, computed as follows:

Z = (x̄ − μ0) / (s / √n)

When the sample size is small, we will use t statistics (just as we did when constructing confidence intervals for small samples). As we present each scenario, alternative test statistics are provided along with conditions for their appropriate use.
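As a concrete illustration of Step 2, the snippet below computes the one-sample Z statistic from summary statistics using the large-sample form shown above; the summary numbers are hypothetical.

# Sketch of the Step 2 test statistic for a one-sample test of a mean,
# using Z = (xbar - mu0) / (s / sqrt(n)). All values below are hypothetical.
import math

def z_statistic(xbar, mu0, s, n):
    """One-sample test statistic Z = (xbar - mu0) / (s / sqrt(n))."""
    return (xbar - mu0) / (s / math.sqrt(n))

# Made-up summary statistics: n = 64, sample mean 52.5, s = 10, null value 50
print(round(z_statistic(xbar=52.5, mu0=50, s=10, n=64), 2))   # 2.0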

  • Step 3.  Set up decision rule.  

The decision rule is a statement that tells under what circumstances to reject the null hypothesis. The decision rule is based on specific values of the test statistic (e.g., reject H 0 if Z > 1.645). The decision rule for a specific test depends on 3 factors: the research or alternative hypothesis, the test statistic and the level of significance. Each is discussed below.

  • The decision rule depends on whether an upper-tailed, lower-tailed, or two-tailed test is proposed. In an upper-tailed test the decision rule has investigators reject H 0 if the test statistic is larger than the critical value. In a lower-tailed test the decision rule has investigators reject H 0 if the test statistic is smaller than the critical value.  In a two-tailed test the decision rule has investigators reject H 0 if the test statistic is extreme, either larger than an upper critical value or smaller than a lower critical value.
  • The exact form of the test statistic is also important in determining the decision rule. If the test statistic follows the standard normal distribution (Z), then the decision rule will be based on the standard normal distribution. If the test statistic follows the t distribution, then the decision rule will be based on the t distribution. The appropriate critical value will be selected from the t distribution again depending on the specific alternative hypothesis and the level of significance.  
  • The third factor is the level of significance. The level of significance which is selected in Step 1 (e.g., α =0.05) dictates the critical value.   For example, in an upper tailed Z test, if α =0.05 then the critical value is Z=1.645.  

The following figures illustrate the rejection regions defined by the decision rule for upper-, lower- and two-tailed Z tests with α=0.05. Notice that the rejection regions are in the upper, lower and both tails of the curves, respectively. The decision rules are written below each figure.

Rejection Region for Upper-Tailed Z Test (H 1 : μ > μ 0 ) with α=0.05

The decision rule is: Reject H 0 if Z > 1.645.

 

 

α        Z
0.10     1.282
0.05     1.645
0.025    1.960
0.010    2.326
0.005    2.576
0.001    3.090
0.0001   3.719

[Figure: standard normal distribution with the rejection region in the lower tail at -1.645, α=0.05]

Rejection Region for Lower-Tailed Z Test (H 1 : μ < μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.645.

α        Z
0.10     -1.282
0.05     -1.645
0.025    -1.960
0.010    -2.326
0.005    -2.576
0.001    -3.090
0.0001   -3.719

[Figure: standard normal distribution with rejection regions in both tails]

Rejection Region for Two-Tailed Z Test (H 1 : μ ≠ μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.960 or if Z > 1.960.

α        Z
0.20     1.282
0.10     1.645
0.05     1.960
0.010    2.576
0.001    3.291
0.0001   3.819

The complete table of critical values of Z for upper, lower and two-tailed tests can be found in the table of Z values to the right in "Other Resources."

Critical values of t for upper, lower and two-tailed tests can be found in the table of t values in "Other Resources."
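If you prefer not to read critical values from a table, they can be reproduced with a statistics library. The sketch below uses SciPy's inverse CDF (norm.ppf and t.ppf) for the standard normal and t distributions; the degrees of freedom shown are arbitrary.

# Reproducing the critical values in the tables above with SciPy.
from scipy.stats import norm, t

alpha = 0.05
z_upper = norm.ppf(1 - alpha)        #  1.645  (upper-tailed)
z_lower = norm.ppf(alpha)            # -1.645  (lower-tailed)
z_two   = norm.ppf(1 - alpha / 2)    #  1.960  (two-tailed, use +/-)

# t critical values additionally depend on the degrees of freedom (arbitrary here)
df = 24
t_upper = t.ppf(1 - alpha, df)       # upper-tailed t critical value

print(round(z_upper, 3), round(z_lower, 3), round(z_two, 3), round(t_upper, 3))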

  • Step 4. Compute the test statistic.  

Here we compute the test statistic by substituting the observed sample data into the test statistic identified in Step 2.

  • Step 5. Conclusion.  

The final conclusion is made by comparing the test statistic (which is a summary of the information observed in the sample) to the decision rule. The final conclusion will be either to reject the null hypothesis (because the sample data are very unlikely if the null hypothesis is true) or not to reject the null hypothesis (because the sample data are not very unlikely).  

If the null hypothesis is rejected, then an exact significance level is computed to describe the likelihood of observing the sample data assuming that the null hypothesis is true. The exact level of significance is called the p-value and it will be less than the chosen level of significance if we reject H 0 .

Statistical computing packages provide exact p-values as part of their standard output for hypothesis tests. In fact, when using a statistical computing package, the steps outlined above can be abbreviated. The hypotheses (Step 1) should always be set up in advance of any analysis, and the significance criterion should also be determined (e.g., α = 0.05). Statistical computing packages will produce the test statistic (usually reporting the test statistic as t) and a p-value. The investigator can then determine statistical significance using the following rule: if p < α, then reject H 0 .

 

 

  • Step 1. Set up hypotheses and determine level of significance

H 0 : μ = 191,   H 1 : μ > 191,   α = 0.05

The research hypothesis is that weights have increased, and therefore an upper tailed test is used.

  • Step 2. Select the appropriate test statistic.

Because the sample size is large (n > 30), the appropriate test statistic is the Z statistic, Z = (x̄ − μ0) / (s / √n).

  • Step 3. Set up decision rule.  

In this example, we are performing an upper tailed test (H 1 : μ> 191), with a Z test statistic and selected α =0.05.   Reject H 0 if Z > 1.645.

  • Step 4. Compute the test statistic.

We now substitute the sample data into the formula for the test statistic identified in Step 2; the computed value is Z = 2.38.

  • Step 5. Conclusion.

We reject H 0 because 2.38 > 1.645. We have statistically significant evidence at α = 0.05 to show that the mean weight in men in 2006 is more than 191 pounds.

Because we rejected the null hypothesis, we now approximate the p-value, which is the likelihood of observing the sample data if the null hypothesis is true. An alternative definition of the p-value is the smallest level of significance at which we can still reject H 0 . In this example, we observed Z = 2.38, and for α = 0.05 the critical value was 1.645. Because 2.38 exceeded 1.645, we rejected H 0 . In our conclusion we reported a statistically significant increase in mean weight at a 5% level of significance. Using the table of critical values for upper-tailed tests, we can approximate the p-value. If we select α = 0.025, the critical value is 1.960, and we still reject H 0 because 2.38 > 1.960. If we select α = 0.010, the critical value is 2.326, and we still reject H 0 because 2.38 > 2.326. However, if we select α = 0.005, the critical value is 2.576, and we cannot reject H 0 because 2.38 < 2.576. Therefore, the smallest α at which we still reject H 0 is 0.010. This is the p-value. A statistical computing package would produce a more precise p-value, which would be between 0.005 and 0.010. Here we are approximating the p-value and would report p < 0.010.
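As a quick check of that approximation, here is a minimal SciPy sketch that computes the exact upper-tail p-value for the observed Z = 2.38.

# Exact upper-tail p-value for the observed test statistic Z = 2.38.
from scipy.stats import norm

z_observed = 2.38
p_value = 1 - norm.cdf(z_observed)   # P(Z > 2.38)
print(round(p_value, 4))             # ~0.0087, i.e., between 0.005 and 0.010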

In all tests of hypothesis, there are two types of errors that can be committed. The first is called a Type I error and refers to the situation where we incorrectly reject H 0 when in fact it is true. This is also called a false positive result (as we incorrectly conclude that the research hypothesis is true when in fact it is not). When we run a test of hypothesis and decide to reject H 0 (e.g., because the test statistic exceeds the critical value in an upper tailed test) then either we make a correct decision because the research hypothesis is true or we commit a Type I error. The different conclusions are summarized in the table below. Note that we will never know whether the null hypothesis is really true or false (i.e., we will never know which row of the following table reflects reality).

Table - Conclusions in Test of Hypothesis

                     Do Not Reject H 0       Reject H 0
H 0 is True          Correct Decision        Type I Error
H 0 is False         Type II Error           Correct Decision

In the first step of the hypothesis test, we select a level of significance, α, and α= P(Type I error). Because we purposely select a small value for α, we control the probability of committing a Type I error. For example, if we select α=0.05, and our test tells us to reject H 0 , then there is a 5% probability that we commit a Type I error. Most investigators are very comfortable with this and are confident when rejecting H 0 that the research hypothesis is true (as it is the more likely scenario when we reject H 0 ).

When we run a test of hypothesis and decide not to reject H 0 (e.g., because the test statistic is below the critical value in an upper tailed test) then either we make a correct decision because the null hypothesis is true or we commit a Type II error. Beta (β) represents the probability of a Type II error and is defined as follows: β=P(Type II error) = P(Do not Reject H 0 | H 0 is false). Unfortunately, we cannot choose β to be small (e.g., 0.05) to control the probability of committing a Type II error because β depends on several factors including the sample size, α, and the research hypothesis. When we do not reject H 0 , it may be very likely that we are committing a Type II error (i.e., failing to reject H 0 when in fact it is false). Therefore, when tests are run and the null hypothesis is not rejected we often make a weak concluding statement allowing for the possibility that we might be committing a Type II error. If we do not reject H 0 , we conclude that we do not have significant evidence to show that H 1 is true. We do not conclude that H 0 is true.

Note: The most common reason for a Type II error is a small sample size.
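To make the dependence of β on sample size concrete, here is a small sketch for an upper-tailed Z test with a known standard deviation. All of the numbers (μ0 = 100, μ1 = 105, σ = 15, and the sample sizes) are hypothetical and only illustrate how β shrinks as n grows.

# Sketch: beta = P(do not reject H0 | true mean is mu1) for an upper-tailed Z test.
# All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

def beta_upper_tailed(mu0, mu1, sigma, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha)               # reject H0 if Z > z_crit
    shift = (mu1 - mu0) / (sigma / sqrt(n))    # how far the true mean shifts Z
    return norm.cdf(z_crit - shift)            # P(Z <= z_crit when the true mean is mu1)

for n in (10, 40, 160):
    print(n, round(beta_upper_tailed(mu0=100, mu1=105, sigma=15, n=n), 3))
# beta shrinks (power grows) as n increases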




One Tailed Test or Two in Hypothesis Testing; One Tailed Distribution Area

Contents (click to skip to that section):

  • Alpha levels
  • When should you use either test?
  • One tailed distribution (how to find the area)

One tailed test or two in Hypothesis Testing: Overview

[Image: one-tailed vs. two-tailed rejection regions]

In hypothesis testing, you are asked to decide if a claim is true or not. For example, if someone says “all Floridians have a 50% increased chance of melanoma”, it’s up to you to decide if this claim holds merit. One of the first steps is to look up a z-score, and in order to do that, you need to know if it’s a one-tailed test or two. You can figure this out in just a couple of steps.

One tailed test or two in Hypothesis Testing: Steps

If you’re lucky enough to be given a picture, you’ll be able to tell if your test is one-tailed or two-tailed by comparing it to the image above. However, most of the time you’re given questions, not pictures. So it’s a matter of deciphering the problem and picking out the important piece of information. You’re basically looking for keywords like equals , more than , or less than .

Example question #1: A government official claims that the dropout rate for local schools is 25% . Last year, 190 out of 603 students dropped out. Is there enough evidence to reject the government official’s claim?

Example question #2: A government official claims that the dropout rate for local schools is less than 25%. Last year, 190 out of 603 students dropped out. Is there enough evidence to reject the government official’s claim?

Example question #3: A government official claims that the dropout rate for local schools is greater than 25%. Last year, 190 out of 603 students dropped out. Is there enough evidence to reject the government official’s claim?

Step 1: Read the question.

Step 2: Rephrase the claim in the question with an equation.

  • In example question #1, Drop out rate = 25%
  • In example question #2, Drop out rate < 25%
  • In example question #3, Drop out rate > 25%.

Step 3: If step 2 has an equals sign in it, this is a two-tailed test. If it has > or < it is a one-tailed test.
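To see where the classification leads, here is a sketch that carries the dropout-rate numbers (190 out of 603, claimed rate 25%) through a one-proportion Z test and reports the left-tailed, right-tailed, and two-tailed p-values. The questions above only ask you to identify the type of test, so treat this as an optional extension.

# One-proportion Z test for the dropout-rate examples above (optional extension).
from math import sqrt
from scipy.stats import norm

p0 = 0.25                 # claimed dropout rate
x, n = 190, 603           # observed dropouts and total students
p_hat = x / n             # sample dropout rate

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

p_left  = norm.cdf(z)                   # example #2: Ha: rate < 25%
p_right = 1 - norm.cdf(z)               # example #3: Ha: rate > 25%
p_two   = 2 * (1 - norm.cdf(abs(z)))    # example #1: Ha: rate != 25%

print(round(z, 2), round(p_left, 4), round(p_right, 4), round(p_two, 4))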



One Tailed Test or Two: Onto some more technical stuff

The above should have given you a brief overview of the differences between one-tailed tests and two-tailed tests. For the very beginning of your stats class, that’s probably all the information you need to get by. But once you hit ANOVA and regression analysis , things get a little more challenging.

1. Alpha levels

Alpha levels (sometimes just called “significance levels”) are used in hypothesis tests ; it is the probability of making the wrong decision when the null hypothesis is true. A one-tailed test has the entire 5% of the alpha level in one tail (in either the left, or the right tail). A two-tailed test splits your alpha level in half (as in the image to the left).

Let’s say you’re working with the standard alpha level of 0.05 (5%). A two-tailed test will have half of this (2.5%) in each tail. Very simply, the hypothesis test might go like this:

  • A null hypothesis might state that the mean = x . You’re testing if the mean is way above this or way below.
  • You run a t-test , which churns out a t-statistic .
  • If this test statistic falls in the top 2.5% or bottom 2.5% of its probability distribution (in this case, the t-distribution ), you would reject the null hypothesis .

The “cut off” areas created by your alpha levels are called rejection regions. These are where you would reject the null hypothesis if your test statistic happens to fall into one of them. The terms “one tailed” and “two tailed” can more precisely be defined as referring to where your rejection regions are located.
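The bullets above sketch this procedure in words; here is a minimal Python version of the same two-tailed one-sample t-test, rejecting H0 when the statistic lands in either 2.5% rejection region. The data and hypothesized mean are made up.

# Two-tailed one-sample t-test with an explicit rejection-region check.
# The data and the hypothesized mean are hypothetical.
import numpy as np
from scipy import stats

x = np.array([9.1, 10.4, 11.2, 8.7, 10.9, 9.8, 10.1, 11.5])
mu0 = 10.0
alpha = 0.05

t_stat, p_two_tailed = stats.ttest_1samp(x, mu0)        # two-tailed by default
t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)      # cut-off for each tail

reject = abs(t_stat) > t_crit        # equivalent to p_two_tailed < alpha
print(round(t_stat, 3), round(p_two_tailed, 3), round(t_crit, 3), reject)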

A one-tailed test is where you are only interested in one direction. If a mean is x, you might want to know if a set of results is more than x or less than x. A one-tailed test is more powerful than a two-tailed test, as you aren’t considering an effect in the opposite direction.


3. When Should You Use a One-Tailed Test?

In the above examples, you were given specific wording like “greater than” or “less than.” Sometimes you, the researcher, do not have this information and you have to choose the test.

For example, you develop a drug which you think is just as effective as a drug already on the market (it also happens to be cheaper). You could run a two-tailed test (to test that it is more effective and to also check that it is less effective). But you don’t really care about it being more effective, just that it isn’t any less effective (after all, your drug is cheaper). You can run a one-tailed test to check that your drug is at least as effective as the existing drug.

On the other hand, it would be inappropriate (and perhaps, unethical) to run a one-tailed test for this scenario in the opposite direction (i.e. to show the drug is more effective). This sounds reasonable until you consider there may be certain circumstances where the drug is less effective. If you fail to test for that, your research will be useless.

Consider both directions when deciding if you should run a one-tailed test or two. If you can skip one tail and it’s not irresponsible or unethical to do so, then you can run a one-tailed test.

One tailed Test or Two: How to find the area of a one-tailed distribution: Steps

There are a few ways to find the area under a one tailed distribution curve. The easiest, by far, is looking up the value in a table like the z-table . A z-table gives you percentages, which represent the area under a curve . For example, a table value of 0.5000 is 50% of the area and 0.2000 is 20% of the area.

If you are looking for other area problems*, see the normal distribution curve index . The index lists seven possible types of area, including two tailed, one tailed, and areas to the left and right of z.

*You can also calculate areas with integral calculus . See The Area Problem .

Note : In order to use a z-table , you need to split your z-value up into decimal places (e.g. tenths and hundredths). For example, if you are asked to find the area in a one tailed distribution with a z-value of 0.21, split this into tenths (0.2) and hundredths (0.01).

One tailed distribution: Steps for finding the area in a z-table

Step 1: Look up your z-score in the z-table . Looking up the value means finding the intersection of your two decimals (see note above). For example, if you are asked to find the area in a one tailed distribution to the left of z = -0.46, look up 0.46 in the table (note: ignore negative values. If you have a negative value, use its absolute value ). The table below shows that the value in the intersection for 0.46 is .1772. This figure was obtained by looking up 0.4 in the left hand column and 0.06 in the top row.

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.0000 0.0040 0.0080 0.0120 0.0160 0.0199 0.0239 0.0279 0.0319 0.0359
0.1 0.0398 0.0438 0.0478 0.0517 0.0557 0.0596 0.0636 0.0675 0.0714 0.0753
0.2 0.0793 0.0832 0.0871 0.0910 0.0948 0.0987 0.1026 0.1064 0.1103 0.1141
0.3 0.1179 0.1217 0.1255 0.1293 0.1331 0.1368 0.1406 0.1443 0.1480 0.1517
0.4 0.1554 0.1591 0.1628 0.1664 0.1700 0.1736 0.1772 0.1808 0.1844 0.1879
0.5 0.1915 0.1950 0.1985 0.2019 0.2054 0.2088 0.2123 0.2157 0.2190 0.2224

Step 2: Take the area you just found in Step 1 and add .5000. That’s because the area in the right-hand z-table is the area between the mean and the z-score. You want the entire area up to that point, so: .5000 + .1772 = .6772.

Step 3: Subtract from 1 to get the tail area: 1 – .6772 = 0.3228.

That’s it!
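If you have software handy, the same tail area can be read directly from the normal CDF instead of the z-table; for example, with SciPy:

# Checking the hand calculation above: area to the left of z = -0.46.
from scipy.stats import norm

print(round(norm.cdf(-0.46), 4))   # 0.3228, matching .5000 + .1772 = .6772, then 1 - .6772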



FAQ: What are the differences between one-tailed and two-tailed tests?

When you conduct a test of statistical significance, whether it is from a correlation, an ANOVA, a regression or some other kind of test, you are given a p-value somewhere in the output.  If your test statistic is symmetrically distributed, you can select one of three alternative hypotheses. Two of these correspond to one-tailed tests and one corresponds to a two-tailed test.  However, the p-value presented is (almost always) for a two-tailed test.  But how do you choose which test?  Is the p-value appropriate for your test? And, if it is not, how can you calculate the correct p-value for your test given the p-value in your output?  

What is a two-tailed test?

First let’s start with the meaning of a two-tailed test. If you are using a significance level of 0.05, a two-tailed test allots half of your alpha to testing the statistical significance in one direction and half of your alpha to testing statistical significance in the other direction. This means that .025 is in each tail of the distribution of your test statistic. When using a two-tailed test, regardless of the direction of the relationship you hypothesize, you are testing for the possibility of the relationship in both directions. For example, we may wish to compare the mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both if the mean is significantly greater than x and if the mean is significantly less than x. The mean is considered significantly different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability distribution, resulting in a p-value less than 0.05.

What is a one-tailed test?

Next, let’s discuss the meaning of a one-tailed test.  If you are using a significance level of .05, a one-tailed test allots all of your alpha to testing the statistical significance in the one direction of interest.  This means that .05 is in one tail of the distribution of your test statistic. When using a one-tailed test, you are testing for the possibility of the relationship in one direction and completely disregarding the possibility of a relationship in the other direction.  Let’s return to our example comparing the mean of a sample to a given value x using a t-test.  Our null hypothesis is that the mean is equal to x . A one-tailed test will test either if the mean is significantly greater than x or if the mean is significantly less than x , but not both. Then, depending on the chosen tail, the mean is significantly greater than or less than x if the test statistic is in the top 5% of its probability distribution or bottom 5% of its probability distribution, resulting in a p-value less than 0.05.  The one-tailed test provides more power to detect an effect in one direction by not testing the effect in the other direction. A discussion of when this is an appropriate option follows.   

When is a one-tailed test appropriate?

Because the one-tailed test provides more power to detect an effect, you may be tempted to use a one-tailed test whenever you have a hypothesis about the direction of an effect. Before doing so, consider the consequences of missing an effect in the other direction.  Imagine you have developed a new drug that you believe is an improvement over an existing drug.  You wish to maximize your ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the possibility that the new drug is less effective than the existing drug.  The consequences in this example are extreme, but they illustrate a danger of inappropriate use of a one-tailed test.

So when is a one-tailed test appropriate? If you consider the consequences of missing an effect in the untested direction and conclude that they are negligible and in no way irresponsible or unethical, then you can proceed with a one-tailed test. For example, imagine again that you have developed a new drug. It is cheaper than the existing drug and, you believe, no less effective. In testing this drug, you are only interested in testing whether it is less effective than the existing drug. You do not care if it is significantly more effective. You only wish to show that it is not less effective. In this scenario, a one-tailed test would be appropriate.

When is a one-tailed test NOT appropriate?

Choosing a one-tailed test for the sole purpose of attaining significance is not appropriate.  Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how "close" to significant the two-tailed test was.  Using statistical tests inappropriately can lead to invalid results that are not replicable and highly questionable–a steep price to pay for a significance star in your results table!   

Deriving a one-tailed test from two-tailed output

The default among statistical packages performing tests is to report two-tailed p-values.  Because the most commonly used test statistic distributions (standard normal, Student’s t) are symmetric about zero, most one-tailed p-values can be derived from the two-tailed p-values.   

Below, we have the output from a two-sample t-test in Stata.  The test is comparing the mean male score to the mean female score.  The null hypothesis is that the difference in means is zero.  The two-sided alternative is that the difference in means is not zero.  There are two one-sided alternatives that one could opt to test instead: that the male score is higher than the female score (diff  > 0) or that the female score is higher than the male score (diff < 0).  In this instance, Stata presents results for all three alternatives.  Under the headings Ha: diff < 0 and Ha: diff > 0 are the results for the one-tailed tests. In the middle, under the heading Ha: diff != 0 (which means that the difference is not equal to 0), are the results for the two-tailed test. 

Two-sample t test with equal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
    male |      91    50.12088    1.080274    10.30516    47.97473    52.26703
  female |     109    54.99083    .7790686    8.133715    53.44658    56.53507
---------+--------------------------------------------------------------------
combined |     200      52.775    .6702372    9.478586    51.45332    54.09668
---------+--------------------------------------------------------------------
    diff |           -4.869947    1.304191               -7.441835   -2.298059
------------------------------------------------------------------------------
Degrees of freedom: 198

Ho: mean(male) - mean(female) = diff = 0

   Ha: diff < 0             Ha: diff != 0             Ha: diff > 0
     t = -3.7341              t = -3.7341               t = -3.7341
 P < t =  0.0001          P > |t| =  0.0002          P > t =  0.9999

Note that the test statistic, -3.7341, is the same for all of these tests. The two-tailed p-value is P > |t|. This can be rewritten as P(> 3.7341) + P(< -3.7341). Because the t-distribution is symmetric about zero, these two probabilities are equal: P > |t| = 2 * P(< -3.7341). Thus, we can see that the two-tailed p-value is twice the one-tailed p-value for the alternative hypothesis that (diff < 0). The other one-tailed alternative hypothesis has a p-value of P(> -3.7341) = 1 - P(< -3.7341) = 1 - 0.0001 = 0.9999. So, depending on the direction of the one-tailed hypothesis, its p-value is either 0.5*(two-tailed p-value) or 1 - 0.5*(two-tailed p-value) if the test statistic is symmetrically distributed about zero.
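You can verify those relationships numerically from the Stata output's test statistic and degrees of freedom (t = -3.7341, df = 198) using the t distribution; a SciPy sketch:

# Verifying the one-tailed / two-tailed relationship for t = -3.7341, df = 198.
from scipy.stats import t

t_stat, df = -3.7341, 198
p_lower = t.cdf(t_stat, df)              # Ha: diff < 0   -> ~0.0001
p_two   = 2 * t.cdf(-abs(t_stat), df)    # Ha: diff != 0  -> ~0.0002
p_upper = 1 - t.cdf(t_stat, df)          # Ha: diff > 0   -> ~0.9999

print(round(p_lower, 4), round(p_two, 4), round(p_upper, 4))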

In this example, the two-tailed p-value suggests rejecting the null hypothesis of no difference. Had we opted for the one-tailed test of (diff > 0), we would fail to reject the null because of our choice of tails. 

The output below is from a regression analysis in Stata.  Unlike the example above, only the two-sided p-values are presented in this output.

      Source |       SS       df       MS              Number of obs =     200
-------------+------------------------------           F(  2,   197) =   46.58
       Model |  7363.62077     2  3681.81039           Prob > F      =  0.0000
    Residual |  15572.5742   197  79.0486001           R-squared     =  0.3210
-------------+------------------------------           Adj R-squared =  0.3142
       Total |   22936.195   199  115.257261           Root MSE      =  8.8909

------------------------------------------------------------------------------
       socst |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     science |   .2191144   .0820323     2.67   0.008     .0573403    .3808885
        math |   .4778911   .0866945     5.51   0.000     .3069228    .6488594
       _cons |   15.88534   3.850786     4.13   0.000     8.291287    23.47939
------------------------------------------------------------------------------

For each regression coefficient, the tested null hypothesis is that the coefficient is equal to zero.  Thus, the one-tailed alternatives are that the coefficient is greater than zero and that the coefficient is less than zero. To get the p-value for the one-tailed test of the variable science having a coefficient greater than zero, you would divide the .008 by 2, yielding .004 because the effect is going in the predicted direction. This is P(>2.67). If you had made your prediction in the other direction (the opposite direction of the model effect), the p-value would have been 1 – .004 = .996.  This is P(<2.67). For all three p-values, the test statistic is 2.67. 
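The same conversion can be written as a small helper function. This is only a sketch of the rule described above (halve the two-tailed p-value when the estimated coefficient goes in the predicted direction, otherwise use one minus that half); the function name is ours, not part of any package.

# Converting a two-tailed regression p-value into a one-tailed p-value.
def one_tailed_p(two_tailed_p, coef, predicted_positive=True):
    """Halve the p-value if the estimated effect goes in the predicted direction,
    otherwise return 1 minus half the two-tailed p-value."""
    effect_in_predicted_direction = (coef > 0) == predicted_positive
    half = two_tailed_p / 2
    return half if effect_in_predicted_direction else 1 - half

# science coefficient from the output above: coef = .2191144, two-tailed p = .008
print(one_tailed_p(0.008, 0.2191144, predicted_positive=True))    # 0.004
print(one_tailed_p(0.008, 0.2191144, predicted_positive=False))   # 0.996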
