Welcome to our automated Critical Value Calculator, where you can quickly estimate the critical value for one-tailed and two-tailed tests. It works for any test statistic with a known distribution, e.g. the Z-test, t-test, chi-square test, and F-test.

**Ways to reject or accept the Null Hypothesis**

There are two ways to assess in hypothesis testing whether there is enough evidence from the study to reject H_{0} or to accept H_{0}. The most common is to compare the p-value with a pre-specified significance level alpha (α), where α is the probability of rejecting H_{0} when H_{0} is actually true. Alternatively, the calculated value of the test statistic can be compared with the critical value. In this article, we discuss the critical value approach.

**What is a Critical Value?**

A critical value is a line drawn at some value on a chart of the test distribution that separates the chart into sections. One or two of these sections form the “rejection region”; if the test statistic falls within that region, we reject the null hypothesis.

A critical value marks a point on the distribution of the test statistic against which we decide whether to reject or accept the null hypothesis. If the absolute value of the test statistic is greater than the critical value, you can declare statistical significance and reject the null hypothesis.

The critical value approach consists of checking whether the value of the test statistic produced by our sample falls into the so-called rejection region (or critical region), the region where the test statistic is highly unlikely to lie if the null hypothesis is true. A critical value is a single cut-off in a one-tailed test, or a pair of cut-offs in a two-tailed test, marking the boundary of the rejection region(s). In other words, critical values divide the range of the test statistic into a rejection region and a non-rejection region.

**Critical Value Definition**

Before estimating the test statistic and finding the critical value, we first need to know the distribution our test statistic follows under the assumption that the null hypothesis holds. Critical values are then chosen so that the probability of the test statistic being at least as extreme as the critical value, when the null hypothesis is true, equals the significance level α.

What “at least as extreme” means is determined by the alternative hypothesis. There is a single critical value in a one-tailed test and two critical values in a two-tailed test. In a one-tailed test, the critical value lies on either the left or the right side of the median of the distribution, while in a two-tailed test there is one on each side.

Critical values can be identified in any distribution as the point(s) for which the tail area under the density curve equals α:

- Right-tailed test: the area under the density curve from the critical value to the right end equals α.
- Left-tailed test: the area under the density curve from the left end to the critical value equals α.
- Two-tailed test: the area under the density curve in the two tails beyond the critical values totals α (α/2 in each tail).

**How to calculate critical values?**

The quantile function Q, the inverse of the cumulative distribution function (CDF) of the test statistic's distribution, is used in the critical value formulae:

Q = CDF⁻¹

Once the value of α is selected, the critical regions are:

- **two-tailed test**: (-∞, Q(α/2)] ∪ [Q(1 – α/2), ∞)
- **right-tailed test**: [Q(1 – α), ∞)
- **left-tailed test**: (-∞, Q(α)]

When the distribution in question is **symmetric about 0**, the two-tailed test's critical values are symmetric as well:

Q(1 – α/2) = -Q(α/2)
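As a quick illustration, here is a minimal sketch using only Python's standard library (`statistics.NormalDist`, no external packages) that computes the two-tailed cutoffs for N(0,1) and verifies the symmetry above:

```python
from statistics import NormalDist  # standard library, Python 3.8+

alpha = 0.05
Q = NormalDist().inv_cdf  # quantile function: the inverse of the CDF

# Two-tailed cutoffs for the standard normal N(0,1)
lower = Q(alpha / 2)        # ≈ -1.96
upper = Q(1 - alpha / 2)    # ≈ +1.96

# Because N(0,1) is symmetric about 0, the cutoffs mirror each other
assert abs(upper + lower) < 1e-12
print(round(upper, 2))  # → 1.96
```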

Unfortunately, the probability distributions most popular in hypothesis testing have rather complicated CDFs, so finding critical values by hand would require mathematical tables or advanced software. In such cases, our critical value calculator is, of course, the best alternative.

**How to use our Critical Value Calculator?**

Our critical value calculator lets you find critical values even for complicated distributions, so you no longer need to worry about them. Just follow the steps below.

- Tell us the distribution of the test statistic under your null hypothesis: is it the standard normal N(0,1), t-Student, chi-squared, or F-distribution? If you are not sure, read this full post to learn which distribution applies to your test.
- Choose your alternative hypothesis and hence the type of test: right-tailed, left-tailed, or two-tailed. (In most cases the alternative is one-sided, e.g. the parameter is greater than or less than the population value, giving a one-tailed test; if it only states that the parameter differs from that value, the test is two-tailed.)
- If required, specify the degrees of freedom of the test statistic's distribution. Check the definition of the test you are conducting if you are not sure.
- Set the significance level (α). In hypothesis testing it is most often set at 5% (0.05), but you can adjust it to your needs.
- The calculator will then display the critical value(s) as well as the rejection region(s).
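The steps above can be sketched as a small helper function. This is a hypothetical illustration, not the calculator's actual code, and it assumes the `scipy` library is available for the quantile (`ppf`) functions:

```python
from scipy import stats  # assumed available; ppf is the quantile function


def critical_value(dist, tail, alpha, **df):
    """Return the critical value(s) for the chosen distribution and tail.

    dist: one of "z", "t", "chi2", "f"; df holds degrees of freedom,
    e.g. df=10 for t or chi2, dfn=3, dfd=20 for F.
    """
    rv = {"z": stats.norm(),
          "t": stats.t(df.get("df", 1)),
          "chi2": stats.chi2(df.get("df", 1)),
          "f": stats.f(df.get("dfn", 1), df.get("dfd", 1))}[dist]
    if tail == "right":
        return rv.ppf(1 - alpha)
    if tail == "left":
        return rv.ppf(alpha)
    # two-tailed: a pair of cutoffs bounding the two rejection regions
    return rv.ppf(alpha / 2), rv.ppf(1 - alpha / 2)


print(critical_value("z", "right", 0.05))       # ≈ 1.645
print(critical_value("t", "two", 0.05, df=10))  # ≈ (-2.228, 2.228)
```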

The advanced mode option of this critical value calculator gives you results with higher precision.

**Z Critical Values**

If our test statistic follows (at least approximately) the standard normal N(0,1) distribution, we use the Z (standard normal) option.

In the formulae below, u denotes the quantile function of the standard normal distribution N(0,1):

- two-tailed Z critical value: ±u(1 – α/2)
- right-tailed Z critical value: u(1 – α)
- left-tailed Z critical value: u(α)
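These formulae can be checked with Python's standard library, where `NormalDist().inv_cdf` plays the role of the standard normal quantile function:

```python
from statistics import NormalDist  # standard library, Python 3.8+

u = NormalDist().inv_cdf  # quantile function of N(0,1)
alpha = 0.05

two_tailed = u(1 - alpha / 2)   # ≈ 1.96: reject if |Z| exceeds this
right_tailed = u(1 - alpha)     # ≈ 1.645: reject if Z exceeds this
left_tailed = u(alpha)          # ≈ -1.645: reject if Z is below this
print(round(two_tailed, 3), round(right_tailed, 3), round(left_tailed, 3))
```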

**t Critical Values**

We use t critical values when our test statistic follows the t-Student distribution. This distribution is similar in shape to the standard normal distribution but has fatter tails, with the exact shape depending on the degrees of freedom. If the sample size is large (> 30), it is difficult to distinguish the t-Student distribution from the standard normal distribution.

In the formulae below, Q_{t,d} denotes the quantile function of the t-Student distribution with d degrees of freedom:

- two-tailed t critical values: ±Q_{t,d}(1 – α/2)
- right-tailed t critical value: Q_{t,d}(1 – α)
- left-tailed t critical value: Q_{t,d}(α)
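A minimal sketch of these formulae, assuming the `scipy` library is available (`scipy.stats.t.ppf` is the t-Student quantile function):

```python
from scipy.stats import t  # assumed available; t.ppf is the quantile function

alpha, d = 0.05, 10  # example values: 5% significance, 10 degrees of freedom
two_tailed = t.ppf(1 - alpha / 2, d)   # ≈ 2.228, used as ±2.228
right_tailed = t.ppf(1 - alpha, d)     # ≈ 1.812
left_tailed = t.ppf(alpha, d)          # ≈ -1.812
print(two_tailed, right_tailed, left_tailed)
```

Note that the two-tailed value here (≈ 2.228) is noticeably larger than the corresponding Z value (≈ 1.96), reflecting the fatter tails at small degrees of freedom.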

**Chi-square Critical Values (χ²)**

We use χ² (chi-square) critical values when our test statistic follows the χ² (chi-square) distribution. For the most widely used χ² tests, we also need the number of degrees of freedom of the χ²-distribution of our test statistic.

The formulae for chi-square critical values are given below; Q_{χ²,d} is the quantile function of the χ² distribution with d degrees of freedom:

- two-tailed χ² critical values: Q_{χ²,d}(α/2) and Q_{χ²,d}(1 – α/2)
- right-tailed χ² critical value: Q_{χ²,d}(1 – α)
- left-tailed χ² critical value: Q_{χ²,d}(α)
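A minimal sketch of these formulae, again assuming `scipy` is available (`scipy.stats.chi2.ppf` is the χ² quantile function):

```python
from scipy.stats import chi2  # assumed available; chi2.ppf is the quantile function

alpha, d = 0.05, 5  # example values: 5% significance, 5 degrees of freedom
two_tailed = chi2.ppf(alpha / 2, d), chi2.ppf(1 - alpha / 2, d)  # ≈ (0.831, 12.833)
right_tailed = chi2.ppf(1 - alpha, d)                            # ≈ 11.070
left_tailed = chi2.ppf(alpha, d)                                 # ≈ 1.145
print(two_tailed, right_tailed, left_tailed)
```

Unlike the Z and t cases, the χ² distribution is not symmetric, so the two two-tailed cutoffs are not mirror images of each other.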

**F Critical Values**

Lastly, we use F critical values when our test statistic follows the F-distribution. This distribution has a pair of degrees of freedom.

Let us briefly explain where these degrees of freedom come from. Take two independent random variables X and Y that follow the χ²-distribution with d_{1} and d_{2} degrees of freedom, respectively. The ratio (X/d_{1})/(Y/d_{2}) then follows the F-distribution with (d_{1}, d_{2}) degrees of freedom. That is why we call d_{1} the numerator degrees of freedom and d_{2} the denominator degrees of freedom.

In the formulae below, Q_{F,d_{1},d_{2}} represents the quantile function of the F-distribution with (d_{1}, d_{2}) degrees of freedom:

- two-tailed F critical values: Q_{F,d_{1},d_{2}}(α/2) and Q_{F,d_{1},d_{2}}(1 – α/2)
- right-tailed F critical value: Q_{F,d_{1},d_{2}}(1 – α)
- left-tailed F critical value: Q_{F,d_{1},d_{2}}(α)
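A minimal sketch of these formulae, once more assuming `scipy` is available (`scipy.stats.f.ppf` is the F quantile function, taking the numerator and denominator degrees of freedom):

```python
from scipy.stats import f  # assumed available; f.ppf is the quantile function

alpha, d1, d2 = 0.05, 3, 20  # example: numerator df = 3, denominator df = 20
right_tailed = f.ppf(1 - alpha, d1, d2)                              # ≈ 3.098
left_tailed = f.ppf(alpha, d1, d2)                                   # ≈ 0.115
two_tailed = f.ppf(alpha / 2, d1, d2), f.ppf(1 - alpha / 2, d1, d2)
print(right_tailed)
```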

Below we list the most important tests in which a researcher uses the F-statistic. All of them are right-tailed:

- ANOVA (Analysis of Variance): used to test whether more than two group means are equal, assuming the groups have the same variance and follow a normal distribution. The degrees of freedom are (k – 1, n – k), where n is the total sample size and k is the number of groups.

- Overall significance in regression analysis. It uses the same degrees of freedom (k – 1, n – k), but here k is the number of variables.
- Comparison of two nested regression models. The test statistic follows the F-distribution with (k_{2} – k_{1}, n – k_{2}) degrees of freedom, where k_{1} is the number of variables in the smaller model, k_{2} the number of variables in the larger model, and n the sample size.
- Equality of variances in two normally distributed populations. There are (n – 1, m – 1) degrees of freedom, where n and m are the corresponding sample sizes.

**Why is the Critical Value Important?**

It is almost impossible to gather data on an entire population, so researchers collect data from a sample and then use hypothesis testing to check whether the sample is a credible representative of the population. Critical values are what make that hypothesis testing possible.