Critical Value in Statistics: Types, Examples, and Solutions
The critical value is a central tool in hypothesis testing and statistical inference. It is used to check whether the test statistic falls in the acceptance or rejection region, separating sample results that are consistent with the null hypothesis from those that are not.
Critical values are read from statistical tables known as distribution tables. In this post, we explain the term critical value along with its types, examples, and solutions.
What is the critical value?
In statistics, the critical value is the cut-off point that defines the acceptance and rejection regions for the null hypothesis. The decision depends on where the observed test statistic falls relative to the critical value:
- We reject the null hypothesis in favor of the alternative hypothesis if the observed test statistic falls outside the critical value range.
- We fail to reject the null hypothesis if the observed test statistic falls within the critical value range.
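The decision rule above can be sketched in a few lines of Python. This is a minimal illustration for a two-tailed z-test, using the standard library's `statistics.NormalDist` to obtain the critical value; the function name `two_tailed_decision` is ours, not a standard API.

```python
from statistics import NormalDist

def two_tailed_decision(z_stat, alpha=0.05):
    """Reject H0 when the observed statistic falls outside
    the critical value range (-z_crit, +z_crit)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # upper critical value
    return abs(z_stat) > z_crit                   # True -> reject H0

print(two_tailed_decision(2.3))  # outside ±1.96 -> True (reject H0)
print(two_tailed_decision(1.2))  # inside ±1.96 -> False (fail to reject)
```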
The critical value is a function of the level of significance, the degrees of freedom, and the distribution of the test statistic. Let us discuss these components briefly.
Level of significance
The level of significance is a frequently used term in hypothesis testing to find the starting point at which the null hypothesis can be rejected. To avoid incorrect results in statistical analysis, it is compulsory to choose a suitable level of significance.
It is denoted by alpha (α) and is also known as the alpha level. Two types of error can occur when using the critical value region to make a decision:
- Type I Error: rejecting the null hypothesis when it is actually true.
- Type II Error: failing to reject the null hypothesis when it is actually false.
The most commonly used levels of significance are 1% and 5% (0.01 and 0.05). This means that there is a 1% or 5% chance of rejecting the null hypothesis when it is actually true.
Degree of freedom
In statistics, the number of values in a hypothesis-testing calculation that are free to vary is called the degrees of freedom. This term is essential for calculating the critical values of the t-test, chi-square test, ANOVA, and regression analysis.
It is calculated from the sample size and the number of parameters estimated in the analysis. The general expression for the degrees of freedom (df) is "sample size minus the number of parameters being estimated".
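As a quick illustration of "sample size minus the number of parameters being estimated", here are the degrees of freedom for two common cases, assuming a hypothetical sample of 20 observations:

```python
n = 20  # hypothetical sample size

# One-sample t-test: one parameter (the mean) is estimated.
df_one_sample_t = n - 1

# Correlation / simple linear regression: two parameters
# (slope and intercept) are estimated.
df_correlation = n - 2

print(df_one_sample_t)  # 19
print(df_correlation)   # 18
```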
Distribution of test statistics
In statistical hypothesis testing, we use a test statistic to determine the probability of detecting a certain result if the null hypothesis is true. The distribution of the test statistic is important because it allows us to determine whether the observed test statistic is likely to have occurred by chance or whether it is unlikely enough to support rejecting the null hypothesis.
Types of Critical Value
There are various types of critical value. Here is a brief introduction to the types of critical values with formulas.
The critical value for the t-test
The mean of a single population and the difference between the expected values of two populations are estimated with the help of the critical value for the t-test. It is typically used when the sample size is small or the population standard deviation is unknown.
The t critical value depends on the level of significance and the degrees of freedom. As the degrees of freedom increase, the t-distribution approaches the normal distribution and the t critical value approaches the z critical value.
The formula for calculating the t critical value is:
| Test type | Formula |
| --- | --- |
| Two-tailed | ±Q_{t,d}(1 − α/2) |
| Right-tailed | Q_{t,d}(1 − α) |
| Left-tailed | Q_{t,d}(α) |
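The t quantile Q_{t,d} has no closed form, but it can be approximated with the standard library alone: integrate the t density numerically to get the CDF, then bisect to invert it. This is a self-written sketch (not a standard library routine); for production work a statistics package's quantile function would be used instead.

```python
import math

def t_pdf(x, df):
    # Student's t density, via log-gamma for numerical stability.
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2))
    c /= math.sqrt(df * math.pi)
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=4000):
    # Midpoint-rule integration of the pdf from 0 to |x|.
    h = abs(x) / steps
    area = sum(t_pdf((i + 0.5) * h, df) for i in range(steps)) * h
    return 0.5 + area if x >= 0 else 0.5 - area

def t_critical(alpha, df, two_tailed=True):
    # Invert the CDF by bisection to get Q_{t,d}(p).
    p = 1 - alpha / 2 if two_tailed else 1 - alpha
    lo, hi = 0.0, 50.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(t_critical(0.05, 18), 3))  # ~2.101, the tabled value
```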
The critical value for the z-test
The mean of a single population and the difference between the expected values of two populations are estimated with the help of the critical value for the z-test. It is typically used when the sample size is large and the population standard deviation is known.
Z critical value depends on the level of significance and is usually obtained from the standard normal distribution table.
The formula for calculating the z critical value is:
| Test type | Formula |
| --- | --- |
| Two-tailed | ±u(1 − α/2) |
| Right-tailed | u(1 − α) |
| Left-tailed | u(α) |
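The z critical values in the table above can be computed directly with the standard library's `statistics.NormalDist`, which provides the inverse CDF u(·) of the standard normal distribution. The helper function below is our own wrapper around it:

```python
from statistics import NormalDist

def z_critical(alpha, tail="two"):
    """Critical value(s) from the standard normal distribution."""
    nd = NormalDist()
    if tail == "two":
        return nd.inv_cdf(1 - alpha / 2)   # use as +/- this value
    if tail == "right":
        return nd.inv_cdf(1 - alpha)
    return nd.inv_cdf(alpha)               # left tail (negative value)

print(round(z_critical(0.05), 2))           # 1.96
print(round(z_critical(0.05, "right"), 3))  # 1.645
print(round(z_critical(0.05, "left"), 3))   # -1.645
```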
The critical value for the chi-square test
The critical value for the chi-square test is used to test hypotheses about the independence or goodness of fit of categorical data. The critical value for the chi-square test depends on the degrees of freedom and the level of significance.
To find the critical value, you can use a chi-square distribution table or a critical value calculator. For example, with a degree of freedom = 2 and significance level = α = 0.05, the critical value is 5.99.
This means that if the calculated chi-square statistic is greater than 5.99, we reject the null hypothesis at the 0.05 level of significance.
The formula for calculating the chi-square critical value is:
| Test type | Formula |
| --- | --- |
| Two-tailed | Q_{χ2,d}(α/2) and Q_{χ2,d}(1 − α/2) |
| Right-tailed | Q_{χ2,d}(1 − α) |
| Left-tailed | Q_{χ2,d}(α) |
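For the special case df = 2 used in the example above, the chi-square distribution has the simple CDF 1 − exp(−x/2), so the right-tailed critical value has the closed form −2·ln(α) and we can verify the tabled value 5.99 directly:

```python
import math

def chi2_critical_df2(alpha):
    # For df = 2, the chi-square CDF is 1 - exp(-x/2),
    # so solving 1 - exp(-x/2) = 1 - alpha gives x = -2*ln(alpha).
    return -2 * math.log(alpha)

print(round(chi2_critical_df2(0.05), 2))  # 5.99
print(round(chi2_critical_df2(0.01), 2))  # 9.21
```

For other degrees of freedom there is no such closed form and a distribution table or quantile function is needed.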
The critical value for the F-test
The critical value for the F-test is used to test the equality of variances as well as the equality of expected values across multiple groups. For instance, suppose we want to test whether the variances of two populations are equal.
We use the F-test and the critical value for the F-distribution to make this test. The critical value depends on the degrees of freedom for the numerator and denominator, which are the degrees of freedom associated with the variances of the two populations.
If the calculated F-value is greater than the critical value, we reject the null hypothesis and conclude that the variances are not equal.
The formula for calculating the F critical value is:
| Test type | Formula |
| --- | --- |
| Two-tailed | Q_{F,d1,d2}(α/2) and Q_{F,d1,d2}(1 − α/2) |
| Right-tailed | Q_{F,d1,d2}(1 − α) |
| Left-tailed | Q_{F,d1,d2}(α) |
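One way to see where F critical values come from, using only the standard library, is Monte Carlo simulation: an F statistic is a ratio of two independent chi-square variables divided by their degrees of freedom, so we can draw many such ratios and take the empirical (1 − α) quantile. This is a rough approximation for illustration only; a distribution table or a quantile function gives exact values.

```python
import random

def f_critical_mc(alpha, d1, d2, n=50_000, seed=1):
    """Approximate the right-tailed F critical value by simulating
    F = (chi2_d1 / d1) / (chi2_d2 / d2) and taking the empirical
    (1 - alpha) quantile of the samples."""
    rng = random.Random(seed)

    def chi2(df):
        # A chi-square variable is a sum of df squared standard normals.
        return sum(rng.gauss(0, 1) ** 2 for _ in range(df))

    samples = sorted((chi2(d1) / d1) / (chi2(d2) / d2) for _ in range(n))
    return samples[int((1 - alpha) * n)]

# The tabled value of Q_{F,5,10}(0.95) is about 3.33; the Monte Carlo
# estimate should land close to it.
print(round(f_critical_mc(0.05, 5, 10), 2))
```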
How to calculate critical values?
Example 1
A researcher wants to test whether the average score on a statistics test is greater than 75. A random sample of 30 students is selected from a large population, and the sample mean is 78.
The population standard deviation is known to be 5. Test the null hypothesis at the 5% significance level.
Solution:
Step 1: State the hypotheses.
The null hypothesis is that the mean score is less than or equal to 75 (H0: µ ≤ 75).
The alternative hypothesis is that the mean score is greater than 75 (H1: µ > 75).
Step 2: Take the level of significance and find the corresponding critical value.
The level of significance = α = 5% = 0.05
For a right-tailed z-test, the critical value = 1.645.
Step 3: Calculate the test statistic.
z = (x̄ − µ) / (σ / √n)
z = (78 − 75) / (5 / √30)
z ≈ 3.29
Since the calculated z-value of 3.29 is greater than the critical value of 1.645, we reject the null hypothesis and conclude that the mean score is greater than 75 at the 5% significance level.
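The three steps of Example 1 can be checked with a short script, using `statistics.NormalDist` for the right-tailed critical value:

```python
import math
from statistics import NormalDist

x_bar, mu0, sigma, n = 78, 75, 5, 30          # values from the example

z = (x_bar - mu0) / (sigma / math.sqrt(n))    # test statistic
z_crit = NormalDist().inv_cdf(0.95)           # right-tailed, alpha = 0.05

print(round(z, 2))       # 3.29
print(round(z_crit, 3))  # 1.645
print(z > z_crit)        # True -> reject H0
```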
Example 2:
A researcher wants to test whether there is a significant correlation between two variables, U and V. A random sample of 20 pairs of observations is collected, and the sample correlation coefficient is 0.75. Test the null hypothesis at the 5% significance level.
Solution:
Step 1: State the hypotheses.
The null hypothesis is that there is no correlation between U and V.
The alternative hypothesis is that there is a correlation between U and V.
Step 2: Take the level of significance and sample size, and find the degrees of freedom and corresponding critical value.
sample size = 20, which corresponds to df = 20 − 2 = 18 degrees of freedom.
significance level = 5% = 0.05
the critical value for a two-tailed t-test is ±2.101.
Step 3: Calculate the test statistic.
t = r × √(df / (1 − r²))
t = 0.75 × √(18 / (1 − 0.75²))
t ≈ 4.81
Since the absolute value of the calculated t-value (4.81) is greater than the critical value of 2.101, we reject the null hypothesis and conclude that there is a significant correlation between U and V at the 5% significance level.
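The arithmetic in Example 2 can likewise be verified with a few lines of Python:

```python
import math

r, n = 0.75, 20                        # values from the example
df = n - 2                             # two parameters estimated

t = r * math.sqrt(df / (1 - r ** 2))   # test statistic

print(df)           # 18
print(round(t, 2))  # 4.81
```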
Conclusion
The critical value is central to deciding whether to reject the null hypothesis. The t, z, chi-square, and F critical values each define the rejection region for their corresponding test, given the level of significance and the degrees of freedom.