Heteroskedasticity

Heteroskedasticity refers to the situation where the variance of the error term in a regression model is not constant across observations. When it is present, the coefficient estimates may still be unbiased, but the standard errors become unreliable, which can lead to invalid hypothesis tests and conclusions.

There are two types of heteroskedasticity: unconditional and conditional.

Unconditional heteroskedasticity occurs when the variance of the error term is not constant across observations, but the changes in variance are not related to the values of the predictor variables. It can be caused by factors such as the presence of outliers or a non-linear relationship between the dependent and independent variables. Because the changing variance is unrelated to the regressors, unconditional heteroskedasticity usually creates no major problems for statistical inference.
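
The following minimal sketch illustrates the idea with simulated data: the error variance shifts partway through the sample for reasons unrelated to the predictor. All variable names, coefficients, and sample sizes here are illustrative, not from any particular dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(1, 10, size=n)

# The error variance changes halfway through the sample for reasons unrelated
# to x -- an example of unconditional heteroskedasticity.
sigma = np.where(np.arange(n) < n // 2, 0.5, 2.0)
errors = rng.normal(0, sigma)
y = 2.0 + 1.5 * x + errors
```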

Conditional heteroskedasticity, on the other hand, occurs when the variance of the error term depends on the value of the predictor variables. For example, the variance of the error term may be larger for higher values of the predictor variable and smaller for lower values. This type of heteroskedasticity can be caused by a misspecified model or omitted variables.
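
To make the distinction concrete, the sketch below simulates conditional heteroskedasticity: the standard deviation of the error term is proportional to the predictor, so the error variance grows as the predictor grows. Again, the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(1, 10, size=n)

# The error standard deviation is proportional to x, so the error variance
# depends on the predictor -- the defining feature of conditional heteroskedasticity.
errors = rng.normal(0, 0.5 * x)
y = 2.0 + 1.5 * x + errors
```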

Overall, heteroskedasticity can have a significant impact on the results of a statistical analysis, so it is important to detect and correct for it in order to obtain reliable conclusions. Conditional heteroskedasticity can be detected using the Breusch-Pagan test and corrected using White-corrected standard errors (also known as heteroskedasticity-robust standard errors).
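
As an illustrative sketch (assuming the statsmodels library is available), the code below runs a Breusch-Pagan test on the residuals of an OLS fit and then refits the model with White-corrected (HC0) standard errors. The simulated data are the same hypothetical setup used above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Simulated data with conditional heteroskedasticity (error variance grows with x).
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=500)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5 * x)

X = sm.add_constant(x)                 # design matrix with an intercept
ols = sm.OLS(y, X).fit()               # conventional (non-robust) standard errors

# Breusch-Pagan test: regresses squared residuals on the regressors;
# a small p-value suggests conditional heteroskedasticity is present.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan LM statistic: {lm_stat:.2f}, p-value: {lm_pvalue:.4f}")

# Refit with White-corrected (HC0) standard errors.
robust = sm.OLS(y, X).fit(cov_type="HC0")
print("OLS standard errors:   ", ols.bse)
print("White-corrected errors:", robust.bse)
```

Note that White-corrected standard errors do not change the slope estimates themselves; they only adjust the standard errors so that t-tests and confidence intervals remain valid when conditional heteroskedasticity is present.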