
Much of hypothesis testing is concerned with making decisions about the null and alternate hypotheses. You collect the data, estimate the parameter, calculate a test statistic that summarizes the value of the parameter estimate, and then decide whether the value of the test statistic would be expected if the null hypothesis were true or the alternate hypothesis were true. In our case, we collect data on alcoholism in a limited number of twins (which we hope accurately represent the entire twin population) and decide whether the results we obtain better match the null hypothesis (no difference in rates) or the alternate hypothesis (higher rate in identical twins).
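To make this workflow concrete, here is a minimal sketch in Python of one standard way to compare two rates, a pooled two-proportion z-test. The twin counts are invented for illustration only; they do not come from any real study.

```python
from math import sqrt

# Hypothetical counts (invented for illustration): concordant pairs out of
# all pairs studied, for each twin type.
identical_concordant, identical_total = 60, 150   # 40 percent concordance
fraternal_concordant, fraternal_total = 45, 150   # 30 percent concordance

# Step 1: estimate the parameter of interest (the concordance rate) in each group.
p1 = identical_concordant / identical_total
p2 = fraternal_concordant / fraternal_total

# Step 2: summarize the estimates in a test statistic. Under the null
# hypothesis the two rates are equal, so we pool them to estimate the
# standard error of the observed difference.
pool = (identical_concordant + fraternal_concordant) / (identical_total + fraternal_total)
se = sqrt(pool * (1 - pool) * (1 / identical_total + 1 / fraternal_total))
z = (p1 - p2) / se

print(f"difference = {p1 - p2:.2f}, z = {z:.2f}")  # here: difference = 0.10, z = 1.82
```

The larger the statistic z, the less plausible the null hypothesis of equal rates; the rest of this section turns that intuition into a formal decision rule.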

Of course, there is always a chance that you have made the wrong decision, that you have interpreted your data incorrectly. In statistics, two types of errors can be made. A type I error occurs when you conclude in favor of the alternate hypothesis when the null hypothesis is really true. A type II error is the converse: you conclude in favor of the null hypothesis when the alternate hypothesis is really true. Thus a type I error is seeing something that is not there, and a type II error is failing to see something that is really there. In general, type I errors are considered worse than type II errors, since you do not want to spend time and resources following up on a finding that is not real.
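A short simulation makes the type I error concrete. The sketch below uses hypothetical numbers: it generates many datasets in which the null hypothesis is true, both twin types sharing the same 30 percent concordance rate, and counts how often a test at the 0.05 level wrongly declares a difference.

```python
import random
from math import sqrt
from statistics import NormalDist

def one_sided_p_value(x1, n1, x2, n2):
    """P-value for 'group 1 has the higher rate', via a pooled z-test."""
    pool = (x1 + x2) / (n1 + n2)
    se = sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 1 - NormalDist().cdf(z)

random.seed(0)
n, rate, alpha, trials = 150, 0.30, 0.05, 10_000
false_alarms = 0
for _ in range(trials):
    # Both groups are drawn from the SAME rate, so the null hypothesis holds.
    x1 = sum(random.random() < rate for _ in range(n))
    x2 = sum(random.random() < rate for _ in range(n))
    if one_sided_p_value(x1, n, x2, n) < alpha:
        false_alarms += 1  # we "saw" a difference that is not there: a type I error

print(f"type I error rate ≈ {false_alarms / trials:.3f}")  # close to 0.05
```

Running the same simulation with two genuinely different rates would instead count type II errors: the trials in which a real difference goes undetected.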

How can we decide if we have made the right choice about accepting or rejecting our null hypothesis? These statistical decisions are often made by calculating a probability value, or p-value. P-values for many test statistics are easily calculated using a computer, thanks to the theoretical work of mathematical statisticians such as Ronald Fisher and Jerzy Neyman.

A p-value is simply the probability of observing a test statistic as large as or larger than the one computed from your data, if the null hypothesis were really true. It is common in many statistical analyses to accept a type I error rate of one in twenty, or 0.05. This means that, when the null hypothesis is true, there is less than a one-in-twenty chance of rejecting it by mistake.
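With a computer, this definition is a one-liner for any test statistic whose null distribution is known. A minimal sketch, assuming a statistic that follows a standard normal distribution under the null hypothesis (the statistic's value here is made up):

```python
from statistics import NormalDist

z_observed = 1.92  # hypothetical test statistic, for illustration
# One-sided p-value: the probability, under the null hypothesis, of seeing
# a statistic at least this large.
p_value = 1 - NormalDist().cdf(z_observed)
print(f"p = {p_value:.3f}")  # about 0.027, which falls below the 0.05 cutoff
```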

To see what this means, let us imagine that our data show that identical twins have a 10 percent greater likelihood of being concordant for alcoholism than fraternal twins. Is this difference large enough that we should reject the null hypothesis of no difference between twin types? From the number of individuals tested and the variance in the data, we can estimate the probability of obtaining a difference this large by chance alone if the null hypothesis were true. If this probability is less than 0.05, that is, if the likelihood of obtaining the difference by chance is less than one in twenty, then we reject the null hypothesis in favor of the alternate hypothesis.
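Under the same hypothetical numbers used earlier (40 versus 30 percent concordance in 150 pairs of each twin type, a 10-point difference), the full decision looks like this:

```python
from math import sqrt
from statistics import NormalDist

x1, n1 = 60, 150  # identical twins: 40 percent concordant (hypothetical)
x2, n2 = 45, 150  # fraternal twins: 30 percent concordant (hypothetical)

pool = (x1 + x2) / (n1 + n2)
se = sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
z = (x1 / n1 - x2 / n2) / se

# Probability of a difference at least this large by chance alone under the null.
p_value = 1 - NormalDist().cdf(z)
print(f"p = {p_value:.3f}")  # about 0.035 for these made-up counts

if p_value < 0.05:
    print("Reject the null hypothesis of no difference between twin types.")
else:
    print("Do not reject the null hypothesis.")
```

With these invented counts, the chance of seeing a 10-point difference under the null hypothesis is roughly 1 in 29, below the one-in-twenty threshold, so we would reject the null hypothesis.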

Prior to carrying out a scientific investigation and a statistical analysis of the resulting data, it is possible to get a feel for your chances of seeing something if it is really there to see. This is referred to as the power of a study, and it is simply one minus the probability of making a type II error. A commonly accepted power for a study is 80 percent or greater. That is, you would like to know that you have at least an 80 percent chance of seeing something if it is really there. Increasing the size of the random sample drawn from the population is perhaps the best way to improve the power of a study: the larger the sample, the more likely you are to detect an effect that is really there.
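Power is easiest to get a feel for by simulation. The sketch below (hypothetical rates again: 40 versus 30 percent concordance) repeatedly generates data in which the alternate hypothesis really is true and counts how often the test detects the difference, for several sample sizes:

```python
import random
from math import sqrt
from statistics import NormalDist

def detects(n, p_identical=0.40, p_fraternal=0.30, alpha=0.05):
    """Simulate one study of n pairs per group; return True if the test rejects."""
    x1 = sum(random.random() < p_identical for _ in range(n))
    x2 = sum(random.random() < p_fraternal for _ in range(n))
    pool = (x1 + x2) / (2 * n)
    se = sqrt(pool * (1 - pool) * (2 / n))
    z = (x1 / n - x2 / n) / se
    return 1 - NormalDist().cdf(z) < alpha

random.seed(0)
trials = 2_000
for n in (100, 150, 300, 600):
    power = sum(detects(n) for _ in range(trials)) / trials
    print(f"n = {n:>3} pairs per group: power ≈ {power:.2f}")
```

For these made-up rates, the simulated power climbs with sample size and crosses the conventional 80 percent mark somewhere around 300 pairs per group, which is exactly why enlarging the sample is the standard remedy for an underpowered study.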
