Statistics is a set of mathematical procedures used to organize, summarize, and interpret information. As the primary means of analyzing and interpreting quantitative data, statistics help us organize and understand the information contained in our data, communicate our results and describe characteristics of our data to others, and answer the questions that motivate the research.
The normal distribution (also called the Gaussian distribution or the bell curve) describes many continuous variables (e.g., the distribution of IQ scores in the population). The bell-shaped curve represents a probability function showing that, for a normally distributed variable, values close to the mean are the most likely to occur in the population; as the distance from the mean increases (on either side), the probability of occurrence decreases. For instance, as IQ scores move farther above or below the mean, they become progressively less common in the general population. The normal distribution is symmetric and has properties that allow predictions and the computation of standardized scores, enabling researchers to analyze and compare scores and to determine the proportion of individuals who fall above or below a given score. The lecture describes the properties of normal distributions, demonstrates how to determine location using percentiles and z-scores, shows how to apply the 68-95-99.7 rule to determine proportions, and works through calculations of standardized scores.
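The calculations described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the lecture materials: the function names and the IQ figures (mean 100, standard deviation 15, the conventional scaling) are assumptions chosen for the example.

```python
import math

def z_score(x, mu, sigma):
    """Standardized score: how many standard deviations x lies from the mean."""
    return (x - mu) / sigma

def proportion_below(z):
    """Proportion of a normal population falling below a given z-score (the normal CDF)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical IQ example: a score of 130 with mean 100 and SD 15.
z = z_score(130, mu=100, sigma=15)   # z = 2.0
below = proportion_below(z)          # about 0.977, i.e. the 98th percentile

# The 68-95-99.7 rule: proportion within 1, 2, and 3 SDs of the mean.
for k in (1, 2, 3):
    within = proportion_below(k) - proportion_below(-k)
    print(f"within {k} SD: {within:.3f}")
```

Running the loop reproduces the rule's three figures (0.683, 0.954, 0.997), showing why the rule is a quick approximation to the exact proportions the normal curve provides.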
Hypothesis testing is the common goal shared by all inferential statistics. This presentation discusses the logic of hypothesis testing and important related concepts, including errors, alpha, effect sizes, and statistical power, and walks through a detailed example of the hypothesis-testing process. Although the example is set within the context of z-scores, the concepts described generalize to other inferential techniques.
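The z-score-based testing logic described above can be sketched as a one-sample z-test. This is an illustrative sketch, not the presentation's own example: the sample values (n = 25, sample mean 106) and the population parameters (mean 100, SD 15) are hypothetical numbers chosen for demonstration.

```python
import math

def one_sample_z_test(sample_mean, mu0, sigma, n):
    """z statistic and two-tailed p-value for H0: mu = mu0, with sigma known."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # Two-tailed p-value: probability of a result at least this extreme under H0.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical example: a sample of n=25 with mean 106, drawn from a
# population with mean 100 and SD 15; alpha set to .05.
z, p = one_sample_z_test(106, mu0=100, sigma=15, n=25)
print(f"z = {z:.2f}, p = {p:.4f}")
print("reject H0" if p < 0.05 else "fail to reject H0")
```

Here z = 2.00 and p is roughly .046, so at alpha = .05 the null hypothesis would be rejected; with a smaller alpha (e.g., .01) it would not, which is exactly the trade-off between Type I error and statistical power that the presentation discusses.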