In the previous modules, you have been learning a good bit of introductory content relevant to clinical psychological science. But at presentations and when reading empirical papers, it can sometimes be tough to follow the statistical aspects of what’s going on. So this module is devoted to the beginning aspects of data analysis. Data analysis is more or less the language of science - you can’t really conclude anything from a bunch of data points without invoking some type of statistical test.

There are two main branches of statistics - descriptive and inferential. Descriptive statistics are concerned with describing the data or sample: no conclusions are drawn about relations between variables, just about the properties of the variables themselves. Inferential statistics, on the other hand, are about inferring relations among variables.

The main way we decide whether two variables are statistically significantly related is through p-values. P-values are some of the most misunderstood things in science, but hope is not lost - we have faith that you will be able to grasp them! Conventionally, if a p-value is LESS than .05, the relation is considered statistically significant. A p-value represents the *probability of observing data* (the relation between your variables) *at least that extreme* (in terms of the absolute magnitude of the relation between your variables) *under random selection of data from a null model*. A null model is one that says the correlation between your two variables is equal to 0. So the larger the correlation you observe, the smaller the probability of observing it if participant data were randomly drawn from a model in which the true correlation is 0 (the null model). All a null model says is that there is no relation (for correlations) or no group difference (for group comparisons).
So again, a p-value represents the probability of observing data at least that extreme under random selection of data from a null model. So if it’s sufficiently small (unlikely; *p* < .05), we reject the null hypothesis and term the relation statistically significant.
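To make that definition concrete, here is a minimal simulation sketch in Python with NumPy. The sample size, random seed, number of simulations, and the built-in effect of 0.5 are all arbitrary illustrative choices, not anything from a real study. We draw many samples from a null model in which the true correlation is 0, and the p-value is simply the fraction of those null correlations that are at least as extreme (in absolute value) as the "observed" one.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for reproducibility
n = 30  # hypothetical number of participants per sample

# A made-up "observed" sample in which a real relation is built in
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)  # y partly depends on x
r_obs = np.corrcoef(x, y)[0, 1]   # observed Pearson correlation

# The null model: many samples where the true correlation is exactly 0
n_sims = 10_000
null_rs = np.empty(n_sims)
for i in range(n_sims):
    x0 = rng.normal(size=n)
    y0 = rng.normal(size=n)  # generated independently of x0, so true r = 0
    null_rs[i] = np.corrcoef(x0, y0)[0, 1]

# p-value: probability of data at least that extreme under the null model
p = np.mean(np.abs(null_rs) >= abs(r_obs))
print("observed r:", r_obs)
print("simulated p-value:", p)
```

Notice that the p-value here is literally a proportion of null-model correlations, which is exactly the "probability of observing data at least that extreme under random selection of data from a null model" described above.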

First, you will watch a video warning of the danger of conflating correlation and causation. You’ve probably heard this a hundred times already - but if not, now you will have. That is followed by a video explaining how correlation can imply causation (only the first three minutes are particularly relevant). In fact, for one variable to cause another, it *must* be correlated with the thing it is purported to cause. Then there is a comic that illustrates statistical significance and how misinterpreting p-values can go awry. Note, though, that the comic still does not quite get the definition of p-values right: a p-value is not the chance that the results were obtained by chance. So don’t take that part literally, but it’s still a funny comic overall. To end, there is a short video explaining the difference between descriptive and inferential statistics.
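As a small taste of the descriptive side of that distinction, here is a sketch of descriptive statistics in Python. The variable name and the scores are entirely made up for illustration; the point is that we are only summarizing this sample, not drawing any inference about relations between variables.

```python
import numpy as np

# Hypothetical anxiety scores for 10 participants (made-up numbers)
scores = np.array([12, 15, 9, 20, 14, 11, 17, 13, 16, 10])

# Descriptive statistics: properties of the sample itself
print("mean:", scores.mean())
print("sd:", scores.std(ddof=1))  # ddof=1 gives the sample standard deviation
print("min:", scores.min(), "max:", scores.max())
```

An inferential question, by contrast, would be something like whether these scores are related to a second variable - and answering that is where p-values come in.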

I would **highly** recommend practicing the most common statistical tests you see in papers and presentations whenever possible! Trust me, it’s worth getting over your fear of stats now so you won’t be limited in your understanding of studies and presentations going forward. I will include some hands-on examples in future modules!