Research Design and Statistical Consulting
George M. Diekhoff, Ph.D.

When a Correlation is a Difference and a Difference is a Correlation

One common way of categorizing statistics involves putting some of them (like t-tests and ANOVAs) into the “significant difference tests” box, and others (like the Pearson correlation and Chi-square) into the “correlation” box. Although that can be useful, it can also be confusing, because significant difference tests can be used to establish correlational relationships between variables and correlations can be used to measure the magnitude of the difference between groups. 

Suppose you’ve found that men and women differ significantly on a measure of political conservatism. In addition to noting that there is a significant sex difference, it would also be appropriate to say that there is a significant relationship (or correlation) between sex and political conservatism. In fact, the various measures of effect strength (e.g., Cohen’s d and f statistics, the eta-squared statistic, and the omega-squared statistic) that are often calculated after finding a difference to be statistically significant are actually just measures of the strength of that correlational relationship. 
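The equivalence is easy to demonstrate numerically. The sketch below (pure Python, with made-up conservatism scores as a stand-in for real data) runs a pooled-variance t-test on two groups and then computes the Pearson correlation between group membership (coded 0/1) and the scores — the point-biserial correlation. The squared correlation equals the eta-squared effect size recoverable from the t statistic, r² = t²/(t² + df):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def two_sample_t(a, b):
    """Pooled-variance (equal-variance) independent-samples t statistic."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    df = na + nb - 2
    pooled_var = (ssa + ssb) / df
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se, df

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical conservatism scores for two groups
women = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]
men = [5.2, 4.9, 5.8, 5.5, 4.7, 5.1]

t, df = two_sample_t(men, women)

# Correlate the scores with a 0/1 group code: the point-biserial correlation
group = [1] * len(men) + [0] * len(women)
scores = men + women
r = pearson_r(group, scores)

# Eta-squared from the t-test and r-squared from the correlation agree
eta_sq_from_t = t ** 2 / (t ** 2 + df)
print(round(r ** 2, 6), round(eta_sq_from_t, 6))
```

The identity r² = t²/(t² + df) holds exactly for a two-group pooled-variance t-test, which is why reporting an effect size after a significant t-test amounts to reporting the strength of the sex–conservatism correlation.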

Suppose that you’ve found a significant correlation between years of education and income. It is extremely likely that if you used a median split to divide your sample into two income groups, one “low income” and the other “high income,” you would find that the two groups differ significantly when you used a t-test to compare their average educational levels. So, although it’s sometimes convenient to divide statistics into “difference tests” and “correlations,” the fact is that the primary difference between the statistics in these categories is semantic, not mathematical.