The seven deadly sins of statistical misinterpretation, and how to avoid them

26/4/2017

Winnifred Louis and Cassandra Chapman, The University of Queensland
Where are the error bars? (Shutterstock)
Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.
1. Assuming small differences are meaningful
 
Many of the daily fluctuations in the stock market represent chance rather than anything meaningful. Differences in polls when one party is ahead by a point or two are often just statistical noise.

You can avoid drawing faulty conclusions about the causes of such fluctuations by demanding to see the “margin of error” relating to the numbers.

If the difference is smaller than the margin of error, there is likely no meaningful difference, and the variation is probably just down to random fluctuations.
Error bars illustrate the degree of uncertainty in a score. When such margins of error overlap, the difference is likely to be due to statistical noise.
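As a rough illustration, here is a short Python sketch with hypothetical poll numbers showing the overlap check in action: two estimates whose margins of error overlap should not be read as a meaningful gap.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: Party A on 52%, Party B on 50%, 1,000 respondents each.
n = 1000
p_a, p_b = 0.52, 0.50
moe_a, moe_b = margin_of_error(p_a, n), margin_of_error(p_b, n)

gap = abs(p_a - p_b)
print(f"Gap: {gap:.3f}, margins of error: {moe_a:.3f} and {moe_b:.3f}")
if gap < moe_a + moe_b:
    print("The intervals overlap: the 2-point lead may just be statistical noise.")
```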
2. Equating statistical significance with real-world significance
 
We often hear generalisations about how two groups differ in some way, such as that women are more nurturing while men are physically stronger.

These generalisations often draw on stereotypes and folk wisdom, but they ignore the similarities between people in the two groups and the variation among people within each group.

If you pick two men at random, there is likely to be quite a lot of difference in their physical strength. And if you pick one man and one woman, they may end up being very similar in terms of nurturing, or the man may be more nurturing than the woman.

You can avoid this error by asking for the “effect size” of the differences between groups. This is a measure of how much the average of one group differs from the average of another.

If the effect size is small, then the two groups are very similar. Even if the effect size is large, the two groups will still likely have a great deal of variation within them, so not all members of one group will be different from all members of another group.
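One common way to quantify this is Cohen's d, the difference in the two means divided by the pooled standard deviation. Here is a rough Python sketch using made-up scores for two hypothetical groups:

```python
import statistics

def cohens_d(group_1, group_2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group_1), len(group_2)
    m1, m2 = statistics.mean(group_1), statistics.mean(group_2)
    v1, v2 = statistics.variance(group_1), statistics.variance(group_2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical scores on some trait for two groups.
group_a = [5.1, 6.2, 4.8, 5.9, 6.5, 5.4, 4.9, 6.1]
group_b = [5.0, 5.8, 4.6, 5.5, 6.0, 5.2, 4.7, 5.6]
d = cohens_d(group_a, group_b)
print(f"Cohen's d = {d:.2f}")  # roughly 0.2 is "small", 0.5 "medium", 0.8 "large"
```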

3. Neglecting to look at extremes

The flipside of effect size is relevant when the thing that you’re focusing on follows a “normal distribution” (sometimes called a “bell curve”). This is where most people are near the average score and only a tiny group is well above or well below average.

When that happens, a small change in performance for the group produces a difference that means nothing for the average person (see point 2) but that changes the character of the extremes more radically.

Avoid this error by reflecting on whether you’re dealing with extremes or not. When you’re dealing with average people, small group differences often don’t matter. When you care a lot about the extremes, small group differences can matter heaps.
When two populations follow a normal distribution, the differences between them will be more apparent at the extremes than in the averages.
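To see why, here is a rough Python sketch with two hypothetical bell curves whose averages differ by only a small amount: near the middle the difference is barely noticeable, but well out in the tail one group becomes far more common than the other.

```python
from statistics import NormalDist

# Two hypothetical populations: same spread, averages differing by a "small" 0.3 SD.
pop_a = NormalDist(mu=100.0, sigma=15.0)
pop_b = NormalDist(mu=104.5, sigma=15.0)  # 0.3 SD higher on average

cutoff = 145.0  # an "extreme" score, 3 SD above pop_a's mean

share_a = 1 - pop_a.cdf(cutoff)
share_b = 1 - pop_b.cdf(cutoff)

print(f"Above {cutoff}: {share_a:.4%} of A vs {share_b:.4%} of B")
print(f"Ratio at the extreme: {share_b / share_a:.1f}x")
```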
4. Trusting coincidence

Did you know there’s a correlation between the number of people who drowned each year in the United States by falling into a swimming pool and the number of films Nicolas Cage appeared in?
But is there a causal link? (tylervigen.com)
If you look hard enough you can find interesting patterns and correlations that are merely due to coincidence.

Just because two things happen to change at the same time, or in similar patterns, does not mean they are related.

Avoid this error by asking how reliable the observed association is. Is it a one-off, or has it happened multiple times? Can future associations be predicted? If you have seen it only once, then it is likely to be due to random chance.
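You can see how easily chance patterns appear with a rough Python sketch: generate a couple of hundred completely unrelated random "trends" and the best match among them can still look impressively correlated.

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation, computed directly from the definition."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 200 completely unrelated random "trends", each observed over 10 years.
years = 10
series = [[random.random() for _ in range(years)] for _ in range(200)]

# Compare the first series against all the others and keep the best match.
best = max(series[1:], key=lambda s: abs(pearson(series[0], s)))
print(f"Strongest correlation found by chance: r = {pearson(series[0], best):.2f}")
```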
 
5. Getting causation backwards

When two things are correlated – say, unemployment and mental health issues – it might be tempting to see an “obvious” causal path – say that mental health problems lead to unemployment.

But sometimes the causal path goes in the other direction, such as unemployment causing mental health issues.

You can avoid this error by remembering to think about reverse causality when you see an association. Could the influence go in the other direction? Or could it go both ways, creating a feedback loop?
 
6. Forgetting to consider outside causes

People often fail to evaluate possible “third factors”, or outside causes, that may create an association between two things because both are actually outcomes of the third factor.

For example, there might be an association between eating at restaurants and better cardiovascular health. That might lead you to believe there is a causal connection between the two.

However, it might turn out that those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, and can also afford better health care, and it’s the health care that affords better cardiovascular health.

You can avoid this error by remembering to think about third factors when you see a correlation. If you’re following up on one thing as a possible cause, ask yourself what, in turn, causes that thing? Could that third factor cause both observed outcomes?
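As a rough illustration, the following Python sketch simulates a hypothetical third factor (income) that drives both dining out and heart health; the two outcomes end up correlated even though neither causes the other.

```python
import random
import statistics

random.seed(42)

n = 5000
# Hypothetical third factor: household income (arbitrary units).
income = [random.gauss(50, 15) for _ in range(n)]

# Both outcomes depend on income plus independent noise; there is no direct link between them.
restaurant_meals = [0.2 * inc + random.gauss(0, 3) for inc in income]
heart_health     = [0.2 * inc + random.gauss(0, 3) for inc in income]

r = statistics.correlation(restaurant_meals, heart_health)  # Python 3.10+
print(f"Correlation between dining out and heart health: r = {r:.2f}")
# A clearly positive r appears even though neither variable causes the other.
```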
 
7. Deceptive graphs

A lot of mischief occurs in the scaling and labelling of the vertical axis on graphs. The labels should show the full meaningful range of whatever you’re looking at.

But sometimes the graph maker chooses a narrower range to make a small difference or association look more impactful. On a scale from 0 to 100, two columns might look the same height. But if you graph the same data showing only the range from 52.5 to 56.5, they might look drastically different.

You can avoid this error by taking care to note the graph’s labels along the axes. Be especially sceptical of unlabelled graphs.
Graphs can tell a story – making differences look bigger or smaller depending on scale.
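As a rough illustration, here is a Python sketch (using matplotlib and two made-up values) that plots the same data twice: once over the full 0 to 100 range and once over a truncated range from 52.5 to 56.5.

```python
import matplotlib.pyplot as plt

# The same two hypothetical values: nearly identical on a 0-100 scale.
labels = ["Group A", "Group B"]
values = [53.0, 56.0]

fig, (ax_full, ax_zoomed) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(labels, values)
ax_full.set_ylim(0, 100)          # full meaningful range: bars look about the same
ax_full.set_title("Axis from 0 to 100")

ax_zoomed.bar(labels, values)
ax_zoomed.set_ylim(52.5, 56.5)    # truncated axis: the same gap looks dramatic
ax_zoomed.set_title("Axis from 52.5 to 56.5")

plt.tight_layout()
plt.show()
```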
- Winnifred Louis and Cassandra Chapman
***
This article was originally published on The Conversation. Read the original article.