“Analysing data correctly, to identify a “difference in differences”, is a little tricksy”.

We’ve known for some time that biologists (not just neuroscientists) need better training in statistics, if only to level the playing field.

Originally shared by Andrew T. Lane

5 Responses to “Analysing data correctly, to identify a “difference in differences”, is a little tricksy”.

  1. Matt Kuenzel says:


    Ok, so how should that type of data be analyzed? I’ve got one foolproof method to calculate the significance of any experimental results like this: simulate it a million times on your computer and then count the number of times that the simulated results are at least as far from the null result as the actual measured data.
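
    A minimal sketch of the resampling idea described above, for the common case of comparing two group means: reshuffle the group labels under the null hypothesis of no effect, and count how often the reshuffled difference is at least as extreme as the one actually observed. The data arrays and the iteration count below are placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # placeholder measurements for two groups (not real data)
        control = np.array([5.1, 4.8, 5.4, 5.0, 4.9, 5.3])
        treated = np.array([4.2, 4.5, 4.0, 4.6, 4.3, 4.1])

        observed = abs(treated.mean() - control.mean())

        pooled = np.concatenate([control, treated])
        n_sims = 100_000  # a million, as suggested above, also works; fewer keeps the sketch quick
        count = 0
        for _ in range(n_sims):
            rng.shuffle(pooled)  # relabel the data under the null: group membership is arbitrary
            sim = abs(pooled[:len(control)].mean() - pooled[len(control):].mean())
            if sim >= observed:  # at least as far from the null as the actual data
                count += 1

        print("Monte Carlo p-value:", count / n_sims)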

  2. Rajini Rao says:


    Hi Matt Kuenzel, here is the relevant part from the Perspective paper (this week in Nature Neuroscience): “The researchers who made these statements wanted to claim that one effect (for example, the training effect on neuronal activity in mutant mice) was larger or smaller than the other effect (the training effect in control mice). To support this claim, they needed to report a statistically significant interaction (between amount of training and type of mice), but instead they reported that one effect was statistically significant, whereas the other effect was not. Although superficially compelling, the latter type of statistical reasoning is erroneous because the difference between significant and not significant need not itself be statistically significant [1]. Consider an extreme scenario in which training-induced activity barely reaches significance in mutant mice (for example, P = 0.049) and barely fails to reach significance for control mice (for example, P = 0.051). Despite the fact that these two P values lie on opposite sides of 0.05, one cannot conclude that the training effect for mutant mice differs statistically from that for control mice. That is, as famously noted by Rosnow and Rosenthal [2], “surely, God loves the 0.06 nearly as much as the 0.05”. Thus, when making a comparison between two effects, researchers should report the statistical significance of their difference rather than the difference between their significance levels.”
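
    To make the quoted point concrete, here is a short Python sketch (an editor's illustration, not from the paper) using made-up effect sizes and standard errors: the two effects land on opposite sides of p = 0.05, yet a test of their difference is nowhere near significant.

        import numpy as np
        from scipy import stats

        # invented training effects (arbitrary units) and their standard errors
        effect_mutant, se_mutant = 1.97, 1.0    # z = 1.97 -> p ≈ 0.049
        effect_control, se_control = 1.95, 1.0  # z = 1.95 -> p ≈ 0.051

        p_mutant = 2 * stats.norm.sf(abs(effect_mutant / se_mutant))
        p_control = 2 * stats.norm.sf(abs(effect_control / se_control))

        # the correct comparison: test the difference between the two effects
        diff = effect_mutant - effect_control
        se_diff = np.sqrt(se_mutant**2 + se_control**2)
        p_diff = 2 * stats.norm.sf(abs(diff / se_diff))

        print(f"mutant training effect:  p = {p_mutant:.3f}  (just significant)")
        print(f"control training effect: p = {p_control:.3f}  (just not significant)")
        print(f"difference of effects:   p = {p_diff:.3f}  (not remotely significant)")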


    Re. simulation, it is a great way of checking significance, provided one has a reasonable way of simulating a complex biological event.


    P.S. I could send you the paper (pdf) for your input if there was a way (old-fashioned email?). The link is no good unless you have a subscription.

  3. Matt Kuenzel says:


    Thanks, I will post my email address for you. In the scenario they discuss, a simple way to simulate would be something like this:


    X = observed normal firing rate


    G = Gaussian-distributed random noise with mean 0 and variance the same as observed


    choose N numbers to represent the firing rate (X + G) of each of N untreated mice (group 1)


    choose another N numbers to represent the firing rate ( (X * 0.85) + G) of each of N treated mice (group 2)


    Now take the mean of group 1 and compare it to the mean of group 2. If the difference is 30% or greater, call this simulated result a LARGE drop; otherwise call it a SMALL drop.


    Do this many times. The ratio LARGE/(LARGE + SMALL) is the probability that the result (30% drop) would occur by chance alone.
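
    Here is a minimal Python sketch of the procedure just described; X, the noise standard deviation, N, and the number of repetitions are placeholder values for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        X = 10.0      # observed normal firing rate (placeholder)
        sigma = 2.0   # noise s.d. matched to the observed variability (placeholder)
        N = 10        # mice per group
        n_sims = 100_000

        large = 0
        for _ in range(n_sims):
            group1 = X + rng.normal(0.0, sigma, N)         # untreated mice: X + G
            group2 = 0.85 * X + rng.normal(0.0, sigma, N)  # treated mice: (X * 0.85) + G
            drop = (group1.mean() - group2.mean()) / group1.mean()
            if drop >= 0.30:  # a drop of 30% or more counts as LARGE
                large += 1

        # every repetition is either LARGE or SMALL, so LARGE/(LARGE + SMALL) = large/n_sims
        print("fraction of runs with a drop of 30% or more:", large / n_sims)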

  4. TIAN XUE says:


    Really like it. Check out my ready-to-be-published paper; we use the right method for comparison. Sweet 🙂

  5. Rajini Rao says:


    Congrats on the soon-to-be-published paper, TIAN XUE; I’ll keep an eye out for it (I’m betting it is on the neuroscience of vision, right?).
