False Discovery Rates

Data Skeptic

Episode | Podcast

Date: Fri, 28 Sep 2018 15:09:18 +0000

<p>A false discovery rate (FDR) is a methodology that can be useful when struggling with the problem of multiple comparisons.</p>

<p>In any experiment, if the experimenter checks more than one dependent variable, then they are making multiple comparisons. Naturally, if you make enough comparisons, you will eventually find some correlation purely by chance.</p>

<p>Classically, people applied the <a href="https://dataskeptic.com/blog/episodes/2016/bonferroni-correction">Bonferroni Correction</a>. In essence, this procedure dictates that you should lower your <a href="https://dataskeptic.com/blog/episodes/2014/p-values">p-value</a> threshold (raise your standard of evidence) in proportion to the number of comparisons you're making. While effective, this methodology is strict about preventing false positives (Type I errors). You aren't likely to find evidence for a hypothesis that is actually false using Bonferroni. However, this zeal to avoid Type I errors may introduce some Type II errors: there could be hypotheses that are actually true which you fail to notice.</p>

<p>This episode covers an alternative known as false discovery rates. The essence of this method is to make more specific, per-hypothesis adjustments to your expectation of what p-value counts as sufficient evidence.</p>
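<p>To make the contrast concrete, here is a minimal sketch (not from the episode) comparing the Bonferroni correction with the Benjamini-Hochberg step-up procedure, a standard way of controlling the FDR. The p-values are invented for illustration; at the same overall significance level, the FDR procedure rejects more hypotheses than Bonferroni does.</p>

```python
def bonferroni(p_values, alpha=0.05):
    """Reject each hypothesis whose p-value clears alpha divided by
    the number of comparisons (the Bonferroni-corrected threshold)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]


def benjamini_hochberg(p_values, alpha=0.05):
    """Reject hypotheses via the Benjamini-Hochberg step-up procedure,
    which controls the false discovery rate at level alpha."""
    m = len(p_values)
    # Sort indices by p-value so we can rank the hypotheses.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis ranked at or below k_max.
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected


# Illustrative (made-up) p-values from eight hypothetical comparisons.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(sum(bonferroni(p)))           # Bonferroni rejects 1 hypothesis
print(sum(benjamini_hochberg(p)))   # Benjamini-Hochberg rejects 2
```

<p>Note the difference in thresholds: Bonferroni holds every comparison to the same strict bar (alpha / m), while Benjamini-Hochberg lets the bar rise with the rank of each p-value, trading a controlled fraction of false discoveries for more power.</p>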