#statstab #393 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements [actual post]
Thoughts: #392 has the comments, but this is where the magic happens.
#modelselection #modelcomparison #variance #effectsize #tutorial

When designing a scientific experiment, a key decision is the sample size required for the results to be meaningful.
How many cells do I need to measure? How many people do I interview? How many patients do I try my new drug on?
This is especially important for quantitative studies, where we use statistics to determine whether a treatment or condition has an effect. Indeed, when we test a drug on a (small) number of patients, we do so in the hope that our results generalise to any patient, because it would be impossible to test the drug on everyone.
The solution is to perform a "power analysis": a calculation that tells us whether, given our experimental design, the statistical test we are using is able to detect an effect of a certain magnitude, if that effect is really there. In other words, it tells us whether the experiment we're planning could give us meaningful results.
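As a concrete (hypothetical) illustration, base R's power.t.test() can solve for the sample size needed to detect a given standardized effect:

```r
# Minimal sketch, base R: sample size for a two-sample t-test, assuming
# we want 80% power to detect a (hypothetical) biologically meaningful
# effect of d = 0.5 at alpha = 0.05.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# -> roughly n = 64 per group
```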
But, as I said, in order to do a power analysis we need to decide what size of effect we would like to see. So... do scientists actually do that?
We explored this question in the context of the chronic variable stress literature.
We found that only a few studies give a clear justification for the sample size used, and in those that do, only a very small fraction used a biologically meaningful effect size as part of the sample size calculation. We discuss challenges around identifying a biologically meaningful effect size and ways to overcome them.
Read more here!
https://physoc.onlinelibrary.wiley.com/doi/10.1113/EP092884
#statstab #366 Type M error might explain Weisburd’s Paradox
Thoughts: Learn about type M error while you learn about the issues in criminology!
#replication #typeM #typeS #QRPs #paradox #power #effectsize #errorrate
https://sites.stat.columbia.edu/gelman/research/published/weisburd_28.05.2017.pdf
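If you want to see type M and type S errors concretely, a short simulation (with assumed numbers, in the spirit of Gelman and Carlin's retrodesign idea) does it:

```r
# Simulation sketch (assumed numbers): a true effect of 2 estimated with
# standard error 8, i.e. a badly underpowered study.
set.seed(1)
true <- 2; se <- 8
est <- rnorm(1e5, mean = true, sd = se)  # sampling distribution of estimates
sig <- abs(est / se) > 1.96              # the "significant" results
mean(abs(est[sig])) / true               # type M: exaggeration ratio (~9)
mean(est[sig] < 0)                       # type S: wrong-sign rate (~0.24)
```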
#statstab #356 Measures of Heterogeneity
Thoughts: An overview of different measures, I^2, Q, H^2, and associated R code.
#metaanalysis #heterogeneity #effectsize #variability #error
https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/heterogeneity.html
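If you want them all from one fit, metafor reports Q, I^2, and H^2 together; a sketch using the package's bundled BCG data:

```r
# Sketch with the metafor package and its built-in BCG vaccine dataset.
library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)                  # log risk ratios + variances
fit <- rma(yi, vi, data = dat, method = "DL")  # DerSimonian-Laird model
fit$QE  # Cochran's Q
fit$I2  # I^2 (% of variability attributable to heterogeneity)
fit$H2  # H^2
```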
New blog post: Your Study Is Too Small (If You Care About Practically Significant Effects)
#effectsize #precision #poweranalysis #research #Psychology #MCID #SESOI #samplesize
#statstab #353 The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis
Thoughts: A seminal paper on "post hoc" power calculations.
#power #QRPs #NHST #posthoc #samplesize #effectsize
https://www.tandfonline.com/doi/abs/10.1198/000313001300339897
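Their central point fits in a few lines: for a two-sided z-test, observed power is a deterministic function of the p-value, so it adds no information; a sketch:

```r
# Sketch: "post hoc" (observed) power for a two-sided z-test is just a
# transformation of the p-value -- p = 0.05 maps to power of about 0.50.
p <- 0.05
z <- qnorm(1 - p / 2)                # observed z-statistic for this p
pnorm(z - 1.96) + pnorm(-z - 1.96)   # observed power at alpha = .05 -> ~0.50
```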
#statstab #347 A Simple Method for Removing Bias From a Popular Measure of Standardized Effect Size: Adjusted Partial Eta Squared
Thoughts: It's epsilon^2, but easier to compute by hand.
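For reference, the adjusted measure can be written in terms of the F statistic and its degrees of freedom; a sketch with hypothetical inputs:

```r
# Sketch: adjusted partial eta^2 (equivalently, partial epsilon^2)
# computed from an ANOVA table's F statistic and degrees of freedom.
adj_pes <- function(fval, df_effect, df_error) {
  df_effect * (fval - 1) / (df_effect * fval + df_error)
}
adj_pes(fval = 9, df_effect = 1, df_error = 38)  # hypothetical result: ~0.17
```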
An overview of 67 different effect size estimators, including confidence intervals, for two-group comparisons:
https://journals.sagepub.com/doi/full/10.1177/25152459251323186
The authors have also developed a Shiny web app to evaluate these.
#statstab #305 The Fallacy of Employing Standardized Regression Coefficients and Correlations as Measures of Effect
Thoughts: Everyone loves effect sizes, but mind how you compute and interpret them.
#statstab #294 So You Think You Can Graph - effectiveness of presenting the magnitude of an effect
Thoughts: Competition in the many ways to display effect magnitude. Some cool ideas.
#dataviz #stats #effectsize #effects #plots #figures #cohend
https://amplab.colostate.edu/SYTYCG_S1/SYTYCG_Season1_Results.html
#statstab #281 Correcting Cohen’s d for Measurement Error (A Method!)
Thoughts: Scale reliability can be incorporated into the effect size computation (i.e., to remove attenuation).
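The classic correction for attenuation is one line: divide the observed d by the square root of the measure's reliability (the paper may add refinements, so treat this as the basic idea):

```r
# Sketch: disattenuating Cohen's d for measurement error using the
# classic correction d_true = d_obs / sqrt(reliability).
d_obs <- 0.40  # observed standardized mean difference (hypothetical)
rxx   <- 0.80  # scale reliability, e.g. coefficient alpha (hypothetical)
d_obs / sqrt(rxx)  # -> ~0.45
```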
An even better solution would be a table where you could select which type of effect #effectsize measure to show (calculated using e.g. these calculations https://www.escal.site/). If anyone has the skills to implement that in #wikipedia #markup, please do so!
It always takes me some minutes to look up the interpretation guidelines for various effect size measures (yes, I know the rules of thumb are somewhat arbitrary). Today I edited Wikipedia to show three different guidelines for four different measures in the same table. Hopefully this can save some time for other researchers.
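For quick reference, the most widely cited rules of thumb are Cohen's: d of 0.2/0.5/0.8 and r of 0.1/0.3/0.5 for small/medium/large, with the usual caveat that these are field-agnostic defaults rather than laws.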
#statstab #265 The limited epistemic value of ‘variation analysis’ (R^2)
Thoughts: Interesting post and comments on what we can and can't say from an R^2 metric.
#stats #r2 #effectsize #variance #modelcomparison #models #causalinference
https://larspsyll.wordpress.com/2023/05/23/the-limited-epistemic-value-of-variation-analysis/
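One limitation is easy to demonstrate: R^2 is symmetric in x and y, so it says nothing about causal direction; a sketch:

```r
# Sketch: R^2 is direction-blind -- regressing y on x and x on y gives
# the same value (in simple regression, R^2 = cor(x, y)^2).
set.seed(42)
x <- rnorm(200)
y <- 0.6 * x + rnorm(200)
summary(lm(y ~ x))$r.squared
summary(lm(x ~ y))$r.squared
cor(x, y)^2  # identical to both
```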
#statstab #260 Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data
Thoughts: "A_w and d_r were generally robust to these violations"
#robust #effectsize #ttest #2groups #metaanalysis #assumptions #cohend
#statstab #256 Rule of three (95%CI for no event)
Thoughts: Sometimes you have 0 recorded events, so how do you compute a confidence interval? Using the rule of three!
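The rule itself is one line of algebra: with 0 events in n trials, the exact one-sided 95% upper bound is 1 - 0.05^(1/n), and since -ln(0.05) is about 3, this is approximately 3/n:

```r
# Sketch: upper 95% bound for a proportion when 0 events occur in n trials.
n <- 100
1 - 0.05^(1 / n)  # exact one-sided bound: ~0.0295
3 / n             # rule of three:          0.03
```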
#statstab #254 Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations
Thoughts: I share tutorial papers, as people resonate with different writing styles and explanations.
#statstab #243 Approaches to Calculating Number Needed to Treat (NNT) with Meta-Analysis
Thoughts: People love a one-number summary, and NNT has won out in medical/clinical research. So, here are some ways to compute them (for what they're worth).
#NNT #metaanalysis #R #effectsize #statistics #clinical #clinicaltrials
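At its core, NNT is the reciprocal of the absolute risk difference; converting a pooled relative effect requires assuming a control event rate, e.g.:

```r
# Sketch: NNT from a (hypothetical) meta-analyzed risk ratio, given an
# assumed control event rate (ACR) -- the choice of ACR drives the answer.
rr  <- 0.75            # pooled risk ratio (hypothetical)
acr <- 0.20            # assumed control-group event rate
arr <- acr * (1 - rr)  # absolute risk reduction
ceiling(1 / arr)       # NNT -> 20
```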
#statstab #230 Power and Sample Size Determination
Thoughts: Frequentist power is a complicated and non-intuitive thing, so it's good to read various tutorials/papers until you find one that sticks.
#stats #poweranalysis #power #NHST #effectsize
https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_power/bs704_power_print.html
#statstab #226 Standardization and other approaches to meta-analyze differences in means
Thoughts: "standardization after meta-analysis...can be used to assess magnitudes of a meta-analyzed mean effect"