
CURATOR
A pinboard by
Miao Yu

I am a Postdoctoral Fellow in Prof. Pawliszyn's group at the University of Waterloo. My research interests are environmental chemistry and environmental data analysis related to public health.

I received my Ph.D. in environmental science from the Chinese Academy of Sciences in 2016. I enjoy data-driven research and develop related software packages and applications. I also run an online Chinese column, with more than 7,000 followers, focused on interesting scientific papers and data analysis.

PINBOARD SUMMARY

Papers related to common issues in the scientific community.

This pinboard highlights paper stories and common issues in research, such as p-values, reproducibility, and data visualization.

4 ITEMS PINNED

Trends in P Value, Confidence Interval, and Power Analysis Reporting in Health Professions Education Research Reports: A Systematic Appraisal.

Abstract: To characterize reporting of P values, confidence intervals (CIs), and statistical power in health professions education research (HPER) through manual and computerized analysis of published research reports. The authors searched PubMed, Embase, and CINAHL on May 7, 2016, for comparative research studies. For manual analysis of abstracts and main texts, they randomly sampled 250 HPER reports published in 1985, 1995, 2005, and 2015, and 100 biomedical research reports published in 1985 and 2015. Automated computerized analysis of abstracts included all HPER reports published 1970-2015. In the 2015 HPER sample, P values were reported in 69/100 abstracts and 94 main texts. CIs were reported in 6 abstracts and 22 main texts. Most P values (≥ 77%) were ≤ .05. Across all years, 60/164 two-group HPER studies had ≥ 80% power to detect a between-group difference of 0.5 standard deviations. From 1985 to 2015, the proportion of HPER abstracts reporting a CI did not change significantly (odds ratio [OR] 2.87; 95% CI 1.04, 7.88), whereas that of main texts reporting a CI increased (OR 1.96; 95% CI 1.39, 2.78). Comparison with biomedical studies revealed similar reporting of P values, but more frequent use of CIs in biomedicine. Automated analysis of 56,440 HPER abstracts found 14,867 (26.3%) reporting a P value, 3,024 (5.4%) reporting a CI, and increased reporting of P values and CIs from 1970 to 2015. P values are ubiquitous in HPER, CIs are rarely reported, and most studies are underpowered. Most reported P values would be considered statistically significant.

Pub.: 24 Jun '17, Pinned: 28 Aug '17
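
The power criterion in this abstract ("≥ 80% power to detect a between-group difference of 0.5 standard deviations") corresponds to a standard two-sample t-test power calculation. A minimal sketch in Python, assuming the statsmodels package is available; the n = 30 example is an illustration, not a figure from the study:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed for 80% power at alpha = .05
# when the true between-group difference is d = 0.5 SD.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power: {n_per_group:.1f}")  # about 64

# Conversely, the power a hypothetical study of 30 per group achieves.
power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power with n = 30 per group: {power:.2f}")  # about 0.48

Studies much smaller than roughly 64 participants per group cannot reach the 80% benchmark for a 0.5 SD effect, which is one way to see why so many of the sampled two-group studies were underpowered.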

What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.

Abstract: A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that in only 36% of the studies were the original results replicated. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match with statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals, and a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.

Pub.: 31 Jul '16, Pinned: 28 Aug '17
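
The prediction-interval argument can be sketched concretely. One standard construction for correlation-type effect sizes, which matches the effect-size metric used in the Reproducibility Project, applies the Fisher z-transform and widens the interval as either sample shrinks. A minimal sketch in Python; the r = .40, n = 30 / n = 80 numbers are hypothetical illustrations, not values from the study:

import math

def replication_prediction_interval(r_orig, n_orig, n_rep, z_crit=1.96):
    # 95% prediction interval for the replication correlation,
    # given the original correlation and both sample sizes.
    z = math.atanh(r_orig)  # Fisher z-transform of the original r
    se = math.sqrt(1 / (n_orig - 3) + 1 / (n_rep - 3))
    # Back-transform the interval endpoints to the correlation scale.
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical example: the original study found r = .40 with n = 30,
# and the replication used n = 80.
lo, hi = replication_prediction_interval(0.40, 30, 80)
print(f"95% PI for the replication effect: [{lo:.2f}, {hi:.2f}]")  # roughly [-0.01, 0.70]

The interval spans from nearly zero to a large effect, illustrating the abstract's second point: an imprecise original study yields a prediction interval so wide that even a "successful" replication says little about the size of the true effect.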