As part of our “Tools of the Trade” blog series this summer, we’re here to help you further your research by providing the resources you need to focus on your scholarship, write or revise your work, and prepare it for publication.

Regardless of discipline, research is only as sound as the manner in which it was conducted. That’s why our Open Access journal, Collabra: Psychology, has an entire section dedicated to the study of Methodology and Research Practice in Psychology. For those conducting research this summer, and especially those in psychological fields, we’ve rounded up the following articles to help inform your methodological approaches, data transparency, and replicability practices.

Making Your Research Transparent (Unlike a Car Salesperson!)

Quality Uncertainty Erodes Trust in Science by Simine Vazire

When consumers of science (readers and reviewers) lack relevant details about a study’s design, data, and analyses, they cannot adequately evaluate its strength. A car whose carburetor is duct-taped to the rest of the car might work perfectly fine, but the buyer has a right to know about the duct-taping. Without high levels of transparency in scientific publications, consumers of scientific manuscripts are in a position similar to that of used-car buyers: they cannot reliably tell the difference between lemons and high-quality findings. The solution is to increase transparency and give consumers of scientific research the information they need to evaluate it accurately. Transparency also encourages researchers to be more careful in how they conduct their studies and write up their results.

A New Standard for Replicating Your Research

A New Replication Norm for Psychology by Etienne P. LeBel

In recent years, there has been growing concern about the replicability of findings in psychology, including a mounting number of prominent findings that have failed to replicate in high-powered independent replication attempts. In the face of this replicability “crisis of confidence”, several initiatives have been implemented to increase the reliability of empirical findings. In this article, LeBel proposes a new replication norm that aims to further boost the dependability of findings in psychology. Paralleling the extant social norm that researchers should peer review about three times as many articles as they themselves publish per year, the new replication norm states that researchers should aim to independently replicate important findings in their own research areas in proportion to the number of original studies they themselves publish per year (e.g., a 4:1 original-to-replication studies ratio).

Giving Due Attention to the Pitfalls of False Negatives

Too Good to be False: Nonsignificant Results Revisited by Chris H. J. Hartgerink, et al.

Concern about false positives has overshadowed concern about false negatives in recent debates in psychology. This may be unwarranted, since reported statistically nonsignificant findings may simply be “too good to be false.” The article examines evidence for false negatives in nonsignificant results in three different ways, arguing that the failure to address false negatives wastes research resources and stifles the scientific discovery process.
