People have been shown to link particular sounds with particular shapes. For instance, the round-sounding nonword bouba tends to be associated with curved shapes, whereas the sharp-sounding nonword kiki is deemed to be related to angular shapes. People’s tendency to associate sounds and shapes has been observed across different languages. In the present study, we reexamined the claim by Hung, Styles, and Hsieh (2017) that such sound–shape mappings can occur before an individual becomes aware of the visual stimuli. More precisely, we replicated their first experiment, in which congruent and incongruent stimuli (e.g., bouba presented in a round shape or an angular shape, respectively) were rendered invisible through continuous flash suppression. The results showed that congruent combinations, on average, broke suppression faster than incongruent combinations, thus providing converging evidence for Hung and colleagues’ assertions. Collectively, these findings now provide a solid basis from which to explore the boundary conditions of the effect.
Science is self-correcting, or so the adage goes, but to what extent is that indeed the case? Answering this question requires careful consideration of the various approaches to achieving the collective goal of self-correction. One of the most straightforward mechanisms is individual self-correction: researchers rectifying their own mistakes by publishing a correction notice. Although it offers an efficient route to correcting the scientific record, it has received little to no attention from a metascientific point of view. We aim to fill this void by analysing the content of correction notices published from 2010 until 2018 in the three psychology journals featuring the highest number of corrections over that timespan based on the Scopus database (i.e. Psychological Science with N = 58, Frontiers in Psychology with N = 99 and Journal of Affective Disorders with N = 57). More concretely, we examined which aspects of the original papers were affected (e.g. hypotheses, data analyses, and metadata such as author order, affiliations and funding information) as well as the perceived implications for the papers' main findings. Our exploratory analyses showed that many corrections involved inconsequential errors. Furthermore, authors rarely revised their conclusions, even though several corrections concerned changes to the results. We conclude with a discussion of current policies, and suggest ways to improve upon the present situation by (i) preventing mistakes, and (ii) transparently rectifying those mistakes that do find their way into the literature.
Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not yet common practice. The reasons for not sharing data are myriad: Some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors who do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker, and Molenaar (2011) examined errors that can be discovered based on the published manuscript only, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set on data sharing upon request for reanalysis, originating from Vanpaemel et al. (2015), to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
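A consistency error of the kind statcheck flags arises when a reported test statistic and its reported p-value cannot both be correct: recomputing the p-value from the statistic and comparing it, at the reported rounding precision, with the printed p reveals the mismatch. The sketch below illustrates that idea for a two-sided z test only, using the Python standard library; statcheck itself is an R package that additionally handles t, F, χ², and r tests, so this is an illustrative simplification, not its implementation.

```python
from statistics import NormalDist

def recomputed_p(z: float) -> float:
    """Two-sided p-value implied by a reported z statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

def is_consistent(z: float, reported_p: float, decimals: int = 3) -> bool:
    """A reported p is consistent if it equals the recomputed
    p-value once both are rounded to the reported precision."""
    return round(recomputed_p(z), decimals) == round(reported_p, decimals)

# z = 2.10 implies p ≈ .036, so a report of p = .036 is consistent...
print(is_consistent(2.10, 0.036))  # True
# ...whereas a report of p = .020 for the same z is not.
print(is_consistent(2.10, 0.020))  # False
```

The rounding step matters: authors typically report p to two or three decimals, so an exact-equality check without rounding would flag nearly every paper as inconsistent.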