The explosive growth of the human neuroimaging literature has led to major advances in understanding of human brain function, but has also made aggregation and synthesis of neuroimaging findings increasingly difficult. Here we describe and validate an automated brain mapping framework that uses text mining, meta-analysis and machine learning techniques to generate a large database of mappings between neural and cognitive states. We demonstrate the capacity of our approach to automatically conduct large-scale, high-quality neuroimaging meta-analyses, address long-standing inferential problems in the neuroimaging literature, and support accurate ‘decoding’ of broad cognitive states from brain activity in both entire studies and individual human subjects. Collectively, our results validate a powerful and generative framework for synthesizing human neuroimaging data on an unprecedented scale.
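The kind of reverse inference such a framework supports can be illustrated with Bayes' rule applied to a term-by-study database: given that a brain location is active, how probable is it that the study involved a particular cognitive term? Below is a toy sketch; the eight-study data are fabricated for illustration and the uniform prior is an assumption, not the framework's actual database:

```python
import numpy as np

# Hypothetical toy data: whether each of 8 studies reported activation
# at a location, and whether each study used a given term (e.g. "pain").
activated = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
uses_term = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=bool)

# Forward inference: P(activation | term).
p_act_given_term = activated[uses_term].mean()

# Reverse inference via Bayes' rule, assuming a uniform prior on the term.
p_term = 0.5
p_act_given_not = activated[~uses_term].mean()
p_act = p_act_given_term * p_term + p_act_given_not * (1 - p_term)
p_term_given_act = p_act_given_term * p_term / p_act  # P(term | activation)
```

The distinction matters because forward inference (activation given a mental state) can be high for a region that activates under almost any task, while reverse inference discounts such base-rate activity.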
Author guidelines for journals could help to promote transparency, openness, and reproducibility
Psychology has historically been concerned, first and foremost, with explaining the causal mechanisms that give rise to behavior. Randomized, tightly controlled experiments are enshrined as the gold standard of psychological research, and there are endless investigations of the various mediating and moderating variables that govern various behaviors. We argue that psychology's near-total focus on explaining the causes of behavior has led much of the field to be populated by research programs that provide intricate theories of psychological mechanism but that have little (or unknown) ability to predict future behaviors with any appreciable accuracy. We propose that principles and techniques from the field of machine learning can help psychology become a more predictive science. We review some of the fundamental concepts and tools of machine learning and point out examples where these concepts have been used to conduct interesting and important psychological research that focuses on predictive research questions. We suggest that an increased focus on prediction, rather than explanation, can ultimately lead us to greater understanding of behavior.
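The prediction-versus-explanation distinction drawn above comes down to how a model is evaluated: a predictive model is judged on data it was not fit to. A minimal sketch with simulated data (all variables and coefficients here are illustrative, not drawn from any study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 100 participants, 3 predictors, a noisy linear outcome.
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -0.3, 0.8]) + rng.normal(scale=0.5, size=100)

# Fit on a training half, evaluate on the held-out half: out-of-sample
# accuracy, not in-sample fit, is the measure of predictive power.
train, test = slice(0, 50), slice(50, 100)
beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred = X[test] @ beta
r = np.corrcoef(pred, y[test])[0, 1]  # held-out prediction accuracy
```

In-sample fit statistics can always be inflated by adding parameters; the held-out correlation cannot, which is why machine learning treats it as the primary criterion.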
Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors, and lack of direct replication apply to many fields, but perhaps particularly to fMRI. Here we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful answers to neuroscientific questions.

Main text

Neuroimaging, particularly using functional magnetic resonance imaging (fMRI), has become the primary tool of human neuroscience 1, and recent advances in the acquisition and analysis of fMRI data have provided increasingly powerful means to dissect brain function. The most common form of fMRI (known as "blood oxygen level dependent" or BOLD fMRI) measures brain activity indirectly through localized changes in blood oxygenation that occur in relation to synaptic signaling 2. These signal changes provide the ability to map activation in relation to specific mental processes, identify functionally connected networks from resting fMRI 3, characterize neural representational spaces 4, and decode or predict mental function from brain activity 5,6. These advances promise to offer important insights into the workings of the human brain, but also generate the potential for a "perfect storm" of irreproducible results.
In particular, the high dimensionality of fMRI data, the relatively low power of most fMRI studies, and the great amount of flexibility in data analysis all potentially contribute to a high rate of false-positive findings. Recent years have seen intense interest in the reproducibility of scientific results and the degree to which some problematic yet common research practices may be responsible for high rates of false findings in the scientific literature, particularly within psychology but also more generally [7][8][9]. There is growing interest in "meta-research" 10, and a corresponding growth in studies investigating factors that contribute to poor reproducibility. These factors include study design characteristics that may introduce bias, low statistical power, and flexibility in data collection, analysis, and reporting, termed "researcher degrees of freedom" by Simmons and colleagues 8. There is clearly concern that these issues may be undermining the value of science: in the UK, the Academy of Medical Sciences recently convened a joint meeting with a number of other funders to explore these issues, while in the US the National Institutes of Health has an ongoing initiative to improve research reproducibility 11. In this article we outline a number of potentially problematic research practices in neuroimaging that can lead to an increased risk of false or exaggerated results. For each prob...
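The link between low power and untrustworthy findings can be made concrete with the positive predictive value (PPV) formula used in the meta-research literature: among statistically significant results, the fraction reflecting true effects depends on power, the significance threshold, and the prior odds that a tested effect is real. A minimal sketch; the 1:4 prior odds are an illustrative assumption, not a figure from this article:

```python
def ppv(power, alpha, prior_odds):
    """Positive predictive value: P(effect is real | significant result)."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# A well-powered study vs. a typical low-powered fMRI study, assuming
# 1:4 prior odds that a tested effect is real.
high_power = ppv(power=0.8, alpha=0.05, prior_odds=0.25)  # 0.80
low_power = ppv(power=0.2, alpha=0.05, prior_odds=0.25)   # 0.50
```

Under these assumptions, dropping power from 0.8 to 0.2 means half of all significant findings are false positives, even with no analytic flexibility or bias at all.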
Neuroimaging has evolved into a widely used method to investigate the functional neuroanatomy, brain-behaviour relationships, and pathophysiology of brain disorders, yielding a literature of more than 30,000 papers. With such an explosion of data, it is increasingly difficult to sift through the literature and distinguish spurious from replicable findings. Furthermore, due to the large number of studies, it is challenging to keep track of the wealth of findings. A variety of meta-analytical methods (coordinate-based and image-based) have been developed to help summarise and integrate the vast amount of data arising from neuroimaging studies. However, the field lacks specific guidelines for the conduct of such meta-analyses. Based on our combined experience, we propose best-practice recommendations that researchers from multiple disciplines may find helpful. In addition, we provide specific guidelines and a checklist that will hopefully improve the transparency, traceability, replicability and reporting of meta-analytical results of neuroimaging data.
Recent work has indicated that the insula may be involved in goal-directed cognition, switching between networks, and the conscious awareness of affect and somatosensation. However, these findings have been limited by the insula's remarkably high base rate of activation and considerable functional heterogeneity. The present study used a relatively unbiased data-driven approach combining resting-state connectivity-based parcellation of the insula with large-scale meta-analysis to understand how the insula is anatomically organized based on functional connectivity patterns as well as the consistency and specificity of the associated cognitive functions. Our findings support a tripartite subdivision of the insula and reveal that the patterns of functional connectivity in the resting-state analysis appear to be relatively conserved across tasks in the meta-analytic coactivation analysis. The function of the networks was meta-analytically "decoded" using the Neurosynth framework and revealed that while the dorsoanterior insula is more consistently involved in human cognition than ventroanterior and posterior networks, each parcellated network is specifically associated with a distinct function. Collectively, this work suggests that the insula is instrumental in integrating disparate functional systems involved in processing affect, sensory-motor processing, and general cognition and is well suited to provide an interface between feelings, cognition, and action.
Open access, open data, open source and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices. DOI: http://dx.doi.org/10.7554/eLife.16800.001