Categorical choices are preceded by the accumulation of sensory evidence in favor of one action or another. Current models describe evidence accumulation as a continuous process occurring at a constant rate, but this view is inconsistent with accounts of a psychological refractory period during sequential information processing. During multisample perceptual categorization, we found that the neural encoding of momentary evidence in human electrical brain signals and its subsequent impact on choice fluctuated rhythmically according to the phase of ongoing parietal delta oscillations (1-3 Hz). By contrast, lateralized beta-band power (10-30 Hz) overlying human motor cortex encoded the integrated evidence as a response preparation signal. These findings draw a clear distinction between central and motor stages of perceptual decision making, with successive samples of sensory evidence competing to pass through a serial processing bottleneck before being mapped onto action.
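The contrast between a constant-rate accumulator and one whose sensitivity waxes and wanes with the phase of a slow oscillation can be pictured with a toy sketch (purely illustrative; the function name, the cosine gain profile, and the modulation depth are assumptions, not the authors' model):

```python
import math

def accumulate(evidence, phases, depth=0.5):
    """Sum momentary evidence samples, weighting each by a gain that
    fluctuates with the phase of a slow (delta-band) oscillation.
    depth=0 recovers a standard constant-rate accumulator."""
    total = 0.0
    for sample, phi in zip(evidence, phases):
        gain = 1.0 + depth * math.cos(phi)  # rhythmic sensitivity
        total += gain * sample
    return total

# Same evidence, with and without rhythmic modulation of sensitivity.
samples = [0.2, -0.1, 0.4, 0.3]
phases = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
constant = accumulate(samples, phases, depth=0.0)  # weights all samples equally
rhythmic = accumulate(samples, phases, depth=0.5)  # up/down-weights by phase
```

With nonzero depth, samples arriving at an unfavourable phase are down-weighted, so identical evidence streams can yield different decision variables depending on their timing relative to the oscillation.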
To survive, humans must estimate their own abilities and the abilities of others. We found that, although people estimated their abilities on the basis of their own performance in a rational manner, their estimates of themselves were partly merged with the performance of others. Reciprocally, their ability estimates for others also reflected their own, as well as the others', performance. Self-other mergence operated in a context-dependent manner: interacting with high or low performers, respectively, enhanced and diminished estimates of one's own ability in cooperative contexts, but the opposite occurred in competitive contexts. Self-other mergence not only influenced subjective evaluations; it also affected how people subsequently adjusted their objective performance. Perigenual anterior cingulate cortex tracked one's own performance. Dorsomedial frontal area 9 tracked others' performances but also integrated contextual and self-related information. Self-other mergence increased with the strength of self and other representations in area 9, suggesting that this area carries interdependent representations of self and other.
Natural environments are complex, and a single choice can lead to multiple outcomes. Agents should learn which outcomes are caused by their choices, and therefore relevant for future decisions, and which are stochastic in ways common to all choices, and therefore irrelevant for deciding between options. We designed an experiment in which human participants learned the varying reward and effort magnitudes of two options and repeatedly chose between them. The reward associated with a choice was randomly real or hypothetical (i.e., participants only sometimes received the reward magnitude associated with the chosen option). The real/hypothetical nature of the reward on any one trial was, however, irrelevant for learning the longer-term values of the choices; participants ought to have focused only on the informational content of the outcome and disregarded whether the reward was real or hypothetical. Nevertheless, participants showed an irrational choice bias, preferring options that had, by chance, yielded a real reward on the previous trial. Amygdala and ventromedial prefrontal activity was related to the degree to which participants' choices were biased by real reward receipt. By contrast, activity in dorsal anterior cingulate cortex, frontal operculum/anterior insula, and especially lateral anterior prefrontal cortex was related to the degree to which participants resisted this bias and chose effectively, guided by the aspects of outcomes that bore real and sustained relationships to particular choices, while suppressing irrelevant reward information to support more optimal learning and decision making.
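The irrational bias can be caricatured as a value-based chooser with an added bonus for the option whose most recent outcome happened to be real (a minimal illustrative sketch; the scoring rule and the `bias` parameter are assumptions, not the authors' fitted model):

```python
def choose(values, last_real, bias=0.5):
    """Pick the higher-scored option. Each option's score is its learned
    value plus an (irrational) bonus if its most recent outcome was a real
    rather than hypothetical reward. bias=0 gives the normatively correct
    chooser, which uses only the informational content of outcomes."""
    scored = [v + (bias if real else 0.0) for v, real in zip(values, last_real)]
    return max(range(len(scored)), key=lambda i: scored[i])

# Option 1 is objectively better, but option 0's last reward was real.
values = [0.6, 0.8]
last_real = [True, False]
biased = choose(values, last_real, bias=0.5)    # picks option 0
rational = choose(values, last_real, bias=0.0)  # picks option 1
```

The single `bias` term captures the key point: the real/hypothetical status of the last outcome carries no information about option value, yet a nonzero bonus is enough to flip choices away from the better option.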
Real-world decisions often have benefits that occur only later and depend on additional decisions taken in the interim. We investigated this in a novel decision-making task in humans (n = 76) while measuring brain activity with fMRI (n = 24). Modeling revealed that participants computed the prospective value of decisions: they planned their future behavior, taking into account how their decisions might affect which states they would encounter and how they themselves might respond in those states. They considered their own likely future behavioral biases (e.g., failure to adapt to changes in prospective value) and avoided situations in which they might be prone to such biases. Three neural networks in adjacent medial frontal regions were linked to distinct components of prospective decision making: activity in dorsal anterior cingulate cortex, area 8m/9, and perigenual anterior cingulate cortex reflected, respectively, prospective value, anticipated changes in prospective value, and the degree to which prospective value influenced decisions.
Environments furnish multiple information sources for making predictions about future events. Here we use behavioural modelling and fMRI to describe how humans select the predictors that are likely to be most relevant. First, during early encounters with potential predictors, participants' selections were explorative and directed towards subjectively uncertain predictors (a positive uncertainty effect). This was particularly the case when many future opportunities remained to exploit the knowledge gained. Preferences for accurate predictors then increased over time, while uncertain predictors were avoided (a negative uncertainty effect). The behavioural transition from positive to negative uncertainty-driven selections was accompanied by changes in representations of belief uncertainty in ventromedial prefrontal cortex (vmPFC). The polarity of uncertainty representations (positive or negative encoding of uncertainty) changed between exploration and exploitation periods. Moreover, the two periods were separated by a third, transitional period in which beliefs about predictors' accuracy predominated. Thus, vmPFC signals a multiplicity of decision variables, the strength and polarity of which vary with behavioural context.
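The shift from uncertainty-seeking to uncertainty-avoiding selection can be caricatured as a score whose uncertainty term flips sign as remaining opportunities run out (an illustrative sketch only; the half-horizon switch point and the linear scoring rule are assumptions, not the authors' model):

```python
def predictor_score(accuracy, uncertainty, trials_remaining, horizon):
    """Score a candidate predictor. Early on (many opportunities left),
    uncertainty adds to a predictor's appeal (exploration); later it
    subtracts from it (exploitation)."""
    polarity = 1.0 if trials_remaining > horizon / 2 else -1.0
    return accuracy + polarity * uncertainty

# The same predictor is scored differently early versus late in a session.
early = predictor_score(accuracy=0.5, uncertainty=0.3,
                        trials_remaining=80, horizon=100)  # exploration
late = predictor_score(accuracy=0.5, uncertainty=0.3,
                       trials_remaining=20, horizon=100)   # exploitation
```

A hard sign flip is of course a simplification; a graded polarity schedule would better match the transitional period the study describes.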
Decisions are based on value expectations derived from experience. We show that dorsal anterior cingulate cortex and three other brain regions hold multiple representations of choice value based on different timescales of experience organized in terms of systematic gradients across the cortex. Some parts of each area represent value estimates based on recent reward experience while others represent value estimates based on experience over the longer term. The value estimates within these areas interact with one another according to their temporal scaling. Some aspects of the representations change dynamically as the environment changes. The spectrum of value estimates may act as a flexible selection mechanism for combining experience-derived value information with other aspects of value to allow flexible and adaptive decisions in changing environments.
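A standard way to obtain value estimates over multiple timescales, in the spirit of the gradients described here, is to run delta-rule updates at several learning rates in parallel (a minimal sketch; the particular rates and the list representation are assumptions, not the study's fitted parameters):

```python
def update_values(values, reward, learning_rates):
    """Delta-rule updates at several learning rates in parallel: large
    rates track recent reward experience, small rates track experience
    over the longer term."""
    return [v + a * (reward - v) for v, a in zip(values, learning_rates)]

rates = [0.9, 0.3, 0.05]      # fast -> slow timescales
values = [0.0, 0.0, 0.0]
for r in [1, 1, 1, 0, 0]:     # rewards stop partway through
    values = update_values(values, r, rates)
```

After the rewards stop, the fast estimate collapses towards zero while the slow estimate still reflects the earlier reward history, so the spectrum of estimates jointly encodes both the recent and the longer-term environment.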
Decisions made by mammals and birds are often temporally extended: they require planning and the sampling of decision-relevant information. Our understanding of such decision making remains in its infancy compared to that of simpler, forced-choice paradigms. However, recent advances in algorithms supporting planning and information search provide a lens through which we can explain neural and behavioural data in these tasks. We review these advances to obtain a clearer understanding of why planning and curiosity originated in certain species but not others; how activity in the medial temporal lobe, prefrontal and cingulate cortices may support these behaviours; and how planning and information search may complement each other as means of improving future action selection.
To make good decisions, humans need to learn about and integrate different sources of appetitive and aversive information. While serotonin has been linked to value-based decision-making, its role in learning is less clear, with acute manipulations often producing inconsistent results. Here, we show that when the effects of a selective serotonin reuptake inhibitor (SSRI, citalopram) are studied over longer timescales, learning is robustly improved. We measured brain activity with functional magnetic resonance imaging (fMRI) in volunteers as they performed a concurrent appetitive (money) and aversive (effort) learning task. We found that 2 weeks of citalopram enhanced reward and effort learning signals in a widespread network of brain regions, including ventromedial prefrontal and anterior cingulate cortex. At a behavioral level, this was accompanied by more robust reward learning. This suggests that serotonin can modulate the ability to learn via a mechanism that is independent of stimulus valence. Such effects may partly underlie SSRIs’ impact in treating psychological illnesses. Our results highlight both a specific function in learning for serotonin and the importance of studying its role across longer timescales.