We studied resting-state oscillatory connectivity using magnetoencephalography in healthy young humans (N = 183) genotyped for APOE-ɛ4, the greatest genetic risk factor for Alzheimer's disease (AD). Connectivity was increased in APOE-ɛ4 carriers across frequencies, most prominently in the alpha/beta band, in a set of mostly right-hemisphere connections, including lateral parietal and precuneus regions of the Default Mode Network. Similar regions also demonstrated hyperactivity, but only in gamma (40–160 Hz). In a separate study of AD patients, hypoconnectivity was seen in an extended bilateral network that partially overlapped with the hyperconnected regions seen in young APOE-ɛ4 carriers. Using machine learning, AD patients could be distinguished from elderly controls with reasonable sensitivity and specificity, while young APOE-ɛ4 carriers could also be distinguished from their controls with above-chance performance. These results support theories of initial hyperconnectivity driving eventual profound disconnection in AD and suggest that this hyperconnectivity is present decades before the onset of AD symptomatology.
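The abstract does not specify the classifier or features used; the following is a minimal illustrative sketch of the general approach, cross-validated classification of carriers versus controls from connectivity features, using simulated data and an assumed logistic-regression classifier.

```python
# Hypothetical sketch of carrier-vs-control classification from connectivity
# features. The classifier, feature count, and data are assumptions, not the
# study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated alpha/beta connectivity features: 90 edge strengths per participant
n_carriers, n_controls, n_edges = 40, 40, 90
carriers = rng.normal(0.1, 1.0, (n_carriers, n_edges))  # slight hyperconnectivity
controls = rng.normal(0.0, 1.0, (n_controls, n_edges))

X = np.vstack([carriers, controls])
y = np.array([1] * n_carriers + [0] * n_controls)

# Stratified 5-fold cross-validation estimates out-of-sample discriminability,
# which is what "above-chance performance" refers to
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Sensitivity and specificity, as reported in the abstract, would be computed from the held-out confusion matrix rather than plain accuracy.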
Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend to when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
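Cross-validated variance partitioning, as named in the abstract, asks how much held-out variance each feature set explains uniquely. A minimal sketch under simplifying assumptions (ordinary least squares, two feature sets, simulated data where only the "social" features matter):

```python
# Illustrative variance partitioning: unique variance of a feature set is the
# drop in held-out R^2 when that set is removed from the full model.
# Feature names and data are illustrative, not the study's actual predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 200
social = rng.normal(size=(n, 3))  # social-affective features (assumed)
visual = rng.normal(size=(n, 3))  # visual features (assumed)
# Simulated similarity judgments driven almost entirely by the social features
y = social @ np.array([1.0, 0.5, -0.8]) + 0.1 * rng.normal(size=n)

def cv_r2(X, y, k=5):
    """Held-out R^2 of an OLS fit with intercept, averaged over k folds."""
    idx = np.arange(len(y))
    r2 = []
    for test in np.array_split(idx, k):
        train = np.setdiff1d(idx, test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        resid = y[test] - Xte @ beta
        r2.append(1 - resid.var() / y[test].var())
    return float(np.mean(r2))

full = cv_r2(np.hstack([social, visual]), y)
unique_social = full - cv_r2(visual, y)  # variance only social features explain
unique_visual = full - cv_r2(social, y)
print(unique_social, unique_visual)
```

In this simulation the social features carry nearly all the unique variance, mirroring the qualitative pattern the abstract reports ("better than, and independently of, visual and action features").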
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200–500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories, filtered at different spatial frequencies, to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception.
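The core RSA computation is a rank correlation between a neural representational dissimilarity matrix (RDM) and a model RDM (e.g. one built from network layer activations). A minimal sketch with simulated condition patterns (all names and dimensions are illustrative):

```python
# Illustrative RSA step: build RDMs from condition patterns and correlate
# their pairwise-distance vectors. Simulated data, not the study's pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_conditions, n_channels = 12, 64

model_features = rng.normal(size=(n_conditions, 5))  # e.g. layer activations
# Neural patterns that partially reflect the model features, plus noise
neural_patterns = model_features @ rng.normal(size=(5, n_channels)) \
    + 0.5 * rng.normal(size=(n_conditions, n_channels))

# RDMs: condition-by-condition correlation distance, vectorised upper triangle
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Spearman correlation between the two RDMs quantifies representational fit
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RDM correlation: {rho:.2f}")
```

Repeating this per time point (and per model, e.g. low- vs high-level network layers) yields the temporal profiles of representational fit that this kind of study reports.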
Magnetoencephalography (MEG) is increasingly being used to study brain function because of its excellent temporal resolution and its direct association with brain activity at the neuronal level. One possible source of error in the analysis of MEG data is that participants, even MEG-experienced ones, move their head in the MEG system. Head movement can cause source localization errors during the analysis of MEG data, which can result in apparent source variability that does not reflect brain activity. The MEG community places great importance on eliminating this source of error, as is evident, for example, in recent efforts to develop head casts that limit head movement in the MEG system. In this work we use software tools to identify, assess, and eliminate from the analysis of MEG data any possible correlations between head movement in the MEG system and widely used measures of brain activity derived from MEG resting-state recordings. The measures of brain activity we study are a) the Hilbert-transform-derived amplitude envelope of the beamformer time series and b) functional networks, both derived from MEG resting-state recordings. Ten-minute MEG resting-state recordings were performed on healthy participants, with head position continuously recorded. The sources of the measured magnetic signals were localized via beamformer spatial filtering. Temporal independent component analysis was subsequently used to derive resting-state networks. Significant correlations were observed between the beamformer envelope time series and head movement. The correlations were substantially reduced, and in some cases eliminated, after a participant-specific temporal high-pass filter was applied to those time series. Regressing the head movement metrics out of the beamformer envelope time series had an even stronger effect in reducing these correlations.
Correlation trends were also observed between head movement and the activation time series of the default-mode and frontal networks. Regressing the head movement metrics out of the beamformer envelope time series completely eliminated these correlations. Additionally, applying the head movement correction changed the network spatial maps for the visual and sensorimotor networks. Our results a) show that MEG resting-state studies using the above analysis methods are confounded by head movement effects, b) suggest that regressing the head movement metrics out of the beamformer envelope time series is a necessary addition to these analyses, in order to eliminate the effect that head movement has on the amplitude envelope of the beamformer time series and the network time series, and c) highlight the changes in connectivity spatial maps when head movement correction is applied.
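The nuisance-regression step the abstract recommends is an ordinary least-squares projection: fit the movement metrics to the envelope time series and keep the residual. A minimal sketch with illustrative variable names and simulated data:

```python
# Minimal sketch of regressing head-movement metrics out of a beamformer
# envelope time series. Names, units, and data are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 600                                  # e.g. 10-min envelope at 1 Hz
head_movement = rng.normal(size=(n_samples, 3))  # e.g. x/y/z coil displacement
neural = rng.normal(size=n_samples)

# An envelope contaminated by linear movement artefacts
envelope = neural + head_movement @ np.array([0.8, -0.5, 0.3])

# Regress movement (plus an intercept) out via least squares; keep the residual
design = np.column_stack([np.ones(n_samples), head_movement])
beta, *_ = np.linalg.lstsq(design, envelope, rcond=None)
cleaned = envelope - design @ beta

# The residual is orthogonal to every movement regressor by construction
print(np.corrcoef(cleaned, head_movement[:, 0])[0, 1])
```

Because the residual is orthogonal to the regressors by construction, any remaining envelope-movement correlation after this step would indicate nonlinear movement effects that a linear regression cannot remove.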
Background: In autism spectrum disorders (ASDs), impairments in fundamental social abilities and a lack of interest in social stimuli become apparent early in life. These impairments are thought to negatively affect further brain and behavioural development. Early intensive interventions can help to attenuate these risks to social development and, thus, to ameliorate the deficits associated with ASDs. We present FIAS, an intensive early-intervention approach for young children with ASD that aims at developing children's social motivation. For 18 days, therapists work continuously for 6 h a day with the affected child, involving the whole family in a day-care setting. Follow-up care at home over 1 year, as well as booster interventions and inclusion in a kindergarten or play group, should stabilise the effects and help to respond to further challenges. Material and Methods: Here, we present observations from the first 12 patients (25-48 months of age) treated according to the FIAS approach. We evaluated changes in core autistic symptoms and level of functioning after the 18 days of intensive intervention. Beyond standardised assessment, two innovative video-based instruments (the Autism Behaviour Coding System and the Evaluationsfragebogen, an evaluation questionnaire) were developed to assess autistic symptoms and interaction parameters during the intervention. Results: Improvements were noted in most core autistic symptom domains, with the highest effect sizes in domains such as eye contact, communication, repetitive behaviour, imitation, motivation and reciprocity. In addition, the level of functioning significantly improved. Conclusions: This first evaluation of the FIAS approach shows promising results, as the FIAS intervention appears to improve core autistic symptom domains as well as the level of everyday functioning. Limitations of this study are the small sample size and the lack of a control group.
A more comprehensive and longitudinal evaluation is in progress; this will focus on the stability of the observed effects and will attempt to identify potential predictors of treatment response.