The graphemic representations that underlie spelling performance must encode not only the identities of the letters in a word, but also the positions of the letters. This study investigates how letter position information is represented. We present evidence from two dysgraphic individuals, CM and LSS, who perseverate letters when spelling: that is, letters from previous spelling responses intrude into subsequent responses. The perseverated letters appear more often than expected by chance in the same position in the previous and subsequent responses. We used these errors to address the question of how letter position is represented in spelling. In a series of analyses, we determined how often the perseveration errors maintain position as defined by each of several alternative theories of letter position encoding proposed in the literature. The analyses provide strong evidence that the grapheme representations used in spelling encode letter position in a graded manner, based on distance from both edges of the word.

Keywords: letter position coding; dysgraphia; spelling; letter perseveration errors; orthographic processing

Many cognitive functions require the ability to represent and process sequences of items or events. Sequence information is essential, for example, in recalling a telephone number, reasoning about causes and effects, navigating a route through an environment, or producing a sentence.
As Karl Lashley pointed out more than 50 years ago in The problem of serial order in behavior (Lashley, 1951), the question of how the brain represents and processes ordered sequences is far from trivial, and this question remains a central concern for research in a variety of cognitive domains (e.g., working memory: Henson, 1998; motor control: Bullock, 2004; reading: Grainger & Whitney, 2004; music performance: Palmer, 2005; spoken language production: Dell, Burger, & Svec, 1997).

This article addresses the serial order issue in the context of spelling. Spelling a word requires not only information about the identities of the letters in the word, but also information about the ordering of those letters. This ordering information could be encoded in a variety of ways. In the word PENCIL, for example, the letter E could be represented as the second letter in the word, the letter five positions from the end of the word, the letter in the nucleus of the first (orthographic) syllable, or the letter that follows P and precedes N. In each case the E's position ...
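The alternative position-coding schemes enumerated above can be made concrete with a small sketch. The function names, the graded similarity measure, and the 0.5 decay rate below are illustrative assumptions, not part of the study's analysis code:

```python
# Illustrative sketch of letter-position coding schemes (hypothetical
# functions; the decay rate 0.5 is an arbitrary assumption).

def start_based(i, n):
    # Position of the letter at index i, counted from the left edge
    # of an n-letter word.
    return i

def end_based(i, n):
    # Position counted from the right edge of the word.
    return n - i

def maintains_position(i_prev, n_prev, i_next, n_next, scheme):
    # Does a perseverated letter occupy the "same position" in the
    # previous and subsequent responses, under a given scheme?
    return scheme(i_prev, n_prev) == scheme(i_next, n_next)

def both_edges_similarity(i1, n1, i2, n2, decay=0.5):
    # Graded code anchored at both edges: positional similarity falls
    # off with the mismatch in distance from each edge.
    start_sim = decay ** abs(i1 - i2)
    end_sim = decay ** abs((n1 - i1) - (n2 - i2))
    return start_sim + end_sim

# The letter at index 1 of a 5-letter word reappearing at index 1 of a
# 6-letter word keeps its start-based position but not its end-based one.
print(maintains_position(1, 5, 1, 6, start_based))  # True
print(maintains_position(1, 5, 1, 6, end_based))    # False
print(both_edges_similarity(1, 5, 1, 6))            # 1.5
```

Under the graded both-edges scheme, a perseverated letter need not land at an identical index to count as partially position-preserving; its similarity simply decreases with distance from each edge match.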
Reading is a rapid, distributed process that engages multiple components of the ventral visual stream. However, the neural constituents and their interactions that allow us to identify written words are not well understood. Using direct intracranial recordings in a large cohort of humans, we comprehensively isolated the spatiotemporal dynamics of visual word recognition across the entire left ventral occipitotemporal cortex. The mid-fusiform cortex is the first region that is sensitive to word identity and to both sub-lexical and lexical frequencies. Its activation, response latency, and amplitude are highly dependent on the statistics of natural language. Information about lexicality and word frequency propagates posteriorly from this region to traditional visual word form regions and to earlier visual cortex. This unique sensitivity of mid-fusiform cortex to the lexical characteristics of written words points to its central role as an orthographic lexicon, which accesses the long-term memory representations of visual word forms.

... Woodhead et al., 2014) to enable rapid orthographic-lexical-semantic transformations. While most of our knowledge of the cortical architecture of reading arises from functional MRI, the rapid speed of reading demands that we use methods with very high spatiotemporal resolution to study these processes. To this end, we used recordings in 35 individuals with 784 intracranial electrodes to comprehensively characterize the spatial organization and functional roles of orthographic and lexical regions across the ventral visual pathway during sub-lexical and lexical processes. Given their construction, these two tasks, performed in the same cohort, tap into varying levels of attentional modulation of orthographic processing. Specifically, we isolated functionally distinct regions across the vOTC that are highly sensitive to the structure and statistics of natural language at multiple stages of orthographic processing.
Many theories of visual word processing assume obligatory semantic access and phonological recoding whenever a written word is encountered. However, the relative importance of different reading processes depends on the task. The current study uses event-related potentials (ERPs) to investigate whether – and, if so, when and how – task modulates how visually presented words are processed. Participants were presented with written words in the context of two tasks: delayed reading aloud and proper name detection. Stimuli varied factorially on lexical frequency and on spelling-to-sound regularity, while controlling for other lexical variables. Effects of both lexical frequency and regularity were modulated by task. Lexical frequency modulated N400 amplitude, but only in the reading aloud task, whereas spelling-to-sound regularity interacted with frequency to modulate the LPC, again only in the reading aloud task. Taken together, these results demonstrate that task demands affect how meaning and sound are generated from written words.
The representations that underlie our ability to read must encode not only the identities of the letters in a word, but also their relative positions.
In immediate serial recall, participants are asked to recall novel sequences of items in the correct order. Theories of the representations and processes required for this task differ in how order information is maintained; some have argued that order is represented through item-to-item associations, while others have argued that each item is coded for its position in a sequence, with position being defined either by distance from the start of the sequence, or by distance from both the start and the end of the sequence. Previous researchers have used error analyses to adjudicate between these different proposals. However, these previous attempts have not allowed researchers to examine the full set of alternative proposals. In the current study, we analyzed errors produced in 2 immediate serial recall experiments that differ in the modality of input (visual vs. aural presentation of words) and the modality of output (typed vs. spoken responses), using new analysis methods that allow for a greater number of alternative hypotheses to be considered. We find evidence that sequence positions are represented relative to both the start and the end of the sequence, and show a contribution of the end-based representation beyond the final item in the sequence. We also find limited evidence for item-to-item associations, suggesting that both a start-end positional scheme and item-to-item associations play a role in representing item order in immediate serial recall.
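One signature that distinguishes these positional schemes is where an item from the previous trial should intrude when consecutive lists differ in length. The following is a minimal sketch under assumed conventions (1-based positions; hypothetical function names, not the study's analysis code):

```python
# Where a protrusion (an item intruding from the previous trial's list)
# should land under two positional coding schemes, when the previous and
# current lists differ in length. Positions are 1-based.

def predicted_slot_start_based(prev_pos, prev_len, cur_len):
    # Start-based coding: the item keeps its distance from the start.
    return prev_pos

def predicted_slot_end_based(prev_pos, prev_len, cur_len):
    # End-based coding: the item keeps its distance from the end.
    return cur_len - (prev_len - prev_pos)

# An item recalled last (position 5) in a 5-item list, intruding into a
# 6-item list:
print(predicted_slot_start_based(5, 5, 6))  # 5
print(predicted_slot_end_based(5, 5, 6))    # 6 (the last slot again)
```

Observed protrusion positions can then be compared against each scheme's predictions; this is the logic behind the error analyses described above, which found a contribution of end-based coding beyond the final item.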
Recent work in cognitive neuroscience has focused on analyzing the brain as a network, rather than as a collection of independent regions. Prior studies taking this approach have found that individual differences in the degree of modularity of the brain network relate to performance on cognitive tasks. However, inconsistent results concerning the direction of this relationship have been obtained, with some tasks showing better performance as modularity increases and other tasks showing worse performance. A recent theoretical model [Chen, M., & Deem, M. W. 2015. Development of modularity in the neural activity of children's brains. Physical Biology, 12, 016009] suggests that these inconsistencies may be explained on the grounds that high-modularity networks favor performance on simple tasks whereas low-modularity networks favor performance on more complex tasks. The current study tests these predictions by relating modularity from resting-state fMRI to performance on a set of simple and complex behavioral tasks. Complex and simple tasks were defined on the basis of whether they did or did not draw on executive attention. Consistent with predictions, we found a negative correlation between individuals' modularity and their performance on a composite measure combining scores from the complex tasks but a positive correlation with performance on a composite measure combining scores from the simple tasks. These results and theory presented here provide a framework for linking measures of whole-brain organization from network neuroscience to cognitive processing.
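The modularity measure referred to here is standard in network neuroscience. Below is a minimal pure-Python sketch of Newman's modularity Q on a toy graph; it illustrates the quantity only and is not the study's fMRI pipeline:

```python
# Newman's modularity for an undirected, unweighted graph:
#   Q = (1 / 2m) * sum over same-community node pairs (i, j) of
#       (A_ij - k_i * k_j / 2m),
# where m is the edge count, A the adjacency matrix, k_i the degree of i.

def modularity(edges, community):
    m = len(edges)  # number of undirected edges
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            if community[i] != community[j]:
                continue
            # A_ij: 1 if an edge connects i and j (0 on the diagonal).
            a_ij = sum(1 for u, v in edges if {u, v} == {i, j})
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge: a clearly modular network.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
communities = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(edges, communities), 3))  # 0.357
```

Higher Q means connections are denser within communities than expected by chance. In the study's framing, each individual's resting-state network yields one such modularity value, which is then correlated with composite performance on the simple and complex task sets.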