We report evidence for spatially parallel visual search for targets defined by combinations of form elements. In Section 1, we show that flat search functions occur for combined-form targets when distractor forms are homogeneous and can be grouped together, thus segmenting the target from the distractors. Introducing heterogeneous distractors lessens distractor grouping and can produce serial search. These results cannot easily be attributed to subjects' use of local feature information to discriminate targets. Instead, they suggest that grouping can operate at a level at which combined-form information is represented. In Section 2 we show that these grouping effects are spatially scaled by the size of the stimuli. In Section 3 we show that heterogeneity does not prevent flat search functions when the target has a unique defining feature. The data are interpreted in terms of a hierarchical processing system involving both dedicated single-feature and combined-feature (junction) maps. Grouping processes can operate at both the single-feature and the combined-form levels. Selection in visual search remains confined to one object description at a time, but this description can be at various spatial scales, including that at the level of grouped forms.
Recently, the authors presented evidence that new items can be prioritized for selection by the top-down attentional inhibition of old stimuli already in the field (visual marking; D. G. Watson & G. W. Humphreys, 1997). In this article the authors assess whether this inhibition extends to moving old items and test an alternative account of visual marking. Six experiments showed that old moving items could be inhibited provided they did not undergo abrupt property changes. Further, and in contrast to effects with static stimuli, the marking of old moving stimuli was based on inhibition applied at the level of a whole feature map, rather than at the items' individual locations. The results also rule out an alternative account of visual marking based on the top-down weighting of dynamic or static processing pathways.
Three experiments investigated whether spatial cuing influences luminance-increment detection accuracy. Ss saw multiple-target displays and responded yes or no to 4 locations, including the cued position. To test whether cuing effects are due to the load on visual short-term memory from the number of locations, Experiments 1 and 2 presented displays with 4 or 8 relevant locations. Experiment 1 used peripheral cues; Experiment 2 used central cues. Cuing effects were significant but less marked with 4- than with 8-location displays. Cuing effects were largest with multiple targets, but a small, reliable effect remained even with single targets. Experiment 3 replicated the single-target effect with predominantly multiple- and single-target displays. A capacity-limited selection account is developed for these findings, and their implications for separate central and peripheral cuing mechanisms and for the locus of spatial cuing effects are discussed.
The role of perceptual grouping and the encoding of closure of local elements in the processing of hierarchical patterns was studied. Experiments 1 and 2 showed a global advantage over the local level for 2 tasks involving the discrimination of orientation and closure, but there was a local advantage for the closure discrimination task relative to the orientation discrimination task. Experiment 3 showed a local precedence effect for the closure discrimination task when local element grouping was weakened by embedding the stimuli from Experiment 1 in a background made up of cross patterns. Experiments 4A and 4B found that dissimilarity of closure between the local elements of hierarchical stimuli and the background figures could facilitate the grouping of closed local elements and enhance the perception of global structure. Experiment 5 showed that the advantage for detecting the closure of local elements in hierarchical analysis also held under divided- and selective-attention conditions. Results are consistent with the idea that grouping between local elements takes place in parallel and competes with the computation of closure of local elements in determining the selection between global and local levels of hierarchical patterns for response.
MiXeD-cAsE stimuli have long been used to test whether word recognition is based on holistic visual information or preliminary letter identification. However, without knowing which properties of mixed-case stimuli disrupt processing, it is not possible to determine which visual units mediate word recognition. The present studies examined the effects of case mixing on word and nonword naming as a function of (a) whether spaces were inserted between letters and (b) whether letter size was alternated independent of letter case. The results suggest that case-mixing disruption effects are due to at least 2 factors: the introduction of inappropriate grouping between letters with the same size and case, and the disruption of transletter features. The data support a model of visual lexical access based on the input from multiple visually based units.
This article demonstrates the operation of a direct visual route to action in response to objects, in addition to a semantically mediated route. Four experiments were conducted in which participants made gesturing or naming responses to pictures under deadline conditions. There was a cross-over interaction in the number of visual errors relative to the number of semantic plus semantic-visual errors in the two tasks: In gesturing, compared with naming, participants made higher proportions of visual errors and lower proportions of semantic plus semantic-visual errors (Experiments 1, 3, and 4). These results suggest that naming and gesturing are dependent on separate information-processing routes from stimulus to response, with gesturing dependent on a visual route in addition to a semantic route. Partial activation of competing responses from the visual information present in objects (mediated by the visual route to action) leads to high proportions of visual errors under deadline conditions. Also, visual errors do not occur when gestures are made in response to words under a deadline (Experiment 2), which indicates that the visual route is specific to seen objects.
Four experiments investigated the types of representations mediating sequential visual matching of objects depicted at different depth rotations. Matching performance was affected by the similarity between depicted views of the objects. Effects of view similarity were not influenced by the presence of a meaningless mask in the interstimulus interval (ISI), but they were reduced by long ISIs and by familiarity with the stimuli. It is suggested that with longer ISIs or increased stimulus familiarity, a number of object representations are activated that, although abstracted from some image characteristics, remain view specific. Under these conditions, matching is less reliant on representations closely tied to the view of the initial stimulus presented. The results are consistent with both the derivation and the long-term representation of view-specific rather than view-invariant descriptions of objects.
Whether the global shape of objects can be processed without accessing semantic or identity information was tested. Ss judged which of 2 fragmented forms had the same global shape as a reference stimulus. Matching stimuli could be physically identical, semantically related, or unrelated. The reference stimulus and nonmatching (distractor) form could be semantically related or unrelated. Similarity effects in the related condition were assessed for matches involving both nameable and nonnameable forms. For nameable forms, related matching forms facilitated performance; a related distractor disrupted performance. Semantic interference was eliminated when nameable distractors were replaced with nonnameable partners; semantic similarity effects on matching were eliminated with a nonnameable reference stimulus and with inverted targets and distractors. Access to information concerning global shape does not normally occur without object identification.