Date of Award
1-1-2015
Language
English
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
College/School/Department
Department of Psychology
Program
Cognitive Psychology
Content Description
1 online resource (vii, 83 pages) : color illustrations.
Dissertation/Thesis Chair
James H. Neely
Committee Members
James H. Neely, W. T. Neill, Laurie B. Feldman
Keywords
attention, label, language, speech, top-down effects, visual search, Auditory evoked response, Auditory perception, Visual evoked response, Visual perception, Interference (Perception)
Subject Categories
Cognitive Psychology | Psychology
Abstract
The present study explored the mechanisms underlying the self-directed speech effect, the finding that saying a label (e.g., “DOG”) aloud, relative to reading it silently, reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the number of cues presented in self-directed speech (two) vs. silent reading (one) and tested whether speech, per se, is required for the effect. Self-directed speech did not reduce search RTs more than merely hearing a pre-recorded auditory presentation of a label, and both reduced RTs relative to silent reading. Experiments 2 and 3 examined whether the benefit of an auditory presentation of the label relative to silent reading of a label (the auditory label effect) was affected by physically degrading the target or by how well the target picture matched prototypical features of the target concept (imagery concordance), respectively. Contrary to previous findings in which the auditory label effect occurred only for targets high in imagery concordance, an auditory label benefit occurred regardless of the target’s imagery concordance, and degrading the target did not increase the auditory label effect. Experiment 4 was similar to Experiment 3 but, in addition to the auditory-word and visual-word cues, included two additional conditions that used picture cues: category members high and low in prototypicality for the target category. When the target’s imagery concordance was high, RTs following the high-prototypicality picture cue and the auditory cue were comparable to each other and shorter than RTs following the visual-label and low-prototypicality picture cues. However, when the target’s imagery concordance was low, RTs following the auditory cue were shorter than RTs following the picture cues and the visual-label cue, which were comparable to one another. The results suggest that an auditory label activates both prototypical and atypical features of a concept.
Recommended Citation
Cho, Kit Wing, "I can see what you are saying: auditory labels reduce visual search times" (2015). Legacy Theses & Dissertations (2009 - 2024). 1357.
https://scholarsarchive.library.albany.edu/legacy-etd/1357