Abstract:
Blind people rely mostly on the auditory feedback of screen readers to consume digital information. Efficiency is a problem, especially in situations where relevant information must be recognized among large amounts of irrelevant information. Sighted people use scanning as a strategy to achieve this goal, glancing over all content to identify information of interest that is then analyzed with further care. In contrast, screen readers rely on a sequential auditory channel that impairs a quick overview of the content when compared to the visual presentation on screen.
We propose taking advantage of the Cocktail Party Effect, which states that people are able to focus their attention on a single voice among several conversations, yet still identify relevant content in the background. Therefore, rather than relying on a single sequential speech channel, we hypothesize that blind users can leverage concurrent speech to quickly get the gist of digital information. Grounded in literature reviews that documented several features (e.g. spatial location, voice characteristics) that increase speech intelligibility, we investigated whether and how we could take advantage of concurrent speech to accelerate blind people’s scanning for digital information.
Results confirm that blind people are able to scan for relevant content with two or three simultaneous voices. Most importantly, we show that two or three voices with speech rates slightly faster than the default rate enable significantly faster scanning for relevant content while maintaining comprehension. In contrast, to keep up with concurrent speech timings, a single voice requires a speech rate so fast that it causes a considerable loss in performance. We then investigated and explored other prospective scenarios for concurrent speech interfaces. Besides scenarios that focus on information consumption, we explored the use of concurrent speech to support two-handed exploration in multitouch scenarios. Overall, results show that concurrent speech accelerates the consumption of digital information in scanning scenarios, but that in tasks that require greater physical coordination with the speech sources, the benefits depend more on the task itself and on user strategies.
Based on user studies conducted and analyzed in this research, the thesis of this dissertation is:
Screen readers with concurrent speech enable faster, but still effective, scanning to find relevant digital information.
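For readers curious how concurrent presentation of this kind can be rendered, the sketch below mixes two pre-synthesized speech clips into a single stereo stream, panning each voice to a different side (spatial separation) and playing both slightly faster than recorded (an increased speech rate). It is an illustrative approximation only, not the system built in the dissertation; the file names, gain values, and 1.25 rate factor are assumed for the example, and the naive resampling used here raises pitch, whereas screen readers use pitch-preserving rate changes.

    # Illustrative sketch: two concurrent voices, spatially separated, at a faster rate.
    # Assumes two mono WAV clips already synthesized by a TTS engine (hypothetical file names).
    import numpy as np
    import soundfile as sf

    def speed_up(audio: np.ndarray, factor: float) -> np.ndarray:
        """Naive resampling speed-up (also raises pitch); real screen readers
        use pitch-preserving time stretching instead."""
        n_out = int(len(audio) / factor)
        return np.interp(np.linspace(0, len(audio) - 1, n_out),
                         np.arange(len(audio)), audio)

    def pan(audio: np.ndarray, left_gain: float, right_gain: float) -> np.ndarray:
        """Place a mono signal in the stereo field."""
        return np.stack([audio * left_gain, audio * right_gain], axis=1)

    voice_a, rate = sf.read("voice_a.wav")   # mono clip, e.g. one TTS voice
    voice_b, _ = sf.read("voice_b.wav")      # mono clip, e.g. a second, distinct TTS voice

    # Speech rate slightly above the default, as in the faster-rate conditions.
    voice_a = speed_up(voice_a, 1.25)
    voice_b = speed_up(voice_b, 1.25)

    # Spatial separation: one voice mostly left, the other mostly right.
    length = max(len(voice_a), len(voice_b))
    mix = np.zeros((length, 2))
    mix[:len(voice_a)] += pan(voice_a, 0.8, 0.2)
    mix[:len(voice_b)] += pan(voice_b, 0.2, 0.8)

    sf.write("concurrent.wav", mix / np.max(np.abs(mix)), rate)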
Full Thesis:
Download João Guerreiro’s Full Thesis
Thesis Advisor:
Daniel Gonçalves
Award Date:
April 27, 2016
Institution:
University of Lisbon
Lisbon, Portugal
Author Contact:
joao.p.guerreiro@tecnico.ulisboa.pt