ASSETS ’18: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility

SESSION: Keynote

Session details: Keynote

  • Richard Ladner

Exploring Paths to a More Accessible Digital Future

  • Judy Brewer

Advances and proliferation of digital technologies have greatly expanded access to the information society for people with disabilities. Yet when considering which devices, applications, or cutting-edge immersive virtual environments to explore, people with disabilities must still take into account whether our needs will be fully supported. Despite considerable progress over the years, any time we consider educational programs, employment opportunities, online banking, electronic health care portals, artistic endeavors and entertainment options, we still must worry whether we will encounter barriers, and whether we will need to find extra time to address user interface and interoperability problems before addressing the tasks we had originally planned.

SESSION: Session 1: Interacting with the Real World

Session details: Session 1: Interacting with the Real World

  • Kyle Montague

Investigating Cursor-based Interactions to Support Non-Visual Exploration in the Real World

  • Anhong Guo
  • Saige McVea
  • Xu Wang
  • Patrick Clary
  • Ken Goldman
  • Yang Li
  • Yu Zhong
  • Jeffrey P. Bigham

The human visual system processes complex scenes to focus attention on relevant items. However, blind people cannot visually skim for an area of interest. Instead, they use a combination of contextual information, knowledge of the spatial layout of their environment, and interactive scanning to find and attend to specific items. In this paper, we define and compare three cursor-based interactions to help blind people attend to items in a complex visual scene: window cursor (move their phone to scan), finger cursor (point their finger to read), and touch cursor (drag their finger on the touchscreen to explore). We conducted a user study with 12 participants to evaluate the three techniques on four tasks, and found that: window cursor worked well for locating objects on large surfaces, finger cursor worked well for accessing control panels, and touch cursor worked well for helping users understand spatial layouts. A combination of multiple techniques will likely be best for supporting a variety of everyday tasks for blind users.

What My Eyes Can’t See, A Robot Can Show Me: Exploring the Collaboration Between Blind People and Robots

  • Mayara Bonani
  • Raquel Oliveira
  • Filipa Correia
  • André Rodrigues
  • Tiago Guerreiro
  • Ana Paiva

Blind people rely on sighted peers and different assistive technologies to accomplish everyday tasks. In this paper, we explore how assistive robots can go beyond information-giving assistive technologies (e.g., screen readers) by physically collaborating with blind people. We first conducted a set of focus groups to assess how blind people perceive and envision robots. Results showed that, despite some stereotypical concerns, participants envision the integration of assistive robots in a broad range of everyday life scenarios and welcome this type of technology. In a second study, we asked blind participants to collaborate with two versions of a robot in a Tangram assembly task: one robot would only provide static verbal instructions, whereas the other would physically collaborate with participants and adjust its feedback to their performance. Results showed that active collaboration had a major influence on the successful performance of the task. Participants also reported higher perceived warmth, competence, and usefulness when interacting with the physically assistive robot. Overall, we provide preliminary results on the usefulness of assistive robots and the possible role these can hold in fostering a higher degree of autonomy for blind people.

Design of an Augmented Reality Magnification Aid for Low Vision Users

  • Lee Stearns
  • Leah Findlater
  • Jon E. Froehlich

Augmented reality (AR) systems that enhance visual capabilities could make text and other fine details more accessible for low vision users, improving independence and quality of life. Prior work has begun to investigate the potential of assistive AR, but recent advancements enable new AR visualizations and interactions not yet explored in the context of assistive technology. In this paper, we follow an iterative design process with feedback and suggestions from seven visually impaired participants, designing and testing AR magnification ideas using the Microsoft HoloLens. Participants identified several advantages to the concept of head-worn magnification (e.g., portability, privacy, ready availability), and to our AR designs in particular (e.g., a more natural reading experience and the ability to multitask). We discuss the strengths and weaknesses of this AR magnification approach and summarize lessons learned throughout the process.

HoloLearn: Wearable Mixed Reality for People with Neurodevelopmental Disorders (NDD)

  • Beatrice Aruanno
  • Franca Garzotto
  • Emanuele Torelli
  • Francesco Vona

Our research explores the potential of wearable Mixed Reality (MR) for people with Neuro-Developmental Disorders (NDD). The paper presents HoloLearn, an MR application designed in cooperation with NDD experts and implemented using HoloLens technology. The goal of HoloLearn is to help people with NDD learn how to perform simple everyday tasks in domestic environments and improve their autonomy. An original feature of the system is a virtual assistant devoted to capturing the user’s attention and giving hints during task execution in the MR environment. We performed an exploratory study involving 20 subjects with NDD to investigate the acceptability and usability of HoloLearn and its potential as a therapeutic tool. HoloLearn was well accepted by the participants, and the activities in the MR space were perceived as enjoyable, despite some usability problems associated with the HoloLens interaction mechanisms. More extensive and long-term empirical research is needed to validate these early results, but our study suggests that HoloLearn could be adopted as a complement to more traditional interventions. Our work, and the lessons we learned, may help designers and developers of future MR applications devoted to people with NDD and to other people with similar needs.

Improving the Academic Inclusion of a Student with Special Needs at University Bordeaux

  • John J. Kelway
  • Anke M. Brock
  • Pascal Guitton
  • Aurélie Millet
  • Yasushi Nakata

Recently, there has been a sharp increase in the number of students with disabilities (SWDs) enrolled in universities. Unfortunately, SWDs still struggle to attain the same level of education as non-disabled students. This paper presents a collaborative approach between members of the student support service, researchers, and a special needs student aimed at improving his access to and participation in university education. We performed a person-technology match and analyzed different existing technologies. Then, we designed and printed a keyguard, keyboard stand, and mobile armrest, which allowed him to almost double his text entry speed on a computer. We hope that our experience will inspire other universities to better address the needs of students with disabilities.

SESSION: Session 2: Supporting Speech

Session details: Session 2: Supporting Speech

  • Robin Brewer

Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users

  • Raymond Fok
  • Harmanpreet Kaur
  • Skanda Palani
  • Martez E. Mott
  • Walter S. Lasecki

Mobile, wearable, and other ubiquitous computing devices are increasingly creating a context in which conventional keyboard and screen-based inputs are being replaced in favor of more natural speech-based interactions. Digital personal assistants use speech to control a wide range of functionality, from environmental controls to information access. However, many deaf and hard-of-hearing users have speech patterns that vary from those of hearing users due to incomplete acoustic feedback from their own voices. Because automatic speech recognition (ASR) systems are largely trained using speech from hearing individuals, speech-controlled technologies are typically inaccessible to deaf users. Prior work has focused on providing deaf users access to aural output via real-time captioning or signing, but little has been done to improve users’ ability to provide input to these systems’ speech-based interfaces. Further, the vocalization patterns of deaf speech often make accurate recognition intractable for both automated systems and human listeners, making traditional approaches to mitigate ASR limitations, such as human captionists, less effective. To bridge this accessibility gap, we investigate the limitations of common speech recognition approaches and techniques—both automatic and human-powered—when applied to deaf speech. We then explore the effectiveness of an iterative crowdsourcing workflow, and characterize the potential for groups to collectively exceed the performance of individuals. This paper contributes a better understanding of the challenges of deaf speech recognition and provides insights for future system development in this space.

Behavioral Changes in Speakers who are Automatically Captioned in Meetings with Deaf or Hard-of-Hearing Peers

  • Matthew Seita
  • Khaled Albusays
  • Sushant Kafle
  • Michael Stinson
  • Matt Huenerfauth

Deaf and hard of hearing (DHH) individuals face barriers to communication in small-group meetings with hearing peers; we examine the generation of captions on mobile devices by automatic speech recognition (ASR). While ASR output contains errors, we study whether such tools benefit users and influence conversational behaviors. An experiment was conducted in which DHH and hearing individuals collaborated in discussions under three conditions (without an ASR-based application, with the application, and with a version indicating words for which the ASR has low confidence). An analysis of audio recordings from each participant across conditions revealed significant differences in speech features. When using the ASR-based automatic captioning application, hearing individuals spoke more loudly, with improved voice quality (harmonics-to-noise ratio), with non-standard articulation (changes in F1 and F2 formants), and at a faster rate. Identifying non-standard speech in this setting has implications for the composition of data used for ASR training/testing, which should be representative of its usage context. Understanding these behavioral influences may also enable designers of ASR captioning systems to leverage these effects to promote communication success.

Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing

  • Dhruv Jain
  • Rachel Franz
  • Leah Findlater
  • Jackson Cannon
  • Raja Kushalnagar
  • Jon Froehlich

Prior work has explored communication challenges faced by people who are deaf and hard of hearing (DHH) and the potential role of new captioning and support technologies to address these challenges; however, the focus has been on stationary contexts such as group meetings and lectures. In this paper, we present two studies examining the needs of DHH people in moving contexts (e.g., walking) and the potential for mobile captions on head-mounted displays (HMDs) to support those needs. Our formative study with 12 DHH participants identifies social and environmental challenges unique to or exacerbated by moving contexts. Informed by these findings, we introduce and evaluate a proof-of-concept HMD prototype with 10 DHH participants. Results show that, while walking, HMD captions can support communication access and improve attentional balance between the speaker(s) and navigating the environment. We close by describing open questions in the mobile context space and design guidelines for future technology.

Assessing Virtual Assistant Capabilities with Italian Dysarthric Speech

  • Fabio Ballati
  • Fulvio Corno
  • Luigi De Russis

The usage of smartphone-based virtual assistants (e.g., Siri or Google Assistant) is growing, and their spread generally has a positive impact on device accessibility, e.g., for people with disabilities. However, people with dysarthria or other speech impairments may be unable to use these virtual assistants with proficiency. This paper investigates to what extent people with ALS-induced dysarthria can be understood by, and get consistent answers from, three widely used smartphone-based assistants, namely Siri, Google Assistant, and Cortana. We focus on the recognition of Italian dysarthric speech to study the behavior of the virtual assistants with this specific population, for which no relevant studies are available. We collected and recorded suitable speech samples from people with dysarthria in a dedicated center of the Molinette hospital in Turin, Italy. Starting from those recordings, we investigate and discuss the differences between the assistants in terms of speech recognition and consistency of answers. Results highlight different performance among the virtual assistants. For speech recognition, Google Assistant is the most promising, with a word error rate of around 25% per sentence. For consistency of answers, Siri and Google Assistant provide coherent answers around 60% of the time.
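
For context, word error rate (WER) is the standard recognition metric used here: the word-level edit distance between the recognized text and a reference transcript, normalized by reference length. Below is a minimal sketch of how WER is typically computed (an illustration, not the paper's evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution

    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in five words -> WER 0.2; a ~25% WER means roughly
# one word in four differs from what the speaker intended.
print(word_error_rate("turn on the kitchen light", "turn on a kitchen light"))
```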

Usability Testing – An Aphasia Perspective

  • Abi Roper
  • Ian Davey
  • Stephanie Wilson
  • Timothy Neate
  • Jane Marshall
  • Brian Grellmann

This paper reports the experience of participating in usability testing from the perspective of a person with aphasia. We briefly report adaptations to classic usability testing to enable the participation of people with aphasia. These included the use of short, direct tasks and physical artefacts such as picture cards. Authors of the paper include Ian, a user with aphasia who participated in adapted usability testing and Abi, a speech and language therapist researcher who facilitated sessions. Ian reports that these methods allowed him, as a person with aphasia, to engage with the usability testing process. We argue that such adaptations are essential in order to develop technologies which will be accessible to people with aphasia. This collaborative report provides a case for both how and why these adaptations can be made.

SESSION: Session 3: Accessing Information

Session details: Session 3: Accessing Information

  • Sergio Mascetti

BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments

  • Abigale J. Stangl
  • Esha Kothari
  • Suyog D. Jain
  • Tom Yeh
  • Kristen Grauman
  • Danna Gurari

Our interviews with people who have visual impairments show clothes shopping is an important activity in their lives. Unfortunately, clothes shopping web sites remain largely inaccessible. We propose design recommendations to address online accessibility issues reported by visually impaired study participants and an implementation, which we call BrowseWithMe, to address these issues. BrowseWithMe employs artificial intelligence to automatically convert a product web page into a structured representation that enables a user to interactively ask the BrowseWithMe system what the user wants to learn about a product (e.g., What is the price? Can I see a magnified image of the pants?). This enables people to be active solicitors of the specific information they are seeking rather than passive listeners of unparsed information. Experiments demonstrate BrowseWithMe can make online clothes shopping more accessible and produce accurate image descriptions.

Examining Image-Based Button Labeling for Accessibility in Android Apps through Large-Scale Analysis

  • Anne Spencer Ross
  • Xiaoyi Zhang
  • James Fogarty
  • Jacob O. Wobbrock

We conduct the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. An epidemiology-inspired framework was used to structure the investigation. The population of free Android apps was assessed for label-based inaccessible button diseases. Three determinants of the disease were considered: missing labels, duplicate labels, and uninformative labels. The prevalence, or frequency of occurrences of barriers, was examined in apps and in classes of image-based buttons. In the app analysis, 35.9% of analyzed apps had 90% or more of their assessed image-based buttons labeled, 45.9% had less than 10% of assessed image-based buttons labeled, and the remaining apps were relatively uniformly distributed along the proportion of elements that were labeled. In the class analysis, 92.0% of Floating Action Buttons were found to have missing labels, compared to 54.7% of Image Buttons and 86.3% of Clickable Images. We discuss how these accessibility barriers are addressed in existing treatments, including accessibility development guidelines.
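
To make the "missing label" determinant concrete: Android screen readers announce image-based widgets via the android:contentDescription attribute, so an ImageButton, FloatingActionButton, or clickable ImageView without one is inaccessible. A sketch of this check over a layout XML file follows (illustrative only; the paper's pipeline analyzed compiled apps at scale, and the helper below is hypothetical):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

def unlabeled_image_buttons(layout_xml_path: str) -> list:
    """Return ids of image-based widgets in a layout file that lack a
    contentDescription (a sketch of the missing-label check)."""
    tree = ET.parse(layout_xml_path)
    flagged = []
    for elem in tree.iter():
        tag = elem.tag.split(".")[-1]  # strip any package prefix
        clickable = elem.get(f"{{{ANDROID_NS}}}clickable") == "true"
        image_based = (tag in ("ImageButton", "FloatingActionButton")
                       or (tag == "ImageView" and clickable))
        label = elem.get(f"{{{ANDROID_NS}}}contentDescription")
        if image_based and not label:
            flagged.append(elem.get(f"{{{ANDROID_NS}}}id", "<no id>"))
    return flagged
```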

Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility

  • Xiaoyi Zhang
  • Tracy Tran
  • Yuqian Sun
  • Ian Culhane
  • Shobhit Jain
  • James Fogarty
  • Jennifer Mankoff

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. Prior investigation of tactile solutions for large touchscreens also may not address the challenges on mobile devices. We therefore present Interactiles, a low cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.

Multimodal Deep Learning using Images and Text for Information Graphic Classification

  • Edward Kim
  • Kathleen F. McCoy

Information graphics, e.g. line or bar graphs, are often displayed in documents and popular media to support an intended message, but for a growing number of people, they are missing the point. The World Health Organization estimates that the number of people with vision impairment could triple in the next thirty years due to population growth and aging. If a graphic is not described, explained in the text, or missing alt tags and other metadata (as is often the case in popular media), the intended message is lost or not adequately conveyed. In this work, we describe a multimodal deep learning approach that supports the communication of the intended message. The multimodal model uses both the pixel data and text data in a single neural network to classify the information graphic into an intention category that has previously been validated as useful for people who are blind or who are visually impaired. Furthermore, we collect a new dataset of information graphics and present qualitative and quantitative results that show our multimodal model exceeds the performance of any one modality alone, and even surpasses the capabilities of the average human annotator.
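
To illustrate the general idea of fusing pixel and text data in a single network (a sketch only; the layer sizes, text encoding, and architecture here are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Concatenate image features and text features before classifying
    into an intention category."""
    def __init__(self, vocab_size: int, num_classes: int):
        super().__init__()
        self.image_branch = nn.Sequential(   # tiny CNN over 64x64 RGB graphics
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(),                    # -> 32 * 4 * 4 = 512 features
        )
        self.text_branch = nn.EmbeddingBag(vocab_size, 64)  # bag-of-words text
        self.classifier = nn.Linear(512 + 64, num_classes)

    def forward(self, image, token_ids, offsets):
        img_feat = self.image_branch(image)                  # (N, 512)
        txt_feat = self.text_branch(token_ids, offsets)      # (N, 64)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))
```

The key design point the abstract reports, that both modalities together beat either alone, corresponds here to training the two branches jointly through the shared classifier.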

SESSION: Session 4: Considering Design

Session details: Session 4: Considering Design

  • Patrick Carrington

Incorporating Social Factors in Accessible Design

  • Kristen Shinohara
  • Jacob O. Wobbrock
  • Wanda Pratt

Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility (DSA), a perspective emphasizing social aspects of accessibility, to investigate how professional designers can leverage social factors to include accessibility in design. We focused on how professional designers incorporated DSA’s three tenets: (1) to work with users with and without visual impairments; (2) to consider social and functional factors; and (3) to employ tools (a framework and method cards) to raise awareness of, and prompt reflection on, social aspects of accessible design. We then interviewed designers about their workshop experiences. We found DSA to be an effective set of tools and strategies incorporating social/functional and non/disabled perspectives that helped designers create accessible designs.

Interdependence as a Frame for Assistive Technology Research and Design

  • Cynthia L. Bennett
  • Erin Brady
  • Stacy M. Branham

In this paper, we describe interdependence for assistive technology design, a frame developed to complement the traditional focus on independence in the Assistive Technology field. Interdependence emphasizes collaborative access and the important, often understated, contributions of people with disabilities to these efforts. We lay the foundation of this frame with literature from the academic discipline of Disability Studies and popular media contributed by contemporary disability justice activists. Then, drawing on cases from our own work, we show how the interdependence frame (1) synthesizes findings from a growing body of research in the Assistive Technology field and (2) helps us orient to additional technology design opportunities. We position interdependence as one possible orientation to, not a prescription for, research and design practice, one that opens new design possibilities and affirms our commitment to equal access for people with disabilities.

“Wow! You’re Wearing a Fitbit, You’re a Young Boy Now!”: Socio-Technical Aspirations for Children with Autism in India

  • Sumita Sharma
  • Krishnaveni Achary
  • Harmeet Kaur
  • Juhani Linna
  • Markku Turunen
  • Blessin Varkey
  • Jaakko Hakulinen
  • Sanidhya Daeeyya

In this paper, we build a case for incorporating the socio-technical aspirations of different stakeholders, e.g., parents, caregivers, and therapists, to motivate technology acceptance and adoption for children with autism. We base this on findings from two studies at a special school in New Delhi. First, with six children with autism, their parents, and therapists, we explored whether fitness bands motivate children with autism in India to increase their physical activity. Second, we conducted interviews with five parents and specialists at the same school to understand their expectations from, and current usage of, technology. Previous work defines a culture-based framework for assistive technology design with three dimensions: lifestyle, socio-technical infrastructure, and monetary and informational resources. To this framework we propose adding a fourth dimension of socio-technical aspirations, and we discuss its implications for the existing framework.

Understanding the Power of Control in Autonomous Vehicles for People with Vision Impairment

  • Robin N. Brewer
  • Vaishnav Kameswaran

Autonomy and control are important themes in design for people with disabilities. With the rise in research on autonomous vehicle design, we investigate perceived differences in control for people with vision impairments in the use of semi- and fully autonomous vehicles. We conducted focus groups with 15 people with vision impairments. Each focus group included a design component asking participants to design voice-based and tactile solutions to problems identified by the group. We contribute a new perspective of independence in the context of control. We discuss the importance of driving for blind and low vision people, describe differences in perceptions of autonomous vehicles based on level of autonomy, and examine the use of assistive technology in vehicle operation and information gathering. Our findings guide the design of accessible autonomous transportation systems and of existing navigation and orientation systems for people with vision impairments.

From Behavioral and Communication Intervention to Interaction Design: User Perspectives from Clinicians

  • Yao Du
  • LouAnne Boyd
  • Seray Ibrahim

To improve functional communication and behavioral management, many children with disabilities receive behavioral and communication-related intervention from professionals such as behavioral analysts and speech and language therapists. This paper presents user perspectives from three clinicians who have used and/or designed assistive technology with children with disabilities, and calls for researchers to recognize and leverage clinicians’ knowledge to design accessible technology for children with complex sensory and communication needs.

SESSION: Session 5: Data & Privacy

Session details: Session 5: Data & Privacy

  • Kristen Shinohara

Who Should Have Access to My Pointing Data?: Privacy Tradeoffs of Adaptive Assistive Technologies

  • Foad Hamidi
  • Kellie Poneres
  • Aaron Massey
  • Amy Hurst

Customizing assistive technologies based on user needs, abilities, and preferences is necessary for accessibility, especially for individuals whose abilities vary due to a diagnosis, medication, or other external factors. Adaptive Assistive Technologies (AATs) that can automatically monitor a user’s current abilities and adapt functionality and appearance accordingly offer exciting solutions. However, there is an often-overlooked tradeoff between usability and user privacy when designing such systems. We present a general privacy threat model analysis of AATs and contextualize it with findings from an interview study with older adults who experience pointing problems. We found that participants had positive attitudes toward assistive technologies that gather their personal data, but also had strong preferences for how their data should be used and who should have access to it. We identify a need to seriously consider privacy threats when designing assistive technologies so as to avoid exposing users to them.

Understanding Authentication Method Use on Mobile Devices by People with Vision Impairment

  • Daniella Briotto Faustino
  • Audrey Girouard

Passwords help people avoid unauthorized access to their personal devices but are not without challenges, such as memorability and shoulder surfing attacks. Little is known about how people with vision impairment assure their digital security in mobile contexts. We conducted an online survey to understand their strategies for remembering passwords, their perceptions of authentication methods, and their self-assessed ability to keep their digital information safe. We collected answers from 325 people who are blind or have low vision from 12 countries and found that most use familiar names and numbers to create memorable passwords, and that the majority consider fingerprint the most secure and accessible user authentication method and PINs the least secure. This paper presents our survey results and provides insights for designing better authentication methods for people with vision impairment.

Volunteer-Based Online Studies With Older Adults and People with Disabilities

  • Qisheng Li
  • Krzysztof Z. Gajos
  • Katharina Reinecke

There are few large-scale empirical studies with people with disabilities or older adults, mainly because recruiting participants with specific characteristics is even harder than recruiting young and/or non-disabled populations. Analyzing four online experiments on LabintheWild with a total of 355,656 participants, we show that volunteer-based online experiments that provide personalized feedback attract large numbers of participants with diverse disabilities and ages and allow robust studies with these populations that replicate and extend the findings of prior laboratory studies. To find out what motivates people with disabilities to take part, we additionally analyzed participants’ feedback and forum entries that discuss LabintheWild experiments. The results show that participants use the studies to diagnose themselves, compare their abilities to others, quantify potential impairments, self-experiment, and share their own stories, findings that we use to inform design guidelines for online experiment platforms that adequately support and engage people with disabilities.

Exploring the Data Tracking and Sharing Preferences of Wheelchair Athletes

  • Patrick Carrington
  • Gierad Laput
  • Jeffrey P. Bigham

Sports are increasingly data-driven. Athletes use a variety of physical activity monitors to capture their movements, improve performance, and achieve excellence. To understand how wheelchair athletes want to use and share their activity data, we conducted a study using a prototype wheelchair fitness tracking device, which served as a probe to facilitate discussions. We interviewed 15 wheelchair basketball players about the use of performance data in the context of wheelchair basketball, and we discuss several implications for using and sharing automatically-tracked data. We find that the wheelchair basketball community is less concerned about the privacy of their data, and, in contrast to health data, athletes are motivated by competition. We conclude with a set of design opportunities that leverage digitized performance metrics within wheelchair basketball, which could apply to the broader wheelchair and adaptive athletics community.

SESSION: Session 6: Advancing Communication

Session details: Session 6: Advancing Communication

  • Abi Roper

“Siri Talks at You”: An Empirical Investigation of Voice-Activated Personal Assistant (VAPA) Usage by Individuals Who Are Blind

  • Ali Abdolrahmani
  • Ravi Kuber
  • Stacy M. Branham

Voice-activated personal assistants (VAPAs), like Amazon Echo or Apple Siri, offer considerable promise to individuals who are blind due to the widespread adoption of these non-visual interaction platforms. However, studies have yet to focus on the ways in which these technologies are used by individuals who are blind, along with whether barriers are encountered during the process of interaction. To address this gap, we interviewed fourteen legally-blind adults with experience of home and/or mobile-based VAPAs. While participants appreciated the access VAPAs provided to inaccessible applications and services, they faced challenges relating to input, responses from VAPAs, and control of the information presented. User behavior varied depending on the situation or context of the interaction. Implications for design are suggested to support inclusivity when interacting with VAPAs. These include accounting for privacy and situational factors in design, examining ways to address concerns over trust, and synchronizing the presentation of visual and non-visual cues.

Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

  • Sedeeq Al-khazraji
  • Larwan Berke
  • Sushant Kafle
  • Peter Yeung
  • Matt Huenerfauth

To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of animations of American Sign Language (ASL), based on a human-authored message specification. We automatically select: where prosodic pauses should be inserted (based on the syntax or other features), the time-duration of these pauses, and the variations of the speed at which individual words are performed (e.g., slower at the end of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which were evaluated in a cross-validation study. The best model outperformed a prior state-of-the-art ASL timing model. In a study with native ASL signers evaluating animations generated from either our new model or from a simple baseline (uniform speed and no pauses), participants indicated a preference for speed and pausing in ASL animations from our model.
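
The abstract does not specify the features or model family, so purely to illustrate the shape of such a timing model, here is a sketch with invented features and toy data (every feature, value, and the choice of a random forest are assumptions for illustration):

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-boundary features, loosely inspired by the description above:
# [at sentence boundary?, at clause boundary?, signs since last pause]
X_train = [[1, 1, 9], [0, 1, 5], [0, 0, 3], [1, 1, 12], [0, 0, 2]]
y_train = [0.40, 0.20, 0.00, 0.55, 0.00]   # pause duration in seconds (toy values)

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Predicted pause length at a hypothetical sentence boundary:
print(model.predict([[1, 1, 10]]))
```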

Why Is Gesture Typing Promising for Older Adults?: Comparing Gesture and Tap Typing Behavior of Older with Young Adults

  • Yu-Hao Lin
  • Suwen Zhu
  • Yu-Jung Ko
  • Wenzhe Cui
  • Xiaojun Bi

Gesture typing has been a widely adopted text entry method on touchscreen devices. We conducted a study to understand whether older adults could gesture type, how they type, what the strengths and weaknesses of gesture typing are, and how to further improve it. By logging stroke-level interaction data and leveraging existing modeling tools, we compared the gesture and tap typing behavior of older adults with that of young adults. Our major finding is promising and encouraging: gesture typing outperformed typical tap typing for older adults and was very easy for them to learn. The gesture typing input speed was 15.28% higher than that of tap typing for 14 older adults who had no prior gesture typing experience. One of the main reasons was that older adults adopted a word-level inputting strategy in gesture typing, while they often used a letter-level correction strategy in tap typing. Compared with young adults, older adults exhibited little degradation in gesture accuracy. Our study also led to implications for how to further improve gesture typing for older adults.

Designing an Animated Character System for American Sign Language

  • Danielle Bragg
  • Raja Kushalnagar
  • Richard Ladner

Sign languages lack a standard written form, preventing millions of Deaf people from accessing text in their primary language. A major barrier to adoption is difficulty learning a system which represents complex 3D movements with stationary symbols. In this work, we leverage the animation capabilities of modern screens to create the first animated character system prototype for sign language, producing text that combines iconic symbols and movement. Using animation to represent sign movements can increase resemblance to the live language, making the character system easier to learn. We explore this idea through the lens of American Sign Language (ASL), presenting 1) a pilot study underscoring the potential value of an animated ASL character system, 2) a structured approach for designing animations for an existing ASL character system, and 3) a design probe workshop with ASL users eliciting guidelines for the animated character system design.

SESSION: Session 7: Enhancing Navigation

Session details: Session 7: Enhancing Navigation

  • Anke Brock

Exploring Aural and Haptic Feedback for Visually Impaired People on a Track: A Wizard of Oz Study

  • Kyle Rector
  • Rachel Bartlett
  • Sean Mullan

Access to a variety of exercises is important for maintaining a healthy lifestyle. This variety includes physical activity in public spaces. A 400-meter jogging track is not accessible because it provides solely visual cues for people to remain in their lane. As a first step toward making exercise spaces accessible, we conducted an ecologically valid Wizard of Oz study to compare the accuracy and user experience of human guide, verbal, wrist vibration, and head beat feedback while people walked around the track. The technology conditions did not affect accuracy, but the order of preference was human guide, verbal, wrist vibration, and head beat. Participants had a difficult time perceiving vibrations when holding their cane or guide dog, and lower frequency sounds made it difficult to focus on their existing navigation strategies.

“It Looks Beautiful but Scary”: How Low Vision People Navigate Stairs and Other Surface Level Changes

  • Yuhang Zhao
  • Elizabeth Kupferstein
  • Doron Tal
  • Shiri Azenkot

Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and through two city blocks. The tasks involved walking indoors and outdoors, across four staircases and two city blocks. We found that surface level changes were a source of uncertainty and even fear for all participants. Aside from the white cane, which many participants did not want to use, participants did not use technology in the study. Participants mostly used their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface level changes and other depth-perception tasks; such systems should consider low vision people’s experiences as distinct from those of blind people, account for their sensitivity to different lighting conditions, and leverage visual enhancements.

MANA: Designing and Validating a User-Centered Mobility Analysis System

  • Boyd Anderson
  • Shenggao Zhu
  • Ke Yang
  • Jian Wang
  • Hugh Anderson
  • Chao Xu Tay
  • Vincent Y. F. Tan
  • Ye Wang

In this paper, we demonstrate a new IMU-based wearable system (dubbed MANA, for Mobility ANAlytics) for measuring gait in a clinical setting. We describe the design process and the choices made to ensure that the technology was invisible and accessible. We collected a rich and diverse dataset of walking data from sixty participants, including forty people with Parkinson’s Disease (PD), and validated the system in a clinical setting with this dataset. We present novel algorithms to measure common gait parameters. The system estimates these gait parameters with high accuracy, achieving a mean absolute error of 4.0 cm for stride length and 2.6 cm for step length, outperforming all state-of-the-art methods evaluated on data that included people with PD.
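
For clarity on the reported accuracy figure: mean absolute error (MAE) is the average unsigned difference between the system's estimates and a ground-truth reference. A minimal sketch with made-up numbers:

```python
import numpy as np

def mean_absolute_error(estimates_cm, ground_truth_cm) -> float:
    """MAE as reported above, e.g., 4.0 cm for stride length."""
    est = np.asarray(estimates_cm, dtype=float)
    ref = np.asarray(ground_truth_cm, dtype=float)
    return float(np.mean(np.abs(est - ref)))

# Toy stride-length estimates vs. a motion-capture reference (cm):
print(mean_absolute_error([120.5, 118.0, 131.2], [124.0, 115.5, 130.0]))  # 2.4
```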

Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments

  • Dragan Ahmetovic
  • Uran Oh
  • Sergio Mascetti
  • Chieko Asakawa

Navigation assistive technologies aim to improve the mobility of blind or visually impaired people. In particular, turn-by-turn navigation assistants provide sequential instructions to enable autonomous guidance towards a destination. A problem frequently addressed in the literature is to obtain accurate position and orientation of the user during such guidance. An orthogonal challenge, often overlooked in the literature, is how precisely navigation instructions are followed by users. In particular, imprecisions in following rotation instructions lead to rotation errors that can significantly affect navigation. Indeed, a relatively small error during a turn is amplified by the following frontal movement and can lead the user towards incorrect or dangerous paths. In this contribution, we study rotation errors and their effect on turn-by-turn guidance for individuals with visual impairments. We analyze a dataset of indoor trajectories of 11 blind participants guided along three routes through a multi-story shopping mall using NavCog, a turn-by-turn smartphone navigation assistant. We find that participants extend rotations by 17° on average. The error is not proportional to the expected rotation; instead, it is accentuated for “slight turns” (22.5°-60°), while “ample turns” (60°-120°) are consistently approximated to 90°. We generalize our findings as design considerations for engineering navigation assistance in real-world scenarios.
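
To make the error measure and turn categories concrete, here is a small sketch (hypothetical code, not NavCog's; only the category bounds come from the abstract):

```python
import math

def rotation_error(expected_deg: float, performed_deg: float) -> float:
    """Positive values mean the user over-rotated (extended the turn).
    After walking d meters, the lateral drift is roughly d * sin(error)."""
    return performed_deg - expected_deg

def turn_category(expected_deg: float) -> str:
    """Turn categories from the analysis above."""
    magnitude = abs(expected_deg)
    if 22.5 <= magnitude < 60:
        return "slight turn"
    if 60 <= magnitude <= 120:
        return "ample turn"
    return "other"

# A 45-degree instruction performed as 65 degrees: a "slight turn"
# over-rotated by 20 degrees, drifting ~3.4 m sideways over 10 m of walking.
err = rotation_error(45, 65)
print(turn_category(45), err, 10 * math.sin(math.radians(err)))
```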

SESSION: Poster Session 1

A Feasibility Study of Using Google Street View and Computer Vision to Track the Evolution of Urban Accessibility

  • Ladan Najafizadeh
  • Jon E. Froehlich

Previous work has explored scalable methods to collect data on the accessibility of the built environment by combining manual labeling, computer vision, and online map imagery. In this poster paper, we explore how to extend these methods to track the evolution of urban accessibility over time. Using Google Street View’s “time machine” feature, we introduce a three-stage classification framework: (i) manually labeling accessibility problems in one time period; (ii) classifying the labeled image patch into one of five accessibility categories; (iii) localizing the patch in all previous snapshots. Our preliminary results analyzing 1633 Street View images across 376 locations demonstrate feasibility.

An Accessible CAD Workflow Using Programming of 3D Models and Preview Rendering in A 2.5D Shape Display

  • Alexa F. Siu
  • Joshua Miele
  • Sean Follmer

Affordable rapid 3D printing technologies have become a key enabler in the maker movement by giving individuals the ability to create physical finished products. However, existing computer-aided design (CAD) tools that allow authoring and editing of 3D models are mostly visually reliant and limit access for people who are blind or visually impaired. We propose an accessible CAD workflow where 3D models are generated through OpenSCAD, a script-based 3D modeling tool, and rendered at interactive speeds in an actuated 2.5D shape display. We report preliminary findings on a case study with one blind user. Based on our observations, we frame design imperatives on interactions that might be important in future accessible CAD systems with tactile output.

An Interactive Multimodal Guide to Improve Art Accessibility for Blind People

  • Luis Cavazos Quero
  • Jorge Iranzo Bartolomé
  • Seonggu Lee
  • En Han
  • Sunhee Kim
  • Jundong Cho

The development of 3D printing technology has improved the engagement of visually impaired people when experiencing two-dimensional visual artworks. However, it is still difficult for them to explore, experience, and gain a clear understanding of such artworks. We introduce an interactive multimodal guide in which a 3D-printed 2.5D representation of a painting can be explored by touch. Touching designated features of the representation triggers localized verbal, audio, wind, and light/heat feedback events that convey spatial and semantic information. In this work we present a working prototype developed through three sessions using a participatory design approach.

Applying Transfer Learning to Recognize Clothing Patterns Using a Finger-Mounted Camera

  • Lee Stearns
  • Leah Findlater
  • Jon E. Froehlich

Color identification tools do not identify visual patterns or allow users to quickly inspect multiple locations, which are both important for identifying clothing. We are exploring the use of a finger-based camera that allows users to query clothing colors and patterns by touch. Previously, we demonstrated the feasibility of this approach using a small, highly-controlled dataset and combining two image classification techniques commonly used for object recognition. Here, to improve scalability and robustness, we collect a dataset of fabric images from online sources and apply transfer learning to train an end-to-end deep neural network to recognize visual patterns. This new approach achieves 92% accuracy in a general case and 97% when tuned for images from a finger-mounted camera.
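
The transfer-learning recipe described above, starting from a network pretrained on a large generic dataset and retraining only the final layer for the new pattern classes, might look like the following sketch (assuming PyTorch with a recent torchvision; the backbone choice and class count are illustrative, not the paper's):

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and retrain only the head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False              # freeze the pretrained features

# Replace the classifier head; its fresh parameters remain trainable,
# so only this layer learns the new fabric-pattern classes.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # e.g., 10 pattern classes
```

Freezing the backbone keeps training cheap and works well when, as here, the new dataset (fabric images) is far smaller than the pretraining corpus.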

Axessibility: a LaTeX Package for Mathematical Formulae Accessibility in PDF Documents

  • Dragan Ahmetovic
  • Tiziana Armano
  • Cristian Bernareggi
  • Michele Berra
  • Anna Capietto
  • Sandro Coriasco
  • Nadir Murru
  • Alice Ruighi
  • Eugenia Taranto

Accessing mathematical formulae within digital documents is challenging for blind people. In particular, document formats designed for printing, such as PDF, structure math content for visual access only. While accessibility features exist to present PDF content non-visually, formulae support is limited to providing replacement text that can be read by a screen reader or displayed on a braille bar. However, the task of inserting replacement text is left to document authors, who rarely provide such content. Furthermore, at best, descriptions of the formulae are provided; thus, conveying a detailed understanding of complex formulae is nearly impossible. In this contribution we report our ongoing research on Axessibility, a LaTeX package framework that automates the process of making mathematical formulae accessible by providing the formulae’s LaTeX code as PDF replacement text. Axessibility is coupled with external scripts that automate its integration into existing documents and expand user shorthand macros to standard LaTeX representation, and with custom screen reader dictionaries that improve formulae reading on screen readers.
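
A minimal sketch of the intended authoring experience: the package is loaded in the preamble and formulae are written as usual (the invocation below is assumed from the package name; options and supported environments should be checked against the package documentation):

```latex
\documentclass{article}
\usepackage{axessibility} % assumed basic invocation; see package docs for options
\begin{document}
Given $ax^2+bx+c=0$, the solutions are
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
% The package attaches the formula's LaTeX source to the PDF as
% replacement text, so a screen reader can read the formula aloud.
\end{document}
```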

BrightLights: Gamifying Data Capture for Situational Visual Impairments

  • Kerr Macpherson
  • Garreth W. Tigwell
  • Rachel Menzies
  • David R. Flatla

With the growing popularity of mobile devices, Situational Visual Impairments (SVIs) can cause accessibility challenges. When addressing SVIs, interface and content designers lack guidelines based on empirically-determined SVI contrast sensitivities. To address this, we developed BrightLights, a game that collects screen-content-contrast data in the wild and will enable new SVI-pertinent contrast ratio recommendations. In our evaluation with 15 participants, we found significantly worse performance with low screen brightness versus medium or high screen brightness, showing that BrightLights is sensitive to at least one factor that contributes to SVIs (screen brightness). Once validated for in-the-wild deployment, BrightLights data will finally help designers address SVIs through their designs.
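
For reference, the contrast ratio that such recommendations build on is the WCAG 2.0 definition based on relative luminance; a sketch of the standard computation (illustrative, not the BrightLights implementation):

```python
def relative_luminance(rgb) -> float:
    """WCAG 2.0 relative luminance from 8-bit sRGB channel values."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio, from 1:1 (no contrast) to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))  # 21.0
```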

Development of an ICT-delivered Control Programme for Use in Aphasia Crossover Intervention Study

  • Áine Kearns
  • Ian Pitt
  • Helen Kelly
  • Deirdre O’Byrne

Aphasia refers to an acquired loss or impairment of the language system that can occur post stroke. Information and Communication Technologies (ICT) can provide an option for the delivery of intensive aphasia rehabilitation, but further research is required to support this. A crossover research design can provide a robust methodology for investigating the effectiveness of an ICT-delivered aphasia rehabilitation programme. However, a control programme used in a crossover design must be carefully considered: it should be distinct from the intervention but not easily distinguished as a “sham” programme. This can pose challenges for researchers. The design, development, and pilot of a control programme for a crossover aphasia rehabilitation research design are presented here.

Evaluation of a Sign Language Support System for Viewing Sports Programs

  • Tsubasa Uchida
  • Hideki Sumiyoshi
  • Taro Miyazaki
  • Makiko Azuma
  • Shuichi Umeda
  • Naoto Kato
  • Yuko Yamanouchi
  • Nobuyuki Hiruma

To provide information support to deaf and hard-of-hearing people viewing sports programs, we have developed a sign language support system. The system automatically generates Japanese Sign Language (JSL) computer graphics (CG) animation and subtitles from prepared templates of JSL phrases corresponding to fixed-format game data. To verify the system’s performance, we carried out demonstration experiments on generating and displaying content using real-time match data from actual games. From the experimental results we concluded that the automatically generated JSL CG is practical enough for understanding the information. We also found that, among several display methods, the one providing game video and JSL CG on a single tablet screen was most preferred in this small-scale experiment.

Feasibility of Automatically Assigning Severity Scores to Web Accessibility Problems

  • Shari Trewin
  • Shunguo Yan
  • Diane Margaretos
  • Michael Gower
  • Phill Jenkins
  • Charu Pandhi

We discuss the potential for automatically prioritizing accessibility issues found by automated test tools in an application, based on ‘accessibility impact’ to the end user. We examine three influential factors in assessing the severity of an issue: criticality of the affected component, impact of the type of issue, and mitigating effect of the application context. We describe how these factors could be quantified and combined to yield severity measures consistent with those of a human expert, illustrating with a set of accessibility issues found in a real online web task.
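
One way such a combination could look, an illustrative sketch only (the paper describes the three factors, but the multiplicative weighting, scale, and example values below are assumptions):

```python
def severity(criticality: float, issue_impact: float, mitigation: float) -> float:
    """Combine the three factors above into a 0-10 severity score.

    criticality and issue_impact are in [0, 1]; mitigation is in [0, 1],
    where 1 means the application context fully mitigates the issue."""
    return 10.0 * criticality * issue_impact * (1.0 - mitigation)

# e.g., a high-impact issue (missing label) on a highly critical component
# (primary navigation button) with no mitigating context:
print(severity(criticality=0.9, issue_impact=0.8, mitigation=0.0))  # 7.2
```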

Involving People with Cognitive and Communication Impairments in Mobile Health App Design

  • John Arnott
  • Matthew Malone
  • Gareth Lloyd
  • Bernadette Brophy-Arnott
  • Susan Munro
  • Robyn McNaughton

Multiple challenges face people with cognitive and communication impairments when they are asked to be involved in the design of technology that is appropriate for them. This population is under-represented in healthcare research and has health inequalities relative to most people. The work discussed here concerns how to adapt research processes to suit people with these difficulties when developing smartphone apps that give access to health promotion information, an area in which health inequalities arise. Strategies are identified to assist participants in understanding the proposed area of work, giving consent to participation, and being involved in activities such as evaluation. A combination of adaptations is proposed to engage people who would otherwise be excluded. It is clear that strategies used to make research participation accessible can assist people with cognitive and communication impairments to influence and inform the development of technology for their use.

Mixed-Ability Collaboration for Accessible Photo Sharing

  • Reeti Mathur
  • Erin Brady

We conducted two online surveys about current and potential collaborative photo sharing processes among blind and sighted people. We describe existing challenges that blind and visually impaired people encounter when trying to write alternative text for their own photographs and examine how their online sighted friends and family members might be able to contribute assistance as they make their content more accessible to other people with visual impairments.

Redesigning and Deploying the Universal Sound Detector: Notifying Deaf and Hard-of-hearing Users of Audio Signals

  • Joseph S. Stanislow
  • Gary W. Behm

This poster presents design updates and deployment results from the Universal Sound Detector project, or USD. The USD is a redesign of the Programmable Sound Detector [3], a device created to notify deaf and hard-of-hearing (DHH) people of auditory signals from technologies in their environment. Unlike other hardware and software solutions that respond indiscriminately to any sound, the USD can be customized to only recognize a specific sound. The poster describes the USD’s function and implementation, reports on four deployments in different environments, and compares past and present versions of the device’s design before discussing limitations and future work.

Simulation of Motor Impairment in Head-Controlled Pointer Fitts’ Law Task

  • Syed Asad Rizvi
  • Ella Tuson
  • Breanna Desrochers
  • John Magee

Participants with motor impairments may not always be available for research or software development testing. To address this, we propose simulating users with motor impairments interacting with a head-controlled mouse pointer system. Simulation can be used as a stand-in for research participants in preliminary experiments and can serve to raise awareness of ability-based interactions among a wider software development population. We evaluated our prototype system using a Fitts’ Law experiment and report on the measured communication rate of our system compared to that of users without motor impairments and of a previously reported participant with motor impairments.
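
For readers unfamiliar with the metric: Fitts' Law relates movement time to an index of difficulty, and throughput (in bits/s) is the usual communication-rate measure derived from it. A sketch using the standard Shannon formulation (the example numbers are hypothetical):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Communication rate in bits per second for one pointing trial."""
    return index_of_difficulty(distance, width) / movement_time_s

# e.g., a 512-px movement to a 32-px target completed in 1.7 s:
print(throughput(512, 32, 1.7))  # ~2.4 bits/s
```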

Sonification of Pathways for People with Visual Impairments

  • Dragan Ahmetovic
  • Federico Avanzini
  • Adriano Baratè
  • Cristian Bernareggi
  • Gabriele Galimberti
  • Luca A. Ludovico
  • Sergio Mascetti
  • Giorgio Presti

Indoor navigation is an important service, currently investigated both in industry and academia. While the main focus of research is the computation of users’ position, the additional challenge of conveying guidance instructions arises when the target user is blind or visually impaired (BVI). This contribution presents our ongoing research aimed at adopting sonification techniques to guide a BVI person. In particular, we introduce three sonification techniques to guide the user during rotations. Preliminary results, obtained with 7 BVI people, show that some of the proposed sonification techniques outperform a benchmark solution adopted in previous contributions.

The Present and Future of Museum Accessibility for People with Visual Impairments

  • Saki Asakawa
  • João Guerreiro
  • Dragan Ahmetovic
  • Kris M. Kitani
  • Chieko Asakawa

People with visual impairments (PVI) have shown interest in visiting museums and enjoying visual art. Based on this knowledge, some museums provide tactile reproductions of artworks, specialized tours for PVI, or enable them to schedule accessible visits. However, the ability of PVI to visit museums still depends on the assistance they get from their family and friends or from museum personnel. In this paper, we surveyed 19 PVI to understand their opinions and expectations about visiting museums independently, as well as the requirements of user interfaces to support such visits. Moreover, we add to existing knowledge about the previous experiences, motivations, and accessibility issues of PVI in museums.

Toward a Technology-based Tool to Support Idea Generation during Participatory Design with Children with Autism Spectrum Disorders

  • Aurora Constantin
  • Juan Pablo Hourcade

Our research explores the development of a novel technology-based prototype to support children and designers during brainstorming, one of the most challenging activities within Participatory Design (PD). This paper describes a proof-of-concept prototype for a tool that aims to empower children with Autism Spectrum Disorders (ASD) during PD to maximise their contributions to the design and their own benefits. Preliminary results revealed that the prototype has the potential for reducing anxiety in children with ASD, and supports unlocking their creativity.

Using Icons to Communicate Privacy Characteristics of Adaptive Assistive Technologies

  • Kellie Poneres
  • Foad Hamidi
  • Aaron Massey
  • Amy Hurst

Adaptive assistive technologies can support the accessibility needs of people with changing abilities by monitoring and adapting to their performance over time. Despite their benefits, these systems can pose privacy threats to the users whose data is collected. This issue is amplified by ambiguity about how user performance data, which might reveal sensitive health information, is used by these applications, and whether, like medical data, it is protected from unauthorized sharing with third parties. In interviews with older adults who experience pointing difficulties, we found that participants felt a lack of agency over their collected pointing data and desired clear communication mechanisms to keep them informed about the privacy characteristics of adaptive assistive systems. Based on this input, we present an icon set that can be used in online application stores or with the licensing agreements of adaptive systems to visually communicate privacy characteristics to users.

SESSION: Poster Session 2

Automated Person Detection in Dynamic Scenes to Assist People with Vision Impairments: An Initial Investigation

  • Lee Stearns
  • Anja Thieme

We propose a computer vision system that can automatically detect people in dynamic real-world scenes, enabling people with vision impairments to have more awareness of, and interactions with, other people in their surroundings. As an initial step, we investigate the feasibility of four camera systems that vary in their placement, field-of-view, and image distortion for: (i) capturing people generally; and (ii) detecting people via a specific person-pose estimator. Based on our findings, we discuss future opportunities and challenges for detecting people in dynamic scenes, and for communicating that information to visually impaired users.

Co-design of a Feedback Questionnaire for ICT-delivered Aphasia Rehabilitation

  • Aine Kearns
  • Helen Kelly
  • Ian Pitt

Aphasia is an acquired loss or impairment of the language system that can occur after stroke. Information and Communication Technologies (ICT) can provide an option for the delivery of intensive aphasia rehabilitation, but the views of users (i.e., people with aphasia) must be considered. There is no consensus measure of self-reported feedback in ICT-delivered aphasia rehabilitation, and existing ICT usability questionnaires do not present questions in a format accessible to people with aphasia. This research employed a co-design process in which a group of adults with aphasia and the researchers collaborated in design workshops. The final product is an online feedback questionnaire that is accessible to people with aphasia. It provides relevant and meaningful self-reported feedback on participant engagement in ICT-delivered aphasia rehabilitation. This feedback is important when planning and monitoring aphasia rehabilitation.

Design and Testing of Sensors for Text Entry and Mouse Control for Individuals with Neuromuscular Diseases

  • Anna M. H. Abrams
  • Carl Fridolin Weber
  • Philipp Beckerle

For individuals with a motor disorder of neuromuscular origin, computer usage can be challenging. Due to their particular medical conditions, alternative input methods such as speech or eye tracking may not be an option. Here, piezo sensors, inertial measurement units, and force-sensitive resistors are used to develop input devices that can substitute for mouse and keyboard. The devices are tested in a case study with one potential user with ataxia. Future user studies will deliver additional insights into users' specific needs and further improve the designs.

Developing Digital Library Design Guidelines to Support Blind Users

  • Iris Xie
  • Rakesh Babu
  • Melissa D. Castillo
  • Tae Hee Lee
  • Sukjin Youi

This study investigates the types of help-seeking situations encountered by 32 blind users when interacting with five digital libraries (DLs). Multiple methods were applied to collect data: pre-questionnaires, think-aloud protocols, transaction logs, and post-questionnaires. The paper identifies 43 types of situations under three categories of physical situations and five categories of cognitive situations. Most importantly, DL design guidelines are created to help blind users overcome these situations.

Due Process and Primary Jurisdiction Doctrine: A Threat to Accessibility Research and Practice?

  • Jonathan Lazar

The Web Content Accessibility Guidelines (WCAG) are the most well-documented, well-accepted set of interface guidelines on the planet, based on empirical research and a participatory process of stakeholder input. A recent case in a U.S. Federal District Court, Robles v. Domino's Pizza LLC, involved a blind individual requesting that Domino's Pizza make their web site and mobile app accessible for people with disabilities, utilizing the WCAG. The court ruled that, due to the legal concepts of due process and primary jurisdiction doctrine, the plaintiff lost the case simply for asking for the WCAG. This ruling minimizes the importance of evidence-based accessibility research and guidelines. This poster provides background on the case, describes a preliminary analysis of related cases, and discusses implications for accessibility researchers.

Facilitating Pretend Play in Autistic Children: Results from an Augmented Reality App Evaluation

  • Mihaela Dragomir
  • Andrew Manches
  • Sue Fletcher-Watson
  • Helen Pain

Autistic children find pretend play difficult. Previous work suggests Augmented Reality (AR) has potential in eliciting pretend play in children with autism. This paper presents the evaluation of an Augmented Reality app to help autistic children engage in solitary pretend play. We followed a user-centred design process, involving various techniques and stakeholders. Results from a pre-post study design suggest the AR system is promising in facilitating quantitative aspects of pretend play in autistic children.

Investigating Mobile Accessibility Guidance for People with Aphasia

  • Brian Grellmann
  • Timothy Neate
  • Abi Roper
  • Stephanie Wilson
  • Jane Marshall

The World Wide Web Consortium’s (W3C) Web Content Accessibility Guidelines (WCAG 2.0) have become widely accepted as the standard for web accessibility evaluation. This poster investigates how the mobile version of these guidelines caters for people with aphasia (PWA) by comparing the results of user testing against those of an audit using the guidelines. We outline the efficacy of the guidelines in the broader context of how they cater for various impairments and offer some recommendations for designing for people with aphasia.

Leveraging Pauses to Improve Video Captions

  • Michael Gower
  • Brent Shiver
  • Charu Pandhi
  • Shari Trewin

Currently, video sites that offer automatic speech recognition display the auto-generated captions as arbitrarily segmented lines of unpunctuated text. This method of displaying captions can be detrimental to meaning, especially for deaf users who rely almost exclusively on captions. However, the captions can be made more readable by automatically detecting pauses in speech and using the pause duration as a determinant both for inserting simple punctuation and for more meaningfully segmenting and timing the display of lines of captions. A small sampling of users suggests that such adaptations to caption display are preferred by a majority of users, whether they are deaf, hard of hearing or hearing.
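
As a concrete illustration of the pause-based heuristic the abstract describes, here is a minimal Python sketch: given per-word timestamps from a speech recognizer, long pauses end a caption line and insert sentence punctuation, shorter pauses insert commas. The two thresholds are illustrative assumptions, not values from the paper.

    SHORT_PAUSE = 0.35  # seconds -> insert a comma
    LONG_PAUSE = 0.80   # seconds -> end the sentence and the caption line

    def segment_captions(words):
        """words: list of (text, start_time, end_time) tuples."""
        lines, current = [], []
        for i, (text, start, end) in enumerate(words):
            current.append(text)
            gap = words[i + 1][1] - end if i + 1 < len(words) else LONG_PAUSE
            if gap >= LONG_PAUSE:
                lines.append(" ".join(current) + ".")
                current = []
            elif gap >= SHORT_PAUSE:
                current[-1] += ","
        return lines

    print(segment_captions([("hello", 0.0, 0.4), ("everyone", 0.5, 1.0),
                            ("welcome", 2.0, 2.5), ("back", 2.6, 3.0)]))
    # -> ['hello everyone.', 'welcome back.']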

Look, a guidance drone! Assessing the Social Acceptability of Companion Drones for Blind Travelers in Public Spaces

  • Mauro Avila Soto
  • Markus Funk

Using assistive technology always comes with the challenge of social acceptability. While an accessibility device might greatly benefit a person with disabilities, it might also come with unpleasant social implications. In this paper, we assess the social implications of flying companion quadcopters for guiding persons with visual impairments. We conducted an acceptability study with 15 sighted and 5 visually impaired participants and report on the results.

Movement Characteristics and Effects of GUI Design on How Older Adults Swipe in Mid-Air

  • Arthur Theil Cabreira
  • Faustina Hwang

We conducted a study with 25 older adults that aimed to investigate how older users interact with swipe-based interactions in mid-air and how menu sizes may affect swipe characteristics. Our findings suggest that currently-implemented motion-based interaction parameters may not be very well-aligned with the expectations and physical abilities of the older population. In addition, we find that GUI design can shape how older users produce a swipe gesture in mid-air, and that appropriate GUI design can lead to higher success rates for users with little familiarity with this novel input method.

Self-Identifying Tactile Overlays

  • Mauro Avila Soto
  • Alexandra Voit
  • Ahmed Shereen Hassan
  • Albrecht Schmidt
  • Tonja Machulla

Tactile overlays for touch-screen devices are an opportunity to display content for users with visual impairments. However, when users switch tactile overlays, the content displayed on the touch-screen device still corresponds to the previous overlay. Currently, users have to change the displayed content manually, which hinders fluid interaction. In this paper, we introduce self-identifying overlays: an automated method for touch-screen devices to identify the tactile overlay placed on the screen and to adapt the displayed content accordingly. We report on a pilot study with two participants with visual impairments, evaluating this approach with a functional content-exploration application based on an adapted textbook.
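
The abstract does not detail the identification mechanism. One plausible approach (an assumption for illustration, not the authors' method) is a small pattern of conductive markers on each overlay that registers as touch points, which the device matches against known overlay signatures, as in this Python sketch:

    KNOWN_OVERLAYS = {  # overlay name -> marker coordinates in screen pixels
        "textbook_page_1": {(20, 20), (20, 580), (300, 20)},
        "textbook_page_2": {(20, 20), (300, 580), (300, 20)},
    }

    def identify_overlay(touch_points, tolerance=10):
        """Return the overlay whose marker pattern matches the touches."""
        def matches(markers):
            return all(any(abs(mx - tx) <= tolerance and abs(my - ty) <= tolerance
                           for (tx, ty) in touch_points)
                       for (mx, my) in markers)
        for name, markers in KNOWN_OVERLAYS.items():
            if matches(markers):
                return name  # switch displayed content to this overlay
        return None

    print(identify_overlay({(22, 18), (19, 583), (298, 21)}))
    # -> 'textbook_page_1'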

Towards Including Workers with Cognitive Disabilities in the Factory of the Future

  • Johan Kildal
  • Iñaki Maurtua
  • Miguel Martin
  • Ibon Ipiña

People who have cognitive disabilities can achieve personal fulfilment and social integration when they access the job market. In the case of working on industrial assembly lines, they can perform at the highest standards when assembly sub-tasks have been adequately adapted. However, the arrival of new production paradigms for the factory of the future (e.g., Industry 4.0) presents new challenges and opportunities for workers with cognitive disabilities, who will be part of hybrid human-automation assembly cells. We propose that alternative task-sharing approaches with collaborative robots may be more appropriate in the presence of such disabilities, in contrast to practices proposed for non-disabled workers. We describe a representative scenario (assembly of electrical cabinets) around which we are developing our research on this topic.

SESSION: Demos

A Demo of Talkit++: Interacting with 3D Printed Models Using an iOS Device

  • Lei Shi
  • Zhuohao Zhang
  • Shiri Azenkot

Tactile models are important learning materials for visually impaired students. With the adoption of 3D printing technologies, visually impaired students and teachers will have more access to 3D printed tactile models. We designed Talkit++, an iOS application that plays audio and visual content as a user touches parts of a 3D print. With Talkit++, a visually impaired student can explore a printed model tactilely, and use finger gestures and speech commands to get more information about certain elements in the model. Talkit++ detects the model and finger gestures using computer vision algorithms, simple accessories like paper stickers and printable trackers, and the built-in RGB camera on an iOS device. Based on the model’s position and the user’s input, Talkit++ speaks textual information, plays audio recordings, and displays visual animations.
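
A simplified Python sketch of the interaction loop the abstract describes: map a detected fingertip position (from the vision pipeline) onto annotated regions of the tracked model and speak the matching description. The region names and coordinates below are illustrative assumptions.

    ANNOTATIONS = {
        # region name -> (x, y, width, height) in model-relative coordinates
        "left_ventricle": (40, 120, 60, 80),
        "aorta": (90, 10, 30, 60),
    }

    def region_under_finger(finger_xy):
        fx, fy = finger_xy
        for name, (x, y, w, h) in ANNOTATIONS.items():
            if x <= fx <= x + w and y <= fy <= y + h:
                return name
        return None

    def on_finger_moved(finger_xy, speak):
        region = region_under_finger(finger_xy)
        if region is not None:
            speak(f"You are touching the {region.replace('_', ' ')}.")

    on_finger_moved((60, 150), speak=print)  # stand-in for text-to-speech output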

Assistive Debugging to Support Accessible Latex Based Document Authoring

  • Ahtsham Manzoor
  • Murayyiam Parvez
  • Suleman Shahid
  • Asim Karim

This software usability study evaluates our LaTeX-based extension, created to assist blind researchers and writers in authoring both continuous and non-continuous text [2]. Our extension includes features such as speech-based error prompts and navigation to the error location, which are expected to improve the LaTeX code debugging experience and increase writing productivity. In our tests, a majority of both novice and expert LaTeX users preferred MS Word for writing continuous text, while the LaTeX experts preferred our extension for writing mathematical content.
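
To illustrate the kind of error localization such a tool needs (a minimal sketch, not the authors' implementation): LaTeX error messages in the compile log start with "!" and the offending source line appears as "l.<number>", which a screen reader or TTS engine could then announce.

    import re

    def find_latex_errors(log_text):
        errors = []
        message = None
        for line in log_text.splitlines():
            if line.startswith("!"):
                message = line[1:].strip()
            match = re.match(r"l\.(\d+)", line)
            if message and match:
                errors.append((int(match.group(1)), message))
                message = None
        return errors

    log = "! Undefined control sequence.\nl.42 \\fraction{1}{2}\n"
    for line_no, msg in find_latex_errors(log):
        print(f"Error on line {line_no}: {msg}")  # could be routed to TTS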

Bend Passwords on BendyPass: A User Authentication Method for People with Vision Impairment

  • Daniella Briotto Faustino
  • Audrey Girouard

People with vision impairment are concerned about entering passwords in public, as accessibility features (e.g., screen readers and screen magnifiers) make their passwords more vulnerable to attackers. This project aims to use bend passwords to address this accessibility issue, as they are harder to observe than PINs. Bend passwords are a recently proposed method for user authentication that uses a combination of predefined bend and fold gestures performed on a flexible device. Our inexpensive prototype, called BendyPass, is made of silicone, with flex sensors to capture and verify bend passwords, a vibration motor for haptic feedback on gesture input, and a button to delete the last gesture or confirm the password. Bend passwords entered on BendyPass provide a tactile method for user authentication, designed to reduce vulnerability to attackers and help people with vision impairment better protect their personal information.
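
A host-side Python sketch of bend-password verification, assuming the flex sensor readings have already been classified into discrete gestures; the gesture names and hashing scheme are illustrative assumptions, not details from the BendyPass prototype.

    import hmac, hashlib

    GESTURES = {"fold_top_left", "fold_top_right", "bend_up", "bend_down"}

    def hash_password(gesture_sequence, salt=b"bendypass-demo"):
        # Store only a keyed hash of the gesture sequence, never the sequence.
        data = "|".join(gesture_sequence).encode()
        return hmac.new(salt, data, hashlib.sha256).hexdigest()

    stored = hash_password(["bend_up", "fold_top_left", "bend_down"])

    def verify(entered_sequence):
        assert all(g in GESTURES for g in entered_sequence)
        return hmac.compare_digest(hash_password(entered_sequence), stored)

    print(verify(["bend_up", "fold_top_left", "bend_down"]))  # True
    print(verify(["bend_up", "bend_up", "bend_down"]))        # False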

Design and Evaluation of a Multimodal Science Simulation

  • Brianna J. Tomlinson
  • Prakriti Kaini
  • Siyan Zhou
  • Taliesin L. Smith
  • Emily B. Moore
  • Bruce N. Walker

We present a multimodal science simulation, including visual and auditory (descriptions, sound effects, and sonifications) display. The design of each modality is described, as well as evaluation with learners with and without visual impairments. We conclude with challenges and opportunities at the intersection of multiple modalities.

Help Kiosk: An Augmented Display System to Assist Older Adults to Learn How to Use Smart Phones

  • Zachary Wilson
  • Helen Yin
  • Sayan Sarcar
  • Rock Leung
  • Joanna McGrenere

Older adults have difficulty using and learning to use smart phones, in part because the displays are too small to provide effective interactive help. Our work explores the use of a large display to temporarily augment the small phone display to support older adults during learning episodes. We designed and implemented a learning system called Help Kiosk which contains unique features to scaffold the smart phone learning process for older adults. We conducted a mixed-methods user study with 16 older adults (55+) to understand the impact of this unique design approach, comparing it with the smart phone’s official printed instruction manual. We found Help Kiosk gave participants more confidence that they were doing the tasks correctly, and helped minimize the need to switch their attention between the instructions and their phone.

Interactively Modeling and Visualizing Neighborhood Accessibility at Scale: An Initial Study of Washington DC

  • Anthony Li
  • Manaswi Saha
  • Anupam Gupta
  • Jon E. Froehlich

Walkability indices such as walkscore.com model the proximity and density of walkable destinations within a neighborhood. While these metrics have gained widespread use (e.g., incorporated into real-estate tools), they do not integrate accessibility-related features such as sidewalk conditions or curb ramps, thereby excluding a significant portion of the population. In this poster paper, we explore the initial design and implementation of neighborhood accessibility models and visualizations for people with mobility impairments. We are able to overcome previous data availability challenges by using the Project Sidewalk API, which provides access to 255,000+ labels about the accessibility and location of DC sidewalks.
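
A Python sketch of turning such labels into a simple neighborhood score; the endpoint URL, response fields, and per-label weights below are assumptions for illustration, not documented Project Sidewalk API usage.

    import requests

    def fetch_labels(bbox):
        url = "https://sidewalk-dc.cs.washington.edu/v2/access/attributes"  # assumed
        params = {"lat1": bbox[0], "lng1": bbox[1], "lat2": bbox[2], "lng2": bbox[3]}
        return requests.get(url, params=params, timeout=30).json()["features"]

    PENALTIES = {"CurbRamp": 0.0, "NoCurbRamp": 1.0, "SurfaceProblem": 0.5,
                 "Obstacle": 0.7}  # per-label weights (illustrative assumption)

    def accessibility_score(labels):
        """1.0 = fully accessible, 0.0 = heavily penalized."""
        if not labels:
            return 1.0
        penalty = sum(PENALTIES.get(l["properties"]["label_type"], 0.0)
                      for l in labels) / len(labels)
        return max(0.0, 1.0 - penalty)

    # score = accessibility_score(fetch_labels((38.88, -77.05, 38.92, -77.00)))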

Jellys: Towards a Videogame that Trains Rhythm and Visual Attention for Dyslexia

  • Mikel Ostiz-Blanco
  • Marie Lallier
  • Sergi Grau
  • Luz Rello
  • Jeffrey P. Bigham
  • Manuel Carreiras

This demo describes an ongoing research project that aims to develop a video game for training two independent cognitive components involved in reading development: visual attention and auditory rhythm. The video game includes two types of gaming activities for each component. First, a proof of concept was carried out with 10 children with dyslexia. The outcome of this proof-of-concept study served as the foundation for the development of a prototype that has since been assessed. Human-computer interaction, usability, and engagement were measured in a user study with 22 children with dyslexia and 22 without dyslexia. No significant differences in interaction between the groups were found. The usability and engagement evaluation was positive and will be used to improve the video game. Its efficacy will be tested in a longitudinal training study with developing readers. A video of Jellys user testing is available at https://youtu.be/T9oO9bZFdmM.

Maritime Buoyage on 3D-Printed Tactile Maps

  • Mathieu Simonnet
  • Serge Morvan
  • Dominique Marques
  • Olivier Ducruix
  • Arnaud Grancher
  • Sylvie Kerouedan

Our objective is to design 3D-printed maritime maps accessible to people with visual impairment. Since sea marks are critical elements of maritime spatial cognition, we determined different shapes for printed buoyage through a co-design process. Our current concern is to find the minimum size at which each buoy remains easily recognizable by touch. Taking previous findings into account, we set up an experimental design inspired by the Monoyer scale used by opticians: participants are asked to successively identify in-line shapes of decreasing size. Preliminary co-design feedback suggests printing 5 mm elements to reduce the time needed to recognize different buoys and to minimize cognitive load during exploration.

MathMelodies 2: a Mobile Assistive Application for People with Visual Impairments Developed with React Native

  • Niccolò Cantù
  • Mattia Ducci
  • Dragan Ahmetovic
  • Cristian Bernareggi
  • Sergio Mascetti

Cross-platform development techniques have attracted a lot of attention in recent years, especially in the field of mobile applications, because they enable developers to code apps in the same programming language for different platforms (e.g., iOS and Android). One well-known framework for cross-platform development is React Native, which provides features to support accessibility for blind or visually impaired (BVI) people. However, to the best of our knowledge, the accessibility of applications developed with this framework has not been systematically investigated. In this contribution we report our experience developing MathMelodies 2, an application that supports BVI children in studying mathematics. The former version of MathMelodies was developed with native code for iPad only, while MathMelodies 2 was developed with React Native to run on both iOS and Android smartphones and tablets.

Tangicraft: A Multimodal Interface for Minecraft

  • David Bar-El
  • Thomas Large
  • Lydia Davison
  • Marcelo Worsley

With millions of players worldwide, Minecraft has become a rich context for playing, socializing and learning for children. However, as is the case with many video games, players must rely heavily on vision to navigate and participate in the game. We present our Work-In-Progress on Tangicraft, a multimodal interface designed to empower visually impaired children to play and collaborate around Minecraft. Our work includes two strands of prototypes. The first is a haptic sensing wearable. The second is a set of tangible blocks that communicate with the game environment using webcam-enabled codes.

Turn It Over: A Demonstration That Spatial Keyboards are Logical for Braille

  • Kirsten Ellis
  • Leona Holloway

This demonstration illustrates the importance of keyboard layout for entering braille, which is highly dependent on spatial arrangement. The design and layout of a keyboard for entering braille changes the mental effort required to transform the dots prior to inputting the braille cells. Two keyboard designs for entering braille on an electronic device are contrasted in the demonstration: Keyboard 1 has the dots on the top and Keyboard 2 has the dots underneath. Each has inherent advantages and disadvantages.

Using the Musical Multimedia Tool ACMUS with People with Severe Mental Disorders: A Pilot Study

  • Mikel Ostiz-Blanco
  • Alfredo Pina
  • Miriam Lizaso
  • Jose Javier Astráin
  • Gonzalo Arrondo

Music therapy could be an interesting resource to enhance social and cognitive skills in people with mental disorders. The aim of this study is to assess whether using the music multimedia tool ACMUS with people with severe mental disorders is feasible and potentially beneficial. The study was a prospective pilot trial with 12 patients diagnosed with schizophrenia or related disorders, carried out over nine sessions in small groups. The evaluation tools were the observational COTE (Comprehensive Occupational Therapy Evaluation) scale and a satisfaction questionnaire completed by the participants. Results showed an improvement in COTE scores between the first and last sessions. Results of the satisfaction questionnaire were also positive: the therapy program was rated favorably by the patients. Programs that use the multimedia tool ACMUS for music therapy sessions with patients with severe mental disorders are feasible and of clinical interest for future research.

Video Gaming for the Vision Impaired

  • Manohar Swaminathan
  • Sujeath Pareddy
  • Tanuja Sunil Sawant
  • Shubi Agarwal

Mainstream video games are predominantly inaccessible to people with visual impairments (VIPs). We present ongoing research that aims to take such games beyond accessibility by making them engaging and enjoyable for visually impaired players. We have built a new interaction toolkit, the Responsive Spatial Audio Cloud (ReSAC), developed around spatial audio technology, to enable visually impaired players to play video games. VIPs successfully finished a simple video game integrated with ReSAC and reported enjoying the experience.

SESSION: Student Research Competition

Designing a Context Aware AAC Solution

  • Conor McKillop

Augmentative and Alternative Communication (AAC) software can improve the quality of life of individuals with speech, language, and communication needs. However, the communication rate achievable with current devices is still significantly lower than that of unimpaired speech. The embedded technologies available in mobile devices present an opportunity to improve the ease and efficiency of communication through context-aware computing. This paper explores the iterative design process of a context-aware keyboard prototype and discusses the results of its evaluation, which suggest the prototype could markedly improve the communication rate of AAC users.

Exploring an Ambiguous Technique for Eyes-Free Mobile Text Entry

  • Dylan Gaines

Mobile text entry has become an increasingly important part of many people's daily lives. While most input occurs by tapping individual letters on a virtual QWERTY keyboard, this does not have to be the case. We explore how well users are able to learn an ambiguous keyboard that is modeled after a standard QWERTY layout but does not require users to tap specific keys. We show that this keyboard is a plausible text entry technique for users with little or no vision, with users achieving 19.09 words per minute (WPM) and a 2.08% character error rate after 8 hours of practice.
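
A Python sketch of the underlying disambiguation idea: letters are grouped into a few QWERTY-ordered ambiguous keys, and a tap sequence is resolved against a dictionary. The grouping and dictionary below are illustrative assumptions, not the paper's exact layout.

    GROUPS = ["qwert", "yuiop", "asdfg", "hjkl", "zxcvb", "nm"]
    LETTER_TO_KEY = {c: i for i, g in enumerate(GROUPS) for c in g}

    def key_sequence(word):
        return tuple(LETTER_TO_KEY[c] for c in word)

    DICTIONARY = ["hello", "world", "help", "would"]
    INDEX = {}
    for w in DICTIONARY:
        INDEX.setdefault(key_sequence(w), []).append(w)

    def disambiguate(taps):
        """taps: tuple of ambiguous key indices, one per intended letter."""
        return INDEX.get(tuple(taps), [])

    print(disambiguate(key_sequence("hello")))  # words sharing this key sequence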

Exploring the Performance of Facial Expression Recognition Technologies on Deaf Adults and Their Children

  • Irene Rogan Shaffer

Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages. Without being properly trained, both human observers and existing emotion recognition tools will misinterpret ASL linguistic facial expressions. In this study, we capture over 2,000 photographs of 15 participants: five hearing, five Deaf, and five Children of Deaf Adults (CODAs). We then analyze the performance of six commercial facial expression recognition services on these photographs. Key observations include poor face detection rates for Deaf participants, more accurate emotion recognition for Deaf and CODA participants, and frequent misinterpretation of ASL linguistic markers as negative emotions. This suggests a need to include data from ASL users in the training sets for these technologies.

Gaze Typing using Multi-key Selection Technique

  • Tanya Bafna

Gaze typing for people with extreme motor disabilities, such as full-body paralysis, can be extremely slow and discouraging for daily communication. The most popular gaze typing technique, known as dwell-time typing, requires fixating on every letter of a word for a fixed amount of time. In this preliminary study, the goal was to test a new gaze typing technique that requires fixating only on the first and last letters of the word. Analysis of the collected data suggests that the new technique is 63% faster than dwell-time typing for novices in gaze interaction, without affecting the error rate. This technique could have a substantial impact on the communication speed, comfort, and working efficiency of people with disabilities.
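
A minimal Python sketch of the selection step: once the user has fixated on the first and last letters, the system proposes candidate words ranked by frequency. The word list and frequencies are illustrative assumptions.

    WORD_FREQ = {"there": 900, "the": 1000, "time": 700, "take": 600, "tone": 300}

    def candidates(first, last, max_candidates=3):
        """Words starting with `first` and ending with `last`, most frequent first."""
        matches = [w for w in WORD_FREQ
                   if w[0] == first and w[-1] == last]
        return sorted(matches, key=WORD_FREQ.get, reverse=True)[:max_candidates]

    print(candidates("t", "e"))  # -> ['the', 'there', 'time']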

GeniAuti: Tracking Challenging Behaviors of Children with Autism for Data-Driven Interventions

  • Kiwon Choi
  • Dongho Jang
  • Dasol Lee
  • Seoyoung Park

Caregivers of autistic children suffer most from their children's challenging behaviors, which recur every day. It is therefore important for caregivers to track these behaviors in order to understand their context and eventually alleviate them. GeniAuti is a tracking application for caregivers of autistic children that provides 1) timely recording of their children's behaviors, 2) visualization of the data entered by the user, and 3) reference data on other autistic children's behaviors.

MAMAS: Mealtime Assistance to Improve Eating Behavior of Children Using Magnetometer and Speech Recognition

  • Sungmook Leem
  • Eun Jee Sung
  • Sungjin Lee
  • Ilyoung Jin

Children's problematic eating behavior is one of the biggest challenges parents face. Even though the role of parents in building children's eating habits is critical, it is very difficult for parents to sustain positive interaction with their children during mealtime. We describe our preliminary work toward a system that provides systematic analysis of parent-child mealtime interaction so as to promote children's healthy eating habits. We propose an application called MAMAS, a mealtime assistant using a magnetometer and speech recognition, which 1) non-invasively tracks mealtime interaction patterns, 2) augments the analysis with self-reported data and quantification, and 3) provides data-assisted analysis for parents' self-reflection.
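
The abstract does not specify how the magnetometer is used; one plausible mechanism (an assumption for illustration, not the paper's algorithm) is a small magnet on the utensil that raises the field magnitude when it approaches a worn sensor, so threshold crossings approximate bites, as in this Python sketch:

    import math

    THRESHOLD = 120.0  # microtesla; tunable assumption

    def count_bites(samples):
        """samples: list of (x, y, z) magnetometer readings."""
        bites, above = 0, False
        for x, y, z in samples:
            magnitude = math.sqrt(x * x + y * y + z * z)
            if magnitude > THRESHOLD and not above:
                bites += 1  # rising edge: utensil approached the sensor
            above = magnitude > THRESHOLD
        return bites

    print(count_bites([(30, 20, 40), (90, 80, 60), (100, 90, 70), (30, 20, 40)]))
    # -> 1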

Potmote: A TV Remote Control for Older Adults

  • Siddharth Mehrotra

Traditional television remote controls present frequent challenges to older adults. These challenges arise from a lack of feedback and from poor design features such as labeling, size, spatial proximity, and physical feel. This paper describes the design of an accessible TV remote control (Potmote), built with potentiometers and an Arduino to enhance tactile feedback and ease channel selection through ergonomic controls. An experimental study was conducted with 15 older adults to understand how to design a system that would allow them to change channel numbers and volume levels. The results of the experiment showed positive feedback from the participants.
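
An illustrative mapping from a potentiometer's ADC reading to a channel number: the real Potmote runs on an Arduino, so this Python sketch only shows the mapping logic, with the value ranges and deadband as assumptions.

    ADC_MAX = 1023       # 10-bit Arduino ADC
    NUM_CHANNELS = 20
    DEADBAND = 8         # ignore jitter smaller than this many ADC counts

    _last_raw = None

    def adc_to_channel(raw):
        """Map a raw ADC value (0..1023) to a channel (1..NUM_CHANNELS)."""
        global _last_raw
        if _last_raw is not None and abs(raw - _last_raw) < DEADBAND:
            raw = _last_raw  # suppress jitter from a shaky hand
        _last_raw = raw
        return min(NUM_CHANNELS, raw * NUM_CHANNELS // (ADC_MAX + 1) + 1)

    print(adc_to_channel(0))     # channel 1
    print(adc_to_channel(1023))  # channel 20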

Using a Telepresence Robot to Improve Self-Efficacy of People with Developmental Disabilities

  • Natalie Friedman
  • Alex Cabral

People with Developmental Disabilities (DD) often rely on other people to perform basic activities such as leaving the house and accessing public spaces. This problem, exacerbated by decreased community engagement, has been documented to lower their sense of self-efficacy. Telepresence robots provide a unique opportunity for people with DD to access public spaces, particularly for those who are homebound or dependent on others for transportation or for buying exhibit tickets. This research evaluates the use of telepresence robots, operated by people with DD, to explore a public exhibit. The study was conducted in partnership with Hope Services, an organization that provides skill-building activities for people with DD. Our analysis combined quantitative and qualitative methods using data from semi-structured pre- and post-interviews focusing on participants' sense of physical and social self-efficacy and well-being. Our study revealed positive trends suggesting that using telepresence can contribute to well-being and to physical and social self-efficacy. We therefore believe there is promise in using telepresence robots to tour an exploratory space for people with DD, and that they can be a viable option for those who face accessibility limitations.