ASSETS ’20: The 22nd International ACM SIGACCESS Conference on Computers and Accessibility



SESSION: Keynote Presentation + Award Conferral

Accessibility Research in the Pandemic: Making a Difference in the Quality of Life
for People with Disabilities

  • Jonathan Lazar

In his ASSETS 2020 keynote talk, Dr. Lazar discusses how technology accessibility
researchers and practitioners have been working during the pandemic to improve the
quality of life for people with disabilities. Adults with disabilities are three times
more likely than adults without disabilities to have the serious underlying medical
conditions which place them at a higher risk of severe illness from COVID-19. Dr.
Lazar discusses two data collection efforts in the accessibility community to understand
technology-related needs during the pandemic, and then highlights current initiatives
to address these needs. Accessibility researchers and practitioners are currently
working on projects that 1) provide accessible public health information and data
(such as fully accessible data dashboards), 2) develop tools and technologies to help
people with disabilities stay healthy and avoid the virus (such as accessible masks
and technologies for social distancing), and 3) ensure that online learning and
teleconferencing are accessible to people with various disabilities and needs
(such as Deaf users and users of AAC). Dr. Lazar challenges accessibility researchers
and practitioners to get more involved in researching and developing digital technologies
to improve the quality of life for people with disabilities. Specifically, he suggests
some potential areas of research and development work: technologies to allow for priority
scheduling of deliveries and purchasing of items with limited quantities for people
with disabilities, new uses of crowdsourcing to empower people with disabilities during
the pandemic, and studying the impact of masks on the use of automated speech recognition.
Dr. Lazar ends the keynote by challenging everyone to find accessibility projects that
give them joy, are intellectually challenging, and meet the needs of the world!

SESSION: Paper Session 1: Extended Reality for Navigation & Guidance

AIGuide: An Augmented Reality Hand Guidance Application for People with Visual Impairments

  • Nelson Daniel Troncoso Aldas
  • Sooyeon Lee
  • Chonghan Lee
  • Mary Beth Rosson
  • John M. Carroll
  • Vijaykrishnan Narayanan

Locating and grasping objects is a critical task in people’s daily lives. For people
with visual impairments, this task can be a daily struggle. The support of augmented
reality frameworks in smartphones has the potential to overcome the limitations of
current object detection applications designed for people with visual impairments.
We present AIGuide, a self-contained offline smartphone application that leverages
augmented reality technology to help users locate and pick up objects around them.
We conducted a user study to validate its effectiveness at providing guidance, compare
it to other assistive technology form factors, evaluate the use of multimodal feedback,
and gather feedback about the overall experience. Our results show that AIGuide is
a promising technology to help people with visual impairments locate and acquire objects
in their daily routine.

Making Mobile Augmented Reality Applications Accessible

  • Jaylin Herskovitz
  • Jason Wu
  • Samuel White
  • Amy Pavel
  • Gabriel Reyes
  • Anhong Guo
  • Jeffrey P. Bigham

Augmented Reality (AR) technology creates new immersive experiences in entertainment,
games, education, retail, and social media. AR content is often primarily visual and
it is challenging to enable access to it non-visually due to the mix of virtual and
real-world content. In this paper, we identify common constituent tasks in AR by analyzing
existing mobile AR applications for iOS, and characterize the design space of tasks
that require accessible alternatives. For each of the major task categories, we create
prototype accessible alternatives that we evaluate in a study with 10 blind participants
to explore their perceptions of accessible AR. Our study demonstrates that these prototypes
make AR usable for blind users and reveals a number of insights for moving forward.
We believe our work sets forth not only exemplars for developers to create accessible
AR applications, but also a roadmap for future research to make AR comprehensively
accessible.

SESSION: Paper Session 2: Perspectives on Accessibility

Living Disability Theory: Reflections on Access, Research, and Design

  • Megan Hofmann
  • Devva Kasnitz
  • Jennifer Mankoff
  • Cynthia L Bennett

Accessibility research and disability studies are intertwined fields focused on,
respectively, building a world more inclusive of people with disability and understanding
and elevating the lived experiences of disabled people. Accessibility research tends
to focus on creating technology related to impairment, while disability studies focuses
on understanding disability and advocating against ableist systems. Our paper presents
a reflexive analysis of the experiences of three accessibility researchers and one
disability studies scholar. We focus on moments when our disabilities were misunderstood,
and on causes such as the expectation of clearly defined impairments. We derive three themes:
ableism in research, oversimplification of disability, and human relationships around
disability. From these themes, we suggest paths toward more strongly integrating disability
studies perspectives and disabled people into accessibility research.

Disability and the COVID-19 Pandemic: Using Twitter to Understand Accessibility during Rapid Societal Transition

  • Cole Gleason
  • Stephanie Valencia
  • Lynn Kirabo
  • Jason Wu
  • Anhong Guo
  • Elizabeth Jeanne Carter
  • Jeffrey Bigham
  • Cynthia Bennett
  • Amy Pavel

The COVID-19 pandemic has forced institutions to rapidly alter their behavior, which
typically has disproportionate negative effects on people with disabilities as accessibility
is overlooked. To investigate these issues, we analyzed Twitter data to examine accessibility
problems surfaced by the crisis. We identified three key domains at the intersection
of accessibility and technology: (i) the allocation of product delivery services,
(ii) the transition to remote education, and (iii) the dissemination of public health
information. We found that essential retailers expanded their high-risk customer shopping
hours and pick-up and delivery services, but individuals with disabilities still lacked
necessary access to goods and services. Long-experienced access barriers to online
education were exacerbated by the abrupt transition from in-person to remote instruction.
Finally, public health messaging has been inconsistent and inaccessible, which is
unacceptable during a rapidly-evolving crisis. We argue that organizations should
create flexible, accessible technology and policies in calm times to be adaptable
in times of crisis to serve individuals with diverse needs.

Comparison of Methods for Teaching Accessibility in University Computing Courses

  • Qiwen Zhao
  • Vaishnavi Mande
  • Paula Conn
  • Sedeeq Al-khazraji
  • Kristen Shinohara
  • Stephanie Ludi
  • Matt Huenerfauth

With an increasing demand for computing professionals with skills in accessibility,
it is important for university faculty to select effective methods for educating computing
students about barriers faced by users with disabilities and approaches to improving
accessibility. While some prior work has evaluated accessibility educational interventions,
many such studies have consisted of firsthand reports from faculty or short-term
evaluations. This paper reports on the results of a systematic evaluation of methods
for teaching accessibility from a longitudinal study across 29 sections of a human-computer
interaction course (required for students in a computing degree program), as taught
by 10 distinct professors, over four years, with more than 400 students. A control
condition (a course without accessibility content) was compared to four intervention
conditions: a week of lectures on accessibility, a team design project requiring some
accessibility consideration, interaction with someone with a disability, and collaboration
with a team member with a disability. Comparing survey data immediately before and
after the course, we found that the Lectures, Projects, and Interaction conditions
were effective in increasing students’ likelihood to consider people with disabilities
in a design scenario, awareness of accessibility barriers, and knowledge of technical
approaches for improving accessibility – with students in the Team Member condition
having higher scores on the final measure only. However, comparing survey responses
from students immediately before the course and from approximately 2 years later,
almost no significant gains were observed, suggesting that interventions within a
single course are insufficient for producing long-term changes in measures of students’
accessibility learning. This study contributes to empirical knowledge to inform university
faculty in selecting effective methods for teaching accessibility, and it motivates
further research on how to achieve long-term changes in accessibility knowledge, e.g.
by reinforcing accessibility throughout a degree program.

Access Differential and Inequitable Access: Inaccessibility for Doctoral Students
in Computing

  • Kristen Shinohara
  • Michael McQuaid
  • Nayeri Jacobo

Increasingly, support for students with disabilities in post-secondary education has
boosted enrollment and graduation rates. Yet, such successes have not translated to
doctoral degrees. For example, in 2018, the National Science Foundation reported 3%
of math and computer science doctorate recipients identified as having a visual limitation
while 1.2% identified as having a hearing limitation. To better understand why few
students with disabilities pursue PhDs in computing and related fields, we conducted
an interview study with 19 current and former graduate students who identified as
blind or low vision, or deaf or hard of hearing. We asked participants about challenges
or barriers they encountered in graduate school. We asked about accommodations they
received, or did not receive, and about different forms of support. We found that
a wide range of inaccessibility issues in research, courses, and in managing accommodations
impacted student progress. Contributions from this work include identifying two forms
of access inequality that emerged: (1) access differential: the gap between the access
that non/disabled students experience, and (2) inequitable access: the degree of inadequacy
of existing accommodations to address inaccessibility.

Navigating Graduate School with a Disability

  • Dhruv Jain
  • Venkatesh Potluri
  • Ather Sharif

In graduate school, people with disabilities use disability accommodations to learn,
network, and do research. However, these accommodations, often scheduled ahead of
time, may not work in many situations due to the uncertainty and spontaneity of the graduate
experience. Through a three-person autoethnography, we present a longitudinal account
of our graduate school experiences as people with disabilities, highlighting nuances
and tensions of situations when our requested accommodations did not work and the
use of alternative coping strategies. We use retrospective journals and field notes
to reveal the impact of our self-image, relationships, technologies, and infrastructure
on our disabled experience. Using post-hoc reflection on our experiences, we then
close by discussing personal and situated ways in which peers, faculty members,
universities, and technology designers could improve the graduate school experiences
of people with disabilities.

SESSION: Experience Reports Session

Teleconference Accessibility and Guidelines for Deaf and Hard of Hearing Users

  • Raja S. Kushalnagar
  • Christian Vogler

In this experience report, we describe the accessibility challenges that deaf and
hard of hearing users face in teleconferences, based both on our first-hand participation
in meetings and on our expertise in user interface and user experience design. Teleconferencing poses
new accessibility challenges compared to face-to-face communication because of limited
social, emotional, and haptic feedback. Above all, teleconferencing participants and
organizers need to be flexible, because deaf or hard of hearing people have diverse
communication preferences. We explain what recurring problems users experience, where
current teleconferencing software falls short, and how to address these shortcomings.
We offer specific recommendations for best practices and the experiential reasons
behind them.

Student and Teacher Perspectives of Learning ASL in an Online Setting

  • Garreth W. Tigwell
  • Roshan L Peiris
  • Stacey Watson
  • Gerald M. Garavuso
  • Heather Miller

American Sign Language (ASL) classes are typically held face-to-face to increase
interactivity and enhance the learning experience. However, the recent COVID-19 pandemic
brought about many changes to course delivery methods, primarily resulting in a move
to an online format, which had to occur in a short timeframe. The online format has
presented students and teachers with many opportunities and challenges. In this experience
report, we reflect on the student and teacher perspectives of learning ASL in an online
setting. We use our experience to introduce new online ASL class guidelines, suggest
videoconferencing improvements, and identify where future research is needed.

Disability design and innovation in computing research in low resource settings

  • Dafne Zuleima Morgado-Ramirez
  • Giulia Barbareschi
  • Maggie Kate Donovan-Hall
  • Mohammad Sobuh
  • Nida’ Elayyan
  • Brenda T Nakandi
  • Robert Tamale Ssekitoleko
  • Joyce Olenja
  • Grace Nyachomba Magomere
  • Sibylle Daymond
  • Jake Honeywill
  • Ian Harris
  • Nancy Mbugua
  • Laurence Kenney
  • Catherine Holloway

Eighty percent of people with disabilities worldwide live in low-resource settings, rural areas,
informal settlements, and multidimensional poverty. ICT4D leverages technological
innovations to deliver programs for international development, but very few such programs
focus on or involve people with disabilities in low-resource settings. Moreover, most
studies largely focus on publishing the results of the research, emphasizing the positive
stories rather than the learnings and recommendations regarding research processes.
In short, researchers rarely examine what was challenging in the process
of collaboration. We present reflections from the field across four studies. Our contributions
are: (1) an overview of past work in computing with a focus on disability in low resource
settings and (2) learnings and recommendations from four collaborative projects in
Uganda, Jordan, and Kenya over the last two years that are relevant for future HCI
studies in low resource settings with communities with disabilities. We do this through
a lens of Disability Interaction and ICT4D.

Breaking Boundaries with Live Transcribe: Expanding Use Cases Beyond Standard Captioning
Scenarios

  • Fernando Loizides
  • Sara Basson
  • Dimitri Kanevsky
  • Olga Prilepova
  • Sagar Savla
  • Susanna Zaraysky

In this paper, we explore non-traditional, serendipitous uses of an automatic speech
recognition (ASR) application called Live Transcribe. Through these, we are able to
identify interaction use cases for developing further technology to enhance the communication
capabilities of deaf and hard of hearing people.

SESSION: Paper Session 3: Input Research

Bespoke Reflections: Creating a One-Handed Braille Keyboard

  • Kirsten Ellis
  • Ross de Vent
  • Reuben Kirkham
  • Patrick Olivier

A plethora of assistive technologies are designed to cater to relatively common types
of disabilities. However, some people have disabilities or circumstances that fall
outside these pluralities, requiring a bespoke assistive technology to be developed
and custom built to meet their unique requirements. To explore the opportunities and
challenges of such an endeavor, we document the process undertaken to build a braille
keyboard for a one-handed blind person over the course of 18 months. This process
involved iterative prototyping within an intensive co-creation process, due to the
unique needs arising from having two intersecting impairments and the challenges of
effectively developing an entirely new format of AAT. Through a structured reflection
on this process, we provide an account of the practical, pragmatic and ethical considerations
that apply when developing a bespoke assistive technology, whilst illustrating the
wider value of bespoke assistive technology development for a more general community
of people with disabilities.

Designing and Evaluating Head-based Pointing on Smartphones for People with Motor
Impairments

  • Muratcan Cicek
  • Ankit Dave
  • Wenxin Feng
  • Michael Xuelin Huang
  • Julia Katherine Haines
  • Jeffrey Nichols

Head-based pointing is an alternative input method for people with motor impairments
to access computing devices. This paper proposes a calibration-free head-tracking
input mechanism for mobile devices that makes use of the front-facing camera that
is standard on most devices. To evaluate our design, we performed two Fitts’ Law studies.
First, we compared our method with an existing head-based pointing solution,
Eva Facial Mouse, with subjects without motor impairments. Second, we conducted what
we believe is the first Fitts’ Law study using a mobile head tracker with subjects
with motor impairments. We extend prior studies with a greater range of indices of difficulty
(IDs) of [1.62, 5.20] bits and achieved promising throughput (0.61 bps on average with motor
impairments and 0.90 bps without). We found that users’ throughput was 0.95 bps on
average in our most difficult task (ID: 5.20 bits), which involved selecting a target
half the size of the Android recommendation for a touch target after moving nearly
the full height of the screen. This suggests the system is capable of fine precision
tasks. We summarize our observations and the lessons from our user studies into a
set of design guidelines for head-based pointing systems.

Eyelid Gestures on Mobile Devices for People with Motor Impairments

  • Mingming Fan
  • Zhen Li
  • Franklin Mingzhe Li

Eye-based interactions for people with motor impairments have often used clunky or
specialized equipment (e.g., eye-trackers with non-mobile computers) and primarily
focused on gaze and blinks. However, two eyelids can open and close for different
duration in different orders to form various eyelid gestures. We take a first step
to design, detect, and evaluate a set of eyelid gestures for people with motor impairments
on mobile devices. We present an algorithm to detect nine eyelid gestures on smartphones
in real-time and evaluate it with twelve able-bodied people and four people with severe
motor impairments in two studies. The results of the study with people with motor impairments
show that the algorithm can detect the gestures with .76 and .69 overall accuracy
in user-dependent and user-independent evaluations. Moreover, we design and evaluate
a gesture mapping scheme that allows users to navigate mobile applications using only eyelid
gestures. Finally, we present recommendations for designing and using eyelid gestures
for people with motor impairments.

The Reliability of Fitts’s Law as a Movement Model for People with and without Limited
Fine Motor Function

  • Ather Sharif
  • Victoria Pao
  • Katharina Reinecke
  • Jacob O. Wobbrock

For over six decades, Fitts’s law (1954) has been utilized by researchers to quantify
human pointing performance in terms of “throughput,” a combined speed-accuracy measure
of aimed movement efficiency. Throughput measurements are commonly used to evaluate
pointing techniques and devices, helping to inform software and hardware developments.
Although Fitts’s law has been used extensively in HCI and beyond, its test-retest
reliability, both in terms of throughput and model fit, from one session to the next,
is still unexplored. Additionally, despite the fact that prior work has shown that
Fitts’s law provides good model fits, with Pearson correlation coefficients commonly
at r=.90 or above, the model fitness of Fitts’s law has not been thoroughly investigated
for people who exhibit limited fine motor function in their dominant hand. To fill
these gaps, we conducted a study with 21 participants with limited fine motor function
and 34 participants without such limitations. Each participant performed a classic
reciprocal pointing task comprising vertical ribbons in a 1-D layout in two sessions,
which were at least four hours and at most 48 hours apart. Our findings indicate that
the throughput values between the two sessions were statistically significantly different,
both for people with and without limited fine motor function, suggesting that Fitts’s
law provides low test-retest reliability. Importantly, the test-retest reliability
of Fitts’s throughput metric was 4.7% lower for people with limited fine motor function.
Additionally, we found that the model fitness of Fitts’s law as measured by Pearson
correlation coefficient, r, was .89 (SD=0.08) for people without limited fine motor
function, and .81 (SD=0.09) for people with limited fine motor function. Taken together,
these results indicate that Fitts’s law should be used with caution and, if possible,
over multiple sessions, especially when used in assistive technology evaluations.
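
For reference, the index-of-difficulty and throughput figures discussed in the two pointing
abstracts above follow the Shannon formulation of Fitts's law commonly used in HCI. The
equations below are the textbook definitions, not reproduced from either paper, whose exact
computations (e.g., effective-width corrections) may differ:

    % Standard Shannon formulation (textbook form; the papers' exact variants may differ)
    \[
      ID = \log_2\!\left(\frac{A}{W} + 1\right)\ \text{bits}, \qquad
      MT = a + b\,ID, \qquad
      TP = \frac{ID_e}{MT}\ \text{bits/s}
    \]

where A is the movement amplitude, W the target width, MT the movement time, a and b
empirically fitted coefficients, and ID_e the effective index of difficulty computed from
the observed endpoint spread.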

Input Accessibility: A Large Dataset and Summary Analysis of Age, Motor Ability and
Input Performance

  • Leah Findlater
  • Lotus Zhang

Age and motor ability are well-known to impact input performance. Past work examining
these factors, however, has tended to focus on samples of 20-40 participants and has
binned participants into a small set of age groups (e.g., “younger” vs. “older”).
To foster a more nuanced understanding of how age and motor ability impact input performance,
this short paper contributes: (1) a dataset from a large-scale study that captures
input performance with a mouse and/or touchscreen from over 700 participants, as well
as (2) summary analysis of a subset of 318 participants who range in age from 18 to
83 years old and of whom 53% reported a motor impairment. The analysis demonstrates
the continuous relationship between age and input performance for users with and without
motor impairments, but also illustrates that knowing a user’s age and self-reported
motor ability should not lead to assumptions about their input performance. The dataset,
which contains mouse and touchscreen input traces, should allow for further exploration
by other researchers.

SESSION: Paper Session 4: Tangible Interaction

“I can’t name it, but I can perceive it” Conceptual and Operational Design of “Tactile
Accuracy” Assisting Tactile Image Cognition

  • Jiangtao Gong
  • Wenyuan Yu
  • Long Ni
  • Yang Jiao
  • Ye Liu
  • Xiaolan Fu
  • Yingqing Xu

Designing a tactile image for blind people is a significant challenge due to the
difficulty of recognizing objects on a 2D line drawing image by touch compared to
vision. In this paper, we propose “tactile accuracy,” a new criterion for evaluating
the performance of recognizing 242 raised-line images of common objects by 30 subjects
(10 blindfolded sighted subjects, 10 congenitally blind subjects, and 10 late blind
subjects), instead of the conventional “naming accuracy” used in visual image
recognition tasks. We used multi-level evaluation criteria, including “tactile accuracy,”
to systematically analyze the design factors in tactile images. The results showed
that using multi-level evaluation criteria could help unveil the tactile cognitive preferences
of different types of subjects for personalized learning. Moreover, we reported important
design factors that affect tactile image recognition, thus providing guidelines on
the design of tactile images.

“If you’ve gone straight, now, you must turn left” – Exploring the use of a tangible
interface in a collaborative treasure hunt for people with visual impairments

  • Quentin Chibaudel
  • Wafa Johal
  • Bernard Oriola
  • Marc J-M Macé
  • Pierre Dillenbourg
  • Valérie Tartas
  • Christophe Jouffrais

Tangible User Interfaces (TUI) have been found to be relevant tools for collaborative
learning by providing a shared workspace and enhancing joint visual attention. Researchers
have explored the use of TUIs in a variety of curricular activities and found them
particularly interesting for spatial exploration. However, very few studies have explored
how TUIs could be used as a collaborative medium for people with visual impairments
(VIs). In this study, we investigated the effect of tangible interaction (a small
tangible robot) in a spatial collaborative task (a treasure hunt) involving two people
with VIs. The aim was to evaluate the impact of the design of the TUI on the collaboration
and the strategies used to perform the task. The experiment involved six dyads of
people with VIs. The results showed that the collaboration was impacted by the interaction
design, and they open interesting perspectives on the design of collaborative games for
people with VIs.

ThermalCane: Exploring Thermotactile Directional Cues on Cane-Grip for Non-Visual
Navigation

  • Arshad Nasser
  • Kai-Ning Keng
  • Kening Zhu

Non-visual feedback (e.g., auditory and haptic) has been used to provide directional cues
for blind and visually impaired (BVI) users. This paper presents the design and
the evaluation of ThermalCane, a white-cane grip instrumented with multiple flexible
thermal modules, to offer thermotactile directional cues for BVI users. We also conducted
two thermotactile experiments on users’ perception of ThermalCane. Our first experiment
with twelve BVI users reports on the selection of the thermal-module configuration,
considering the BVI users’ perceptive accuracy and preference. We then evaluated the
effectiveness of the four-module ThermalCane in walking with 6 BVI users, in comparison
with vibrotactile cues. The results show that the thermal feedback yielded significantly
higher accuracy than the vibrotactile feedback. The results also suggested the feasibility
of using thermal directional cues around the cane grip for BVI users’ navigation.

TIP-Toy: a tactile, open-source computational toolkit to support learning across visual
abilities

  • Giulia Barbareschi
  • Enrico Costanza
  • Catherine Holloway

Many computational toolkits to promote early learning of basic computational concepts
and practices are inaccessible to learners with reduced visual abilities. We report
on the design of TIP-Toy, a tactile and inclusive open-source toolkit, to allow children
with different visual abilities to learn about computational topics through music
by combining a series of physical blocks. TIP-Toy was developed through two design
consultations with experts and potential users. The first round of consultations was
conducted with 3 visually impaired adults with significant programming experience;
the second one involved 9 children with mixed visual abilities. Through these design
consultations we collected feedback on TIP-Toy, and observed children’s interactions
with the toolkit. We discuss appropriate features for future iterations of TIP-Toy
to maximise the opportunities for accessible and enjoyable learning experiences.

SESSION: Paper Session 5: Accessing Visual Content

Playing With Others: Depicting Multiplayer Gaming Experiences of People With Visual
Impairments

  • David Gonçalves
  • André Rodrigues
  • Tiago Guerreiro

Games bring people together in immersive and challenging interactions. In this paper,
we share multiplayer gaming experiences of people with visual impairments collected
from interviews with 10 adults and 10 minors, and 140 responses to an online survey.
We include the perspectives of 17 sighted people who play with someone who has a visual
impairment, collected in a second online survey. Our focus is on group play, particularly
on the problems and opportunities that arise from mixed-visual-ability scenarios.
Our findings show that people with visual impairments play diverse games, but face
limitations in playing with others who have different visual abilities. What stands
out is the lack of intersection in gaming opportunities, and consequently, in habits
and interests of people with different visual abilities. We highlight barriers associated
with these experiences beyond inaccessibility issues and discuss implications and
opportunities for the design of mixed-ability gaming.

TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users

  • Hae-Na Lee
  • Sami Uddin
  • Vikas Ashok

People with visual impairments typically rely on screen-magnifier assistive technology
to interact with webpages. As screen-magnifier users can only view a portion of the
webpage content in an enlarged form at any given time, they have to endure an inconvenient
and arduous process of repeatedly moving the magnifier focus back-and-forth over different
portions of the webpage in order to make comparisons between data records, e.g., comparing
the available flights in a travel website based on their prices, durations, etc. To
address this issue, we designed and developed TableView, a browser extension that
leverages a state-of-the-art information extraction method to automatically identify
and extract data records and their attributes in a webpage, and subsequently presents
them to a user in a compactly arranged tabular format that needs significantly less
screen space compared to that currently occupied by these items in the page. This
way, TableView is able to pack more items within the magnifier focus, thereby reducing
the overall content area for panning, and hence making it easy for screen-magnifier
users to compare different items before making their selections. A user study with
16 low vision participants showed that with TableView, the time spent on panning the
data records in webpages was significantly reduced by 72.9% (avg.) compared to that
with just a screen magnifier, and 66.5% compared to that with a screen magnifier using
a space compaction method.

Making GIFs Accessible

  • Cole Gleason
  • Amy Pavel
  • Himalini Gururaj
  • Kris Kitani
  • Jeffrey Bigham

Social media platforms feature short animations known as GIFs, but they are inaccessible
to people with vision impairments. Unlike static images, GIFs contain action and visual
indications of sound, which can be challenging to describe in alternative text descriptions.
We examine a large sample of inaccessible GIFs on Twitter to document how they are
used and what visual elements they contain. In interviews with 10 blind Twitter users,
we discuss what elements of GIF content should be described and their experiences
with GIFs online. The participants compared alternative text descriptions with two
other alternative audio formats: (i) the original audio from the GIF source video
and (ii) a spoken audio description. We recommend that social media platforms automatically
include alt text descriptions for popular GIFs (as Twitter has begun to do), and content
producers create audio descriptions to ensure everyone has a rich and emotive experience
with GIFs online.

Web-ALAP: A Web-based LaTeX Editor for Blind Individuals

  • Safa Arooj
  • Shaban Zulfiqar
  • Muhammad Qasim Hunain
  • Suleman Shahid
  • Asim Karim

In recent years, web-based platforms have become popular for LaTeX document authoring
and presentation due to their hassle-free environments and collaboration features.
Unfortunately, currently available platforms are not accessible to persons with vision
impairments (PVIs) due to poor screen reader and keyboard support. In this paper,
we present Web-Accessible LaTeX-based Authoring and Presentation (Web-ALAP), a web
version of ALAP that makes creating mathematical documents accessible to PVIs. Web-ALAP
provides assistive features like speech-based prompts, automatic narration of the
error message along with the line number for debugging, and essential keyboard shortcuts.
More importantly, the integrated “Math Mode” library offers a natural language description
of the mathematical content within a document, which is not done by other screen readers.
This feature enables PVIs to find and fix semantic errors in equations as well. We
discuss the design choices in Web-ALAP and evaluate its features with 10 visually
impaired participants. The results show that users perform better with Web-ALAP than
with another web-based LaTeX editor (Overleaf).

SESSION: UX Panel

User Experience (UX) Panel: Lockdown Experiences

  • Jan McDonald and Carly Davey
  • Roobi Bernareggi
  • Sannah Gulamani
  • Inho Seo
  • Session Chairs: Abi Roper
  • Sergio Mascetti

The User Experience (UX) panel is a popular feature of the annual ASSETS conference experience. It seeks to provide a space for users with diverse backgrounds to share their individual experiences of accessing technology. The UX panel in 2020 was entitled: “Lockdown Experiences” and was aimed at exploring and understanding the challenges and opportunities that arose during the COVID-19 lockdown for people with disabilities. Panellists Jan and Carly, Roobi, Sannah and Inho Seo joined chairs Abi and Sergio to share their experiences and perspectives.

SESSION: Paper Session 6: Navigating Open Roads & Spaces

Smooth Sailing? Autoethnography of Recreational Travel by a Blind Person

  • Kate Stephens
  • Matthew Butler
  • Leona M Holloway
  • Cagatay Goncu
  • Kim Marriott

We present an autoethnographic study of an independent blind traveller, Kate. It
recounts her preparation for a 28-day cruise and then her experience onboard the ship.
Her planning notes, field notes and travel diary were analysed in terms of five main
themes: information access, orientation and mobility, tools and technology, cultural
and societal issues, and person-centred issues. This analysis provides a deeply personal
account of the barriers – in particular information access, orientation and mobility,
and staff attitudes – that she faced, as well as the skills and tools that she used
to overcome these. A particular focus is Kate’s use of technologies to access visual
information and the provision of accessible maps and models before the trip to help
her build a cognitive map of the ship’s layout.

Travelling more independently: A Requirements Analysis for Accessible Journeys to
Unknown Buildings for People with Visual Impairments

  • Christin Engel
  • Karin Müller
  • Angela Constantinescu
  • Claudia Loitsch
  • Vanessa Petrausch
  • Gerhard Weber
  • Rainer Stiefelhagen

It is much more difficult for people with visual impairments to plan and implement
a journey to unknown places than for sighted people, because in addition to the usual
travel arrangements, they also need to know whether the different parts of the travel
chain are accessible at all. The need for information is therefore presumably very
high, ranging from knowledge about the accessibility of public transport to that of
outdoor and indoor environments. However, to the best of our knowledge, there is
no study that examines in-depth requirements of both the planning of a trip and its
implementation, looking separately at the various special needs of people with low
vision and blindness. In this paper, we present a survey with 106 people with visual
impairments, in which we examine the strategies they use to prepare for a journey
to unknown buildings, how they orient themselves in unfamiliar buildings and what
materials they use. Our analysis shows that requirements for people with blindness
and low vision differ. The feedback from the participants reveals that there is a
large information gap, especially for orientation in buildings, regarding maps, accessibility
of buildings and supporting systems. In particular, there is a lack of availability
of indoor maps.

Understanding In-Situ Use of Commonly Available Navigation Technologies by People
with Visual Impairments

  • Vaishnav Kameswaran
  • Alexander J. Fiannaca
  • Melanie Kneisel
  • Amy Karlson
  • Edward Cutrell
  • Meredith Ringel Morris

Despite the large body of work in accessibility concerning the design of novel navigation
technologies, little is known about commonly available technologies that people with
visual impairments currently use for navigation. We address this gap with a qualitative
study consisting of interviews with 23 people with visual impairments, ten of whom
also participated in a follow-up diary study. We develop the idea of complementarity
first introduced by Williams et al. [53] and find that in addition to using apps to
complement mobility aids, technologies and apps complemented each other and filled
in for the gaps inherent in one another. Furthermore, the complementarity between
apps and other apps/aids was primarily the result of differences in the information
provided and in the modalities in which this information is communicated by apps,
technologies, and mobility aids. We propose design recommendations to enhance this complementarity and guide
the development of improved navigation experiences for people with visual impairments.

PLACES: A Framework for Supporting Blind and Partially Sighted People in Outdoor Leisure
Activities

  • Maryam Bandukda
  • Catherine Holloway
  • Aneesha Singh
  • Nadia Berthouze

Interacting with natural environments such as parks and the countryside improves health
and wellbeing. These spaces allow for exercise, relaxation, socialising and exploring
nature; however, they are often not used by blind and partially sighted people (BPSP).
To better understand the needs of BPSP for outdoor leisure experience and barriers
encountered in planning, accessing and engaging with natural environments, we conducted
an exploratory qualitative online survey (22 BPSP), semi-structured interviews (20
BPSP) and a focus group (9 BPSP; 1 support worker). We also explored how current technologies
support park experiences for BPSP. Our findings identify common barriers across the
stages of planning (e.g. limited accessible information about parks), accessing (e.g.
poor wayfinding systems), engaging with and sharing leisure experiences. Across all
stages (PLan, Access, Engage, Share) we found a common theme of Contribute. BPSP wished
to co-plan their trip, contribute to ways of helping others access a place, develop
multisensory approaches to engaging in their surroundings and share their experiences
to help others. In this paper, we present the initial work supporting the development
of a framework for understanding the leisure experiences of BPSP. We explore this
theme of contribution and propose a framework where this feeds into each of the stages
of leisure experience, resulting in the proposed PLACES framework (PLan, Access,
Contribute, Engage, Share), which aims to provide a foundation for future research
on accessibility and outdoor leisure experiences for BPSP and people with disabilities.

SESSION: Paper Session 7: Privacy & Other Considerations for AI-Based Accessibility

SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness
for Deaf and Hard of Hearing Users

  • Dhruv Jain
  • Hung Ngo
  • Pratyush Patel
  • Steven Goodman
  • Leah Findlater
  • Jon Froehlich

Smartwatches have the potential to provide glanceable, always-available sound feedback
to people who are deaf or hard of hearing. In this paper, we present a performance
evaluation of four low-resource deep learning sound classification models: MobileNet,
Inception, ResNet-lite, and VGG-lite across four device architectures: watch-only,
watch+phone, watch+phone+cloud, and watch+cloud. While direct comparison with prior
work is challenging, our results show that the best model, VGG-lite, performed similarly
to the state of the art for non-portable devices, with an average accuracy of 81.2%
(SD=5.8%) across 20 sound classes and 97.6% (SD=1.7%) across the three highest-priority
sounds. For device architectures, we found that the watch+phone architecture provided
the best balance between CPU, memory, network usage, and classification latency. Based
on these experimental results, we built and conducted a qualitative lab evaluation
of a smartwatch-based sound awareness app, called SoundWatch (Figure 1), with eight
DHH participants. Qualitative findings show support for our sound awareness app but
also uncover issues with misclassifications, latency, and privacy concerns. We close
by offering design considerations for future wearable sound awareness technology.

Visual Content Considered Private by People Who are Blind

  • Abigale Stangl
  • Kristina Shiroma
  • Bo Xie
  • Kenneth R. Fleischmann
  • Danna Gurari

We present an empirical study into the visual content people who are blind consider
to be private. We conduct a two-stage interview with 18 participants that identifies
what they deem private in general and with respect to their use of services that describe
their visual surroundings based on camera feeds from their personal devices. We then
describe a taxonomy of private visual content that is reflective of our participants’
privacy-related concerns and values. We discuss how this taxonomy can benefit services
that collect and sell visual data containing private information so such services
are better aligned with their users.

Privacy Considerations of the Visually Impaired with Camera Based Assistive Technologies:
Misrepresentation, Impropriety, and Fairness

  • Taslima Akter
  • Tousif Ahmed
  • Apu Kapadia
  • Swami Manohar Swaminathan

Camera based assistive technologies such as smart glasses can provide people with
visual impairments (PVIs) information about people in their vicinity. Although such
‘visually available’ information can enhance one’s social interactions, the privacy
implications for bystanders from the perspective of PVIs remain underexplored. Motivated
by prior findings of bystanders’ perspectives, we conducted two online surveys with
visually impaired (N=128) and sighted (N=136) participants with two ‘field-of-view’
(FoV) experimental conditions related to whether information about bystanders was
gathered from the front of the glasses or from all directions. We found that PVIs considered
it ‘fair’ and equally useful to receive information from all directions. However,
they reported being uncomfortable in receiving some visually apparent information
(such as weight and gender) about bystanders as they felt it was ‘impolite’ or ‘improper’.
Both PVIs and bystanders shared concerns about the fallibility of AI, where bystanders
can be misrepresented by the devices. Our findings suggest that beyond issues of social
stigma, both PVIs and bystanders have shared concerns that need to be considered to
improve the social acceptability of camera based assistive technologies.

Exploring Collection of Sign Language Datasets: Privacy, Participation, and Model
Performance

  • Danielle Bragg
  • Oscar Koller
  • Naomi Caselli
  • William Thies

As machine learning algorithms continue to improve, collecting training data becomes
increasingly valuable. At the same time, increased focus on data collection may introduce
compounding privacy concerns. Accessibility projects in particular may put vulnerable
populations at risk, as disability status is sensitive, and collecting data from small
populations limits anonymity. To help address privacy concerns while maintaining algorithmic
performance on machine learning tasks, we propose privacy-enhancing distortions of
training datasets. We explore this idea through the lens of sign language video collection,
which is crucial for advancing sign language recognition and translation. We present
a web study exploring signers’ concerns in contributing to video corpora and their
attitudes about using filters, and a computer vision experiment exploring sign language
recognition performance with filtered data. Our results suggest that privacy concerns
may exist in contributing to sign language corpora, that filters (especially expressive
avatars and blurred faces) may impact willingness to participate, and that training
on more filtered data may boost recognition accuracy in some cases.

A Mobile Cloud Collaboration Fall Detection System Based on Ensemble Learning

  • Tong Wu
  • Yang Gu
  • Yiqiang Chen
  • Jiwei Wang
  • Siyu Zhang

Falls are one of the major causes of accidental or unintentional injury death worldwide.
Therefore, this paper proposes a reliable fall detection algorithm and a mobile cloud
collaboration system for fall detection. The algorithm is an ensemble learning method
based on decision trees, named Fall-detection Ensemble Decision Tree (FEDT). The mobile
cloud collaboration system is composed of three stages: 1) mobile stage: a lightweight
threshold method is used to filter out activities of daily living (ADLs); 2) collaboration
stage: the TCP protocol is used to transmit data to the cloud, where features are extracted;
3) cloud stage: the model trained with FEDT is deployed to produce the final detection
result from the extracted features. Experiments show that the proposed FEDT outperforms
the other methods by 1-3% in both sensitivity and specificity and has superior
robustness across different devices.
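
The three-stage pipeline described above (an on-device threshold filter, cloud-side feature
extraction, and an ensemble of decision trees) can be sketched roughly as follows, assuming
accelerometer windows of shape (samples, 3). The threshold value, the feature set, and the
use of a scikit-learn random forest are illustrative assumptions, not the authors' FEDT
implementation or its published features:

    # Illustrative sketch only: the threshold value, feature set, and random-forest
    # stand-in below are assumptions for exposition, not the authors' FEDT algorithm.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    ACC_THRESHOLD_G = 2.0  # assumed mobile-stage trigger on peak acceleration magnitude

    def mobile_stage(window: np.ndarray) -> bool:
        """1) Mobile stage: cheap on-device filter that discards ADL-like windows."""
        magnitude = np.linalg.norm(window, axis=1)          # window shape: (samples, 3 axes)
        return float(magnitude.max()) >= ACC_THRESHOLD_G    # True -> transmit to cloud

    def extract_features(window: np.ndarray) -> np.ndarray:
        """2) Collaboration stage: features computed in the cloud (simple statistics here)."""
        m = np.linalg.norm(window, axis=1)
        return np.array([m.max(), m.min(), m.mean(), m.std()])

    # 3) Cloud stage: an ensemble of decision trees (a random forest here) stands in for FEDT.
    ensemble = RandomForestClassifier(n_estimators=25, max_depth=5, random_state=0)

    def train(windows, labels):
        X = np.stack([extract_features(w) for w in windows])
        ensemble.fit(X, labels)                             # labels: 1 = fall, 0 = ADL

    def detect_fall(window: np.ndarray) -> bool:
        if not mobile_stage(window):                        # filtered out on the phone
            return False
        features = extract_features(window).reshape(1, -1)
        return bool(ensemble.predict(features)[0])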

SESSION: Paper Session 8: Production Tools

How Blind and Visually Impaired Composers, Producers, and Songwriters Leverage and
Adapt Music Technology

  • William Christopher Payne
  • Alex Yixuan Xu
  • Fabiha Ahmed
  • Lisa Ye
  • Amy Hurst

Today, music creation software and hardware are central to the workflow of most professional
composers, producers, and songwriters. Music is an aural art form, but it is notated
graphically, and highly visual mainstream technologies pose significant accessibility
barriers to blind and visually impaired users. Very few studies address the current
state of accessibility in music technologies, and fewer propose alternative designs.
To address a lack of understanding about the experiences of blind and visually impaired
music technology users, we conducted an interview study with 11 music creators who,
we demonstrate, find ingenious workarounds to bend inaccessible technologies to their
needs, but still face persistent barriers including a lack of options, a limited but
persistent need for sighted help, and accessibility features that fail to cover all
use cases. We reflect on our findings and present opportunities and guidelines to
promote more inclusive design of future music technologies.

Understanding Audio Production Practices of People with Vision Impairments

  • Abir Saha
  • Anne Marie Piper

The advent of digital audio workstations and other digital audio tools has brought
a critical shift in the audio industry by empowering amateur and professional audio
content creators with the necessary means to produce high quality audio content. Yet,
we know little about the accessibility of widely used audio production tools for people
with vision impairments. Through interviews with 18 audio professionals and hobbyists
with vision impairments, we find that accessible audio production involves: piecing
together accessible and efficient workflows through a combination of mainstream and
custom tools; achieving professional competency through a steep learning curve in
which domain knowledge and accessibility are inseparable; and facilitating learning
and creating access by engaging in online communities of visually impaired audio enthusiasts.
We discuss the deep entanglement between accessibility and professional competency
and conclude with design considerations to inform future development of accessible
audio production tools.

Value beyond function: analyzing the perception of wheelchair innovations in Kenya

  • Giulia Barbareschi
  • Sibylle Daymond
  • Jake Honeywill
  • Aneesha Singh
  • Dominic Noble
  • Nancy N. Mbugua
  • Ian Harris
  • Victoria Austin
  • Catherine Holloway

Innovations in the field of assistive technology are usually evaluated based on practical
considerations related to their ability to perform certain functions. However, social
and emotional aspects play a huge role in how people with disabilities interact with
assistive products and services. Over a five-month period, we tested an innovative
wheelchair service provision model that leverages 3D printing and Computer Aided Design
to provide bespoke wheelchairs in Kenya. The study involved eight expert wheelchair
users and five healthcare professionals who routinely provide wheelchair services
in their community. Results from the study show that both users and providers attributed
great value to both the novel service delivery model and the wheelchairs produced
as part of the study. The reasons for their appreciation went far beyond the practical
considerations and were rooted in the fact that the service delivery model and the
wheelchairs promoted core values of agency, empowerment and self-expression.

Haptic and Auditive Mesh Inspection for Blind 3D Modelers

  • Sebastian Lieb
  • Benjamin Rosenmeier
  • Thorsten Thormählen
  • Knut Buettner

Constructive geometry programming languages, such as OpenSCAD, are often used by
blind 3D modelers because a text-based interface is accessible with established technology,
such as refreshable braille lines or text-to-speech synthesizers. However, there is
currently no direct feedback for blind users to check a constructed 3D mesh object
for errors. This prevents a work pipeline independent from support by a sighted person.
In this paper, we present a system that gives blind modelers an audio-haptic preview
of their 3D object using an inexpensive haptic system with a single end effector.
We first define a baseline approach that contains state-of-the-art features for an
audio-haptic system, which we call ”free mode”. We then propose a novel guided mode
that automatically moves the end effector along the contour of the orthographic projection
of the model and improves the perception of the absolute position on the contour through
audio clues. In a user study with total and near-total blind participants, the novel
guided mode improves the users’ identification task by 34 percent compared to the
baseline system. Furthermore, 3D models created with our full audio-haptic system
contain significantly fewer mistakes and are rated on average 28 percent better than
models created with the baseline system and 84 percent better than models without
any audio-haptic preview.

Accessible Creativity with a Comic Spin

  • Carla Tamburro
  • Timothy Neate
  • Abi Roper
  • Stephanie Wilson

Creativity and humour allow people to be expressive and to address topics which they
might otherwise avoid or find deeply uncomfortable. One such way to express these
sentiments is via comics. Comics have a highly-visual format with relatively little
language. They therefore offer a promising opportunity for people who experience challenges
with language to express creativity and humour. Most comic tools, however, are not
accessible to people with language impairments. In this paper we describe Comic Spin,
a comic app designed for people with aphasia. Comic Spin builds upon the literature
on supporting creativity by constraining the creative space. We report both the design
process and the results of a creative workshop where people with aphasia used Comic
Spin. Participants were not only successful in using the app, but were able to create
a range of narrative, humorous and subversive comics.

SESSION: Paper Session 9: Understanding Access Barriers

“Maps are hard for me”: Identifying How Older Adults Struggle with Mobile Maps

  • Ja Eun Yu
  • Debaleena Chattopadhyay

Despite a global upward trend in mobile device ownership, older adults continue to
use few applications and fewer features. For example, besides directions, maps provide
information about public transit, traffic, and amenities. Mobile maps can help older
adults navigate independently, access city facilities, and explore new places. But
how accessible are current mobile maps to older adults? In this paper, we present
results from a qualitative study examining how older adults use mobile maps and the
difficulties they encounter. We identified and categorized 172 problems across 17
older adults (ages 60+). Results indicate that non-motor issues were more difficult
to mitigate than motor issues and led to the greatest frustration and resignation. These
non-motor issues stemmed from three factors: inadequate visual saliency, ambiguous
affordances, and low information scent, which made it difficult for older adults to notice,
use, and infer, respectively. We propose two design solutions to address these non-motor
issues.

The Role of Sensory Changes in Everyday Technology use by People with Mild to Moderate
Dementia

  • Emma Dixon
  • Amanda Lazar

Technology design for dementia primarily focuses on cognitive needs. This includes
providing task support, accommodating memory changes, and simplifying interfaces by
reducing complexity. However, research has demonstrated that dementia affects not
only the cognitive abilities of people with dementia, but also their sensory and motor
abilities. This work provides a first step towards understanding the interaction between
sensory changes and technology use by people with dementia through interviews with
people with mild to moderate dementia and practitioners. Our analysis yields an understanding
of strategies to use technology to overcome sensory changes associated with dementia
as well as barriers to using certain technologies. We present new directions for the
design of technologies for people with mild to moderate dementia, including intentional
sensory stimulation to facilitate comprehension, as well as opportunities to apply
advances in technology design for other disabilities to dementia.

Sense and Accessibility: Understanding People with Physical Disabilities’ Experiences with Sensing Systems

  • Shaun K. Kane
  • Anhong Guo
  • Meredith Ringel Morris

Sensing technologies that implicitly and explicitly mediate digital experiences are
an increasingly pervasive part of daily living; it is vital to ensure that these technologies
work appropriately for people with physical disabilities. We conducted an online survey
with 40 adults with physical disabilities, gathering open-ended descriptions of
respondents’ experiences with a variety of sensing systems, including motion sensors,
biometric sensors, speech input, as well as touch and gesture systems. We present
findings regarding the many challenges that status quo sensing systems present for people
with physical disabilities, as well as the ways in which our participants responded
to these challenges. We conclude by reflecting on the significance of these findings
for defining a future research agenda for creating more inclusive sensing systems.

“I just went into it assuming that I wouldn’t be able to have the full experience”: Understanding the Accessibility of Virtual Reality for People with Limited Mobility

  • Martez Mott
  • John Tang
  • Shaun Kane
  • Edward Cutrell
  • Meredith Ringel Morris

Virtual reality (VR) has the potential to transform many aspects of our daily lives,
including work, entertainment, communication, and education. However, there has been
little research into understanding the usability of VR for people with mobility limitations.
In this paper, we present the results of an exploration to understand the accessibility
of VR for people with limited mobility. We conducted semi-structured interviews with
16 people with limited mobility about their thoughts on, and experiences with, VR
systems. We identified 7 barriers related to the physical accessibility of VR devices
that people with limited mobility might encounter, ranging from the initial setup
of a VR system to keeping VR controllers in view of cameras embedded in VR headsets.
We also elicited potential improvements to VR systems that would address some accessibility
concerns. Based on our findings, we discuss the importance of considering the abilities
of people with limited mobility when designing VR systems, as the abilities of many
participants did not match the assumptions embedded in the design of current VR systems.

Teacher Views of Math E-learning Tools for Students with Specific Learning Disabilities

  • Zikai Alex Wen
  • Erica Silverstein
  • Yuhang Zhao
  • Anjelika Lynne Amog
  • Katherine Garnett
  • Shiri Azenkot

Many students with specific learning disabilities (SLDs) have difficulty learning
math. To succeed in math, they need to receive personalized support from teachers.
Recently, math e-learning tools that provide personalized math skills training have
gained popularity. However, we know little about how well these tools help teachers
personalize instruction for students with SLDs. To answer this question, we conducted
semi-structured interviews with 12 teachers who taught students with SLDs in grades
five to eight. We found that participants used math e-learning tools that were not
designed specifically for students with SLDs. Participants had difficulty using these
tools because of text-intensive user interfaces, insufficient feedback about student
performance, inability to adjust difficulty levels, and problems with setup and maintenance.
Participants also needed assistive technology for their students, but they had challenges
in getting and using it. From our findings, we distilled design implications to help
shape the design of more inclusive and effective e-learning tools.

Reading Experiences and Interest in Reading-Assistance Tools Among Deaf and Hard-of-Hearing
Computing Professionals

  • Oliver Alonzo
  • Lisa Elliot
  • Becca Dingman
  • Matt Huenerfauth

Automatic Text Simplification (ATS) software replaces text with simpler alternatives.
While some prior research has explored its use as a reading assistance technology,
including some empirical findings suggesting benefits for deploying this technology
among particular groups of users, relatively little work has investigated the interest
and requirements of specific groups of users of this technology. In this study, we
investigated the interests of Deaf and Hard-of-Hearing (DHH) individuals in the computing
industry in ATS-based reading assistance tools, motivated by prior work establishing
that computing professionals often need to read about new technologies in order to
stay current in their profession. Through a survey and follow-up interviews, we investigate
these DHH individuals’ reading practices, current techniques for overcoming complicated
text, and their interest in reading assistance tools for their work. Our results suggest
that these users read relatively often, especially in support of their work, and they
were interested in tools to assist them with complicated texts. This empirical contribution
provides motivation for further research into ATS-based reading assistance tools for
these users, prioritizing the reading activities in which users are most interested
in seeing this technology applied, as well as some insights into design considerations
for such tools.

SESSION: Paper Session 10: Design Matters

Lessons Learned in Designing AI for Autistic Adults

  • Andrew Begel
  • John Tang
  • Sean Andrist
  • Michael Barnett
  • Tony Carbary
  • Piali Choudhury
  • Edward Cutrell
  • Alberto Fung
  • Sasa Junuzovic
  • Daniel McDuff
  • Kael Rowan
  • Shibashankar Sahoo
  • Jennifer Frances Waldern
  • Jessica Wolk
  • Hui Zheng
  • Annuska Zolyomi

Through an iterative design process using Wizard of Oz (WOz) prototypes, we designed
a video calling application for people with Autism Spectrum Disorder. Our Video Calling
for Autism prototype provided an Expressiveness Mirror that gave feedback to autistic
people on how their facial expressions might be interpreted by their neurotypical
conversation partners. This feedback was in the form of emojis representing six emotions
and a bar indicating the amount of overall expressiveness demonstrated by the user.
However, when we built a working prototype and conducted a user study with autistic
participants, their negative feedback caused us to reconsider how our design process
led to a prototype that they did not find useful. We reflect on the design challenges
around developing AI technology for an autistic user population, how Wizard of Oz
prototypes can be overly optimistic in representing AI-driven prototypes, how autistic
research participants can respond differently to user experience prototypes of varying
fidelity, and how designing for people with diverse abilities needs to include that
population in the development process.

The TalkingBox: Revealing Strengths of Adults with Severe Cognitive Disabilities

  • Filip Bircanin
  • Laurianne Sitbon
  • Bernd Ploderer
  • Andrew Azaabanye Bayor
  • Michael Esteban
  • Stewart Koplick
  • Margot Brereton

In this paper, we present a case study of the iterative design of TalkingBox, a communication
device designed with a person with a severe cognitive disability and his support network.
TalkingBox combines graphic symbols with tangible technology to foster the use of
symbolic communication by leveraging the person’s strength and interest in memory
matching games. In the course of designing, trialing and iterating the TalkingBox,
we discovered that the design supported not only the development of symbolic communication,
but also revealed new interests and strengths of our participant. TalkingBox highlighted
opportunities for interactions with peers, revealed new skills in visual discrimination,
and evidenced his interests. These could, in turn, support staff and family in adapting
their support. More importantly, TalkingBox became a living portfolio that presents
our participant through the lens of his strengths. We discuss
opportunities for research through co-design to open new avenues for future communication
technologies.

Lessons from Expert Focus Groups on how to Better Support Adults with Mild Intellectual
Disabilities to Engage in Co-Design

  • Ryan Colin Gibson
  • Mark D. Dunlop
  • Matt-Mouley Bouamrane

Co-design techniques generally rely upon higher-order cognitive skills, such as abstraction
and creativity, meaning they may be inaccessible to people with intellectual disabilities
(ID). Consequently, investigators must adjust the methods employed throughout their
studies to ensure the complex needs of people with ID are appropriately catered to.
Yet, there is a lack of guidelines to support researchers in this process, with previous
literature often neglecting to discuss the decisions made during the development of
their study protocols. We propose a new procedure to overcome this lack of support,
by utilizing the knowledge of “experts” in ID to design a more accessible workshop
for the target population. 12 experts across two focus groups were successful in identifying
accessibility barriers throughout a set of typical early co-design activities. Recommendations
to overcome these barriers are discussed along with lessons on how to better support
people with ID to engage in co-design.

Inclusive improvisation through sound and movement mapping: from DMI to ADMI

  • Alon Ilsar
  • Gail Kenning

The field of Accessible Digital Musical Instruments (ADMIs) is growing rapidly, with
instrument designers recognising that adaptations to existing Digital Musical Instruments
(DMIs) can foster inclusive music making. ADMIs offer opportunities to engage with
a wider range of sounds than acoustic instruments. Furthermore, gestural ADMIs free
the music maker from relying on screen, keyboard and mouse-based interfaces for engaging
with these sounds. This brings greater opportunities for exploration, improvisation,
empowerment and flow through music making for people living with disabilities. This
paper presents a case study of a gestural DMI invented by the first author and shows
that the system-based considerations that enabled an expert percussionist to achieve
virtuoso performances with the instrument required only minimal hardware and software
changes to facilitate greater inclusivity. Understanding the needs of the users and
customising the system-based movement to sound mappings was of far greater importance
in making the instrument accessible.

Bridging the Divide: Exploring the use of digital and physical technology to aid mobility
impaired people living in an informal settlement

  • Giulia Barbareschi
  • Ben Oldfrey
  • Long Xin
  • Grace Nyachomba Magomere
  • Wycliffe Ambeyi Wetende
  • Carol Wanjira
  • Joyce Olenja
  • Victoria Austin
  • Catherine Holloway

Living in informality is challenging. It is even harder when you have a mobility impairment.
Traditional assistive products such as wheelchairs are essential to enable people
to travel. Wheelchairs are considered a Human Right. However, they are difficult to
access. On the other hand, mobile phones are becoming ubiquitous and are increasingly
seen as an assistive technology. Should a mobile phone therefore be considered a Human
Right? To help understand the role of the mobile phone in contrast to a more traditional
assistive technology – the wheelchair, we conducted contextual interviews with eight
mobility impaired people who live in Kibera, a large informal settlement in Nairobi.
Our findings show mobile phones act as an accessibility bridge when physical accessibility
becomes too challenging. We explore our findings from two perspectives – human infrastructure
and interdependence, contributing an understanding of the role supported interactions
play in enabling both the wheelchair and the mobile phone to be used. This further
demonstrates the critical nature of designing for context and understanding the social
fabric that characterizes informal settlements. It is this social fabric which enables
the technology to be usable.

Chat in the Hat: A Portable Interpreter for Sign Language Users

  • Larwan Berke
  • William Thies
  • Danielle Bragg

Many Deaf and Hard-of-Hearing (DHH) individuals rely on sign language interpreting
to communicate with hearing peers. If on-site interpreting is not available, DHH individuals
may use remote interpreting over a smartphone video-call. However, this solution requires
the DHH individual to give up either 1) the use of one signing hand by holding the
smartphone or 2) their ability to multitask and move around by propping the smartphone
up in a fixed location. We explore this problem within the context of the workplace,
and present a prototype hands-free device using augmented reality glasses with a hat-mounted
fisheye camera and mic/speaker. To explore the validity of our design, we conducted
1) a video interpretability experiment, and 2) a user study with 18 participants (9
DHH, 9 hearing) in a workplace environment. Our results suggest that a hands-free
device can support accurate interpretation while enhancing personal interactions.

SESSION: Posters

#ActuallyAutistic Sense-Making on Twitter

  • Annuska Zolyomi
  • Ridley Jones
  • Tomer Kaftan

Autistic individuals engage in sense-making as they seek to better understand themselves
and relate to others within a society formed by neuro-typical social norms. Our research
examines the ways in which autistic individuals engage in sense-making activities
about autism on Twitter. We collected autism-oriented Twitter conversations and the
Twitter profile data of people participating in those conversations. Our research contributes
empirical evidence demonstrating that autistic sense-making on Twitter is constituted
by (1) engaging in dynamic discussions of life experiences, (2) countering stigma
with actions of advocacy, and (3) enacting neuro-atypical social norms.

A Navigation Method for Visually Impaired People: Easy to Imagine the Structure of
the Stairs

  • Asuka Miyake
  • Misa Hirao
  • Mitsuhiro Goto
  • Chihiro Takayama
  • Masahiro Watanabe
  • Hiroya Minami

People with visual impairments or blindness (VIB) face many problems when they enter
unfamiliar areas by themselves. To address this problem, we aim to enable people with
VIB to walk alone, even in unfamiliar areas. We propose a navigation method that helps
people with VIB easily imagine structures such as staircases and thus move safely
when walking alone, even in unfamiliar areas. We conducted an experiment in which six
participants with VIB walked up or down stairs with four different structures in an
indoor environment. The results verify that the proposed method provides appropriate
amounts of guidance and conveys the messages in a safer manner than the existing method.

A Preliminary Study on Understanding Voice-only Online Meetings Using Emoji-based
Captioning for Deaf or Hard of Hearing Users

  • Kotaro Oomori
  • Akihisa Shitara
  • Tatsuya Minagawa
  • Sayan Sarcar
  • Yoichi Ochiai

In the midst of the coronavirus disease 2019 pandemic, online meetings are rapidly
increasing. Deaf or hard of hearing (DHH) people participating in an online meeting
often face difficulties in capturing the affective states of other speakers. Recent
studies have shown the effectiveness of emoji-based representation of spoken text
to capture such affective states. Nevertheless, in voice-only online meetings, it
is still not clear how emoji-based spoken texts can assist DHH people to understand
the feelings of speakers without perceiving their facial expressions. We therefore
conducted a preliminary experiment to understand the effect of emoji-based text representation
during voice-only online meetings by leveraging an emoji-based captioning system.
Our preliminary results demonstrate the need for a more advanced system
to help DHH people understand voice-only online meetings more meaningfully.
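
The general idea of emoji-based captioning can be pictured as attaching an emoji to
each caption line according to an estimated affect label. The sketch below is purely
illustrative: the affect labels and the emoji mapping are placeholders, and a real
system would infer the affect from the speech itself rather than take it as an input.

    # Hypothetical sketch of emoji-augmented captioning: given a caption line and
    # an estimated affect label (which a real system would infer from the speech),
    # append a matching emoji. Labels and the mapping are placeholders.

    AFFECT_EMOJI = {
        "joy": "😄",
        "surprise": "😮",
        "sadness": "😢",
        "anger": "😠",
        "neutral": "",
    }

    def emoji_caption(text, affect):
        emoji = AFFECT_EMOJI.get(affect, "")
        return f"{text} {emoji}".rstrip()

    print(emoji_caption("That result is amazing", "joy"))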

A Review of Literature on Accessibility and Authentication Techniques

  • Sarah Andrew
  • Stacey Watson
  • Tae Oh
  • Garreth W. Tigwell

Reliable and accessible authentication techniques are required to maintain privacy
and security. This is paramount as technology plays an increasing role in our lives.
In this paper, we examine the previous work on accessible authentication techniques
for blind/low vision people, deaf/hard-of-hearing people, people with cognitive impairments,
and people with motor impairments. We seek to identify gaps in the current research
to advocate where future efforts are needed to create authentication techniques that
will work for everyone. We found considerable variability in prior work investigating
the accessibility of authentication techniques, as well as shortfalls and gaps in the
literature. We make recommendations on the directions future research should take.

An Exploratory Study on Supporting Persons with Aphasia in Pakistan: Challenges and
Opportunities

  • Waleed Riaz
  • Gulraiz Ali
  • Momina Abid
  • Izma Naim Butt
  • Anas Shahzad
  • Suleman Shahid

Aphasia is one of the major problems faced by older individuals today, and there is
an increasing amount of research aimed at developing technology-based interventions
for persons with aphasia (PwA). In developing countries such as
Pakistan, aphasia patients face multiple problems and challenges due to a weak healthcare
system and a severe lack of facilities and available resources. This work is an effort
to understand the needs, problems and challenges of PwA in order to provide more effective
support. In recent years, Virtual Reality (VR) has been the focus of much research,
due to its deep immersive environments, and has shown great promise in improving language
production and speech comprehension among aphasia patients. This study aims to evaluate
the impact of VR enrichment activities on language production and speech comprehension
of PwA. It will also explore the opportunities and challenges of using VR experiences
with PwA through a thematic analysis of the recorded observations made during the
system evaluation.

An OER Recommender System Supporting Accessibility Requirements

  • Mirette Elias
  • Mohammadreza Tavakoli
  • Steffen Lohmann
  • Gabor Kismihok
  • Sören Auer

Open Educational Resources (OERs) are becoming a significant source of learning and are
widely used for various educational purposes and levels. Learners have diverse backgrounds
and needs, especially when it comes to learners with accessibility requirements. Persons
with disabilities have significantly lower employment rates partly due to the lack
of access to education and vocational rehabilitation and training. It is therefore
not surprising that it is difficult to provide high-quality OERs that facilitate
self-development towards specific jobs and skills on the labor market while respecting
the particular preferences of learners with disabilities. In this paper, we introduce
a personalized OER recommender system that considers the skills, occupations, and accessibility properties
of learners to retrieve the most adequate and high-quality OERs. This is done by:
1) describing the profile of learners with disabilities, 2) collecting and analysing
more than 1,500 OERs, 3) filtering OERs based on their accessibility features and
predicted quality, and 4) providing personalised OER recommendations for learners
according to their accessibility needs. As a result, the OERs retrieved by our method
proved to satisfy more accessibility checks than other OERs. Moreover, we evaluated
our results with five experts in educating people with visual and cognitive impairments.
The evaluation showed that our recommendations are potentially helpful for learners
with accessibility needs.
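
As a rough sketch of the filtering and ranking idea described above, the following
Python snippet filters OER records by declared accessibility features and a predicted
quality score, then ranks the remainder against a learner profile. The record fields,
thresholds, and scoring rule are illustrative assumptions, not the system's actual
data model.

    # Hypothetical sketch of accessibility-aware OER filtering and ranking.
    # Field names, thresholds, and the scoring rule are illustrative only.

    def recommend_oers(oers, learner, min_quality=0.7, top_k=5):
        """Return the top_k OERs whose accessibility features cover the
        learner's needs and whose predicted quality passes a threshold."""
        candidates = []
        for oer in oers:
            features = set(oer.get("accessibility_features", []))
            if not set(learner["accessibility_needs"]) <= features:
                continue  # skip OERs missing a required accessibility feature
            if oer.get("predicted_quality", 0.0) < min_quality:
                continue  # skip OERs below the predicted-quality threshold
            # Simple relevance score: overlap between OER skills and target skills.
            overlap = len(set(oer.get("skills", [])) & set(learner["target_skills"]))
            candidates.append((overlap, oer["predicted_quality"], oer))
        candidates.sort(key=lambda c: (c[0], c[1]), reverse=True)
        return [oer for _, _, oer in candidates[:top_k]]

    if __name__ == "__main__":
        oers = [
            {"title": "Intro to Spreadsheets", "skills": ["data entry"],
             "accessibility_features": ["captions", "transcript"], "predicted_quality": 0.9},
            {"title": "Basic Bookkeeping", "skills": ["data entry", "accounting"],
             "accessibility_features": ["captions"], "predicted_quality": 0.8},
        ]
        learner = {"accessibility_needs": ["captions"], "target_skills": ["data entry"]}
        for oer in recommend_oers(oers, learner):
            print(oer["title"])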

CLEVR: A Customizable Interactive Learning Environment for Users with Low Vision in
Virtual Reality

  • Adrian H. Hoppe
  • Julia K. Anken
  • Thorsten Schwarz
  • Rainer Stiefelhagen
  • Florian van de Camp

Technological advances enable the development of new low vision aids. One such new
technology is virtual reality (VR). Existing VR applications offer a variety of adjustments
to the content and the user’s environment to support people with low vision. Yet,
an interaction concept to conveniently activate and control these aids is missing.
Therefore, we designed and implemented an interaction concept based on user and expert
feedback and evaluated it in a user study. Our application offers various aids to
support residual vision and a radial menu for intuitive use of these aids. The results
of the user study show that the VR application allows simple task solving comparable
to a desktop computer. Furthermore, the user study gives an insight into the effects
of different visual impairments on the usage of VR, showing that the field of view has
a larger impact than visual acuity. Overall, our results indicate that VR is a suitable
aid to support the individual needs of users with low vision.

Co-Designing Accessible Science Education Simulations with Blind and Visually-Impaired
Teens

  • R. Michael Winters
  • E. Lynne Harden
  • Emily B. Moore

Design thinking is an approach to educational curriculum that builds empathy, encourages
ideation, and fosters active problem solving through hands-on design projects. Embedding
participatory “co-design” into design thinking curriculum offers students agency in
finding solutions to real-world design challenges, which may support personal empowerment.
An opportunity to explore this prospect arose in the design of sounds for an accessible
interactive science-education simulation in the PhET Project. Over the course of three
weeks, PhET researchers engaged blind and visually-impaired high-school students in
a design thinking curriculum that included the co-design of sounds and auditory interactions
for the Balloons and Static Electricity (BASE) sim. By the end of the curriculum,
students had iterated through all aspects of design thinking and performed a quantitative
evaluation of multiple sound prototypes. Furthermore, the group’s mean self-efficacy
rating had increased. We reflect on our curriculum and the choices we made that helped
enable the students to become authentic partners in sound design.

Constructive Visualization to Inform the Design and Exploration of Tactile Data Representations

  • Danyang Fan
  • Alexa Fay Siu
  • Sile O’Modhrain
  • Sean Follmer

As data visualization has become increasingly important in our society, many challenges
prevent people who are blind and visually impaired (BVI) from fully engaging with
data and data graphics. For example, tactile data representations are commonly used
by BVI people to explore spatial graphics, but it is difficult for BVI people to construct
and understand tactile representations without prior training or expert assistance.
In this work, we adopt a constructive visualization framework of using simple and
versatile tokens to engage non-experts in the construction of tactile data representations.
We present preliminary results of how participants chose to interpret and create tactile
data representations and the preferred haptic exploratory procedures used for retrieving
information. All participants used similar construction strategies and converged upon
3D compact spatial forms to retrieve and display analytical information. These insights
can inform future data visualization authoring and consumption tools that users of
more diverse skill backgrounds can effectively navigate.

Creating questionnaires that align with ASL linguistic principles and cultural practices
within the Deaf community

  • Rachel Boll
  • Shruti Mahajan
  • Jeanne Reis
  • Erin T. Solovey

Conducting human-centered research by, with, and for the ASL-signing Deaf community
requires rethinking current human-computer interaction processes in order to meet
their linguistic and cultural needs and expectations. This paper highlights some key
considerations that emerged in our work creating an ASL-based questionnaire, and our
recommendations for handling them.

Designing a Remote Framework to Create Custom Assistive Technologies

  • Veronica Alfaro Arias
  • Amy Hurst
  • Anita Perr

3D printing technologies can help individuals get customized assistive technologies
that increase independence. However, designing these technologies is complicated and
many users must rely on fabricators to translate their ideas and transform them into
customized tools. We have developed a design framework, inspired by IDEO’s Human-Centered
Design principles [6], to help organize end-users’ ideas, spark creativity, and engage
end-users in a collaborative design process. It consists of digital artifacts divided
into three main categories: explore, ideate, and create. We conducted virtual interviews
with four occupational therapists and two people with motor disabilities to understand
how end-users describe their abilities and needs in order to facilitate ideation and
collaboration with fabricators.

Designing an Assistant for the Disclosure and Management of Information about Needs
and Support: the ADMINS project

  • Francisco Iniesto
  • Tim Coughlan
  • Kate Lister
  • Wayne Holmes

In this paper, we describe accessible design considerations for the Assistants for
the Disclosure and Management of Information about Needs and Support project (ADMINS).
In ADMINS, artificial intelligence (AI) services are being used to create a virtual
assistant (VA), which is being designed to enable students to disclose any disabilities,
and to provide guidance and suggestions about appropriate accessible support. ADMINS
explores the potential of a conversational user interface (CUI) to reduce administrative
burden and improve outcomes, by replacing static forms with written and spoken dialogue.
Students with accessibility needs often face excessive administrative burden. A CUI
could be beneficial in this context if designed to be fully accessible. At the same
time, we recognise the broader potential of CUIs for these types of processes, and
the project aims to understand the multiple opportunities and challenges, using participatory
design, iterative development, and trial evaluations.

Designing Playful Activities to Promote Practice of Preposition Skills for Kids with
ASD

  • Advait Bhat

Children with autism spectrum disorder and other developmental disorders tend to
have difficulty in language and communication, especially in abstract language concepts
like prepositions. Existing clinical therapy methods are difficult to conduct at home.
In this paper, we present the design and process of translating an existing therapy
technique into a playful activity for children with ASD to practice prepositions.
The design was generated through a deductive process and grounded in theory and expert
evaluation. The aim is to increase overall compliance by making
the therapy activity more playful and fun.

EASIER system. Language resources for cognitive accessibility.

  • L. Moreno
  • R. Alarcon
  • P. Martínez

Difficulties in understanding texts that contain unusual words can create accessibility
barriers for people with cognitive, language and learning disabilities. In this work,
we present the EASIER system, a web system that provides various tools to improve
cognitive accessibility. Given a text in Spanish, the system detects complex words
and offers synonyms, a definition, and a pictogram for each one.
Language and accessibility resources were used, such as easy-to-read dictionaries.
The web system can be accessed from both desktop computers and mobile devices. Moreover,
a browser extension is also offered.
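
A minimal sketch of the lexical-simplification idea described above, using tiny
placeholder word lists and resources rather than EASIER's actual dictionaries:

    # Hypothetical sketch: flag words not in an easy-to-read vocabulary and offer
    # stored synonyms/definitions/pictograms. The word lists and resources here
    # are tiny placeholders, not EASIER's data.

    EASY_WORDS = {"el", "perro", "come", "la", "comida"}
    RESOURCES = {
        "canino": {"synonym": "perro", "definition": "animal doméstico que ladra",
                   "pictogram": "dog.png"},
    }

    def detect_complex_words(text):
        words = [w.strip(".,;:").lower() for w in text.split()]
        return [w for w in words if w and w not in EASY_WORDS]

    def support_for(word):
        return RESOURCES.get(word, {"synonym": None, "definition": None, "pictogram": None})

    for w in detect_complex_words("El canino come la comida."):
        print(w, support_for(w))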

Emergency navigation assistance for industrial plants workers subject to situational
impairment

  • Dragan Ahmetovic
  • Claudio Bettini
  • Mariano Ciucci
  • Filippo Dacarro
  • Paolo Dubini
  • Alberto Gotti
  • Gerard O’Reilly
  • Alessandra Marino
  • Sergio Mascetti
  • Denis Sarigiannis

This paper reports our ongoing effort in the development of the ROSSINI system, which
aims to address emergency situations in industrial plants. The user interaction design
of ROSSINI described in this paper takes into account the fact that the user can be
subject to situational impairment (e.g., limited sight due to smoke in the environment).
As such, it is envisioned that existing solutions designed for people with disabilities
can be adopted and extended for this purpose.

Ensuring Accessibility: Individual Video Playback Enhancements for Low Vision Users

  • Andreas Sackl
  • Franziska Graf
  • Raimund Schatz
  • Manfred Tscheligi

Although software products are becoming increasingly accessible and assistive tools
like screen readers are becoming widely available, people with low vision still face insufficient
support when it comes to consumption of digital video content. In this paper, we present
an accessible desktop video player software, which allows people with low vision to
adapt the presentation of digital videos according to their specific needs. For visual
enhancement, we implemented a broad range of image manipulation techniques, like adaptation
of contrast, color manipulation (e.g. inverting, grey scale transformations) and edge
detection algorithms for sharpness optimization. Based on the feedback from low vision
users, we discuss how to implement enhancement filter configuration and how to consider
several input modalities.

Evaluation of the acceptability and usability of Augmentative and Alternative Communication
(AAC) tools: the example of Pictogram grid communication systems with voice output.

  • Lucie Chasseur
  • Marion Dohen
  • Benjamin Lecouteux
  • Sébastien Riou
  • Amélie Rochet-Capellan
  • Didier Schwab

The multiplication of communication software based on pictogram grids with voice
output has led to the democratisation of this type of tool. To date, however, there
is no standard, nor systematic evaluation that makes it possible to objectively measure
the suitability of these tools for a given language. There are also no methods for
designers to improve the organisation of words into grids to optimise sentence production.
This paper is a first step in this direction. We represented the Proloquo2Go® Crescendo
vocabulary for a given grid size as a graph and computed the production cost of frequent
sentences in French. This cost depends on the physical distance between the pictograms
on a given page and navigation between pages. We discuss the value of this approach
for both the evaluation and the design of communicative pictogram grids.
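
As a rough illustration of this kind of cost model, the sketch below assumes a toy
layout in which each pictogram has a page, row, and column, and charges Manhattan
distance within a page plus a flat penalty per page change. The layout, distance
metric, and penalty are placeholders, not the authors' actual model.

    # Hypothetical production-cost sketch for a pictogram grid system.
    # Each pictogram is located by (page, row, column); the cost model
    # (Manhattan distance within a page + flat page-switch penalty) is an
    # illustrative assumption, not the metric used in the paper.

    PAGE_SWITCH_COST = 5  # assumed cost of navigating to another page

    # Toy layout: word -> (page, row, col)
    LAYOUT = {
        "je": (0, 0, 0), "veux": (0, 0, 1), "manger": (1, 2, 3), "pomme": (1, 2, 4),
    }

    def step_cost(a, b):
        """Cost of moving from pictogram a to pictogram b."""
        (pa, ra, ca), (pb, rb, cb) = LAYOUT[a], LAYOUT[b]
        page_cost = PAGE_SWITCH_COST if pa != pb else 0
        return page_cost + abs(ra - rb) + abs(ca - cb)

    def sentence_cost(words):
        """Total cost of producing a sentence as a sequence of pictogram selections."""
        return sum(step_cost(w1, w2) for w1, w2 in zip(words, words[1:]))

    print(sentence_cost(["je", "veux", "manger", "pomme"]))  # prints 11 with this toy layout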

Exploring Low-Cost Materials to Make Pattern-Based Lock-Screens Accessible for Users
with Visual Impairments or Deafblindness

  • Lea Buchweitz
  • Arthur Theil
  • James Gay
  • Oliver Korn

Nowadays, the vast majority of Europeans use smartphones. However, touch displays
are still not accessible to everyone. Individuals with deafblindness, for example,
often face difficulties in accessing vision-based touchscreens. Moreover, they typically
have few financial resources, which increases the need for customizable, low-cost assistive
devices. In this work-in-progress, we present four prototypes made from low-cost,
everyday materials that make modern pattern lock mechanisms more accessible to individuals
with visual impairments or deafblindness. Two out of four prototypes turned
out to be functional tactile overlays for accessing digital 4-by-4 grids that are
regularly used to encode dynamic dot patterns. In future work, we will conduct a user
study investigating whether these two prototypes can make dot-based pattern lock mechanisms
more accessible for individuals with visual impairments or deafblindness.

Functionality versus Inconspicuousness: Attitudes of People with Low Vision towards
OST Smart Glasses

  • Karst M.P. Hoogsteen
  • Sjoukje A. Osinga
  • Bea L.P.A. Steenbekkers
  • Sarit F.A. Szpiro

Recent advances in smart glasses technologies bear tremendous potential for people
with low vision. In particular, the use of optical see-through smart glasses has been
gaining momentum in the field. We examined how these devices are perceived by low
vision people and factors that might influence their wide-scale adoption. We conducted
semi-structured interviews with 29 low vision participants. We asked participants
about desired functionalities, aesthetics (including wearing in public versus in private),
preferred interaction mode, and willingness to carry support devices for increased
functionality. We found that the majority of participants in this study preferred
a compact device that looks most similar to a normal pair of glasses, preferred buttons
as an inconspicuous mode of interaction, and were willing to carry support devices
up to the size of a tablet to increase the functionality of the device. Our results
underscore the importance of striking a balance between functionality and aspects
such as inconspicuousness in terms of both aesthetics and device interaction, and
inform further development of this promising technology.

HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users
on a Head-mounted Display

  • Ru Guo
  • Yiru Yang
  • Johnson Kuang
  • Xue Bin
  • Dhruv Jain
  • Steven Goodman
  • Leah Findlater
  • Jon Froehlich

Head-mounted displays can provide private and glanceable speech and sound feedback
to deaf and hard of hearing people, yet prior systems have largely focused on speech
transcription. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype
that uses deep learning to classify and visualize sound identity and location in addition
to providing speech transcription. This poster paper presents a working proof-of-concept
prototype, and discusses future opportunities for advancing AR-based sound awareness.

IncluSet: A Data Surfacing Repository for Accessibility Datasets

  • Hernisa Kacorri
  • Utkarsh Dwivedi
  • Sravya Amancherla
  • Mayanka Jha
  • Riya Chanduka

Datasets and data sharing play an important role for innovation, benchmarking, mitigating
bias, and understanding the complexity of real world AI-infused applications. However,
there is a scarcity of available data generated by people with disabilities with the
potential for training or evaluating machine learning models. This is partially due
to smaller populations, disparate characteristics, lack of expertise for data annotation,
as well as privacy concerns. Even when data are collected and are publicly available,
it is often difficult to locate them. We present a novel data surfacing repository,
called IncluSet, that allows researchers and the disability community to discover
and link accessibility datasets. The repository is pre-populated with information
about 139 existing datasets: 65 made publicly available, 25 available upon request,
and 49 not shared by the authors but described in their manuscripts. More importantly,
IncluSet is designed to expose existing and new dataset contributions so they may
be discoverable through Google Dataset Search.
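
Discoverability through Google Dataset Search relies on pages describing each dataset
with schema.org Dataset structured data. The sketch below builds such a JSON-LD
description for a placeholder record; the field values and URL are illustrative, not
an actual IncluSet entry.

    # Sketch: emit schema.org Dataset markup (JSON-LD) for a repository record,
    # the kind of structured data Google Dataset Search crawls. The example
    # record and URL are placeholders, not an actual IncluSet entry.
    import json

    def dataset_jsonld(name, description, url, creators):
        return {
            "@context": "https://schema.org",
            "@type": "Dataset",
            "name": name,
            "description": description,
            "url": url,
            "creator": [{"@type": "Person", "name": c} for c in creators],
        }

    record = dataset_jsonld(
        name="Example sign language video dataset",
        description="Videos collected with Deaf signers for gesture recognition research.",
        url="https://example.org/datasets/example-slr",
        creators=["A. Researcher"],
    )
    print(json.dumps(record, indent=2))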

Investigating Challenges Faced by Learners with Visual Impairments using Block-Based
Programming/Hybrid Environments

  • Aboubakar Mountapmbeme
  • Stephanie Ludi

With the increasing use of block-based programming environments in the K-12 curriculum,
accessibility is needed in order to serve all students. Accessible block-based
systems are in their infancy. Such systems would provide students with visual impairments
the opportunity to learn programming and take part in computational thinking activities
using the same systems that most sighted learners find appealing. However, despite
the availability of these systems, little is known about their long-term use in
educational settings. As a result, we conducted a survey with twelve teachers of students
with visual impairments to learn about the use of these systems in teaching their
students and to understand the barriers that students face in the learning process.
Our study reveals that only one block-based programming environment is common among
teachers and that several challenges exist. These challenges range from limited learner
preparedness, through difficulties editing and navigating code, to ineffective system
feedback.

Investigating the Use of Social Media in Supporting Children with Cognitive Disabilities
and Their Caregivers from Saudi Arabia

  • Reem N. Alshenaifi
  • Jinjuan Heidi Feng

This study investigates the use of social media in supporting and empowering Saudi
caregivers of children with cognitive disabilities. Through interviews with 13 caregivers,
we examined the motivations and concerns around using social media in relation to
their children or students’ conditions. We also investigated the role of social media
during the COVID-19 pandemic. We found that caregivers used social media with caution
to seek information and emotional support, to spread awareness, and to communicate
and build communities. Our findings also suggest that caregivers face considerable
challenges related to security and privacy, social stigma and negative discussions,
misinformation, and lack of resources.

Non-visual access to graphical information on COVID-19

  • Leona Holloway
  • Matthew Butler
  • Samuel Reinders
  • Kim Marriott

A critical component of the worldwide response to the novel coronavirus COVID-19 pandemic
has been providing the general public with information about the virus and the health
measures designed to slow its spread. Much of this information has been presented
as visual graphics. Have people who are blind or have low vision (BLV) been able to
gain access to this information through nonvisual media and, if so, how? We investigate
this issue using Australia as a case study.

Painting a Picture of Accessible Digital Art

  • Timothy Neate
  • Abi Roper
  • Stephanie Wilson

Visual creative forms, such as painting and sculpture, are a common expressive outlet
and offer an alternative to language-based expression. They are particularly beneficial
for those who find language challenging due to an impairment – for example, people
with aphasia. However, being creative with digital platforms can be challenging due
to the language-based barriers they impose. In this work, we describe an accessible
tool called Inker. Inker supports people with aphasia in accessing digital creativity,
supported by previously created physical artistic work.

PantoGuide: A Haptic and Audio Guidance System To Support Tactile Graphics Exploration

  • Elyse D. Z. Chase
  • Alexa Fay Siu
  • Abena Boadi-Agyemang
  • Gene S-H Kim
  • Eric J Gonzalez
  • Sean Follmer

The ability to effectively read and interpret tactile graphics and charts is an essential
part of a tactile learner’s path to literacy, but is a skill that requires instruction
and training. Many teachers of the visually impaired (TVIs) report that blind and visually
impaired students have trouble interpreting graphics independently without individual
instruction. We present PantoGuide, a low-cost system that provides audio and haptic
guidance, via skin-stretch feedback to the dorsum of a user’s hand while the user
explores a tactile graphic overlaid on a touchscreen. This system allows programming
of haptic guidance patterns and cues for tactile graphics that can be experienced
by students learning remotely or that can be reviewed by a student independently.
We propose two teaching scenarios (synchronous and asynchronous) and two guidance
interactions (point-to-point and continuous) that the device can support – and demonstrate
their use in a set of applications we co-designed with one co-author who is blind and
a tactile graphics user.

Putting Tools in Hands: Designing Curriculum for a Nonvisual Soldering Workshop

  • Lauren Race
  • Joshua A. Miele
  • Chancey Fleet
  • Tom Igoe
  • Amy Hurst

Blind and low vision learners are underrepresented in STEM and maker culture, both
of which are historically inaccessible. In this paper we describe our experience conducting
a three-day nonvisual soldering workshop and discuss the opportunities and challenges
for designing accessible electronics curricula. Workshop attendees learned nonvisual
soldering skills, adapted from publications for blind and low vision electronics professionals
[4, 13, 18], while building a complex circuit. We detail our curriculum design and
its complexities for learners with different levels of technical experience and learning
modalities. While our instruction pacing proved challenging for some, all attendees
succeeded in operating hot soldering irons and mastering basic soldering techniques
over the course of three days. Based on our findings, we provide recommendations for
educators wanting to design similar nonvisual STEM curricula and workshops. These
include supplying tactile and textual instruction to support multiple learning styles
and pacing, and standardizing workshop materials to support nonvisual hands-on learning
for novices.

Reflections on Using Chat-Based Platforms for Online Interviews with Screen-Reader
Users

  • Rachel Menzies
  • Benjamin M. Gorman
  • Garreth W. Tigwell

Within accessibility research, it is important for researchers to understand the
lived experience of participants. Researchers often use in-person interviews to collect
this data. However, in-person interviews can result in communication barriers and
introduce logistical challenges surrounding scheduling and geographical location.
For a recent study involving screen reader users, we used online chat-based platforms
to conduct interviews. Unlike in-person interviews, there was little guidance within
the field on conducting interviews using these platforms with screen reader users.
To understand how effective these platforms were, we collected feedback from our participants
on their experience after completing their interview. In this paper, we report on
our experience of conducting online chat-based interviews with screen reader users.
We present reflections from both the interviewer and participants on their experiences
during the aforementioned study, and outline four lessons we learned during the process.

Substituting Restorative Benefits of Being Outdoors through Interactive Augmented
Spatial Soundscapes

  • Swapna Joshi
  • Kostas Stavrianakis
  • Sanchari Das

Geriatric depression is a common mental health condition among older
adults in the US. As per Attention Restoration Theory (ART), participation in outdoor
activities is known to reduce depression and provide restorative benefits. However,
many older adults, who suffer from depression, especially those who receive care in
organizational settings, have less access to sensory experiences of the outdoor natural
environment. This is often due to their physical or cognitive limitations and from
lack of organizational resources to support outdoor activities. To address this, we
plan to study how technology can bring the restorative benefits of outdoors to the
indoor environments through augmented spatial natural soundscapes. Thus, we propose
an interview and observation-based study at an assisted living facility to evaluate
how augmented soundscapes substitute for outdoor restorative, social, and experiential
benefits. We aim to integrate these findings into a minimally intrusive and intuitive
design of an interactive augmented soundscape, for indoor organizational care settings.

Tactile Tone System: A Wearable Device to Assist Accuracy of Vocal Pitch in Cochlear
Implant Users

  • Sungyong Shin
  • Changmok Oh
  • Hyungcheol Shin

Cochlear implantation is an effective tool for speech perception. However, activities
such as listening to music and singing remain challenging for cochlear implant (CI)
users, due to inaccurate pitch recognition. In this study, we propose a method for
CI users to recognize precise pitch differences through tactile feedback. The proposed
system encodes real-time audio signals to 36 musical tones (from C3 to B5), represented
by tactile codes using nine vibration motors in a glove-type device. Two CI users
participated in 15 hours of training using our system and showed significant improvement
in pitch accuracy while singing. In addition to the quantitative results, both participants
expressed satisfaction in distinguishing and vocalizing musical tones, which led to
increased interest in music. This study provides opportunities for CI users to engage
more deeply and participate in musical education as well as achieve improved aural
rehabilitation.
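
The paper's tactile code is not described in this abstract. One way to see how nine
motors can address 36 tones is that there are exactly C(9, 2) = 36 unordered pairs of
motors; the sketch below uses that pairing as a purely hypothetical illustration, not
the authors' actual encoding.

    # Hypothetical tactile code: map the 36 tones C3..B5 to unique pairs of the
    # nine vibration motors (there are exactly C(9, 2) = 36 such pairs).
    # This pairing is an illustration only; the paper's actual code may differ.
    from itertools import combinations

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    TONES = [f"{name}{octave}" for octave in (3, 4, 5) for name in NOTE_NAMES]  # 36 tones
    MOTOR_PAIRS = list(combinations(range(9), 2))  # 36 unique motor pairs

    TONE_TO_MOTORS = dict(zip(TONES, MOTOR_PAIRS))

    def motors_for(tone):
        """Return the pair of motor indices to vibrate for a given tone."""
        return TONE_TO_MOTORS[tone]

    print(motors_for("A4"))  # prints the motor pair assigned to A4, here (3, 4)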

TACTOPI: a Playful Approach to Promote Computational Thinking for Visually Impaired
Children

  • Lúcia Abreu
  • Ana Cristina Pires
  • Tiago Guerreiro

The use of playful activities is common in introductory programming settings. There
is normally a virtual character or a physical robot that has to collect items or reach
a goal within a map. Visually, these activities tend to be exciting enough to maintain
children engaged: there is constant feedback about the actions being performed, and
the virtual environments tend to be stimulating and aesthetically pleasant. Conversely,
in adaptations for visually impaired children, these environments tend to become poorer,
sacrificing the story in favour of the programming actions and their dull mechanics (e.g.,
placing an arrow block to move the character forward). In this paper, we present TACTOPI,
a playful environment designed from the ground up to be rich in both its story (a
nautical game) and its mechanics (e.g., a physical robot-boat controlled with a 3D
printed wheel), tailored to promote computational thinking at different levels (4
to 8 years old). This poster intends to provoke discussion and motivate accessibility
researchers that are interested in computational thinking to make playfulness a priority.

Towards a gaze-contingent reading assistance for children with difficulties in reading

  • Tobias Lunte
  • Susanne Boll

Reading is a key skill in learning, working and participating in society on all levels.
However, in 2018, 20% of German school students had insufficient levels
of reading proficiency. Frustration associated with these difficulties results in
avoidance of reading, such that struggling readers will often not overcome them on
their own. We present a first approach towards an assistance system that recognizes
reading difficulties by analyzing the user’s gaze behavior and offers dynamic adaptation
of the text presentation. In a formative study, including 34 fifth-grade students,
letter- and syllabication-based assistance significantly and substantially increased
children’s motivation to read. Based on these findings, gaze-contingent assistance
presents a promising approach in improving struggling readers’ reading experience
and motivation to read.

Towards Automatic Captioning of University Lectures for French students who are Deaf

  • Solène Evain
  • Benjamin Lecouteux
  • François Portet
  • Isabelle Esteve
  • Marion Fabre

Deaf students’ access to higher education is below the national average. Recently,
there has been a growing number of applications for the automatic transcription of
speech, which claim to make everyday speech more accessible to people who are Deaf
or Hard-of-Hearing, but we have very little data on how these users actually deal with
captions. In this paper, we describe the MANES project, whose long-term goal is to
assess captioning solutions for Deaf students’ development of academic literacy. We present the first
technical results of a real-time system to make course captioning suitable for the
target audience.

Towards Recommending Accessibility Features on Mobile Devices

  • Jason Wu
  • Gabriel Reyes
  • Sam C. White
  • Xiaoyi Zhang
  • Jeffrey P. Bigham

Numerous accessibility features have been developed to expand who can access computing
devices and how they do so. Increasingly, these features are included as part of
popular platforms, e.g., Apple iOS, Google Android, and Microsoft Windows. Despite
their potential to improve the computing experience, many users are unaware of these
features and do not know which combination of them could benefit them. In this work,
we first quantified this problem by surveying 100 participants online (including 25
older adults) about their knowledge of accessibility and features that they could
benefit from, showing very low awareness. We developed four prototypes spanning numerous
accessibility categories (e.g., vision, hearing, motor), that embody signals and detection
strategies applicable to accessibility recommendation in general. Preliminary results
from a study with 20 older adults show that proactive recommendation is a promising
approach for better pairing users with accessibility features they could benefit from.
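
The prototypes' actual signals and detection strategies are not spelled out in this
abstract; the sketch below merely illustrates the general idea of mapping observed
interaction signals to feature suggestions, with invented signal names and thresholds.

    # Hypothetical rule-based sketch: map observed interaction signals to
    # accessibility-feature suggestions. Signal names and thresholds are
    # invented for illustration; they are not the prototypes' actual detectors.

    def recommend_features(signals):
        suggestions = []
        if signals.get("mean_font_scale", 1.0) >= 1.3 or signals.get("frequent_pinch_zoom"):
            suggestions.append("Enable larger display text / display zoom")
        if signals.get("media_volume", 0.0) >= 0.9:
            suggestions.append("Try captions or mono audio")
        if signals.get("missed_tap_rate", 0.0) >= 0.2:
            suggestions.append("Increase touch target size / touch accommodation")
        return suggestions

    print(recommend_features({"frequent_pinch_zoom": True, "missed_tap_rate": 0.25}))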

Understanding the Hand and Wrist Strains Caused by Smart Cane Handles with Haptic
Notification

  • Jagannadh Pariti
  • Tae Oh

Smart canes that employ haptic notifications to inform users of upcoming obstacles
often place vibration motors on the handle. Prior studies have focused on the
technical and functional performance of obstacle detection systems for smart canes.
However, limited studies have been conducted to understand the hand and wrist strains
on cane users when using a smart cane with a haptic notification approach. Understanding
these strains can result in different handle and haptic notification designs that
can minimize the stress and possible injury to cane users. Therefore, the research
team conducted an initial exploratory study in which participants reported the
challenges they faced when using a smart cane prototype called an Intelligent Mobility
Cane (IMC). The findings indicate that when they try to swing the cane handle and
hold the vibrators on the handle at the same time, they can experience hand and wrist
strain. Also, the vibrations can heighten anxiety in some participants.
To minimize their hand and wrist strains, some participants modified their cane holding
method. We provided a set of design recommendations for future smart cane handles
and haptic approaches based on the participants’ feedback to help reduce the strains
for cane users.

ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos

  • Rosiana Natalie
  • Ebrima Jarjue
  • Hernisa Kacorri
  • Kotaro Hara

Audio descriptions can make the visual content in videos accessible to people with
visual impairments. However, the majority of online videos lack audio descriptions
due in part to the shortage of experts who can create high-quality descriptions. We
present ViScene, a web-based authoring tool that taps into the larger pool of sighted
non-experts to help them generate high-quality descriptions via two feedback mechanisms—succinct
visualizations and comments from an expert. Through a mixed-design study with N =
6 participants, we explore the usability of ViScene and the quality of the descriptions
created by sighted non-experts with and without feedback comments. Our results indicate
that non-experts can produce better descriptions with feedback comments; preliminary
insights also highlight the role that people with visual impairments can play in providing
this feedback.

When to Add Human Narration to Photo-Sharing Social Media

  • Lawrence H Kim
  • Abena Boadi-Agyemang
  • Alexa Fay Siu
  • John Tang

Social media platforms facilitate communication through sharing photos and videos.
The abundance of visual content creates accessibility issues, particularly for people
who are blind or have low vision. While assistive technologies like screen readers
can help when alt-text for images is provided, synthesized voices lack the human element
that is important for social interaction. Here, we investigate when it makes the most
sense to use human narration as opposed to a screen reader to describe photos in a
social media context. We explore the effects of voice familiarity (i.e., whether you
hear the voice of someone you know) and the perspective of the description (i.e.,
first vs. third person point-of-view (POV)). Preliminary study suggests that users
prefer hearing from a person they know when the content is described in first person
POV, whereas synthesized voice is preferred for content described in third person
POV.

SESSION: Demonstrations

A Portable Hong Kong Sign Language Translation Platform with Deep Learning and Jetson
Nano

  • Zhenxing Zhou
  • Yisiang Neo
  • King-Shan Lui
  • Vincent W.L. Tam
  • Edmund Y. Lam
  • Ngai Wong

As hearing loss attracts more and more public concern, a growing body of research has
addressed translating sign language into spoken language. However, most of this research
remains at a theoretical level, and few studies investigate how to realize a working
system. In this paper, we introduce an effective and portable Hong Kong Sign Language
recognition platform which can translate Hong Kong Sign Language within a few seconds.
The platform has two main parts: a mobile application and a Jetson Nano. The mobile
application preprocesses the sign video and transfers it to the Jetson Nano, which
translates the video into spoken language with a pretrained deep learning model and
returns the results to the mobile application. With this platform, non-disabled people
can quickly translate and understand signs performed by deaf people through their mobile
phones. We believe that this platform can significantly facilitate daily communication
between deaf people and others in Hong Kong.
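
A minimal sketch of the mobile side of this two-part design, assuming a hypothetical
HTTP endpoint on the Jetson Nano and an assumed response format; the address, payload
layout, and preprocessing are placeholders, not the platform's actual interface.

    # Sketch of the mobile-side upload in the described two-part design:
    # send a (preprocessed) sign video to the Jetson Nano and read back the
    # translation. The endpoint and response schema are assumptions.
    import requests

    JETSON_URL = "http://192.168.0.42:8000/translate"  # hypothetical address

    def translate_sign_video(video_path):
        with open(video_path, "rb") as f:
            response = requests.post(JETSON_URL, files={"video": f}, timeout=30)
        response.raise_for_status()
        return response.json()["translation"]  # assumed response schema

    # Example (requires a running server): print(translate_sign_video("clip.mp4"))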

Action Blocks: Making Mobile Technology Accessible for People with Cognitive Disabilities

  • Lia Carrari
  • Rain Michaels
  • Ajit Narayanan
  • Lei Shi
  • Xiang Xiao

Mobile technology has become an indispensable part of our daily lives. From home automation
to digital entertainment, we rely on mobile technology to progress through our daily
routines. However, mobile technology requires complex interactions and nontrivial
cognitive effort to use, and is often inaccessible to people with cognitive disabilities.
With this in mind, we designed Action Blocks, an application that provides one-tap
access to digital services on Android. A user and/or their caregiver can configure
an Action Block with customized commands, such as calling a certain person or turning
on the lights. The Action Block is associated with a memorable image (e.g., a photo
of the person to call, an icon of a lightbulb) and placed on the device home screen
as a one-tap button, as shown in Figure 1. Action Blocks was launched in May 2020
and received much useful feedback. In this demonstration, we report the key design
considerations of Action Blocks as well as the lessons we learned from user feedback.

Automated Generation of Accessible PDF

  • Shaban Zulfiqar
  • Safa Arooj
  • Umar Hayat
  • Suleman Shahid
  • Asim Karim

LaTeX is widely used in STEM fields for creating high-quality documents that are converted
to the Portable Document Format (PDF) for dissemination. Currently, available LaTeX
systems do not guarantee that the generated PDFs are compliant with international
accessibility standards. In this work, we present AGAP (Automated Generation of Accessible
PDF), which automates, and itself makes accessible, the process of generating accessible PDFs
from LaTeX. AGAP flags accessibility violations and provides guidance on how to fix
them at compile time. AGAP allows interaction through speech synthesis and keyboard
shortcuts, thus making it fully accessible to persons with vision impairments (PVIs).
Evaluating the accessible PDF generated using AGAP with a standard accessibility checker
resulted in far fewer violations than the PDF generated using another desktop LaTeX editor.
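
AGAP's concrete checks are not described in this abstract. As a purely illustrative
example of a compile-time accessibility check, the sketch below flags \includegraphics
commands that carry no alternative-text annotation, using a hypothetical \alttext{}
convention; it is not AGAP's implementation.

    # Illustrative sketch of a compile-time accessibility check of the kind AGAP
    # performs: flag \includegraphics commands that lack an alt-text annotation.
    # The \alttext{} convention used here is hypothetical.
    import re

    FIGURE_RE = re.compile(r"\\includegraphics(\[[^\]]*\])?\{[^}]*\}")
    ALT_RE = re.compile(r"\\alttext\{[^}]*\}")

    def missing_alt_text(latex_source):
        """Return line numbers of figures that have no alt text on the same line."""
        flagged = []
        for lineno, line in enumerate(latex_source.splitlines(), start=1):
            if FIGURE_RE.search(line) and not ALT_RE.search(line):
                flagged.append(lineno)
        return flagged

    sample = r"""
    \includegraphics[width=\linewidth]{results.png}
    \includegraphics{setup.png} \alttext{Photo of the experiment setup}
    """
    print(missing_alt_text(sample))  # -> [2]: the first figure has no alt text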

CareHub: Smart Screen VUI and Home Appliances Control for Older Adults

  • Ningjing Sun

While voice user interfaces (VUIs) are becoming promising tools for users to control
home appliances, VUIs pose great challenges to older adults. In this paper,
I explored how to use multimodal interactions (i.e., voice + visual output) to enhance
the VUI for older adults to better control home appliances. I conducted two preliminary
studies with six older adults to understand the current usability problems. Given
the results, I designed a smart screen prototype with visual-enhanced VUI. The evaluation
showed that the participants were highly receptive to the prototype and considered
its features effective and accessible.

Keep Your Distance: A Playful Haptic Navigation Wearable for Individuals with Deafblindness

  • James Gay
  • Moritz Umfahrer
  • Arthur Theil
  • Lea Buchweitz
  • Eva Lindell
  • Li Guo
  • Nils-Krister Persson
  • Oliver Korn

Deafblindness, a form of dual sensory impairment, significantly impacts communication,
access to information and mobility. Independent navigation and wayfinding are main
challenges faced by individuals living with combined hearing and visual impairments.
We developed a haptic wearable that provides sensory substitution and navigational
cues for users with deafblindness by conveying vibrotactile signals onto the body.
Vibrotactile signals on the waist area convey directional and proximity information
collected via a fisheye camera attached to the garment, while semantic information
is provided with a tapping system on the shoulders. A playful scenario called “Keep
Your Distance” was designed to test the navigation system: individuals with deafblindness
were “secret agents” who needed to follow a “suspect” while keeping an optimal
distance of 1.5 meters from the other person in order to win the game. Preliminary findings
suggest that individuals with deafblindness enjoyed the experience and were generally
able to follow the directional cues.

Ontology-Driven Transformations for PDF Form Accessibility

  • Utku Uckun
  • Ali Selman Aydin
  • Vikas Ashok
  • IV Ramakrishnan

Filling out PDF forms with screen readers has always been a challenge for people
who are blind. Many of these forms are not interactive and hence are not accessible;
even if they are interactive, the serial reading order of the screen reader makes
it difficult to associate the correct labels with the form fields. This demo will
present TransPAc [5], an assistive technology that enables blind people to fill out
PDF forms. Since blind people are familiar with web browsing, TransPAc leverages this
fact by faithfully transforming a PDF document with forms into an HTML page. The blind
user fills out the form fields in the HTML page with their screen reader and these
filled-in data values are transparently transferred onto the corresponding form fields
in the PDF document. TransPAc thus addresses a long-standing problem in PDF form accessibility.
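
The value-transfer step can be pictured as bookkeeping between the generated HTML field
names and the original PDF form-field names. The sketch below shows only that mapping,
with hypothetical field names, and omits the actual PDF writing; it is not TransPAc's
implementation.

    # Conceptual sketch of the HTML-to-PDF value transfer in a TransPAc-style
    # pipeline. Field names and the mapping are hypothetical; writing the values
    # back into the PDF (e.g. with a PDF library) is omitted.

    # Mapping built when the PDF form was transformed into HTML.
    HTML_TO_PDF_FIELD = {
        "full_name": "applicant.name",
        "dob": "applicant.date_of_birth",
    }

    def pdf_field_values(html_form_values):
        """Translate submitted HTML form values into PDF form-field values."""
        return {HTML_TO_PDF_FIELD[k]: v
                for k, v in html_form_values.items()
                if k in HTML_TO_PDF_FIELD}

    submitted = {"full_name": "Ada Lovelace", "dob": "1815-12-10"}
    print(pdf_field_values(submitted))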

Screen Magnification for Office Applications

  • Hae-Na Lee
  • Vikas Ashok
  • IV Ramakrishnan

People with low vision use screen magnifiers to interact with computers. They usually
need to zoom and pan with the screen magnifier using predefined keyboard and mouse
actions. When using office productivity applications (e.g., word processors and spreadsheet
applications), the spatially distributed arrangement of UI elements makes interaction
a challenging proposition for low vision users, as they can only view a fragment of
the screen at any moment. They expend significant chunks of time panning back and forth
between application ribbons containing various commands (e.g., formatting, design,
review, and references) and the main edit area containing user content. In this
demo, we will demonstrate MagPro, an interface augmentation for office productivity
tools that not only reduces the interaction effort of low-vision screen-magnifier
users by bringing the application commands as close as possible to the users’ current
focus in the edit area, but also lets them easily explore these commands using simple
mouse actions. Moreover, MagPro automatically synchronizes the magnifier viewport
with the keyboard cursor, so that users can always see what they are typing, without
having to manually adjust the magnifier focus every time the keyboard cursor goes
off screen during text entry.
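
To illustrate the viewport-synchronization idea in the last sentence, the Python sketch
below keeps a magnifier viewport panned so that the text caret never leaves a comfort
margin near the viewport edges. It is a minimal sketch of the general technique, not
MagPro's code; the Viewport type, function name, and margin value are assumptions.

    # Minimal sketch of caret-following (not MagPro's code): pan the magnifier
    # viewport only when the caret drifts outside a comfort margin.
    from dataclasses import dataclass

    @dataclass
    class Viewport:
        x: float       # top-left corner of the magnified region (screen coords)
        y: float
        width: float   # size of the region currently shown by the magnifier
        height: float

    def follow_caret(vp: Viewport, caret_x: float, caret_y: float,
                     margin: float = 40.0) -> Viewport:
        """Return a viewport that keeps the caret at least `margin` pixels away
        from every edge, panning only when necessary."""
        new_x, new_y = vp.x, vp.y
        if caret_x < vp.x + margin:
            new_x = caret_x - margin
        elif caret_x > vp.x + vp.width - margin:
            new_x = caret_x - vp.width + margin
        if caret_y < vp.y + margin:
            new_y = caret_y - margin
        elif caret_y > vp.y + vp.height - margin:
            new_y = caret_y - vp.height + margin
        return Viewport(new_x, new_y, vp.width, vp.height)

    # Example: the caret moves past the right edge, so the viewport pans right.
    print(follow_caret(Viewport(0, 0, 400, 300), caret_x=420, caret_y=100))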

SoundLines: Exploration of Line Segments through Sonification and Multi-touch Interaction

  • Dragan Ahmetovic
  • Cristian Bernareggi
  • Sergio Mascetti
  • Federico Pini

We demonstrate SoundLines, a mobile app designed to support children with visual
impairments in exercising spatial exploration skills. This is achieved through multi-touch
discovery of line segments on a touchscreen, supported by sonification feedback. The
approach is implemented as a game in which the child guides a kitten to its mother cat
by tracing with a finger the line connecting them.
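
The core of such sonification feedback is typically a mapping from the finger's distance
to the line onto a sound parameter. The Python sketch below shows one plausible version of
that mapping, purely as an illustration under assumed parameter values; it is not the
SoundLines implementation.

    # Illustrative sketch only (not the SoundLines code): map the finger's
    # distance from a line segment to a 0..1 sound intensity, so the feedback
    # strengthens as the finger approaches the line.
    import math

    def distance_to_segment(p, a, b):
        """Euclidean distance from point p to segment a-b (2D tuples, in pixels)."""
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def sonification_level(distance, max_distance=100.0):
        """1.0 when the finger is on the line, fading to 0.0 at max_distance."""
        return max(0.0, 1.0 - distance / max_distance)

    # Example: a touch near the middle of a diagonal segment.
    print(sonification_level(distance_to_segment((55, 40), (0, 0), (100, 100))))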

Supporting Older Adults in Locating Mobile Interface Features with Voice Input

  • Ja Eun Yu
  • Debaleena Chattopadhyay

As mobile applications continue to offer more features, tackling the complexity of
mobile interfaces can become challenging for older adults. Owing to a small screen
and frequent updates that modify the visual layouts of menus and buttons, older adults
can find it challenging to locate a function on a mobile interface quickly—even when
familiar with the application. To address this issue, we present a system that helps
older adults quickly locate an on-screen feature on a mobile interface using speech
queries. Our system allows users to ask for a function related to the current mobile
screen using voice input. When that function is available, it provides visual guidance
for users to engage with the pertinent user interface (UI) widget. The labels and
locations of all UI components on the current screen are acquired via Android’s Assist
API. We discuss four scenarios of use.
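
As a rough illustration of the matching step, the Python sketch below scores a spoken
query against the labels harvested from the current screen and returns the best-matching
widget, whose bounds could then be highlighted as visual guidance. It is an illustrative
sketch only, not the authors' system; the fuzzy-matching approach, threshold, and data
layout are assumptions (in the actual system, labels and locations come from Android's
Assist API).

    # Illustrative sketch only (not the authors' system): fuzzy-match a spoken
    # query against on-screen widget labels and return the best candidate.
    from difflib import SequenceMatcher

    def best_match(query, widgets, threshold=0.5):
        """widgets: [{'label': 'Add bookmark', 'bounds': (x, y, w, h)}, ...]
        Returns the widget whose label best matches the query, or None."""
        query = query.lower().strip()
        best, best_score = None, threshold
        for w in widgets:
            score = SequenceMatcher(None, query, w["label"].lower()).ratio()
            if score > best_score:
                best, best_score = w, score
        return best

    # Example: the matched widget's bounds would be highlighted on screen.
    screen = [{"label": "Add bookmark", "bounds": (24, 180, 300, 48)},
              {"label": "Share page",   "bounds": (24, 240, 300, 48)}]
    print(best_match("share this page", screen))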

Teaching ASL Signs using Signing Avatars and Immersive Learning in Virtual Reality

  • Lorna Quandt

We present here a new system, in which signing avatars (computer-animated virtual
humans built from motion capture recordings) teach introductory American Sign Language
(ASL) in an immersive virtual environment. The system is called Signing Avatars &
Immersive Learning (SAIL). The significant contributions of this work are 1) the use
of signing avatars, built from state-of-the-art motion capture recordings of a native
signer; 2) the integration with LEAP gesture tracking hardware, allowing the user
to see his or her own movements within the virtual environment; 3) the development
of appropriate introductory ASL vocabulary, delivered in semi-interactive lessons;
and 4) the 3D environment in which a user accesses the system.

Teaching Digital Fabrication to Early Intervention Specialists for Designing Their
Own Tools

  • Florian Güldenpfennig
  • Peter Fikar
  • Roman Ganhör

We taught basic principles of digital fabrication to four early intervention therapists
who specialized in the training of children with cerebral visual impairment and related
disabilities. Our intention was threefold. First, we wanted to engage in digital
fabrication together with the therapists to ‘kick off’ a co-design project and get to
know them; the project was about creating therapeutic toys, and we had not met our
participants or co-designers before. Second, we wanted to give them an impression of the
tools we use and the sorts of designs we are capable of producing over the course of a
one-year design project. Third, we aimed to generate a first set of design ideas. In this
paper, we show in which ways teaching digital fabrication enabled us to accomplish these
goals. One of our most interesting findings was one we did not anticipate: the therapists
continued creating their own designs after the project was completed, drawing on their
newly developed digital fabrication skills. Hence, as a fourth outcome, we ‘accidentally’
empowered the participants to address their problems independently.

SESSION: Student Research Abstracts

Assistive Technology Design as a Computer Science Learning Experience

  • Thomas B. McHugh
  • Cooper Barth

As awareness of the importance of developing accessible applications has grown, work to
integrate inclusive design into the computer science (CS) curriculum has gained traction.
However, obstacles remain to integrating accessibility into
introductory CS coursework. In this paper, we discuss current challenges to building
assistive technology and the findings of a formative study exploring the role of accessibility
in an undergraduate CS curriculum. We respond to the observed obstacles by presenting
V11, a cross-platform programming interface to empower novice CS students to build
assistive technology. To evaluate the effectiveness of V11 as a CS and accessibility
learning tool, we conducted design workshops with ten undergraduate CS students, who
brainstormed solutions to a real accessibility problem and then used V11 to prototype
their solution. Post-workshop evaluations showed a 28% average increase in student
interest in building accessible technology, and V11 was rated easier to use than other
accessibility programming tools. Student reflections indicate that V11 can serve as an
accessibility learning tool while also teaching fundamental CS concepts.

Behaviors, Problems and Strategies of Visually Impaired Persons During Meal Preparation
in the Indian Context: Challenges and Opportunities for Design

  • Avyay Ravi Kashyap

Meal preparation is a complex multisensorial task that requires many decisions to
be made based on the appearance of the dish. This alienates individuals with low vision
and makes cooking meals independently inaccessible. Products designed for individuals
with low vision rarely aid with tasks that involve application of heat. As people
with vision impairments have different requirements for technology, it is imperative
that the behaviours and problems faced are thoroughly understood. A study to understand
how users perform tasks involving heat application was conducted. Four cooking techniques
commonly used to prepare Indian dishes were identified and interviews were carried
out with a diverse group of visually impaired persons (n=12). The findings include
insights about behaviours, problems and strategies employed by visually impaired persons
while preparing meals using the following techniques: Boiling, Simmering, Roasting,
and Frying. This work describes factors that affect behaviour during meal preparation
by Indian visually impaired persons, and the various strategies used to mitigate challenges
faced. The findings have been used to propose a set of considerations with implications
for the design of accessibility tools such as assistive devices, rehabilitation programs,
and strategies.

CaseGuide: Making Cheap Smartphones Accessible to Individuals with Visual Impairments
in Informal Settlements

  • Roos van Greevenbroek

Individuals with visual impairments in informal settlements (IVIIS) depend heavily
on others for access to basic services. Smartphones can help provide assistive technology
and access to basic services, but they are too expensive for IVIIS or lack accessibility
features. This study explores and promotes a low-cost concept that uses a static interface
overlay app in conjunction with a button-enabled phone case to enable the use of cheap
smartphones and increase the autonomy and social inclusion of IVIIS. Design requirements
were determined from existing research and an observational study of YouTube videos. A
low-fidelity prototype was developed and user-tested with one visually impaired and two
blindfolded participants. Although the user tests showed promising results, the research
and testing were limited in scope. Future research and user tests with IVIIS are needed
to validate whether CaseGuide is a desirable solution for them.

Deconstructing a “puzzle” of visual experiences of blind and low-vision visual artists

  • Yulia Zhiglova

Many experiences that come from visual information are not fully accessible to people
with visual impairments (VIPs). Details such as subtle facial expressions, physical
appearance, and the atmosphere of a space are not easily perceived by VIPs.
Our goal is to design a technological solution to communicate visual details haptically.
In order to better understand how visual details are perceived and interpreted by
VIPs, we conducted semi-structured interviews with six blind and low vision visual
artists. Our interviews focused on understanding how visual information is perceived
and reflected in their artwork. We identified four themes that described the participants’
visual experiences in relation to (1) Perception of Physical Attributes, (2) Interactions
with Others, (3) Identifying Challenging Environments, (4) Strategies and Challenges
of Perceiving the Surroundings. Our findings from this preliminary study will guide
the design of a haptic solution.

Designing Embodied Musical Interaction for Children with Autism

  • Grazia Ragone

This paper describes the design, implementation, and pilot evaluation of an interface
to support embodied musical interaction for children with Autism Spectrum Conditions
(ASC), in the context of music therapy sessions. Previous research suggests music
and movement therapies are powerful tools for supporting children with autism in their
development of communication, expression, and motor skills. OSMoSIS (Observation of
Social Motor Synchrony with an Interactive System) is an interactive musical system
which tracks body movements and transforms them into sounds using the Microsoft Kinect
motion capture system. It is designed so that, regardless of motor abilities, children
can generate sounds by moving in the environment either freely or guided by a facilitator.
OSMoSIS was inspired by the author’s experiences as a music therapist and supports the
observation of Social Motor Synchrony, allowing facilitators and researchers to record
and investigate this aspect of the therapy sessions in the context of an interactive
game. In preliminary testing with 11 children with autism (aged 5–11), we observed that
our design actively connected children, who displayed a notable increase in engagement
and interaction when the system was used.
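
To give a flavour of how body movement can be turned into sound in a system of this kind,
the Python sketch below maps the speed of a tracked joint (e.g., a hand position stream
from a motion sensor such as the Kinect) to a simple pitch and loudness value. It is a
generic illustration under assumed parameter values, not the OSMoSIS mapping.

    # Generic illustration (not the OSMoSIS mapping): faster movement of a
    # tracked joint produces a higher, louder note.
    import math

    def movement_to_note(prev_pos, curr_pos, dt,
                         base_pitch=48, pitch_span=36, max_speed=2.0):
        """prev_pos/curr_pos: (x, y, z) joint positions in metres; dt in seconds.
        Returns (midi_pitch, velocity), both clipped to sensible ranges."""
        speed = math.dist(prev_pos, curr_pos) / dt          # metres per second
        level = min(speed / max_speed, 1.0)                 # normalise to 0..1
        midi_pitch = int(base_pitch + level * pitch_span)   # faster -> higher
        velocity = min(int(30 + level * 97), 127)           # faster -> louder
        return midi_pitch, velocity

    # Example: a hand moving about 3 m/s between two frames at 30 fps.
    print(movement_to_note((0.0, 1.0, 2.0), (0.1, 1.0, 2.0), dt=1 / 30))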

Establishing a Serious Game on Relationship Boundaries for People with Developmental
Disabilities

  • Samantha Conde

This research project explores the gamification of information gathering related to
social-emotional skills that are directly relevant to personal relationship boundaries.
The target population of the game is adults with developmental disabilities. The game,
titled “Boundaries,” was developed because there are few, if any, supportive
sexual-education resources for this community. Ten people with developmental disabilities
later tested the game and provided feedback. Our results can be generalized into design
suggestions for games like Boundaries, which can offer unique insights and raise awareness
of issues faced by people with developmental disabilities in particular, as well as those
with other disabilities.

Supporting Selfie Editing Experiences for People with Visual Impairments

  • Soobin Park

With the increased popularity of social media, editing and sharing selfies using
augmented reality filters, face editors, and sticker features have become popular
social trends. However, it can be challenging for people with visual impairments to
edit and add fun elements to their selfies, although they actively participate in
social media. We conducted an online survey in which 47 participants with visual
impairments reported their experiences with, and demands for, such features. Based on the
results, we designed and developed a selfie editing application for people with visual
impairments whose sticker features are operated through voice commands and voice feedback.
We then conducted a design probe study with four visually impaired participants to derive
design guidelines for increasing the accessibility of selfie editing apps with
sticker features. Voice command and feedback were both highly appreciated by participants,
and we also investigated their requirements for selfie editing features.

Tapsonic: One Dimensional Finger Mounted Multimodal Line Chart Reader

  • Zeyuan Zhang

This paper focuses on utilizing a combination of haptic stimuli and auditory feedback
to convey statistical information, specifically line charts, to people with visual
impairments. Past research has explored a variety of vision-substitution methods for
depicting the shape and value information of line charts. It was found that even though
the general trend could be interpreted well, individual data values were not sufficiently
perceived. This paper proposes a statistics-oriented approach: instead of reconstructing
the shape of the chart, it adopts a dimensionality-reduction strategy that splits the 2D
information into bidirectional haptic cues and linear finger movements. Explicit voiceover
of data values is provided based on the one-dimensional finger movement to assist graph
interpretation. Our evaluation study showed that this approach enabled users to
efficiently decipher line chart information with appropriate cognitive demand and high
data-interpretation accuracy.
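
As a rough illustration of the dimensionality-reduction idea, the Python sketch below maps
a normalised one-dimensional finger position to the nearest data point of a line chart,
returning the value to be spoken aloud and the local trend that a haptic cue could convey.
The function name, data layout, and trend encoding are illustrative assumptions, not the
Tapsonic implementation.

    # Illustrative sketch only (not the Tapsonic implementation): reduce a line
    # chart to one dimension by mapping the finger's horizontal position to the
    # nearest data point; speak the value and convey the local trend haptically.
    def sample_chart(finger_x, xs, ys):
        """finger_x in [0, 1]; xs are normalised x-positions of the data points."""
        i = min(range(len(xs)), key=lambda k: abs(xs[k] - finger_x))
        if i + 1 < len(ys) and ys[i + 1] > ys[i]:
            trend = "rising"
        elif i + 1 < len(ys) and ys[i + 1] < ys[i]:
            trend = "falling"
        else:
            trend = "flat"
        return {"speak": f"{ys[i]:g}", "haptic_trend": trend}

    # Example: finger at roughly 30% of the chart width.
    print(sample_chart(0.3, xs=[0.0, 0.25, 0.5, 0.75, 1.0], ys=[2, 5, 4, 4, 7]))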

The Compliance Mindset: Exploring Accessibility Adoption in Client-Based Settings

  • Emma L. Holliday

Despite growing awareness of web accessibility and its importance, adoption remains very
low, leading to the digital exclusion of many people who are unable to access the
ever-increasing amount of content and services online. This study extends
previous research into accessibility adoption models to explore client-based settings
such as consultancies and freelance work. A qualitative study consisting of semi-structured
interviews and thematic analysis was performed to derive core and emerging themes.
The main findings were that, while accessibility is well embedded into individual
practitioner workflows, there is a heavy emphasis on compliance, which can lead to
problems when it overrules a user-focused approach. These findings were used to inform
four principles for adoption in client-based settings to be tested in future work:
Put users before compliance, always consider accessibility, facilitate practitioner
passion, and automate with care.

SESSION: UX Panel

User Experience (UX) Panel: Lockdown Experiences

  • Jan McDonald and Carly Davey
  • Roobi Bernareggi
  • Sannah Gulamani
  • Inho Seo

Session Chairs: Abi Roper and Sergio Mascetti

The User Experience (UX) panel is a popular feature of the annual ASSETS conference. It provides a space for users with diverse backgrounds to share their individual experiences of accessing technology. The 2020 UX panel, entitled “Lockdown Experiences,” aimed to explore and understand the challenges and opportunities that arose during the COVID-19 lockdown for people with disabilities. Panellists Jan, Carly, Roobi, Sannah, and Inho joined chairs Abi and Sergio to share their experiences and perspectives.