AI ASSISTIVE TECHNOLOGY FOR EXTENDING SIGHTED GUIDING

Beatrice Vincenzi, HCID Centre, City, University of London beatrice.vincenzi@city.ac.uk

Abstract

Prior work on AI-enabled assistive technology (AT) for people with visual impairments (VI) has treated navigation largely as an independent activity. Consequently, much effort has focused on providing individual users with wayfinding details about the environment, including information on distances, proximity, obstacles, and landmarks. However, independence is also achieved by people with VI through interacting with others, such as through sighted guiding in navigation. Drawing on the concept of interdependence, this research aims to (1) present an exemplary case of interdependence and draw out important implications for designing AI-enabled AT; (2) propose and evaluate a prototype that enhances social interactions in sighted guiding and aids navigation.

Problem and Motivation

The work I present builds on the inspirational research of Cynthia L. Bennett et al. [2]. At ASSETS, Bennett et al. developed an argument around interdependence that sees AT as a further way to extend relations between people, focusing on how actors are made more or less able, relationally, through other actors and with/through AT. Consequently, this frame stresses the work done by people with disabilities in collaboration with others, and how access and independence are achieved through this interdependence. The interdependence perspective thus opens up opportunities to think differently about the design of AT and, consequently, about the use of AI in this context.

Assistive Technology (AT) has become a common part of daily life for many people living with visual impairments, supporting them in activities and routine tasks such as reading text, navigating the web, using smartphones, and so on [4, 8, 14, 15, 19]. More recently, artificial intelligence (AI) has been promoted as a means of extending these ATs. For example, increased attention has been given to designing and developing AI-based AT to aid independent navigation for people with visual impairments. These technologies aim to solve a functional task, where the user follows instructions to successfully reach a destination. Hence, research has focused on supporting how users navigate physical spaces, and on aiding the identification of, and judging proximity to, walls, curbs, obstacles, streets, etc., often using beacons [1, 11, 13, 17, 20] or computer vision systems [1, 9, 11, 18].

Attending the CHI 2019 workshop on "Hacking blind navigation" [7] to present some preliminary ideas for this research introduced me to an expert community of researchers passionate about investigating and developing assistive technology for blind navigation. On that occasion I also had the opportunity to engage with a variety of work, which raised my awareness of recent advances in blind navigation, allowed me to discuss open research problems, and exposed me to different perspectives on independent navigation. Broadly, independent navigation for people with visual impairments is considered a major challenge, drawing significant attention from the HCI research community. However, technologies and applications that go beyond the accomplishment of individual tasks, and pay attention to social activity and relations, remain unexplored in the design of AT.

My research takes up the call from Bennett et al. [2] and therefore operates at the intersection of AI and its use in assistive guidance technologies. Specifically, this work focuses on alternative ways of approaching AI assistive technology design to enhance social interactions and aid navigation. Rather than presuming an instrumentalised idea of independent navigation, I aim to consider what role AI might play in augmenting and extending the interdependencies between people with vision impairments and others. I seek not to 'solve' the 'problem' of navigation, but to explore how and where people's collective capacities might be enhanced [3].

Working with Interdependence

I decided to address the above research problem by investigating the sighted guide technique because it represents an illustrative example of people navigating together. Sighted guiding is a well-known technique within the visually impaired community and involves a sighted person using touch and voice to forewarn a person with a VI about kerbs, steps, obstacles, etc. If someone needs support, the guide bends the arm parallel to the ground and offers the arm or elbow to the person being guided. Holding the guide's arm and walking one half-step behind, the person being guided can rely on the guide to ensure their safety [16].

To simply approach this aided guidance from a perspective of autonomous travel would be to reduce the ’problem’ of navigation to a sequence of steps and the movement from one place to another, and ultimately, to look to solutions that replace the sighted guide with an AI system. However, when viewed in terms of interdependence, sighted guiding represents an instance of people working together and, thanks to their collaboration, being able to successfully coordinate their actions to move through space. To think with interdependence is then to recognise movement and space not in strictly Euclidean, geometric terms but as something that is co-produced and mutually orientated to accomplish activities like navigation [5]. Moreover, sighted guiding provides for building and intensifying relationships [6, 10], and for feelings of freedom [12].

It is thus through sighted guiding that I aim to develop and refine the evolving work on interdependence and approach AI's potential in AT from a different perspective. Broadly, the aim of this research is to (1) develop insights into how people with VI and their sighted guides navigate together successfully, how this experience exemplifies interdependence, and how ruptures and repairs arise in these interdependent relations; and (2) implement and evaluate an AI-enabled assistive technology prototype to extend interpersonal interactions in sighted guiding when ruptures in interdependence occur.

During the first stage I conducted a qualitative study to explore how interdependence is interwoven into the sighted guiding relationship. I recruited four pairs of participants, each composed of a person with VI and a sighted guide with an established guiding relationship of at least three months. This ensured that they had some experience of guiding each other and allowed me to investigate their interactions as safely as possible. Wearing two body cameras, participants took daily routine journeys using the sighted guide configuration and video recorded their environment and their interactions. Video data were analysed through interaction analysis, and the findings highlight important implications for designing AI-enabled AT that align with the interdependence framework.

These findings provide a step toward future work on investigating and designing AI-enabled assistive technology that extends the ways companions walk and navigate together. Specifically, my future work will take a more specific interest in situations where ruptures in interdependence occur. I am currently investigating in more detail how such breakdowns happen when pairs "let go" of one another. The aim is to draw attention to how resources such as gestures and body movements are used by pairs in sighted guide partnerships, and to explore how sound could be deployed to complement and extend the sense pairs develop of each other. The intent was to approach this stage through a series of three co-design workshops alongside three stages of implementation over six months. However, due to COVID-19 and UK Government restrictions on travel and meetings, I had to revise the method and procedure. The new study covers two remote phases, composed of a pre-task activity and an online interview, and involves three participant pairs. I am collecting digital audio data, which will be analysed qualitatively using a mixed-method approach combining thematic and discourse analysis.

The resulting findings and analysis will drive the design and evaluation of AI-based interventions. Preliminary research has explored the use of computer vision algorithms for body detection and of sensors, such as Bluetooth Low Energy (BLE) beacons, to help obtain proximity measures. My hope is to demonstrate that AI can have a role in easing moments of repair that happen during sighted guiding, and potentially in extending the ways people accomplish navigation.
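To make this direction concrete, the sketch below shows one way a proximity measure might be derived from BLE beacon signal strength. It is a minimal illustration only, assuming a log-distance path-loss model; the calibration constants, smoothing step, and example readings are my own assumptions and are not part of the deployed prototype.

```python
# Minimal sketch: estimating guide/companion separation from BLE RSSI.
# TX_POWER_AT_1M and PATH_LOSS_EXPONENT are hypothetical values; real
# beacons and environments would need their own calibration.

TX_POWER_AT_1M = -59      # assumed RSSI (dBm) measured 1 metre from the beacon
PATH_LOSS_EXPONENT = 2.0  # assumed roughly free-space propagation


def estimate_distance(rssi_dbm: float) -> float:
    """Estimate distance in metres from one RSSI reading
    using the log-distance path-loss model."""
    return 10 ** ((TX_POWER_AT_1M - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))


def smooth(readings: list[float], alpha: float = 0.3) -> float:
    """Exponential moving average to damp the noise typical of RSSI."""
    value = readings[0]
    for reading in readings[1:]:
        value = alpha * reading + (1 - alpha) * value
    return value


if __name__ == "__main__":
    recent_rssi = [-62.0, -65.0, -61.0, -70.0, -66.0]  # illustrative readings (dBm)
    separation = estimate_distance(smooth(recent_rssi))
    print(f"Estimated separation: {separation:.1f} m")
```

Thresholding an estimate like this could, for example, flag the moment a pair separates beyond arm's reach and a repair might be needed, although RSSI is notoriously noisy and any real deployment would depend on per-beacon, per-environment calibration.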

Stage of Research

During the first year and a half of my PhD programme, I completed the first-stage qualitative study and interaction analysis described above, together with an initial exploration of AI techniques for this context.

I am currently involved in the data collection of a second research study, which is due to be completed by the end of year 2. Additionally, I am conducting a second exploration of AI and machine learning techniques, and of common technology such as BLE beacons and other built-in smartphone sensors, for the creation of probes to deploy in the current research study. Use of these probes and users' feedback will help me refine prototype ideas for a final design stage.

Year 3 will continue with the design and implementation of AI-enabled AT prototypes, followed by an evaluation to assess the impact on people with visual impairments and sighted guides when the technology is employed in real-world settings. Since this year's Doctoral Consortium at ASSETS will coincide with my early implementation stage, my hope is that attending the DC will result in constructive feedback from other PhD students and mentors on my prototype ideas, and advice about their future development. For example, I am keen to discuss further how existing computer vision algorithms can be adapted for sighted guiding. This will help me to reflect on alternative ways of producing models which account for how resources are used in concert in the sighted guide relationship, instead of treating bodies and objects as the main elements to extract and detect from an irrelevant background.
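As a toy illustration of this shift from isolating bodies toward describing the pair in relation, the sketch below uses an off-the-shelf pedestrian detector to measure the separation between two detected people in a single frame. The choice of detector (OpenCV's HOG person detector), the file name, and the pixel-distance measure are illustrative assumptions rather than the method this research will necessarily adopt.

```python
# Minimal sketch: detect the two partners in a frame and report the distance
# between them, rather than treating each detected body in isolation.
# A real prototype would likely need a stronger detector and camera calibration.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())


def pair_separation(frame: np.ndarray):
    """Return the pixel distance between the two most confident person
    detections, or None if fewer than two people are found."""
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) < 2:
        return None
    # Keep the two highest-scoring detections and compare their centroids.
    top_two = [rects[i] for i in np.argsort(weights.ravel())[-2:]]
    centroids = [(x + w / 2, y + h / 2) for (x, y, w, h) in top_two]
    (x1, y1), (x2, y2) = centroids
    return float(np.hypot(x2 - x1, y2 - y1))


# Example usage on a single video frame (the path is illustrative only):
# frame = cv2.imread("guiding_frame.jpg")
# print(pair_separation(frame))
```

A model in the spirit of this research would go further, attending to how arms, elbows, gait and talk are coordinated between partners, rather than reporting each detected body as a separate element against an irrelevant background.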

Contribution to the Accessibility Field

Overall, my findings are intended to contribute to the growing evidence base on the role of AI assistive technology, especially in relation to people with visual impairments. Re-thinking the role of AI in assistive technologies in a navigational context aims to shed light on issues of environmental access, as well as on how people with VI achieve independence and access in everyday life. Thus, this research has a direct influence on the accessibility field. Challenging and inaccessible navigational spaces become accessible when partners collaborate, highlighting the need for technologies that foster collaboration as well as independence.

I am keen to develop and refine the interdependence concept, realising (as Bennett et al. express) how complex the relationship between independence and interdependence is, and that AI introduces additional challenges around autonomy. In a complex mixture of micro-interactions, and across highly variable contexts such as sighted guiding, autonomy reflects the ability to take an action and to open up the possibility of another action in response. Recognising how one person's actions trigger or are reciprocated by another is nontrivial and unlikely to be tractable by AI systems for some time. Thus, this research contributes to exploring interventions that do not replace the coordination between pairs, but serve as a further resource for opening up possibilities and potentially affording the space for new (inter)actions.

In conclusion, through this research I intend to support the existing collaboration between people with VI and sighted companions, and therefore to be sensitive to making information available to both parties in ways that are not disruptive to ongoing interaction. Following this direction, I hope to extend the ways navigation and environmental access are accomplished. More broadly, this research is part of a wider project I am interested in, where people are given greater capacities to work together through continually negotiating their interactions.

References

  1. Dragan Ahmetovic, Masayuki Murata, Cole Gleason, Erin Brady, Hironobu Takagi, Kris Kitani, and Chieko Asakawa. 2017. Achieving Practical and Accurate Indoor Navigation for People with Visual Impairments. In Proceedings of the 14th Web for All Conference on The Future of Accessible Work - W4A ’17. 1–10. https://doi.org/10.1145/3058555.3058560
  2. Cynthia Bennett, Erin Brady, and Stacy M. Branham. 2018. Interdependence as a Frame for Assistive Technology Research and Design. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18). https://doi.org/10.1145/3234695.3236348
  3. Cynthia L Bennett, Daniela K Rosner, and Alex S Taylor. 2020. The Care Work of Access. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
  4. Michele A. Burton, Erin Brady, Robin Brewer, Callie Neylan, Jeffrey P. Bigham, and Amy Hurst. 2012. Crowdsourcing subjective fashion advice using VizWiz: Challenges and opportunities. In Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '12). 135–142. https://doi.org/10.1145/2384916.2384941
  5. Paul Dourish. 2006. Re-Space-Ing Place: “Place” and “Space” Ten Years On. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (CSCW ’06). Association for Computing Machinery, New York, NY, USA, 299–308. https://doi.org/10.1145/1180875.1180921
  6. Kelly Fritsch. 2010. Intimate assemblages: Disability, intercorporeality, and the labour of attendant care. Critical Disability Discourses/Discours critiques dans le champ du handicap 2 (2010), 1–14.
  7. João Guerreiro, Jeffrey P. Bigham, Daisuke Sato, Chieko Asakawa, Hernisa Kacorri, Edward Cutrell, and Dragan Ahmetovic. 2019. Hacking blind navigation. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19). 1–8. https://doi.org/10.1145/3290607.3299015
  8. J. M. Hans du Buf, João Barroso, João M.F. Rodrigues, Hugo Paredes, Miguel Farrajota, Hugo Fernandes, João José, Victor Teixeira, and Mário Saleiro. 2011. The SmartVision navigation prototype for blind users. International Journal of Digital Content Technology and its Applications 5, 5 (2011), 351–361. https://doi.org/10.4156/jdcta.vol5.issue5.39
  9. Rabia Jafri and Marwa Mahmoud Khan. 2018. User-centered design of a depth data based obstacle detection and avoidance system for the visually impaired. Human-centric Computing and Information Sciences 8, 1 (dec 2018), 14. https://doi.org/10.1186/s13673-018-0134-9
  10. Christine Kelly. 2013. Building bridges with accessible care: Disability studies, feminist care scholarship, and beyond. Hypatia 28, 4 (2013), 784–800. https://doi.org/10.1111/j.1527-2001.2012.01310.x
  11. Young Hoon Lee and Gérard Medioni. 2016. RGB-D camera based wearable navigation system for the visually impaired. Computer Vision and Image Understanding 149 (2016), 3–20. https://doi.org/10.1016/j.cviu.2016.03.019
  12. Hannah Macpherson. 2017. Walkers with visual-impairments in the British countryside: Picturesque legacies, collective enjoyments and well-being benefits. Journal of Rural Studies 51 (2017), 251–258. https://doi.org/10.1016/j.jrurstud.2016.10.001
  13. Parth Mehta, Pavas Kant, Poojan Shah, and Anil K. Roy. 2011. VI-Navi: A Novel Indoor Navigation System for Visually Impaired People. In Proceedings of the International Conference on Computer Systems and Technologies (CompSysTech '11). 365–371. https://doi.org/10.1145/2023607.2023669
  14. Microsoft. 2019. SeeingAI. https://www.microsoft.com/en-us/ai/seeing-ai Retrieved 15-May-2020.
  15. Optelec. 2014. Ruby - Handheld Magnifier. https://uk.optelec.com/products/880123-007-ruby-hd.html Retrieved 15-May-2020.
  16. RNIB. 2020. Sighted Guide Technique. https://www.rnib.org.uk/advice/guiding-blind-or-partially-sighted-person Retrieved 15-May-2020.
  17. M. Swobodzinski and M. Raubal. 2009. An Indoor Routing Algorithm for the Blind: Development and Comparison to a Routing Algorithm for the Sighted. International Journal of Geographical Information Science 23, 10 (2009), 1315–1343.
  18. Yingli Tian, Xiaodong Yang, Chucai Yi, and Aries Arditi. 2013. Toward a computer vision-based way finding aid for blind persons to access unfamiliar indoor environments. Machine Vision and Applications 24, 3 (2013), 521–535. https://doi.org/10.1007/s00138-012-0431-7
  19. Yuhang Zhao, Edward Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek, and Andrew D. Wilson. 2019. SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI’19. ACM Press, New York, New York, USA, 1–14. https://doi.org/10.1145/3290605.3300341
  20. Michael Zöllner, Stephan Huber, Hans Christian Jetter, and Harald Reiterer. 2011. NAVI – A proof-of-concept of a mobile navigational aid for visually impaired based on the Microsoft Kinect. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6949 LNCS, PART 4 (2011), 584–587. https://doi.org/10.1007/978-3-642-23768-3_88

About the Authors

Beatrice is a PhD student at the Centre for HCI Design, City, University of London. Previously, she earned her MSc in Computer Science from the University of Padova, Italy. Her current research explores AI assistive technology to extend sighted guiding partnerships, autonomy and capacities.