Artificial Intelligence and the Dignity of Risk

Emily Shea Tanis, Coleman Institute for Cognitive Disabilities, tanis@cu.edu
Clayton Lewis, Coleman Institute for Cognitive Disabilities, clayton.lewis@colorado.edu

Abstract

The increased use of AI-based systems poses risks and opportunities for people with cognitive disabilities. On the one hand, automated administrative systems, such as job applicant screeners, may disadvantage people whose patterns of strengths and weaknesses, and whose life circumstances, differ from those commonly seen in pools of data. On the other hand, people with cognitive disabilities stand to gain from AI’s potential to provide superior support, such as speaker-dependent speech recognition. Further, privacy concerns are heightened, both because people with uncommon combinations of attributes are more likely to be identifiable from their data, and because of the potential for discrimination and exploitation. It is important that people with disabilities be able to make self-directed choices about the tradeoffs between risks and benefits, rather than being denied the dignity of risk that others enjoy. Enabling this calls for advances both in technology and in organization.

1. Introduction

Many applications of artificial intelligence (AI) are based, explicitly or implicitly, on creating a model that describes the characteristics of a group of people, such as job applicants, and the relationships among those characteristics. An AI system for screening job applicants, for example, works by predicting the suitability of candidates from the data in their applications. The development and use of such systems presents both potential risks and potential benefits for people whose characteristics and life circumstances are outside the aggregate norm, such as people with cognitive disabilities. Affording people the opportunity to learn about and understand these risks and benefits poses challenges to how AI systems are developed and deployed today. Article 3 of the Convention on the Rights of Persons with Disabilities affirms the right of people with disabilities to agency in their own lives. A regime in which people with cognitive disabilities have the dignity of risk, that is, the freedom to accept risk in return for learning and developing skills, will require both technical and institutional innovations.
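
To make the screening scenario concrete, the following is a minimal sketch, in Python with scikit-learn, of how such a screener might be built. The feature names, data, and threshold are hypothetical and purely illustrative, not a description of any deployed system.

    # Hypothetical sketch: a screener that learns "suitability" from
    # historical hiring data and scores new applicants against it.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Illustrative historical applications, labeled by who was hired.
    history = pd.DataFrame({
        "years_experience": [1, 3, 5, 2, 8, 0, 4, 6],
        "employment_gap":   [0, 0, 1, 0, 0, 1, 0, 1],
        "test_score":       [70, 85, 60, 90, 80, 40, 75, 55],
        "hired":            [0, 1, 0, 1, 1, 0, 1, 0],
    })
    features = ["years_experience", "employment_gap", "test_score"]

    model = LogisticRegression().fit(history[features], history["hired"])

    # A new applicant is scored against the patterns in the historical pool;
    # applicants unlike that pool may be scored poorly for that reason alone.
    new_applicant = pd.DataFrame([[2, 1, 50]], columns=features)
    print(model.predict_proba(new_applicant)[0][1])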

2. Risks from AI Applications

People with cognitive disabilities often have uncommon patterns of strengths and weaknesses. As Jutta Treviranus [9] and others have noted, an AI system trained on data from large numbers of applicants may fail to appropriately characterize applicants whose patterns of attributes do not align with aggregate representations of attributes, leading to discrimination. As big data analysis and data mining grow, so does the potential for AI and its applications to express bias. Unintended consequences that could negatively affect people with cognitive disabilities can arise in every domain of living: where a person lives may hinge on automated housing voucher decisions, financial stability on online loan applications, employment on headhunter selection systems, and even romantic partnership on automated matching systems.

3. Risks from data gathering

The process of gathering data for training AI systems may itself pose risks to people with disabilities. Even when data are “anonymized”, that is, when clearly identifying information like names and addresses is removed or not included, experiments have shown that people can be identified by linking attributes included in different collections of data (see [5] for review). This linking process is simplified when people have uncommon characteristics, or patterns of characteristics, as people with cognitive disabilities often do. In addition, the fact that a person has a disability may be inferred from anonymized data. Together, these facts mean that a person’s identity, and the fact that they have a cognitive disability, may be easy to discover from collections of data. Many people do not wish to disclose their disability status, but these wishes are likely to be overridden by the possibilities created by large-scale data collection.
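
As an illustration of how such linking works in practice, here is a minimal sketch in Python. The tables, column names, and attribute values are entirely hypothetical, but they follow the familiar pattern of joining two datasets on shared quasi-identifiers such as ZIP code, birth year, and gender.

    # Hypothetical sketch of re-identification by linking. Neither table
    # carries names and sensitive attributes together, yet shared
    # "quasi-identifiers" connect them.
    import pandas as pd

    # "Anonymized" service records, including a disability-related attribute.
    services = pd.DataFrame({
        "zip": ["80301", "80301", "80302"],
        "birth_year": [1961, 1984, 1984],
        "gender": ["F", "M", "M"],
        "receives_support_services": [True, False, True],
    })

    # A separate, seemingly harmless dataset that does include names.
    public_list = pd.DataFrame({
        "name": ["A. Smith", "B. Jones", "C. Garcia"],
        "zip": ["80301", "80301", "80302"],
        "birth_year": [1961, 1984, 1984],
        "gender": ["F", "M", "M"],
    })

    # Joining on zip + birth year + gender re-attaches names to the
    # "anonymous" records; uncommon combinations match uniquely.
    linked = services.merge(public_list, on=["zip", "birth_year", "gender"])
    print(linked[["name", "receives_support_services"]])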

Not only is the privacy of people with cognitive disabilities at risk, but also the consequences of privacy violations may be more serious than for other people. Sadly, people with cognitive disabilities are often targets of abuse and exploitation. For example, older Americans are often targeted by fundraising firms as they are perceived to have disposable income and limited capacity to recognize financial scams.

4. Benefits of AI applications

As discussed in a Coleman Institute for Cognitive Disabilities White Paper [4], people with cognitive disabilities have much to gain from advances in AI that may lead to the development of robust, flexible personal supports. For example, a travel companion application might help a consumer avoid common pitfalls, like leaving baggage behind or going to the wrong gate. While non-AI systems can help with some of this today, integrated support that can reason about the traveler’s overall situation, as a human companion would, is beyond current capabilities. People with cognitive disabilities therefore have a positive stake in the continued development of AI.

5. Balancing risks and benefits

Today, many people with cognitive disabilities, or people who support them, are responding to the risks of AI by withdrawing participation. They may refuse to use online job application systems, feeling that the potential opportunities won’t be offered to them. They may refuse to contribute data about themselves or their activities, fearing loss of privacy. But as argued above, withdrawing to avoid risk comes at a price, in foregone benefit. Some observers feel that the European Union’s General Data Protection Regulation may “inhibit analytical leaps and beneficial new uses of information. A rule requiring human explanation of significant algorithmic decisions will shed light on algorithms and help prevent unfair discrimination but also may curb development of artificial intelligence” [3]. Can we do better? Here are some approaches.

Modeling diversity

Jutta Treviranus [9] has suggested a “lawnmower of justice” that rebalances modeling processes to give greater weight to unusual cases in a dataset and less to the common cases. Doing so could produce systems that do a better job of managing diversity of all kinds, though there are questions to be answered about the overall performance of such systems. The substantial data suggesting that diversity is in itself a virtue in many organizational settings [6] make this an idea well worth pursuing.
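
One way to read this proposal in conventional machine-learning terms is as reweighting the training data so that sparse regions of a dataset count for as much as dense ones. The sketch below is our own loose interpretation, not Treviranus’s specification, and uses toy data: each example is weighted by its average distance to its nearest neighbors, so isolated (unusual) cases get large weights and crowded (common) cases get small ones.

    # Sketch of density-based reweighting; interpretation and data are ours.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_common = rng.normal(0.0, 1.0, size=(200, 2))   # typical profiles
    X_rare = rng.normal(4.0, 1.0, size=(10, 2))      # atypical profiles
    X = np.vstack([X_common, X_rare])
    y = rng.integers(0, 2, size=210)                 # arbitrary toy labels

    # Weight each example by its mean distance to its 5 nearest neighbors:
    # isolated points get large weights, crowded points get small ones.
    dist, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
    weights = dist[:, 1:].mean(axis=1)
    weights = weights / weights.mean()               # center weights around 1

    # Any learner that accepts per-example weights can use them directly.
    model = LogisticRegression().fit(X, y, sample_weight=weights)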

A less ambitious technical approach would be to ask classification systems like applicant screeners to output outlier cases as well as “good” candidates. An employer would then be able to examine the outliers, to see if there are candidates in that category worth further attention.
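
A minimal sketch of that idea, assuming a Python/scikit-learn setting with hypothetical data: in addition to the screener’s top-scored candidates, applicants whose profiles are unusual relative to the pool are flagged for human review rather than silently filtered out. IsolationForest here stands in for any outlier detector, and the thresholds are arbitrary.

    # Sketch: surface outlier applicants for review alongside the screener's
    # "good" candidates. Data, model, and thresholds are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(300, 4))           # historical applicants
    y_train = (X_train[:, 0] > 0).astype(int)     # toy hire/no-hire labels
    X_new = rng.normal(size=(20, 4))              # current applicant pool

    screener = LogisticRegression().fit(X_train, y_train)
    scores = screener.predict_proba(X_new)[:, 1]  # predicted suitability

    detector = IsolationForest(random_state=0).fit(X_train)
    is_outlier = detector.predict(X_new) == -1    # -1 marks unusual profiles

    shortlist = np.where(scores > 0.7)[0]         # the screener's picks
    flagged = np.where(is_outlier)[0]             # unusual cases for review
    print("shortlisted:", shortlist)
    print("flagged for human review:", flagged)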

Transparency

Like other consumers, people with cognitive disabilities would be helped by clear communication of how their data would be used, if they permit it to be collected. As Michael Skirpan and others have argued [8], people have very little idea of these matters today, and make decisions, often only implicitly, without knowing what they are agreeing to, or what it could mean.

Transparency may seem simple, but in fact meaningfully communicating what might be done with data is not simple, even for technically sophisticated people. Exploration is badly needed of how the common risks and benefits of sharing data can be communicated effectively to all users, including people with cognitive disabilities.

Consumer protections in data collection and management

It’s likely that part of a workable system of transparency will be providing help to people in making self-directed decisions. Of course, caregivers today try to play that role, but rarely do they themselves feel comfortable with the issues involved. This is one reason for the withdrawal strategy mentioned earlier: it seems safer to avoid the risks, given that the risks are uncertain.

We think that a new kind of institution is needed to cope with this problem, not just for people with cognitive disabilities, but for consumers generally. A consumer protection agency, let’s call it CP, would gather information about common uses of data. Technically informed staff would undertake to evaluate and communicate the associated risks and benefits in a way that consumers could understand, and trust, without themselves having to master the technicalities.

CP could address another risky aspect of the situation, one that is clearly beyond the reach of individual consumers. A company can publish a policy on how data it collects will be used, but experience has shown (see e.g. [2]) that such policies may not be followed. An audit process is needed, in which CP would be authorized to inspect the actual uses of data, so as to provide confidence to consumers that they understand the actual risks of providing data.

Who would pay for CP’s work? One model, parallel to that used for LEED green building certification (http://leed.usgbc.org/leed.html), would be for companies who wish to collect data from their customers or other members of the public to pay to have their activities certified by CP. If a company declined to be certified, members of the public would thereby be warned to avoid engaging.

6. The dignity of risk

There is a delicate balance between accepting risk and maintaining safety. As mentioned earlier, many people with cognitive disabilities, and/or their caregivers, are currently choosing to avoid engagement with AI or the associated data collection, and some privacy protection regimes, such as the GDPR, may close off uses of data that would be beneficial for people with disabilities. When the decision not to engage is made by parties other than the people with disabilities themselves, they are denied the right to make their own determinations about the balance of risks and benefits. We must scaffold learning so that individuals have opportunities to make choices whose risks are limited. It is only through experience and choice that one can make informed decisions. The dignity of risk is important to promoting autonomy, and it needs to be acknowledged and realized in applications of AI.

Acknowledgments

We thank the Coleman Institute for Cognitive Disabilities for support. The contributors to the Coleman Institute White Paper [4] provided valuable background.

References

  1. Cattiau, J. (2019) How AI can improve products for people with impaired speech. Online at https://www.blog.google/outreach-initiatives/accessibility/impaired-speech-recognition/.
  2. Federal Trade Commission (2018) Uber Agrees to Expanded Settlement with FTC Related to Privacy, Security Claims. Online at https://www.ftc.gov/news-events/press-releases/2018/04/uber-agrees-expanded-settlement-ftc-related-privacy-security.
  3. Kerry, C.F. (2018) Why protecting privacy is a losing game today—and how to change the game. Brookings Institution, online at https://www.brookings.edu/research/why-protecting-privacy-is-a-losing-game-today-and-how-to-change-the-game/.
  4. Lewis, C. (2018) Implications of Developments in Machine Learning for People with Cognitive Disabilities. Coleman Institute for Cognitive Disabilities, online at https://www.colemaninstitute.org/wp-content/uploads/2018/12/white-paper-coleman-version-1.pdf.
  5. Ohm, P. (2009). Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA Law Review, 57, 1701.
  6. Page, S. E. (2008). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies (new edition). Princeton University Press.
  7. Perske, R. (1972). The dignity of risk and the mentally retarded. Mental Retardation, 10(1), 24–27.
  8. Skirpan, M., Cameron, J., & Yeh, T. (2018, February). Quantified Self: An Interdisciplinary Immersive Theater Project Supporting a Collaborative Learning Environment for CS Ethics. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (pp. 946-951). ACM.
  9. Treviranus, J. (2018) Sidewalk Toronto and Why Smarter is Not Better. Medium, online at https://medium.com/datadriveninvestor/sidewalk-toronto-and-why-smarter-is-not-better-b233058d01c8.
  10. Wolpert, J. (1980). The Dignity of Risk. Transactions of the Institute of British Geographers, 5(4), 391–401.

About the Authors

Emily Shea Tanis is Co-director for Policy and Advocacy for the Coleman Institute for Cognitive Disabilities, and Professor of Psychiatry at the University of Colorado Anschutz Medical Campus. The Coleman Institute for Cognitive Disabilities works to catalyze and integrate advances in technology that promote the quality of life of people with cognitive disabilities. Dr. Tanis is also the Principal Investigator of a federally funded PNS National Longitudinal Study on trends in public expenditures for supports and services for people with intellectual and developmental disabilities and their families across the nation.

Clayton Lewis is Co-director for Technology for the Coleman Institute for Cognitive Disabilities, and Professor of Computer Science at the University of Colorado Boulder. He is a member of the ACM SIGCHI Academy, and was awarded the SIGCHI Social Impact Award. He served as the ACM ASSETS Conference General Chair in 2013.