William C. Payne, New York University


Both music creation tools and novice coding environments often use highly visual interfaces to aid writing, editing, and navigation. Such designs exclude blind users and may be entirely inaccessible. This work sets out to develop a novel domain-specific language, interface, and curriculum in partnership with students and teachers at the Filomen M. D'Agostino Greenberg (FMDG) School, a community music school that reaches blind and visually impaired musicians of all ages and skill levels. The project will consist of an iterative co-design phase and an implementation/evaluation phase in which a small number of students at the school will use the technology during a music composition course to create original works.


Introduction

Both making and listening to music can be incredibly gratifying and rewarding experiences. Music creation and manipulation are often supported in educational coding environments due to music's near-universal appeal, but coding can also be an effective and engaging notation for exploring musical ideas and gaining a deeper understanding of music [20]. The similarities between writing music and coding are numerous. Both involve processes that evolve over time and must be run or heard to be understood. Both contain structures at multiple levels of hierarchy and abstraction, e.g. a variable referenced within a function within a class, or a pitch sounded within a melody within a chorus. An algorithm is notated in code and carried out by a computer just as a musical idea is notated as a score and performed by a musician. Coding syntax and music notation are both symbolic entities distinct from yet tethered to the phenomena they represent. They are also both quite challenging and time-consuming to learn and use. Researchers in educational music and computing fields have attempted to reduce cognitive load and prevent frustration through increasingly visual tools such as drag-and-drop coding interfaces [31, 35] and curve- or shape-driven music composition software [5, 12] that reduce the syntax needed for use and prevent errors and wrong notes. Tools meant for experts are also highly visual: code editors aid sighted users in scanning and organizing code with cues like syntax highlighting and code maps [30], while, as my own upcoming work at ASSETS 2020 demonstrates, many commercially available music products are unusable without vision.
In addition to facing inaccessible technologies, blind and visually impaired people find learning music notation especially hard: they have limited access to braille or large print music and fewer learning materials, are not usually exposed to braille music notation in primary school, and must still acquire a working knowledge of visual notation if they are to pursue music and encounter sighted musicians [2]. For my dissertation, I hope to address the need for accessible, educational music computing environments designed explicitly for blind and low-vision users.

Related Work

The work draws on substantial prior research in music education, interdisciplinary computer science education, and assistive technology for people with blindness and/or low vision. While there are many creative coding tools and pedagogies that engage novices in music making [1, 16, 19, 35], few are intended for people with visual impairments or draw on research on the participation of blind people in computing or on coding environments developed for non-visual use [30, 37]. Music is an auditory phenomenon, but software represents it and mediates interaction with it visually. While some tactile and haptic interfaces have been developed with and for blind music creators [21, 23, 38], music interaction researchers have, in general, designed little for those with visual impairments compared to other disabilities [17]. The projects most closely related to this work are LilyPond [18] and Project Torino [40]. LilyPond is a powerful, text-based, accessible music notation software, but it is difficult to learn given its massive number of commands and is not meant for novice coders or composers. Project Torino is a tangible music programming environment inclusive of children with visual impairments, but it does not include music notation in its teaching goals and uses a different programming paradigm than I propose below. Fundamentally, neither tool is intended to aid the teaching of budding composers.


My proposed dissertation work addresses broad accessibility challenges including non-visual navigation of large, hierarchical documents [36], multi-modal input/output intended to reduce cognitive load and adapt to user abilities [27], and Universal Design for Learning [22]. My collaborators and research partners are students and teachers at FMDG School, an innovative and widely recognized community music school that reaches blind and visually impaired musicians of all ages and skill levels [14]. The school excels at teaching blind and low-vision students braille/large print music notation, music theory, dance, singing, and instruments while providing all students with sheet music matched to their vision ability and skill level. However, FMDG School lacks accessible technologies aligned with its pedagogical goals of teaching students to write original music using production software (used to create audio files, e.g. GarageBand [3]) or notation software (used to create scores for musicians to read, e.g. Finale [25]). In music production lessons, the school uses 15-year-old software on Windows 7 that supports the JAWS screen reader [15] because it has not found an easy-to-use or affordable alternative. In music composition and notation lessons, teachers wrestle with three options, each limited in different ways. If teachers use notation software designed for sighted users that supports screen readers and magnification (e.g. Lime with third-party JAWS support developed by Dancing Dots [10]), students do not work directly with braille music and face a high learning curve. Alternatively, if they use software designed for braille music, they may have more difficulty producing standard scores for sighted musicians, and the available technologies are not developed for novices who are just learning braille music [32]. Finally, teachers at FMDG School often forgo the computer and use a braille slate and stylus, which offers a lower barrier to entry than complicated software but lacks the benefits of music technology, such as the ability to hear the music one is writing before finding musicians to play it.

Proposed Solution

I see an opportunity to address the need for more inclusive music coding tools while directly supporting my partners' goals. I propose developing a system consisting of a notation/domain-specific language (DSL), a multi-modal interaction environment, and a curriculum co-developed with teachers at FMDG School. Unlike most music coding environments, this technology will be designed for non-visual use, will support braille music output, and may explore other input/output modalities including external hardware, body movement, and speech-to-text. Drawing from the goals of FMDG School, the coding environment will promote creative expression over measurable technical growth: the ability of users to make music they are proud of outweighs the authenticity of the experience relative to professional programming practice.

In addition to building an effective music creation tool, I am invested in developing programming literacy [39] and encouraging growth in computational thinking skills [8], and I do not want to inadvertently encourage improper programming habits should participants wish to pursue STEM opportunities later [26, 33]. My own past work teaching students to code through music making, and evaluating the music components of Scratch and other languages, suggests that imperative music programming can be tedious, inexpressive, and ill-suited to music making: new students begin by writing long lists of functions, each triggering a single note [28]. I draw from substantial music education research, such as that of Jeanne Bamberger, who argues that music training should instead begin with remixing larger musical chunks, like melodies and rhythms, rather than working at the smallest levels of detail, like notes, durations, and intervals, which only highly trained musicians perceive [4]. Most novice music coding environments are imperative for obvious reasons: imperative programs map closely to musical scores. Both specify timing, order of events, and instances of classes (e.g. a chorus section, a repeating motive, etc.). However, I believe that a functional or declarative approach to programming more closely matches the techniques and creative processes of composers as they manipulate and arrange ideas and musical structures. As such, I am especially interested in Functional I/O languages [13], which have been used to teach introductory computer science in interdisciplinary contexts with positive outcomes [34]. Functional programs tend to be much shorter and more concise, making them easier to navigate and edit.
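To make the contrast concrete, the following minimal Python sketch treats a melody as a value that can be transformed and combined as a whole, rather than as a sequence of individual note-triggering statements. All names here (`Melody`, `transpose`, `repeat`, `sequence`, `play_note`) are hypothetical illustrations, not an existing music library.

```python
from typing import List, Tuple

Note = Tuple[int, float]  # (MIDI pitch, duration in beats)
Melody = List[Note]

# Imperative style would trigger each note with its own statement,
# e.g. play_note(60, 0.5); play_note(64, 0.5); ... repeated for every
# note in every bar.

# A functional style instead manipulates whole musical chunks with
# pure functions, closer to how composers arrange and vary ideas.

def transpose(melody: Melody, semitones: int) -> Melody:
    """Shift every pitch up or down without mutating the original."""
    return [(pitch + semitones, dur) for pitch, dur in melody]

def repeat(melody: Melody, times: int) -> Melody:
    """Loop a chunk as a single unit."""
    return melody * times

def sequence(*sections: Melody) -> Melody:
    """Arrange sections one after another."""
    return [note for section in sections for note in section]

# A four-note motive (C4 E4 G4 E4) treated as a single value.
motive: Melody = [(60, 0.5), (64, 0.5), (67, 0.5), (64, 0.5)]

# The piece is declared as transformations of the motive: state it
# twice, then answer with the same motive a fourth higher.
piece = sequence(repeat(motive, 2), transpose(motive, 5))
```

Because each function returns a new melody, a student can name, reuse, and remix chunks rather than restating every note, echoing Bamberger's call to begin with larger musical units [4].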

Methods and Research Progress

My proposed research draws from participatory design [11] and design-based research approaches [9]. Given that I am neither a blind person nor a practicing music teacher, I see the perspectives of my partners at FMDG School as invaluable to the design of a successful artifact, and I intend to involve them through every step of the process. To date, I have formed a close relationship with FMDG School, beginning with my attendance at a multi-day workshop on braille music intended primarily for local teachers. In December 2019, I became the school's "Accessible Music Technology Fellow," a role in which I transcribe and convert sheet music to braille and large print formats and assist the school in building an online database of the accessible music stored in its library. Before Covid-19 reached New York City, I spent Saturdays at FMDG School interacting with teachers, students, and families and informally observing classes and lessons. Currently, I meet weekly with administrators and staff online.

In 2019, I published a demo of a non-visual web-based drum machine at ASSETS [29], and while I may not expand the drum interface for my dissertation, its development cycle taught me accessible web fundamentals like ensuring screen reader compatibility and ease of navigation and employing high-contrast color palettes. Recently, I completed an interview study with blind and visually impaired composers, producers, and songwriters who use music technology. The study identifies strategies used and barriers faced as my participants brought creative ideas to life, and it discusses the need for multi-modal music creation environments and non-visual music score navigation tools that I propose addressing in my dissertation work. This study was recently accepted to ASSETS 2020. Next, I intend to begin brainstorming and prototyping activities remotely with teachers at FMDG School to identify pedagogical goals and technology requirements. Following that, I intend to begin the first of multiple iteration cycles developing a functional programming environment and multi-modal interface. Before I send any prototypes to students at the school, I plan to send iterations to teachers and more experienced musicians to receive feedback about usability and accessibility and to request other suggestions to guide progress. Finally, I intend to use the tool in a music composition course and co-create a curriculum with one of the teachers at FMDG School. I am still unsure how I might best evaluate my work, especially in regard to learning outcomes (e.g. through pre/post tests, tests measuring transfer of learning, etc.).

Program Progress

I have completed Year 3 of study, am no longer enrolled in coursework, and have passed Candidacy exams indicating breadth of knowledge in Music Technology and Music Theory. There are no students or even faculty in my program whose work falls mainly within HCI. The hiring of Professor Amy Hurst in my second year has given me the opportunity and guidance necessary to pursue this research, and I hope to submit my Dissertation Proposal between late 2020 and mid 2021. In other related research projects, I developed the Cyclops, an eye-controlled musical interface, in partnership with a musician with late-stage amyotrophic lateral sclerosis (ALS). An article about the Cyclops will be published at the New Interfaces for Musical Expression (NIME) conference in July. I have also been working to develop, test, and integrate new movement-based creative coding tools with high school dance students [6, 7].

Possible Contributions

There is potential for this work to contribute to inclusive computing and music education practices, especially through its focus on functional music programming and its design of multi-modal interfaces with applications broader than music. As a young scholar whose background lies heavily in the creative arts, I look to the mentors and other students at the DC for guidance in identifying the areas where this research might have the most significant impact on accessibility.


Acknowledgments

I thank NYU Steinhardt for offering me the fellowship that enables me to pursue my PhD, FMDG School for warmly welcoming me into their community and sharing with me a wealth of knowledge, and Amy Hurst and our burgeoning assistive technology research group, the NYU Ability Project, for providing unending support and a safe space to learn and grow.


References

  1. Samuel Aaron, Alan F. Blackwell, and Pamela Burnard. 2016. The development of Sonic Pi and its use in educational partnerships: Co-creating pedagogies for learning computer programming. Journal of Music, Technology & Education 9, 1 (2016), 75–94.
  2. Joseph Michael Abramo and Amy Elizabeth Pierce. 2013. An ethnographic case study of music learning at a school for the blind. Bulletin of the Council for Research in Music Education 195, 195 (2013), 9–24.
  3. Apple. 2020. GarageBand for Mac.
  4. Jeanne Shapiro Bamberger. 1995. The mind behind the musical ear: How children develop musical intelligence. Harvard University Press.
  5. Jeanne Shapiro Bamberger and Armando Hernandez. 2000. Developing musical intuitions: A project-based introduction to making and understanding music. Oxford University Press.
  6. Yoav Bergner, Shiri Mund, Ofer Chen, and Willie Payne. 2019. First Steps in Dance Data Science: Educational Design. In Proceedings of the 6th International Conference on Movement and Computing. 1–8.
  7. Yoav Bergner, Shiri Mund, Ofer Chen, and Willie Payne. 2020. Leveraging interest-driven embodied practices to build quantitative literacies: A case study using motion and audio capture from dance. Educational Technology Research and Development (2020), 1–24.
  8. Karen Brennan and Mitchel Resnick. 2012. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 annual meeting of the American educational research association, Vancouver, Canada, Vol. 1. 25.
  9. Design-Based Research Collective. 2003. Design-based research: An emerging paradigm for educational inquiry. Educational Researcher 32, 1 (2003), 5–8.
  10. Dancing Dots. 2020. Dancing Dots: Accessible Music Technology for Blind and Low Vision Performers since 1992.
  11. Betsy DiSalvo, Jason Yip, Elizabeth Bonsignore, and Carl DiSalvo. 2017. Participatory design for learning: Perspectives from practice and research. Taylor & Francis.
  12. M. M. Farbood, E. Pasztor, and K. Jennings. 2004. Hyperscore: A graphical sketchpad for novice composers. IEEE Computer Graphics and Applications 24, 1 (Jan. 2004), 50–54.
  13. Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. 2009. A Functional I/O System or, Fun for Freshman Kids. SIGPLAN Not. 44, 9 (Aug. 2009), 47–58.
  14. Filomen M. D’Agostino Greenberg Music School. 2020. FMDG School: Fostering education, access, and inclusion for people of all ages with vision loss.
  15. Freedom Scientific. 2020. JAWS Screen reader.
  16. Jason Freeman, Brian Magerko, Tom McKlin, Mike Reilly, Justin Permar, Cameron Summers, and Eric Fruchter. 2014. Engaging underrepresented groups in high school introductory computing through computational remixing with EarSketch. In Proceedings of the 45th ACM technical symposium on Computer science education. 85–90.
  17. Emma Frid. 2019. Accessible digital musical instruments—A review of musical interfaces in inclusive music practice. Multimodal Technologies and Interaction 3, 3 (2019).
  18. GNU Project. 2020. LilyPond.
  19. Jamie Gorson, Nikita Patel, Elham Beheshti, Brian Magerko, and Michael Horn. 2017. TunePad: Computational Thinking Through Sound Composition. In Proceedings of the 2017 Conference on Interaction Design and Children (IDC ’17). Association for Computing Machinery, New York, NY, USA, 484–489.
  20. Gena R Greher and Jesse M Heines. 2014. Computational thinking in sound: Teaching the art and science of music and technology. Oxford University Press.
  21. Thomas Haenselmann, Hendrik Lemelson, Kerstin Adam, and Wolfgang Effelsberg. 2009. A tangible MIDI sequencer for visually impaired people. In Proceedings of the 17th ACM international conference on Multimedia. 993–994.
  22. Chuck Hitchcock and Skip Stahl. 2003. Assistive Technology, Universal Design, Universal Design for Learning: Improved Learning Opportunities. Journal of Special Education Technology 18, 4 (2003), 45–52.
  23. Aaron Karp and Bryan Pardo. 2017. HaptEQ: A collaborative tool for visually impaired audio producers. In Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences. 1–4.
  24. Miran Lipovaca. 2011. Learn You a Haskell for Great Good!: A Beginner's Guide. No Starch Press.
  25. MakeMusic. 2020. Finale music notation software.
  26. Orni Meerbaum-Salant, Michal Armoni, and Mordechai Ben-Ari. 2011. Habits of Programming in Scratch. In Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education (ITiCSE ’11). Association for Computing Machinery, New York, NY, USA, 168–172.
  27. Sharon Oviatt. 2006. Human-centered design meets cognitive load theory. In Proceedings of the 14th annual ACM international conference on Multimedia - MULTIMEDIA ’06. ACM Press, New York, New York, USA, 871.
  28. William Payne and S. Alex Ruthmann. 2019. Music Making in Scratch: High Floors, Low Ceilings, and Narrow Walls? Journal of Interactive Technology and Pedagogy 15 (2019).
  29. William Payne, Alex Xu, Amy Hurst, and S. Alex Ruthmann. 2019. Non-visual beats: Redesigning the Groove Pizza. In ASSETS 2019 - 21st International ACM SIGACCESS Conference on Computers and Accessibility. Association for Computing Machinery, Inc, 651–654.
  30. Venkatesh Potluri, Priyan Vaithilingam, Suresh Iyengar, Y. Vidya, Manohar Swaminathan, and Gopal Srinivasa. 2018. CodeTalk: Improving Programming Environment Accessibility for Visually Impaired Developers. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18). Association for Computing Machinery, New York, NY, USA, 1–11.
  31. Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman, et al. 2009. Scratch: programming for all. Commun. ACM 52, 11 (2009), 60–67.
  32. Marc Sabatella. 2020. The Accessible Music Notation Project: Braille Music Tools.
  33. Jean Salac and Diana Franklin. 2020. If They Build It, Will They Understand It? Exploring the Relationship between Student Code and Performance. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE ’20). Association for Computing Machinery, New York, NY, USA, 473–479.
  34. Emmanuel Schanzer, Kathi Fisler, and Shriram Krishnamurthi. 2018. Assessing Bootstrap: Algebra Students on Scaffolded and Unscaffolded Word Problems. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE ’18). Association for Computing Machinery, New York, NY, USA, 8–13.
  35. R Benjamin Shapiro, Annie Kelly, Matthew Ahrens, and Rebecca Fiebrink. 2016. BlockyTalky: A physical and distributed computer music toolkit for kids. NIME.
  36. Ann C. Smith, Justin S. Cook, Joan M. Francioni, Asif Hossain, Mohd Anwar, and M. Fayezur Rahman. 2003. Nonvisual Tool for Navigating Hierarchical Structures. SIGACCESS Access. Comput. 77–78 (Sept. 2003), 133–139.
  37. Andreas M. Stefik, Christopher Hundhausen, and Derrick Smith. 2011. On the Design of an Educational Infrastructure for the Blind and Visually Impaired in Computer Science. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (SIGCSE ’11). Association for Computing Machinery, New York, NY, USA, 571–576.
  38. Atau Tanaka and Adam Parkinson. 2016. Haptic Wave. (2016), 2150–2161.
  39. Annette Vee. 2013. Understanding computer programming as a literacy. Literacy in Composition Studies 1, 2 (2013), 42–64.
  40. Nicolas Villar, Cecily Morrison, Daniel Cletheroe, Tim Regan, Anja Thieme, and Greg Saul. 2019. Physical Programming for Blind and Low Vision Children at Scale. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–4.

About the Authors

Willie Payne is a PhD Candidate in Music Technology at NYU working with Amy Hurst, Alex Ruthmann, and Yoav Bergner. He is invested in developing holistic, creative tools that enable others to express themselves on their own terms and to build confidence and self-worth through positive experiences enacting original art. Previously, he studied under Shaun Kane and Clayton Lewis at CU Boulder where he completed degrees in Computer Science (BS/MS) and Music Composition (BM) and was honored with the distinction of Outstanding Graduate of the College of Engineering.