EVALUATING HAPTIC TECHNOLOGY IN ACCESSIBILITY OF DIGITAL AUDIO WORKSTATIONS FOR VISUALLY IMPAIRED CREATIVES

Christina Karpodini, Birmingham City University, Birmingham, UK, Christina.karpodini@bcu.ac.uk

Abstract

This research suggests new ways of making interaction with Digital Audio Workstations more accessible for musicians with visual impairments. Accessibility tools such as screen readers are often unable to support users within the music production environment. Haptic technologies have been proposed as solutions, but they are often generic and do not address the needs of the individual. A series of experiments is proposed to examine the possibilities of mapping haptic feedback to audio effect parameters. Subsequently, the use of machine learning is proposed to enable automated mapping and extend accessibility to the individual. The expected results will provide visually impaired musicians with a new way of producing music, and will also provide academic research with materials and technologies that can be used in future accessibility tools.

Introduction

Current music production tends to be a visually demanding medium. There is a shift towards more skeuomorphic on-screen representations of hardware studio equipment, which makes interaction harder for visually impaired (VI) users. With the availability of software and online applications, bedroom music production has made a huge leap in recent years [1]. As a result, many Digital Audio Workstations (DAWs) have made bespoke studio hardware such as pedals, effect racks, and mixers redundant in their physical form. It has been argued that this approach has also fostered a different type of creative practitioner in the field [22]. Many accessibility tools are now advanced and widely used, for example the digitization of the braille system into displays and editors, and the use of synthesized voices for reading text on screen. These methods support the day-to-day lives of people with visual impairments; however, they are less practical and useful for other media, such as video games, digital art, virtual and augmented reality, and music production, that rely heavily on visual feedback.

Haptic technology, which is less invasive and more discreet, has been proposed in human-computer interaction studies with a focus on accessibility for VI users, with positive results. However, we do not yet know how such haptic systems should be applied and tailored to the individual to provide the best possible experience. This research will explore and examine methods that can potentially lower the barriers faced by VI musicians and the wider VI community by proposing embodied and immersive haptic-focused accessibility tools.

Aims and Research Questions

The aim of this research is to identify the haptic feedback techniques that can best convey visually represented audio processing functions and, consequently, to suggest a machine learning algorithm that can support a long-term relationship between haptic feedback and VI users. The research questions are therefore: first, how can we develop a best-practice approach to DAWs through the use of haptic feedback for VI musicians; and second, how can machine learning adapt and adjust haptic feedback information based on users' needs?

Rationale

Human-computer interaction and the evolution of music technology form the basis of the argument that contemporary design methods are dysfunctional in that they often exclude people with disabilities.

Although many elements of the Graphical User Interface (GUI), including DAWs, can be translated into text-based information for screen reader applications, when it comes to creating and performing music there are still graphical elements and relationships that cannot be expressed through text. For example, how do we represent audio effects such as compressors, reverb, and delay in a haptic format that can be better understood? Current tools such as screen readers disadvantage VI users through an overload of information and potentially discourage them from exploring music.

Human-Computer Interaction

The history of HCI reveals patterns that influence the design of accessible technological tools. Paul Dourish, referenced in Tanaka [29], proposes four stages of interaction evolution: electrical, symbolic, textual, and graphical. The advancement of technology can be reflected, in a similar manner, in these stages through the development from analogue computing to binary states and from coded interaction to visual representations. Through this process, interaction with technology has become effortless and thus socially accessible. Music production can also be seen through the prism of the four stages of interaction. Tanaka [29] draws the parallels between the two by proposing that analogue synthesizers exist in the electrical stage, audio programming in the symbolic, live coding in the textual, and finally digital audio workstations in the graphical stage. However, such evolution excludes VI users.

Haenselmann et al. [12] suggest that electronic music has developed in a direction uncomfortable for people with visual impairments: from analogue instruments with direct tactile interaction, such as holding and striking the strings of a guitar or playing the weighted keys of a piano, to digital devices where most feedback has shifted to the screen or other visual means of representation. Although current skeuomorphic representations may have made DAWs accessible economically and socially, they demonstrate a dysfunctional design process in which accessibility for VI users is denied. Frid [6] has identified a trend towards "adaptive music", which proposes a new, inclusive way of thinking about design that allows technological tools to be adapted and made accessible to everyone.

Screen Readers and Users' Feedback

VI users rely on other senses, mainly hearing and touch, to access information in both the physical and digital worlds [25]. Advances in technology over the last 50 years have pushed the barriers of accessibility, making more and more digital information available to users [16]. Devices such as braille displays, printers, editors, and screen readers are now widely available and accessible to those in need [13]. These focus mainly on interpreting text and images using different presentation formats or text-to-speech approaches.

Screen readers were found to be the main tool through which VI users access DAWs [21]. Whilst screen readers are very good at reading text, they have diminished capabilities in conveying visual information, especially in an audio processing environment. Often the information is not translated successfully, or the screen reader output overlaps with other information, causing "cognitive overload" for users [21].

Bryan-Kinns et al. [21] interviewed four VI individuals about their experience with existing assistive technology and how it affects their workflow in a DAW. Among other findings, they observed that tasks required several actions and considerable time to complete, navigation with the help of the screen reader slowed down particular tasks, some graphical features could not be translated to the other senses, and spatial sound manipulation was inaccessible. Similarly, Payne et al. [24] interviewed 11 individuals with VI and noted the following challenges: users could not do many things at the same time; where technology was inaccessible, they had to invent their own methods to facilitate their workflow, frequently without success; screen readers do not always keep up with program updates; and some users asked facilitators to check their work, particularly when musical scores were used.

Efforts have been made to produce accessible musical instruments; however, according to Frid's review, only 3.6% of them focus on assisting people with VI [6]. Such projects include, but are not limited to, HapticWave [30], ActivePaD [20], CuSE [12], the Wedelmusic VIP Module [9], and Soundsculpt [5]. These academic projects often fade out rapidly without the support needed to become sustainable solutions [6]. The variety of approaches in these projects, ranging from tools for audio editing, processing, and recording to a haptic musical database, is evidence of the complexity of sound as a raw material and of the challenges faced in transforming it into other, non-visual forms.

Haptic Technology

Srinivasan et al. [28] propose "computer haptics" as a new discipline that started to appear more in the literature towards the end of the 20th century. It was rapidly adopted and incorporated into contemporary technological tools such as mobile phones and entertainment applications, as well as advancing accessibility [11]. Among the advantages of haptic feedback are, according to Van Der Linden et al. [17], the ability to significantly reduce the cognitive overload of a musician and, according to Charoenchaimonkon et al. [2], to reduce task completion time, especially in comparison to auditory or visual feedback alone. A wide range of applications and case studies demonstrates the positive contribution of vibrotactile feedback, from tools for rehabilitation procedures to day-to-day assistive technology. Besides the applied research, there is significant literature that examines the possibilities of haptic feedback at a finer level, focusing on the use of different types of haptic feedback: for example, where on the body vibrotactile feedback is best perceived [23][4], and the ways vibration motors can be used and which offers the most effective results [26]. Furthermore, Marshall and Wanderley [18] suggest the importance of including haptics in new digital musical instrument design, while Van Der Linden et al. [17] propose the MusicJacket, which helps novice violin students improve their posture through haptics.

There is a wide range of projects on vibrotactile feedback in music technology as an assistive tool for VI users. Researchers tend to focus on one aspect of the problem at a time, such as interviewing VI users [21][24], examining existing technology [6][19], or developing a prototype [7][3][27]. However, a holistic consideration is lacking, one that would build a better understanding of long-term needs and lead to better solutions for users.

Methodology

The suggested solution for lowering the accessibility barriers of music creation within DAWs for visually impaired users comprises two stages. The first stage will examine the relationship between the perception of sound and vibrotactile feedback. Semi-open interviews will aim to identify the current barriers VI users face when using specific software applications and specific functions. These initial interviews will form the basis of a series of user experience tests with both VI and non-VI participants. The experiments are designed to examine the relationship between the perception of sound, visual interaction on the screen, and vibrotactile feedback: how users experience haptics, and the degree to which haptics can be used as a solution to the problems discussed.

In the second stage of the methodology, we will propose a machine learning algorithm that automatically adjusts the haptic feedback to the audio effect. We will use the data from the analysis of the experiments proposed above and map them to data that characterise the functionality of different audio effects. This process aims to expand the accessibility of audio manipulation and to provide an enhanced experience where needed, alongside the screen readers and other tools users already have at their disposal. This approach also aims to make third-party audio plugins more accessible.

Proposed Experiment 1

For the first experiment, we will use a body location that is accessible and can be used without causing discomfort to the user [23][4]. Consideration has been given to the choice of embodied (wearable) technology [8]: embodied systems make the perception of haptic feedback effortless and further reduce cognitive overload. Therefore, the designed apparatus for the experiments will take the form of an adjustable wristband that can be worn anywhere between the wrist and the elbow.

Vibration motors can be used in a variety of ways according to their mechanical properties. A vibration consists of frequency, amplitude, and duration. Giordano et al. [10] elaborate on the ways the sense of touch can convey a message through pitch, rhythm, roughness, and timbre. These are combinations of the mechanical properties of a vibration motor, and they support the design decisions of this experiment. The chosen methods are: the use of amplitude, which can be interpreted as roughness (how strong the vibration gets); the use of different rhythmic patterns; and, lastly, distributing the signal across more than one vibrator.
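To make these parameters concrete, the following minimal Python sketch (with hypothetical function names and assumed value ranges, not the final apparatus code) samples a pulsed drive signal in which amplitude controls the perceived strength or roughness and the pulse rate controls the rhythmic density of vibration events:

    def vibration_drive(t, amplitude=1.0, pulse_hz=4.0, duty=0.5):
        """Drive level (0..1) for one motor at time t (seconds).

        amplitude -- perceived strength/roughness of the vibration
        pulse_hz  -- density of vibration events (rhythmic pattern)
        duty      -- fraction of each pulse period the motor is on
        """
        phase = (t * pulse_hz) % 1.0  # position within the current pulse period
        return amplitude if phase < duty else 0.0

    # Example: sample a dense, fairly strong pattern at 100 Hz for half a second.
    signal = [vibration_drive(i / 100.0, amplitude=0.8, pulse_hz=8.0)
              for i in range(50)]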

According to the existing literature, the inaccessible elements are those that are based on visual representation and are interacted with through that representation (see Figure 1). Examples include the equalizer, filters, automation, and panning. These can be translated into text (see Figure 2); however, accessing these numbers and what they represent, and manipulating them with a screen reader, demands many actions, causes high cognitive overload, and delays the creative process [21]. In the proposed experiments of this research, we aim to use haptic feedback as an alternative representation of these numbers and to examine whether that method is effective and has the potential to be applied by the music technology industry.

The System Usability Scale (SUS) will be used to identify the level at which users find the proposed system acceptable and usable in a music production scenario. In addition, a semi-structured interview will take place after each test to further analyse and capture the user's perspective on the system. Data will be collected from each user during the tasks regarding completion speed, errors, and accuracy. The recruited visually impaired and non-visually impaired users will undertake a simple test consisting of four tasks with the haptic system and four identical tasks without it, using their preferred setup, which may or may not involve assistive audio technology from the screen reader. The computer setup will be as close as possible to the one they are familiar with. Tasks are simple, asking users to control only one parameter at a time of the audio effect being tested. Participants will have a maximum of one hour to complete the test.

The tests will be formed around the mappings described in the Mapping section below.

Figure 1 – EQ effect in graphical representation (Logic Pro)
Figure 2 – EQ effect in text-based representation (screen-reader-accessible EQ controls)

Mapping

In the context of this research, mapping, following Hunt et al. [14], is the assignment of elements of one set of data to another. The initial mapping will link the amplitude of a vibration at a fixed frequency to the number related to the amplitude of the frequency band being cut or boosted in an EQ effect, equalization being the process of adjusting the volume of different frequency bands within an audio signal. The same vibration mode will be mapped to the amplitude of automation, a way of controlling changes in parameters over time in an arrangement.

The second mapping will use a rhythmic pattern expressing the density of vibration events, from very dense (frequent) to more sparse (less frequent). This will be mapped to the amplitude of the frequency band being manipulated in an EQ effect. It will also be tested with a panning effect, where dense vibrations express a sound positioned in the center and sparse vibrations a sound panned left or right.
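As an illustration of these first two mappings, the sketch below (with assumed value ranges; the final scaling will come from the experiments) normalises an EQ band gain to a vibration amplitude and converts a pan position to a pulse rate, with the center position producing the densest pattern:

    def gain_to_vib_amplitude(gain_db, lo=-24.0, hi=24.0):
        # Clamp an EQ band gain in dB to an assumed range and normalise to 0..1.
        g = max(lo, min(hi, gain_db))
        return (g - lo) / (hi - lo)

    def pan_to_pulse_hz(pan, dense_hz=8.0, sparse_hz=1.0):
        # pan in [-1, 1]: 0 (center) gives dense pulses, hard left/right sparse ones.
        return dense_hz - (dense_hz - sparse_hz) * abs(pan)

    print(gain_to_vib_amplitude(6.0))   # 0.625
    print(pan_to_pulse_hz(0.0))         # 8.0 (center, dense)
    print(pan_to_pulse_hz(-1.0))        # 1.0 (hard left, sparse)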

Vibration motor                      Audio effect
Amplitude                            EQ – frequency band amplitude
Amplitude                            Automation amplitude line
Pattern (in the form of density)     EQ – frequency band amplitude
Pattern (in the form of density)     Panning left/right
3 vibrators                          Panning left/right
Amplitude and pattern                Amplitude and frequency of EQ

Table 1 – Mapping: audio effect to vibration feedback

The third mapping will include multiple vibrators aligned around the wristband. The test will map three vibrators to the panning of the track: left, center, and right respectively [15][2].

In addition, a test will combine two of the vibration methods to examine an effect represented in two dimensions. EQ is one of the effects that producers use mostly through its visual representation: non-visually-impaired users manipulate the frequency and its amplitude at the same time via graphical feedback. This test aims to examine the possibility of perceiving this 2D information through vibration feedback. The amplitude of the vibration will be mapped to the amplitude of the frequency band, and the density of the vibration will express the frequency range. Table 1 shows the proposed mappings for the experiment.
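A sketch of these remaining two mappings follows, again under assumed ranges: the pan position is cross-faded across three motors, and a 2D EQ point is rendered as an (amplitude, pulse density) pair on a single motor, with frequency mapped logarithmically because that is how EQ frequency axes are conventionally drawn:

    import math

    def pan_to_three_vibrators(pan):
        """Distribute pan in [-1, 1] over (left, center, right) motor intensities."""
        left = max(0.0, -pan)
        right = max(0.0, pan)
        center = 1.0 - abs(pan)  # strongest when the sound is centered
        return left, center, right

    def eq_point_to_vibration(freq_hz, gain_db):
        """Map a 2D EQ point to (amplitude, pulse rate) on one motor."""
        f = min(max(freq_hz, 20.0), 20000.0)
        norm_f = math.log10(f / 20.0) / math.log10(1000.0)  # 0..1 over 20 Hz-20 kHz
        pulse_hz = 1.0 + 7.0 * norm_f                       # assumed 1-8 Hz range
        g = max(-24.0, min(24.0, gain_db))
        amplitude = (g + 24.0) / 48.0                       # normalise gain to 0..1
        return amplitude, pulse_hz

    print(pan_to_three_vibrators(0.5))         # (0.0, 0.5, 0.5)
    print(eq_point_to_vibration(1000.0, 6.0))  # (0.625, ~5.0 Hz)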

Analysis and Results for Experiment 1

The results will be analysed in three different ways. First, the SUS will be scored and analysed according to the standard process derived from the literature; the focus of the SUS will be on the haptic system. Second, the interviews will be transcribed and analysed thematically from the users' wording. This will help us establish a wider view of the hypothesis and whether further studies are needed on this subject. The interviews will also provide an overview of the aims and objectives of this study and validate the data from the tasks as well as from the SUS. Third is the data captured during the tests: for each user, performance with the haptic system will be compared against performance with the screen reader. This set of data will be further analysed to determine whether haptics are effective not only for visually impaired users but also for others. The R programming language will be used to produce statistical results and identify patterns in the performance data.
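For reference, the standard SUS scoring process is compact enough to sketch (shown here in Python purely for illustration; the study's statistical analysis itself will be done in R): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0-100 score.

    def sus_score(responses):
        """Standard SUS score from ten Likert responses in 1..5."""
        assert len(responses) == 10
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5

    # Example: a fairly positive response pattern scores 72.5.
    print(sus_score([4, 2, 4, 2, 4, 3, 4, 2, 4, 2]))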

Machine Learning Approach

Audio-haptic and audio-visual relationships will be analysed and will form the basis of this application. As a result, the approach described here is rather theoretical and subject to change. The aim is to facilitate the mapping of haptic feedback to audio effects and procedures based on the user's objective. The system will be able to propose the most suitable haptic feedback method among those examined (amplitude, pattern, panning) for the requested parameter of the audio effect. For example, if a user has only one vibrotactile feedback method at their disposal, the system will be able to adjust the representation of the haptic signal to provide the best possible experience for that user.
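As a purely illustrative sketch of this still-theoretical idea (the features, labels, and classifier choice below are all assumptions, not the designed system), a classifier could be trained on coarse descriptions of effect parameters and of the user's available hardware to recommend one of the examined haptic rendering modes:

    # Hypothetical recommender: suggests a haptic rendering mode from a
    # coarse description of the effect parameter and the user's hardware.
    from sklearn.tree import DecisionTreeClassifier

    # Features: [parameter dimensions, is bipolar (e.g. pan), has motor array]
    X = [
        [1, 0, 0],  # EQ band gain, single motor
        [1, 0, 0],  # automation level, single motor
        [1, 1, 0],  # pan, single motor
        [1, 1, 1],  # pan, motor array available
        [2, 0, 0],  # EQ frequency + gain, single motor
    ]
    y = ["amplitude", "amplitude", "pattern", "3-vibrators", "amplitude+pattern"]

    model = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(model.predict([[1, 1, 1]]))  # -> ['3-vibrators']

In practice, the training data would come from the Experiment 1 results rather than hand-written rules, and the label set would grow with the haptic methods found effective for each user group.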

Status of the Research

My PhD studies started in May 2022 with a full scholarship from Birmingham City University, School of Digital Media Technology, Digital Media Technology Lab (DMT Lab). At the time of writing, I have completed the first two months of my studies. I am in the process of exploring the existing literature in this field and further examining research approaches to support the design of my methodology. The immediate goals for the next couple of months are the synthesis of a systematic literature review exploring publications related to assistive technology for VI music users and the use of haptic feedback, including commercial devices from haptic and music technology companies. Emphasis is given to the haptic hardware for undertaking these experiments and whether there is a need to develop a bespoke device.

In parallel, I am completing the ethical approval form that is necessary for performing experiments with human subjects, and especially with vulnerable adults. I am in contact with organizations and VI individuals who can take part in this study, help me develop a better narrative, and provide feedback during an iterative process. Several positive communications have taken place so far.

The Envisioned Contributions to the Accessibility Field

This research aims to provide a new perspective on the use of haptic feedback as a tool that supports VI users. The focus of this study will be on music production, aiming to remove or significantly reduce the barriers for VI users working in music production. The contribution to knowledge and to the accessibility field will be a modular approach that provides adaptable solutions to users through haptic feedback. Through machine learning, the system will be able to provide the best possible haptic experience to the user based on specific tasks and applications. Building on the user's experience with haptics and the available hardware, this can be used by the music technology industry to broaden accessibility tools or to make existing technology accessible by incorporating haptic feedback. More specifically, this research will also contribute to the development of the general use of machine learning in accessibility tool design and will be a platform for future research development.

Potential Contribution

This research can considerably advance the area of accessible and wearable computing by providing novel, embodied, and wearable user interactions to support blind and low-vision (BLV) people in creative expression and learning. Furthermore, the findings from this research could potentially inform future developments in BLV learning and music therapy, and identify applications of vibrotactile feedback and haptics to improve accessibility.

The findings and discussion from Study One will provide an empirical research contribution. Through a thematic analysis of the interview data, I will identify design challenges that can inform the development of future assistive technologies (ATs) for BLV music learning. Study Two will apply a novel method of co-design that will enable BLV people to participate remotely and engage in the design process; its methodology can potentially inform how future research is conducted remotely with BLV participants. Furthermore, the artifacts created will enable researchers to envision new possibilities for wearable technologies and facilitate new insights and knowledge into the development of ATs. Finally, the development and testing of the prototype in Study Three will inform the development of a future AT for BLV music learning that will be open source and shared with industry for wider development and distribution.

References

  1. Richard James Burgess. 2014. The history of music production. Oxford University Press. 133 pages.
  2. Eakachai Charoenchaimonkon, Paul Janecek, Matthew N Dailey, and Atiwong Suchato. 2010. A comparison of audio and tactile displays for non-visual target selection tasks. In 2010 International Conference on User Science and Engineering (i-USEr). IEEE, 238–243.
  3. Yuri De Pra, Federico Fontana, and Stefano Papetti. 2021. Interacting with Digital Audio Effects Through a Haptic Knob with Programmable Resistance. In 2021 24th International Conference on Digital Audio Effects (DAFx). IEEE, 113–120.
  4. Nem Khan Dim and Xiangshi Ren. 2017. Investigation of suitable body parts for wearable vibration feedback in walking navigation. International Journal of Human Computer Studies 97 (1 2017), 34–44. https://doi.org/10.1016/j.ijhcs.2016.08.002
  5. Balandino Di Donato, Christopher Dewey, and Tychonas Michailidis. 2020. Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach. PervasiveHealth: Pervasive Computing Technologies for Healthcare. https://doi.org/10.1145/3401956.3404233
  6. Emma Frid. 2019. Accessible digital musical instruments—A review of musical interfaces in inclusive music practice. Multimodal Technologies and Interaction 3 (9 2019). Issue 3. https://doi.org/10.3390/mti3030057
  7. Emma Frid, Hans Lindetorp, Kjetil Falkenberg Hansen, Ludvig Elblaus, and Roberto Bresin. 2019. Sound forest: evaluation of an accessible multisensory music installation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
  8. Francine Gemperle, Chris Kasabach, John Stivoric, Malcolm Bauer, and Richard Martin. 1998. Design for wearability. International Symposium on Wearable Computers, Digest of Papers 1998-October, 116–122. https://doi.org/10.1109/ISWC.1998.729537
  9. Anastasia Georgaki, Spyros Raptis, and Stelios Bakamidis. 2000. A music interface for Visually Impaired people in the WEDELMUSIC environment. Design and Architecture. In Proceedings of the 1st International Symposium on Music Information Retrieval (ISMIR 2000). Plymouth, United States. https://doi.org/10.5281/zenodo.1417291
  10. Marcello Giordano and Marcelo M Wanderley. 2013. Perceptual and technological issues in the design of vibrotactile-augmented interfaces for music technology and media. In International workshop on haptic and audio interaction design. Springer, 89–98.
  11. Gerard Goggin. 2017. Disability and haptic mobile media. New Media & Society 19, 10 (2017), 1563–1580.
  12. Thomas Haenselmann, Hendrik Lemelson, and Wolfgang Effelsberg. 2012. A zero-vision music recording paradigm for visually impaired people. Multimedia Tools and Applications 60 (10 2012), 589–607. Issue 3. https://doi.org/10.1007/s11042-011-0832-z
  13. Wladyslaw Homenda. 2010. Intelligent computing technologies in music processing for blind people. Proceedings of the 2010 10th International Conference on Intelligent Systems Design and Applications, ISDA’10, 1400–1405. https://doi.org/10.1109/ISDA.2010.5687106
  14. Andy Hunt and Marcelo M. Wanderley. 2002. Mapping performer parameters to synthesis engines. Organised Sound 7 (2002), 97–108. Issue 2. https://doi.org/10.1017/S1355771802002030
  15. Jari Kangas, Jussi Rantala, and Roope Raisamo. 2017. Gaze Cueing with a Vibrotactile Headband for a Visual Search Task. Augmented Human Research 2 (12 2017). Issue 1. https://doi.org/10.1007/s41133-017-0008-0
  16. Logan Kugler. 2020. Technologies for the visually impaired. Commun. ACM 63, 12 (2020), 15–17.
  17. Janet Van Der Linden, Erwin Schoonderwaldt, Jon Bird, and Rose Johnson. 2011. MusicJacket - Combining motion capture and vibrotactile feedback to teach violin bowing. IEEE Transactions on Instrumentation and Measurement 60, 104–113. Issue 1. https://doi.org/10.1109/TIM.2010.2065770
  18. Mark T Marshall and Marcelo M Wanderley. 2006. Vibrotactile feedback in digital musical instruments. In Proceedings of the 2006 conference on New interfaces for musical expression. 226–229.
  19. Oussama Metatla, Tony Stockman, and Nick Bryan-Kinns. 2013. AMuST: Accessible Music Studio-Feasibility Report. Technical Report.
  20. Joe Mullenbach, Dan Johnson, James Colgate, and Michael Peshkin. 2012. ActivePaD surface haptic device. Haptics Symposium 2012, HAPTICS 2012 - Proceedings, Vancouver, Canada (03 2012). https://doi.org/10.1109/HAPTIC.2012.6183823
  21. Nick Bryan-Kinns, Oussama Metatla, and Tony Stockman. 2012. Accessible Music and Sound Studios. http://www.dancingdots.com/prodesc/
  22. Simon Order. 2016. The liminal music studio: between the geographical and the virtual. Critical Arts 30 (5 2016), 428–445. Issue 3. https://doi.org/10.1080/02560046.2016.1205331
  23. Claudio Pacchierotti, Stephen Sinclair, Massimiliano Solazzi, Antonio Frisoli, Vincent Hayward, and Domenico Prattichizzo. 2017. Wearable haptic systems for the fingertip and the hand: Taxonomy, review, and perspectives. IEEE Transactions on Haptics 10, 580–600. Issue 4. https://doi.org/10.1109/TOH.2017.2689006
  24. William Christopher Payne, Alex Yixuan Xu, Fabiha Ahmed, Lisa Ye, and Amy Hurst. 2020. How Blind and Visually Impaired Composers, Producers, and Songwriters Leverage and Adapt Music Technology. ASSETS 2020 - 22nd International ACM SIGACCESS Conference on Computers and Accessibility. https://doi.org/10.1145/3373625.3417002
  25. Gemma Pedrini, Luca Andrea Ludovico, and Giorgio Presti. 2020. Evaluating the Accessibility of Digital Audio Workstations for Blind or Visually Impaired People. https://orcid.org/0000-0002-8251-2231
  26. Helena Pongrac. 2006. Vibrotactile perception: Differential effects of frequency, amplitude, and acceleration. Proceedings of the 2006 IEEE International Workshop on Haptic Audio Visual Environments and Their Applications, HAVE 2006, 54–59. https://doi.org/10.1109/HAVE.2006.283803
  27. Claire Richards, Roland Cahen, and Nicolas Misdariis. 2022. Designing the Balance Between Sound and Touch: Methods for Multimodal Composition.
  28. Mandayam A Srinivasan and Cagatay Basdogan. 1997. Haptics in virtual environments: Taxonomy, research status, and challenges. Computers & Graphics 21, 4 (1997), 393–404.
  29. Atau Tanaka. 2019. Embodied musical interaction. In New Directions in Music and Human-Computer Interaction. Springer, 135–154.
  30. Atau Tanaka and Adam Parkinson. 2016. Haptic wave: A cross-modal interface for visually impaired audio producers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2150–2161.


About the Author

Christina is a PhD student in the DMT Lab at Birmingham City University, UK. Her research focuses on designing and developing novel accessibility tools for processing and experiencing information through haptics in music production. The research challenges current human-computer interaction paradigms and machine learning techniques, intending to devise new ways in which visually impaired users can process information, navigate, interact, and collaborate.