Universal Design - the case of Podcasts (PUD project)

In what ways can podcasts be made by all - and used by all? What will universal design of podcasts look like? In what ways may podcasts be used in learning situations by students and teachers to engage and inspire learning?

Since its inception around 2004, the production and use of podcasts has evolved. In 2024, podcasts are used in a variety of situations and for different purposes: entertainment, learning and teaching, health and well-being, therapy, and sports, for example.

Podcasting has primarily been the distribution and packaging of recorded voices. Today, we see podcasts that are made available in many situations to many people: by including transcriptions of the audio, by being tagged with keywords, by including metadata, by including links to books and other material referred to in the podcast, and by including video of the talk, whether monologue or dialogue. Some podcasts include subtitles, lyrics, closed captions and sign language. In some podcasts, it is possible to search and navigate through an episode with search terms or by navigating an index. Podcasts have the potential to be truly universally designed and multimodal.
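The transcript-based search and navigation mentioned above can be sketched in a few lines. Here is a minimal Python example, assuming the episode transcript is available as WebVTT; the sample cues and the function name are invented for illustration:

```python
import re

# A tiny stand-in for an episode transcript in WebVTT format.
SAMPLE_VTT = """WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the episode on universal design.

00:01:30.000 --> 00:01:35.000
Transcripts make podcasts searchable and navigable.
"""

# One cue: a start/end timestamp line followed by the cue text.
CUE = re.compile(
    r"(\d\d:\d\d:\d\d\.\d{3}) --> \d\d:\d\d:\d\d\.\d{3}\n(.+?)(?:\n\n|\n?$)",
    re.S,
)

def search_transcript(vtt: str, term: str):
    """Return the start timestamps of cues whose text contains `term`."""
    return [start for start, text in CUE.findall(vtt)
            if term.lower() in text.lower()]

print(search_transcript(SAMPLE_VTT, "searchable"))  # -> ['00:01:30.000']
```

A player could jump straight to the returned timestamps, which is exactly the kind of in-episode navigation a transcript enables.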

The podcast "Sanser og samspill" from Signo Kompetansesenter is a pioneer in universally designed podcasts. They have developed guidelines for making and developing universally designed podcasts.

If you are interested in furthering the work on universal design of podcasts - both investigating the use of podcasts in educational settings and the production of podcasts - please do not hesitate to contact me.

We have a project named PUD (Podcast and Universal Design), in which we intend to investigate the use and making of podcasts, especially among students, and ways in which production might be made simpler with diverse tools. We want to explore ways in which podcasts can be used as a complement to - and an alternative to - writing and reading texts and written documents.

Technologies for interaction with and use of sound and audio are steadily evolving. Examples include the Nomono system for everyday podcast production (“Nomono | Podcasting, Simplified. | Nomono,” n.d.), audio interaction while on the move in cars (“Android Auto,” n.d.), lecture recording in teaching and learning settings (“Forelesningsopptak – enkelt opptak og publisering av forelesninger - Universitetet i Oslo,” n.d.; Panopto), and the field of music and musical instruments (Jensenius, 2022).

Human-Computer Interaction (HCI) literature on sound and audio goes back to the early days of the field (Frauenberger et al., 2007). Audio and sound have been part of HCI research in, for example, non-speech sounds for navigation (Brewster, 1998), tangible audio cubes (Schiettecatte and Vanderdonckt, 2008), and social media use (Karlsen et al., 2016).

Recording, modifying, storing, using, communicating, retrieving, finding, and playing sound is done in many different ways on different devices and tools. Microphones, such as a lavalier microphone or the microphones built into smartphones, are sensors and input devices, while standalone loudspeakers, bone-conduction speakers, or speakers built into pieces of furniture are output devices. However, "using", "interacting with" and "making sense" of these devices and their corresponding functions is still too complicated for many users in various use situations. Today, it is easy for many people to, for example, copy, modify, edit and paste text snippets in a document. It is still complicated for many to copy, modify, edit and paste audio snippets in a podcast.
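As a small illustration of this asymmetry, audio "copy and paste" is perfectly possible programmatically but far from the fluency of text editing. The following sketch uses only Python's standard-library wave module; the file names and the generated test tone are made up for the example:

```python
import math
import struct
import wave

# Generate a 2-second mono 16-bit test tone as a stand-in for a recording.
FRAMERATE = 8000
frames = b"".join(
    struct.pack("<h", int(10000 * math.sin(2 * math.pi * 440 * t / FRAMERATE)))
    for t in range(2 * FRAMERATE)
)
with wave.open("episode.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(FRAMERATE)
    w.writeframes(frames)

def copy_snippet(path, start_s, end_s):
    """Read the frames between start_s and end_s (seconds) - the 'copy'."""
    with wave.open(path, "rb") as w:
        w.setpos(int(start_s * w.getframerate()))
        return w.readframes(int((end_s - start_s) * w.getframerate()))

def paste_snippet(src_path, dst_path, snippet, at_s):
    """Write dst_path with `snippet` inserted at `at_s` seconds - the 'paste'."""
    with wave.open(src_path, "rb") as w:
        params = w.getparams()
        audio = w.readframes(w.getnframes())
    cut = int(at_s * params.framerate) * params.sampwidth * params.nchannels
    with wave.open(dst_path, "wb") as out:
        out.setparams(params)
        out.writeframes(audio[:cut] + snippet + audio[cut:])

snippet = copy_snippet("episode.wav", 0.5, 1.0)       # copy half a second
paste_snippet("episode.wav", "edited.wav", snippet, 1.5)  # paste it at 1.5 s
```

The point of the sketch is the contrast: the operations exist, but they require code, byte arithmetic and format knowledge, where text editing requires only Ctrl-C and Ctrl-V.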

With situated abilities in mind (Saplacan, 2020), and with various situations of use in mind: in what ways is the "control" of microphones and loudspeakers, as input and output equipment, carried out?

What challenges are imposed by the context of use? What visual, auditory or tangible feedback do the microphone and the loudspeaker give when "on" and "off", or in various modes such as recording and sharing? How could all this be different?

References

Brewster, S.A., 1998. Using nonspeech sounds to provide navigation cues. ACM Trans. Comput.-Hum. Interact. TOCHI 5, 224–259.

Forelesningsopptak – enkelt opptak og publisering av forelesninger - Universitetet i Oslo [WWW Document], n.d. URL https://www.uio.no/tjenester/it/lyd-video/forelesningsopptak/index.html (accessed 8.16.23).

Frauenberger, C., Stockman, T., Bourguet, M.-L., 2007. A Survey on Common Practice in Designing Audio in the User Interface. Presented at the Proceedings of HCI 2007 The 21st British HCI Group Annual Conference University of Lancaster, UK, BCS Learning & Development. https://doi.org/10.14236/ewic/HCI2007.19

Jensenius, A.R., 2022. Sound actions: conceptualizing musical instruments. MIT Press.

Karlsen, J., Stigberg, S.K., Herstad, J., 2016. Probing Privacy in Practice: Privacy Regulation and Instant Sharing of Video in Social Media when Running, in: International Conferences on Advances in Computer-Human Interactions ACHI. pp. 29–36.

Nomono | Podcasting, Simplified. | Nomono [WWW Document], n.d. URL https://nomono.co/ (accessed 8.16.23).

Saplacan, D., 2020. Situated Ability: A Case from Higher Education on Digital Learning Environments, in: Antona, M., Stephanidis, C. (Eds.), Universal Access in Human-Computer Interaction. Applications and Practice, Lecture Notes in Computer Science. Springer International Publishing, Cham, pp. 256–274. https://doi.org/10.1007/978-3-030-49108-6_19

Schiettecatte, B., Vanderdonckt, J., 2008. AudioCubes: a distributed cube tangible interface based on interaction range for sound design, in: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, TEI ’08. ACM, New York, NY, USA, pp. 3–10. https://doi.org/10.1145/1347390.1347394

Published Nov. 6, 2023 11:29 - Last modified Nov. 6, 2023 15:48

Supervisor(s)

Scope (ECTS credits)

60