Tag: accessibility

See hear

A dragon from our University collections © The University of Edinburgh CC BY https://images.is.ed.ac.uk/luna/servlet/s/f9p45v

I have a long relationship with speech-to-text technology.

In 1998 we had a room in Student Services where students would go to talk to Dragon Dictate. The more they spoke, the less it understood, the more they would laugh, the more it would transcribe their laughing. It was a very popular service as a ‘pick-me-up’.

By 2012 I managed a large collection of contemporary educational oratory, the Oxford Podcasts collection, which includes some fine examples of inspirational rhetoric and clearly communicated ideas. Our interactions with voice recognition software, however, had been frustrating. We began a project to explore how best to represent the content of our podcasts in text, and the team explored various solutions including both automatic speech-to-text and human transcription services. By focusing on keywords generated by recognition software we were able to give users a searchable interface before they listen and to represent the amount of relevant content within. (Blog post, April 2012)
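That project’s keyword method isn’t documented here, but the idea can be illustrated with a minimal, invented sketch in Python: take the imperfect text the recognition software produces and surface the most frequent subject words, so listeners can judge relevance before pressing play. The stop-word list, function name and sample transcript below are made up for the example.

# A minimal keyword-extraction sketch (Python). This is an illustration only:
# it is not the method the Oxford Podcasts project actually used.
from collections import Counter
import re

STOP_WORDS = {"the", "and", "that", "this", "with", "from", "have", "were", "about"}

def top_keywords(transcript, n=10):
    """Return the n most frequent non-trivial words in an ASR transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if len(w) > 3 and w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

sample = ("Today's lecture covers Bayesian inference, priors and likelihoods, "
          "and how priors shape the posterior.")
print(top_keywords(sample, 5))  # 'priors' ranks first as the most frequent term

A real service would do better with phrase extraction or a curated domain vocabulary, but even a crude ranking like this hints at what a recording contains.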

Seven years later, the challenge of making academic audio collections accessible remains high in my mind as we roll out lecture recording across the campus at Edinburgh. We’ve been tailoring our Replay roll-out to support the university’s policy for Accessible and Inclusive Learning.

Some people have asked if we are going to have subtitles on our lecture recordings as default. The answer is no, but we are exploring creative ideas on how we could do it.

My experience is that automated speech to text, although improving, is not fully there yet, and costs remain prohibitive, so transcripts or subtitles are not automated in the lecture recording system. Specialist language in lectures remains tricky and is often subtitled badly. It is also difficult for the transcription to discern whether the lecturer is quoting, reading, muttering or joking. The kind of ‘performance’ and content some of our colleagues deliver would need a highly nuanced translation. All of UK HE struggles with this challenge, and colleagues are anxious that their speech should not be misrepresented by a poor quality subtitle which might be more confusing for learners. (Blog post, August 2017)

The overarching objective of our new project for 2019 is to establish and evaluate an initial pilot Subtitles for Media service and make recommendations for future sustainability and resourcing.

The initial focus will be on designing and piloting a service which can scale and improve the usability/accessibility of our front-facing media content through the addition of subtitles and transcripts as appropriate. The service design will aim to include all users and will be primarily concerned with publicly available University media content hosted on Media Hopper Create, EdWeb or one of the University’s Virtual Learning Environments.

The project will have three strands:

  • Testing the feasibility, viability and cost of a student-led transcription service 

A three-month pilot will allow us to understand what is needed to establish a sustainable programme of work to support our ambitions. The students will gain paid work experience and new digital skills. There is already a thriving market in the local region of students who offer proofreading, transcription, audio typing, subtitling and translation services in their spare time and from home. We will work with academic colleagues in the School of Sociology (Dr Karen Gregory) to research the emerging ‘gig economy’ and to understand how best to establish an ethical model for piecework in this area.

  • Research and Development

The project will strike a balance between evaluating and costing a model for a growing service, and Research and Development to ensure we keep sight of technology trends in this area and understand how they might influence service development over time. We will run a series of events to engage with other organisations and our own technology leaders in this field to ensure we understand and are able to take advantage of technology developments and opportunities for funding or partnerships.

  • Improving digital skills and promoting culture change

We aim to move towards a culture where subtitling our media is standard practice at the point of creation, not only because of changing legislation but because it promotes engagement with our media for the benefit of our whole audience, and at the same time promotes digital literacy and digital skills.

In order to achieve all this, the Subtitling for Media Project will:

  • Establish and evaluate an initial pilot of a student-led subtitling service
  • Develop a costed plan for an ongoing service including support and service management
  • Make recommendations for future sustainability and resourcing
  • Ensure students are trained to deliver a pilot subtitling service
  • Create an ethical model for student piecework in this area
  • Deliver training and guidance to enable best practice in media creation
  • Develop an understanding of current and future technologies that support accessibility, and ensure our developing service remains broadly aligned with them

As part of the ISG vision for the University of Edinburgh we aim to support all digital educators in making informed choices about their digital materials. Through this project to establish a new service, staff and students will develop digital skills in creating and using accessible digital materials. Benefits will include supporting staff and students to understand how and why to make learning materials accessible, and the development of digital skills in support of wide-scale engagement with digital education. The Subtitling for Media Project will establish and evaluate an initial pilot service and make recommendations for future sustainability and resourcing.

subtitles as default?

Common Sense of a wholly new type. https://images.is.ed.ac.uk/luna/servlet/s/y2j4j2 © University of Edinburgh. Full Public Access.

Some people have asked if we are going to have subtitles on our lecture recordings as default. The answer is no, but I’d be keen to hear creative ideas on how we could do it… Any ideas which cost less than $3m per year are welcome.

Students with disabilities are, we hope, one of the groups which will benefit most from lecture recording. That is, however, quite a diverse group, with a wide range of individual needs and a variety of existing support already in place. Disability Services supported our initial business case with their own papers and contribute to discussions on our policy task group. Accessibility use cases were included in our procurement and selection, so we are confident that we chose a good solution from a knowledgeable supplier with a large HE user community.

We’ve been tailoring our Replay roll-out to support the university’s policy for Accessible and Inclusive Learning (which I understand is currently being reviewed).

On accessible and inclusive learning:

Our approach is based on being widely flexible and enabling choices of formats and pedagogy. The draft lecture recording policy states that recordings are primarily an additional resource, rather than a substitute for attendance, so the recording and slides provide the ‘alternative format’ to enhance the accessibility of a live-delivered lecture.

Some lecturers’ notes and slides provide considerable text to support the recorded audio. Replay recordings will support a wide range of accessibility and inclusivity needs: students with visual impairments; students with dyslexia and similar conditions; students on the autism spectrum; students who for a number of mental health reasons may find physical attendance overwhelming; students for whom English is not their first language; those who struggle with complex technical terms or Latin translations; and those who experience debilitating anxiety as a result of missing classes. Where students have a schedule of adjustments that includes having a scribe in class with them, a recording will help the scribe clarify any areas of subject-specific terminology.

We are running training sessions for all staff on how to make accessible PowerPoint presentations; often it is the use of .ppt which has the greatest impact on accessibility. Replay itself includes good keyboard controls for the video player, integration with JAWS screen reader software, tab-accessible page navigation and a high-contrast user interface.

Recording lectures will require academic staff to use microphones – we know practice is currently patchy. So the act of making a recording can improve accessibility for those in the room even if they never replay the video. We are also introducing dozens more Catchbox microphones to catch more student contributions in the recording.

The Replay video experiments with chalkboards will considerably enhance accessibility for students at the back of the lecture theatre by giving them the ability to ‘zoom in’.

For students using ISG services our service level is as consistent across all of our learning technologies as we can make it. Replay recordings will be made available in a closed VLE environment, alongside eReserve texts from the library, PDF and Word documents, lecture slides etc. Any of these digital artefacts can be requested in an alternative format as part of supporting reasonable adjustments. In the case of the lecture recording this could be supplying a transcript or subtitles. For other artefacts it could be supplying in a larger font, or converting written text into audio format. We don’t pre-judge what the required adjustment might be in any of these cases.

With regard to transcripts/subtitles specifically:

Our experience is that automated speech to text, although improving, is not fully there yet, and costs remain prohibitive, so transcripts or subtitles are not automated in the lecture recording system.

Specialist language in lectures remains tricky and is often subtitled badly. It is also difficult for the transcription to discern whether the lecturer is quoting, reading, muttering or joking. The kind of ‘performance’ and content some of our colleagues deliver would need a highly nuanced translation. All of UK HE struggles with this challenge, and colleagues are anxious that their speech should not be misrepresented by a poor quality subtitle which might be more confusing for learners.

Even supposedly ‘100% accurate’ human-mediated subtitling is not 100% accurate and often requires a proof-read or edit from the speaker. In some cases colleagues are willing to take on this extra work; for others it is seen as a major barrier.

That said, we have purchased, as part of our bundle, 100 hours of human-mediated subtitling and transcripts (99% accurate) and 900 hours of machine speech to text (approx. 70% accurate). The current planned use cases for this would be:
• where profoundly deaf students request a transcript;
• where the recordings are not a substitute but in fact a primary delivery mechanism (e.g. distance learning);
• where colleagues are publishing and sharing recordings of their lectures publicly online as open educational resources;
• where a student with mobility difficulties has been unable to access the venue.

As part of the policy consultation over the coming year we may be able to encourage colleagues to make audio and video recordings downloadable so that students can use their own technology to make transcripts.
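To make that concrete: if a student (or a colleague) ends up with a transcript that carries rough start and end times, turning it into a subtitle file is largely mechanical. The sketch below, in Python, writes WebVTT, a widely supported caption format. The segment timings, text and file name are invented for illustration, and a real workflow would need to check which caption formats the target platform actually accepts.

# Minimal sketch: turn a timed transcript into a WebVTT subtitle file.
# The segments and file name below are invented for illustration.

def to_timestamp(seconds):
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    h = int(seconds // 3600)
    m = int(seconds % 3600 // 60)
    s = seconds % 60
    return f"{h:02d}:{m:02d}:{s:06.3f}"

def write_vtt(segments, path="lecture.vtt"):
    """segments: iterable of (start_seconds, end_seconds, text) tuples."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n")
        for i, (start, end, text) in enumerate(segments, 1):
            f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

write_vtt([
    (0.0, 4.2, "Welcome to today's lecture on accessible media."),
    (4.2, 9.0, "We'll start with why subtitles benefit the whole audience."),
])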

For the future:

If, as a result of scaling up recording, we find there is a large additional requirement for transcripts we have a number of options:

• If the institutional commitment to spending is there, we can integrate the third-party supplier of our choice. For 50,000 hours of recordings each semester that would be approx. $3m per semester, or roughly $60 per recorded hour.
• We can retain more high-quality transcription services. This may need to be recharged to Schools to recover costs; capping costs would be difficult.
• We can look into involving more colleagues in using their personalised, trained ‘speech to text’ tools to create transcripts.
• We are working with colleagues in Informatics to stay aware of the most up-to-date speech-to-text technologies.
• We can spend much less than $3m per semester paying students an hourly rate to transcribe lectures in their discipline.

Any other suggestions…?

access to things

Picture taken by me of a window in Budapest. No rights reserved by me.

I am participating in the University of Edinburgh digital skills course ‘23 Things for Digital Knowledge’. Thing 6 is about accessibility. I was listening to ‘Tweet of the Day’ on Radio 4 this morning while scrolling through Twitter, and I mused on the possibility of having tweets actually tweeted, as in spoken out loud. A quick Google search revealed instructions on Instructables on how to make it so.

Twitter Enabled Text to Speech

I’m thinking perhaps a day of making accessible tools would be a good use of our new ‘UCreate Studio’ Maker Space in the Main Library.
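For anyone who fancies trying the idea in software before reaching for a soldering iron, here is a minimal sketch using the third-party pyttsx3 text-to-speech library (pip install pyttsx3). The tweets below are invented placeholders; fetching real ones from the Twitter API is left out.

# Minimal "tweets spoken out loud" sketch (Python), assuming pyttsx3 is installed.
# The tweet text below is invented; a real version would fetch tweets first.
import pyttsx3

tweets = [
    "Thing 6 of 23 Things is about accessibility.",
    "Lecture recording is rolling out across campus this semester.",
]

engine = pyttsx3.init()      # uses the operating system's built-in speech engine
for tweet in tweets:
    engine.say(tweet)        # queue each tweet for speech
engine.runAndWait()          # speak everything that has been queued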