The conference was really interesting, and quite political: lots of talk of combating fake news, and a keynote from the ACLU. It was also quite a diverse conference, I thought, and the topic of diversity came up again and again. A lot of people are thinking very hard about the sheer scale of getting ‘everyone’ involved in open knowledge.
I mused on two things:
Someone presented research indicating that women who contribute to Wikipedia do not edit only on ‘women’s topics’ or female biographies. That was no shock, as women, notoriously, are interested in all kinds of topics. But it does mean that recruiting more female editors does not automatically increase the coverage of our under-represented bios.
There were some interesting findings with regard to images. It seems that the images available in Wikimedia Commons to represent people in roles and professions disproportionately portray men in those roles, even when the profession in question is traditionally female-dominated.
The connection between these two, I think, must draw upon the same theory of ‘unconscious bias’ as our recruitment training does. Men and women both tend to think that men are more appropriate in professional roles, and more notable for biographies. Unconsciously, even when we pay attention, we may fall foul of our bias.
Much inspired by it all, I return to my main hobby of creating and improving women’s bios. This week I wrote about Prunella Briance, founder of the NCT, and about Sheila Kitzinger. I felt brave and added a picture of actual breastfeeding to Kitzinger’s page. I think she would have wanted that. Briance, Kitzinger and the NCT fought the good fight to allow women to breastfeed without fear, even in public.
Some people have asked if we are going to have subtitles on our lecture recordings by default. The answer is no, but I’d be keen to hear creative ideas on how we could do it. Any ideas which cost less than $3m per year are welcome.
Students with disabilities are, we hope, one of the groups which will most benefit from lecture recording. That is, however, quite a diverse group, with a wide range of individual needs and a variety of existing support in place. Disability Services supported our initial business case with their own papers and contribute to discussions on our policy task group. Accessibility use cases were included in our procurement and selection, so we are confident that we chose a good solution from a knowledgeable supplier with a large HE user community.
Our approach is based on being widely flexible and enabling choices of formats and pedagogy. The draft lecture recording policy states that recordings are primarily an additional resource, rather than a substitute for attendance, so the recording and slides provide the ‘alternative format’ to enhance the accessibility of a live-delivered lecture.
Some lecturers’ notes and slides provide considerable text to support the recorded audio. Replay recordings will support a wide range of accessibility and inclusivity needs: students with visual impairments; students with dyslexia and similar conditions; students on the autism spectrum; students who, for a number of mental health reasons, may find physical attendance overwhelming; students for whom English is not their first language; those who struggle with complex technical terms or Latin translations; and those who experience debilitating anxiety as a result of missing classes. Where students have a schedule of adjustments that includes having a scribe in class with them, a recording will help the scribe clarify any areas of subject-specific terminology.
We are running training sessions for all staff on how to make accessible PowerPoint presentations; often it is the use of .ppt files which has the greatest impact on accessibility. Replay itself includes good keyboard controls for the video player, integration with JAWS screen-reader software, tab-accessible page navigation and a high-contrast user interface.
Recording lectures will require academic staff to use microphones – we know practice is currently patchy. So the act of making a recording can improve accessibility for those in the room even if they never replay the video. We are also introducing dozens more Catchbox microphones to catch more student contributions in the recording.
The Replay video experiments with chalkboards will considerably enhance accessibility for students at the back of the lecture theatre, who will have the ability to ‘zoom in’.
For students using ISG services our service level is as consistent across all of our learning technologies as we can make it. Replay recordings will be made available in a closed VLE environment, alongside eReserve texts from the library, PDF and Word documents, lecture slides etc. Any of these digital artefacts can be requested in an alternative format as part of supporting reasonable adjustments. In the case of the lecture recording this could be supplying a transcript or subtitles. For other artefacts it could be supplying in a larger font, or converting written text into audio format. We don’t pre-judge what the required adjustment might be in any of these cases.
With regard to transcripts/subtitles specifically:
Our experience is that automated speech-to-text, although improving, is not fully there yet, and costs remain prohibitive, so transcripts and subtitles are not automated in the lecture recording system.
Specialist language in lectures remains tricky and is often subtitled badly. It is also difficult for the transcription to discern whether the lecturer is quoting, reading, muttering or joking. The kind of ‘performance’ and content some of our colleagues deliver would need a highly nuanced transcription. All of UK HE struggles with this challenge, and colleagues are anxious that their speech is not misrepresented by a poor-quality subtitle which might be more confusing for learners.
Even supposed ‘100% accurate human-mediated subtitling’ is not 100%, and often requires a proof-read or edit from the speaker. Some colleagues are willing to take on this extra work; for others it is seen as a major barrier.
That said, we have purchased, as part of our bundle, 100 hours of human-mediated subtitling and transcripts (99% accurate) and 900 hours of machine speech-to-text (approx. 70% accurate). The current planned use cases for this would be:
• where profoundly deaf students request a transcript;
• where the recordings are not a substitute but in fact a primary delivery mechanism (e.g. distance learning);
• where colleagues are publishing and sharing recordings of their lectures publicly online as open educational resources;
• where a student with mobility difficulties has been unable to access the venue.
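To make the budgeting concrete, the purchased bundle could be tracked with a small helper. This is a minimal sketch under stated assumptions: only the 100-hour and 900-hour budgets and the accuracy figures come from the text; the use-case names and the routing rule (reserving human-mediated hours for accuracy-critical cases) are illustrative, not institutional policy.

```python
# Illustrative allocation of the purchased transcription bundle:
# 100 hours human-mediated (99% accurate), 900 hours machine (~70% accurate).
# The routing rule and use-case names below are assumptions for this sketch.

HUMAN_BUDGET_HOURS = 100.0    # from the bundle described in the text
MACHINE_BUDGET_HOURS = 900.0  # from the bundle described in the text

# Assumption: accuracy-critical cases draw on the human-mediated budget first.
HUMAN_FIRST = {"deaf_student_request", "open_educational_resource"}

def choose_tier(use_case: str, human_hours_left: float) -> str:
    """Pick a transcription tier for one recording (illustrative rule)."""
    if use_case in HUMAN_FIRST and human_hours_left > 0:
        return "human"
    return "machine"

print(choose_tier("deaf_student_request", HUMAN_BUDGET_HOURS))
print(choose_tier("distance_learning", HUMAN_BUDGET_HOURS))
```

A rule like this would also make it easy to report how quickly the small human-mediated budget is being consumed.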
As part of the policy consultation over the coming year we may be able to encourage colleagues to make audio and video recordings downloadable so that students can use their own technology to make transcripts.
For the future:
If, as a result of scaling up recording, we find there is a large additional requirement for transcripts we have a number of options:
• If the institutional commitment to spending is there, we can integrate the third-party supplier of our choice. For 50,000 hours of recordings each semester, that would be approx. $3m per semester.
• We can retain more high-quality transcription services. This may need to be recharged to Schools to recover costs; capping costs would be difficult.
• We can look into involving more colleagues in using their personalised, trained ‘speech to text’ tools to create transcripts.
• We are working with colleagues in Informatics to stay aware of the most up to date speech to text technologies.
• We can spend much less than $3m per semester paying students an hourly rate to transcribe lectures in their discipline.
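The cost comparison behind these options can be sketched with rough arithmetic. Only the $3m-per-semester and 50,000-hours figures come from the text; the student pay rate, the hours of transcription work per lecture hour, and the fraction of lectures actually needing a transcript are illustrative assumptions.

```python
# Rough cost model for the transcription options (illustrative only).

HOURS_PER_SEMESTER = 50_000    # recorded lecture hours, from the text
THIRD_PARTY_TOTAL = 3_000_000  # approx. $ per semester, from the text

# Implied third-party rate per recorded hour.
per_hour_third_party = THIRD_PARTY_TOTAL / HOURS_PER_SEMESTER
print(f"Third-party supplier: ${per_hour_third_party:.0f}/hour")

# Assumed student-transcriber model: only a fraction of lectures are
# requested, paid hourly, with several hours of work per lecture hour.
STUDENT_RATE = 15          # $/hour paid to a student (assumption)
WORK_RATIO = 4             # hours of work per lecture hour (assumption)
FRACTION_REQUESTED = 0.10  # share of lectures needing a transcript (assumption)

student_total = (HOURS_PER_SEMESTER * FRACTION_REQUESTED
                 * WORK_RATIO * STUDENT_RATE)
print(f"Student transcribers: ${student_total:,.0f} per semester")
```

Under these (deliberately rough) assumptions the student model comes in at a small fraction of the third-party figure, which is why it appears in the options list; the real numbers would depend heavily on demand and pay rates.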
The estate of Dr Peter Highton will be happy to donate whatever we have in our cupboards. University archive colleagues have been incredibly kind and spent time with me looking through his photos. Apparently there is some genuine interest in the history of electron microscopy and molecular biology these days.