Month: July 2023

AI and ethics, welcoming our robot colleagues

I am delighted that this summer we have two student interns working in LTW to help us understand how ChatGPT and OpenAI's tools can help us in our work.

We have long welcomed our robot colleagues.

We already use AI in our transcription and captioning services to add speech-to-text versions for students, and extensively in our media production services to improve video files, edit out cluttered backgrounds and add alt text. We use AI to add BSL translations to our MOOCs, and a number of additional languages, to promote the reach and accessibility of our learning materials. We already use ChatGPT to generate code.

With our interns’ help we are exploring how we can scale our use of AI prompts to write web content, and to improve our support by drawing on the considerable technical knowledge bases for our tools.

But with all the hype around, we have also started our list of things we will NOT do.

  • We won’t use art generated by AI because we don’t know where it has come from. #payartists
  • We won’t publish anything as OER which has been AI generated because AI cannot consent.
  • We won’t use AI in recruiting/selecting staff because old data sets are biased and skewed.
  • We won’t use AI to analyse data about our people.
  • We won’t use ‘human finishing’ or content editor services which pay less than a living wage.
  • We won’t use it to write accessibility statements, DPIAs or EQIAs.
  • We won’t be seduced by AI tools being anthropomorphised by the use of words like ‘hallucinating’ and ‘imagining’, however cute they are.

It is striking that at most of the AI events I am invited to, the speakers are men. It makes one long for some diversity of views.

Here’s a really good article by Lorna highlighting some of the challenges for those of us who publish collections and content openly on the web.

Update 18th August 2023

I am delighted to have received the finished report from my AI summer interns, Bartlomiej Pohorecki and Wietske Holwerda.

They have conducted an analysis of the current state of play regarding the use of generative AI technologies in LTW, identified the opportunities those technologies make possible, and considered how to use them ethically and how to address privacy concerns. The analysis uncovered concrete use cases of generative AI that would benefit us; however, the technology is new and has limitations. Additionally, potential pitfalls could arise when implementing those solutions, so there must be a strong focus on ethics and privacy. There is a push from management to use generative AI; however, LTW employees do not yet have a sufficient understanding of how to use it, and some fear that they will be replaced by it. This calls for a coherent approach to communicating the purpose of introducing those solutions into the workplace.

Bart and Wietske propose using the term “hybrid intelligence” to denote that the correct approach is not replacing people with artificial intelligence, but creating a synergy between staff and generative AI tools.

They identified concrete use cases and provided me with a Possible Implementations Suitability Matrix (PISM). They have offered me courses of action and possible stances in regard to AI. They have discussed the areas where generative AI technologies will have an impact on education technology, and in interviews with key stakeholders at LTW they identified commonly held misconceptions about generative AI and explained why they are incorrect. Best of all, they went beyond the generic literature to identify areas where LTW is already strong, unusual and values-led, and took special care to think about the impact of AI on those areas, such as OER, MOOCs, Wikimedia, accessibility and the recruitment of women into tech.

My next step is to continue and extend our AI internship roles to work with business analysts and service teams, so that we can navigate the AI market efficiently and make responsible decisions while innovating. There is a need for continuous effort on coherent strategy development and the deployment of AI systems, with a close eye on ethics all round.

Discussing the work at Leeds (way back)

I was in Leeds again this week. It’s always nice to visit and I do get a wee bit nostalgic for places and times past. I worked at Leeds 2001-8. One of the things we did back then was to develop institutional services for blogging, podcasting and wiki-ing. I called it ‘LeedsFeeds’. We also had LU-Tube.

It got written up by JISC:

“Promoting blogs, wikis and other RSS-enabled applications such as podcasting and news feeds has been part of the Staff and Departmental Development Unit’s support for the ongoing development of staff information literacy skills.

Web 2.0 can be defined as a set of technologies that allow easy content sharing on the web and that enable social software, i.e. software that supports group processes. Social software includes blogs, wikis, content sharing systems (such as Flickr and YouTube), social bookmarking systems, and content syndication systems. While the first systems that can be classed as Web 2.0 or social software appeared more than a decade ago, during the last three years there has been a strong growth in the number of available social software systems, and in their use. With the rise in use, there are a number of concerns relating to creation, ownership and preservation of the content. Some of these are discussed below.”

Institutional practice briefing paper on Web 2.0 (2007)