SADIE: Scoping AI Developments in Edtech at Edinburgh University

I wrote a while back about the start of our SADIE project, looking at AI in the third-party systems we provide for our students and staff at the University of Edinburgh.

Educational technology (EdTech) services have not been immune to the excitement and rushing wave of AI adoption. It is important for learning technologists in central services to understand the risks of new features being rolled out by our existing technology partners and retain the ability to assess and choose which ones we switch on for use by our community.

It is a fast moving space:

  • An AI detection feature was added to the similarity checking service Turnitin in April 2023.
  • Various AI helper tools have been added to our virtual learning environment Learn since July 2023.
  • Wooclap added an AI wizard to generate multiple choice or open questions in November 2023.

All these services are under the control of the University to enable (or not), and all are currently switched off at Edinburgh due to the risks identified.

The biggest barrier to adoption of AI tools is likely to be the lack of clear assurances from suppliers on the compliance of their AI features with University policy and legal obligations. We need a common process which will allow us to be consistent in the evaluation and adoption of AI tools and features.

The processes we will now use have been developed carefully by senior learning technologists with expertise in providing our central systems. For the most part they are an extended reworking of existing processes for introducing new non-AI features into services. As ever, we need to take into account the workloads of learning technologists and ensure that extending these processes to AI features does not place too great a burden on service teams.

Since assessing the risks of AI tools will soon become a routine part of business as usual, it is important that decisions on enabling AI features are transparent to users. The Edinburgh AI Innovations Service Release Tracker, and the wider SADIE SharePoint site, will give the rationale behind the approach adopted and the decisions made. They will also provide advice on the risks of using a tool even once it has been made available.

The adoption of AI tools and features will likely require a review of University policies, potentially including but not limited to the OER, Lecture Recording, Virtual Classroom and Learning Analytics policies, to take account of the risks identified as part of this project. 

The Scoping AI Developments in EdTech at Edinburgh (SADIE) project was set up to standardise an approach for service teams to test and evaluate the utility and suitability of the AI tools and features being made available in centrally supported EdTech services. The approach developed looks at the risks of adopting a particular feature and calls upon the expertise of learning technologists within the Schools, as well as that of the service managers in Information Services, in evaluating them.

We will be monitoring progress closely.

AI and ethics, welcoming our robot colleagues

I am delighted that this summer we have 2 student interns working in LTW to help us understand how ChatGPT and OpenAI's tools can help us in our work.

We have long welcomed our robot colleagues.

We already use AI in our transcription and captioning services to add speech-to-text versions for students, and extensively in our media production services to improve video files, edit out cluttered backgrounds and add ALT text. We use AI to add BSL translations to our MOOCs, and a number of additional languages, to promote the reach and accessibility of our learning materials. We already use ChatGPT to generate code.

With our interns’ help we are exploring how we can scale our use of AI prompts to write web content and improve our support, drawing on the considerable technical knowledge bases for our tools.

But with all the hype around we have also started our list of things we would NOT do.

  • We won’t use art generated by AI because we don’t know where it has come from. #payartists
  • We won’t publish anything as OER which has been AI generated because AI cannot consent.
  • We won’t use AI in recruiting/selecting staff because old data sets are biased and skewed.
  • We won’t use AI to analyse data about our people.
  • We won’t use ‘human finishing’ or content editor services which pay less than a living wage.
  • We won’t use it to write accessibility statements, DPIAs or EQIAs.
  • We won’t be seduced by AI tools being anthropomorphised by the use of words like hallucinating and imagining, however cute they are.

It is striking that at most of the events I am invited to hear about AI, the speakers are men. It makes one long for some diversity of views.

Here’s a really good article by Lorna https://lornamcampbell.org/higher-education/generative-ai-ethics-all-the-way-down/ highlighting some of the challenges for those of us who publish collections and content openly on the web.

Update 18th August 2023

I am delighted to have received the finished report from my AI summer interns, Bartlomiej Pohorecki and Wietske Holwerda.

They have conducted an analysis of the current state of play regarding the use of generative AI technologies in LTW, identified the opportunities those technologies make possible, and considered how to use them ethically and how to address privacy concerns. The analysis uncovered concrete use cases of generative AI that would benefit us; however, the technology is new and has limitations. There are potential pitfalls that could arise when implementing those solutions, and there must be a strong focus on ethics and privacy. There is a push from management to use generative AI, but LTW employees do not yet have a sufficient understanding of how to use it, and some fear that they will be replaced by it. This calls for a coherent approach to communicating the purpose of introducing these tools into the workplace.

Bart and Wietske propose using the term “hybrid intelligence” to denote that the correct approach is not replacing people with artificial intelligence, but creating a synergy between staff and generative AI tools.

They identified concrete use cases and provided me with a Possible Implementations Suitability Matrix (PISM). They have offered me courses of action and possible stances in regard to AI. They have discussed areas of impact of generative AI technologies on educational technology, and through interviews with key stakeholders at LTW they identified commonly held misconceptions regarding generative AI and explained why they are incorrect. Best of all, they went beyond the generic literature to identify areas where LTW is already strong, unusual and values-led, and took special care to think about the impact of AI on those areas, such as OER, MOOCs, Wikimedia, accessibility and the recruitment of women into tech.

My next step is to continue and extend our AI internship roles to work with business analysts and service teams, so that we can navigate the AI market efficiently and make responsible decisions while innovating. There is a need for continuous effort on coherent strategy development and deployment of AI systems, with a close eye on ethics all round.