Tag: AI

Impact of AI on EDI practice

If you search for ‘impact of AI on EDI’, you mostly get results discussing Electronic Data Interchange, but with a few tweaks we can find a growing corpus of information.

We can use AI to speed up some of our EDI tasks, but there are big risks if we don’t actually know what we are doing.

EDI policies available from universities are all fairly similar, so a quick query to ELM (or a similar LLM) will move you forward fast, although it may cite the wrong law. ELM can very quickly produce a workplace menopause policy, for instance; HR colleagues may fear for their jobs. However, constant vigilance is needed. I particularly notice that AI will reference ‘The Equalities Act’, which is incorrect: there are not many equalities, just Equality. The legislation is the Equality Act 2010.

That said, the law is also not keeping up with AI.

In the recent NHS Fife Employment Tribunal, there has been concern that one of the judges involved was relying on AI misinformation (‘Judge Kemp denies use of AI in Peggie judgment’, Scottish Legal News). The ruling had to be corrected several times and an appeal is planned.

It may be tempting to use AI to understand organisational data, but it seems likely that the result will be as biased as the historic data that you use. If we have not in the past gathered data about our particular demographics, the data will not be there for the models to learn from.

It may even re-write history. Despite what we know about holocaust deniers and historical revisionists, our new race awareness training for staff and students does not even touch on concerns about how people may learn about history from AI.

Similarly, our consent training and sexual harassment training make no reference to online harms or consent around images. Deepfakes, revenge porn and image abuse are facilitated by AI. There is no reason to think that the people in the University of Edinburgh population are different from all others. The addition of image-creation functionality to ELM has been a particular area of concern for me. Just because other tools do it doesn’t seem like a good enough reason to outweigh the risks.

How are we helping our students to recognise fake images? Do our ‘report and support’ systems include support for detection?

Our students and staff are just as likely as anyone to put themselves at risk from scams and fakers. How are we teaching them to be safe, and what do ‘safe spaces’ and ‘active bystander’ even mean in this new era?

In organisations such as ours, where concerns are consistently raised about unconscious bias, it is naive to think that this problem has been ‘fixed’. AI will amplify the biases of those who use it, and may be a dangerous tool in the hands of management (including human resource management).

The demographics of people who work in AI are skewed heavily in favour of men, and in Scotland, white men. I see this at every meeting discussing ELM. We must do more to bring diversity into the thinking about how tools will be used and the opportunities they afford. One woman in the room is not enough. (See ‘Building an AI profession for everyone: diversity at the heart of the UK’s tech future’, BCS.)

Recruitment, retention and career progression are particularly vulnerable areas. If past hiring decisions favoured certain demographics, AI systems may replicate these patterns, disadvantaging underrepresented groups.

It is clear that while some people are championing the use of AI with enthusiasm, it is likely to have a disproportionate impact on particular groups, and that is where EDI leadership must be alert. The UNESCO study finds harms to women and girls, negative content about gay people and particular ethnic groups, and racial stereotyping.

Bias is being discussed in popular magazines and in academic studies, but these pieces are still targeted mostly at people who are interested in stories about AI, rather than the general public.

I had hoped that digital accessibility would be an area of positive enhancement, and transform the lives of people with disabilities, and this does seem to be an area in which significant gains are being made. As I age and my eyes and ears let me down, I am hopeful for the many AI enhancements I will be able to access.

But there is still the underlying risk that developers who use AI to write code will be drawing on a historical mass of legacy code which did not include accessibility features, and no one will be checking.
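Checking need not be left to chance, though. As a minimal illustrative sketch (not any tool we currently run), even a few lines of standard-library Python can flag one of the most common accessibility gaps in generated markup: images without usable alt text.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag lacking a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            # Missing alt, or alt present but blank, both fail the check
            if not alt or not alt.strip():
                self.missing.append(attr_map.get("src", "(no src)"))

# A hypothetical fragment of AI-generated page markup
snippet = """
<img src="logo.png" alt="University logo">
<img src="banner.png">
<img src="chart.png" alt="">
"""

checker = AltTextChecker()
checker.feed(snippet)
print(checker.missing)  # → ['banner.png', 'chart.png']
```

A check like this catches only the crudest omissions; judging whether alt text is actually meaningful still needs a human reviewer, which is rather the point.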

What can we do?

  • Update all our EDI training offers to include content about the impact of AI.
  • Quickly target training at HR professionals, disability support staff, welfare advisers, safety staff, network groups, occupational health and well-being professionals and EDI leaders to ensure they understand the impact AI may be having.
  • Engage IT professionals and web developers in discussions about using AI in accessibility checking and coding.
  • Take care that any AI training we develop and deliver covers the topics above.
  • Take care that any AI tools we develop and deliver invest considerably in being better for all.

Achievements using AI

As you know, in December each year we do a round-up of achievements over the last six months. Here’s a summary of the LTW achievements and initiatives from winter 2024, condensed into ten ‘lively’ bullet points by ELM AI:

  1. Edtech Ecosystem Expansion: Launched a new short courses platform, successfully integrating it into our existing edtech ecosystem, attracting over 2500 learners across 200 courses, with a whopping 1 million page views!
  2. Web Migration Marvels: Smoothly transitioned 161 websites to a new platform, involving the migration of 80,000 pages, 90,000 images, and training 800 colleagues, despite the hair-raising complexities of the ISG website debacle.
  3. AI Adventures: Rolled out the SADIE project to explore ethical AI integration in educational technologies, sparking lively discussions and critical thinking across the board.
  4. Digital Skills Dynamo: Delivered 305 sessions in our Digital Skills Programme, elevating 2593 learners to digital prowess and cyber savvy status.
  5. Green Web Warriors: Launched a comprehensive initiative to promote sustainability in our digital estate, culminating in a report hefty enough to rival a master’s dissertation and recruiting three eco-conscious interns.
  6. Caption Craze: Wholeheartedly embraced our captioning service, making automatic captions the new normal for Media Hopper Create videos, ensuring accessibility isn’t just an afterthought.
  7. Migration Maestros: Completed a gargantuan migration from Drupal 7 to Drupal 10, a logistical feat involving years of planning and a cocktail of technical and copyright debt.
  8. Festival Frenzy and Windows Win: Navigated the summer festival season with flair while deploying Windows 11 across all teaching rooms—a double triumph in operational excellence and timing.
  9. Induction Innovation: Refined our induction and onboarding processes for staff based on fresh feedback, fostering a quicker sense of belonging and smoother integration into our LTW family.
  10. Community and Recognition Revelries: Celebrated the extraordinary efforts of LTW staff with numerous awards and recognitions, including Reward Vouchers and Staff Recognition Awards, proving every day is rewarding work in LTW!

LTW’s tapestry of teamwork weaved these achievements into an impressive display of collaboration and innovation, proving that when it comes to pushing the boundaries of educational technology and services, we definitely know how to put on quite the show!

SADIE: Scoping AI Developments in Edtech at Edinburgh University

I wrote a while back about the start of our SADIE project, looking at AI in the third-party systems we provide for our students and staff at the University of Edinburgh.

Educational technology (EdTech) services have not been immune to the excitement and rushing wave of AI adoption. It is important for learning technologists in central services to understand the risks of new features being rolled out by our existing technology partners and retain the ability to assess and choose which ones we switch on for use by our community.

It is a fast moving space:

  • An AI detection feature was added to the similarity checking service Turnitin in April 2023.
  • Various AI helper tools have been added to our virtual learning environment Learn since July 2023.
  • Wooclap added an AI wizard to generate multiple choice or open questions in November 2023.

All these features are under the control of the University to enable (or not), and all are currently switched off at Edinburgh due to identified risks.

The biggest barrier to adoption of AI tools is likely to be the absence of clear assurances from suppliers on the compliance of their AI features with University policy and legal obligations. We need a common process which will allow us to be consistent in the evaluation and adoption of AI tools and features.

The processes we will now use have been developed carefully by senior learning technologists with expertise in providing our central systems. For the most part they are an extended reworking of existing processes for introducing new non-AI features into services. As ever, we need to take into account the workloads of learning technologists and ensure that the processes we develop do not place an undue burden on the service teams adopting them and extending them to AI features.

Since assessing the risks of AI tools will soon become a routine part of business as usual, it is important that decisions on the enabling of AI features are transparent to users. The Edinburgh AI Innovations Service Release Tracker, and the wider SADIE SharePoint site, will give the rationale behind the approach adopted and the decisions made. They will also provide advice on the risks of using a tool even if it has been made available.

The adoption of AI tools and features will likely require a review of University policies, potentially including but not limited to the OER, Lecture Recording, Virtual Classroom and Learning Analytics policies, to take account of the risks identified as part of this project. 

The Scoping AI Developments in EdTech at Edinburgh (SADIE) project was set up to standardise an approach for service teams to test and evaluate the utility and suitability of the AI tools and features being made available in the centrally supported EdTech services. The approach developed looked at the risks of adopting a particular feature and calls upon the expertise of learning technologists within the Schools, as well as that of the service managers in Information Services, in evaluating them. 

We will be monitoring progress closely.

AI and ethics, welcoming our robot colleagues

I am delighted that this summer we have two student interns working in LTW to help us understand how ChatGPT and OpenAI’s tools can help us in our work.

We have long welcomed our robot colleagues.

We already use AI in our transcription and captioning services to add speech-to-text versions for students, and extensively in our media production services to improve video files, edit out cluttered backgrounds and add alt text. We use AI to add BSL translations to our MOOCs and a number of additional languages to promote the reach and accessibility of our learning materials. We already use ChatGPT to generate code.

With our interns’ help we are exploring how we can scale our use of AI prompts to write web content and improve our support, drawing on the considerable technical knowledge bases for our tools.

But with all the hype around we have also started our list of things we would NOT do.

  • We won’t use art generated by AI because we don’t know where it has come from. #payartists
  • We won’t publish anything as OER which has been AI generated because AI cannot consent.
  • We won’t use AI in recruiting/selecting staff because old data sets are biased and skewed.
  • We won’t use AI to analyse data about our people.
  • We won’t use ‘human finishing’ or content editor services which pay less than a living wage.
  • We won’t use it to write accessibility statements, DPIAs or EQIAs.
  • We won’t be seduced by AI tools being anthropomorphised by the use of words like hallucinating and imagining, however cute they are.

It is striking that at most of the events I am invited to to hear about AI, the speakers are men. It makes one long for some diversity of views.

Here’s a really good article by Lorna https://lornamcampbell.org/higher-education/generative-ai-ethics-all-the-way-down/ highlighting some of the challenges for those of us who publish collections and content openly on the web.

Update 18th August 2023

I am delighted to have received the finished report from my AI summer interns, Bartlomiej Pohorecki and Wietske Holwerda.

They have conducted an analysis of the current state of play regarding the use of generative AI technologies in LTW, identified the opportunities those technologies make possible, and considered how to use them ethically and how to address privacy concerns. The analysis uncovered concrete use cases of generative AI that would benefit us; however, this technology is new and has limitations. Additionally, there are potential pitfalls that could arise when implementing those solutions, and there must be a strong focus on ethics and privacy. There is a push from management to use generative AI, but LTW employees do not yet have sufficient understanding of how to use it and some fear that they will be replaced by it. This calls for a coherent approach to communicating the purpose of introducing those solutions into the workplace.

Bart and Wietske propose using the term “hybrid intelligence” to denote that the correct approach is not replacing people with artificial intelligence, but creating a synergy between staff and generative AI tools.

They identified concrete use cases and provided me with a Possible Implementations Suitability Matrix (PISM). They have offered me courses of action and possible stances in regard to AI. They have discussed areas of impact of generative AI technologies on educational technology, and when they conducted interviews with key stakeholders at LTW they identified commonly held misconceptions regarding generative AI and explained why they are incorrect. Best of all, they went beyond the generic literature to identify areas where LTW is already strong, unusual and values-led, and took special care to think about the impact of AI on those areas, such as OER, MOOCs, Wikimedia, accessibility and the recruitment of women into tech.

My next step is to continue and extend our AI internship roles to work with business analysts and service teams, so that we can navigate the AI market efficiently and make responsible decisions while innovating. There is a need for continuous effort on coherent strategy development and deployment of AI systems, with a close eye on ethics all round.