If you search for ‘impact of AI on EDI’, most of the results discuss Electronic Data Interchange. With a few tweaks, however, we can find a growing corpus of information.
We can use AI to speed up some of our EDI tasks, but there are big risks if we don’t actually know what we are doing.
EDI policies available from universities are all fairly similar, so a quick query to ELM (or a similar LLM) will move you forward fast, although it may cite the wrong law. ELM can very quickly produce a workplace menopause policy, for instance, and HR colleagues may fear for their jobs. However, constant vigilance is needed. I particularly notice that AI will reference ‘The Equalities Act’, which is incorrect: there are not many equalities, just Equality. The legislation is the Equality Act 2010.
That said, the law is also not keeping up with AI:
- Artificial Intelligence: The Need to Update the Equality Act 2010 | OHRH
- New guidance on AI and equality available to public sector bodies | EHRC
In the recent Fife NHS Employment Tribunal, there has been concern that one of the judges involved was relying on AI misinformation (Judge Kemp denies use of AI in Peggie judgment | Scottish Legal News). The ruling had to be corrected several times and an appeal is planned.
It may be tempting to use AI to understand organisational data, but the result is likely to be as biased as the historic data you feed it. If we have not gathered data about our particular demographics in the past, it will simply not be there for the models to learn from.
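One way to make this concrete is to compare the demographic shares present in a dataset against an external benchmark before letting any model near it. The sketch below is purely illustrative: the field names, groups and proportions are invented, and a real benchmark would come from a staff census or similar source.

```python
from collections import Counter

def representation_gaps(records, group_field, benchmark):
    """Compare each group's share of a dataset against a benchmark
    proportion. Groups absent from the data still show up here,
    with a negative gap, rather than silently disappearing."""
    counts = Counter(r.get(group_field, "unknown") for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = round(observed - expected, 3)
    return gaps

# Invented example: 100 records, one group entirely missing.
records = [{"ethnicity": "white"}] * 90 + [{"ethnicity": "asian"}] * 10
benchmark = {"white": 0.80, "asian": 0.12, "black": 0.08}
print(representation_gaps(records, "ethnicity", benchmark))
# "black" gets a gap of -0.08: no records exist for the model to learn from
```

The point of the explicit loop over the benchmark, rather than over the data, is exactly the concern above: a group that was never recorded produces no rows, so any audit driven only by the data itself will never notice it.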
It may even re-write history. Despite what we know about Holocaust deniers and historical revisionists, our new race awareness training for staff and students does not even touch on concerns about how people may learn about history from AI.
Similarly, our consent training and sexual harassment training make no reference to online harms or consent around images. Deepfakes, revenge porn and image abuse are all facilitated by AI, and there is no reason to think that people in the University of Edinburgh population are different from any others. The addition of image-generation functionality to ELM has been a particular area of concern for me: just because other tools do it doesn’t seem like a good enough reason to outweigh the risks.
How are we helping our students to recognise fake images? Do our ‘report and support’ systems include support for detection?
Our students and staff are just as likely as any to put themselves in risky situations of scams and fakers. How are we teaching them to be safe, what does ‘safe spaces’ and ‘active bystander’ even mean in this new era?
In organisations such as ours, where concerns are consistently raised about unconscious bias, it is naive to think that this problem has been ‘fixed’. AI will amplify the biases of those who use it, and may be a dangerous tool in the hands of management (including human resource management).
- When AI Amplifies the Biases of Its Users
- AI’s racial bias: A dark reality in the Black community – DefenderNetwork.com
The demographics of people who work in AI are skewed heavily in favour of men, and in Scotland, white men. I see this at every meeting discussing ELM. We must do more to bring diversity into the thinking about how tools will be used and the opportunities they afford. One woman in the room is not enough. Building an AI profession for everyone: diversity at the heart of the UK’s tech future | BCS
Recruitment, retention and career progression are particularly vulnerable areas. If past hiring decisions favoured certain demographics, AI systems may replicate these patterns, disadvantaging underrepresented groups.
- AI in Recruitment: Opportunities and Ethical Concerns – HR News
- Bias in AI-driven HRM systems: Investigating discrimination risks embedded in AI recruitment tools and HR analytics – ScienceDirect
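A simple screening heuristic for the replication risk described above is the ‘four-fifths rule’, which flags any group whose selection rate falls below 80% of the highest group’s rate. The sketch below uses invented numbers and is a rough screening check only, not a legal test of discrimination.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants)."""
    return {g: s / n for g, (s, n) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold`
    (the classic four-fifths rule of thumb) of the best rate.
    A True flag means 'investigate', not 'proven bias'."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Invented shortlisting outcomes by group, for illustration only.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))
# group_b's rate (0.30) is 60% of group_a's (0.50), so it is flagged
```

Running a check like this on the outputs of any AI-assisted shortlisting tool, before and after adoption, is one concrete way to see whether historic patterns are being replicated.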
It is clear that while some people are championing the use of AI with enthusiasm, it is likely to have a disproportionate impact on particular groups, and that is where EDI leadership must be alert. A UNESCO study found harms to women and girls, negative content about gay people and particular ethnic groups, and racial stereotyping.
Bias is being discussed in popular magazines and in academic studies, but these are still targeted mostly at people who are interested in stories about AI rather than at the general public, e.g.:
- Is AI sexist and racist? – BBC Science Focus Magazine
- Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes | Stanford HAI
- How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough
- Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions – ScienceDirect
I had hoped that digital accessibility would be an area of positive enhancement, transforming the lives of people with disabilities, and this does seem to be an area in which significant gains are being made. As I age and my eyes and ears let me down, I am hopeful about the many AI enhancements I will be able to access.
- Artificial intelligence and the inclusion of Persons with disabilities
- 10 Breakthrough AI Tools Empowering Blind And Low-Vision Users
But there is still an underlying risk: developers who use AI to write code will be drawing on a historical mass of legacy code that did not include accessibility features, and no one may be checking.
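Some of that checking can be automated. As a minimal sketch, the standard-library parser below scans HTML for images missing alt text, one small, mechanisable slice of the WCAG text-alternative requirement; a real pipeline would use a full accessibility checker, but even a check this small catches a common omission in AI-generated markup.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect the positions of <img> tags that lack an alt
    attribute, a frequent accessibility gap in generated HTML."""
    def __init__(self):
        super().__init__()
        self.missing = []  # list of (line, column) positions

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="chart"><img src="b.png"></p>')
print(len(checker.missing))  # 1: only the second image lacks alt text
```

Wiring a check like this into a build or review step means the question ‘did anyone check?’ has an answer, even when the code was machine-written.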
What can we do:
- Update all our EDI training offers to include content about the impact of AI.
- Quickly target training at HR professionals, disability support staff, welfare advisers, safety staff, network groups, occupational health and well-being professionals and EDI leaders to ensure they understand the impact AI may be having.
- Engage IT professionals and web developers in discussions about using AI in accessibility checking and coding.
- Take care that any AI training we develop and deliver covers these topics above.
- Take care that any AI tools we develop and deliver are backed by considerable investment in making them better for all.